Sample records for fault current problems

  1. Comparison of the quench and fault current limiting characteristics of the flux-coupling type SFCL with single and three-phase transformer

    NASA Astrophysics Data System (ADS)

    Jung, Byung Ik; Cho, Yong Sun; Park, Hyoung Min; Chung, Dong Chul; Choi, Hyo Sang

    2013-01-01

    The South Korean power grid has a network structure for the flexible operation of the system. Continuously increasing power demand has necessitated the expansion of power facilities, which decreased the impedance in the power system. As a result, the size of the fault current in the event of a system fault increased. As this increased fault current threatens the breaking capacity of the circuit breaker, the main protective device, a solution to this problem is needed. The superconducting fault current limiter (SFCL) has been designed to address this problem. The SFCL supports the stable operation of the circuit breaker through its excellent fault-current-limiting operation [1-5]. In this paper, the quench and fault current limiting characteristics of the flux-coupling-type SFCL with one three-phase transformer were compared with those of the same SFCL type but with three single-phase transformers. In the case of the three-phase transformer, the superconducting elements of both the fault and sound phases were quenched, whereas in the case of the single-phase transformers, only that of the fault phase was quenched. For the fault current limiting rate, both cases showed similar rates for the single line-to-ground fault, but for the three-wire earth fault, the fault current limiting rate of the single-phase transformers was over 90% whereas that of the three-phase transformer was about 60%. It appears that when the three-phase transformer was used, the limiting rate decreased because the fluxes produced by the fault currents of each phase were linked in one core. When the power loads of the superconducting elements were compared by fault type, the initial (half-cycle) load was greater when the single-phase transformers were applied, whereas for the three-phase transformer, the power load was slightly lower at the initial stage but became greater after the half fault cycle.
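
The fault current limiting rate compared above is simply the fractional reduction of the peak fault current once the SFCL is active. A minimal sketch, with made-up 60 Hz waveforms rather than the paper's measurements:

```python
import numpy as np

def limiting_rate(i_unlimited, i_limited):
    """Fault current limiting rate: fractional reduction of the peak
    fault current once the SFCL impedance is inserted."""
    peak_unlimited = np.max(np.abs(i_unlimited))
    peak_limited = np.max(np.abs(i_limited))
    return (peak_unlimited - peak_limited) / peak_unlimited

# Illustrative 60 Hz waveforms (amplitudes are invented, not measured data)
t = np.linspace(0, 0.1, 1000)
i_fault = 50e3 * np.sin(2 * np.pi * 60 * t)   # prospective fault current
i_sfcl = 5e3 * np.sin(2 * np.pi * 60 * t)     # current after limiting
print(limiting_rate(i_fault, i_sfcl))          # 0.9, i.e. a 90% limiting rate
```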

  2. Wavelet Based Protection Scheme for Multi Terminal Transmission System with PV and Wind Generation

    NASA Astrophysics Data System (ADS)

    Manju Sree, Y.; Goli, Ravi kumar; Ramaiah, V.

    2017-08-01

    A hybrid generation system is a part of a large power system in which a number of sources, usually attached to power electronic converters, and clustered loads can operate independently of the main power system. Designing a protection scheme against faults is difficult with traditional overcurrent protection, since the fault current levels change with the mode of operation. This paper adopts a new approach for the detection and discrimination of faults for multi-terminal transmission line protection in the presence of hybrid generation. A transient-current-based protection scheme is developed with the discrete wavelet transform. Fault indices of all phase currents at all terminals are obtained by analyzing the detail coefficients of the current signals using the bior1.5 mother wavelet. This scheme is tested for different types of faults and is found effective for the detection and discrimination of faults with various fault inception angles and fault impedances.
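
The detail-coefficient fault index described here can be sketched with a one-level DWT. The sketch below uses a Haar wavelet purely to stay dependency-free (the paper uses the bior1.5 mother wavelet, e.g. via PyWavelets), and an invented step change to stand in for a fault transient:

```python
import numpy as np

def haar_detail(x):
    """One level of a Haar DWT; returns the detail coefficients.
    (Stand-in for the paper's bior1.5 wavelet, to avoid dependencies.)"""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def fault_index(current):
    """Sum of squared detail coefficients -- a simple transient-energy
    index that rises sharply when a fault distorts the current."""
    return float(np.sum(haar_detail(current) ** 2))

t = np.linspace(0, 0.1, 1024)
healthy = np.sin(2 * np.pi * 50 * t)
faulted = healthy.copy()
faulted[512:] *= 6.0                 # crude step change at fault inception
print(fault_index(healthy) < fault_index(faulted))   # True
```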

  3. MgB2-based superconductors for fault current limiters

    NASA Astrophysics Data System (ADS)

    Sokolovsky, V.; Prikhna, T.; Meerovich, V.; Eisterer, M.; Goldacker, W.; Kozyrev, A.; Weber, H. W.; Shapovalov, A.; Sverdun, V.; Moshchil, V.

    2017-02-01

    A promising solution to the fault current problem in power systems is the application of fast-operating nonlinear superconducting fault current limiters (SFCLs) with the capability of rapidly increasing their impedance, and thus limiting high fault currents. We report the results of experiments with models of inductive (transformer-type) SFCLs based on ring-shaped bulk MgB2 prepared under high quasihydrostatic pressure (2 GPa) and by the hot-pressing technique (30 MPa). It was shown that the SFCLs meet the main requirements for fault current limiters: they possess low impedance in the nominal regime of the protected circuit and can rapidly increase their impedance, limiting both the transient and the steady-state fault currents. The study of the quenching currents of the MgB2 rings (the SFCL activation current) and of AC losses in the rings shows that the quenching current density and the critical current density determined from AC losses can be 10-20 times less than the critical current determined from the magnetization experiments.

  4. Fault detection and diagnosis of induction motors using motor current signature analysis and a hybrid FMM-CART model.

    PubMed

    Seera, Manjeevan; Lim, Chee Peng; Ishak, Dahaman; Singh, Harapajan

    2012-01-01

    In this paper, a novel approach to detect and classify comprehensive fault conditions of induction motors using a hybrid fuzzy min-max (FMM) neural network and classification and regression tree (CART) is proposed. The hybrid model, known as FMM-CART, exploits the advantages of both FMM and CART for undertaking data classification and rule extraction problems. A series of real experiments is conducted, whereby the motor current signature analysis method is applied to form a database comprising stator current signatures under different motor conditions. The signal harmonics from the power spectral density are extracted as discriminative input features for fault detection and classification with FMM-CART. A comprehensive list of induction motor fault conditions, viz., broken rotor bars, unbalanced voltages, stator winding faults, and eccentricity problems, has been successfully classified using FMM-CART with good accuracy rates. The results are comparable, if not better, than those reported in the literature. Useful explanatory rules in the form of a decision tree are also elicited from FMM-CART to analyze and understand different fault conditions of induction motors.

  5. The reflection of evolving bearing faults in the stator current's extended park vector approach for induction machines

    NASA Astrophysics Data System (ADS)

    Corne, Bram; Vervisch, Bram; Derammelaere, Stijn; Knockaert, Jos; Desmet, Jan

    2018-07-01

    Stator current analysis has the potential of becoming the most cost-effective condition monitoring technology for electric rotating machinery. Since both electrical and mechanical faults are detected by inexpensive and robust current sensors, measuring current is advantageous over other techniques such as vibration, acoustic or temperature analysis. However, this technology is struggling to break into the condition monitoring market, as the electrical interpretation of mechanical machine problems is highly complicated. Recently, the authors built a test-rig which facilitates the emulation of several representative mechanical faults on an 11 kW induction machine with high accuracy and reproducibility. Operating this test-rig, the stator current of the induction machine under test can be analyzed while mechanical faults are emulated. Furthermore, while emulating, the fault severity can be manipulated adaptively under controllable environmental conditions. This creates the opportunity of examining the relation between the magnitude of the well-known current fault components and the corresponding fault severity. This paper presents the emulation of evolving bearing faults and their reflection in the Extended Park Vector Approach for the 11 kW induction machine under test. The results confirm the strong relation between the bearing faults and the stator current fault components, in both identification and fault severity. In conclusion, these results strengthen the case for stator current analysis as a complete, robust, on-line condition monitoring technology.
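
The Extended Park Vector Approach starts from the Park (Clarke) transform of the three stator currents; a healthy, balanced machine yields a nearly constant Park vector modulus, and fault components appear as extra spectral lines in it. A minimal sketch on synthetic balanced currents (not machine data):

```python
import numpy as np

def park_vector(ia, ib, ic):
    """Park (Clarke) transform of the three stator phase currents."""
    i_d = np.sqrt(2 / 3) * ia - ib / np.sqrt(6) - ic / np.sqrt(6)
    i_q = (ib - ic) / np.sqrt(2)
    return i_d + 1j * i_q

# Balanced sinusoidal currents (illustrative only)
t = np.linspace(0, 0.2, 2000, endpoint=False)
w = 2 * np.pi * 50
ia = np.cos(w * t)
ib = np.cos(w * t - 2 * np.pi / 3)
ic = np.cos(w * t + 2 * np.pi / 3)

# For a balanced machine the Park vector traces a circle, so its
# modulus is (numerically) constant; faults modulate this modulus.
modulus = np.abs(park_vector(ia, ib, ic))
print(modulus.std() < 1e-6 * modulus.mean())   # True
```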

  6. A Novel Wide-Area Backup Protection Based on Fault Component Current Distribution and Improved Evidence Theory

    PubMed Central

    Zhang, Zhe; Kong, Xiangping; Yin, Xianggen; Yang, Zengli; Wang, Lijun

    2014-01-01

    In order to solve the problems of the existing wide-area backup protection (WABP) algorithms, the paper proposes a novel WABP algorithm based on the distribution characteristics of fault component current and improved Dempster/Shafer (D-S) evidence theory. When a fault occurs, slave substations transmit to the master station the amplitudes of the fault component currents of the transmission lines closest to the fault element. The master station then identifies suspicious faulty lines according to the distribution characteristics of the fault component current. After that, the master station identifies the actual faulty line with improved D-S evidence theory, based on the action states of traditional protections and the direction components of these suspicious faulty lines. Simulation examples based on the IEEE 10-generator-39-bus system show that the proposed WABP algorithm has excellent performance. The algorithm has a low requirement for sampling synchronization, small wide-area communication flow, and high fault tolerance. PMID:25050399
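
The evidence-fusion step rests on Dempster's rule of combination. A minimal sketch of the rule itself; the two mass functions below (one for traditional protection states, one for directional components) and their values are invented for illustration, not taken from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements are
    frozensets; mass assigned to empty intersections (conflict) is
    renormalised away."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Two hypothetical evidence sources over candidate faulty lines L1, L2
L1, L2 = frozenset({'L1'}), frozenset({'L2'})
both = L1 | L2
m_prot = {L1: 0.6, both: 0.4}   # evidence from protection action states
m_dir = {L1: 0.7, both: 0.3}    # evidence from directional components
fused = dempster_combine(m_prot, m_dir)
print(round(fused[L1], 3))       # 0.88 -- combined belief that L1 is faulty
```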

  7. Diagnosis of Misalignment in Overhung Rotor using the K-S Statistic and A2 Test

    NASA Astrophysics Data System (ADS)

    Garikapati, Diwakar; Pacharu, RaviKumar; Munukurthi, Rama Satya Satyanarayana

    2018-02-01

    Vibration measurement at the bearings of rotating machinery has become a useful technique for diagnosing incipient fault conditions. In particular, vibration measurement can be used to detect unbalance in the rotor, bearing failure, gear problems or misalignment between a motor shaft and a coupled shaft. This is a particular problem encountered in turbines, ID fans and FD fans used for power generation. For successful fault diagnosis, it is important to adopt motor current signature analysis (MCSA) techniques capable of identifying the faults. It is also useful to develop techniques for inferring information such as the severity of the fault. It is proposed that modeling the cumulative distribution function of motor current signals with respect to appropriate theoretical distributions, and quantifying the goodness of fit with the Kolmogorov-Smirnov (K-S) statistic and the A2 test, offers a suitable signal feature for diagnosis. This paper demonstrates the successful comparison of the K-S feature and the A2 test for discriminating the misalignment fault from normal operation.
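
The K-S statistic used here is the maximum gap between the sample's empirical CDF and a reference CDF. A minimal sketch against a standard normal reference; the two tiny samples are invented stand-ins for a healthy and a faulty current feature:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov statistic: maximum distance between the
    empirical CDF of the sample and the reference CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

centred = [-1.5, -0.5, 0.0, 0.5, 1.5]          # fits the reference well
shifted = [x + 2.0 for x in centred]           # crude "faulty" stand-in
print(ks_statistic(centred, normal_cdf) < ks_statistic(shifted, normal_cdf))  # True
```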

  8. On-board fault management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne

    1991-01-01

    The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation will discuss work in progress at TRW to apply artificial intelligence to the problem of on-board fault management. The goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.

  9. Pattern Recognition Application of Support Vector Machine for Fault Classification of Thyristor Controlled Series Compensated Transmission Lines

    NASA Astrophysics Data System (ADS)

    Yashvantrai Vyas, Bhargav; Maheshwari, Rudra Prakash; Das, Biswarup

    2016-06-01

    Application of series compensation in extra high voltage (EHV) transmission lines makes the protection job difficult for engineers, due to alterations in system parameters and measurements. The problem is amplified by the inclusion of electronically controlled compensation such as thyristor controlled series compensation (TCSC), as it produces harmonics and rapid changes in system parameters during faults, associated with the TCSC control. This paper presents a pattern-recognition-based fault type identification approach using a support vector machine. The scheme uses only half a cycle of post-fault data of the three phase currents to accomplish the task. The change in current signal features during the fault has been taken as the discriminatory measure. The developed scheme is tested over a large set of fault data with variation in system and fault parameters. These fault cases have been generated with PSCAD/EMTDC on a 400 kV, 300 km transmission line model. The developed algorithm proves well suited to implementation on TCSC-compensated lines, with improved accuracy and speed.

  10. Fault-tolerant arithmetic via time-shared TMR

    NASA Astrophysics Data System (ADS)

    Swartzlander, Earl E.

    1999-11-01

    Fault tolerance is increasingly important as society has come to depend on computers for more and more aspects of daily life. The current concern about the Y2K problem indicates just how much we depend on accurate computers. This paper describes work on time-shared TMR, a technique which is used to provide arithmetic operations that produce correct results in spite of circuit faults.
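
Triple modular redundancy (TMR) masks a single fault by majority voting over three results; in the time-shared variant, the three results come from the same hardware run at three different times rather than from three hardware copies. A minimal sketch with a toy adder and an invented transient "glitch":

```python
def tmr_vote(a, b, c):
    """Majority voter: return the value at least two inputs agree on."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise ValueError("no majority -- more than one fault")

def toy_adder(x, y, glitch=False):
    """Toy arithmetic unit; 'glitch' models a transient circuit fault."""
    return x + y + (1 if glitch else 0)

# Three time-shared evaluations, one of which is hit by a transient fault
results = [toy_adder(2, 3), toy_adder(2, 3, glitch=True), toy_adder(2, 3)]
print(tmr_vote(*results))   # 5 -- the correct sum survives the single fault
```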

  11. Fault detection and isolation

    NASA Technical Reports Server (NTRS)

    Bernath, Greg

    1994-01-01

    In order for a current satellite-based navigation system (such as the Global Positioning System, GPS) to meet integrity requirements, there must be a way of detecting erroneous measurements, without help from outside the system. This process is called Fault Detection and Isolation (FDI). Fault detection requires at least one redundant measurement, and can be done with a parity space algorithm. The best way around the fault isolation problem is not necessarily isolating the bad measurement, but finding a new combination of measurements which excludes it.
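
The parity-space idea above exploits measurement redundancy: with more measurements than unknowns, a large least-squares residual flags an erroneous measurement. A deliberately tiny sketch (one unknown, four redundant pseudo-measurements with invented values), not a full GPS RAIM implementation:

```python
import numpy as np

def parity_residual(H, z):
    """Norm of the least-squares residual of z = H @ theta. With
    redundant measurements, a large residual indicates a faulty one."""
    theta, *_ = np.linalg.lstsq(H, z, rcond=None)
    return float(np.linalg.norm(z - H @ theta))

# Toy system: four redundant measurements of a single quantity
H = np.ones((4, 1))
z_good = np.array([10.0, 10.1, 9.9, 10.0])
z_bad = np.array([10.0, 10.1, 9.9, 14.0])   # one erroneous measurement
print(parity_residual(H, z_good) < parity_residual(H, z_bad))   # True

# "Isolation by exclusion": drop one measurement at a time and keep the
# combination with the smallest residual, which excludes the bad one.
best = min(range(4), key=lambda i: parity_residual(np.delete(H, i, 0),
                                                   np.delete(z_bad, i)))
print(best)   # 3 -- excluding the last measurement fits best
```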

  12. Research on Fault Characteristics and Line Protections Within a Large-scale Photovoltaic Power Plant

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Zeng, Jie; Zhao, Wei; Zhong, Guobin; Xu, Qi; Luo, Pandian; Gu, Chenjie; Liu, Bohan

    2017-05-01

    Centralized photovoltaic (PV) systems have different fault characteristics from distributed PV systems due to their different system structures and controls. This makes the fault analysis and protection methods used in distribution networks with distributed PV unsuitable for a centralized PV power plant. Therefore, a consolidated expression for the fault current within a PV power plant under different controls was derived, considering the fault response of the PV array. Then, supported by the fault current analysis and on-site testing data, the overcurrent relay (OCR) performance was evaluated in the collection system of an 850 MW PV power plant. It reveals that the OCRs at the downstream side of overhead lines may malfunction. In this case, a new relay scheme was proposed using directional distance elements. In PSCAD/EMTDC, a detailed PV system model was built and verified using the on-site testing data. Simulation results indicate that the proposed relay scheme could effectively solve the problems under various fault scenarios and PV plant output levels.

  13. Sensor fault-tolerant control for gear-shifting engaging process of automated manual transmission

    NASA Astrophysics Data System (ADS)

    Li, Liang; He, Kai; Wang, Xiangyu; Liu, Yahui

    2018-01-01

    The angular displacement sensor on the actuator of an automated manual transmission (AMT) is sensitive to faults, and a sensor fault will disturb its normal control, which affects the entire gear-shifting process of the AMT and degrades riding comfort. In order to solve this problem, this paper proposes a method of fault-tolerant control for the AMT gear-shifting engaging process. Using the measured current of the actuator motor and the angular displacement of the actuator, the gear-shifting engaging load torque table is built and updated before the occurrence of the sensor fault. Meanwhile, the residual between the estimated and measured angular displacements is used to detect the sensor fault. Once the residual exceeds a determined fault threshold, the sensor fault is detected. Then switch control is triggered, and the current observer and load torque table estimate an actual gear-shifting position to replace the measured one and continue controlling the gear-shifting process. Numerical and experimental tests are carried out to evaluate the reliability and feasibility of the proposed methods, and the results show that the performance of estimation and control is satisfactory.
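
The residual-threshold switch described here can be sketched in a few lines. The threshold and position values below are invented for illustration; in the paper the estimate comes from a current observer plus the load torque table:

```python
def select_position(measured, estimated, threshold=0.5):
    """Sensor fault tolerance: if the residual between the estimated and
    measured angular displacements exceeds the threshold, declare a
    sensor fault and switch control over to the estimate."""
    residual = abs(estimated - measured)
    fault = residual > threshold
    return (estimated if fault else measured), fault

# Healthy sensor: small residual, the measurement is trusted
pos, fault = select_position(measured=10.0, estimated=10.1)
print(pos, fault)    # 10.0 False
# Faulty sensor: large residual, the observer estimate takes over
pos, fault = select_position(measured=3.0, estimated=10.1)
print(pos, fault)    # 10.1 True
```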

  14. Diagnosis of Electric Submersible Centrifugal Pump

    NASA Astrophysics Data System (ADS)

    Kovalchuk, M. S.; Poddubniy, D. A.

    2018-01-01

    The paper deals with the development of an operational diagnostics system for electrical submersible pumps (ESP). At the initial stage of the study, current methods of ESP diagnosis were explored and the existing problems of their diagnosis examined. As a result, a number of typical ESP faults were identified: mechanical faults such as wear of the bearings, of the protective sleeves of the shaft, and of the hubs of the guide vanes, as well as misalignment and imbalance of the shafts, which cause breakdown of the bottom or top bases of the stator. All this leads to electromagnetic faults: rotor eccentricity, weakening of the pressing of the steel packs, wire breakage or a short circuit in the stator winding, etc., leading to changes in the consumed current.

  15. Subaru FATS (fault tracking system)

    NASA Astrophysics Data System (ADS)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.

  16. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. As buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer, in terms of complexity, to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses, etc.) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to the roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip weakening on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.

  17. Soft Computing Application in Fault Detection of Induction Motor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konar, P.; Puhan, P. S.; Chattopadhyay, P. Dr.

    2010-10-26

    The paper investigates the effectiveness of different pattern classifiers, namely the Feed Forward Back Propagation Network (FFBPN), Radial Basis Function (RBF) network and Support Vector Machine (SVM), for the detection of bearing faults in induction motors. The steady-state motor current with Park's transformation has been used for discrimination of inner-race and outer-race bearing defects. The RBF neural network shows very encouraging results for multi-class classification problems and is expected to form a basis for incipient fault detection of induction motors. The SVM is also found to be a very good fault classifier, highly competitive with the RBF network.

  18. Closure of the Africa-Eurasia-North America plate motion circuit and tectonics of the Gloria fault

    NASA Technical Reports Server (NTRS)

    Argus, Donald F.; Gordon, Richard G.; Demets, Charles; Stein, Seth

    1989-01-01

    The current motions of the African, Eurasian, and North American plates are examined. The problems addressed include whether there is resolvable motion of a Spitsbergen microplate, the direction of motion between the African and North American plates, whether the Gloria fault is an active transform fault, and the implications of plate circuit closures for rates of intraplate deformation. Marine geophysical data and magnetic profiles are used to construct a model which predicts about 4 mm/yr slip across the Azores-Gibraltar Ridge, and west-northwest convergence near Gibraltar. The analyzed data are consistent with a rigid plate model with the Gloria fault being a transform fault.

  19. Online Detection of Broken Rotor Bar Fault in Induction Motors by Combining Estimation of Signal Parameters via Min-norm Algorithm and Least Square Method

    NASA Astrophysics Data System (ADS)

    Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin

    2017-11-01

    Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on a high-frequency-resolution analysis of the stator current. Compared with a discrete Fourier transformation, the parametric spectrum estimation technique has a higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to the large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem, solved through singular value decomposition, is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate that it retains the frequency accuracy of a parametric spectrum estimation technique while remaining efficient enough for online detection.
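
The amplitude-and-phase estimation step can be sketched as a linear least-squares fit of cosine/sine pairs once the component frequencies are known (here `numpy.linalg.lstsq`, which uses an SVD internally). The frequencies, amplitudes and sampling below are illustrative, not taken from the paper:

```python
import numpy as np

def fit_amplitudes(t, signal, freqs):
    """Least-squares fit of cosine/sine pairs at known frequencies;
    returns the amplitude of each component."""
    cols = []
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    A = np.column_stack(cols)
    x, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return [float(np.hypot(x[2 * i], x[2 * i + 1])) for i in range(len(freqs))]

# Synthetic stator current: 50 Hz fundamental plus a small sideband
t = np.linspace(0, 0.4, 4000, endpoint=False)
sig = 10 * np.cos(2 * np.pi * 50 * t) + 0.2 * np.cos(2 * np.pi * 46 * t + 0.3)
amps = fit_amplitudes(t, sig, [50.0, 46.0])
print([round(a, 3) for a in amps])   # [10.0, 0.2]
```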

  20. On concentrated solute sources in faulted aquifers

    NASA Astrophysics Data System (ADS)

    Robinson, N. I.; Werner, A. D.

    2017-06-01

    Finite aperture faults and fractures within aquifers (collectively called 'faults' hereafter) theoretically enable flowing water to move through them but with refractive displacement, both on entry and exit. When a 2D or 3D point source of solute concentration is located upstream of the fault, the plume emanating from the source relative to one in a fault-free aquifer is affected by the fault, both before it and after it. Previous attempts to analyze this situation using numerical methods faced challenges in overcoming computational constraints that accompany requisite fine mesh resolutions. To address these, an analytical solution of this problem is developed and interrogated using statistical evaluation of solute distributions. The method of solution is based on novel spatial integral representations of the source with axes rotated from the direction of uniform water flow and aligning with fault faces and normals. Numerical exemplification is given to the case of a 2D steady state source, using various parameter combinations. Statistical attributes of solute plumes show the relative impact of parameters, the most important being, fault rotation, aperture and conductivity ratio. New general observations of fault-affected solution plumes are offered, including: (a) the plume's mode (i.e. peak concentration) on the downstream face of the fault is less displaced than the refracted groundwater flowline, but at some distance downstream of the fault, these realign; (b) porosities have no influence in steady state calculations; (c) previous numerical modeling results of barrier faults show significant boundary effects. The current solution adds to available benchmark problems involving fractures, faults and layered aquifers, in which grid resolution effects are often barriers to accurate simulation.
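
The refractive displacement of a flowline at a fault face follows the classical tangent law for a hydraulic conductivity contrast. A minimal sketch with invented conductivities (the tangent law is standard groundwater theory; the specific numbers are not from the paper):

```python
import math

def refracted_angle(theta_in, k_aquifer, k_fault):
    """Tangent-law refraction of a groundwater flowline crossing a
    conductivity contrast: tan(theta_out)/tan(theta_in) = K_fault/K_aquifer.
    Angles are measured from the normal to the fault face (radians)."""
    return math.atan(math.tan(theta_in) * k_fault / k_aquifer)

# Flowline hitting a higher-conductivity fault at 30 degrees from normal
theta_out = refracted_angle(math.radians(30), k_aquifer=1.0, k_fault=10.0)
print(round(math.degrees(theta_out), 1))   # 80.2 -- strongly deflected along the fault
```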

  1. Solid-State Fault Current Limiter Development : Design and Testing Update of a 15kV SSCL Power Stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. Ram Adapa; Mr. Dante Piccone

    2012-04-30

    ABSTRACT The Solid-State Fault Current Limiter (SSCL) is a promising technology that can be applied to utility power delivery systems to address the problem of increasing fault currents associated with load growth. As demand continues to grow, more power is added to utility system either by increasing generator capacity or by adding distributed generators, resulting in higher available fault currents, often beyond the capabilities of the present infrastructure. The SSCL is power-electronics based equipment designed to work with the present utility system to address this problem. The SSCL monitors the line current and dynamically inserts additional impedance into the linemore » in the event of a fault being detected. The SSCL is based on a modular design and can be configured for 5kV through 69kV systems at nominal current ratings of 1000A to 4000A. Results and Findings This report provides the final test results on the development of 15kV class SSCL single phase power stack. The scope of work included the design of the modular standard building block sub-assemblies, the design and manufacture of the power stack and the testing of the power stack for the key functional tests of continuous current capability and fault current limiting action. Challenges and Objectives Solid-State Current Limiter technology impacts a wide spectrum of utility engineering and operating personnel. It addresses the problems associated with load growth both at Transmission and Distribution class networks. The design concept is pioneering in terms of developing the most efficient and compact power electronics equipment for utility use. The initial test results of the standard building blocks are promising. The independent laboratory tests of the power stack are promising. However the complete 3 phase system needs rigorous testing for performance and reliability. 
Applications, Values, and Use The SSCL is an intelligent power-electronics device which is modular in design and can provide current limiting or current interrupting capabilities. It can be applied to variety of applications from distribution class to transmission class power delivery grids and networks. It can also be applied to single major commercial and industrial loads and distributed generator supplies. The active switching of devices can be further utilized for protection of substation transformers. The stress on the system can be reduced substantially improving the life of the power system. It minimizes the voltage sag by speedy elimination of heavy fault currents and promises to be an important element of the utility power system. DOE Perspective This development effort is now focused on a 15kV system. This project will help mitigate the challenges of increasing available fault current. DOE has made a major contribution in providing a cost effective SSCL designed to integrate seamlessly into the Transmission and Distribution networks of today and the future. Approach SSCL development program for a 69kV SSCL was initiated which included the use of the Super GTO advanced semiconductor device which won the 2007 R&D100 Award. In the beginning, steps were identified to accomplish the economically viable design of a 69kV class Solid State Current Limiter that is extremely reliable, cost effective, and compact enough to be applied in urban transmission. The prime thrust in design and development was to encompass the 1000A and the 3000A ratings and provide a modular design to cover the wide range of applications. The focus of the project was then shifted to a 15kV class SSCL. The specifications for the 15kV power stack are reviewed. The design changes integrated into the 15kV power stack are discussed. In this Technical Update the complete project is summarized followed by a detailed test report. 
The power stack independent high voltage laboratory test requirements and results are presented. Keywords Solid State Current Limiter, SSCL, Fault Current Limiter, Fault Current Controller, Power electronics controller, Intelligent power-electronics Device, IED
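The limiting action the report describes, monitoring the line current and switching extra series impedance into the line once a fault is detected, can be sketched behaviorally. All voltage, impedance, and threshold values below are illustrative assumptions, not figures from the report:

```python
# Behavioral sketch of an SSCL: pass normal load current through, but insert
# additional series impedance once the prospective current crosses a trip
# threshold. Ratings below are hypothetical stand-ins for a 15kV-class unit.

V_SOURCE = 8660.0        # phase voltage (V), hypothetical
Z_LINE = 2.0             # normal series impedance of the line (ohms)
Z_INSERTED = 8.0         # impedance the SSCL switches in on a fault (ohms)
TRIP_THRESHOLD = 3000.0  # current (A) above which the limiter acts

def sscl_current(z_fault_path):
    """Return (steady-state current, limiter_active) for a given downstream impedance."""
    i_prospective = V_SOURCE / (Z_LINE + z_fault_path)
    if i_prospective > TRIP_THRESHOLD:
        # Fault detected: SSCL inserts Z_INSERTED in series to throttle the current.
        return V_SOURCE / (Z_LINE + Z_INSERTED + z_fault_path), True
    return i_prospective, False

normal_i, normal_trip = sscl_current(z_fault_path=6.0)  # normal load, no action
fault_i, fault_trip = sscl_current(z_fault_path=0.1)    # bolted fault, limited
```

With these numbers a prospective fault current of roughly 4.1 kA is held below the 3 kA trip level, while normal load current passes through untouched.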

  2. Fuzzy-Wavelet Based Double Line Transmission System Protection Scheme in the Presence of SVC

    NASA Astrophysics Data System (ADS)

    Goli, Ravikumar; Shaik, Abdul Gafoor; Tulasi Ram, Sankara S.

    2015-06-01

    The need to increase power transfer capability, utilize available transmission lines efficiently, improve power system controllability and stability, damp power oscillations, and compensate voltage has driven the development of Flexible AC Transmission System (FACTS) devices in recent decades. Shunt FACTS devices can have adverse effects on distance protection in both steady-state and transient periods. Severe underreaching, caused by current injection at the point of connection to the system, is the most serious relay problem; current absorption by the compensator leads to relay overreach. This work presents an efficient wavelet-transform-based method for fault detection, classification and location using a fuzzy logic technique, which is almost independent of fault impedance, fault distance and fault inception angle. The proposed protection scheme is found to be fast, reliable and accurate for various types of faults on transmission lines, with and without a Static Var Compensator, at different locations and with various incidence angles.
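The wavelet stage of such a scheme can be illustrated with a minimal sketch: a one-level Haar detail decomposition spikes at the abrupt waveform change a fault inception causes in the line current. The sampling rate, fault time, and threshold rule below are illustrative assumptions; the paper's full scheme adds fuzzy-logic classification and location on top:

```python
import numpy as np

# One-level Haar detail coefficients (scaled differences of adjacent samples)
# act as a crude high-pass filter: they stay small for the smooth pre-fault
# current and jump when the fault transient arrives.

fs = 3200                            # sampling rate (Hz), illustrative
t = np.arange(0, 0.2, 1 / fs)
i_line = np.sin(2 * np.pi * 50 * t)  # healthy 50 Hz current (per unit)
i_line[t >= 0.1] *= 6.0              # fault at t = 0.1 s: current magnitude jumps

detail = (i_line[0::2] - i_line[1::2]) / np.sqrt(2)

# Threshold set from the healthy-signal detail floor in the first quarter record.
threshold = 5 * np.std(detail[: len(detail) // 4])
fault_idx = int(np.argmax(np.abs(detail) > threshold))  # first crossing
fault_time = 2 * fault_idx / fs
```

On this synthetic record the first over-threshold detail coefficient lands at the fault inception instant, which is the detection cue the fuzzy stage would then classify.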

  3. Model-Based Fault Diagnosis for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Green, Michael D.; Duyar, Ahmet; Litt, Jonathan S.

    1998-01-01

    Tests are described which, when used to augment the existing periodic maintenance and pre-flight checks of T700 engines, can greatly improve the chances of uncovering a problem compared to current practice. These test signals can be used to expose and differentiate between faults in various components by comparing the responses of particular engine variables to the expected responses. The responses can be processed on-line in a variety of ways which have been shown to reveal and identify faults. The combination of specific test signals and on-line processing methods provides an ad hoc approach to the isolation of faults which might not otherwise be detected during pre-flight checkout.

  4. A computer-vision-based rotating speed estimation method for motor bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxian; Guo, Jie; Lu, Siliang; Shen, Changqing; He, Qingbo

    2017-06-01

    Diagnosis of motor bearing faults under variable speed is a problem. In this study, a new computer-vision-based order tracking method is proposed to address this problem. First, a video recorded by a high-speed camera is analyzed with the speeded-up robust feature extraction and matching algorithm to obtain the instantaneous rotating speed (IRS) of the motor. Subsequently, an audio signal recorded by a microphone is equi-angle resampled for order tracking in accordance with the IRS curve, through which the frequency-domain signal is transferred to an angular-domain one. The envelope order spectrum is then calculated to determine the fault characteristic order, and finally the bearing fault pattern is determined. The effectiveness and robustness of the proposed method are verified with two brushless direct-current motor test rigs, in which two defective bearings and a healthy bearing are tested separately. This study provides a new noninvasive measurement approach that simultaneously avoids the installation of a tachometer and overcomes the disadvantages of tacholess order tracking methods for motor bearing fault diagnosis under variable speed.
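The equi-angle resampling step described above can be sketched with synthetic data: integrate the instantaneous rotating speed (IRS) curve into shaft angle, interpolate the signal onto a uniform angle grid, and take an FFT in the angular domain to obtain an order spectrum. The speed ramp, sampling rate, and order-3 component below are illustrative stand-ins for the camera-derived IRS and microphone signal in the paper:

```python
import numpy as np

# Order tracking: a component locked to shaft order 3 smears across frequency
# bins while the speed ramps, but stays in one order bin after equi-angle
# resampling against the IRS curve.

fs = 10_000
t = np.arange(0, 2.0, 1 / fs)
speed_hz = 20 + 5 * t                            # shaft speed ramps 20 -> 30 Hz
phase = 2 * np.pi * np.cumsum(speed_hz) / fs     # shaft angle (rad), integrated IRS
signal = np.sin(3 * phase)                       # fault component at order 3

# Equi-angle resampling: interpolate onto a uniform angle grid.
samples_per_rev = 64
angle_grid = np.arange(phase[0], phase[-1], 2 * np.pi / samples_per_rev)
resampled = np.interp(angle_grid, phase, signal)

# FFT in the angular domain yields an order spectrum; order 3 should dominate.
spectrum = np.abs(np.fft.rfft(resampled))
orders = np.fft.rfftfreq(len(resampled), d=1 / samples_per_rev)
dominant_order = orders[np.argmax(spectrum[1:]) + 1]
```

The envelope step of the paper (envelope order spectrum of a resampled bearing signal) would apply the same resampling to the signal's envelope rather than the raw waveform.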

  5. Fault Mitigation Schemes for Future Spaceflight Multicore Processors

    NASA Technical Reports Server (NTRS)

    Alexander, James W.; Clement, Bradley J.; Gostelow, Kim P.; Lai, John Y.

    2012-01-01

    Future planetary exploration missions demand significant advances in on-board computing capabilities over current avionics architectures based on a single-core processing element. State-of-the-art multi-core processors show much promise in meeting such challenges while introducing new fault tolerance problems when applied to space missions. This paper presents software-based schemes that can achieve system-level fault mitigation beyond that provided by radiation-hard-by-design (RHBD) components. For mission- and time-critical applications such as Terrain Relative Navigation (TRN) for planetary or small-body navigation and landing, a range of fault tolerance methods can be adopted by the application. The software methods being investigated include Error Correction Codes (ECC) for data packet routing between cores, virtual network routing, Triple Modular Redundancy (TMR), and Algorithm-Based Fault Tolerance (ABFT). A robust fault tolerance framework that provides fail-operational behavior under hard real-time constraints and graceful degradation will be demonstrated using TRN executing on a commercial Tilera(R) processor with simulated fault injections.
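Of the schemes listed, Triple Modular Redundancy is the simplest to show in miniature: run a computation on three cores and majority-vote the results so a single faulty core cannot corrupt the output. The "cores" here are plain function calls and the injected fault is illustrative, not the paper's fault model:

```python
from collections import Counter

# TMR sketch: three replicated computations, majority vote over the results.

def tmr_vote(results):
    """Return the majority value among three replicated results."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one replica faulted")
    return value

def navigate_step(x):           # stand-in for a TRN computation kernel
    return 2 * x + 1

replicas = [navigate_step(10), navigate_step(10), navigate_step(10)]
replicas[1] = 9999              # simulated radiation-induced fault on one core
voted = tmr_vote(replicas)      # the two healthy replicas outvote the faulty one
```

A real multicore implementation would also protect the voter itself and the inter-core messages (the ECC-on-routing scheme named in the abstract addresses the latter).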

  6. Operating characteristics of superconducting fault current limiter using 24kV vacuum interrupter driven by electromagnetic repulsion switch

    NASA Astrophysics Data System (ADS)

    Endo, M.; Hori, T.; Koyama, K.; Yamaguchi, I.; Arai, K.; Kaiho, K.; Yanabu, S.

    2008-02-01

    Using a high-temperature superconductor, we constructed and tested a model Superconducting Fault Current Limiter (SFCL) that incorporates a vacuum interrupter driven by an electromagnetic repulsion mechanism. Aiming at a high-voltage-class SFCL, we produced an electromagnetic repulsion switch equipped with a 24kV vacuum interrupter (VI). One problem is that the opening speed decreases, because the larger the vacuum interrupter, the heavier its contact. For this reason, the current flowing in the superconductor may not be interrupted within a half cycle. To solve this problem, the coil connected in parallel must be redesigned to strengthen the electromagnetic repulsion force at the moment the vacuum interrupter opens. The coil design was therefore changed, and a current limiting test was conducted to examine whether the problem was solved. The test used four series- and two parallel-connected YBCO thin films, each 12 cm long, with a parallel resistance of 0.1Ω connected across each film. As a result, we succeeded in interrupting the superconductor current within a half cycle, and the series- and parallel-connected YBCO thin films limited the current without failure.

  7. Assessment on the influence of resistive superconducting fault current limiter in VSC-HVDC system

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Geon; Khan, Umer Amir; Hwang, Jae-Sang; Seong, Jae-Kyu; Shin, Woo-Ju; Park, Byung-Bae; Lee, Bang-Wook

    2014-09-01

    Due to its lower risk of commutation failures, lower harmonic generation and lower reactive power consumption, the Voltage Source Converter (VSC) based HVDC system is regarded as the optimum HVDC solution for the future power grid. However, the absence of suitable fault protection devices for HVDC systems hinders efficient VSC-HVDC power grid design. In order to enhance the reliability of the VSC-HVDC power grid against fault current problems, the application of resistive Superconducting Fault Current Limiters (SFCLs) could be considered. SFCLs could also be applied to VSC-HVDC systems with integrated AC power systems in order to enhance the transient response and robustness of the system. In this paper, in order to evaluate the role of SFCLs in VSC-HVDC systems and to determine their suitable position in VSC-HVDC power systems integrated with an AC power system, a simulation model based on the Korea Jeju-Haenam HVDC power system was designed in Matlab Simulink/SimPowerSystems. The designed model consisted of a VSC-HVDC system connected with an AC microgrid. Utilizing the designed VSC-HVDC system, the feasible locations of resistive SFCLs were evaluated when DC line-to-line, DC line-to-ground and three-phase AC faults occurred. Consequently, it was found that the simulation model was effective for evaluating the positive effects of resistive SFCLs on the suppression of fault currents in VSC-HVDC systems as well as in the integrated AC systems. Finally, the optimum locations of SFCLs in VSC-HVDC transmission systems were suggested based on the simulation results.

  8. Intelligent on-line fault tolerant control for unanticipated catastrophic failures.

    PubMed

    Yen, Gary G; Ho, Liang-Wei

    2004-10-01

    As dynamic systems become increasingly complex, experience rapidly changing environments, and encounter a greater variety of unexpected component failures, solving the control problems of such systems is a grand challenge for control engineers. Traditional control design techniques are not adequate to cope with these systems, which may suffer from unanticipated dynamic failures. In this work, we investigate the on-line fault-tolerant control problem and propose an intelligent on-line control strategy to handle the desired-trajectory tracking problem for systems suffering from various unanticipated catastrophic faults. Through theoretical analysis, a sufficient condition for system stability has been derived and two different on-line control laws have been developed. The proposed intelligent control strategy continuously monitors system performance and identifies the system's current state using a fault detection method based upon our best knowledge of the nominal system and nominal controller. Once a fault is detected, the proposed intelligent controller adjusts its control signal to compensate for the unknown failure dynamics, using an artificial neural network as an on-line estimator to approximate the unexpected and unknown failure dynamics. The first control law is derived directly from Lyapunov stability theory, while the second is based upon the discrete-time sliding mode control technique. Both control laws have been implemented in a variety of failure scenarios to validate the proposed intelligent control scheme. The simulation results, including a three-tank benchmark problem, agree with the theoretical analysis and demonstrate a significant improvement in trajectory-following performance under the proposed intelligent control strategy.

  9. Simulation of fault performance of a diesel engine driven brushless alternator through PSPICE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, S.S.Y.; Ananthakrishnan, P.; Hangari, V.U.

    1995-12-31

    Analysis of the fault performance of a brushless alternator with damper windings in the main alternator has been handled ab initio as a total modeling and simulation problem, through proper application of Park's equivalent-circuit approach individually to the main and exciter alternator units of the brushless alternator, and implemented through PSPICE. The accuracy of the parameters used in the modeling and the results obtained through the PSPICE implementation are then evaluated for a specific 125 kVA brushless alternator in two stages, as follows: first, by comparing the predicted fault performance obtained from simulation of the 125 kVA main alternator alone, treated as a conventional alternator, with the results obtained through closed-form analytical expressions available in the literature for fault currents and torques in such conventional alternators; second, by comparing some of the simulation results with those obtained experimentally on the brushless alternator itself. To enable proper calculation of derating factors to be used in the design of such brushless alternators, the simulation results also include harmonic analysis of the steady-state fault currents and torques. Throughout these studies, the brushless alternator is treated as being on no load at the instant of occurrence of the fault.

  10. Computer memory power control for the Galileo spacecraft

    NASA Technical Reports Server (NTRS)

    Detwiler, R. C.

    1983-01-01

    The developmental history, major design drivers, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This concept provides a unique solution to the problem of volatile memory loss without the use of a battery or other large energy-storage elements usually associated with uninterruptible power supply designs.

  11. Signal injection as a fault detection technique.

    PubMed

    Cusidó, Jordi; Romeral, Luis; Ortega, Juan Antonio; Garcia, Antoni; Riba, Jordi

    2011-01-01

    Double-frequency tests are used for evaluating stator windings and analyzing their temperature. Likewise, signal injection on induction machines is used in sensorless motor control to determine the rotor position. Motor Current Signature Analysis (MCSA), which focuses on the spectral analysis of the stator current, is the most widely used method for identifying faults in induction motors. Motor faults such as broken rotor bars, bearing damage and eccentricity of the rotor axis can be detected. However, the method presents some problems at low speed and low torque, mainly due to the proximity between the frequencies to be detected and the small amplitude of the resulting harmonics. This paper proposes injecting an additional voltage into the machine under test at a frequency different from the fundamental one, and then studying the resulting harmonics around the new frequencies that appear due to the composition of the injected and main frequencies.
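The low-speed, low-torque difficulty described above can be made concrete with a standard MCSA result (not taken from this paper): broken-rotor-bar sidebands appear at f_s(1 ± 2s) around the supply frequency f_s, where s is the per-unit slip, so at light load the sidebands crowd the carrier. The numbers below are illustrative:

```python
# Broken-rotor-bar sideband locations as a function of slip. At low torque the
# slip is small, so the sidebands sit very close to the supply frequency and
# are hard to resolve against the carrier, which motivates signal injection.

def broken_bar_sidebands(f_supply, slip):
    """Lower and upper broken-rotor-bar sideband frequencies (Hz)."""
    return f_supply * (1 - 2 * slip), f_supply * (1 + 2 * slip)

low_torque = broken_bar_sidebands(f_supply=50.0, slip=0.005)   # light load
rated_torque = broken_bar_sidebands(f_supply=50.0, slip=0.04)  # rated load
# At slip 0.005 the sidebands are only 0.5 Hz from the 50 Hz carrier;
# at slip 0.04 they are 4 Hz away and far easier to separate.
```

Injecting a second voltage at a different frequency, as the paper proposes, creates a new set of composition harmonics away from the crowded region around the fundamental.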

  13. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    NASA Astrophysics Data System (ADS)

    Kodali, Anuradha

    In this thesis, we develop dynamic multiple fault diagnosis (DMFD) algorithms to diagnose faults that are sporadic and coupled. Firstly, we formulate a coupled factorial hidden Markov model-based (CFHMM) framework to diagnose dependent faults occurring over time (the dynamic case). Here, we implement a mixed-memory Markov coupling model to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated and real-world systems with coupled faults; the results show that this approach improves the correct isolation rate as compared to the formulation where independent fault states are assumed. Secondly, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix that couples the failed tests and faults via the constraint matrix, and due to the temporal dependence of failure sources over time. Here, the DSC problem is motivated from the viewpoint of a dynamic multiple fault diagnosis problem, but it has wide applications in operations research, e.g., the facility location problem. Thus, we also formulate the DSC problem in the context of a dynamically evolving facility location problem. Here, a facility can be opened, closed, or temporarily unavailable at any time for a given requirement of demand points. 
These activities are associated with costs or penalties, viz., phase-in or phase-out for the opening or closing of a facility, respectively. The set-covering matrix encapsulates the relationship among the rows (tests or demand points) and columns (faults or locations) of the system at each time. By relaxing the coupling constraints using Lagrange multipliers, the DSC problem can be decoupled into independent subproblems, one for each column. Each subproblem is solved using the Viterbi decoding algorithm, and a primal feasible solution is constructed by modifying the Viterbi solutions via a heuristic. The proposed Viterbi-Lagrangian relaxation algorithm (VLRA) provides a measure of suboptimality via an approximate duality gap. As a major practical extension of the above problem, we also consider the problem of diagnosing faults with delayed test outcomes, termed delay-dynamic set-covering (DDSC), and experiment with real-world problems that exhibit masking faults. Also, we present simulation results on OR-library datasets (set-covering formulations are predominantly validated on these matrices in the literature), posed as facility location problems. Finally, we implement these algorithms to solve problems in aerospace and automotive applications. Firstly, we address the diagnostic ambiguity problem in aerospace and automotive applications by developing a dynamic fusion framework that includes dynamic multiple fault diagnosis algorithms. This improves the correct fault isolation rate, while minimizing the false alarm rates, by considering multiple faults instead of the traditional data-driven techniques based on single fault (class)-single epoch (static) assumption. The dynamic fusion problem is formulated as a maximum a posteriori decision problem of inferring the fault sequence based on uncertain outcomes of multiple binary classifiers over time. 
The fusion process involves three steps: the first step transforms the multi-class problem into dichotomies using error correcting output codes (ECOC), thereby solving the concomitant binary classification problems; the second step fuses the outcomes of multiple binary classifiers over time using a sliding window or block dynamic fusion method that exploits temporal data correlations over time. We solve this NP-hard optimization problem via a Lagrangian relaxation (variational) technique. The third step optimizes the classifier parameters, viz., probabilities of detection and false alarm, using a genetic algorithm. The proposed algorithm is demonstrated by computing the diagnostic performance metrics on a twin-spool commercial jet engine, an automotive engine, and UCI datasets (problems with high classification error are specifically chosen for experimentation). We show that the primal-dual optimization framework performed consistently better than any traditional fusion technique, even when it is forced to give a single fault decision across a range of classification problems. Secondly, we implement the inference algorithms to diagnose faults in vehicle systems that are controlled by a network of electronic control units (ECUs). The faults, originating from various interactions and especially between hardware and software, are particularly challenging to address. Our basic strategy is to divide the fault universe of such cyber-physical systems in a hierarchical manner, and monitor the critical variables/signals that have impact at different levels of interactions. The proposed diagnostic strategy is validated on an electrical power generation and storage system (EPGS) controlled by two ECUs in an environment with CANoe/MATLAB co-simulation. Eleven faults are injected with the failures originating in actuator hardware, sensor, controller hardware and software components. 
A diagnostic matrix is established via simulations to represent the relationship between the faults and the test outcomes (also known as fault signatures). The results show that the proposed diagnostic strategy is effective in addressing interaction-caused faults.
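The set-covering core of the DSC formulation can be illustrated statically: pick a parsimonious set of candidate faults whose signatures jointly explain all failed tests. The sketch below uses a textbook greedy approximation, not the thesis's Viterbi-Lagrangian algorithm, and an illustrative fault-test dependency matrix:

```python
# Greedy minimum set cover over a fault-test dependency matrix: repeatedly
# pick the fault that explains the most still-unexplained failed tests.

def greedy_fault_cover(fault_signatures, failed_tests):
    """Pick faults greedily until every failed test is explained."""
    uncovered = set(failed_tests)
    diagnosis = []
    while uncovered:
        best = max(fault_signatures,
                   key=lambda f: len(fault_signatures[f] & uncovered))
        if not fault_signatures[best] & uncovered:
            raise ValueError("some failed tests cannot be explained")
        diagnosis.append(best)
        uncovered -= fault_signatures[best]
    return diagnosis

signatures = {             # fault -> set of tests it causes to fail
    "F1": {"t1", "t2"},
    "F2": {"t3"},
    "F3": {"t2", "t3"},
}
diagnosis = greedy_fault_cover(signatures, failed_tests={"t1", "t2", "t3"})
```

The dynamic formulation in the thesis couples one such problem per epoch through the temporal evolution of the fault states, which is what the Lagrangian relaxation decouples.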

  14. Sequential Test Strategies for Multiple Fault Isolation

    NASA Technical Reports Server (NTRS)

    Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.

    1997-01-01

    In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
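The information-theoretic idea behind test sequencing can be shown in a simplified single-fault setting (the paper's algorithms handle multiple faults and redundancy): at each step, choose the test whose pass/fail outcome best splits the remaining fault candidates, i.e., maximizes the entropy of the induced split. The fault-test matrix below is illustrative:

```python
import math

# Pick the most informative next test: the one whose outcome partitions the
# candidate fault set closest to half/half (maximum entropy of the split).

def split_entropy(test, candidates, detects):
    """Entropy (bits) of the pass/fail split this test induces on candidates."""
    fail = sum(1 for f in candidates if f in detects[test])
    n = len(candidates)
    if fail in (0, n):
        return 0.0                     # outcome is predetermined: no information
    p = fail / n
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

detects = {                            # test -> faults it detects
    "TA": {"F1", "F2"},
    "TB": {"F1"},
    "TC": {"F1", "F2", "F3", "F4"},
}
candidates = {"F1", "F2", "F3", "F4"}
best_test = max(detects, key=lambda t: split_entropy(t, candidates, detects))
```

Here TA splits the four candidates 2/2 (1 bit), TB splits 1/3 (about 0.81 bits), and TC fails for every candidate (0 bits), so TA is chosen first; repeating this after each observed outcome yields a diagnostic tree or digraph.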

  15. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem: it has become a typical multi-constraint, multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum probability of average fault, maximum average importance, and minimum average complexity of test. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking minimization of the cost of fault reasoning as the target function. Since the problem is NP-hard, a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. 
Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can perform reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, providing a new method for multi-constraint, multi-objective fault diagnosis and reasoning in complex systems.

  16. The technique of entropy optimization in motor current signature analysis and its application in the fault diagnosis of gear transmission

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Liang, Lin; Liu, Fei; Xu, Guanghua; Luo, Ailing; Zhang, Sicong

    2012-05-01

    Nowadays, Motor Current Signature Analysis (MCSA) is widely used in the fault diagnosis and condition monitoring of machine tools. However, because the current signal has a low SNR (signal-to-noise ratio), it is difficult to identify the feature frequencies of machine tools from the complex current spectrum with traditional signal processing methods such as the FFT, since the feature frequencies are often dense and overlapping. In studying MCSA it was found that entropy, which is associated with the probability distribution of any random variable, is important for frequency identification and therefore plays an important role in signal processing. In order to solve the problem that the feature frequencies are difficult to identify, an entropy optimization technique based on the motor current signal is presented in this paper for extracting the typical feature frequencies of machine tools while effectively suppressing disturbances. Simulated current signals were generated in MATLAB, and a real current signal was obtained from a complex gearbox at an iron works in Luxembourg. For diagnosis, MCSA is combined with entropy optimization. Both simulated and experimental results show that this technique is efficient, accurate and reliable enough to extract the feature frequencies of the current signal, providing a new strategy for the fault diagnosis and condition monitoring of machine tools.
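A minimal sketch of the kind of entropy criterion such a technique can optimize: a spectrum concentrated in a few feature lines has low Shannon entropy, while a noise-buried spectrum has high entropy, so entropy can steer processing parameters toward settings that sharpen the feature frequencies. The signals below are synthetic, and this is an illustration of the criterion, not the paper's exact algorithm:

```python
import numpy as np

# Shannon entropy of the normalized power spectrum: low when power is
# concentrated in a few bins (sharp feature lines), high when spread by noise.

def spectral_entropy(x):
    """Entropy (bits) of the normalized power spectrum of x."""
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

fs = 1000
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
clean_line = np.sin(2 * np.pi * 50 * t)              # one sharp feature line
noisy = clean_line + 3 * rng.standard_normal(t.size) # same line buried in noise
# The clean tone concentrates its power in one bin, so its spectral entropy is
# far lower than that of the noisy record.
```

An entropy-minimizing search over processing parameters (window, band, demodulation settings) then prefers configurations under which the feature lines stand out.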

  17. Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.

    1981-01-01

    Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.

  18. Fault tolerance of artificial neural networks with applications in critical systems

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.; Palumbo, Daniel L.; Arras, Michael K.

    1992-01-01

    This paper investigates the fault tolerance characteristics of time-continuous recurrent artificial neural networks (ANN) that can be used to solve optimization problems. The principle of operation and performance of these networks are first illustrated using well-known model problems like the traveling salesman problem and the assignment problem. The ANNs are then subjected to 13 simultaneous 'stuck at 1' or 'stuck at 0' faults for network sizes of up to 900 'neurons'. The effects of these faults are demonstrated and the cause of the observed fault tolerance is discussed. An application is presented in which a network performs a critical task for a real-time distributed processing system by generating new task allocations during the reconfiguration of the system. The performance degradation of the ANN in the presence of faults is investigated by large-scale simulations, and the potential benefits of delegating a critical task to a fault tolerant network are discussed.

  19. On the Adaptive Protection of Microgrids: A Review on How to Mitigate Cyber Attacks and Communication Failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Hany F; Lashway, Christopher R; Mohammed, Osama A

    One main challenge in the practical implementation of a microgrid is the design of an adequate protection scheme in both grid-connected and islanded modes. Conventional overcurrent protection schemes face selectivity and sensitivity issues during grid and microgrid faults, since the fault current level differs between the two cases for the same relay. Various approaches have been implemented in the past to deal with this problem, yet the most promising ones are adaptive protection techniques abiding by the IEC 61850 communication standard. This paper presents a critical review of existing adaptive protection schemes, the technical challenges for the use of classical protection techniques, and the need for an adaptive, smart protection system. However, the risk of communication link failures and cyber security threats still remains a challenge in implementing a reliable adaptive protection scheme. A contingency is needed where a communication issue prevents the relay from adjusting to a lower current level during islanded mode. An adaptive protection scheme is proposed that utilizes energy storage (ES) and hybrid ES (HESS) already available in the network as a mechanism to source the higher fault current. Four common grid ES and HESS are reviewed for their suitability in feeding the fault, and some solutions are proposed.

  20. Adjustable direct current and pulsed circuit fault current limiter

    DOEpatents

    Boenig, Heinrich J.; Schillig, Josef B.

    2003-09-23

    A fault current limiting system for direct current circuits and pulsed power circuits. In these circuits, a current source biases a diode that is in series with the circuit's transmission line. If the fault current in a circuit exceeds the biasing current holding the diode in conduction, the diode will cease conducting and route the fault current through the current source and an inductor. This limits the rate of rise and the peak value of the fault current.
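The principle can be sketched numerically: while the load current stays below the bias current the diode conducts normally, but once it exceeds the bias, the diode blocks and the fault current is forced through the inductor, rising only at di/dt = V/L. The voltage, inductance, bias level, and time step below are illustrative, not values from the patent:

```python
# Behavioral sketch of the biased-diode fault current limiter: commutation
# into the inductive path caps the rate of rise of the fault current.

V_FAULT = 1000.0   # driving voltage during the fault (V), illustrative
L_LIMIT = 0.05     # inductance in the bias/limiting path (H)
I_BIAS = 500.0     # bias current holding the diode in conduction (A)

def limited_current(i_load, dt=1e-3):
    """Current one time step after a step demand to i_load, with limiting."""
    if i_load <= I_BIAS:
        return i_load                       # diode stays on: normal conduction
    # Diode blocks; current rises from the bias level only at di/dt = V/L.
    return I_BIAS + (V_FAULT / L_LIMIT) * dt

normal = limited_current(300.0)        # below bias: passes through unchanged
limited = limited_current(20_000.0)    # prospective 20 kA fault is throttled
```

With these numbers, one millisecond into the fault the current has risen only 20 A above the bias level instead of reaching the prospective 20 kA.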

  1. Final Project Report. Scalable fault tolerance runtime technology for petascale computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamoorthy, Sriram; Sadayappan, P

    With the massive number of components comprising the forthcoming petascale computer systems, hardware failures will be routinely encountered during execution of large-scale applications. Due to the multidisciplinary, multiresolution, and multiscale nature of the scientific problems that drive the demand for high-end systems, applications place increasingly differing demands on the system resources: disk, network, memory, and CPU. In addition to MPI, future applications are expected to use advanced programming models such as those developed under the DARPA HPCS program, as well as existing global address space programming models such as Global Arrays, UPC, and Co-Array Fortran. While there has been a considerable amount of work on fault-tolerant MPI, with a number of strategies and extensions for fault tolerance proposed, virtually none of the advanced models proposed for emerging petascale systems is currently fault aware. To achieve fault tolerance, development of underlying runtime and OS technologies able to scale to the petascale level is needed. This project has evaluated a range of runtime techniques for fault tolerance in advanced programming models.

  2. High voltage design structure for high temperature superconducting device

    DOEpatents

    Tekletsadik, Kasegn D [Rexford, NY

    2008-05-20

    In accordance with the present invention, modular corona shields are employed in an HTS device to reduce the electric field surrounding the HTS device. In an exemplary embodiment, a fault current limiter module in the insulation region of a cryogenic cooling system has at least one fault current limiter set, which employs a first corona shield disposed along the top portion of the fault current limiter set and electrically coupled to the fault current limiter set. A second corona shield is disposed along the bottom portion of the fault current limiter set and is electrically coupled to the fault current limiter set. An insulation barrier is disposed within the insulation region along at least one side of the fault current limiter set. The first corona shield and the second corona shield act together to reduce the electric field surrounding the fault current limiter set when voltage is applied to it.

  3. A quantum annealing approach for fault detection and diagnosis of graph-based systems

    NASA Astrophysics Data System (ADS)

    Perdomo-Ortiz, A.; Fluegemann, J.; Narasimhan, S.; Biswas, R.; Smelyanskiy, V. N.

    2015-02-01

    Diagnosing the minimal set of faults capable of explaining a set of given observations, e.g., from sensor readouts, is a hard combinatorial optimization problem usually tackled with artificial intelligence techniques. We present the mapping of this combinatorial problem to quadratic unconstrained binary optimization (QUBO), and the experimental results of instances embedded onto a quantum annealing device with 509 quantum bits. Besides being the first time a quantum approach has been proposed for problems in the advanced diagnostics community, to the best of our knowledge this work is also the first research utilizing the route Problem → QUBO → Direct embedding into quantum hardware, where we are able to implement and tackle problem instances with sizes that go beyond previously reported toy-model proof-of-principle quantum annealing implementations; this is a significant leap in the solution of problems via direct-embedding adiabatic quantum optimization. We discuss some of the programmability challenges in the current generation of the quantum device as well as a few possible ways to extend this work to more complex arbitrary network graphs.
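    As a toy illustration of the Problem → QUBO step (the conflict sets and penalty weight below are invented for the example): diagnosis is cast as finding the fewest faulty components that hit every conflict implied by the sensor readouts, with the covering constraint expressed as a quadratic penalty so the whole objective stays in QUBO form.

```python
from itertools import product

# Minimal-fault diagnosis as QUBO: x_i = 1 means component i is faulty.
# Each conflict set must contain at least one fault; the product
# (1 - xa)(1 - xb) is 1 only when a conflict is left unexplained, and
# expanding it gives linear and quadratic terms, i.e. a valid QUBO.

components = ["A", "B", "C"]
conflicts = [("A", "B"), ("B", "C")]   # hypothetical conflicts from sensors
P = 10.0                               # penalty > max possible fault count

def qubo_energy(x):
    idx = {c: i for i, c in enumerate(components)}
    cost = sum(x)                      # prefer as few faults as possible
    for a, b in conflicts:
        cost += P * (1 - x[idx[a]]) * (1 - x[idx[b]])
    return cost

# A classical brute-force search stands in for the quantum annealer here.
best = min(product([0, 1], repeat=len(components)), key=qubo_energy)
print([c for c, xi in zip(components, best) if xi])   # -> ['B']
```

    Component B alone explains both conflicts, so the minimum-energy state marks only B as faulty.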

  4. What does fault tolerant Deep Learning need from MPI?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amatya, Vinay C.; Vishnu, Abhinav; Siegel, Charles M.

    Deep Learning (DL) algorithms have become the de facto Machine Learning (ML) algorithms for large-scale data analysis. DL algorithms are computationally expensive -- even distributed DL implementations which use MPI require days of training (model learning) time on commonly studied datasets. Long-running DL applications become susceptible to faults, requiring the development of a fault tolerant system infrastructure in addition to fault tolerant DL algorithms. This raises an important question: what is needed from MPI for designing fault tolerant DL implementations? In this paper, we address this problem for permanent faults. We motivate the need for a fault tolerant MPI specification by an in-depth consideration of recent innovations in DL algorithms and their properties, which drive the need for specific fault tolerance features. We present an in-depth discussion on the suitability of different parallelism types (model, data and hybrid); the need (or lack thereof) for check-pointing of any critical data structures; and, most importantly, consideration of several fault tolerance proposals (user-level fault mitigation (ULFM), Reinit) in MPI and their applicability to fault tolerant DL implementations. We leverage a distributed memory implementation of Caffe, currently available under the Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches by extending MaTEx-Caffe to use a ULFM-based implementation. Our evaluation using the ImageNet dataset and the AlexNet neural network topology demonstrates the effectiveness of the proposed fault tolerant DL implementation using OpenMPI-based ULFM.

  5. Fault-tolerance of a neural network solving the traveling salesman problem

    NASA Technical Reports Server (NTRS)

    Protzel, P.; Palumbo, D.; Arras, M.

    1989-01-01

    This study presents the results of a fault-injection experiment that simulates a neural network solving the Traveling Salesman Problem (TSP). The network is based on a modified version of Hopfield's and Tank's original method. We define a performance characteristic for the TSP that allows an overall assessment of the solution quality for different city distributions and problem sizes. Five different 10-, 20-, and 30-city cases are used for the injection of up to 13 simultaneous stuck-at-0 and stuck-at-1 faults. The results of more than 4000 simulation runs show the extreme fault tolerance of the network, especially with respect to stuck-at-0 faults. One possible explanation for this overall surprising result is the redundancy of the problem representation.
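    The fault-injection mechanics can be sketched as follows (the relaxation step below is a placeholder, not the Hopfield-Tank dynamics): stuck-at units are re-clamped after every update, so the surviving units must absorb the perturbation.

```python
import random

# Stuck-at fault injection into a 10x10 grid of unit activations, as in
# the 10-city case. The update rule is a stand-in for the real network
# dynamics; the point is how clamped faults persist through updates.

random.seed(0)
N = 10
v = [[random.random() for _ in range(N)] for _ in range(N)]

stuck_at_0 = {(2, 3), (7, 5)}   # hypothetical faulty units
stuck_at_1 = {(4, 1)}

def clamp_faults(v):
    for i, j in stuck_at_0:
        v[i][j] = 0.0
    for i, j in stuck_at_1:
        v[i][j] = 1.0

for _ in range(50):   # placeholder relaxation loop
    v = [[min(1.0, max(0.0, x + 0.1 * (0.5 - x))) for x in row] for row in v]
    clamp_faults(v)   # faults re-asserted after every update

print(v[2][3], v[4][1])   # -> 0.0 1.0: clamped values survive the dynamics
```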

  6. Verification of an IGBT Fusing Switch for Over-current Protection of the SNS HVCM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benwell, Andrew; Kemp, Mark; Burkhart, Craig

    2010-06-11

    An IGBT based over-current protection system has been developed to detect faults and limit the damage caused by faults in high voltage converter modulators. During normal operation, an IGBT enables energy to be transferred from storage capacitors to a H-bridge. When a fault occurs, the over-current protection system detects the fault, limits the fault current and opens the IGBT to isolate the remaining stored energy from the fault. This paper presents an experimental verification of the over-current protection system under applicable conditions.

  7. A Method to Simultaneously Detect the Current Sensor Fault and Estimate the State of Energy for Batteries in Electric Vehicles

    PubMed Central

    Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang

    2016-01-01

    Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical to SOE estimation, and a current sensor is usually utilized to obtain the latest current information. If the current sensor fails, however, the SOE estimate may suffer large errors. This paper therefore detects the current sensor fault and estimates the SOE simultaneously. Using a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. By taking advantage of the accurately estimated current sensor fault, the influence of the fault can be eliminated and compensated, so the SOE estimate is affected little by the fault. A simulation and experimental workbench is established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately and that, simultaneously, the SOE can also be estimated accurately, with an estimation error little influenced by the fault. The maximum SOE estimation error is less than 2%, even though the large current error caused by the current sensor fault still exists. PMID:27548183
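    The observer structure can be sketched with a first-order stand-in for the battery current dynamics (the paper uses a full battery model; the plant, gains, and fault size below are illustrative): the proportional term tracks the state while the integral term converges to the constant sensor bias, and the corrected current y - f_hat then feeds the SOE integration.

```python
# PIO sketch: plant di/dt = -i + u, sensor reads y = i + f with an
# unknown constant bias fault f. The observer estimates i and f jointly.

dt, T = 0.001, 20.0
i_true, f = 0.0, 0.5        # true current and the sensor bias fault
i_hat, f_hat = 0.0, 0.0     # observer states
L1, L2 = 5.0, 6.0           # proportional and integral observer gains
u = 2.0                     # known input driving the current

for _ in range(int(T / dt)):
    i_true += dt * (-i_true + u)          # plant
    y = i_true + f                        # faulty sensor reading
    e = y - (i_hat + f_hat)               # output residual
    i_hat += dt * (-i_hat + u + L1 * e)   # proportional correction
    f_hat += dt * (L2 * e)                # integral action isolates the bias

print(f"bias: true {f:.3f}, estimated {f_hat:.3f}")
print(f"corrected current {y - f_hat:.3f} vs true {i_true:.3f}")
```

    With the bias estimate converged, the corrected reading matches the true current, so a downstream SOE integrator would see almost no error from the fault.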

  8. A Method to Simultaneously Detect the Current Sensor Fault and Estimate the State of Energy for Batteries in Electric Vehicles.

    PubMed

    Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang

    2016-08-19

    Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical to SOE estimation, and a current sensor is usually utilized to obtain the latest current information. If the current sensor fails, however, the SOE estimate may suffer large errors. This paper therefore detects the current sensor fault and estimates the SOE simultaneously. Using a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. By taking advantage of the accurately estimated current sensor fault, the influence of the fault can be eliminated and compensated, so the SOE estimate is affected little by the fault. A simulation and experimental workbench is established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately and that, simultaneously, the SOE can also be estimated accurately, with an estimation error little influenced by the fault. The maximum SOE estimation error is less than 2%, even though the large current error caused by the current sensor fault still exists.

  9. Diagnosing a Strong-Fault Model by Conflict and Consistency

    PubMed Central

    Zhou, Gan; Feng, Wenquan

    2018-01-01

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Current diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model’s prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). The proposed LTMS is then employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches offer the best candidate efficiency based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain—the heat control unit of a spacecraft—where the proposed methods are significantly better than best-first and conflict-directed A* search methods. PMID:29596302

  10. A Solid-State Fault Current Limiting Device for VSC-HVDC Systems

    NASA Astrophysics Data System (ADS)

    Larruskain, D. Marene; Zamora, Inmaculada; Abarrategui, Oihane; Iturregi, Araitz

    2013-08-01

    Faults in the DC circuit constitute one of the main limitations of voltage source converter (VSC) HVDC systems, as the high fault currents can seriously damage the converters. In this article, a new design for a fault current limiter (FCL) is proposed, which is capable of limiting the fault current as well as interrupting it, isolating the DC grid. The operation of the proposed FCL is analysed and verified for the most common faults that can occur in overhead lines.

  11. High temperature superconducting fault current limiter

    DOEpatents

    Hull, J.R.

    1997-02-04

    A fault current limiter for an electrical circuit is disclosed. The fault current limiter includes a high temperature superconductor in the electrical circuit. The high temperature superconductor is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter. 15 figs.
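    A hedged numerical sketch of the limiting principle (circuit values invented, not from the patent): below its critical current the superconductor contributes no impedance; once the fault current exceeds the critical current it quenches into a resistive state that clips the prospective peak.

```python
import math

# Toy simulation of resistive fault-current limiting by quench.
# V_peak, L_line, Ic and R_q are illustrative values only.

V_peak, freq = 6600.0, 60.0     # AC source
L_line = 5e-3                   # line inductance, H
Ic, R_q = 800.0, 3.0            # critical current (A), quenched resistance (ohm)

def peak_current(limited, dt=1e-5, cycles=3):
    i, peak, quenched = 0.0, 0.0, False
    for n in range(int(cycles / freq / dt)):
        v = V_peak * math.sin(2 * math.pi * freq * n * dt)
        if limited and abs(i) > Ic:
            quenched = True     # quench latches (recovery is much slower)
        R = R_q if quenched else 0.0
        i += dt * (v - R * i) / L_line
        peak = max(peak, abs(i))
    return peak

print(f"prospective peak: {peak_current(False):.0f} A")
print(f"limited peak:     {peak_current(True):.0f} A")
```

    The quenched resistance both damps the DC offset and reduces the steady-state fault current, so the limited peak comes out well below the prospective one.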

  12. Nitsche Extended Finite Element Methods for Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Coon, Ethan T.

    Modeling earthquakes and geologically short-time-scale events on fault networks is a difficult problem with important implications for human safety and design. These problems demonstrate a rich physical behavior, in which distributed loading localizes both spatially and temporally into earthquakes on fault systems. This localization is governed by two aspects: friction and fault geometry. Computationally, these problems provide a stern challenge for modelers --- static and dynamic equations must be solved on domains with discontinuities on complex fault systems, and frictional boundary conditions must be applied on these discontinuities. The most difficult aspect of modeling physics on complicated domains is the mesh. Most numerical methods involve meshing the geometry; nodes are placed on the discontinuities, and edges are chosen to coincide with faults. The resulting mesh is highly unstructured, making the derivation of finite difference discretizations difficult. Therefore, most models use the finite element method. Standard finite element methods place requirements on the mesh for the sake of stability, accuracy, and efficiency. The formation of a mesh which both conforms to fault geometry and satisfies these requirements is an open problem, especially for three-dimensional, physically realistic fault geometries. In addition, if the fault system evolves over the course of a dynamic simulation (i.e. in the case of growing cracks or breaking new faults), the geometry must be re-meshed at each time step. This can be expensive computationally. The fault-conforming approach is undesirable when complicated meshes are required, and impossible to implement when the geometry is evolving. Therefore, meshless and hybrid finite element methods that handle discontinuities without placing them on element boundaries are a desirable and natural way to discretize these problems.
Several such methods are being actively developed for use in engineering mechanics involving crack propagation and material failure. While some theory and application of these methods exist, implementations for the simulation of networks of many cracks have not yet been considered. For my thesis, I implement and extend one such method, the eXtended Finite Element Method (XFEM), for use in static and dynamic models of fault networks. Once this machinery is developed, it is applied to open questions regarding the behavior of networks of faults, including questions of distributed deformation in fault systems and ensembles of magnitude, location, and frequency in repeat ruptures. The theory of XFEM is augmented to allow for solution of problems with alternating regimes of static solves for elastic stress conditions and short, dynamic earthquakes on networks of faults. This is accomplished using Nitsche's approach for implementing boundary conditions. Finally, an optimization problem is developed to determine tractions along the fault, enabling the calculation of frictional constraints and the rupture front. This method is verified via a series of static, quasistatic, and dynamic problems. Armed with this technique, we look at several problems regarding geometry within the earthquake cycle in which geometry is crucial. We first look at quasistatic simulations on a community fault model of Southern California, and model slip distribution across that system. We find the distribution of deformation across faults compares reasonably well with slip rates across the region, as constrained by geologic data. We find geometry can provide constraints for friction, and consider the minimization of shear strain across the zone as a function of friction and plate loading direction, and infer bounds on fault strength in the region. Then we consider the repeated rupture problem, modeling the full earthquake cycle over the course of many events on several fault geometries. 
In this work, we look at distributions of events, studying the effect of geometry on statistical metrics of event ensembles. Finally, this thesis is a proof of concept for the XFEM on earthquake cycle models on fault systems. We identify strengths and weaknesses of the method, and identify places for future improvement. We discuss the feasibility of the method's use in three dimensions, and find the method to be a strong candidate for future crustal deformation simulations.

  13. High temperature superconducting fault current limiter

    DOEpatents

    Hull, John R.

    1997-01-01

    A fault current limiter (10) for an electrical circuit (14). The fault current limiter (10) includes a high temperature superconductor (12) in the electrical circuit (14). The high temperature superconductor (12) is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter (10).

  14. Application of Landsat imagery to problems of petroleum exploration in Qaidam Basin, China

    USGS Publications Warehouse

    Bailey, G.B.; Anderson, P.D.

    1982-01-01

    Tertiary and Quaternary nonmarine, petroleum-bearing sedimentary rocks have been extensively deformed by compressive forces. These forces created many folds which are current targets of Chinese exploration programs. Image-derived interpretations of folds, strike-slip faults, thrust faults, normal or reverse faults, and fractures compared very favorably, in terms of locations and numbers mapped, with Chinese data compiled from years of extensive field mapping. Many potential hydrocarbon trapping structures were precisely located. Orientations of major structural trends defined from Landsat imagery correlate well with those predicted for the area based on global tectonic theory. These correlations suggest that similar orientations exist in the eastern half of the basin where folded rocks are mostly obscured by unconsolidated surface sediments and where limited exploration has occurred.--Modified journal abstract.

  15. Current limiting behavior in three-phase transformer-type SFCLs using an iron core according to variety of fault

    NASA Astrophysics Data System (ADS)

    Cho, Yong-Sun; Jung, Byung-Ik; Ha, Kyoung-Hun; Choi, Soo-Geun; Park, Hyoung-Min; Choi, Hyo-Sang

    To apply the superconducting fault current limiter (SFCL) to the power system, the reliability of the fault-current-limiting operation must be ensured in diverse fault conditions. The SFCL must also be linked to the operation of the high-speed recloser in the power system. In this study, a three-phase transformer-type SFCL, which has a neutral line to improve the simultaneous quench characteristics of superconducting elements, was manufactured to analyze the fault-current-limiting characteristic according to the single, double, and triple line-to-ground faults. The transformer-type SFCL, wherein three-phase windings are connected to one iron core, reduced the burden on the superconducting element as the superconducting element on the sound phase was also quenched in the case of the single line-to-ground fault. In the case of double or triple line-to-ground faults, the flux from the faulted phase winding was interlinked with other faulted or sound phase windings, and the fault-current-limiting rate decreased because the windings of three phases were inductively connected by one iron core.

  16. Estimating the cost of compensating victims of medical negligence.

    PubMed Central

    Fenn, P.; Hermans, D.; Dingwall, R.

    1994-01-01

    The current system in Britain for compensating victims of medical injury depends on an assessment of negligence. Despite sporadic pressure on the government to adopt a "no fault" approach, such as exists in Sweden, the negligence system will probably remain for the immediate future. The cost of this system was estimated to be 52.3m pounds for England in 1990-1. The problem for the future, however, is one of forecasting accuracy at provider level: too high a guess and current patient care will suffer; too low a guess and future patient care will suffer. The introduction of a mutual insurance scheme may not resolve these difficulties, as someone will have to set the rates. Moreover, the figures indicate that if a no fault scheme were introduced the cost might be four times that of the current system, depending on the type of scheme adopted. PMID:8081145

  17. Insulation effect on thermal stability of Coated Conductors wires in liquid nitrogen

    NASA Astrophysics Data System (ADS)

    Rubeli, Thomas; Dutoit, Bertrand; Martynova, Irina; Makarevich, Artem; Molodyk, Alexander; Samoilenkov, Sergey

    2017-02-01

    Superconducting wires are not perfectly homogeneous in terms of critical current or stabilization. In resistive fault current limiter applications this can lead to hot spots if the fault current is only slightly above the nominal current of the device. Increasing stabilization, for example by using a thicker silver coating, may prevent this problem, but this method implies a longer wire length to maintain the same impedance during a fault. Very efficient cooling is another way to prevent hot spots; this can be achieved in the nucleate boiling regime. Optimal insulation can be used to prevent the film boiling regime, keeping the wire in the nucleate boiling regime over a much broader temperature range. In this work a novel technique is used to monitor the temperature of the wire in real time during the quench. Using this method, several increasing insulation thicknesses are tested, measuring for each the heat exchange rate to the nitrogen bath. Exchange rate measurements are made in the quasistatic regime and during the re-cooling of the wire. SuperOx wires provided with different insulation thicknesses exhibit excellent stability, far above that of a bare wire. On the other hand, for very thick insulations the stability gain is lost. The dependency of the re-cooling speed on insulation thickness is also measured.

  18. System and method for bearing fault detection using stator current noise cancellation

    DOEpatents

    Zhou, Wei; Lu, Bin; Habetler, Thomas G.; Harley, Ronald G.; Theisen, Peter J.

    2010-08-17

    A system and method for detecting incipient mechanical motor faults by way of current noise cancellation is disclosed. The system includes a controller configured to detect indicia of incipient mechanical motor faults. The controller further includes a processor programmed to receive a baseline set of current data from an operating motor and define a noise component in the baseline set of current data. The processor is also programmed to repeatedly receive real-time operating current data from the operating motor and remove the noise component from the operating current data in real-time to isolate any fault components present in the operating current data. The processor is then programmed to generate a fault index for the operating current data based on any isolated fault components.
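    A hedged sketch of the idea (the signal model, frequencies, and index definition are invented for illustration; the patent defines these differently): the spectrum of a healthy motor's stator current serves as the noise baseline, and the energy left after subtracting that baseline from an operating spectrum becomes the fault index.

```python
import cmath, math, random

# Baseline noise cancellation: anything in the healthy spectrum (supply
# fundamental plus broadband noise) is treated as "noise"; residual
# energy above the baseline is attributed to fault components.

random.seed(1)
N, fs = 256, 1000.0

def dft_mag(x):
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n for k in range(n // 2)]

def stator_current(fault_amp):
    # 50 Hz supply + broadband noise + optional 137 Hz bearing signature
    return [math.sin(2 * math.pi * 50 * t / fs)
            + 0.02 * random.gauss(0, 1)
            + fault_amp * math.sin(2 * math.pi * 137 * t / fs)
            for t in range(N)]

baseline = dft_mag(stator_current(0.0))       # learned from a healthy motor

def fault_index(x):
    return sum(max(s - b, 0.0) ** 2 for s, b in zip(dft_mag(x), baseline))

healthy = fault_index(stator_current(0.0))
faulty = fault_index(stator_current(0.2))     # incipient bearing fault
print(f"healthy index {healthy:.5f}, faulty index {faulty:.5f}")
```

    The faulty index is dominated by the 137 Hz component absent from the baseline, while the healthy index stays near zero.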

  19. Step-by-step magic state encoding for efficient fault-tolerant quantum computation

    PubMed Central

    Goto, Hayato

    2014-01-01

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387

  20. Step-by-step magic state encoding for efficient fault-tolerant quantum computation.

    PubMed

    Goto, Hayato

    2014-12-16

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  1. A New Deep Learning Model for Fault Diagnosis with Good Anti-Noise and Domain Adaptation Ability on Raw Vibration Signals

    PubMed Central

    Zhang, Wei; Peng, Gaoliang; Li, Chuanhao; Chen, Yuanhang; Zhang, Zhujun

    2017-01-01

    Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses wide kernels in the first convolutional layer for extracting features and suppressing high-frequency noise. Small convolutional kernels in the subsequent layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that the accuracy of CNNs currently applied to fault diagnosis is not very high. WDCNN can not only achieve 100% classification accuracy on normal signals, but also outperform the state-of-the-art DNN model based on frequency features under different working loads and noisy environment conditions. PMID:28241451
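    The data-augmentation step described above can be sketched directly (window length and stride here are illustrative, not the paper's settings): overlapping sliding windows cut many training samples from a single raw vibration recording.

```python
# Overlapping sliding-window augmentation for 1-D raw vibration signals.

def sliding_windows(signal, length, stride):
    return [signal[i:i + length]
            for i in range(0, len(signal) - length + 1, stride)]

raw = list(range(2048))               # stand-in for one vibration recording
samples = sliding_windows(raw, length=512, stride=64)
print(len(samples), len(samples[0]))  # -> 25 512
```

    With stride 64 a 2048-point record yields (2048 - 512) // 64 + 1 = 25 overlapping 512-point samples instead of just 4 disjoint ones.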

  2. Dynamic Fault Detection Chassis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mize, Jeffery J

    2007-01-01

    The high frequency switching megawatt-class High Voltage Converter Modulator (HVCM) developed by Los Alamos National Laboratory for the Oak Ridge National Laboratory's Spallation Neutron Source (SNS) is now in operation. One of the major problems with the modulator systems is shoot-thru conditions that can occur in an IGBT H-bridge topology, resulting in large fault currents and device failure in a few microseconds. The Dynamic Fault Detection Chassis (DFDC) is a fault monitoring system; it monitors transformer flux saturation using a window comparator and dV/dt events on the cathode voltage caused by any abnormality such as capacitor breakdown, transformer primary turns shorts, or dielectric breakdown between the transformer primary and secondary. If faults are detected, the DFDC will inhibit the IGBT gate drives and shut the system down, significantly reducing the possibility of a shoot-thru condition or other equipment-damaging events. In this paper, we will present system integration considerations, performance characteristics of the DFDC, and discuss its ability to significantly reduce costly down time for the entire facility.

  3. A Non-linear Geodetic Data Inversion Using ABIC for Slip Distribution on a Fault With an Unknown dip Angle

    NASA Astrophysics Data System (ADS)

    Fukahata, Y.; Wright, T. J.

    2006-12-01

    We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining slip distribution is to first determine the fault geometry by minimizing the square misfit under the assumption of a uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of a uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform slip fault model, we have to simultaneously determine the values of the nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity of the problems is actually weak, when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault. In this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to compromise reciprocal requirements for model resolution and estimation errors in a natural way. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. The non-linearity has been dissolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal value of the relative weight of observed data to smoothness constraints is objectively determined. In this study, using ABIC in determining the optimal dip angle, we dissolved the non-linearity of the inverse problem.
We applied the method to the InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than before.

  4. A fault-tolerant strategy based on SMC for current-controlled converters

    NASA Astrophysics Data System (ADS)

    Azer, Peter M.; Marei, Mostafa I.; Sattar, Ahmed A.

    2018-05-01

    Sliding mode control (SMC) is used to control variable-structure systems such as power electronic converters. This paper presents a fault-tolerant strategy based on SMC for current-controlled AC-DC converters. The proposed SMC is based on three sliding surfaces for the three legs of the AC-DC converter. Two sliding surfaces are assigned to control the phase currents, since the input three-phase currents are balanced. The third sliding surface is therefore an extra degree of freedom, which is utilised to control the neutral voltage. This action is utilised to enhance the performance of the converter during open-switch faults. The proposed fault-tolerant strategy is based on allocating the sliding surface of the faulty leg to control the neutral voltage. Consequently, the current waveform is improved. The behaviour of the current-controlled converter during different types of open-switch faults is analysed. Double switch faults include three cases: two upper switches; upper and lower switches on different legs; and two switches of the same leg. The dynamic performance of the proposed system is evaluated during healthy and open-switch fault operation. Simulation results exhibit the various merits of the proposed SMC-based fault-tolerant strategy.
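    The surface-reallocation rule can be sketched as selection logic (converter dynamics omitted; leg names and task labels are invented for the sketch): in the healthy case two surfaces track phase currents and the third regulates the neutral voltage, and after an open-switch fault the faulty leg's surface takes over the neutral-voltage task.

```python
# Sliding-surface task assignment for a three-leg AC-DC converter.

LEGS = ("a", "b", "c")

def assign_surfaces(faulty_leg=None):
    """Map each leg's sliding surface to a control task."""
    if faulty_leg is None:
        return {"a": "current", "b": "current", "c": "neutral"}
    if faulty_leg not in LEGS:
        raise ValueError(f"unknown leg: {faulty_leg}")
    return {leg: ("neutral" if leg == faulty_leg else "current")
            for leg in LEGS}

def switch_cmd(surface_value):
    # Basic sliding-mode law: the switch state follows the surface sign.
    return 1 if surface_value > 0 else 0

print(assign_surfaces())       # healthy: leg c regulates the neutral voltage
print(assign_surfaces("b"))    # faulty leg b reassigned to the neutral task
```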

  5. System and method for motor fault detection using stator current noise cancellation

    DOEpatents

    Zhou, Wei; Lu, Bin; Nowak, Michael P.; Dimino, Steven A.

    2010-12-07

    A system and method for detecting incipient mechanical motor faults by way of current noise cancellation is disclosed. The system includes a controller configured to detect indicia of incipient mechanical motor faults. The controller further includes a processor programmed to receive a baseline set of current data from an operating motor and define a noise component in the baseline set of current data. The processor is also programmed to acquire at least one additional set of real-time operating current data from the motor during operation, redefine the noise component present in each additional set of real-time operating current data, and remove the noise component from the operating current data in real-time to isolate any fault components present in the operating current data. The processor is then programmed to generate a fault index for the operating current data based on any isolated fault components.
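
    A minimal sketch of the noise-cancellation idea, assuming a per-sample average of healthy baseline cycles as the noise model (the patent's actual estimator is more elaborate):

```python
import math

# Learn a "noise" component from baseline current data of a healthy
# motor (here: the per-sample average of several baseline cycles),
# subtract it from operating data, and score the residual energy as a
# fault index.
def noise_component(baseline_cycles):
    n = len(baseline_cycles)
    return [sum(cycle[i] for cycle in baseline_cycles) / n
            for i in range(len(baseline_cycles[0]))]

def fault_index(cycle, noise):
    # Residual energy left after cancelling the noise component.
    return sum((s - m) ** 2 for s, m in zip(cycle, noise))

healthy = [[math.sin(2 * math.pi * k / 32) for k in range(32)]
           for _ in range(5)]
noise = noise_component(healthy)

# A healthy cycle scores near zero; a cycle carrying an extra fault
# harmonic scores visibly higher.
faulty = [math.sin(2 * math.pi * k / 32)
          + 0.2 * math.sin(2 * math.pi * 3 * k / 32) for k in range(32)]
print(fault_index(healthy[0], noise), fault_index(faulty, noise))
```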

  6. Satellite Fault Diagnosis Using Support Vector Machines Based on a Hybrid Voting Mechanism

    PubMed Central

    Yang, Shuqiang; Zhu, Xiaoqian; Jin, Songchang; Wang, Xiang

    2014-01-01

    Satellite fault diagnosis plays an important role in enhancing the safety, reliability, and availability of a satellite system. However, the enormous number of parameters and the presence of multiple faults make satellite fault diagnosis challenging: interactions between parameters and misclassifications among multiple faults increase both the false alarm rate and the false negative rate. Moreover, for each satellite fault there is not enough fault data for training, which degrades the performance of most classification algorithms. In this paper, we propose an improved SVM based on a hybrid voting mechanism (HVM-SVM) to deal with the problems of enormous parameters, multiple faults, and small samples. Experimental results show that HVM-SVM improves the accuracy of fault diagnosis. PMID:25215324
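
    The voting part of such a scheme can be sketched with weighted majority voting, with trivial threshold rules standing in for the trained SVMs; the class names, features, and weights are hypothetical:

```python
# Weighted voting among base classifiers: each vote counts with the
# classifier's weight (e.g. its validation accuracy), and the label
# with the largest total wins.
def hybrid_vote(classifiers, weights, x):
    votes = {}
    for clf, w in zip(classifiers, weights):
        label = clf(x)
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Three toy "classifiers" for two fault classes on a 2-feature input.
c1 = lambda x: "fault_A" if x[0] > 0.5 else "fault_B"
c2 = lambda x: "fault_A" if x[1] > 0.5 else "fault_B"
c3 = lambda x: "fault_B"
clfs, weights = [c1, c2, c3], [0.9, 0.8, 0.6]

print(hybrid_vote(clfs, weights, (0.9, 0.7)))   # c1 and c2 outvote c3
print(hybrid_vote(clfs, weights, (0.1, 0.1)))   # unanimous
```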

  7. A new method of converter transformer protection without commutation failure

    NASA Astrophysics Data System (ADS)

    Zhang, Jiayu; Kong, Bo; Liu, Mingchang; Zhang, Jun; Guo, Jianhong; Jing, Xu

    2018-01-01

    With the development of AC/DC hybrid transmission technology, the converter transformer, as the node between AC and DC conversion in HVDC transmission, must operate reliably, safely and stably for DC transmission to succeed. Commutation failure, a common problem in DC transmission, poses a serious threat to the safe and stable operation of the power grid. Based on the commutation relation between the AC bus voltage of the converter station and the output DC voltage of the converter, a generalized transformation ratio is defined, and a new converter transformer protection method based on this ratio is put forward. The method uses the generalized ratio to monitor faulty or abnormal commutation components on-line, and uses the current characteristics of the valve-side bushing CT to identify converter transformer faults accurately, without being influenced by the presence of commutation failure. Fault analysis and EMTDC/PSCAD simulation show that the protection operates correctly under various converter fault conditions.

  8. Robust Diagnosis Method Based on Parameter Estimation for an Interturn Short-Circuit Fault in Multipole PMSM under High-Speed Operation.

    PubMed

    Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo

    2015-11-20

    This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short-circuit fault. Previous works in this area have suffered from uncertainties in the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. It also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase and the values of G and Lq; for this purpose, two open-loop observers and a particle-swarm optimization method are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq while exhibiting robustness against parameter uncertainties.
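
    A minimal particle-swarm sketch of this kind of parameter estimation, assuming a made-up scalar current model i(Lq) in place of the PMSM equations:

```python
import random

random.seed(1)

# Estimate an unknown parameter (here Lq) by minimising the mismatch
# between a measured quantity and a model prediction with a basic
# particle swarm. The scalar model i(Lq) = 1/Lq is a stand-in for the
# PMSM current model.
def model_current(lq):
    return 1.0 / lq

true_lq = 0.004
measured = model_current(true_lq)

def cost(lq):
    return (model_current(lq) - measured) ** 2

def pso(cost, lo, hi, n=20, iters=60):
    pos = [random.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]                        # per-particle best positions
    for _ in range(iters):
        gbest = min(pbest, key=cost)      # swarm-wide best position
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
    return min(pbest, key=cost)

est = pso(cost, 0.001, 0.01)
print("estimated Lq:", est)
```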

  9. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexities. As a result the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems.
Each established solution is described in the form of a pattern that addresses concrete problems in the design of resilient systems. The complete catalog of resilience design patterns provides designers with reusable design elements. We also define a framework that enhances a designer's understanding of the important constraints and opportunities for the design patterns to be implemented and deployed at various layers of the system stack. This design framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also supports optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.

  10. Arc burst pattern analysis fault detection system

    NASA Technical Reports Server (NTRS)

    Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

    1997-01-01

    A method and apparatus are provided for detecting an arcing fault on a power line carrying a load current. Parameters indicative of power flow and possible fault events on the line, such as voltage and load current, are monitored and analyzed for an arc burst pattern exhibited by arcing faults in a power system. These arcing faults are detected by identifying bursts in each half-cycle of the fundamental current. Bursts occurring at or near a voltage peak indicate arcing on that phase. Once a faulted phase line is identified, a comparison of the current and voltage reveals whether the fault is located in a downstream direction of power flow toward customers, or upstream toward a generating station. If the fault is located downstream, the line is de-energized, and if located upstream, the line may remain energized to prevent unnecessary power outages.
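
    The peak-coincidence test can be sketched as follows; the burst metric (summed sample-to-sample current jumps in a window around the voltage peak) and the window width are assumptions for illustration:

```python
import math

# Score a half-cycle by summing abrupt sample-to-sample current jumps
# in a window around the voltage peak; arcing bursts coinciding with
# the peak give a large score.
def arc_burst_score(voltage, current, window=4):
    peak = max(range(len(voltage)), key=lambda k: abs(voltage[k]))
    lo = max(0, peak - window)
    hi = min(len(current) - 1, peak + window)
    return sum(abs(current[k + 1] - current[k]) for k in range(lo, hi))

n = 64
volts = [math.sin(math.pi * k / n) for k in range(n)]         # one half-cycle
clean = [0.5 * math.sin(math.pi * k / n) for k in range(n)]
arcing = clean[:]
for k in (n // 2 - 1, n // 2 + 1):                            # spikes at the peak
    arcing[k] += 0.4

print(arc_burst_score(volts, clean), arc_burst_score(volts, arcing))
```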

  11. Distributing Earthquakes Among California's Faults: A Binary Integer Programming Approach

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2016-12-01

    The statement of the problem is simple: given regional seismicity specified by a Gutenberg-Richter (G-R) relation, how are earthquakes distributed to match observed fault-slip rates? The objective is to determine the magnitude-frequency relation on individual faults. The California statewide G-R b-value and a-value are estimated from historical seismicity, with the a-value accounting for off-fault seismicity. UCERF3 consensus slip rates are used, based on geologic and geodetic data and including estimates of coupling coefficients. The binary integer programming (BIP) problem is set up such that each earthquake from a synthetic catalog spanning millennia can occur at any location along any fault. The decision vector therefore consists of binary variables, with values equal to one indicating the location of each earthquake that results in an optimal match of slip rates, in an L1-norm sense. Rupture area and slip associated with each earthquake are determined from a magnitude-area scaling relation. Uncertainty bounds on the UCERF3 slip rates provide explicit minimum and maximum constraints to the BIP model, with the former more important to the feasibility of the problem. A maximum-magnitude limit associated with each fault, based on fault length, provides an implicit constraint. Solution of integer programming problems with a large number of variables (>10^5 in this study) has been possible only since the late 1990s. In addition to the classic branch-and-bound technique used for these problems, several other algorithms have been developed recently, including pre-solving, sifting, cutting planes, heuristics, and parallelization. An optimal solution is obtained using a state-of-the-art BIP solver for M≥6 earthquakes and California's faults with slip rates > 1 mm/yr. Preliminary results indicate a surprising diversity of on-fault magnitude-frequency relations throughout the state.
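
    A toy version of the assignment problem, small enough to solve by exhaustive search rather than a BIP solver, illustrates the L1 objective (fault names, slip rates, and event slips are hypothetical):

```python
from itertools import product

# Assign each earthquake in a tiny synthetic catalog to one of two
# faults so that the summed slip matches the target slip rates in an
# L1 sense. Real problems use BIP solvers with >10^5 binary variables;
# exhaustive search here just makes the objective concrete.
faults = ["A", "B"]
target_slip = {"A": 3.0, "B": 1.0}       # hypothetical slip rates
quake_slip = [1.0, 1.0, 1.0, 1.0]        # slip contributed by each event

best_assign, best_misfit = None, float("inf")
for assign in product(faults, repeat=len(quake_slip)):
    slip = {f: 0.0 for f in faults}
    for f, s in zip(assign, quake_slip):
        slip[f] += s
    misfit = sum(abs(slip[f] - target_slip[f]) for f in faults)
    if misfit < best_misfit:
        best_assign, best_misfit = assign, misfit

print(best_assign, best_misfit)
```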

  12. Passive fault current limiting device

    DOEpatents

    Evans, Daniel J.; Cha, Yung S.

    1999-01-01

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment.

  13. Passive fault current limiting device

    DOEpatents

    Evans, D.J.; Cha, Y.S.

    1999-04-06

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment. 6 figs.

  14. Adaptive Neural Networks Decentralized FTC Design for Nonstrict-Feedback Nonlinear Interconnected Large-Scale Systems Against Actuator Faults.

    PubMed

    Li, Yongming; Tong, Shaocheng

    The problem of active fault-tolerant control (FTC) is investigated for large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper contain unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable state problem. Neural networks (NNs) are used to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties and unknown interconnected terms can be solved. By combining the adaptive backstepping design principle with the Nussbaum gain function property, a novel NN adaptive output-feedback FTC approach is developed. The proposed FTC controller can guarantee that all signals in all subsystems are bounded, and the tracking errors for each subsystem converge to a small neighborhood of zero. Finally, numerical results of practical examples are presented to further demonstrate the effectiveness of the proposed control strategy.

  15. Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zhu, G.; Chen, X.

    2011-12-01

    We implement a non-staggered finite difference method with split nodes to solve the dynamic rupture problem on non-planar faults. The split-node method is widely used in dynamic rupture simulation because it represents the fault plane more precisely than alternatives such as the thick-fault or stress-glut approaches. The finite difference method is likewise a popular numerical method for kinematic and dynamic problems in seismology. Previous work, however, has focused mostly on the staggered-grid method because of its simplicity and computational efficiency, and that method is at a disadvantage relative to non-staggered finite differences in describing boundary conditions, especially irregular boundaries such as non-planar faults. Zhang and Chen (2006) proposed a high-order non-staggered MacCormack finite difference method based on curved grids to solve irregular boundary problems precisely. Building on this non-staggered grid method, we successfully simulate the spontaneous rupture problem. Since the fault plane is itself a boundary condition, which may of course be irregular, the method should be able to simulate the rupture process on any kind of bending fault plane. We first validate the method in Cartesian coordinates; for bending faults, curvilinear grids are used.

  16. Reclosing operation characteristics of the flux-coupling type SFCL in a single-line-to ground fault

    NASA Astrophysics Data System (ADS)

    Jung, B. I.; Cho, Y. S.; Choi, H. S.; Ha, K. H.; Choi, S. G.; Chul, D. C.; Sung, T. H.

    2011-11-01

    The recloser used in distribution systems is a relay system that operates sequentially to protect power systems from transient and continuous faults. This reclosing operation improves the reliability and stability of the power supply. For cooperation with the recloser, the superconducting fault current limiter (SFCL) must properly perform the reclosing operation. This paper analyzed the reclosing operation characteristics of the three-phase flux-coupling type SFCL in the event of a ground fault. The fault current limiting characteristics for varying numbers of turns of the primary and secondary coils were examined. As the number of turns of the primary coil increased, the first maximum fault current decreased, and the voltage of the quenched superconducting element also decreased; that is, the power burden of the superconducting element decreases as the number of turns of the primary coil increases. In the reclosing test, the SFCL limited the fault current within 0.5 cycle (8 ms), which is shorter than the closing time of the recloser. In other words, the superconducting element returned to the superconducting state before the second fault and normally performed the fault current limiting operation. If the SFCL did not recover before the recloser reclosing time, the normal current flowing in the transmission line after the recovery of the SFCL from the fault would be limited, causing losses. Therefore, the fast recovery time of an SFCL is critical to its cooperation with the protection system.

  17. Experimental Measurements of Permeability Evolution along Faults during Progressive Slip

    NASA Astrophysics Data System (ADS)

    Strutz, M.; Mitchell, T. M.; Renner, J.

    2010-12-01

    Little is currently known about the dynamic changes in fault-parallel permeability along rough faults during progressive slip. With increasing slip, asperities are worn down to produce gouge, which can dramatically reduce along-fault permeability within the slip zone. However, faults can have a range of roughness, which affects both the porosity and the amount and distribution of fault wear material produced in the slipping zone during the early stages of fault evolution. In this novel study we experimentally investigate the evolution of permeability along a fault plane in granite sawcut sliding blocks with a variety of initial roughnesses in a triaxial apparatus. Drillholes in the samples allow the permeability to be measured along the fault plane during loading and subsequent fault displacement. Use of the pore pressure oscillation (PPO) technique allows the continuous measurement of permeability without having to stop loading. To achieve a range of initial starting roughnesses, the sawcut fault surfaces were prepared using a variety of corundum powders ranging from 10 µm to 220 µm, and for coarser roughness were air-blasted with glass beads up to 800 µm in size. Fault roughness was quantified with a laser profilometer. During sliding, we measure the acoustic emissions in order to detect grain cracking and asperity shearing, which may relate to both the mechanical and permeability data. Permeability shows reductions of over 4 orders of magnitude during stable sliding as asperities are sheared to produce a fine fault gouge. This variation in permeability is greatest for the roughest faults, decreasing as fault roughness decreases. The onset of permeability reduction is contemporaneous with a dramatic reduction in the number of detected acoustic emissions, where a continuous layer of fault gouge has developed.
The amount of fault gouge produced is related to the initial roughness, with the rough faults showing larger fault gouge layers at the end of slip. Following large stress drops and stick slip events, permeability can both increase and decrease due to dynamic changes in pore pressure during fast sliding events. We present a summary of preliminary data to date, and discuss some of the problems and unknowns when using the PPO method to measure permeability.

  18. Fault Analysis in Solar Photovoltaic Arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown, at times, to prevent the fault current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. The other is a fault evolution in a PV array during night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" and "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.

  19. Comparative study of superconducting fault current limiter both for LCC-HVDC and VSC-HVDC systems

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Geon; Khan, Umer Amir; Lim, Sung-Woo; Shin, Woo-ju; Seo, In-Jin; Lee, Bang-Wook

    2015-11-01

    The High Voltage Direct Current (HVDC) system is regarded as the optimum solution for renewable energy transmission and long-distance power grid connections. In spite of its various advantages, the HVDC system is still considered less reliable than AC systems because of its vulnerability to power system faults; moreover, unlike in AC systems, optimal protection and switching devices have not yet been fully developed. Therefore, in order to enhance the reliability of HVDC systems, reliable fault current limiting and switching devices that mitigate power system faults should be developed. In this paper, to mitigate HVDC faults in both Line Commutated Converter HVDC (LCC-HVDC) and Voltage Source Converter HVDC (VSC-HVDC) systems, the application of a resistive superconducting fault current limiter, known as an effective means of coping with power system faults, is considered. First, point-to-point simulation models of LCC-HVDC and VSC-HVDC systems were developed, and the fault current characteristics under faulty conditions were analyzed. Second, the application of an SFCL to each type of HVDC system was studied and the modified fault current characteristics were compared. It was concluded that applying an AC-SFCL to a point-to-point LCC-HVDC system is a desirable solution for mitigating fault current stresses and preventing commutation failure in an HVDC power system interconnected with an AC grid.

  20. Reducing a Knowledge-Base Search Space When Data Are Missing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that without heuristics generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
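
    The search-space arithmetic and one simple pruning heuristic can be sketched as follows; the rule names, confidences, and threshold are hypothetical, and real scenario generation uses model-based reasoning rather than a bare confidence product:

```python
from itertools import combinations

# Instead of enumerating all 1 + 2**n scenarios over n rules, keep only
# scenarios whose combined confidence clears a threshold.
rules = {"r1": 0.9, "r2": 0.8, "r3": 0.3, "r4": 0.2}

def scenarios(rules, min_conf=0.2):
    out = [((), 1.0)]                    # the "no rule fires" scenario
    names = sorted(rules)
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            conf = 1.0
            for r in combo:
                conf *= rules[r]
            if conf >= min_conf:         # prune low-confidence branches
                out.append((combo, conf))
    return out

full = 1 + 2 ** len(rules)               # unpruned search-space size
pruned = scenarios(rules)
print(full, "scenarios reduced to", len(pruned))
```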

  1. 30 CFR 75.814 - Electrical protection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... protection must not be dependent upon control power and may consist of a current transformer and overcurrent... restarting of the equipment. (b) Current transformers used for the ground-fault protection specified in... series with ground-fault current transformers. (c) Each ground-fault current device specified in...

  2. 30 CFR 75.814 - Electrical protection.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... protection must not be dependent upon control power and may consist of a current transformer and overcurrent... restarting of the equipment. (b) Current transformers used for the ground-fault protection specified in... series with ground-fault current transformers. (c) Each ground-fault current device specified in...

  3. 30 CFR 75.814 - Electrical protection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... protection must not be dependent upon control power and may consist of a current transformer and overcurrent... restarting of the equipment. (b) Current transformers used for the ground-fault protection specified in... series with ground-fault current transformers. (c) Each ground-fault current device specified in...

  4. 30 CFR 75.814 - Electrical protection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... protection must not be dependent upon control power and may consist of a current transformer and overcurrent... restarting of the equipment. (b) Current transformers used for the ground-fault protection specified in... series with ground-fault current transformers. (c) Each ground-fault current device specified in...

  5. 30 CFR 75.814 - Electrical protection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... protection must not be dependent upon control power and may consist of a current transformer and overcurrent... restarting of the equipment. (b) Current transformers used for the ground-fault protection specified in... series with ground-fault current transformers. (c) Each ground-fault current device specified in...

  6. Feasibility analysis of a novel hybrid-type superconducting circuit breaker in multi-terminal HVDC networks

    NASA Astrophysics Data System (ADS)

    Khan, Umer Amir; Lee, Jong-Geon; Seo, In-Jin; Amir, Faisal; Lee, Bang-Wook

    2015-11-01

    Voltage source converter-based HVDC systems (VSC-HVDC) are a better alternative to conventional thyristor-based HVDC systems, especially for developing multi-terminal HVDC systems (MTDC). However, one of the key obstacles in developing MTDC is the absence of an adequate protection system that can quickly detect faults, locate the faulty line and trip the HVDC circuit breakers (DCCBs) to interrupt the DC fault current. In this paper, a novel hybrid-type superconducting circuit breaker (SDCCB) is proposed and feasibility analyses of its application in MTDC are presented. The SDCCB has a superconducting fault current limiter (SFCL) located in the main current path to limit fault currents until the final trip signal is received. After the trip signal the IGBT located in the main line commutates the current into a parallel line where the DC current is forced to zero by the combination of IGBTs and surge arresters. Fault simulations for three-, four- and five-terminal MTDC were performed and SDCCB performance was evaluated in these MTDC. Passive current limitation by the SFCL caused a significant reduction of fault current interruption stress in the SDCCB. It was observed that the DC current could change direction in MTDC after a fault, and the SDCCB was modified to break the DC current in both the forward and reverse directions. The simulation results suggest that the proposed SDCCB could successfully suppress the DC fault current, cause a timely interruption, and isolate the faulty HVDC line in MTDC.

  7. Fault-tolerant, high-level quantum circuits: form, compilation and description

    NASA Astrophysics Data System (ADS)

    Paler, Alexandru; Polian, Ilia; Nemoto, Kae; Devitt, Simon J.

    2017-06-01

    Fault-tolerant quantum error correction is a necessity for any quantum architecture destined to tackle interesting, large-scale problems. Its theoretical formalism has been well founded for nearly two decades. However, we still do not have an appropriate compiler to produce a fault-tolerant, error-corrected description from a higher-level quantum circuit for state-of-the-art hardware models. There are many technical hurdles, including dynamic circuit constructions that occur when constructing fault-tolerant circuits with commonly used error correcting codes. We introduce a package that converts high-level quantum circuits consisting of commonly used gates into a form employing all decompositions and ancillary protocols needed for fault-tolerant error correction. We call this the (I)nitialisation, (C)NOT, (M)easurement (ICM) form; it consists of an initialisation layer of qubits into one of four distinct states, a massive, deterministic array of CNOT operations, and a series of time-ordered X- or Z-basis measurements. The form allows a more flexible approach towards circuit optimisation. At the same time, the package outputs a standard circuit or a canonical geometric description, which is a necessity for operating current state-of-the-art hardware architectures using topological quantum codes.

  8. Evaluation of passenger health risk assessment of sustainable indoor air quality monitoring in metro systems based on a non-Gaussian dynamic sensor validation method.

    PubMed

    Kim, MinJeong; Liu, Hongbin; Kim, Jeong Tai; Yoo, ChangKyoo

    2014-08-15

    Sensor faults in metro systems provide incorrect information to indoor air quality (IAQ) ventilation systems, resulting in the mis-operation of ventilation systems and adverse effects on passenger health. In this study, a new sensor validation method is proposed to (1) detect, identify and repair sensor faults and (2) evaluate the influence of sensor reliability on passenger health risk. To address the dynamic non-Gaussianity problem of IAQ data, dynamic independent component analysis (DICA) is used. To detect and identify sensor faults, the DICA-based squared prediction error and sensor validity index are used, respectively. To restore the faults to normal measurements, a DICA-based iterative reconstruction algorithm is proposed. The comprehensive indoor air-quality index (CIAI) that evaluates the influence of the current IAQ on passenger health is then compared using the faulty and reconstructed IAQ data sets. Experimental results from a metro station showed that the DICA-based method can produce an improved IAQ level in the metro station and reduce passenger health risk, since it validates sensor faults more accurately than conventional methods do.
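
    As an illustrative sketch of the monitoring scheme described above, the fragment below lag-augments sensor data and uses FastICA as a stand-in for the paper's DICA implementation, with the squared prediction error (SPE) as the detection statistic. The dimensions, lag depth, fault magnitude and empirical control limit are assumptions for the demo, not values from the study.

```python
import numpy as np
from sklearn.decomposition import FastICA

def lag_matrix(X, lags=1):
    """Augment each sample with its previous `lags` samples (dynamic extension)."""
    return np.hstack([X[lags - i : len(X) - i] for i in range(lags + 1)])

def spe(ica, Xd):
    """Squared prediction error: the part of Xd not explained by the ICA model."""
    Xhat = ica.inverse_transform(ica.transform(Xd))
    return ((Xd - Xhat) ** 2).sum(axis=1)

# fit the monitor on fault-free data
rng = np.random.default_rng(0)
S_lat = rng.normal(size=(600, 2))                   # latent sources
M = rng.normal(size=(3, 2))                         # 3 sensors driven by 2 sources
X = S_lat @ M.T + 0.01 * rng.normal(size=(600, 3))
Xd = lag_matrix(X, lags=1)
ica = FastICA(n_components=4, random_state=0, max_iter=1000).fit(Xd)
limit = np.percentile(spe(ica, Xd), 99)             # simple empirical control limit

# a biased (stuck) sensor pushes SPE above the control limit
Xf = X.copy()
Xf[300:, 0] += 5.0                                  # bias fault on sensor 0
alarm = spe(ica, lag_matrix(Xf, lags=1)[305:]) > limit
```

    Identification of which sensor is faulty (the sensor validity index) and the iterative reconstruction step are omitted here.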

  9. Support vector machines-based fault diagnosis for turbo-pump rotor

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng-Fa; Chu, Fu-Lei

    2006-05-01

    Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and generalise poorly when fault samples are few. The support vector machine (SVM) is a general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is originally designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification scheme named the 'one to others' algorithm is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, requires little repeated training, and speeds up both training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
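
    The 'one to others' cascade described above can be sketched as a chain of binary classifiers ordered by fault priority. The class layout, kernel settings and class names below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

class OneToOthersSVM:
    """Binary-tree multi-class SVM: each stage separates one class from the rest."""

    def __init__(self, priority):          # classes ordered by fault priority
        self.priority = priority
        self.models = {}

    def fit(self, X, y):
        remaining = np.ones(len(y), dtype=bool)
        for cls in self.priority[:-1]:     # the last class needs no classifier
            m = SVC(kernel="rbf", gamma=0.5)
            m.fit(X[remaining], (y[remaining] == cls).astype(int))
            self.models[cls] = m
            remaining &= (y != cls)        # later stages never see this class
        return self

    def predict(self, X):
        out = np.full(len(X), self.priority[-1])
        undecided = np.ones(len(X), dtype=bool)
        for cls in self.priority[:-1]:
            hit = undecided & (self.models[cls].predict(X) == 1)
            out[hit] = cls
            undecided &= ~hit
        return out
```

    Each stage only trains on the classes that earlier, higher-priority stages did not claim, which is what keeps the repeated training amount small.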

  10. A method for joint routing, wavelength dimensioning and fault tolerance for any set of simultaneous failures on dynamic WDM optical networks

    NASA Astrophysics Data System (ADS)

    Jara, Nicolás; Vallejos, Reinaldo; Rubino, Gerardo

    2017-11-01

    The design of optical networks decomposes into different tasks, in which engineers must organize the way the main system resources are used, minimizing design and operation costs while respecting critical performance constraints. More specifically, network operators face the challenge of solving the routing and wavelength dimensioning problems while aiming to simultaneously minimize the network cost and ensure that network performance meets the level established in the Service Level Agreement (SLA). We call this the Routing and Wavelength Dimensioning (R&WD) problem. Another important problem is how to deal with link failures while the network is operating. When at least one link fails, a high rate of data loss may occur. To avoid it, the network must be designed in such a manner that, upon one or multiple failures, the affected connections can still communicate using alternative routes, a mechanism known as Fault Tolerance (FT). When the mechanism can deal with an arbitrary number of faults, we speak of Multiple Fault Tolerance (MFT). The tasks mentioned above are usually solved separately, or in some cases in pairs, leading to solutions that are not necessarily close to optimal. This paper proposes a novel method to solve all of them simultaneously, that is, the Routing, Wavelength Dimensioning, and Multiple Fault Tolerance problems. The method yields: a) all the primary routes by which each connection normally transmits its information; b) the additional routes, called secondary routes, used to keep each user connected when one or more simultaneous failures occur; and c) the number of wavelengths available at each link of the network, calculated such that the blocking probability of each connection is lower than a pre-determined threshold (a network design parameter), despite the occurrence of simultaneous link failures. The solution obtained by the new algorithm is significantly more efficient than current methods, its implementation is notably simple, and its on-line operation is very fast. Different examples in the paper illustrate the results provided by the proposed technique.
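
    The wavelength-dimensioning constraint, keeping blocking probability below a pre-determined threshold, can be illustrated with a textbook Erlang-B calculation. This is a deliberate simplification: the paper's method dimensions wavelengths jointly with routing and multiple fault tolerance, which plain Erlang-B does not capture.

```python
def erlang_b(traffic, servers):
    """Erlang-B blocking probability, computed with the stable recursion."""
    b = 1.0
    for m in range(1, servers + 1):
        b = traffic * b / (m + traffic * b)
    return b

def wavelengths_needed(traffic, max_blocking):
    """Smallest wavelength count on a link keeping blocking under the threshold."""
    w = 1
    while erlang_b(traffic, w) > max_blocking:
        w += 1
    return w
```

    For example, a link carrying 2 Erlangs of offered traffic needs 7 wavelengths to keep blocking below 1%.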

  11. Self-triggering superconducting fault current limiter

    DOEpatents

    Yuan, Xing [Albany, NY; Tekletsadik, Kasegn [Rexford, NY

    2008-10-21

    A modular and scalable Matrix Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. The matrix fault current limiter comprises a fault current limiter module that includes a superconductor electrically coupled in parallel with a trigger coil, wherein the trigger coil is magnetically coupled to the superconductor. The current surge during a fault within the electrical power network causes the superconductor to transition to its resistive state, generates a uniform magnetic field in the trigger coil, and simultaneously limits the voltage developed across the superconductor. This results in fast and uniform quenching of the superconductors and significantly reduces the burnout risk associated with the non-uniformity often existing within the volume of superconductor materials. The fault current limiter modules may be electrically coupled together to form various "n" (rows) × "m" (columns) matrix configurations.

  12. Improvement in operational characteristics of KEPCO’s line-commutation-type superconducting hybrid fault current limiter

    NASA Astrophysics Data System (ADS)

    Yim, S.-W.; Park, B.-C.; Jeong, Y.-T.; Kim, Y.-J.; Yang, S.-E.; Kim, W.-S.; Kim, H.-R.; Du, H.-I.

    2013-01-01

    A 22.9 kV class hybrid fault current limiter (FCL) developed by Korea Electric Power Corporation and LS Industrial Systems in 2006 operates using the line commutation mechanism and begins to limit the fault current after the first half-cycle. The first peak of the fault current is available for protective coordination in the power system. However, it also produces a large electromagnetic force and imposes a huge stress on power facilities such as the main transformer and gas-insulated switchgear. In this study, we improved the operational characteristics of the hybrid FCL in order to reduce the first peak of the fault current. While maintaining the structure of the hybrid FCL system, we developed a superconducting module that detects and limits the fault current during the first half-cycle. To maintain the protective coordination capacity, the hybrid FCL was designed to reduce the first peak value of the fault current by up to approximately 30%. The superconducting module was also designed to produce a minimum AC loss, generating a small, uniform magnetic field distribution during normal operation. Performance tests confirmed that when applied to the hybrid FCL, the superconducting module showed successful current limiting operation without any damage.

  13. A novel concept of fault current limiter based on saturable core in high voltage DC transmission system

    NASA Astrophysics Data System (ADS)

    Yuan, Jiaxin; Zhou, Hang; Gan, Pengcheng; Zhong, Yongheng; Gao, Yanhui; Muramatsu, Kazuhiro; Du, Zhiye; Chen, Baichao

    2018-05-01

    To develop a mechanical circuit breaker for high voltage direct current (HVDC) systems, a fault current limiter is required. Traditional methods to limit DC fault current use superconducting technology or power electronic devices, which are difficult to bring to practical use under high-voltage circumstances. In this paper, a novel concept of a saturable-core fault current limiter for high voltage DC transmission systems (DCSFCL) is proposed. In the DCSFCL, permanent magnets (PMs) are added on both the upper and lower sides of the core to generate a reverse magnetic flux that offsets the flux generated by the DC current, making the DC winding present a variable inductance to the DC system. In the normal state, the DCSFCL works as a smoothing reactor and its inductance is within the scope of the design requirements. When a fault occurs, the inductance of the DCSFCL rises immediately and limits the steepness of the fault current. Magnetic field simulations were carried out, showing that, compared with a conventional smoothing reactor, the DCSFCL can decrease the steepness of the DC fault current by 17% in less than 10 ms, which verifies the feasibility and effectiveness of this method.

  14. System for detecting and limiting electrical ground faults within electrical devices

    DOEpatents

    Gaubatz, Donald C.

    1990-01-01

    An electrical ground fault detection and limitation system for use with a nuclear reactor utilizing a liquid metal coolant. Elongate electromagnetic pumps submerged within the liquid metal coolant, together with electrical support equipment experiencing an insulation breakdown, occasion the development of electrical ground fault current. Without some form of detection and control, these currents may build to damaging power levels and expose the pump drive components to liquid metal coolant such as sodium, with resultant undesirable secondary effects. Such electrical ground fault currents are detected and controlled through the employment of an isolated power input to the pumps and a ground fault control conductor providing a direct return path from the affected components to the power source. By incorporating a resistance arrangement with the ground fault control conductor, the amount of fault current permitted to flow may be regulated to the extent that the reactor may remain in operation until maintenance may be performed, notwithstanding the existence of the fault. Monitors such as synchronous demodulators may be employed to identify and evaluate fault currents for each phase of the polyphase power and control input to the submerged pump and associated support equipment.

  15. System and method of detecting cavitation in pumps

    DOEpatents

    Lu, Bin; Sharma, Santosh Kumar; Yan, Ting; Dimino, Steven A.

    2017-10-03

    A system and method for detecting cavitation in pumps for fixed and variable supply frequency applications is disclosed. The system includes a controller having a processor programmed to repeatedly receive real-time operating current data from a motor driving a pump, generate a current frequency spectrum from the current data, and analyze current data within a pair of signature frequency bands of the current frequency spectrum. The processor is further programmed to repeatedly determine fault signatures as a function of the current data within the pair of signature frequency bands, repeatedly determine fault indices based on the fault signatures and a dynamic reference signature, compare the fault indices to a reference index, and identify a cavitation condition in a pump based on a comparison between the reference index and a current fault index.
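
    The spectral part of this scheme can be sketched as follows. The signature bands and the normalisation by the supply-frequency energy below are invented for the demonstration and are not taken from the patent.

```python
import numpy as np

def band_energy(current, fs, band):
    """Spectral energy of the sampled motor current inside [lo, hi] Hz."""
    spec = np.abs(np.fft.rfft(current)) ** 2
    freqs = np.fft.rfftfreq(len(current), 1.0 / fs)
    lo, hi = band
    return spec[(freqs >= lo) & (freqs <= hi)].sum()

def fault_index(current, fs, signature_bands, supply_band=(45.0, 55.0)):
    """Energy in the signature bands, normalised by the supply-frequency energy."""
    sig = sum(band_energy(current, fs, b) for b in signature_bands)
    return sig / band_energy(current, fs, supply_band)
```

    A cavitation condition would then be flagged when this index exceeds a reference index established under healthy operation; the dynamic reference tracking of the patent is not reproduced here.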

  16. Comparative investigation of vibration and current monitoring for prediction of mechanical and electrical faults in induction motor based on multiclass-support vector machine algorithms

    NASA Astrophysics Data System (ADS)

    Gangsar, Purushottam; Tiwari, Rajiv

    2017-09-01

    This paper presents an investigation of vibration and current monitoring for effective fault prediction in induction motors (IMs) using multiclass support vector machine (MSVM) algorithms. Failures of an IM may occur due to propagation of a mechanical or electrical fault. Hence, for timely detection of these faults, vibration as well as current signals were acquired from an experimental test rig over multiple experiments at varying speeds and external torques. A total of ten fault conditions frequently encountered in IMs (four mechanical faults, five electrical faults and one no-defect condition) were considered. For the stator winding fault and the phase unbalance and single-phasing faults, different levels of severity were also considered for the prediction. In this study, mechanical and electrical faults were identified individually and collectively. Fault predictions were performed using the vibration signal alone, the current signal alone, and the vibration and current signals concurrently. The one-versus-one MSVM was trained at various operating conditions of the IM using the radial basis function (RBF) kernel and tested for the same conditions, giving results in the form of percentage fault prediction. The prediction performance was investigated over a wide range of the RBF kernel parameter gamma, and the best result was selected at one optimal value of gamma for each case. Fault predictions were performed and investigated over a wide range of operational speeds of the IM as well as external torques on the IM.

  17. Fault current limiter with shield and adjacent cores

    DOEpatents

    Darmann, Francis Anthony; Moriconi, Franco; Hodge, Eoin Patrick

    2013-10-22

    In a fault current limiter (FCL) of a saturated core type having at least one coil wound around a high permeability material, a method of suppressing the time derivative of the fault current at the zero current point includes the following step: utilizing an electromagnetic screen or shield around the AC coil to suppress the time derivative current levels during zero current conditions.

  18. Technical know-how relevant to planning of borehole investigation for fault characterization

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; Takeuchi, R.; Tsuruta, T.; Matsuoka, T.; Kunimaru, T.; Saegusa, H.

    2011-12-01

    As part of the national R&D program for geological disposal of high-level radioactive waste (HLW), a broad scientific study of the deep geological environment, JAEA has established the Mizunami Underground Research Laboratory (MIU) in Central Japan as a generic underground research laboratory (URL) facility. The MIU Project focuses on crystalline rocks. In fractured rock, a fault is one of the major discontinuity structures controlling groundwater flow conditions. It is important to estimate the geological, hydrogeological, hydrochemical and rock mechanical characteristics of faults, and then to evaluate their role in the engineering design of the repository and in the assessment of the long-term safety of HLW disposal. Therefore, investigations for fault characterization have been performed in the MIU Project to estimate these characteristics and to evaluate existing conceptual and/or numerical models of the geological environment. Investigations of faults have been conducted based on the conventional concept that a fault consists of a "fault core (FC)", characterized by the distribution of faulted rocks, and a "fractured zone (FZ)" along the FC. As investigations progressed, it became clear that there are also cases in which an "altered zone (AZ)", characterized by alteration of host rocks to clay minerals, develops around the FC. The intensity of alteration in the AZ generally decreases with distance from the FC, and the AZ transitions into the FZ. Therefore, an investigation program focusing on the properties of the AZ is required for revising the existing conceptual and/or numerical models of the geological environment. 
    In this study, procedures for planning fault characterizations have been summarized based on the technical know-how learnt through the MIU Project for the development of the Knowledge Management System performed by JAEA under a contract with the Ministry of Economy, Trade and Industry, as part of its R&D supporting program for developing geological disposal technology in 2010. Taking into account the experience from fault characterization in the MIU Project, an optimization procedure for the investigation program is summarized as follows: 1) definition of the investigation aim, 2) confirmation of the current understanding of the geological environment, 3) specification and prioritization of the data to be obtained, 4) selection of the methodology for obtaining the data, 5) specification of the sequence of the investigations, and 6) establishment of a drilling and casing program including optional cases and taking into account potential problems. Several geological conceptual models with uncertainty of geological structures were illustrated to define the investigation aim and to confirm the current uncertainties. These models were also used to establish optional cases by predicting the type and location of potential problems. The procedures and case study related to the establishment of the investigation program are summarized in this study and will be applicable to site characterization work conducted by the implementing body (NUMO) in future candidate areas.

  19. A Kalman Filter Based Technique for Stator Turn-Fault Detection of the Induction Motors

    NASA Astrophysics Data System (ADS)

    Ghanbari, Teymoor; Samet, Haidar

    2017-11-01

    Monitoring of induction motors (IMs) through the stator current for diagnosis of different faults has considerable economic and technical advantages over the other techniques in this context. Among the different faults of an IM, stator and bearing faults are the most probable types, and they can be detected by analyzing signatures in the stator currents. One of the most reliable indicators for fault detection in IMs is the lower sidebands of the power frequency in the stator currents. This paper deals with a novel, simple technique for detecting stator turn faults in IMs. The frequencies of the lower sidebands are determined using the motor specifications, and their amplitudes are estimated by a Kalman Filter (KF). The Instantaneous Total Harmonic Distortion (ITHD) of these harmonics is then calculated. Since the variation of the ITHD of the three-phase currents is considerable in the case of a stator turn fault, the fault can be confidently detected using this criterion. Different simulation results verify the high performance of the proposed method. Its performance is also confirmed by experiments.
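
    A minimal sketch of the amplitude-estimation step, assuming a linear Kalman filter that tracks the in-phase and quadrature components of one known sideband frequency in the sampled stator current; the paper's filter design and noise tuning may differ.

```python
import numpy as np

def kf_amplitude(signal, f, fs, q=1e-6, r=1e-2):
    """Estimate the amplitude of the component of `signal` at frequency f (Hz)."""
    x = np.zeros(2)                     # state: [in-phase, quadrature] amplitudes
    P = np.eye(2)
    t = np.arange(len(signal)) / fs
    for k, y in enumerate(signal):
        H = np.array([np.cos(2 * np.pi * f * t[k]),
                      np.sin(2 * np.pi * f * t[k])])
        P = P + q * np.eye(2)           # random-walk state prediction
        S = H @ P @ H + r               # innovation variance
        K = P @ H / S                   # Kalman gain
        x = x + K * (y - H @ x)         # measurement update
        P = P - np.outer(K, H @ P)
    return float(np.hypot(x[0], x[1]))  # amplitude estimate
```

    The estimated sideband amplitudes would then feed the ITHD criterion; that aggregation step is omitted here.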

  20. PV Systems Reliability Final Technical Report: Ground Fault Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lavrova, Olga; Flicker, Jack David; Johnson, Jay

    We have examined ground faults in photovoltaic (PV) arrays and the efficacy of fuses, residual current detection (RCD), current sense monitoring/relays (CSM), isolation/insulation (Riso) monitoring, and Ground Fault Detection and Isolation (GFID), using simulations based on a SPICE (Simulation Program with Integrated Circuit Emphasis) ground-fault circuit model, experimental ground faults installed on real arrays, and theoretical equations.

  1. Current Sensor Fault Reconstruction for PMSM Drives

    PubMed Central

    Huang, Gang; Luo, Yi-Ping; Zhang, Chang-Fan; He, Jing; Huang, Yi-Shan

    2016-01-01

    This paper deals with a current sensor fault reconstruction algorithm for the torque closed-loop drive system of an interior PMSM. First, sensor faults are converted into equivalent actuator faults via a newly introduced state variable. Then, in αβ coordinates, based on the motor model with active flux linkage, a current observer is constructed with a specific sliding-mode equivalent control methodology to eliminate the effects of unknown disturbances, and the phase current sensor faults are reconstructed by means of an adaptive method. Finally, an αβ-axis current fault processing module is designed based on the reconstructed values. The feasibility and effectiveness of the proposed method are verified by simulation and experimental tests on the RT-LAB platform. PMID:26840317

  2. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexity. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads in power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to the newer architectures and software ecosystems expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience design patterns. A design pattern is a general, repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. 
    We define a design framework that enhances our understanding of the important constraints and opportunities for solutions deployed at various layers of the system stack. The framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also enables optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.

  3. Design and optimization of LCL-VSC grid-tied converter having short circuit fault current limiting ability

    NASA Astrophysics Data System (ADS)

    Liu, Mengqi; Liu, Haijun; Wang, Zhikai

    2017-01-01

    Traditional LCL grid-tied converters cannot limit the short-circuit fault current; the grid-connected converter can only be removed using the breaker. However, VSC converters become uncontrollable once the short-circuit fault is cut off, and the power switches may be damaged if the circuit breaker opens slowly. In contrast with the purely filtering function of the LCL passive components in traditional VSC converters, the novel LCL-VSC converter can limit the short-circuit fault current using reasonably designed LCL parameters. In this paper, the mathematical model of the LCL converter is established and the characteristics of the short-circuit fault current generated on the AC side and the DC side are analyzed. A design and optimization scheme for the LCL passive parameters is then proposed so that the LCL-VSC converter has short-circuit fault-current-limiting ability. In addition to ensuring that the LCL passive components filter the high-frequency harmonics, the scheme also considers the impedance characteristics needed to limit the fault currents of AC and DC short-circuit faults flowing through the power switches to no more than the maximum allowable operating current, so that the LCL converter can keep working continuously. Finally, a 200 kW simulation system is set up to prove the validity and feasibility of the theoretical analysis using the proposed design and optimization scheme.

  4. Estimation of Faults in DC Electrical Power System

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott

    2009-01-01

    This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed using l1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach on an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
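
    The sparse-estimation idea can be illustrated on a toy linear model (not the paper's circuit model): measurements y = A·f where only a few entries of the fault vector f are nonzero, recovered by solving the l1-regularized least-squares problem with plain ISTA iterations.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def estimate_faults(A, y, lam=0.1, iters=1000):
    """Solve min_f 0.5*||y - A f||^2 + lam*||f||_1 via ISTA."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    f = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ f - y)
        f = soft_threshold(f - grad / L, lam / L)
    return f
```

    The l1 penalty drives most entries exactly to zero, so the nonzero entries of the estimate directly name the suspected faults.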

  5. Influence of basal-plane dislocation structures on expansion of single Shockley-type stacking faults in forward-current degradation of 4H-SiC p-i-n diodes

    NASA Astrophysics Data System (ADS)

    Hayashi, Shohei; Yamashita, Tamotsu; Senzaki, Junji; Miyazato, Masaki; Ryo, Mina; Miyajima, Masaaki; Kato, Tomohisa; Yonezawa, Yoshiyuki; Kojima, Kazutoshi; Okumura, Hajime

    2018-04-01

    The origin of expanded single Shockley-type stacking faults in forward-current degradation of 4H-SiC p-i-n diodes was investigated by a stress-current test. At stress-current densities lower than 25 A cm⁻², triangular stacking faults were formed from basal-plane dislocations in the epitaxial layer. At stress-current densities higher than 350 A cm⁻², both triangular and long-zone-shaped stacking faults were formed from basal-plane dislocations that had converted into threading edge dislocations near the interface between the epitaxial layer and the substrate. In addition, the conversion depth of the basal-plane dislocations that expanded into stacking faults lay inside the substrate, deeper than the interface. These results indicate that the conversion depth of basal-plane dislocations strongly affects the threshold stress-current density at which the expansion of stacking faults occurs.

  6. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor-data-to-fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor-data-to-fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Second, the neural network which performs sensor fusion is structured to detect new, unknown faults for which examples were not presented during training. These methods were tested on a fault diagnosis task employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  7. The role of knowledge structures in fault diagnosis

    NASA Technical Reports Server (NTRS)

    Smith, P. J.; Giffin, W. C.; Rockwell, T. H.; Thomas, M. E.

    1984-01-01

    The use of human memory and knowledge structures to direct fault diagnosis performance was investigated. The performance of 20 pilots with instrument flight ratings was studied in a fault diagnosis task. The pilots were read a scenario describing the flight conditions under which symptoms indicative of a problem were detected. They were asked to think out loud as they requested and interpreted various pieces of information to diagnose the cause of the problem. Only 11 of the 20 pilots successfully diagnosed the problem. Pilot performance on this fault diagnosis task was modeled as the use of domain-specific knowledge organized in a frame system. Eighteen frames with a common structure were necessary to account for the data from all twenty subjects.

  8. Displaced rocks, strong motion, and the mechanics of shallow faulting associated with the 1999 Hector Mine, California, earthquake

    USGS Publications Warehouse

    Michael, Andrew J.; Ross, Stephanie L.; Stenner, Heidi D.

    2002-01-01

    The paucity of strong-motion stations near the 1999 Hector Mine earthquake makes it impossible to carry out instrumental studies of key questions about near-fault strong-motion patterns associated with this event. However, observations of displaced rocks allow a qualitative investigation of these problems. From the slope of the desert surface and the frictional coefficient between the rocks and the desert surface, we estimate the minimum horizontal acceleration needed to displace the rocks. Combining this information with observations of how many rocks were displaced in different areas near the fault, we infer the level of shaking. Given current empirical shaking attenuation relationships, the number of rocks that moved is slightly lower than expected; this implies that slightly lower than expected shaking occurred during the Hector Mine earthquake. Perhaps more importantly, stretches of the fault with 4 m of total displacement at the surface displaced few nearby rocks on 15° slopes, suggesting that the horizontal accelerations were below 0.2g within meters of the fault scarp. This low level of shaking suggests that the shallow parts of this rupture did not produce strong accelerations. Finally, we did not observe an increased incidence of displaced rocks along the fault zone itself. This suggests that, despite observations of fault-zone-trapped waves generated by aftershocks of the Hector Mine earthquake, such waves were not an important factor in controlling peak ground acceleration during the mainshock.
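
    The rock-displacement threshold mentioned above can be reconstructed with a quasi-static sliding-block model. The friction coefficient used in the example is an assumed, illustrative value, not one reported by the study.

```python
import math

def min_sliding_accel_g(theta_deg, mu):
    """Minimum horizontal acceleration (in units of g) needed to slide a block
    downslope on a plane of angle theta with friction coefficient mu.
    Slip condition: g*sin(theta) + a*cos(theta) >= mu*(g*cos(theta) - a*sin(theta))."""
    th = math.radians(theta_deg)
    return (mu * math.cos(th) - math.sin(th)) / (math.cos(th) + mu * math.sin(th))
```

    With an assumed mu of about 0.5, a 15° slope gives roughly 0.2 g, the same order as the bound quoted for rocks near the scarp; on flat ground the formula reduces to the familiar a = mu*g.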

  9. Is the Multigrid Method Fault Tolerant? The Two-Grid Case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ainsworth, Mark; Glusa, Christian

    2016-06-30

    The predicted reduced resiliency of next-generation high performance computers means that it will become necessary to take into account the effects of randomly occurring faults on numerical methods. Further, in the event of a hard fault occurring, a decision has to be made as to what remedial action should be taken in order to resume the execution of the algorithm. The action that is chosen can have a dramatic effect on the performance and characteristics of the scheme. Ideally, the resulting algorithm should be subjected to the same kind of mathematical analysis that was applied to the original, deterministic variant. The purpose of this work is to provide an analysis of the behaviour of the multigrid algorithm in the presence of faults. Multigrid is arguably the method of choice for the solution of large-scale linear algebra problems arising from discretization of partial differential equations, and it is of considerable importance to anticipate its behaviour on an exascale machine. The analysis of resilience of algorithms is in its infancy, and the current work is perhaps the first to provide a mathematical model for faults and analyse the behaviour of a state-of-the-art algorithm under that model. It is shown that the Two-Grid Method fails to be resilient to faults. Attention is then turned to identifying the minimal remedial action required to restore the rate of convergence to that enjoyed by the ideal fault-free method.

  10. Nonlinear data-driven identification of polymer electrolyte membrane fuel cells for diagnostic purposes: A Volterra series approach

    NASA Astrophysics Data System (ADS)

    Ritzberger, D.; Jakubek, S.

    2017-09-01

    In this work, a data-driven identification method, based on polynomial nonlinear autoregressive models with exogenous inputs (NARX) and the Volterra series, is proposed to describe the dynamic and nonlinear voltage and current characteristics of polymer electrolyte membrane fuel cells (PEMFCs). The structure selection and parameter estimation of the NARX model are performed on broad-band voltage/current data. By transforming the time-domain NARX model into a Volterra series representation using the harmonic probing algorithm, a frequency-domain description of the linear and nonlinear dynamics is obtained. With the Volterra kernels corresponding to different operating conditions, information from existing frequency-domain diagnostic tools such as electrochemical impedance spectroscopy (EIS) and total harmonic distortion analysis (THDA) is effectively combined. Additionally, the time-domain NARX model can be utilized for fault detection by evaluating the difference between measured and simulated output. To increase fault detectability, an optimization problem is introduced which maximizes this output residual to obtain proper excitation frequencies. As a possible extension, it is shown that by optimizing the periodic signal shape itself, the fault detectability is further increased.
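The identification step can be illustrated with a toy polynomial NARX fit by least squares. The model structure and coefficients below are invented for illustration and are far simpler than a PEMFC model:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic system standing in for PEMFC voltage/current data; in the
# paper the regressor structure is itself selected from data:
#   y[k] = 0.7*y[k-1] + 0.5*u[k-1] - 0.2*u[k-1]**2
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.7*y[k-1] + 0.5*u[k-1] - 0.2*u[k-1]**2

# Polynomial NARX regressor matrix and least-squares parameter estimate
Phi = np.column_stack([y[:-1], u[:-1], u[:-1]**2])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)   # recovers ~ [0.7, 0.5, -0.2]
```

Harmonic probing would then map the fitted coefficients to frequency-domain Volterra kernels; that step is omitted here.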

  11. Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.

    PubMed

    Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C

    2015-06-01

    An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and are aiming at more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, that is, ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The fault tree developed here is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers in developing more refined and adequate control activities against FMD outbreaks and in finding optimum strategies for rapid control. Further application of the current tree can serve as one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.
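The gate logic underlying a fault tree can be sketched in a few lines. The event names and probabilities below are hypothetical illustrations, not taken from the review:

```python
# Toy fault tree for a top event such as "FMD control ineffective":
# (late detection OR slow culling) AND inadequate movement ban.
# Basic events are assumed independent; names/probabilities are invented.

def gate_or(*ps):
    """P(at least one of several independent events) = 1 - prod(1 - p)."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def gate_and(*ps):
    """P(all of several independent events) = prod(p)."""
    out = 1.0
    for p in ps:
        out *= p
    return out

p_late_detection, p_slow_culling, p_bad_movement_ban = 0.10, 0.05, 0.02
p_top = gate_and(gate_or(p_late_detection, p_slow_culling),
                 p_bad_movement_ban)
print(p_top)
```

Tracing which basic-event combinations drive `p_top` upward corresponds to the cut-set analysis that identifies the weakest parts of a control system.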

  12. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which lead to a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in feature extraction of fault information, the theory also confronts inevitable performance degradation because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem, solved through the block coordinate descent (BCD) method, is formulated. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further ensure sufficient sparsity of feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated on vibration signals from an experimental rig of aircraft engine bearings.

  13. Robust fault diagnosis of physical systems in operation. Ph.D. Thesis - Rutgers - The State Univ.

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy Hamilton

    1991-01-01

    Ideas are presented and demonstrated for improved robustness in diagnostic problem solving of complex physical systems in operation, or operative diagnosis. The first idea is that graceful degradation can be viewed as reasoning at higher levels of abstraction whenever the more detailed levels prove to be incomplete or inadequate. A form of abstraction is defined that applies this view to the problem of diagnosis. In this form of abstraction, named status abstraction, two levels are defined. The lower level of abstraction corresponds to the level of detail at which most current knowledge-based diagnosis systems reason. At the higher level, a graph representation is presented that describes the real-world physical system. An incremental, constructive approach to manipulating this graph representation is demonstrated that supports certain characteristics of operative diagnosis. The suitability of this constructive approach is shown for diagnosing fault propagation behavior over time, and for sometimes diagnosing systems with feedback. A way is shown to represent different semantics in the same type of graph representation to characterize different types of fault propagation behavior. An approach is demonstrated that treats these different behaviors as different fault classes, and the approach moves to other classes when previous classes fail to generate suitable hypotheses. These ideas are implemented in a computer program named Draphys (Diagnostic Reasoning About Physical Systems) and demonstrated for the domain of in-flight aircraft subsystems, specifically a propulsion system (containing two turbofan systems and a fuel system) and a hydraulic subsystem.

  14. Linear complementarity formulation for 3D frictional sliding problems

    USGS Publications Warehouse

    Kaven, Joern; Hickman, Stephen H.; Davatzes, Nicholas C.; Mutlu, Ovunc

    2012-01-01

    Frictional sliding on quasi-statically deforming faults and fractures can be modeled efficiently using a linear complementarity formulation. We review the formulation in two dimensions and expand the formulation to three-dimensional problems including problems of orthotropic friction. This formulation accurately reproduces analytical solutions to static Coulomb friction sliding problems. The formulation accounts for opening displacements that can occur near regions of non-planarity even under large confining pressures. Such problems are difficult to solve owing to the coupling of relative displacements and tractions; thus, many geomechanical problems tend to neglect these effects. Simple test cases highlight the importance of including friction and allowing for opening when solving quasi-static fault mechanics models. These results also underscore the importance of considering the effects of non-planarity in modeling processes associated with crustal faulting.
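The complementarity structure at the heart of this formulation can be illustrated with a minimal projected Gauss-Seidel solve of a generic linear complementarity problem (find z >= 0 with w = Mz + q >= 0 and z·w = 0). The 2x2 system below is an arbitrary illustration, not a fault model:

```python
import numpy as np

def pgs_lcp(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 such that
    w = M z + q >= 0 and z . w = 0.  A minimal sketch of the kind of
    complementarity solve used for frictional contact; the paper's
    formulation couples fault tractions and displacement discontinuities."""
    z = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            # residual excluding the diagonal term, then project onto z_i >= 0
            r = q[i] + M[i] @ z - M[i, i]*z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
q = np.array([-1.0, 1.0])
z = pgs_lcp(M, q)
w = M @ z + q
print(z, w)   # z >= 0, w >= 0, and z.w = 0 at the solution
```

The non-negativity constraint on one variable paired with the zero product is exactly how "no interpenetration, open or in contact" conditions are encoded in quasi-static fault sliding models.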

  15. The study of active tectonic based on hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Cui, J.; Zhang, S.; Zhang, J.; Shen, X.; Ding, R.; Xu, S.

    2017-12-01

    As one of the latest technical methods, hyperspectral remote sensing technology has been widely used in many branches of the geosciences. However, using hyperspectral remote sensing to study active structures remains largely unexplored. Hyperspectral remote sensing, with its high spectral resolution, continuous spectrum, continuous spatial coverage, low cost, etc., has great potential for stratum division and fault identification. Blind fault identification in plains and invisible fault discrimination in loess strata are two hot problems in current active fault research. Thus, the study of active faults based on hyperspectral technology has great theoretical significance and practical value. Magnetic susceptibility (MS) records can reflect the rhythmic alternation of the formation, and previous studies have shown that MS correlates with spectral features. In this study, the Emaokou section, located to the northwest of the town of Huairen in Shanxi Province, was chosen for invisible fault study. We collected data from the Emaokou section, including spectral data, hyperspectral imagery, and MS data. MS models based on spectral features were established and applied to the UHD185 imagery for MS mapping. The results showed that the MS map corresponded well to the loess sequences and can recognize strata that cannot be identified by the naked eye. An invisible fault was found in this section, which is useful for paleoearthquake analysis. Since faults act as conduits for the migration of terrestrial gases, fault zones, especially structurally weak zones such as intersections or bends of faults, may have a different material composition. We took the Xiadian fault for study: several samples across the fault were collected and measured with an ASD FieldSpec 3 spectrometer. A spectral classification method was used for spectral analysis, and we found that the spectra of the fault zone have four distinctive spectral regions (550-580 nm, 600-700 nm, 700-800 nm and 800-900 nm) that differ from the spectra of the non-fault zone. This helps to locate the fault zone accurately, and the result corresponds well with geophysical prospecting results. These studies show that hyperspectral remote sensing technology provides a new method for active fault study.

  16. NO-FAULT COMPENSATION FOR MEDICAL INJURIES: TRENDS AND CHALLENGES.

    PubMed

    Kassim, Puteri Nemie

    2014-12-01

    As an alternative to the tort or fault-based system, a no-fault compensation system has been viewed as having the potential to overcome problems inherent in the tort system by providing fair, speedy and adequate compensation for medically injured victims. Proponents of the suggested no-fault compensation system have argued that this system is more efficient in terms of time and money, as well as in making the circumstances in which compensation is paid much clearer. However, the arguments against no-fault compensation systems are mainly on issues of funding difficulties, accountability and deterrence, particularly once fault is taken out of the equation. Nonetheless, the no-fault compensation system has been successfully implemented in various countries but, at the same time, rejected in some others as not being implementable. In the present trend, the no-fault system seems to fit the needs of society by offering greater access to justice for medically injured victims and providing a clearer "road map" towards obtaining suitable redress. This paper aims at providing the readers with an overview of the characteristics of the no-fault compensation system and some examples of countries that have implemented it. Qualitative Research-Content Analysis. Given the many problems and hurdles posed by the tort or fault-based system, it is questionable whether it can efficiently play its role as a mechanism that affords fair and adequate compensation for victims of medical injuries. However, while a comprehensive no-fault compensation system offers a tempting alternative to the tort or fault-based system, to import such a change into our local scenario requires a great deal of consideration. There are major differences, mainly in terms of social standing, size of population, political ideology and financial commitment, between Malaysia and countries that have successfully implemented no-fault systems. 
Nevertheless, implementing a no-fault compensation system in Malaysia is not entirely impossible. A custom-made no-fault model tailored to suit our local scenario can be promising, provided that a thorough research is made, assessing the viability of a no-fault system in Malaysia, addressing the inherent problems and, consequently, designing a workable no-fault system in Malaysia.

  17. Investigation of fault modes in permanent magnet synchronous machines for traction applications

    NASA Astrophysics Data System (ADS)

    Choi, Gilsu

    Over the past few decades, electric motor drives have been more widely adopted to power the transportation sector to reduce our dependence on foreign oil and carbon emissions. Permanent magnet synchronous machines (PMSMs) are popular in many applications in the aerospace and automotive industries that require high power density and high efficiency. However, the presence of magnets that cannot be turned off in the event of a fault has always been an issue that hinders adoption of PMSMs in these demanding applications. This work investigates the design and analysis of PMSMs for automotive traction applications with particular emphasis on fault-mode operation caused by faults appearing at the terminals of the machine. New models and analytical techniques are introduced for evaluating the steady-state and dynamic response of PMSM drives to various fault conditions. Attention is focused on modeling the PMSM drive including nonlinear magnetic behavior under several different fault conditions, evaluating the risks of irreversible demagnetization caused by the large fault currents, as well as developing fault mitigation techniques in terms of both the fault currents and demagnetization risks. Of the major classes of machine terminal faults that can occur in PMSMs, short-circuit (SC) faults produce much more dangerous fault currents than open-circuit faults. The impact of different PMSM topologies and parameters on their responses to symmetrical and asymmetrical short-circuit (SSC & ASC) faults has been investigated. A detailed investigation on both the SSC and ASC faults is presented including both closed-form and numerical analysis. The demagnetization characteristics caused by high fault-mode stator currents (i.e., armature reaction) for different types of PMSMs are investigated. A thorough analysis and comparison of the relative demagnetization vulnerability for different types of PMSMs is presented. 
This analysis includes design guidelines and recommendations for minimizing the demagnetization risks while examining corresponding trade-offs. Two PM machines have been tested to validate the predicted fault currents and braking torque as well as demagnetization risks in PMSM drives. The generality and scalability of key results have also been demonstrated by analyzing several PM machines with a variety of stator, rotor, and winding configurations for various power ratings.
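For a non-salient PMSM with stator resistance becoming negligible at speed, the steady-state symmetrical short-circuit current approaches the well-known characteristic current psi_m/L_d, which is why machine parameters dominate the fault-current and demagnetization risk trade-offs discussed above. The parameter values below are illustrative, not from the dissertation:

```python
import math

# Steady-state symmetrical short-circuit (SSC) current of a non-salient
# PMSM, from the dq steady-state equations with v_d = v_q = 0:
#   |i| = w * psi_m / sqrt(R_s^2 + (w * L_d)^2)  ->  psi_m / L_d at speed.
# All parameter values are assumed for illustration.
psi_m = 0.08      # PM flux linkage [Wb]
L_d = 0.4e-3      # d-axis inductance [H]
R_s = 0.01        # stator resistance [ohm]

def i_ssc(omega_e):
    """SSC current magnitude at electrical speed omega_e [rad/s]."""
    return omega_e*psi_m / math.hypot(R_s, omega_e*L_d)

print(i_ssc(2*math.pi*400), psi_m/L_d)   # converges toward psi_m/L_d
```

A machine with a small characteristic current is inherently safer under a terminal short circuit, at the cost of other performance trade-offs.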

  18. An algorithm for the simultaneous reconstruction of faults and slip fields

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.

  19. A Generalised Fault Protection Structure Proposed for Uni-grounded Low-Voltage AC Microgrids

    NASA Astrophysics Data System (ADS)

    Bui, Duong Minh; Chen, Shi-Lin; Lien, Keng-Yu; Jiang, Jheng-Lun

    2016-04-01

    This paper presents three main configurations of uni-grounded low-voltage AC microgrids. Transient situations of a uni-grounded low-voltage (LV) AC microgrid (MG) are simulated through various fault tests and operation transition tests between grid-connected and islanded modes. Based on the transient simulation results, available fault protection methods are proposed for main and back-up protection of a uni-grounded AC microgrid. In addition, the concept of a generalised fault protection structure for uni-grounded LVAC MGs is presented. The main contributions of the paper are: (i) definition of different uni-grounded LVAC MG configurations; (ii) analysis of the transient responses of a uni-grounded LVAC microgrid through line-to-line faults, line-to-ground faults, three-phase faults and a microgrid operation transition test; (iii) proposal of available fault protection methods for uni-grounded microgrids, such as non-directional or directional overcurrent protection, under/over-voltage protection, differential current protection, voltage-restrained overcurrent protection, and other fault protection principles not based on phase currents and voltages (e.g. total harmonic distortion detection of currents and voltages, or use of sequence components of current and voltage, 3I0 or 3V0 components); and (iv) development of a generalised fault protection structure with six individual protection zones suitable for different uni-grounded AC MG configurations.
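One of the listed principles, inverse-time overcurrent protection, can be sketched with the IEC 60255 standard-inverse characteristic; the pickup current and time-multiplier values below are illustrative assumptions:

```python
def iec_si_trip_time(i_fault, i_pickup, tms=0.1):
    """IEC 60255 standard-inverse overcurrent characteristic:
    t = TMS * 0.14 / ((I/I_pickup)**0.02 - 1).
    Returns the relay operating time in seconds; no trip below pickup."""
    ratio = i_fault / i_pickup
    if ratio <= 1.0:
        return float("inf")       # below pickup: relay does not operate
    return tms * 0.14 / (ratio**0.02 - 1.0)

# Higher fault current -> faster trip, which is how downstream and
# upstream relays are time-coordinated (values are illustrative).
print(iec_si_trip_time(2000, 400), iec_si_trip_time(4000, 400))
```

In a microgrid the challenge noted in the paper is that inverter-fed faults may barely exceed pickup, which motivates the additional voltage- and harmonic-based principles listed above.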

  20. Detection of stator winding faults in induction motors using three-phase current monitoring.

    PubMed

    Sharifi, Rasool; Ebrahimi, Mohammad

    2011-01-01

    The objective of this paper is to propose a new method for the detection of inter-turn short circuits in the stator windings of induction motors. In previously reported methods, supply voltage unbalance was the major difficulty; it was addressed mostly via sequence-component impedance or current, which are difficult to implement. Other methods are essentially offline. The proposed method is based on motor current signature analysis and utilizes the three phase current spectra to overcome this problem. Simulation results indicate that under healthy conditions the rotor slot harmonics have the same magnitude in all three phase currents, while under even a one-turn (0.3%) short-circuit condition they differ from each other. Although the magnitude of these harmonics depends on the level of voltage unbalance, they retain the same magnitude in all three phases under such conditions. Experiments performed under various load, fault, and supply voltage conditions validate the simulation results and demonstrate the effectiveness of the proposed technique. It is shown that the detection of slight resistive short circuits, without sensitivity to supply voltage unbalance, is possible. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
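The comparison of slot-harmonic magnitudes across phases can be sketched on synthetic currents. The frequencies, amplitudes, and the asymmetry injected into phase A are all invented for illustration:

```python
import numpy as np

fs, f0, f_sh = 5000, 50, 850    # sampling rate, fundamental, slot harmonic [Hz]
t = np.arange(0, 1.0, 1/fs)     # one second -> integer cycles, no leakage

def phase_current(shift, a_sh):
    """Fundamental plus a rotor-slot-harmonic component (synthetic)."""
    return (10*np.sin(2*np.pi*f0*t + shift)
            + a_sh*np.sin(2*np.pi*f_sh*t + shift))

# A faulted winding makes one phase's slot-harmonic amplitude deviate
shifts = [0.0, -2*np.pi/3, 2*np.pi/3]
currents = [phase_current(s, a) for s, a in zip(shifts, [0.25, 0.20, 0.20])]

def harmonic_mag(x, f):
    """Amplitude of the component at frequency f via one exact FFT bin."""
    X = np.fft.rfft(x) / (len(x)/2)
    return abs(X[int(round(f*len(x)/fs))])

mags = [harmonic_mag(x, f_sh) for x in currents]
spread = max(mags) - min(mags)
print(mags, spread)   # unequal magnitudes across phases flag the fault
```

The key robustness property claimed in the abstract is that voltage unbalance scales all three magnitudes together, so the spread between phases, not the absolute level, is the fault indicator.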

  1. Load flows and faults considering dc current injections

    NASA Technical Reports Server (NTRS)

    Kusic, G. L.; Beach, R. F.

    1991-01-01

    The authors present novel methods for incorporating current injection sources into dc power flow computations and determining network fault currents when electronic devices limit fault currents. Combinations of current and voltage sources into a single network are considered in a general formulation. An example of relay coordination is presented. The present study is pertinent to the development of the Space Station Freedom electrical generation, transmission, and distribution system.
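The combination of voltage sources, dc current injections, and electronically limited fault currents can be sketched with a small nodal-analysis example. The network values and the 30 A converter current limit are assumptions for illustration, not Space Station parameters:

```python
import numpy as np

# Three-node dc network: stiff source V_s behind R_s at node 0, line
# resistances R01 and R12, and a 3 A current-source injection at node 2.
V_s, R_s, R01, R12 = 120.0, 0.1, 0.5, 0.5
G = np.array([
    [1/R_s + 1/R01, -1/R01,         0.0   ],
    [-1/R01,        1/R01 + 1/R12, -1/R12 ],
    [0.0,          -1/R12,          1/R12 ],
])
I = np.array([V_s/R_s, 0.0, 3.0])   # Norton-equivalent source + injection
v = np.linalg.solve(G, I)           # nodal voltages, G v = I

# Bolted fault at node 2: Thevenin estimate of the prospective current,
# then clamped to an assumed electronic converter limit of 30 A.
Z = np.linalg.inv(G)
i_prospective = v[2] / Z[2, 2]
i_fault = min(i_prospective, 30.0)
print(v, i_prospective, i_fault)
```

The clamp on the last line is the essential feature of such systems: the prospective network fault current is far larger than what the current-limited electronics actually deliver, which changes how relays must be coordinated.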

  2. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    Specific topics briefly addressed include: the consistent comparison problem in N-version system; analytic models of comparison testing; fault tolerance through data diversity; and the relationship between failures caused by automatically seeded faults.

  3. Fault Tree in the Trenches, A Success Story

    NASA Technical Reports Server (NTRS)

    Long, R. Allen; Goodson, Amanda (Technical Monitor)

    2000-01-01

    Getting caught up in the explanation of Fault Tree Analysis (FTA) minutiae is easy. In fact, most FTA literature tends to address FTA concepts and methodology. Yet there seem to be few articles addressing actual design changes resulting from the successful application of fault tree analysis. This paper demonstrates how fault tree analysis was used to identify and solve a potentially catastrophic mechanical problem at a rocket motor manufacturer. While developing the fault tree given in this example, the analyst was told by several organizations that the piece of equipment in question had been evaluated by several committees and organizations, and that the analyst was wasting his time. The fault tree/cutset analysis resulted in a joint redesign of the control system by the tool engineering group and the fault tree analyst, as well as bragging rights for the analyst. (That the fault tree found problems where other engineering reviews had failed was not lost on the other engineering groups.) Even more interesting was that this was the analyst's first fault tree, which further demonstrates how effective fault tree analysis can be in guiding (i.e., forcing) the analyst to take a methodical approach in evaluating complex systems.

  4. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
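The notion of diagnosability from residual signatures can be sketched in a few lines. The residual set and fault signatures below are invented, and the masking rule is a deliberate simplification of the event-based framework described above:

```python
from itertools import combinations

# Toy signatures: which of three residuals (r1, r2, r3) each fault triggers.
# Decoupled residuals (the possible-conflicts idea) aim to make these
# rows distinct; values here are illustrative.
signatures = {
    "f1": (1, 0, 1),
    "f2": (1, 1, 0),
    "f3": (0, 1, 1),
}

def diagnosable(sigs):
    """Single-fault diagnosability: all signatures pairwise distinct."""
    return all(sigs[a] != sigs[b] for a, b in combinations(sigs, 2))

print(diagnosable(signatures))   # True for the rows above

# A crude model of multiple-fault masking: two simultaneous faults may
# present as the element-wise OR of their signatures, which need not
# match any single-fault row and can collide with other combinations.
double = tuple(x | y for x, y in zip(signatures["f1"], signatures["f2"]))
print(double, double in signatures.values())
```

Checking whether all such combined signatures remain pairwise distinct is a miniature version of the multiple-fault diagnosability analysis the paper formalises.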

  5. Method and system for early detection of incipient faults in electric motors

    DOEpatents

    Parlos, Alexander G; Kim, Kyusung

    2003-07-08

    A method and system for early detection of incipient faults in an electric motor are disclosed. First, current and voltage values for one or more phases of the electric motor are measured during motor operations. A set of current predictions is then determined via a neural network-based current predictor based on the measured voltage values and an estimate of motor speed values of the electric motor. Next, a set of residuals is generated by combining the set of current predictions with the measured current values. A set of fault indicators is subsequently computed from the set of residuals and the measured current values. Finally, a determination is made as to whether or not there is an incipient electrical, mechanical, and/or electromechanical fault occurring based on the comparison result of the set of fault indicators and a set of predetermined baseline values.
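The residual-based indicator step can be sketched on synthetic currents; a simple sinusoid stands in for the neural-network current predictor, and the fault component, noise level, and threshold are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 1.0, 1e-3)
i_pred = 5.0*np.sin(2*np.pi*60*t)   # stand-in for the neural-net prediction
i_healthy = i_pred + 0.02*rng.standard_normal(len(t))
i_faulty = (i_pred + 0.02*rng.standard_normal(len(t))
            + 0.3*np.sin(2*np.pi*120*t))   # invented incipient-fault component

def fault_indicator(i_meas, i_model):
    """Normalised RMS of the residual between measured and predicted current."""
    r = i_meas - i_model
    return np.sqrt(np.mean(r**2)) / np.sqrt(np.mean(i_model**2))

baseline = 0.01   # assumed predetermined healthy-operation threshold
for name, i_meas in [("healthy", i_healthy), ("faulty", i_faulty)]:
    fi = fault_indicator(i_meas, i_pred)
    print(name, round(fi, 4), "FAULT" if fi > baseline else "ok")
```

The patented method computes several such indicators and compares them against predetermined baselines; this sketch shows only the core residual-thresholding idea.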

  6. Multivariate fault isolation of batch processes via variable selection in partial least squares discriminant analysis.

    PubMed

    Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan

    2017-09-01

    In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aiming at the identification of the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address such a problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. As different from the traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  7. A single dynamic observer-based module for design of simultaneous fault detection, isolation and tracking control scheme

    NASA Astrophysics Data System (ADS)

    Davoodi, M.; Meskin, N.; Khorasani, K.

    2018-03-01

    The problem of simultaneous fault detection, isolation and tracking (SFDIT) control design for linear systems subject to both bounded energy and bounded peak disturbances is considered in this work. A dynamic observer is proposed and implemented by using the H∞/H-/L1 formulation of the SFDIT problem. A single dynamic observer module is designed that generates the residuals as well as the control signals. The objective of the SFDIT module is to ensure that simultaneously the effects of disturbances and control signals on the residual signals are minimised (in order to accomplish the fault detection goal) subject to the constraint that the transfer matrix from the faults to the residuals is equal to a pre-assigned diagonal transfer matrix (in order to accomplish the fault isolation goal), while the effects of disturbances, reference inputs and faults on the specified control outputs are minimised (in order to accomplish the fault-tolerant and tracking control goals). A set of linear matrix inequality (LMI) feasibility conditions are derived to ensure solvability of the problem. In order to illustrate and demonstrate the effectiveness of our proposed design methodology, the developed and proposed schemes are applied to an autonomous unmanned underwater vehicle (AUV).

  8. Fault tolerant filtering and fault detection for quantum systems driven by fields in single photon states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Qing, E-mail: qing.gao.chance@gmail.com; Dong, Daoyi, E-mail: daoyidong@gmail.com; Petersen, Ian R., E-mail: i.r.petersen@gmail.com

    The purpose of this paper is to solve the fault tolerant filtering and fault detection problem for a class of open quantum systems driven by a continuous-mode bosonic input field in single photon states when the systems are subject to stochastic faults. Optimal estimates of both the system observables and the fault process are simultaneously calculated and characterized by a set of coupled recursive quantum stochastic differential equations.

  9. A formally verified algorithm for interactive consistency under a hybrid fault model

    NASA Technical Reports Server (NTRS)

    Lincoln, Patrick; Rushby, John

    1993-01-01

    Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguishes three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands a asymmetric, s symmetric, and b benign faults simultaneously, using m+1 rounds, provided n > 2a + 2s + b + m and m >= a. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
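The classic OM algorithm that this work generalises can be sketched compactly. This is an inefficient teaching version, faulty nodes are modeled as simply inverting whatever they send (one of many possible Byzantine behaviours), and it does not implement the hybrid-fault extension:

```python
from collections import Counter

def om(m, value, commander, lieutenants, faulty):
    """Lamport-Shostak-Pease 'Oral Messages' OM(m).
    Returns the value each lieutenant decides for the commander's order."""
    def send(frm, v):
        return (1 - v) if frm in faulty else v   # faulty nodes invert

    received = {l: send(commander, value) for l in lieutenants}
    if m == 0:
        return received
    decided = {}
    for l in lieutenants:
        votes = [received[l]]
        for l2 in lieutenants:
            if l2 == l:
                continue
            # l2 acts as commander in OM(m-1), relaying what it received
            sub = om(m - 1, received[l2], l2,
                     [x for x in lieutenants if x != l2], faulty)
            votes.append(sub[l])
        decided[l] = Counter(votes).most_common(1)[0][0]   # majority vote
    return decided

# n = 4, m = 1 tolerates one Byzantine node (requires n > 3m)
print(om(1, 1, "G", ["A", "B", "C"], faulty={"B"}))
```

With a loyal commander sending 1 and lieutenant B faulty, the loyal lieutenants A and C both decide 1; with a faulty commander, all loyal lieutenants still agree with each other, which is exactly the Interactive Consistency property.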

  10. Large transient fault current test of an electrical roll ring

    NASA Technical Reports Server (NTRS)

    Yenni, Edward J.; Birchenough, Arthur G.

    1992-01-01

    The space station uses precision rotary gimbals to provide for sun tracking of its photoelectric arrays. Electrical power, command signals and data are transferred across the gimbals by roll rings. Roll rings have been shown to be capable of highly efficient electrical transmission and long life, through tests conducted at the NASA Lewis Research Center and Honeywell's Satellite and Space Systems Division in Phoenix, AZ. Large potential fault currents inherent to the power system's DC distribution architecture have brought about the need to evaluate the effects of large transient fault currents on roll rings. A test recently conducted at Lewis subjected a roll ring to a simulated worst-case space station electrical fault. The system model used to obtain the fault profile is described, along with details of the reduced-order circuit that was used to simulate the fault. Test results comparing roll ring performance before and after the fault are also presented.

  11. Nickel-Hydrogen Battery Fault Clearing at Low State of Charge

    NASA Technical Reports Server (NTRS)

    Lurie, C.

    1997-01-01

    Fault clearing currents were achieved and maintained at discharge rates from C/2 to C/3 at high and low states of charge. The fault-clearing plateau voltage is a strong function of the discharge current and of the voltage prior to the fault-clearing event, and a weak function of state of charge. Voltage performance, for the range of conditions reported, is summarized.

  12. Sliding Mode Observer-Based Current Sensor Fault Reconstruction and Unknown Load Disturbance Estimation for PMSM Driven System.

    PubMed

    Zhao, Kaihui; Li, Peng; Zhang, Changfan; Li, Xiangfei; He, Jing; Lin, Yuliang

    2017-12-06

    This paper proposes a new scheme of reconstructing current sensor faults and estimating unknown load disturbance for a permanent magnet synchronous motor (PMSM)-driven system. First, the original PMSM system is transformed into two subsystems; the first subsystem has unknown system load disturbances, which are unrelated to sensor faults, and the second subsystem has sensor faults, but is free from unknown load disturbances. Introducing a new state variable, the augmented subsystem that has sensor faults can be transformed into having actuator faults. Second, two sliding mode observers (SMOs) are designed: the unknown load disturbance is estimated by the first SMO in the subsystem, which has unknown load disturbance, and the sensor faults can be reconstructed using the second SMO in the augmented subsystem, which has sensor faults. The gains of the proposed SMOs and their stability analysis are developed via the solution of linear matrix inequality (LMI). Finally, the effectiveness of the proposed scheme was verified by simulations and experiments. The results demonstrate that the proposed scheme can reconstruct current sensor faults and estimate unknown load disturbance for the PMSM-driven system.
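The disturbance-estimation idea behind the first SMO can be illustrated on a scalar plant with an unknown constant load term; this is a hedged sketch with illustrative values, not the paper's full PMSM observer or its LMI-based gain design:

```python
def simulate_smo(d_true=2.0, a=1.0, L=5.0, dt=1e-3, steps=5000, tau=0.05):
    """First-order sliding mode observer for the scalar plant
    x' = -a*x + d with unknown constant disturbance d. On the
    sliding surface e = x - xhat = 0, the low-pass filtered
    switching injection approaches the equivalent control, which
    equals d; all parameter values are illustrative."""
    x = xhat = d_est = 0.0
    for _ in range(steps):
        e = x - xhat
        v = L * (1.0 if e > 0 else -1.0 if e < 0 else 0.0)  # switching term
        x += dt * (-a * x + d_true)      # plant (Euler step)
        xhat += dt * (-a * xhat + v)     # observer
        d_est += dt / tau * (v - d_est)  # filtered injection -> d
    return d_est
```

Choosing the gain L larger than the disturbance bound (here 5 > 2) is what keeps the error on the sliding surface, mirroring the stability condition the paper derives via LMIs.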

  13. Module Hipot and ground continuity test results

    NASA Technical Reports Server (NTRS)

    Griffith, J. S.

    1984-01-01

    Hipot (high voltage potential) and module frame continuity tests of solar energy conversion modules intended for deployment into large arrays are discussed. The purpose of the tests is to reveal potentially hazardous voltage conditions in installed modules, and leakage currents that may result in loss of power or cause ground fault system problems, i.e., current leakage potential and leakage voltage distribution. The tests show a combined failure rate of 36% (69% when environmental testing is included). These failure rates are believed to be easily corrected by greater care in fabrication.

  14. Sparsity-aware tight frame learning with adaptive subspace recognition for multiple fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yang, Boyuan

    2017-09-01

    It is a challenging problem to design dictionaries that sparsely represent diverse fault information and simultaneously discriminate different fault sources. Therefore, this paper describes and analyzes a novel multiple-feature recognition framework which incorporates the tight frame learning technique with an adaptive subspace recognition strategy. The proposed framework consists of four stages. Firstly, by introducing the tight frame constraint into the popular dictionary learning model, the proposed tight frame learning model can be formulated as a nonconvex optimization problem, solved by alternately applying a hard thresholding operation and a singular value decomposition. Secondly, noise is effectively eliminated through transform sparse coding techniques. Thirdly, the denoised signal is decoupled into discriminative feature subspaces by each tight frame filter. Finally, guided by elaborately designed fault-related sensitivity indexes, latent fault feature subspaces can be adaptively recognized and multiple faults diagnosed simultaneously. Extensive numerical experiments are subsequently implemented to investigate the sparsifying capability of the learned tight frame as well as its overall denoising performance. Most importantly, the feasibility and superiority of the proposed framework are verified by performing multiple-fault diagnosis of motor bearings. Compared with state-of-the-art fault detection techniques, some important advantages were observed: firstly, the proposed framework incorporates the physical prior with the data-driven strategy, so multiple fault features with similar oscillation morphology can be adaptively decoupled. Secondly, the tight frame dictionary learned directly from the noisy observation can significantly promote the sparsity of fault features compared to analytical tight frames. Thirdly, a satisfactory complete signal-space description property is guaranteed, so the weak-feature leakage problem of typical learning methods is avoided.
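The sparse-coding half of the alternating scheme described above reduces to a hard thresholding operation; a generic sketch (not the authors' implementation):

```python
def hard_threshold(coeffs, k):
    """Keep the k largest-magnitude coefficients and zero the rest:
    the hard thresholding step that alternates with the SVD-based
    dictionary update in tight frame learning."""
    order = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]),
                   reverse=True)
    keep = set(order[:k])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]

# Only the two strongest responses survive; small coefficients,
# which mostly carry noise, are discarded.
assert hard_threshold([0.1, -3.0, 0.5, 2.0], 2) == [0.0, -3.0, 0.0, 2.0]
```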

  15. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly, the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is no strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always sufficient to resolve the fault geometry completely. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through a combination of analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred: slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.
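The Monte Carlo treatment of a nonlinear geometry parameter can be sketched with a one-parameter Metropolis sampler; the forward model g(theta) = sin(theta) here is a hypothetical stand-in for the elastic dislocation model, and everything below is an illustration, not the authors' inversion:

```python
import math, random

def metropolis_dip(data, sigma=0.05, n_iter=6000, burn=1000, seed=1):
    """Random-walk Metropolis sampling of a single nonlinear
    fault-geometry parameter theta (a dip angle, in radians) from
    observations data[i] = g(theta) + noise, with the hypothetical
    forward model g(theta) = sin(theta)."""
    rng = random.Random(seed)

    def log_post(theta):
        if not 0.0 < theta < math.pi / 2:   # uniform prior on (0, 90 deg)
            return -math.inf
        g = math.sin(theta)                  # forward model prediction
        return -sum((y - g) ** 2 for y in data) / (2 * sigma ** 2)

    theta, lp, samples = 0.5, None, []
    lp = log_post(theta)
    for i in range(n_iter):
        prop = theta + rng.gauss(0.0, 0.1)   # random-walk proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop        # accept
        if i >= burn:
            samples.append(theta)            # keep post-burn-in draws
    return samples

# Synthetic data generated at a "true" dip of 1.0 rad.
rng = random.Random(0)
data = [math.sin(1.0) + rng.gauss(0.0, 0.05) for _ in range(50)]
posterior = metropolis_dip(data)
```

In the full mixed linear/nonlinear scheme, each such geometry draw would be paired with an analytical least-squares solve for the linear slip parameters.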

  16. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.

  17. About problematic peculiarities of Fault Tolerance digital regulation organization

    NASA Astrophysics Data System (ADS)

    Rakov, V. I.; Zakharova, O. V.

    2018-05-01

    Solutions are offered, in three directions, to the problems of estimating the working capacity of regulation chains and of preventing situations in which it is violated. The first direction is developing methods of representing the regulation loop by combining diffuse components, and forming algorithmic tooling for building serviceability-assessment predicates separately for the components and for the regulation loops in general. The second direction is creating methods of fault-tolerant redundancy in the process of complex assessment of the current values of control actions, closure errors, and the regulated parameters. The third direction is creating methods of comparing the processes of change of control actions, closure errors, and regulated parameters with their standard models or their surroundings. This direction allows one to develop methods and algorithmic tools aimed at preventing loss of serviceability and effectiveness not only of a separate digital regulator, but of the whole fault-tolerant regulation complex.

  18. Machine learning techniques for fault isolation and sensor placement

    NASA Technical Reports Server (NTRS)

    Carnes, James R.; Fisher, Douglas H.

    1993-01-01

    Fault isolation and sensor placement are vital for monitoring and diagnosis. A sensor conveys information about a system's state that guides troubleshooting if problems arise. We are using machine learning methods to uncover behavioral patterns over snapshots of system simulations that will aid fault isolation and sensor placement, with an eye towards minimality, fault coverage, and noise tolerance.

  19. Observations and Results Gained from the Jade Project

    DTIC Science & Technology

    2002-05-04

    Different dependency-based models have been created that vary in their levels of ... test series in the Java programming language. Currently, exception handling and ... presentation in the debugging of software to reduce the problem of structural faults ... error diagnosis in logic programs.

  20. Direct current hybrid breakers: A design and its realization

    NASA Astrophysics Data System (ADS)

    Atmadji, Ali Mahfudz Surya

    2000-12-01

    The use of semiconductors for electric power circuit breakers instead of conventional breakers remains elusive when designing fault current interrupters for high power networks. The major problems concerning power semiconductor circuit breakers are their excessive heat losses and their sensitivity to transients; conventional breakers, however, are capable of dealing with such matters. A combination of the two methods, a so-called `hybrid breaker', would appear to be a solution: hybrid breakers use separate parallel branches for conducting the main current and interrupting the short-circuit current. Such breakers are intended for protecting direct current (DC) traction systems. In this thesis, hybrid switching techniques for current limitation and purely solid-state current interruption are investigated for DC breakers. This work analyzes the transient behavior of hybrid breakers and compares their operation with conventional breakers and similar solid-state devices in DC systems. For this purpose a hybrid breaker was constructed and tested in a specially designed high power test circuit. A vacuum breaker was chosen as the main breaker in the main conducting path; a commutation path was then connected across the vacuum breaker, where it provided current limitation and interruption. The commutation path operated only during a current interruption, and the process required additional circuits: an energy storage element, an overvoltage suppressor and a commutation switch, so that when this stored energy was discharged, a controlled counter-current injection could be produced. That counter-current opposed the main current in the breaker by superposition in order to create a forced current zero. One-stage and two-stage commutation circuits have been treated extensively. This study contains both theoretical and experimental investigations. A direct current short-circuit source was constructed, capable of delivering power equivalent to a fault. It supplied a direct voltage of 1 kV DC, rectified from a 3-phase 10 kV/380 V supply. The source was successfully tested to deliver a fault current of 7 kA with a time constant of 5 ms. The hybrid breaker that was developed could provide protection for 750 V DC traction systems. The breaker was equipped with a fault-recognizing circuit based on current-level triggering. An electronic circuit was built for this purpose and included in the system; it monitored the system continuously and generated trip signals when a fault was recognized. Interruption was followed by a suitable timing of the fast contact separation in the main breaker and the current-zero creation. An electrodynamically driven mechanism with a dead time of 300 μs to separate the main breaker contacts was successfully tested. Furthermore, a maximum peak current injection of RA at a frequency of 500 Hz could be obtained in order to produce an artificial current zero in the vacuum breaker. A successful current interruption with a prospective value of RA was achieved by the hybrid switching technique. In addition, measures were taken to prevent overvoltages. Experimentally, the concept of a hybrid breaker was compared with the functioning of all-mechanical (air breaker) and all-electronic (IGCT breaker) versions. Although a single-stage interrupting method was verified experimentally, the two-stage interrupting methods were analyzed theoretically.
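For an undamped LC commutation circuit discharging a pre-charged capacitor, the counter-current's ring frequency and peak follow directly from the component values; the numbers below are illustrative, chosen only to reproduce a 500 Hz injection, and are not the thesis hardware:

```python
import math

def injection_params(v0, L, C):
    """Ring frequency f = 1/(2*pi*sqrt(L*C)) and peak current
    i_peak = v0*sqrt(C/L) of an ideal LC discharge, where v0 is the
    capacitor pre-charge voltage (component values illustrative)."""
    f = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    i_peak = v0 * math.sqrt(C / L)
    return i_peak, f

# Hypothetical values: 1 kV pre-charge, 100 uH, ~1 mF gives a ring
# near 500 Hz and a peak counter-current of a few kA.
i_peak, f = injection_params(1000.0, 100e-6, 1.013e-3)
```

The peak must exceed the prospective fault current at the moment of injection, otherwise no artificial current zero appears in the vacuum breaker.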

  1. Effect of a superconducting coil as a fault current limiter on current density distribution in BSCCO tape after an over-current pulse

    NASA Astrophysics Data System (ADS)

    Tallouli, M.; Shyshkin, O.; Yamaguchi, S.

    2017-07-01

    The development of power transmission lines based on long-length high-temperature superconducting (HTS) tapes is a complicated and technically challenging task. A serious problem for transmission line operation is HTS power cable damage under over-current pulse conditions. To avoid cable damage in such an emergency, superconducting coil technology, i.e. a superconducting fault current limiter (SFCL), is required. A comprehensive understanding of the current density characteristics of HTS tapes in both cases, either after a pure over-current pulse or after an over-current pulse limited by an SFCL, is needed to restart or to continue the operation of the power transmission line. Moreover, the current density distribution along and across the HTS tape provides sufficient information about the quality of the tape's performance in different current-feeding regimes. In the present paper we examine a BSCCO HTS tape under two current-feeding regimes. The first is 100 A feeding preceded by a 900 A over-current pulse; in this case no tape protection was used. The second scenario is similar to the first, but an SFCL is used to limit the over-current value. For both scenarios, after the pulse is gone and the current feeding is set at 100 A, we scan the magnetic field above the tape by means of a Hall probe sensor. Then the feeding is turned off and the magnetic field scanning is repeated. Using an inverse-problem numerical solver we calculate the corresponding direct and permanent current density distributions during the feeding and after switch-off. It is demonstrated that in the absence of the SFCL the current distribution is highly peaked at the tape center, whereas the current distribution in the experiment with the SFCL is similar to that observed under normal current-feeding conditions. The current peaking in the first case is explained by the effect of an opposite electric field induced at the tape edges during the over-current pulse decay, and by degradation of superconductivity at the edges due to penetration of magnetic field into the superconducting core during the pulse.

  2. Study on Distribution Reliability with Parallel and On-site Distributed Generation Considering Protection Miscoordination and Tie Line

    NASA Astrophysics Data System (ADS)

    Chaitusaney, Surachai; Yokoyama, Akihiko

    In distribution systems, Distributed Generation (DG) is expected to improve system reliability by serving as backup generation. However, the DG contribution to fault current may cause the loss of the existing protection coordination, e.g. recloser-fuse coordination and breaker-breaker coordination. This problem can drastically deteriorate system reliability, and it is more serious and complicated when there are several DG sources in the system. Hence, this conflict in reliability unavoidably requires detailed investigation before DG is installed or expanded. A model of the composite DG fault current is proposed to find the threshold beyond which existing protection coordination is lost. Cases of protection miscoordination are described, together with their consequences. Since a distribution system may be tied with another system, the issues of tie lines and on-site DG are integrated into this study. Reliability indices are evaluated and compared on the distribution reliability test system RBTS Bus 2.
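The threshold idea can be sketched as a simple check on the composite fault current; this is a hypothetical simplification for illustration, not the paper's coordination model:

```python
def coordination_preserved(grid_current, dg_currents, fuse_limit):
    """Hypothetical check: recloser-fuse coordination is taken to
    hold only while the composite fault current (the grid source
    plus all DG contributions seen by the fuse) stays within the
    upper limit of the coordination range between the recloser's
    fast curve and the fuse's minimum-melt curve."""
    composite = grid_current + sum(dg_currents)
    return composite <= fuse_limit

# Adding DG units pushes the composite fault current past the
# coordination limit, which is the threshold the paper solves for.
assert coordination_preserved(2000.0, [200.0, 300.0], 3000.0)
assert not coordination_preserved(2000.0, [600.0, 600.0], 3000.0)
```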

  3. Hanging-wall deformation above a normal fault: sequential limit analyses

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaoping; Leroy, Yves M.; Maillot, Bertrand

    2015-04-01

    The deformation in the hanging wall above a segmented normal fault is analysed with the sequential limit analysis (SLA). The method combines some predictions on the dip and position of the active fault and axial surface, with geometrical evolution à la Suppe (Groshong, 1989). Two problems are considered. The first followed the prototype proposed by Patton (2005) with a pre-defined convex, segmented fault. The orientation of the upper segment of the normal fault is an unknown in the second problem. The loading in both problems consists of the retreat of the back wall and the sedimentation. This sedimentation starts from the lowest point of the topography and acts at the rate rs relative to the wall retreat rate. For the first problem, the normal fault either has a zero friction or a friction value set to 25o or 30o to fit the experimental results (Patton, 2005). In the zero friction case, a hanging wall anticline develops much like in the experiments. In the 25o friction case, slip on the upper segment is accompanied by rotation of the axial plane producing a broad shear zone rooted at the fault bend. The same observation is made in the 30o case, but without slip on the upper segment. Experimental outcomes show a behaviour in between these two latter cases. For the second problem, mechanics predicts a concave fault bend with an upper segment dip decreasing during extension. The axial surface rooting at the normal fault bend sees its dips increasing during extension resulting in a curved roll-over. Softening on the normal fault leads to a stepwise rotation responsible for strain partitioning into small blocks in the hanging wall. The rotation is due to the subsidence of the topography above the hanging wall. Sedimentation in the lowest region thus reduces the rotations. 
Note that these rotations predicted by mechanics are not accounted for in most geometrical approaches (Xiao and Suppe, 1992) and are observed in sand box experiments (Egholm et al., 2007, referring to Dahl, 1987). References: Egholm, D. L., M. Sandiford, O. R. Clausen, and S. B. Nielsen (2007), A new strategy for discrete element numerical models: 2. sandbox applications, Journal of Geophysical Research, 112 (B05204), doi:10.1029/2006JB004558. Groshong, R. H. (1989), Half-graben structures: Balanced models of extensional fault-bend folds, Geological Society of America Bulletin, 101 (1), 96-105. Patton, T. L. (2005), Sandbox models of downward-steepening normal faults, AAPG Bulletin, 89 (6), 781-797. Xiao, H.-B., and J. Suppe (1992), Orgin of rollover, AAPG Bulletin, 76 (4), 509-529.

  4. Methodology for fault detection in induction motors via sound and vibration signals

    NASA Astrophysics Data System (ADS)

    Delgado-Arredondo, Paulo Antonio; Morinigo-Sotelo, Daniel; Osornio-Rios, Roque Alfredo; Avina-Cervantes, Juan Gabriel; Rostro-Gonzalez, Horacio; Romero-Troncoso, Rene de Jesus

    2017-01-01

    Nowadays, timely maintenance of electric motors is vital to keep up the complex processes of industrial production. There are currently a variety of methodologies for fault diagnosis. Usually, the diagnosis is performed by analyzing current signals at a steady-state motor operation or during a start-up transient. This method is known as motor current signature analysis, which identifies frequencies associated with faults in the frequency domain or by the time-frequency decomposition of the current signals. Fault identification may also be possible by analyzing acoustic sound and vibration signals, which is useful because sometimes this information is the only available. The contribution of this work is a methodology for detecting faults in induction motors in steady-state operation based on the analysis of acoustic sound and vibration signals. This proposed approach uses the Complete Ensemble Empirical Mode Decomposition for decomposing the signal into several intrinsic mode functions. Subsequently, the frequency marginal of the Gabor representation is calculated to obtain the spectral content of the IMF in the frequency domain. This proposal provides good fault detectability results compared to other published works in addition to the identification of more frequencies associated with the faults. The faults diagnosed in this work are two broken rotor bars, mechanical unbalance and bearing defects.

  5. Adaptive Fuzzy Output-Constrained Fault-Tolerant Control of Nonlinear Stochastic Large-Scale Systems With Actuator Faults.

    PubMed

    Li, Yongming; Ma, Zhiyao; Tong, Shaocheng

    2017-09-01

    The problem of adaptive fuzzy output-constrained tracking fault-tolerant control (FTC) is investigated for the large-scale stochastic nonlinear systems of pure-feedback form. The nonlinear systems considered in this paper possess the unstructured uncertainties, unknown interconnected terms and unknown nonaffine nonlinear faults. The fuzzy logic systems are employed to identify the unknown lumped nonlinear functions so that the problems of structured uncertainties can be solved. An adaptive fuzzy state observer is designed to solve the nonmeasurable state problem. By combining the barrier Lyapunov function theory, adaptive decentralized and stochastic control principles, a novel fuzzy adaptive output-constrained FTC approach is constructed. All the signals in the closed-loop system are proved to be bounded in probability and the system outputs are constrained in a given compact set. Finally, the applicability of the proposed controller is well carried out by a simulation example.

  6. Sliding Mode Observer-Based Current Sensor Fault Reconstruction and Unknown Load Disturbance Estimation for PMSM Driven System

    PubMed Central

    Li, Xiangfei; Lin, Yuliang

    2017-01-01

    This paper proposes a new scheme of reconstructing current sensor faults and estimating unknown load disturbance for a permanent magnet synchronous motor (PMSM)-driven system. First, the original PMSM system is transformed into two subsystems; the first subsystem has unknown system load disturbances, which are unrelated to sensor faults, and the second subsystem has sensor faults, but is free from unknown load disturbances. Introducing a new state variable, the augmented subsystem that has sensor faults can be transformed into having actuator faults. Second, two sliding mode observers (SMOs) are designed: the unknown load disturbance is estimated by the first SMO in the subsystem, which has unknown load disturbance, and the sensor faults can be reconstructed using the second SMO in the augmented subsystem, which has sensor faults. The gains of the proposed SMOs and their stability analysis are developed via the solution of linear matrix inequality (LMI). Finally, the effectiveness of the proposed scheme was verified by simulations and experiments. The results demonstrate that the proposed scheme can reconstruct current sensor faults and estimate unknown load disturbance for the PMSM-driven system. PMID:29211017

  7. Multi-Physics Modelling of Fault Mechanics Using REDBACK: A Parallel Open-Source Simulator for Tightly Coupled Problems

    NASA Astrophysics Data System (ADS)

    Poulet, Thomas; Paesold, Martin; Veveakis, Manolis

    2017-03-01

    Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.

  8. Designing Fault-Injection Experiments for the Reliability of Embedded Systems

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2012-01-01

    This paper considers the long-standing problem of conducting fault-injection experiments to establish the ultra-reliability of embedded systems. There have been extensive efforts in fault injection, and this paper offers a partial summary of them, but previous efforts have focused on realism and efficiency. Fault injections have been used to examine diagnostics and to test algorithms, but the literature does not contain any framework that says how to conduct fault-injection experiments to establish ultra-reliability. A solution to this problem integrates field data, arguments-from-design, and fault injection into a seamless whole. The solution in this paper is to derive a model reduction theorem for a class of semi-Markov models suitable for describing ultra-reliable embedded systems. The derivation shows that a tight upper bound on the probability of system failure can be obtained using only the means of system-recovery times, thus reducing the experimental effort to estimating a reasonable number of easily observed parameters. The paper includes an example of a system subject to both permanent and transient faults. There is a discussion of integrating fault injection with field data and arguments-from-design.

  9. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of the dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a Hidden Markov Model. First, after the dynamic observation vector in measuring space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault pattern classifier is efficient and convenient for detecting and diagnosing new faults.
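The DTW front end is the one piece of the pipeline that is fully standard; a minimal dynamic-programming implementation (a generic sketch, not the authors' code):

```python
def dtw_distance(x, y):
    """Dynamic time warping distance between two 1-D sequences:
    the optimal-alignment cost computed by the classic O(n*m)
    dynamic program, used here as the preprocessing step before
    feature extraction."""
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A time-shifted copy of a signal aligns at zero cost under DTW,
# although a pointwise comparison would report a difference.
assert dtw_distance([1, 2, 3], [1, 2, 3]) == 0.0
assert dtw_distance([0, 0, 1, 2], [0, 1, 2, 2]) == 0.0
```

This warping invariance is what lets dynamic process vectors of different timing be compared before SOFM quantization and HMM classification.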

  10. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  11. Fuzzy logic based on-line fault detection and classification in transmission line.

    PubMed

    Adhikari, Shuma; Sinha, Nidul; Dorendrajit, Thingam

    2016-01-01

    This study presents fuzzy logic based online fault detection and classification of a transmission line using Programmable Automation and Control technology based on National Instruments Compact Reconfigurable I/O (cRIO) devices. The LabVIEW software combined with cRIO can perform real-time data acquisition on the transmission line. When a fault occurs in the system, the current waveforms are distorted by transients and their pattern changes according to the type of fault. The three-phase alternating current, zero-sequence, and positive-sequence current data generated by LabVIEW through the cRIO-9067 are processed directly for relaying. The results show that the proposed technique is capable of correct tripping action and classification of the fault type at high speed, and can therefore be employed in practical applications.
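The fuzzy stage can be illustrated with triangular membership functions over the per-phase current magnitude; the membership breakpoints and the single rule below are hypothetical, shown only to convey the mechanism:

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak
    at b: the basic building block of a fuzzy classifier."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_phase(i_rms, i_nominal):
    """Hypothetical per-phase rule: compare the degree to which the
    current ratio looks 'healthy' versus 'faulted' (breakpoints are
    illustrative, not tuned values from the study)."""
    r = i_rms / i_nominal
    healthy = trimf(r, 0.0, 1.0, 2.0)
    faulted = trimf(r, 1.5, 5.0, 20.0)
    return "fault" if faulted > healthy else "sound"

assert classify_phase(100.0, 100.0) == "sound"   # nominal current
assert classify_phase(800.0, 100.0) == "fault"   # heavy over-current
```

A real classifier combines such per-phase and zero-sequence memberships through a rule base to also name the fault type (L-G, L-L, L-L-G, three-phase).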

  12. Testability analysis on a hydraulic system in a certain equipment based on simulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou

    2018-03-01

    To address the complicated structure of hydraulic systems and the shortage of fault-statistics information for them, a multi-valued testability analysis method based on a simulation model is proposed. Using an AMESim simulation model, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point compared with those under normal conditions, thereby establishing a multi-valued fault-test dependency matrix. The fault detection rate (FDR) and fault isolation rate (FIR) are then calculated from the dependency matrix. Finally, the testability and fault diagnosis capability of the system are analyzed and evaluated; they reach only 54% (FDR) and 23% (FIR). To improve the testability of the system, the number and position of the test points are optimized. Results show that the proposed test placement scheme can address the difficulty, inefficiency, and high cost of maintaining the system.
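Computing FDR and FIR from a fault-test dependency matrix is mechanical; a minimal sketch (with both rates taken over all faults, a simplifying assumption rather than the paper's exact definitions):

```python
def fdr_fir(D):
    """Fault detection rate and fault isolation rate from a
    fault-test dependency matrix D: rows are faults, columns are
    tests, and a nonzero D[i][j] means test j responds to fault i
    (multi-valued entries keep the response level, e.g. high/low)."""
    detected = [row for row in D if any(v != 0 for v in row)]
    fdr = len(detected) / len(D)
    # A detected fault is isolable when its test signature (its row)
    # differs from every other detected fault's signature.
    sigs = [tuple(row) for row in detected]
    isolated = [s for s in sigs if sigs.count(s) == 1]
    fir = len(isolated) / len(D)
    return fdr, fir

D = [[1, 0, 2],   # fault 1: shares its signature with fault 2
     [1, 0, 2],   # fault 2: detected but not isolable
     [0, 1, 0],   # fault 3: unique signature, isolable
     [0, 0, 0]]   # fault 4: undetected
```

Test-point optimization then amounts to choosing columns that detect every row and make the detected rows' signatures pairwise distinct.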

  13. Fault tolerance in computational grids: perspectives, challenges, and issues.

    PubMed

    Haider, Sajjad; Nazir, Babar

    2016-01-01

    Computational grids are established with the intention of providing shared access to hardware- and software-based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is the creation of an extended classification of the problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related problems, have been identified that need to be handled on various layers of the computational grid. In this survey, an analysis and examination is also performed pertaining to fault tolerance and fault detection mechanisms. Our conclusion is that a dependable and reliable grid can be established only when more emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.

  14. Fault Diagnosis for the Heat Exchanger of the Aircraft Environmental Control System Based on the Strong Tracking Filter

    PubMed Central

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system’s efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger. PMID:25823010
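The strong-tracking idea can be sketched in one dimension: a Kalman filter whose predicted covariance is inflated by a fading factor whenever the innovation sequence grows, so the estimate keeps up with an abrupt, fault-induced change. The random-walk model, smoothing constant, and noise levels below are illustrative assumptions, not the paper's ECS model:

```python
def run_stf(measurements, q=0.01, r=0.1, rho=0.95):
    """Scalar strong-tracking filter for x_{k+1} = x_k + w,  z_k = x_k + v."""
    x, P, V = 0.0, 1.0, None
    estimates = []
    for z in measurements:
        gamma = z - x                             # innovation
        V = gamma**2 if V is None else (rho * V + gamma**2) / (1 + rho)
        lam = max(1.0, (V - r) / (P + q))         # fading factor >= 1
        P = lam * P + q                           # inflated predicted covariance
        K = P / (P + r)                           # Kalman gain
        x = x + K * gamma
        P = (1 - K) * P
        estimates.append(x)
    return estimates

# A step change at sample 4 mimics an abrupt fault-related parameter shift;
# the inflated covariance lets the estimate jump to the new level quickly.
est = run_stf([0.0, 0.1, -0.1, 5.0, 5.1, 5.0])
```

After the step, the estimate lands near 5 within a couple of samples, which is the behavior the fading factor buys over a fixed-gain filter that would converge much more slowly.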

  15. Fault diagnosis for the heat exchanger of the aircraft environmental control system based on the strong tracking filter.

    PubMed

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system's efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger.

  16. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    PubMed Central

    Al-Mohammed, A. H.; Abido, M. A.

    2014-01-01

    This paper presents a comprehensive survey of transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms for three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when installed on a transmission line, create certain problems for line fault locators; therefore, fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aiming at better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, the nonstandard high-frequency fault location techniques based on the wavelet transform are discussed. Finally, the paper highlights areas for future research. PMID:24701191
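The basic two-end synchronized algorithm the survey covers can be written down directly: with phasors V_S, I_S and V_R, I_R time-stamped at both line ends, the voltage at the fault point computed from either end must agree, V_S − m·Z·I_S = V_R − (1−m)·Z·I_R, which yields the per-unit fault distance m in closed form. The line impedance and phasor values below are invented for illustration:

```python
def fault_distance(Vs, Is, Vr, Ir, Z):
    """Per-unit distance m to the fault from the sending end (complex phasors)."""
    m = (Vs - Vr + Z * Ir) / (Z * (Is + Ir))
    return m.real   # imaginary part ~0 for consistent synchronized data

# Synthesize consistent two-end measurements for a fault at m = 0.3:
Z = 0.05 + 0.5j                  # total series line impedance (assumed)
Vf = 1.0 + 0j                    # voltage at the fault point (assumed)
Is, Ir = 4.0 - 2.0j, 2.0 - 1.0j  # currents flowing in from each end
Vs = Vf + 0.3 * Z * Is
Vr = Vf + (1 - 0.3) * Z * Ir
print(round(fault_distance(Vs, Is, Vr, Ir, Z), 3))  # → 0.3
```

Because both ends are synchronized, no fault-resistance or source-impedance assumptions enter this particular formula, which is why two-end methods are the survey's baseline.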

  17. A Segmented Ion-Propulsion Engine

    NASA Technical Reports Server (NTRS)

    Brophy, John R.

    1992-01-01

    New design approach for high-power (100-kW class or greater) ion engines conceptually divides single engine into combination of smaller discharge chambers integrated to operate as single large engine. Analogous to multicylinder automobile engine, benefits include reduction in required accelerator system span-to-gap ratio for large-area engines, reduction in required hollow-cathode emission current, mitigation of plasma-uniformity problem, increased tolerance to accelerator system faults, and reduction in vacuum-system pumping speed.

  18. Evaluating the Effect of Integrated System Health Management on Mission Effectiveness

    DTIC Science & Technology

    2013-03-01

    Health Status, Fault Detection, IMS Commands («Needline», OV-5a activity diagram) ... UAS to self-detect, isolate, and diagnose system health problems. Current flight avionics architectures may include lower-level sub-system health monitoring, or may isolate health monitoring functions to a black-box configuration, but a vehicle-wide health monitoring information system has

  19. Fault-scale controls on rift geometry: the Bilila-Mtakataka Fault, Malawi

    NASA Astrophysics Data System (ADS)

    Hodge, M.; Fagereng, A.; Biggs, J.; Mdala, H. S.

    2017-12-01

    Border faults that develop during initial stages of rifting determine the geometry of rifts and passive margins. At outcrop and regional scales, it has been suggested that border fault orientation may be controlled by reactivation of pre-existing weaknesses. Here, we perform a multi-scale investigation on the influence of anisotropic fabrics along a major developing border fault in the southern East African Rift, Malawi. The 130 km long Bilila-Mtakataka fault has been proposed to have slipped in a single MW 8 earthquake with 10 m of normal displacement. The fault is marked by an 11±7 m high scarp with an average trend that is oblique to the current plate motion. Variations in scarp height are greatest at lithological boundaries and where the scarp switches between following and cross-cutting high-grade metamorphic foliation. Based on the scarp's geometry and morphology, we define 6 geometrically distinct segments. We suggest that the segments link to at least one deeper structure that strikes parallel to the average scarp trend, an orientation consistent with the kinematics of an early phase of rift initiation. The slip required on a deep fault(s) to match the height of the current scarp suggests multiple earthquakes along the fault. We test this hypothesis by studying the scarp morphology using high-resolution satellite data. Our results suggest that during the earthquake(s) that formed the current scarp, the propagation of the fault toward the surface locally followed moderately-dipping foliation well oriented for reactivation. In conclusion, although well oriented pre-existing weaknesses locally influence shallow fault geometry, large-scale border fault geometry appears primarily controlled by the stress field at the time of fault initiation.

  20. Monitoring tooth profile faults in epicyclic gearboxes using synchronously averaged motor currents: Mathematical modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Ottewill, J. R.; Ruszczyk, A.; Broda, D.

    2017-02-01

    Time-varying transmission paths and inaccessibility can increase the difficulty in both acquiring and processing vibration signals for the purpose of monitoring epicyclic gearboxes. Recent work has shown that the synchronous signal averaging approach may be applied to measured motor currents in order to diagnose tooth faults in parallel shaft gearboxes. In this paper we further develop the approach, so that it may also be applied to monitor tooth faults in epicyclic gearboxes. A low-degree-of-freedom model of an epicyclic gearbox which incorporates the possibility of simulating tooth faults, as well as any subsequent tooth contact loss due to these faults, is introduced. By combining this model with a simple space-phasor model of an induction motor it is possible to show that, in theory, tooth faults in epicyclic gearboxes may be identified from motor currents. Applying the synchronous averaging approach to experimentally recorded motor currents and angular displacements recorded from a shaft mounted encoder, validate this finding. Comparison between experiments and theory highlight the influence of operating conditions, backlash and shaft couplings on the transient response excited in the currents by the tooth fault. The results obtained suggest that the method may be a viable alternative or complement to more traditional methods for monitoring gearboxes. However, general observations also indicate that further investigations into the sensitivity and robustness of the method would be beneficial.
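The synchronous averaging step can be sketched as follows (synthetic signal, not the authors' measurements): samples are grouped by encoder angle so that, averaged over many revolutions, components locked to shaft rotation survive while asynchronous interference cancels:

```python
import math

samples_per_rev = 64            # encoder resolution per revolution (assumed)
revs = 50                       # revolutions averaged
signal = []
for n in range(samples_per_rev * revs):
    angle = 2 * math.pi * n / samples_per_rev
    tooth_component = 0.3 * math.sin(5 * angle)    # locked to shaft rotation
    interference = 0.5 * math.sin(0.37 * n + 1.0)  # not shaft-synchronous
    signal.append(tooth_component + interference)

# Synchronous average: mean over revolutions at each angular position.
tsa = [sum(signal[r * samples_per_rev + k] for r in range(revs)) / revs
       for k in range(samples_per_rev)]
```

The averaged waveform retains the order-5 shaft-locked component while the asynchronous term is strongly attenuated; applied to motor currents, the same averaging exposes the tooth-fault signature discussed above.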

  1. Origin analysis of expanded stacking faults by applying forward current to 4H-SiC p-i-n diodes

    NASA Astrophysics Data System (ADS)

    Hayashi, Shohei; Naijo, Takanori; Yamashita, Tamotsu; Miyazato, Masaki; Ryo, Mina; Fujisawa, Hiroyuki; Miyajima, Masaaki; Senzaki, Junji; Kato, Tomohisa; Yonezawa, Yoshiyuki; Kojima, Kazutoshi; Okumura, Hajime

    2017-08-01

    Stacking faults expanded by the application of forward current to 4H-SiC p-i-n diodes were observed using a transmission electron microscope to investigate the expansion origin. It was experimentally confirmed that long-zonal-shaped stacking faults expanded from basal-plane dislocations converted into threading edge dislocations. In addition, stacking fault expansion clearly penetrated into the substrate to a greater depth than the dislocation conversion point. This downward expansion of stacking faults strongly depends on the degree of high-density minority carrier injection.

  2. Parallel and distributed computation for fault-tolerant object recognition

    NASA Technical Reports Server (NTRS)

    Wechsler, Harry

    1988-01-01

    The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.

  3. Double-layer rotor magnetic shield performance analysis in high temperature superconducting synchronous generators under short circuit fault conditions

    NASA Astrophysics Data System (ADS)

    Hekmati, Arsalan; Aliahmadi, Mehdi

    2016-12-01

    High temperature superconducting (HTS) synchronous machines benefit from a rotor magnetic shield that protects the superconducting coils against asynchronous magnetic fields. This magnetic shield, however, suffers from Lorentz forces generated by the eddy currents induced during transient conditions, e.g. a short-circuit fault in the stator windings. In addition to the exerted electromagnetic forces, eddy current losses and their effects on the cryogenic system are further consequences of shielding HTS coils. This study investigates rotor magnetic shield (RMS) performance in HTS synchronous generators under stator winding short-circuit fault conditions. The eddy currents induced at different circumferential positions of the rotor magnetic shield, along with the associated Joule heating losses, are studied using 2-D time-stepping finite element analysis (FEA). The Lorentz forces exerted on the magnetic shield during transient conditions are also investigated. The obtained results show that the double line-to-ground fault is the most important among the different types of short-circuit faults. It was revealed that, when designing rotor magnetic shields, the two-phase-to-ground fault should be taken into account in addition to the eddy current distribution and the associated ohmic losses, since the electromagnetic forces produced under fault conditions are most severe during a double line-to-ground fault.

  4. Detection of CMOS bridging faults using minimal stuck-at fault test sets

    NASA Technical Reports Server (NTRS)

    Ijaz, Nabeel; Frenzel, James F.

    1993-01-01

    The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.

  5. Small-scale structural heterogeneity and well-communication problems in the Granny Creek oil field of West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, L.; Wilson, T.H.; Shumaker, R.C.

    1993-08-01

    Seismic interpretations of the Granny Creek oil field in West Virginia suggest the presence of numerous small-scale fracture zones and faults. Seismic disruptions interpreted as faults and/or fracture zones are represented by abrupt reflection offsets, local amplitude reductions, and waveform changes. These features are enhanced through reprocessing, and the majority of the improvements to the data result from the surface-consistent application of zero-phase deconvolution. Reprocessing yields a 20% improvement of resolution. Seismic interpretations of these features as small faults and fracture zones are supported by nearby offset vertical seismic profiles and by their proximity to wells between which direct communication occurs during waterflooding. Four sets of faults are interpreted based on subsurface and seismic data. Direct interwell communication is interpreted to be associated only with a northeast-trending set of faults, which are believed to have detached structural origins. Subsequent reactivation of deeper basement faults may have opened fractures along this trend. These faults have a limited effect on primary production, but cause many well-communication problems and reduce secondary production. Seismic detection of these zones is important to the economic and effective design of secondary recovery operations, because direct well communication often results in significant reduction of sweep efficiency during waterflooding. Prior information about the location of these zones would allow secondary recovery operations to avoid potential problem areas and increase oil recovery.

  6. Robust Fault Detection and Isolation for Stochastic Systems

    NASA Technical Reports Server (NTRS)

    George, Jemin; Gregory, Irene M.

    2010-01-01

    This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.

  7. Fiber Bragg grating sensor for fault detection in high voltage overhead transmission lines

    NASA Astrophysics Data System (ADS)

    Moghadas, Amin

    2011-12-01

    A fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in magnetostrictive material which is then detected by fiber Bragg grating (FBG) sensors. The fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signals. Any surge in the magnetic field relates to an increased fault current at a certain location. Also, the fault location can be precisely defined with an artificial neural network (ANN) algorithm. This algorithm can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of FBG sensors and can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems.
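A back-of-the-envelope sketch of the sensing principle: the fault current's magnetic field strains the magnetostrictive element, and the FBG reflection shifts by Δλ ≈ λ_B(1 − p_e)ε. All numbers below are assumed typical values, not the paper's:

```python
lambda_b = 1550e-9   # Bragg wavelength (m), typical C-band FBG (assumed)
p_e = 0.22           # effective photo-elastic coefficient of silica fiber
strain = 200e-6      # strain from the magnetostrictive element (assumed)

shift = lambda_b * (1 - p_e) * strain       # Bragg wavelength shift
print(f"{shift * 1e12:.1f} pm")  # → 241.8 pm
```

A shift of a few hundred picometres is comfortably within the resolution of standard FBG interrogators, which is what makes the scheme practical for fault-current surges.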

  8. Fiber Bragg Grating Sensor for Fault Detection in Radial and Network Transmission Lines

    PubMed Central

    Moghadas, Amin A.; Shadaram, Mehdi

    2010-01-01

    In this paper, a fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in magnetostrictive material which is then detected by a Fiber Bragg Grating (FBG). The fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signal. Any surge in the magnetic field relates to an increased fault current at a certain location. Also, the fault location can be precisely defined with an artificial neural network (ANN) algorithm. This algorithm can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG and can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems. PMID:22163416

  9. H∞ robust fault-tolerant controller design for an autonomous underwater vehicle's navigation control system

    NASA Astrophysics Data System (ADS)

    Cheng, Xiang-Qin; Qu, Jing-Yuan; Yan, Zhe-Ping; Bian, Xin-Qian

    2010-03-01

    In order to improve the security and reliability for autonomous underwater vehicle (AUV) navigation, an H∞ robust fault-tolerant controller was designed after analyzing variations in state-feedback gain. Operating conditions and the design method were then analyzed so that the control problem could be expressed as a mathematical optimization problem. This permitted the use of linear matrix inequalities (LMI) to solve for the H∞ controller for the system. When considering different actuator failures, these conditions were then also mathematically expressed, allowing the H∞ robust controller to solve for these events and thus be fault-tolerant. Finally, simulation results showed that the H∞ robust fault-tolerant controller could provide precise AUV navigation control with strong robustness.

  10. Cascading Policies Provide Fault Tolerance for Pervasive Clinical Communications.

    PubMed

    Williams, Rose; Jalan, Srikant; Stern, Edie; Lussier, Yves A

    2005-03-21

    We implemented an end-to-end notification system that pushed urgent clinical laboratory results to Blackberry 7510 devices over the Nextel cellular network. We designed our system to use user roles and notification policies to abstract and execute clinical notification procedures. We anticipated some problems with dropped and non-delivered messages when the device was out-of-network, however, we did not expect the same problems in other situations like device reconnection to the network. We addressed these problems by creating cascading "fault tolerance" policies to drive notification escalation when messages timed-out or delivery failed. This paper describes our experience in providing an adaptable, fault tolerant pervasive notification system for delivering secure, critical, time-sensitive patient laboratory results.

  11. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition

    PubMed Central

    Lv, Yong; Song, Gangbing

    2018-01-01

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode mixing problems in multi-fault frequency extraction. By aligning IMF sets into a third order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the number of faults. Fault correlation factor (FCF) analysis is used to conduct correlation analysis in order to determine the effective IMFs; the characteristic frequencies of multiple faults can then be extracted. Numerical simulations and the application to a multi-fault case demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals. PMID:29659510
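The fault-number estimation step can be illustrated in miniature (with a 2-D matrix standing in for the paper's third-order IMF tensor, and synthetic data): the number of dominant singular values of the stacked multichannel signal estimates the number of independent fault sources. The mixing matrix, frequencies, and threshold here are all assumptions:

```python
import numpy as np

t = np.linspace(0, 1, 400, endpoint=False)
src1 = np.sin(2 * np.pi * 37 * t)              # fault frequency 1 (assumed)
src2 = np.sin(2 * np.pi * 90 * t)              # fault frequency 2 (assumed)
mix = np.array([[1.0, 0.2], [0.5, 1.0], [0.2, 0.8],   # 6 channels mixing
                [0.9, 0.1], [0.3, 0.6], [0.7, 0.9]])  # 2 fault sources
X = mix @ np.vstack([src1, src2])              # 6 x 400 multichannel signal
X += 0.01 * np.random.default_rng(0).normal(size=X.shape)  # sensor noise

s = np.linalg.svd(X, compute_uv=False)
n_faults = int(np.sum(s > 0.1 * s[0]))         # count dominant singular values
print(n_faults)  # → 2
```

The paper's HOSVD does the analogous counting along each mode of the IMF tensor rather than on a flat matrix, but the rank-estimation principle is the same.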

  12. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition.

    PubMed

    Yuan, Rui; Lv, Yong; Song, Gangbing

    2018-04-16

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode mixing problems in multi-fault frequency extraction. By aligning IMF sets into a third order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the number of faults. Fault correlation factor (FCF) analysis is used to conduct correlation analysis in order to determine the effective IMFs; the characteristic frequencies of multiple faults can then be extracted. Numerical simulations and the application to a multi-fault case demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals.

  13. Application of an inverse method for calculating three-dimensional fault geometries and slip vectors, Nun River Field, Nigeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, H.G.; White, N.

    A general, automatic method for determining the three-dimensional geometry of a normal fault of any shape and size is applied to a three-dimensional seismic reflection data set from the Nun River field, Nigeria. In addition to calculating fault geometry, the method also automatically retrieves the extension direction without requiring any previous information about either the fault shape or the extension direction. Solutions are found by minimizing the misfit between sets of faults that are calculated from the observed geometries of two or more hanging-wall beds. In the example discussed here, the predicted fault surface is in excellent agreement with the shape of the seismically imaged fault. Although the calculated extension direction is oblique to the average strike of the fault, the value of this parameter is not well resolved. Our approach differs markedly from standard section-balancing models in two important ways. First, we do not assume that the extension direction is known, and second, the use of inverse theory ensures that formal confidence bounds can be determined for calculated fault geometries. This ability has important implications for a range of geological problems encountered at both exploration and production scales. In particular, once the three-dimensional displacement field has been constrained, the difficult but important problem of three-dimensional palinspastic restoration of hanging-wall structures becomes tractable.

  14. Steady-state analysis of a faulted three-phase four-wire system supplying induction motors with neutrals connected and other single-phase line-to-neutral loads

    NASA Technical Reports Server (NTRS)

    Wood, M. E.

    1980-01-01

    Four-wire, wye-connected AC power systems exhibit peculiar steady-state fault characteristics when the fourth (neutral) wire of three-phase induction motors is connected. The loss of one phase of the power source, due to a series or shunt fault, results in currents higher than anticipated on the remaining two phases. A theoretical approach to computing the fault currents and voltages is developed. A FORTRAN program is included in the appendix.
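The symmetrical-component bookkeeping underlying such a steady-state analysis can be sketched as follows (illustrative values, not output of the report's FORTRAN program). After the loss of phase A, the zero-sequence component I0 becomes nonzero, and that component is exactly what circulates in the fourth (neutral) wire:

```python
import cmath

a = cmath.exp(2j * cmath.pi / 3)     # 120-degree rotation operator

def sequence_components(Ia, Ib, Ic):
    """Zero-, positive-, and negative-sequence currents."""
    I0 = (Ia + Ib + Ic) / 3
    I1 = (Ia + a * Ib + a**2 * Ic) / 3
    I2 = (Ia + a**2 * Ib + a * Ic) / 3
    return I0, I1, I2

# Balanced load currents would be 10, 10*a**2, 10*a; here phase A is lost:
Ia, Ib, Ic = 0.0, 10 * a**2, 10 * a
I0, I1, I2 = sequence_components(Ia, Ib, Ic)
print(round(abs(I0), 2), round(abs(I1), 2), round(abs(I2), 2))  # → 3.33 6.67 3.33
```

The nonzero zero- and negative-sequence currents are what drive the unexpected heating and overcurrents on the surviving phases that the report analyzes.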

  15. Optimal Fault-Tolerant Control for Discrete-Time Nonlinear Strict-Feedback Systems Based on Adaptive Critic Design.

    PubMed

    Wang, Zhanshan; Liu, Lei; Wu, Yanming; Zhang, Huaguang

    2018-06-01

    This paper investigates the problem of optimal fault-tolerant control (FTC) for a class of unknown nonlinear discrete-time systems with actuator faults in the framework of adaptive critic design (ACD). A pivotal highlight is the adaptive auxiliary signal for the actuator fault, which is designed to offset the effect of the fault. The considered systems are in strict-feedback form and involve unknown nonlinear functions, which results in a causal problem. To solve this problem, the original nonlinear systems are transformed into a novel system by employing diffeomorphism theory. Besides, action neural networks (ANNs) are utilized to approximate a predefined unknown function in the backstepping design procedure. Combining the strategic utility function and the ACD technique, a reinforcement learning algorithm is proposed to set up an optimal FTC, in which critic neural networks (CNNs) provide an approximate structure of the cost function. This not only guarantees the stability of the systems, but also achieves optimal control performance. Finally, two simulation examples are used to show the effectiveness of the proposed optimal FTC strategy.

  16. Superconducting matrix fault current limiter with current-driven trigger mechanism

    DOEpatents

    Yuan; Xing

    2008-04-15

    A modular and scalable Matrix-type Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. An inductor is connected in series with the trigger superconductor in the trigger matrix and physically surrounds the superconductor. The current surge during a fault will generate a trigger magnetic field in the series inductor to cause fast and uniform quenching of the trigger superconductor to significantly reduce burnout risk due to superconductor material non-uniformity.

  17. Fault Current Distribution and Pole Earth Potential Rise (EPR) Under Substation Fault

    NASA Astrophysics Data System (ADS)

    Nnassereddine, M.; Rizk, J.; Hellany, A.; Nagrial, M.

    2013-09-01

    New high-voltage (HV) substations are fed by transmission lines. The position of these lines necessitates earthing design to ensure safety compliance of the system. Conductive structures such as steel or concrete poles are widely used in HV transmission mains. The earth potential rise (EPR) generated by a fault at the substation could result in an unsafe condition. This article discusses EPR under a substation fault. The pole EPR under a substation fault is assessed with and without consideration of mutual impedance. Split factor determination with and without the mutual impedance of the line is also discussed. Furthermore, a simplified formula to compute the pole grid current under a substation fault is included. The article also introduces the n factor, which determines the number of poles that require earthing assessment under a substation fault. A case study is shown.
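The quantities discussed above can be put into a toy calculation (every value here is hypothetical, and the uniform split among poles is a simplification of the article's n-factor treatment): the split factor fixes the share of fault current returning through the earth, and a pole's EPR is its share of that current times its footing resistance:

```python
fault_current = 10_000.0   # A, substation fault level (assumed)
split_factor = 0.6         # fraction returning via the earth path (assumed)
grid_current = split_factor * fault_current

n_poles = 20               # poles sharing the earth-return current (assumed)
pole_current = grid_current / n_poles      # uniform split: a simplification
pole_resistance = 15.0     # ohm, pole footing resistance (assumed)
pole_epr = pole_current * pole_resistance  # earth potential rise at the pole
print(f"pole EPR = {pole_epr:.0f} V")  # → pole EPR = 4500 V
```

In practice the current divides unevenly, with poles nearest the fault carrying more; that distribution is what the mutual-impedance and n-factor analysis in the article refines.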

  18. ALLIANCE: An architecture for fault tolerant, cooperative control of heterogeneous mobile robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, L.E.

    1995-02-01

    This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.

  19. Soft-Fault Detection Technologies Developed for Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2004-01-01

    The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large, electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern, digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed includes the detection of "soft faults," those not typically covered by current systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems (see the preceding photograph). Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.

  20. Planning and Resource Management in an Intelligent Automated Power Management System

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.

    1991-01-01

    Power system management is a process of guiding a power system towards the objective of continuous supply of electrical power to a set of loads. Spacecraft power system management requires planning and scheduling, since electrical power is a scarce resource in space. The automation of power system management for future spacecraft has been recognized as an important R&D goal. Several automation technologies have emerged, including the use of expert systems for automating human problem solving capabilities, such as rule-based expert systems for fault diagnosis and load scheduling. It is questionable whether current-generation expert system technology is applicable for power system management in space. The objective of ADEPTS (ADvanced Electrical Power management Techniques for Space systems) is to study new techniques for power management automation. These techniques involve integrating current expert system technology with that of parallel and distributed computing, as well as a distributed, object-oriented approach to software design. The focus of the current study is the integration of new procedures for automatically planning and scheduling loads with procedures for performing fault diagnosis and control. The objective is the concurrent execution of both sets of tasks on separate transputer processors, thus adding parallelism to the overall management process.

  1. Assessing the Progress of Trapped-Ion Processors Towards Fault-Tolerant Quantum Computation

    NASA Astrophysics Data System (ADS)

    Bermudez, A.; Xu, X.; Nigmatullin, R.; O'Gorman, J.; Negnevitsky, V.; Schindler, P.; Monz, T.; Poschinger, U. G.; Hempel, C.; Home, J.; Schmidt-Kaler, F.; Biercuk, M.; Blatt, R.; Benjamin, S.; Müller, M.

    2017-10-01

    A quantitative assessment of the progress of small prototype quantum processors towards fault-tolerant quantum computation is a problem of current interest in experimental and theoretical quantum information science. We introduce a necessary and fair criterion for quantum error correction (QEC), which must be achieved in the development of these quantum processors before their sizes are sufficiently big to consider the well-known QEC threshold. We apply this criterion to benchmark the ongoing effort in implementing QEC with topological color codes using trapped-ion quantum processors and, more importantly, to guide the future hardware developments that will be required in order to demonstrate beneficial QEC with small topological quantum codes. In doing so, we present a thorough description of a realistic trapped-ion toolbox for QEC and a physically motivated error model that goes beyond standard simplifications in the QEC literature. We focus on laser-based quantum gates realized in two-species trapped-ion crystals in high-optical aperture segmented traps. Our large-scale numerical analysis shows that, with the foreseen technological improvements described here, this platform is a very promising candidate for fault-tolerant quantum computation.

  2. On the next generation of reliability analysis tools

    NASA Technical Reports Server (NTRS)

    Babcock, Philip S., IV; Leong, Frank; Gai, Eli

    1987-01-01

    The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, permitting an efficient exploration of the design space, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.

  3. Comprehensive Fault Tolerance and Science-Optimal Attitude Planning for Spacecraft Applications

    NASA Astrophysics Data System (ADS)

    Nasir, Ali

    Spacecraft operate in a harsh environment, are costly to launch, and experience unavoidable communication delay and bandwidth constraints. These factors motivate the need for effective onboard mission and fault management. This dissertation presents an integrated framework to optimize science goal achievement while identifying and managing encountered faults. Goal-related tasks are defined by pointing the spacecraft instrumentation toward distant targets of scientific interest. The relative value of science data collection is traded with risk of failures to determine an optimal policy for mission execution. Our major innovation in fault detection and reconfiguration is to incorporate fault information obtained from two types of spacecraft models: one based on the dynamics of the spacecraft and the second based on the internal composition of the spacecraft. For fault reconfiguration, we consider possible changes in both dynamics-based control law configuration and the composition-based switching configuration. We formulate our problem as a stochastic sequential decision problem or Markov Decision Process (MDP). To avoid the computational complexity involved in a fully-integrated MDP, we decompose our problem into multiple MDPs. These MDPs include planning MDPs for different fault scenarios, a fault detection MDP based on a logic-based model of spacecraft component and system functionality, an MDP for resolving conflicts between fault information from the logic-based model and the dynamics-based spacecraft models, and the reconfiguration MDP that generates a policy optimized over the relative importance of the mission objectives versus spacecraft safety. Approximate Dynamic Programming (ADP) methods for the decomposition of the planning and fault detection MDPs are applied. To show the performance of the MDP-based frameworks and ADP methods, a suite of spacecraft attitude planning case studies is described. 
These case studies are used to analyze the content and behavior of computed policies in response to the changes in design parameters. A primary case study is built from the Far Ultraviolet Spectroscopic Explorer (FUSE) mission for which component models and their probabilities of failure are based on realistic mission data. A comparison of our approach with an alternative framework for spacecraft task planning and fault management is presented in the context of the FUSE mission.

  4. Method and system for diagnostics of apparatus

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry (Inventor)

    2012-01-01

    Proposed is a method, implemented in software, for estimating fault state of an apparatus outfitted with sensors. At each execution period the method processes sensor data from the apparatus to obtain a set of parity parameters, which are further used for estimating fault state. The estimation method formulates a convex optimization problem for each fault hypothesis and employs a convex solver to compute fault parameter estimates and fault likelihoods for each fault hypothesis. The highest likelihoods and corresponding parameter estimates are transmitted to a display device or an automated decision and control system. The obtained accurate estimate of fault state can be used to improve safety, performance, or maintenance processes for the apparatus.
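    A minimal sketch of the per-hypothesis estimation step follows. The fault signatures and the plain least-squares fit below are hypothetical stand-ins for the patent's convex programs:

```python
import numpy as np

def estimate_fault(parity, signatures, noise_var=1e-2):
    """Fit a scalar fault magnitude to the parity vector for each fault
    hypothesis by least squares (a convex problem) and score it with a
    Gaussian log-likelihood.  Illustrative stand-in for the convex
    solver described in the abstract."""
    results = {}
    for name, s in signatures.items():
        a = float(s @ parity) / float(s @ s)   # least-squares magnitude
        resid = parity - a * s
        loglik = -0.5 * float(resid @ resid) / noise_var
        results[name] = (a, loglik)
    return results

# hypothetical fault signatures in parity space
sig = {"sensor1": np.array([1.0, 0.0, 1.0]),
       "sensor2": np.array([0.0, 1.0, 1.0])}
# toy parity vector consistent with a 0.8-unit bias on sensor2
parity = 0.8 * sig["sensor2"] + np.array([0.01, -0.02, 0.015])
est = estimate_fault(parity, sig)
best = max(est, key=lambda k: est[k][1])   # most likely hypothesis
```

    Here the "sensor2" hypothesis wins because its one-parameter fit leaves the smallest residual; the patent's method generalizes this idea to convex programs over richer fault parameterizations solved at each execution period.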

  5. Construction of testing facilities and verifying tests of a 22.9 kV/630 A class superconducting fault current limiter

    NASA Astrophysics Data System (ADS)

    Yim, S.-W.; Yu, S.-D.; Kim, H.-R.; Kim, M.-J.; Park, C.-R.; Yang, S.-E.; Kim, W.-S.; Hyun, O.-B.; Sim, J.; Park, K.-B.; Oh, I.-S.

    2010-11-01

    We have constructed and completed the preparation for a long-term operation test of a superconducting fault current limiter (SFCL) in a Korea Electric Power Corporation (KEPCO) test grid. The three-phase SFCL, rated 22.9 kV/630 A, has been connected to the 22.9 kV test grid equipped with reclosers and other protection devices in the Gochang Power Testing Center of KEPCO. The main goals of the test are the verification of SFCL performance and protection coordination studies. A line-commutation type SFCL was fabricated and installed for this project, and the superconducting components were cooled by a cryo-cooler to 77 K in sub-cooled liquid nitrogen pressurized to 3 bar with helium gas. The verification test includes un-manned long-term operation with and without loads, and fault tests. Since the test site is 170 km away from the laboratory, we will adopt un-manned operation with real-time remote monitoring and control over high-speed internet. For the fault tests, we will apply fault currents up to around 8 kArms to the SFCL using an artificial fault generator. The fault tests may allow us not only to confirm the current limiting capability of the SFCL, but also to adjust the SFCL-recloser coordination, for example by resetting over-current relay parameters. This paper describes the construction of the testing facilities and discusses the plans for the verification tests.

  6. dc Arc Fault Effect on Hybrid ac/dc Microgrid

    NASA Astrophysics Data System (ADS)

    Fatima, Zahra

    The advent of distributed energy resources (DER) and the reliability and stability problems of the conventional grid have given rise to the widespread deployment of microgrids. Microgrids provide many advantages by incorporating renewable energy sources and increasing the reliability of the grid by isolating from the main grid in case of an outage. AC microgrids have been installed all over the world, but dc microgrids have been gaining interest due to the advantages they provide over ac microgrids. However, the entire power network backbone is still ac, and dc microgrids require expensive converters to connect to the ac power network. As a result, hybrid ac/dc microgrids are gaining more attention, as they combine the advantages of both ac and dc microgrids, such as direct integration of ac and dc systems with a minimum number of conversions, which increases efficiency by reducing energy losses. Although dc electric systems offer many advantages, such as no synchronization and no reactive power, successful implementation of dc systems requires appropriate protection strategies. One unique protection challenge brought by dc systems is dc arc faults. A dc arc fault is generated when there is a gap in the conductor due to insulation degradation and current arcs across the gap, resulting in an arc with very high temperature. Such a fault, if it goes undetected and is not extinguished, can damage the entire system and cause fires. The purpose of this research is to study the effect of dc arc faults at different locations in the hybrid ac/dc microgrid and provide insight on the reliability of the grid components when impacted by arc faults at various locations in the grid. The impact of dc arc faults at different locations on the performance of the PV array, wind generation, and constant power loads (CPL) interfaced with dc/dc converters is studied. MATLAB/Simulink is used to model the hybrid ac/dc microgrid and the arc fault.

  7. Current Aeromedical Issues in Rotary Wing Operations. (Problemes actuels de medecine aeronautique poses par les operations utilisant des voilures tournantes)

    DTIC Science & Technology

    1999-08-01

    Backache is addressed by examining and improving the design of aircraft cockpits and seats, including the correction of harness design faults. Much effort has gone into improving the lumbar support and comfort of ejection seats. The weight of anterior body armour increases loads on the aircrew, and restraint straps must be draped over flight jackets and survival vests in crash-attenuating pilot and aircrew seats.

  8. Electrical Motor Current Signal Analysis using a Modulation Signal Bispectrum for the Fault Diagnosis of a Gearbox Downstream

    NASA Astrophysics Data System (ADS)

    Haram, M.; Wang, T.; Gu, F.; Ball, A. D.

    2012-05-01

    Motor current signal analysis has for many years been an effective way of monitoring electrical machines themselves. However, little work has been carried out in using this technique for monitoring their downstream equipment because of difficulties in extracting small fault components in the measured current signals. This paper investigates the characteristics of electrical current signals for monitoring the faults from a downstream gearbox using a modulation signal bispectrum (MSB), including phase effects in extracting small modulating components in a noisy measurement. An analytical study is firstly performed to understand the amplitude, frequency and phase characteristics of current signals due to faults. It then explores the performance of MSB analysis in detecting weak modulating components in current signals. An experimental study based on a 10 kW two-stage gearbox, driven by a three-phase induction motor, shows that MSB peaks at different rotational frequencies can be used to quantify the severity of gear tooth breakage and the degree of shaft misalignment. In addition, the type and location of a fault can be recognized based on the frequency at which the change of MSB peak is the highest among different frequencies.
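    The MSB at carrier frequency f_c and modulation frequency f_m is the expectation E[X(f_c+f_m) X(f_c-f_m) X*(f_c)^2], whose phase cancels for genuine sideband pairs so that averaging over segments suppresses random noise. A minimal sketch on a synthetic amplitude-modulated current follows; all parameters are illustrative, not the paper's gearbox data:

```python
import numpy as np

def msb(x, fs, fc, fm, nseg=64, nfft=256):
    """Modulation signal bispectrum estimate at carrier fc and
    modulation fm: B = E[X(fc+fm) X(fc-fm) X*(fc)**2].
    Minimal sketch; frequencies are rounded to the FFT grid."""
    segs = x[: nseg * nfft].reshape(nseg, nfft)
    X = np.fft.rfft(segs * np.hanning(nfft), axis=1)
    df = fs / nfft
    kc, km = round(fc / df), round(fm / df)
    prod = X[:, kc + km] * X[:, kc - km] * np.conj(X[:, kc]) ** 2
    return abs(prod.mean())            # coherent average over segments

# synthetic "motor current": 128 Hz carrier amplitude-modulated at 16 Hz
fs, fc, fm = 1024.0, 128.0, 16.0
t = np.arange(64 * 256) / fs
x = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)
```

    The MSB evaluated at the true modulation frequency (16 Hz) stands far above the value at a frequency with no modulation (for example 24 Hz), which is how weak, fault-induced sidebands are separated from noise.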

  9. Status and Progress of a Fault Current Limiting Hts Cable to BE Installed in the con EDISON Grid

    NASA Astrophysics Data System (ADS)

    Maguire, J.; Folts, D.; Yuan, J.; Henderson, N.; Lindsay, D.; Knoll, D.; Rey, C.; Duckworth, R.; Gouge, M.; Wolff, Z.; Kurtz, S.

    2010-04-01

    In the last decade, significant advances in the performance of second generation (2G) high temperature superconducting wire have made it suitable for commercially viable applications such as electric power cables and fault current limiters. Currently, the U.S. Department of Homeland Security is co-funding the design, development and demonstration of an inherently fault current limiting HTS cable under the Hydra project with American Superconductor and Consolidated Edison. The cable will be approximately 300 m long and is being designed to carry 96 MVA at a distribution level voltage of 13.8 kV. The underground cable will be installed and energized in New York City. The project is led by American Superconductor teamed with Con Edison, Ultera (Southwire and nkt cables joint venture), and Air Liquide. This paper describes the general goals, design criteria, status and progress of the project. Fault current limiting has already been demonstrated in 3 m prototype cables, and test results on a 25 m three-phase cable will be presented. An overview of the concept of a fault current limiting cable and the system advantages of this unique type of cable will be described.

  10. Fault detection and diagnosis for non-Gaussian stochastic distribution systems with time delays via RBF neural networks.

    PubMed

    Yi, Qu; Zhan-ming, Li; Er-chao, Li

    2012-11-01

    A new fault detection and diagnosis (FDD) problem via the output probability density functions (PDFs) for non-Gaussian stochastic distribution systems (SDSs) is investigated. The PDFs can be approximated by radial basis function (RBF) neural networks. Different from conventional FDD problems, the measured information for FDD is the output stochastic distributions, and the stochastic variables involved are not confined to Gaussian ones. An RBF neural network technique is proposed so that the output PDFs can be formulated in terms of the dynamic weightings of the RBF neural network. In this work, a nonlinear adaptive observer-based fault detection and diagnosis algorithm is presented by introducing a tuning parameter so that the residual is as sensitive as possible to the fault. Stability and convergence analysis of the error dynamic system is performed for both fault detection and fault diagnosis. Finally, an illustrative example is given to demonstrate the efficiency of the proposed algorithm; satisfactory results have been obtained. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
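    The basic approximation idea can be sketched as a least-squares fit of RBF weights to a sampled bimodal PDF; the observer and the dynamics of the weights are omitted, and all numbers are illustrative:

```python
import numpy as np

def rbf_basis(y, centers, width):
    # Gaussian radial basis functions evaluated on the output grid
    return np.exp(-((y[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# target output PDF (a bimodal mixture) sampled on a grid
y = np.linspace(-4, 4, 200)
dy = y[1] - y[0]
target = 0.6 * np.exp(-(y - 1.0) ** 2 / 2) + 0.4 * np.exp(-(y + 1.5) ** 2 / 0.5)
target /= target.sum() * dy            # normalize to unit area

# least-squares fit of the weights over 20 Gaussian basis functions;
# in the paper these weights evolve dynamically with the system state
Phi = rbf_basis(y, np.linspace(-4, 4, 20), 0.5)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
err = np.max(np.abs(Phi @ w - target))
```

    A small number of fixed basis functions reproduces the non-Gaussian shape closely, which is why FDD can then be carried out on the low-dimensional weight vector instead of the full PDF.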

  11. Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting

    NASA Technical Reports Server (NTRS)

    Bergman, Eric A.; Solomon, Sean C.

    1987-01-01

    The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg with the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform, which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compressional jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.

  12. A dynamic fault tree model of a propulsion system

    NASA Technical Reports Server (NTRS)

    Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila

    2006-01-01

    We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
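    The effect of the sequence dependency captured by DFT spare gates can be seen in a cold-spare (CSP) gate: solving its small continuous-time Markov chain gives a lower unreliability than an independent static AND of the same two components. The rates and mission time below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

# CTMC for a cold-spare (CSP) gate: the primary fails with rate lp,
# the spare is then activated and fails with rate ls.
# States: 0 = primary up, 1 = spare active, 2 = system failed.
lp, ls, T = 1e-3, 2e-3, 1000.0
Q = np.array([[-lp,  lp, 0.0],
              [0.0, -ls,  ls],
              [0.0, 0.0, 0.0]])
unrel_dft = (np.array([1.0, 0.0, 0.0]) @ expm(Q * T))[2]

# a static AND of two independent components overstates unreliability
# here, because it lets the spare "fail" while still dormant
unrel_static = (1 - np.exp(-lp * T)) * (1 - np.exp(-ls * T))
```

    For these rates the CSP unreliability is about 0.40 versus about 0.55 for the static AND; automatically generating and solving such Markov models for the dynamic gates is the core of Galileo's approach.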

  13. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture

    NASA Astrophysics Data System (ADS)

    Meng, Chunfang

    2017-03-01

    We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and the dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against established results.
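    The adaptive switching idea can be sketched as a simple state machine driven by a Coulomb failure criterion; the thresholds and the healing rule below are illustrative assumptions, not Defmod's actual logic:

```python
# Sketch of the hybrid-solver idea (not Defmod's API): march the
# fault's shear stress quasi-statically and switch the solver to the
# dynamic state whenever the Coulomb failure criterion is met.
def solver_states(tau_series, sigma_n, mu):
    states = []
    dynamic = False
    for tau in tau_series:
        if not dynamic and tau >= mu * sigma_n:
            dynamic = True        # failure criterion met: rupture starts
        elif dynamic and tau < 0.5 * mu * sigma_n:
            dynamic = False       # assumed healing rule: stress dropped
        states.append("dynamic" if dynamic else "quasi-static")
    return states

# shear stress ramps up past the strength mu*sigma_n = 30, then drops
states = solver_states([10, 20, 31, 15, 10], sigma_n=50.0, mu=0.6)
```

    The episode structure (quasi-static loading, a dynamic rupture window, then a return to quasi-static creep) is what allows episodic fault rupture to be modeled efficiently.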

  14. Stress heterogeneity above and within a deep geothermal reservoir: From borehole observations to geomechanical modelling

    NASA Astrophysics Data System (ADS)

    Seithel, Robin; Peters, Max; Lesueur, Martin; Kohl, Thomas

    2017-04-01

    Overpressured reservoir conditions, local stress concentrations or a locally rotated stress field can initiate substantial problems during drilling or reservoir exploitation. Increasing geothermal utilization in the Molasse basin area in S-Germany is faced with such problems in deeply seated reservoir sections. In several wells, radial fluid flow systems are interpreted as highly porous layers. However, in nearby wells a combination of linear fluid flow, local stress heterogeneities and structural geology hints at a rather fault-dominated reservoir (Seithel et al. 2015). Because the stress magnitude, the stress orientation and their coupling to the reservoir response are not well known, we present a THMC model of critical formations and of the geothermal reservoir targeting nearby faults. Several geothermal wells in an area south of Munich are interpreted and integrated into a 30 x 30 km model area. One of the main objectives is to create a thermo-mechanically coupled geomechanical reservoir model in order to understand the coupling between reservoir heterogeneities and stress distributions. To this end, stress analyses of wellbore data and laboratory tests help to calibrate a reliable model. In order to implement the complex geological structure of the studied wedge-shaped foreland basin, an automatic export of lithology, fault and borehole data (e.g. from Petrel) into an FE mesh is used. We present a reservoir-scale model that considers thermo-mechanical effects and analyze their influence on reservoir deformation, fluid flow and stress concentration. We use the currently developed finite element application REDBACK (https://github.com/pou036/redback), inside the MOOSE framework (Poulet et al. 2016). 
We show that mechanical heterogeneities near fault zones and their orientation within the stress field correlate with fracture patterns, interpreted stress heterogeneities and variegated flow systems within the reservoir. REFERENCES: Poulet, T.; Paesold, M.; Veveakis, M. (2016), Multi-Physics Modelling of Fault Mechanics Using REDBACK. A Parallel Open-Source Simulator for Tightly Coupled Problems. Rock Mechanics and Rock Engineering. doi: 10.1007/s00603-016-0927-y. Seithel, R.; Steiner, U.; Müller, B.I.R.; Hecht, Ch.; Kohl, T. (2015), Local stress anomaly in the Bavarian Molasse Basin, Geothermal Energy 3(1), p.77. doi:10.1186/s40517-014-0023-z

  15. Superconducting fault current-limiter with variable shunt impedance

    DOEpatents

    Llambes, Juan Carlos H; Xiong, Xuming

    2013-11-19

    A superconducting fault current-limiter is provided, including a superconducting element configured to resistively or inductively limit a fault current, and one or more variable-impedance shunts electrically coupled in parallel with the superconducting element. The variable-impedance shunt(s) is configured to present a first impedance during a superconducting state of the superconducting element and a second impedance during a normal resistive state of the superconducting element. The superconducting element transitions from the superconducting state to the normal resistive state responsive to the fault current, and responsive thereto, the variable-impedance shunt(s) transitions from the first to the second impedance. The second impedance of the variable-impedance shunt(s) is a lower impedance than the first impedance, which facilitates current flow through the variable-impedance shunt(s) during a recovery transition of the superconducting element from the normal resistive state to the superconducting state, and thus, facilitates recovery of the superconducting element under load.
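    The current-division behaviour the patent describes can be illustrated numerically; the impedance values below are assumptions for illustration only:

```python
# Parallel SFCL element and variable-impedance shunt (illustrative
# numbers, not from the patent).
def branch_currents(i_total, r_sc, z_shunt):
    # simple resistive current divider between the superconducting
    # element and its shunt
    i_sc = i_total * z_shunt / (r_sc + z_shunt)
    return i_sc, i_total - i_sc

# superconducting state: element near 0 ohm, shunt held high-impedance
i_sc, i_sh = branch_currents(1000.0, 1e-4, 10.0)
# normal (quenched) state: element resistive, shunt switched to low impedance
q_sc, q_sh = branch_currents(1000.0, 5.0, 0.5)
```

    In the superconducting state virtually all of the line current flows through the zero-resistance element; after the quench the low-impedance shunt diverts most of the current away from the element, which is what lets the superconductor cool and recover while still under load.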

  16. The Development of NASA's Fault Management Handbook

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine

    2011-01-01

    A disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns. Progress is being made on a number of fronts outside of the Handbook effort: (1) processes, practices and tools being developed at some Centers and institutions; (2) management recognition: Constellation FM roles, Discovery/New Frontiers mission reviews; (3) potential technology solutions: new approaches could avoid many current pitfalls, including (3a) new FM architectures, such as a model-based approach integrated with NASA's MBSE efforts, and (3b) NASA's Office of the Chief Technologist: FM identified in seven of NASA's 14 Space Technology Roadmaps, an opportunity to coalesce and establish a thrust area to progressively develop new FM techniques. The FM Handbook will help ensure that future missions do not encounter the same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start.

  17. TCSC based filtering and improvement of power quality

    NASA Astrophysics Data System (ADS)

    Arulvendhan, K.; Dilli srinivasan, J.; Vinil, M.

    2018-04-01

    The Thyristor Controlled Series Capacitor (TCSC), as a dynamic device capable of increasing the power transfer of transmission lines, can be used to mitigate a range of power system problems. Its benefits can be categorised as steady-state and transient. During a fault, a TCSC can improve power quality by reducing the fault current and helping to keep the voltage as high as possible. In this paper, the application of the TCSC to mitigate one of the vital power quality issues, voltage sag, is investigated. Different operating modes of the TCSC have different influences on the voltage of the bus to which the line equipped with the TCSC is connected. Switching to bypass mode upon the appearance of a fault is a significant feature of the TCSC for improving voltage sag. Simulations on a test network illustrate these effects.

  18. Current Sensor Fault Diagnosis Based on a Sliding Mode Observer for PMSM Driven Systems

    PubMed Central

    Huang, Gang; Luo, Yi-Ping; Zhang, Chang-Fan; Huang, Yi-Shan; Zhao, Kai-Hui

    2015-01-01

    This paper proposes a current sensor fault detection method based on a sliding mode observer for the torque closed-loop control system of interior permanent magnet synchronous motors. First, a sliding mode observer based on the extended flux linkage is built to simplify the motor model, which effectively eliminates the phenomenon of salient poles and the dependence on the direct-axis inductance parameter, and can also be used for real-time calculation of feedback torque. Then a sliding mode current observer is constructed in αβ coordinates to generate the fault residuals of the phase current sensors. The method can accurately identify abrupt gain faults and slowly varying offset faults in faulty sensors in real time, and the generated residuals of the designed fault detection system are not affected by the unknown input or the structure of the observer; the theoretical derivation and the stability proof are concise and simple. RT-LAB real-time simulation is used to build a hardware-in-the-loop simulation model. The simulation and experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:25970258
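    A minimal sliding-mode-observer sketch follows; a scalar first-order current loop stands in for the IPMSM model, and the gain and fault profile are illustrative assumptions:

```python
import numpy as np

# Minimal sketch (not the paper's IPMSM model): a first-order current
# loop i' = (-R*i + u)/L observed by a sliding mode observer with a
# sign-feedback correction.  The measured-vs-observed residual flags a
# current sensor gain fault injected halfway through the run.
R, L, dt = 1.0, 0.01, 1e-4
k = 50.0                    # sliding gain (assumed value)
i = ihat = 0.0
residual = []
for n in range(4000):
    u = 5.0
    i += dt * (-R * i + u) / L                      # true current
    meas = i * (0.7 if n >= 2000 else 1.0)          # gain fault at n = 2000
    ihat += dt * ((-R * ihat + u) / L + k * np.sign(meas - ihat))
    residual.append(abs(meas - ihat))
before = max(residual[1500:2000])   # residual while sensor is healthy
after = max(residual[2000:2500])    # residual after the gain fault
```

    Because the observer is anchored to the model (the known input u and the plant parameters), it refuses to track the corrupted measurement, so the residual jumps when the sensor gain drops and stays near zero otherwise.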

  19. Determining the Impact of Steady-State PV Fault Current Injections on Distribution Protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seuss, John; Reno, Matthew J.; Broderick, Robert Joseph

    This report investigates the fault current contribution from a single large PV system and the impact it has on existing distribution overcurrent protection devices. Assumptions are made about the modeling of the PV system under fault to perform exhaustive steady-state fault analyses throughout distribution feeder models. Each PV interconnection location is tested to determine how the size of the PV system affects the fault current measured by each protection device. This data is then searched for logical conditions that indicate whether a protection device has operated in a manner that will cause more customer outages due to the addition of the PV system. This is referred to as a protection issue, and there are four unique types of issues that have been identified in the study. The PV system size at which any issue occurs is recorded to determine the feeder's PV hosting capacity limitations due to interference with protection settings. The analysis is carried out on six feeder models. The report concludes with a discussion of the prevalence and cause of each protection issue caused by PV system fault current.
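    One effect searched for in such sweeps, reduced current seen by an upstream device as the PV system grows (often called protection blinding), can be sketched with a two-impedance radial feeder. All values are illustrative assumptions, not the report's feeder models:

```python
# Simplified radial feeder (assumed impedances): substation relay,
# a midpoint bus where the PV injects current, and a bolted fault at
# the feeder end.  By superposition, the PV injection raises the
# midpoint voltage and reduces the grid's share of the fault current
# measured at the relay.
def relay_current(i_pv_amps, z_src=0.5, z_line=0.5, v=2400.0):
    i_fault_grid = v / (z_src + z_line)     # fault current with no PV
    # share of the PV injection that flows toward the fault, which
    # displaces grid current at the relay (exact for this circuit)
    return i_fault_grid - i_pv_amps * z_line / (z_src + z_line)

pickup = 2300.0                             # relay pickup setting (A)
sizes = [0, 200, 400, 800]                  # PV fault contribution (A)
blinded = [s for s in sizes if relay_current(s) < pickup]
```

    Sweeping the PV size and flagging the first value at which a device no longer sees its pickup current mirrors, in miniature, the report's exhaustive search for protection issues across interconnection locations.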

  20. Data-Centric Situational Awareness and Management in Intelligent Power Systems

    NASA Astrophysics Data System (ADS)

    Dai, Xiaoxiao

    The rapid development of technology and society has made the current power system far more complicated than ever. The need for big data based situation awareness and management is urgent today. In this dissertation, to respond to this grand challenge, two data-centric power system situation awareness and management approaches are proposed, addressing security problems in the transmission/distribution grids and social benefit augmentation at the distribution-customer level, respectively. To address the security problem in the transmission/distribution grids utilizing big data, the first approach provides a fault analysis solution based on characterization and analytics of synchrophasor measurements. Specifically, an optimal synchrophasor measurement device selection algorithm (OSMDSA) and a matching pursuit decomposition (MPD) based spatial-temporal synchrophasor data characterization method were developed to reduce data volume while preserving comprehensive information for the big data analyses. The weighted Granger causality (WGC) method was then investigated to conduct fault impact causal analysis during system disturbances for fault localization. Numerical results and comparison with other methods demonstrate the effectiveness and robustness of this analytic approach. As social effects are becoming important considerations in power system management, the goal of situation awareness should be expanded to also include achievements in social benefits. The second approach investigates the concept and application of social energy on the University of Denver campus grid to provide management improvement solutions for optimizing social cost. Both a social element (human working productivity cost) and an economic element (electricity consumption cost) are considered in the evaluation of the overall social cost. 
Moreover, power system simulation, numerical experiments for smart building modeling, distribution level real-time pricing and social response to the pricing signals are studied for implementing the interactive artificial-physical management scheme.

  1. Apparatus including a plurality of spaced transformers for locating short circuits in cables

    NASA Technical Reports Server (NTRS)

    Cason, R. L.; Mcstay, J. J. (Inventor)

    1978-01-01

    A cable fault locator is described for sensing faults such as short circuits in power cables. The apparatus includes a plurality of current transformers strategically located along a cable. Trigger circuits are connected to each of the current transformers for placing a resistor in series with a resistive element responsive to an abnormally high current flowing through that portion of the cable. By measuring the voltage drop across the resistive element, the location of the fault can be determined.

  2. Accounting for uncertain fault geometry in earthquake source inversions - I: theory and simplified application

    NASA Astrophysics Data System (ADS)

    Ragon, Théa; Sladen, Anthony; Simons, Mark

    2018-05-01

    The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. 
Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes.

  3. Research on Mechanical Fault Prediction Algorithm for Circuit Breaker Based on Sliding Time Window and ANN

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua; Rong, Mingzhe; Qiu, Juan; Liu, Dingxin; Su, Biao; Wu, Yi

    A new type of algorithm for predicting the mechanical faults of a vacuum circuit breaker (VCB) based on an artificial neural network (ANN) is proposed in this paper. There are two types of mechanical faults in a VCB: operation mechanism faults and tripping circuit faults. An angle displacement sensor is used to measure the main axle angle displacement which reflects the displacement of the moving contact, to obtain the state of the operation mechanism in the VCB, while a Hall current sensor is used to measure the trip coil current, which reflects the operation state of the tripping circuit. Then an ANN prediction algorithm based on a sliding time window is proposed in this paper and successfully used to predict mechanical faults in a VCB. The research results in this paper provide a theoretical basis for the realization of online monitoring and fault diagnosis of a VCB.
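
    The sliding-time-window idea can be sketched as follows: a monitored waveform feature is split into overlapping windows, each paired with the value that follows it, to form training samples for a predictor. The series values and window width below are illustrative, and a least-squares linear predictor stands in for the paper's ANN.

```python
import numpy as np

def sliding_windows(series, width):
    """Split a 1-D series into (window, next value) training pairs."""
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = np.array(series[width:])
    return X, y

# Illustrative monitored feature, e.g. a trip-coil peak current drifting
# upward as the operating mechanism degrades.
series = [1.00, 1.02, 1.05, 1.09, 1.14, 1.20, 1.27, 1.35]
X, y = sliding_windows(series, width=3)
print(X.shape)  # (5, 3)

# Stand-in for the ANN: a least-squares linear predictor over each window.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
next_val = float(np.r_[series[-3:], 1.0] @ w)
```

    The paper's ANN would replace the linear fit, but the window construction, which is what makes the prediction "sliding", is the same.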

  4. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the basis for future regulations about the related hazard, similarly to other countries (e.g. the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e. faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria to assess fault capability for the Italian territory. As for the chronology, a review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, detailed analysis of the large number of works dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g. fault plane exposure) or on indirect investigations (geophysical data) are not sufficient, or even unreliable, for establishing the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. A test area in which active and capable faults can first be mapped with such a classical but still effective methodological approach is the central Apennines. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). 
Time intervals to assess active and capable faults for engineering practices in Italy. Eng. Geol., 139/140, 50-65.

  5. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  6. Problems related to the integration of fault tolerant aircraft electronic systems

    NASA Technical Reports Server (NTRS)

    Bannister, J. A.; Adlakha, V.; Trivedi, K.; Alspaugh, T. A., Jr.

    1982-01-01

    Problems related to the design of the hardware for an integrated aircraft electronic system are considered. Taxonomies of concurrent systems are reviewed and a new taxonomy is proposed. An informal methodology intended to identify feasible regions of the taxonomic design space is described. Specific tools are recommended for use in the methodology. Based on the methodology, a preliminary strawman integrated fault tolerant aircraft electronic system is proposed. Next, problems related to the programming and control of integrated aircraft electronic systems are discussed. Issues of system resource management, including the scheduling and allocation of real time periodic tasks in a multiprocessor environment, are treated in detail. The role of software design in integrated fault tolerant aircraft electronic systems is discussed. Conclusions and recommendations for further work are included.

  7. Method and system for controlling a permanent magnet machine during fault conditions

    DOEpatents

    Krefta, Ronald John; Walters, James E.; Gunawan, Fani S.

    2004-05-25

    Method and system for controlling a permanent magnet machine driven by an inverter is provided. The method allows for monitoring a signal indicative of a fault condition. The method further allows for generating during the fault condition a respective signal configured to maintain a field weakening current even though electrical power from an energy source is absent during said fault condition. The level of the maintained field-weakening current enables the machine to operate in a safe mode so that the inverter is protected from excess voltage.

  8. Integral Sensor Fault Detection and Isolation for Railway Traction Drive.

    PubMed

    Garramiola, Fernando; Del Olmo, Jon; Poza, Javier; Madina, Patxi; Almandoz, Gaizka

    2018-05-13

    Due to the increasing importance of reliability and availability of electric traction drives in railway applications, early detection of faults has become an important goal for railway traction drive manufacturers. Sensor faults are important sources of failures. Among the different fault diagnosis approaches, an integral diagnosis strategy for sensors in traction drives is presented in this article. The strategy is composed of an observer-based approach for the direct current (DC)-link voltage and catenary current sensors, a frequency analysis approach for the motor phase current sensors, and a hardware redundancy solution for the speed sensors. None of them requires any hardware change in the actual traction drive. All the fault detection and isolation approaches have been validated on a hardware-in-the-loop platform comprising a real-time simulator and a commercial traction control unit for a tram. In comparison to safety-critical systems in aerospace applications, railway applications do not need instantaneous detection, and the diagnosis is validated over a short time period for a reliable decision. Combining the different approaches and existing hardware redundancy, an integral fault diagnosis solution is provided to detect and isolate faults in all the sensors installed in the traction drive.

  9. Integral Sensor Fault Detection and Isolation for Railway Traction Drive

    PubMed Central

    del Olmo, Jon; Poza, Javier; Madina, Patxi; Almandoz, Gaizka

    2018-01-01

    Due to the increasing importance of reliability and availability of electric traction drives in railway applications, early detection of faults has become an important goal for railway traction drive manufacturers. Sensor faults are important sources of failures. Among the different fault diagnosis approaches, an integral diagnosis strategy for sensors in traction drives is presented in this article. The strategy is composed of an observer-based approach for the direct current (DC)-link voltage and catenary current sensors, a frequency analysis approach for the motor phase current sensors, and a hardware redundancy solution for the speed sensors. None of them requires any hardware change in the actual traction drive. All the fault detection and isolation approaches have been validated on a hardware-in-the-loop platform comprising a real-time simulator and a commercial traction control unit for a tram. In comparison to safety-critical systems in aerospace applications, railway applications do not need instantaneous detection, and the diagnosis is validated over a short time period for a reliable decision. Combining the different approaches and existing hardware redundancy, an integral fault diagnosis solution is provided to detect and isolate faults in all the sensors installed in the traction drive. PMID:29757251

  10. A Game Theoretic Fault Detection Filter

    NASA Technical Reports Server (NTRS)

    Chung, Walter H.; Speyer, Jason L.

    1995-01-01

    The fault detection process is modelled as a disturbance attenuation problem. The solution to this problem is found via differential game theory, leading to an H(sub infinity) filter which bounds the transmission of all exogenous signals save the fault to be detected. For a general class of linear systems which includes some time-varying systems, it is shown that this transmission bound can be taken to zero by simultaneously bringing the sensor noise weighting to zero. Thus, in the limit, a complete transmission block can be achieved, making the game filter into a fault detection filter. When we specialize this result to time-invariant systems, it is found that the detection filter attained in the limit is identical to the well-known Beard-Jones fault detection filter. That is, all fault inputs other than the one to be detected (the "nuisance faults") are restricted to an invariant subspace which is unobservable to a projection on the output. For time-invariant systems, it is also shown that in the limit, the order of the state-space and the game filter can be reduced by factoring out the invariant subspace. The result is a lower-dimensional filter which can observe only the fault to be detected. A reduced-order filter can also be generated for time-varying systems, though the computational overhead may be intensive. An example given at the end of the paper demonstrates the effectiveness of the filter as a tool for fault detection and identification.

  11. Integration of HTS Cables in the Future Grid of the Netherlands

    NASA Astrophysics Data System (ADS)

    Zuijderduin, R.; Chevtchenko, O.; Smit, J. J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future transmission grid will carry electrical power generated by decentralized renewable sources, together with large-scale generation units located in the coastal region. Electrical power will thus have to be distributed and transmitted over longer distances from generation to end user. Potential grid issues, such as the amount of distributed power, grid stability, and electrical loss dissipation, merit particular attention. High temperature superconductors (HTS) can play an important role in solving these grid problems. The advantages of integrating HTS components at transmission voltages are numerous: more transmittable power together with lower emissions, intrinsic fault current limiting capability, lower AC loss, better control of power flow, reduced footprint, less magnetic field emission, etc. The main obstacle at present is the relatively high price of HTS conductor. However, as the price goes down, initial market penetration of several HTS components (e.g. cables, fault current limiters) is expected by the year 2015. In the full paper we present selected ways to integrate EHV AC HTS cables depending on a particular future grid scenario in the Netherlands.

  12. Advanced Protection & Service Restoration for FREEDM Systems

    NASA Astrophysics Data System (ADS)

    Singh, Urvir

    A smart electric power distribution system (FREEDM system) that incorporates DERs (distributed energy resources), SSTs (solid state transformers, which can limit the fault current to two times the rated current), and RSC (reliable and secure communication) capabilities has been studied in this work in order to develop appropriate protection and service restoration techniques for it. First, a solution is proposed that enables conventional protective devices to provide effective protection for FREEDM systems. Results show that although this scheme provides the required protection, it can be quite slow. Using the FREEDM system's communication capabilities, a communication-assisted overcurrent (O/C) protection scheme is proposed, and results show that by using communication (blocking signals), very fast operating times are achieved, thereby mitigating the problem of the conventional O/C scheme. Using the FREEDM system's DGI (distributed grid intelligence) capability, an automated FLISR (fault location, isolation, and service restoration) scheme is proposed that is based on the concept of 'software agents' and uses less data than conventional centralized approaches. Test results illustrate that this scheme is able to provide a globally optimal system reconfiguration for service restoration.
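
    The blocking-signal logic of a communication-assisted O/C scheme can be sketched as a small decision function. The delay values below are hypothetical; the point is only that an unblocked relay trips fast, while a relay that receives a blocking signal from a downstream relay falls back to a coordinated backup time.

```python
def relay_decision(sees_fault, blocked, fast_delay=0.05, backup_delay=0.5):
    """Blocking-based O/C logic: trip fast when the relay sees the fault
    and no downstream relay has sent a blocking signal; otherwise wait
    for the coordinated backup time. Returns the trip delay in seconds,
    or None when no trip is issued."""
    if not sees_fault:
        return None
    return backup_delay if blocked else fast_delay

# Fault downstream of both relays: the downstream relay is unblocked and
# trips fast; the upstream relay receives a blocking signal and only
# acts as time-delayed backup.
downstream_trip = relay_decision(sees_fault=True, blocked=False)
upstream_trip = relay_decision(sees_fault=True, blocked=True)
print(downstream_trip, upstream_trip)  # 0.05 0.5
```

    This is why the communication-assisted scheme is much faster than conventional time-graded O/C coordination: the nearest relay no longer needs a long intentional delay to stay selective.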

  13. Observer-Based Adaptive Fault-Tolerant Tracking Control of Nonlinear Nonstrict-Feedback Systems.

    PubMed

    Wu, Chengwei; Liu, Jianxing; Xiong, Yongyang; Wu, Ligang

    2017-06-28

    This paper studies an output-based adaptive fault-tolerant control problem for nonlinear systems in nonstrict-feedback form. Neural networks are utilized to identify the unknown nonlinear characteristics in the system. An observer and a general fault model are constructed to estimate the unavailable states and describe the fault, respectively. Adaptive parameters are constructed to overcome the difficulties in the design process for nonstrict-feedback systems. Meanwhile, the dynamic surface control technique is introduced to avoid the "explosion of complexity" problem. Furthermore, based on the adaptive backstepping control method, an output-based adaptive neural tracking control strategy is developed for the considered system against actuator faults, which can ensure that all the signals in the resulting closed-loop system are bounded, and that the system output can be regulated to follow the given reference signal with a small error. Finally, simulation results are provided to validate the effectiveness of the control strategy proposed in this paper.

  14. Final Technical Report Recovery Act: Online Nonintrusive Condition Monitoring and Fault Detection for Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei Qiao

    2012-05-29

    The penetration of wind power has increased greatly over the last decade in the United States and across the world. The U.S. wind power industry installed 1,118 MW of new capacity in the first quarter of 2011 alone and entered the second quarter with another 5,600 MW under construction. By 2030, wind energy is expected to provide 20% of U.S. electricity needs. As the number of wind turbines continues to grow, the need for effective condition monitoring and fault detection (CMFD) systems becomes increasingly important [3]. Online CMFD is an effective means not only of improving the reliability, capacity factor, and lifetime of wind turbines, but also of reducing their downtime, energy loss, and operation and maintenance (O&M) costs. The goal of this project is to develop novel online nonintrusive CMFD technologies for wind turbines. The proposed technologies use only the current measurements that are already used by the control and protection system of a wind turbine generator (WTG); no additional sensors or data acquisition devices are needed. Current signals are reliable and easily accessible from the ground without intruding on the WTGs, which are situated on high towers and installed in remote areas. Therefore, current-based CMFD techniques have great economic benefits and the potential to be adopted by the wind energy industry. 
Specifically, the following objectives and results have been achieved in this project: (1) Analyzed the effects of faults in a WTG on the generator currents of the WTG operating at variable rotating speed conditions from the perspective of amplitude and frequency modulations of the current measurements; (2) Developed effective amplitude and frequency demodulation methods for appropriate signal conditioning of the current measurements to improve the accuracy and reliability of wind turbine CMFD; (3) Developed a 1P-invariant power spectrum density (PSD) method for effective signature extraction of wind turbine faults with characteristic frequencies in the current or current demodulated signals, where 1P stands for the shaft rotating frequency of a WTG; (4) Developed a wavelet filter for effective signature extraction of wind turbine faults without characteristic frequencies in the current or current demodulated signals; (5) Developed an effective adaptive noise cancellation method as an alternative to the wavelet filter method for signature extraction of wind turbine faults without characteristic frequencies in the current or current demodulated signals; (6) Developed a statistical analysis-based impulse detection method for effective fault signature extraction and evaluation of WTGs based on the 1P-invariant PSD of the current or current demodulated signals; (7) Validated the proposed current-based wind turbine CMFD technologies through extensive computer simulations and experiments for small direct-drive WTGs without gearboxes; and (8) Showed, through extensive experiments for small direct-drive WTGs, that the performance of the proposed current-based wind turbine CMFD technologies is comparable to traditional vibration-based methods. The proposed technologies have been successfully applied for detection of major failures in blades, shafts, bearings, and generators of small direct-drive WTGs. 
The proposed technologies can be easily integrated into existing wind turbine control, protection, and monitoring systems and can be implemented remotely from the wind turbines being monitored. The proposed technologies provide an alternative to vibration-sensor-based CMFD. This will reduce the cost and hardware complexity of wind turbine CMFD systems. The proposed technologies can also be combined with vibration-sensor-based methods to improve the accuracy and reliability of wind turbine CMFD systems. When there are problems with sensors, the proposed technologies will ensure proper CMFD for the wind turbines, including their sensing systems. In conclusion, the proposed technologies offer an effective means to achieve condition-based smart maintenance for wind turbines and have a great potential to be adopted by the wind energy industry due to their almost no-cost, nonintrusive features. Although only validated for small direct-drive wind turbines without gearboxes, the proposed technologies are also applicable to CMFD of large-size wind turbines with and without gearboxes. However, additional investigations are recommended in order to apply the proposed technologies to those large-size wind turbines.
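
    The amplitude-demodulation step of such current-based monitoring can be sketched with a numpy-only Hilbert transform: the envelope of a simulated generator current reveals a fault-related modulation frequency. The 60 Hz carrier, 7 Hz modulation, and all signal parameters below are illustrative assumptions, not values from the project.

```python
import numpy as np

def envelope(x):
    """Amplitude demodulation via the analytic signal (an FFT-based
    Hilbert transform; assumes an even-length record)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

fs = 2000.0
t = np.arange(0, 2.0, 1 / fs)           # 4000 samples, 0.5 Hz resolution
# Hypothetical generator current: a 60 Hz electrical component whose
# amplitude is modulated at 7 Hz by a simulated mechanical fault.
i = (1.0 + 0.2 * np.sin(2 * np.pi * 7 * t)) * np.sin(2 * np.pi * 60 * t)

env = envelope(i) - envelope(i).mean()  # drop the DC term
spectrum = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
fault_freq = freqs[np.argmax(spectrum)]
print(fault_freq)  # 7.0
```

    The project's methods go further (1P-invariant PSD, wavelet filtering, adaptive noise cancellation), but all of them start from a demodulated signal of this kind.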

  15. A Dynamic Finite Element Method for Simulating the Physics of Faults Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic finite element method using a novel high-level scripting language to describe the physical equations, boundary conditions, and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time using a Verlet scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems to the finite element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. To illustrate the new finite element model, single- and multi-fault simulation examples are presented.

  16. Detection of High-impedance Arcing Faults in Radial Distribution DC Systems

    NASA Technical Reports Server (NTRS)

    Gonzalez, Marcelo C.; Button, Robert M.

    2003-01-01

    High voltage, low current arcing faults in DC power systems have been researched at the NASA Glenn Research Center in order to develop a method for detecting these 'hidden faults', in situ, before damage to cables and components from localized heating can occur. A simple arc generator was built, and high-speed and low-speed monitoring of the voltage and current waveforms, respectively, has shown that these high-impedance faults produce a significant increase in high-frequency content in the DC bus voltage and low-frequency content in the DC system current. Based on these observations, an algorithm was developed using a high-speed data acquisition system that was able to accurately detect high-impedance arcing events induced in a single-line system based on the frequency content of the DC bus voltage or the system current. Next, a multi-line, radial distribution system was researched to see if the arc location could be determined from the voltage information when multiple 'detectors' are present in the system. It was shown that a small, passive LC filter was sufficient to reliably isolate the fault to a single line in a multi-line distribution system. Of course, no modification is necessary if only the current information is used to locate the arc. However, data shows that it might be necessary to monitor both the system current and the bus voltage to improve the chances of detecting and locating high-impedance arcing faults.
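
    The frequency-content criterion can be sketched by comparing the high-band energy fraction of a healthy DC bus record with one carrying broadband arc noise. The sampling rate, cut-off frequency, noise model, and detection ratio below are illustrative assumptions, not the values used in the Glenn experiments.

```python
import numpy as np

def highband_energy(v, fs, f_cut):
    """Fraction of a record's energy above f_cut (simple FFT estimate)."""
    spec = np.abs(np.fft.rfft(v - np.mean(v))) ** 2
    f = np.fft.rfftfreq(len(v), 1 / fs)
    return float(spec[f >= f_cut].sum() / spec.sum())

fs = 100_000.0                           # assumed 100 kHz acquisition
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(0)
clean = 120.0 + 0.1 * np.sin(2 * np.pi * 120 * t)    # healthy bus w/ ripple
arcing = clean + 2.0 * rng.standard_normal(len(t))   # broadband arc noise

ratio_clean = highband_energy(clean, fs, f_cut=5_000.0)
ratio_arc = highband_energy(arcing, fs, f_cut=5_000.0)
arc_detected = ratio_arc > 10 * ratio_clean and ratio_arc > 0.1
```

    A healthy bus concentrates its energy at DC and low-frequency ripple, so its high-band fraction is essentially zero, while the arcing record's broadband content pushes the fraction up by orders of magnitude.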

  17. Frequency behavior of the residual current devices

    NASA Astrophysics Data System (ADS)

    Erdei, Z.; Horgos, M.; Lung, C.; Pop-Vadean, A.; Muresan, R.

    2017-01-01

    This paper presents an experimental investigation into the operating characteristic of residual current devices in the presence of a residual current at a frequency of 60 Hz. In order to protect persons and equipment effectively, residual current devices are made to be very sensitive to the ground fault current or the touch current. Because of this high sensitivity, residual current circuit breakers are prone to tripping under no-fault conditions.

  18. New insights on the recent and current deformation in Central-Eastern Iran, derived from a combined tectonic and GPS analysis

    NASA Astrophysics Data System (ADS)

    Walpersdorf, A.; Manighetti, I.; Tavakoli, F.; Mousavi, Z.; Vergnolle, M.; Jadidi, A.; Hatzfeld, D.; Aghamohammadi, A.; Djamour, Y.; Nankali, H.; Sedighi, M.; Lutz, L.

    2012-04-01

    We have studied the recent to current deformation in Iran, and especially Central-Eastern Iran, by tightly combining tectonic and GPS analyses. Based on morphotectonic analyses of satellite images, we have identified and mapped the major active faults that dissect the entire ≈ 4500 km × 2500 km region that extends from Eastern Turkey to Western Afghanistan/Pakistan and hence encompasses Iran, emphasizing their large-scale organization and kinematic relationships. In doing so, we have identified the major fault systems that control the tectonics of Iran, especially in its central-eastern part. We have also analyzed the 11-year GPS record from the 92 stations deployed in central-eastern Iran in the framework of the Iranian-French collaboration. The GPS analysis reveals that all major faults identified as seismogenic in central-eastern Iran are indeed currently active and slipping at fast rates. The northerly-trending East Lut, West Lut, Kuhbanan, Anar and Deshir faults have current right-lateral slip rates of 5.7 ± 0.9, 4.7 ± 1.7, 2.3 ± 1.9, 2.7 ± 1.3 and 0.5 ± 0.2 mm/yr, respectively, while the ≈ EW-trending Doruneh and Sedeh faults have current left-lateral slip rates of 3.1 ± 1.8 and 1.7 ± 0.2 mm/yr, respectively. The large regions bounded by the northerly-striking faults behave as fairly rigid blocks that are all found to move both towards the N13°E ARA-EUR convergence direction and towards the WNW, at fast rates, in the ranges 6.5-12.5 and 1-5 mm/yr, respectively. Combined with the available data on the studied faults, our tectonic and geodetic results suggest that a bookshelf faulting strain transfer mechanism has been and is still operating in central-eastern Iran. 
The coeval dextral motion of the two major, overlapping, North Anatolian-Main Recent and Caucasus-Kopeh Dagh-Herat fault lines that embrace central-eastern Iran, induces a large-scale regional sinistral shear on either side of the region, which forces the northerly-trending right-lateral faults and the blocks they bound to rotate counterclockwise in the horizontal plane. The faults and blocks have been rotating over the last ≈12 Ma, at rates reaching 1.8 °/Ma, and are still currently rotating at about these rates. We estimate the sinistral shear imposed at both edges of the central-eastern rotating zone to be in the range 2.2 - 7.2 mm/yr. The Doruneh fault likely formed more recently than the other central-eastern Iranian faults, as the imposed sinistral shear was evolving from diffuse to more localized. As a consequence, the western half of the Doruneh fault currently accommodates a significant part of the imposed regional sinistral shear. Our study thus shows that the recent to current tectonics of central-eastern Iran is not only controlled by the ARA-EUR convergence, but also by the large-scale kinematics of the adjacent plates. We finally discuss the implications of the novel strain model that we propose on the seismicity that occurs in Central-Eastern Iran.

  19. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the flocking behavior of birds and fish. In this paper we introduce and apply this method to the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
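    The PSO update rule itself is compact. The following minimal sketch is illustrative only (not the author's implementation); the quadratic misfit and the "true" fault parameters (depth 2.0, dip 0.5) are invented stand-ins for the gravity-anomaly residual:

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: minimizes `objective` over `bounds`."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best position
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    gbest_val = min(pbest_val)                       # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy misfit standing in for the gravity-anomaly residual: the "true"
# fault parameters (depth = 2.0, dip = 0.5 rad) minimize the squared error.
random.seed(1)
misfit = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2
best, best_val = pso(misfit, [(0.0, 10.0), (0.0, 1.57)])
```

In a real inversion the objective would be the misfit between the observed gravity anomaly and the anomaly computed from a candidate fault geometry.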

  20. A multiscale model of distributed fracture and permeability in solids in all-round compression

    NASA Astrophysics Data System (ADS)

    De Bellis, Maria Laura; Della Vecchia, Gabriele; Ortiz, Michael; Pandolfi, Anna

    2017-07-01

    We present a microstructural model of permeability in fractured solids, where the fractures are described in terms of recursive families of parallel, equidistant cohesive faults. Faults originate upon the attainment of tensile or shear strength in the undamaged material. Secondary faults may form in a hierarchical organization, creating a complex network of connected fractures that modify the permeability of the solid. The undamaged solid may possess initial porosity and permeability. The particular geometry of the superposed micro-faults lends itself to an explicit analytical quantification of the porosity and permeability of the damaged material. The model is the finite-kinematics version of a recently proposed porous material model, applied with success to the simulation of laboratory tests and excavation problems [De Bellis, M. L., Della Vecchia, G., Ortiz, M., Pandolfi, A., 2016. A linearized porous brittle damage material model with distributed frictional-cohesive faults. Engineering Geology 215, 10-24. 10.1016/j.enggeo.2016.10.010]. The extension improves on the linearized-kinematics version for problems characterized by large deformations localized in narrow zones, while the remainder of the solid undergoes small deformations, as typically observed in soil and rock mechanics problems. The approach is particularly appealing as a means of modeling a wide scope of engineering problems, ranging from the prevention of water or gas outburst into underground mines, to the prediction of the integrity of reservoirs for CO2 sequestration or hazardous waste storage, to hydraulic fracturing processes.
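    The model's analytical quantification is more involved, but the classical parallel-plate (cubic-law) estimate for a family of parallel, equidistant fractures conveys the basic idea of how fault geometry controls permeability; the aperture and spacing values below are illustrative:

```python
def fracture_permeability(aperture, spacing):
    """Cubic-law permeability (m^2) of a family of parallel, equidistant
    fractures of hydraulic aperture `aperture` (m) spaced `spacing` (m) apart."""
    return aperture ** 3 / (12.0 * spacing)

# 100-micron fractures spaced 1 m apart dominate the flow through a tight matrix:
k_fracture = fracture_permeability(1e-4, 1.0)
```

Because permeability scales with the cube of the aperture, even sparse micro-faults can raise the effective permeability of an otherwise tight solid by orders of magnitude.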

  1. Finite element models of earthquake cycles in mature strike-slip fault zones

    NASA Astrophysics Data System (ADS)

    Lynch, John Charles

    The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was benchmarked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, there are two main questions posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal-mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great-earthquake-producing faults separated by a freely-slipping shorter fault. The model inputs of lower-crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower-crustal viscosity of 10¹⁸ Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in the cases where the faults have asymmetric breaking strengths. 
These models imply that postseismic stress transfer over hundreds of kilometers may play a significant role in the variability of earthquake repeat times. Specifically, small perturbations in the model parameters can lead to results similar to such observed phenomena as earthquake clustering and disruptions to so-called "characteristic" earthquake cycles.
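    The stick-slip behaviour that makes such faults numerically challenging can be caricatured with a zero-dimensional spring-slider analogue (an illustration only, not the dissertation's finite element model; all parameter values are arbitrary): stress loads at a constant rate, and when it reaches a breaking strength it drops to a residual level, producing a periodic earthquake cycle.

```python
# 0-D stick-slip analogue: constant tectonic loading, instantaneous stress
# drop to a residual level when the breaking strength is reached.
def earthquake_cycle(loading_rate=1.0, strength=10.0, residual=2.0,
                     dt=0.01, t_end=50.0):
    stress, t, events = 0.0, 0.0, []
    while t < t_end:
        stress += loading_rate * dt   # interseismic loading
        if stress >= strength:        # coseismic stress drop
            events.append(t)
            stress = residual
        t += dt
    return events

events = earthquake_cycle()
repeat_times = [b - a for a, b in zip(events, events[1:])]
# Repeat time = (strength - residual) / loading_rate = 8.0 time units.
```

In this toy version the repeat time is perfectly regular; the dissertation's point is precisely that viscoelastic stress transfer between interacting faults perturbs this regularity.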

  2. 30 CFR 75.824 - Electrical protection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... transformer and over-current relay in the neutral grounding resistor circuit. (vi) A single window-type current transformer that encircles all three-phase conductors must be used to activate the ground-fault... current transformer. (vii) A test circuit for the ground-fault device must be provided. The test circuit...

  3. 30 CFR 75.824 - Electrical protection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... transformer and over-current relay in the neutral grounding resistor circuit. (vi) A single window-type current transformer that encircles all three-phase conductors must be used to activate the ground-fault... current transformer. (vii) A test circuit for the ground-fault device must be provided. The test circuit...

  4. 30 CFR 75.824 - Electrical protection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... transformer and over-current relay in the neutral grounding resistor circuit. (vi) A single window-type current transformer that encircles all three-phase conductors must be used to activate the ground-fault... current transformer. (vii) A test circuit for the ground-fault device must be provided. The test circuit...

  5. 30 CFR 75.824 - Electrical protection.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... transformer and over-current relay in the neutral grounding resistor circuit. (vi) A single window-type current transformer that encircles all three-phase conductors must be used to activate the ground-fault... current transformer. (vii) A test circuit for the ground-fault device must be provided. The test circuit...

  6. 30 CFR 75.824 - Electrical protection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... transformer and over-current relay in the neutral grounding resistor circuit. (vi) A single window-type current transformer that encircles all three-phase conductors must be used to activate the ground-fault... current transformer. (vii) A test circuit for the ground-fault device must be provided. The test circuit...

  7. Automatic translation of digraph to fault-tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    1992-01-01

    The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
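    For the acyclic case, the core of such a translation is compact. The sketch below is illustrative only — the translator described above additionally computes minimal cut sets to break cycles and uses an object-oriented model representation — and the example digraph and node names are hypothetical:

```python
# Minimal sketch: translate an acyclic failure-propagation digraph into a
# nested fault tree. Each node becomes an OR gate over its own basic event
# and the subtrees of the nodes whose failures propagate into it.
def digraph_to_fault_tree(predecessors, node):
    inputs = [f"basic:{node}"]
    inputs += [digraph_to_fault_tree(predecessors, p)
               for p in predecessors.get(node, [])]
    return {"gate": "OR", "event": node, "inputs": inputs}

# Hypothetical digraph: pump failure or valve failure propagates to flow loss.
preds = {"flow_loss": ["pump", "valve"]}
tree = digraph_to_fault_tree(preds, "flow_loss")
```

A digraph with cycles cannot be expanded this way directly, which is why the translator spends most of its computation finding minimal cut sets for digraph nodes before emitting the tree.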

  8. Research and application of measuring system for disconnector for surge arresters

    NASA Astrophysics Data System (ADS)

    Chen, Ziyu; Luo, Xiaoyong

    2017-10-01

    This paper analyzes the current situation of the use of disconnectors for surge arresters, and introduces their common faults and causes. Based on the LabVIEW programming environment, a low-cost and convenient measurement system is developed to solve the problems of test triggering and waveform acquisition, addressing the lack of electrical detection methods available to small manufacturing enterprises in China. Finally, the system is applied to actual production, and the performance of the product is improved.

  9. What electrical measurements can say about changes in fault systems.

    PubMed Central

    Madden, T R; Mackie, R L

    1996-01-01

    Earthquake zones in the upper crust are usually more conductive than the surrounding rocks, and electrical geophysical measurements can be used to map these zones. Magnetotelluric (MT) measurements across fault zones that are parallel to the coast and not too far away can also give some important information about the lower crustal zone. This is because the long-period electric currents coming from the ocean gradually leak into the mantle, but the lower crust is usually very resistive and very little leakage takes place. If a lower crustal zone is less resistive it will be a leakage zone, and this can be seen because the MT phase will change as the ocean currents leave the upper crust. The San Andreas Fault is parallel to the ocean boundary and close enough to have a lot of extra ocean currents crossing the zone. The Loma Prieta zone, after the earthquake, showed a lot of ocean electric current leakage, suggesting that the lower crust under the fault zone was much more conductive than normal. It is hard to believe that water, which is responsible for the conductivity, had time to get into the lower crustal zone, so it was probably always there, but not well connected. If this is true, then the poorly connected water would be at a pressure close to the rock pressure, and it may play a role in modifying the fluid pressure in the upper crust fault zone. We also have telluric measurements across the San Andreas Fault near Palmdale from 1979 to 1990, and beginning in 1985 we saw changes in the telluric signals on the fault zone and east of the fault zone compared with the signals west of the fault zone. These measurements were probably seeing a better connection of the lower crust fluids taking place, and this may result in a fluid flow from the lower crust to the upper crust. This could be a factor in changing the strength of the upper crust fault zone. PMID:11607664

  10. Methods for locating ground faults and insulation degradation condition in energy conversion systems

    DOEpatents

    Agamy, Mohamed; Elasser, Ahmed; Galbraith, Anthony William; Harfman Todorovic, Maja

    2015-08-11

    Methods for determining a ground fault or insulation degradation condition within energy conversion systems are described. A method for determining a ground fault within an energy conversion system may include, in part, a comparison of a baseline waveform of differential current to a waveform of differential current during operation for a plurality of DC current-carrying conductors in an energy conversion system. A method for determining insulation degradation within an energy conversion system may include, in part, a comparison of baseline frequency spectra of differential current to the frequency spectra of a differential-current transient at start-up for a plurality of DC current-carrying conductors in an energy conversion system. In one embodiment, the energy conversion system may be a photovoltaic system.
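    The baseline-comparison idea for the ground-fault case can be sketched as follows; the waveforms and the threshold value are invented for illustration (the patent's methods define the comparison for a plurality of conductors and do not specify these numbers):

```python
# Sketch of the baseline-comparison idea: a sustained deviation of the
# operating differential-current waveform from the recorded healthy baseline
# suggests a ground-fault leakage path.
def ground_fault_suspected(baseline, operating, threshold=0.5):
    """Compare two equal-length differential-current waveforms (amps)."""
    deviation = max(abs(b - o) for b, o in zip(baseline, operating))
    return deviation > threshold

baseline = [0.01, -0.02, 0.015, -0.01]   # healthy differential current
faulted  = [0.9, 0.85, 0.95, 0.88]       # leakage adds a large offset
assert ground_fault_suspected(baseline, faulted)
assert not ground_fault_suspected(baseline, baseline)
```

The insulation-degradation method works analogously but compares frequency spectra of the start-up transient rather than time-domain waveforms.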

  11. Improved power control using optimal adjustable coefficients for three-phase photovoltaic inverter under unbalanced grid voltage.

    PubMed

    Wang, Qianggang; Zhou, Niancheng; Lou, Xiaoxuan; Chen, Xu

    2014-01-01

    Unbalanced grid faults will lead to several drawbacks in the output power quality of photovoltaic (PV) generation converters, such as power fluctuation, current amplitude swell, and a large quantity of harmonics. The aim of this paper is to propose a flexible AC current generation method by selecting coefficients to overcome these problems in an optimal way. Three coefficients are introduced to tune the output current reference within the required limits of power quality (the current harmonic distortion, the AC current peak, the power fluctuation, and the DC voltage fluctuation). Through the optimization algorithm, the coefficients can be determined so as to generate the minimum integrated amplitudes of the active and reactive power references under the constraints of the inverter current and DC voltage fluctuation. A dead-beat controller is utilized to track the optimal current reference within a short period. The method has been verified in PSCAD/EMTDC software.
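    The structure of such a constrained coefficient search can be sketched with a toy grid search; the objective, constraint model and limits below are invented stand-ins, not the paper's converter model (which tunes three coefficients and uses a proper optimization algorithm):

```python
# Toy sketch: choose coefficients k1, k2 minimizing a power-fluctuation
# proxy subject to a peak-current limit. All models here are illustrative.
def peak_current(k1, k2):
    return 1.0 + 0.8 * k1 + 0.5 * k2          # stand-in constraint model

def power_fluctuation(k1, k2):
    return (1.0 - k1) ** 2 + (1.0 - k2) ** 2  # stand-in objective

best = None
grid = [i / 20.0 for i in range(21)]          # k in [0, 1], step 0.05
for k1 in grid:
    for k2 in grid:
        if peak_current(k1, k2) <= 1.8:       # inverter current limit
            cost = power_fluctuation(k1, k2)
            if best is None or cost < best[0]:
                best = (cost, k1, k2)
cost, k1, k2 = best
```

The feasible optimum lies on the current-limit boundary, mirroring the paper's trade-off: the coefficients that best suppress power fluctuation are exactly those that push the AC current peak to its permitted limit.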

  12. Improved Power Control Using Optimal Adjustable Coefficients for Three-Phase Photovoltaic Inverter under Unbalanced Grid Voltage

    PubMed Central

    Wang, Qianggang; Zhou, Niancheng; Lou, Xiaoxuan; Chen, Xu

    2014-01-01

    Unbalanced grid faults will lead to several drawbacks in the output power quality of photovoltaic (PV) generation converters, such as power fluctuation, current amplitude swell, and a large quantity of harmonics. The aim of this paper is to propose a flexible AC current generation method by selecting coefficients to overcome these problems in an optimal way. Three coefficients are introduced to tune the output current reference within the required limits of power quality (the current harmonic distortion, the AC current peak, the power fluctuation, and the DC voltage fluctuation). Through the optimization algorithm, the coefficients can be determined so as to generate the minimum integrated amplitudes of the active and reactive power references under the constraints of the inverter current and DC voltage fluctuation. A dead-beat controller is utilized to track the optimal current reference within a short period. The method has been verified in PSCAD/EMTDC software. PMID:25243215

  13. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    PubMed

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhanced system reliability and availability. Moreover, knowledge about the fault-mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel-cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high-resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequencies. Then, an amplitude estimator of the fault characteristic frequencies has been proposed, and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach with air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
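    The MD-MUSIC estimator itself is involved, but the final step — estimating the amplitude of the current component at a known bearing-fault characteristic frequency — can be illustrated with a single-bin DFT probe. The 50 Hz fundamental and the 87 Hz "fault signature" below are synthetic, illustrative values, not drawn from the paper:

```python
import math

def amplitude_at(signal, freq, fs):
    """Single-bin DFT: amplitude of `signal` at frequency `freq` (Hz)."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * k / fs) for k, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * k / fs) for k, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

fs = 1000.0  # sampling frequency, Hz
# Synthetic stator current: 50 Hz fundamental plus a weak 87 Hz component
# standing in for a bearing-fault signature (1 s of data).
current = [1.0 * math.sin(2 * math.pi * 50 * k / fs)
           + 0.05 * math.sin(2 * math.pi * 87 * k / fs) for k in range(1000)]
```

A fault indicator in the spirit of the paper would then track the ratio of the fault-frequency amplitude to the fundamental as a severity measure.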

  14. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
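    The isolation logic of the estimator bank can be sketched as follows; the residual values, component counts and thresholds are invented for illustration (the actual adaptive thresholds are designed from the estimator dynamics, not fixed constants):

```python
# Toy sketch of residual-bank fault isolation: each adaptive estimator is
# matched to one fault type; the fault whose estimator keeps every residual
# component below its threshold is the one isolated.
def isolate(residuals, thresholds):
    matches = [name for name, r in residuals.items()
               if all(abs(c) <= thresholds[name] for c in r)]
    return matches[0] if len(matches) == 1 else None   # ambiguous -> None

residuals = {
    "sensor":    [0.02, 0.01],   # matched estimator: stays below threshold
    "actuator":  [0.9, 0.1],     # mismatched: a component exceeds it
    "component": [0.4, 0.7],
}
thresholds = {"sensor": 0.05, "actuator": 0.05, "component": 0.05}
fault = isolate(residuals, thresholds)
```

The "sufficiently different faults" condition in the abstract corresponds to the requirement that at most one estimator's residual stays entirely below threshold, so that `matches` has a unique element.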

  15. A methodology for testing fault-tolerant software

    NASA Technical Reports Server (NTRS)

    Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.

    1985-01-01

    A methodology for testing fault-tolerant software is presented. Testing fault-tolerant software is problematic because many errors are masked or corrected by voters, limiters, or automatic channel synchronization. This methodology illustrates how the same strategies used for testing fault-tolerant hardware can be applied to testing fault-tolerant software. For example, one strategy used in testing fault-tolerant hardware is to disable the redundancy during testing. A similar testing strategy is proposed for software, namely, to move the major emphasis of testing earlier in the development cycle (before the redundancy is in place), thus reducing the possibility that undetected errors will be masked when limiters and voters are added.
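    The masking effect that motivates this strategy is easy to demonstrate with a triple-channel majority voter (a toy illustration; the channel functions and the injected fault are invented):

```python
# A 2-of-3 majority voter hides a single faulty channel, which is why the
# methodology tests channels individually before the redundancy is in place.
def majority(a, b, c):
    return a if a == b or a == c else b

def good_channel(x):
    return x * 2

def faulty_channel(x):
    return x + 1          # injected software fault

# With the voter, the fault is outvoted and the error goes undetected:
voted = majority(good_channel(3), good_channel(3), faulty_channel(3))
# Testing the channel directly (redundancy disabled) exposes it:
exposed = faulty_channel(3) != good_channel(3)
```

Here `voted` equals the correct value 6 even though one channel is wrong, while the direct channel test detects the discrepancy.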

  16. A survey of fault diagnosis technology

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel

    1989-01-01

    Existing techniques and methodologies for fault diagnosis are surveyed. The techniques run the gamut from theoretical artificial intelligence work to conventional software engineering applications. They are shown to define a spectrum of implementation alternatives where tradeoffs determine their position on the spectrum. Various tradeoffs include execution time limitations and memory requirements of the algorithms as well as their effectiveness in addressing the fault diagnosis problem.

  17. Transforming incomplete fault tree to Ishikawa diagram as an alternative method for technology transfer

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.

    2012-12-01

    Fault Tree Analysis (FTA) can be used for technology transfer when the relevant problem (called the 'top event' in FTA) is solved in a technology centre and the results are diffused to interested parties (usually Small and Medium Enterprises - SMEs) that lack the proper equipment and the required know-how to solve the problem on their own. Nevertheless, there is a significant drawback in this procedure: the information usually provided by the SMEs to the technology centre, about production conditions and the corresponding quality characteristics of the product, and (sometimes) the relevant expertise in the Knowledge Base of this centre, may be inadequate to form a complete fault tree. Since such cases are quite frequent in practice, we have developed a methodology for transforming an incomplete fault tree into an Ishikawa diagram, which is more flexible and less strict in establishing causal chains, because it uses a surface phenomenological level with a limited number of categories of faults. On the other hand, such an Ishikawa diagram can be extended to simulate a fault tree as relevant knowledge increases. An implementation of this transformation, referring to the anodization of aluminium, is presented.
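    The direction of the transformation can be sketched as a regrouping of the fault tree's basic events under the classic Ishikawa categories. This is a toy illustration; the event names and the fixed category assignment are invented, whereas the paper's methodology derives the grouping from the available causal knowledge:

```python
# Regroup basic events from an incomplete fault tree under Ishikawa's
# limited set of surface-level categories (the classic "M" categories).
ISHIKAWA_CATEGORIES = ["Man", "Machine", "Method", "Material",
                       "Measurement", "Environment"]

def to_ishikawa(fault_tree_events, category_of):
    diagram = {c: [] for c in ISHIKAWA_CATEGORIES}
    for event in fault_tree_events:
        # events with no known causal chain default to "Method"
        diagram[category_of.get(event, "Method")].append(event)
    return diagram

events = ["electrolyte temperature drift", "operator setup error"]
category_of = {"electrolyte temperature drift": "Environment",
               "operator setup error": "Man"}
diagram = to_ishikawa(events, category_of)
```

Because the categories are few and phenomenological, events whose causal chains are unknown can still be placed, which is exactly what makes the Ishikawa form tolerant of an incomplete tree.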

  18. Robust dead reckoning system for mobile robots based on particle filter and raw range scan.

    PubMed

    Duan, Zhuohua; Cai, Zixing; Min, Huaqing

    2014-09-04

    Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) subject to faults such as the sticking of sensors or the slippage of wheels, because the discrete fault modes and the continuous states have to be estimated simultaneously to reach a reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to handling hybrid system estimation problems, and they have also been widely used in many WMR applications, such as pose tracking, SLAM, video tracking, fault identification, etc. In this paper, the readings of a laser range finder, which may themselves be corrupted by noise, are used to reach accurate dead reckoning. The main contribution is a systematic method to implement fault diagnosis and dead reckoning concurrently in a particle filter framework. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and the different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method.
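    A minimal hybrid particle filter of this kind can be sketched in one dimension. This is an illustration only, with invented motion, fault and measurement models (the paper uses full WMR kinematics and raw laser scans):

```python
import math
import random

# Each particle carries a discrete fault mode ("normal" or "slip") and a
# continuous 1-D position. Under "slip", commanded motion only partially
# moves the robot. A noisy range measurement to a wall at x = 10 weights
# the particles; the dominant mode after resampling is the diagnosis.
def step(particles, command, measurement, wall=10.0, noise=0.2):
    weighted = []
    for mode, x in particles:
        if random.random() < 0.05:                     # random mode transition
            mode = "slip" if mode == "normal" else "normal"
        gain = 1.0 if mode == "normal" else 0.2        # slipping wheels move less
        x = x + gain * command + random.gauss(0, 0.05) # propagate kinematics
        predicted_range = wall - x
        w = math.exp(-((measurement - predicted_range) ** 2) / (2 * noise ** 2))
        weighted.append((w, (mode, x)))
    total = sum(w for w, _ in weighted) or 1e-12
    population = [p for _, p in weighted]
    weights = [w / total for w, _ in weighted]
    return random.choices(population, weights=weights, k=len(particles))

random.seed(0)
particles = [("normal", 0.0)] * 200 + [("slip", 0.0)] * 200
# The robot is actually slipping: commanded +1.0 per step but moves +0.2.
true_x = 0.0
for _ in range(15):
    true_x += 0.2
    particles = step(particles, command=1.0, measurement=10.0 - true_x)
modes = [m for m, _ in particles]
diagnosis = max(set(modes), key=modes.count)
```

After a few steps, particles in the wrong mode predict ranges far from the measurement and are resampled away, so the surviving population both diagnoses the slip fault and tracks the true position (dead reckoning) simultaneously.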

  19. Robust Dead Reckoning System for Mobile Robots Based on Particle Filter and Raw Range Scan

    PubMed Central

    Duan, Zhuohua; Cai, Zixing; Min, Huaqing

    2014-01-01

    Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) subject to faults such as the sticking of sensors or the slippage of wheels, because the discrete fault modes and the continuous states have to be estimated simultaneously to reach a reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to handling hybrid system estimation problems, and they have also been widely used in many WMR applications, such as pose tracking, SLAM, video tracking, fault identification, etc. In this paper, the readings of a laser range finder, which may themselves be corrupted by noise, are used to reach accurate dead reckoning. The main contribution is a systematic method to implement fault diagnosis and dead reckoning concurrently in a particle filter framework. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and the different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method. PMID:25192318

  20. Reproducing the scaling laws for Slow and Fast ruptures

    NASA Astrophysics Data System (ADS)

    Romanet, Pierre; Bhat, Harsha; Madariaga, Raúl

    2017-04-01

    Modelling the long-term behaviour of large, natural fault systems that are geometrically complex is a challenging problem. This is why most research so far has concentrated on modelling the long-term response of a single planar fault. To overcome this limitation, we appeal to a novel algorithm called the Fast Multipole Method, which was developed in the context of modelling gravitational N-body problems. This method allows us to decrease the computational complexity of the calculation from O(N²) to O(N log N), N being the number of discretised elements on the fault. We then adapted this method to model the long-term quasi-dynamic response of two faults, with a step-over-like geometry, that are governed by rate-and-state friction laws. We assume the faults have spatially uniform rate-weakening friction. The results show that when the stress interaction between faults is accounted for, a complex spectrum of slip (including slow-slip events, dynamic ruptures and partial ruptures) emerges naturally. The simulated slow-slip and dynamic events follow the scaling laws inferred by Ide et al. (2007), i.e. M ∝ T for slow-slip events and M ∝ T² (in 2D) for dynamic events.
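    The two scaling laws can be checked on a simulated event catalogue by fitting the slope of moment against duration in log-log space. The sketch below uses synthetic data obeying the laws exactly, not the catalogue of Ide et al. (2007):

```python
import math

def loglog_slope(durations, moments):
    """Least-squares slope of log10(moment) vs. log10(duration)."""
    xs = [math.log10(t) for t in durations]
    ys = [math.log10(m) for m in moments]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

T = [1.0, 10.0, 100.0, 1000.0]       # event durations
slow_M = [t ** 1 for t in T]         # slow events:    M ∝ T   -> slope 1
fast_M = [t ** 2 for t in T]         # dynamic events: M ∝ T^2 -> slope 2 (2D)
```

Applied to the simulated events of the study, slopes near 1 and 2 for the slow and dynamic populations respectively are what confirm the inferred scaling laws.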

  1. Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger

    NASA Astrophysics Data System (ADS)

    Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun

    2011-04-01

    This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.

  2. Multiple incipient sensor faults diagnosis with application to high-speed railway traction devices.

    PubMed

    Wu, Yunkai; Jiang, Bin; Lu, Ningyun; Yang, Hao; Zhou, Yang

    2017-03-01

    This paper deals with the problem of incipient fault diagnosis for a class of Lipschitz nonlinear systems with sensor biases, and explores further results on the total measurable fault information residual (ToMFIR). Firstly, state and output transformations are introduced to transform the original system into two subsystems. The first subsystem is subject to system disturbances and free from sensor faults, while the second subsystem contains sensor faults but no system disturbances. Sensor faults in the second subsystem are then reformulated as actuator faults by using a pseudo-actuator-based approach. Since the effects of system disturbances on the residual are completely decoupled, multiple incipient sensor faults can be detected by constructing the ToMFIR, and the fault detectability condition is then derived for discriminating the detectable incipient sensor faults. Further, a sliding-mode observer (SMO) based fault isolation scheme is designed to guarantee accurate isolation of multiple sensor faults. Finally, simulation results conducted on a CRH2 high-speed railway traction device are given to demonstrate the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Application of active quenching of second generation wire for current limiting

    DOE PAGES

    Solovyov, Vyacheslav F.; Li, Qiang

    2015-10-19

    Superconducting fault current limiters (SFCLs) are increasingly implemented in the power grid to protect substation equipment from fault currents. Resistive SFCLs are compact and light; however, they are passively triggered and thus may not be sufficiently sensitive to respond to faults in the distribution grid. Here, we explore the prospect of adding an active management feature to a traditional resistive SFCL. A flexible radio-frequency coil, which is an integral part of the switching structure, acts as a triggering device. We show that the application of a short, 10 ms burst of ac magnetic field during the fault triggers a uniform quench of the wire and significantly reduces the reaction time of the wire at low currents. The ac field burst generates a high density of normal zones, which merge into a continuous resistive region at a rate much faster than that of the sparse normal zones created by the transport current alone.

  4. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.

  5. In-flight Fault Detection and Isolation in Aircraft Flight Control Systems

    NASA Technical Reports Server (NTRS)

    Azam, Mohammad; Pattipati, Krishna; Allanach, Jeffrey; Poll, Scott; Patterson-Hine, Ann

    2005-01-01

    In this paper we consider the problem of test design for real-time fault detection and isolation (FDI) in the flight control system of fixed-wing aircraft. We focus on faults that are manifested in the control surface elements (e.g., aileron, elevator, rudder and stabilizer) of an aircraft. For demonstration purposes, we restrict our focus to faults belonging to nine basic fault classes. The diagnostic tests are performed on features extracted from fifty monitored system parameters. The proposed tests are able to uniquely isolate each of the faults at almost all severity levels. A neural-network-based flight control simulator, FLTZ®, is used for the simulation of various faults in fixed-wing aircraft flight control systems for the purpose of FDI.

  6. The Generic Spacecraft Analyst Assistant (GenSAA): A tool for automating spacecraft monitoring with expert systems

    NASA Technical Reports Server (NTRS)

    Hughes, Peter M.; Luczak, Edward C.

    1991-01-01

    Flight Operations Analysts (FOAs) in the Payload Operations Control Center (POCC) are responsible for monitoring a satellite's health and safety. As satellites become more complex and data rates increase, FOAs are quickly approaching a level of information saturation. The FOAs in the spacecraft control center for the COBE (Cosmic Background Explorer) satellite are currently using a fault isolation expert system named the Communications Link Expert Assistance Resource (CLEAR) to assist in isolating and correcting communications link faults. Due to the success of CLEAR and several other systems in the control center domain, many other monitoring and fault isolation expert systems will likely be developed to support control center operations during the early 1990s. To facilitate the development of these systems, a project was initiated to develop a domain-specific tool, named the Generic Spacecraft Analyst Assistant (GenSAA). GenSAA will enable spacecraft analysts to easily build simple real-time expert systems that perform spacecraft monitoring and fault isolation functions. Lessons learned during the development of several expert systems at Goddard, which established the foundation of GenSAA's objectives and offer insight into how problems may be avoided in future projects, are described. This is followed by a description of the capabilities, architecture, and usage of GenSAA, along with a discussion of its application to future NASA missions.

  7. Photovoltaic module reliability improvement through application testing and failure analysis

    NASA Technical Reports Server (NTRS)

    Dumas, L. N.; Shumka, A.

    1982-01-01

    During the first four years of the U.S. Department of Energy (DOE) National Photovoltaic Program, the Jet Propulsion Laboratory Low-Cost Solar Array (LSA) Project purchased about 400 kW of photovoltaic modules for tests and experiments. In order to identify, report, and analyze test and operational problems with the Block Procurement modules, a problem/failure reporting and analysis system was implemented by the LSA Project with the main purpose of providing manufacturers with feedback from test and field experience needed for the improvement of product performance and reliability. A description of the more significant types of failures is presented, taking into account interconnects, cracked cells, dielectric breakdown, delamination, and corrosion. Current design practices and reliability evaluations are also discussed. The evaluation indicates that current module designs incorporate damage-resistant and fault-tolerant features which address the field failure mechanisms observed to date.

  8. Pseudo-fault signal assisted EMD for fault detection and isolation in rotating machines

    NASA Astrophysics Data System (ADS)

    Singh, Dheeraj Sharan; Zhao, Qing

    2016-12-01

    This paper presents a novel data-driven technique for the detection and isolation of faults which generate impacts in rotating equipment. The technique is built upon the principles of empirical mode decomposition (EMD), envelope analysis, and a pseudo-fault signal for fault separation. Firstly, the most dominant intrinsic mode function (IMF) is identified using EMD of the raw signal; this IMF contains all the necessary information about the faults. The envelope of this IMF is often modulated by multiple vibration sources and noise. A second-level decomposition is then performed by applying pseudo-fault signal (PFS) assisted EMD on the envelope. A pseudo-fault signal is constructed based on the known fault characteristic frequency of the particular machine. The objective of using the external (pseudo-fault) signal is to isolate the different fault frequencies present in the envelope. The pseudo-fault signal serves dual purposes: (i) it solves the mode-mixing problem inherent in EMD, and (ii) it isolates and quantifies a particular fault frequency component. The proposed technique is suitable for real-time implementation, and it has been validated on simulated fault data and on experimental data corresponding to a bearing and a gear-box set-up, respectively.
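    The envelope-analysis step underlying this approach can be sketched with an FFT-based Hilbert transform. This is a generic illustration, not the authors' code: the 5 Hz "fault frequency", the 100 Hz carrier, and all signal parameters are invented for the demonstration.

```python
import numpy as np

def envelope(signal):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(signal)
    spectrum = np.fft.fft(signal)
    h = np.zeros(n)            # weights that zero negative frequencies
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h)
    return np.abs(analytic)

# A carrier amplitude-modulated at a hypothetical fault frequency of 5 Hz:
fs, f_fault = 1000.0, 5.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = (1.0 + 0.5 * np.sin(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * 100.0 * t)
env = envelope(x)
# The spectrum of the (de-meaned) envelope peaks at the fault frequency:
spec = np.abs(np.fft.rfft(env - env.mean()))
peak_hz = np.fft.rfftfreq(len(env), 1.0 / fs)[np.argmax(spec)]
```

    In the paper's pipeline this envelope would come from the dominant IMF rather than the raw signal, and the PFS-assisted second decomposition would then separate the individual fault frequencies.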

  9. Rethinking the problem of ionosphere-lithosphere coupling

    NASA Astrophysics Data System (ADS)

    Ruzhin, Yuri; Novikov, Victor

    2014-05-01

    An overview of research on possible relations between variations of the geomagnetic field and seismicity is presented, including Sq-variations and geomagnetic storms. Many papers demonstrate positive correlations between geomagnetic field variations and subsequent earthquake occurrence, which allows their authors to argue that ionospheric processes exert an earthquake-triggering impact on the lithosphere. Nevertheless, there is another opinion, supported by statistical analysis of the correlation between geomagnetic field variations and global and regional seismicity, that the impact of geomagnetic disturbances on the earthquake source is negligible. Both points of view rest mainly on statistical research, either without detailed consideration of the physical mechanisms that may be involved in the supposed earthquake triggering, or with only rough estimates of the possible stress increase in faults in a critical (near-failure) state. Recently it was shown that fluids may play a very important role in electromagnetic earthquake triggering, and a secondary triggering mechanism should be considered, in which fluid migrating into a fault under electromagnetic action may weaken the fault up to the earthquake-triggering threshold. At the same time, depending on the fault orientation, the local hydrological structure of the crust around the fault, the location of fluid reservoirs, etc., fluid migration out of the fault may instead strengthen it, in which case the same geomagnetic field variation may produce the opposite effect. Consequently, a purely statistical approach to the problem of ionosphere-lithosphere coupling is insufficient, and in each case the possible behavior of fluids under electromagnetic impact on the lithosphere should be considered.
Experimental results supporting this idea, obtained with a spring-block model simulating the seismic cycle (slow accumulation and sharp drop of stresses in the fault gouge), as well as field observations of water-level variations in a well during ionospheric disturbances, are presented and discussed.

  10. Inverting the parameters of an earthquake-ruptured fault with a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Ting-To; Fernàndez, Josè; Rundle, John B.

    1998-03-01

    Natural selection is the spirit of the genetic algorithm (GA): good genes are kept in the current generation, producing better offspring during evolution. The crossover function ensures the inheritance of good genes from parents to offspring, while the mutation process creates special genes whose characteristics do not exist in the parent generation. A program based on a genetic algorithm, written in C, is constructed to invert the parameters of an earthquake-ruptured fault. Verification and application of this code demonstrate its capabilities: it is able to find the global extremum and can be used to solve more practical problems with constraints gathered from other sources. GA is shown to be superior to other inversion schemes in many respects. This easy-to-handle yet powerful algorithm should have many suitable applications in the geosciences.
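    The three GA ingredients named above (selection, crossover, mutation) can be sketched as follows. The toy forward model, the two "fault parameters", and all tuning constants are illustrative assumptions, not the paper's C implementation or its fault model.

```python
import random

def genetic_minimize(objective, bounds, pop_size=60, generations=150,
                     crossover_rate=0.8, mutation_rate=0.2, seed=1):
    """Minimal real-coded GA: selection keeps the good genes, crossover mixes
    parents, mutation injects traits absent from the parent generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        elite = pop[:pop_size // 2]                  # natural selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            if rng.random() < crossover_rate:        # inherit good genes
                child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            for k, (lo, hi) in enumerate(bounds):    # mutation: new traits
                if rng.random() < mutation_rate:
                    child[k] = min(hi, max(lo, child[k]
                                           + rng.gauss(0.0, 0.05 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# Toy inversion: recover two hypothetical fault parameters (slip, depth)
# from synthetic data generated with slip=2.0, depth=5.0.
xs = [0.1 * i for i in range(20)]
def forward(slip, depth, x):
    return slip * x / (x * x + depth)
data = [forward(2.0, 5.0, x) for x in xs]
def misfit(params):
    s, d = params
    return sum((forward(s, d, x) - y) ** 2 for x, y in zip(xs, data))
best = genetic_minimize(misfit, [(0.0, 10.0), (0.1, 20.0)])
```

    Constraints from other sources would enter here simply as tighter `bounds` or as penalty terms added to the misfit.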

  11. Fault tolerant system based on IDDQ testing

    NASA Astrophysics Data System (ADS)

    Guibane, Badi; Hamdi, Belgacem; Mtibaa, Abdellatif; Bensalem, Brahim

    2018-06-01

    Offline testing is essential to ensure good manufacturing quality. However, for permanent or transient faults that occur while an integrated circuit is in use in an application, an online integrated test is needed as well. This procedure should ensure the detection and, possibly, the correction or masking of these faults. Such self-correction is sometimes necessary, especially in critical applications requiring high security, such as automotive, space or biomedical applications. We propose a fault-tolerant design for analogue and mixed-signal complementary metal-oxide-semiconductor (CMOS) circuits based on quiescent supply current (IDDQ) testing. A defect can cause an increase in current consumption, and the IDDQ testing technique is based on measuring the power supply current to distinguish between functional and failed circuits. The technique has been an effective testing method for detecting physical defects such as gate-oxide shorts, floating gates (opens) and bridging defects in CMOS integrated circuits. An architecture called a BICS (Built-In Current Sensor) is used for monitoring the supply current (IDDQ) of the connected integrated circuit. If the measured current is not within the normal range, a defect is signalled and the system switches the connection from the defective to a functional integrated circuit. The fault-tolerant technique consists essentially of a double-mirror built-in current sensor, allowing the detection of abnormal current consumption, and of blocks allowing connection to redundant circuits if a defect occurs. SPICE simulations are performed to validate the proposed design.
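    A behavioural sketch of the monitor-and-switch idea may make the scheme concrete. The class names, the current window, and the readings below are invented for illustration; the actual design is a transistor-level double-mirror current sensor, not software.

```python
class BuiltInCurrentSensor:
    """Flags a circuit whose quiescent supply current leaves the normal window."""
    def __init__(self, i_min_ua, i_max_ua):
        self.i_min_ua, self.i_max_ua = i_min_ua, i_max_ua

    def defective(self, iddq_ua):
        # Bridging and gate-oxide defects typically raise IDDQ above the window.
        return not (self.i_min_ua <= iddq_ua <= self.i_max_ua)

class FaultTolerantBlock:
    """Primary circuit plus redundant spares, reconnected on a BICS alarm."""
    def __init__(self, circuits, sensor):
        self.circuits = list(circuits)   # primary first, then spares
        self.sensor = sensor
        self.active = 0

    def select(self, measured_iddq_ua):
        """Switch to the next spare while the active circuit looks faulty."""
        while (self.active < len(self.circuits) - 1
               and self.sensor.defective(measured_iddq_ua[self.active])):
            self.active += 1
        return self.circuits[self.active]

sensor = BuiltInCurrentSensor(i_min_ua=0.5, i_max_ua=20.0)
block = FaultTolerantBlock(["primary", "spare"], sensor)
chosen = block.select({0: 350.0, 1: 4.2})   # primary draws 350 uA: defective
```

    In hardware the "window" is set by the mirror ratios of the BICS, and the switching is done by analogue multiplexers rather than a loop.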

  12. Power flow analysis and optimal locations of resistive type superconducting fault current limiters.

    PubMed

    Zhang, Xiuchang; Ruiz, Harold S; Geng, Jianzhao; Shen, Boyang; Fu, Lin; Zhang, Heng; Coombs, Tim A

    2016-01-01

    In conventional approaches to the integration of resistive-type superconducting fault current limiters (SFCLs) into electric distribution networks, SFCL models largely rely on the insertion of a step or exponential resistance determined by a predefined quenching time. In this paper, we expand the scope of these models by considering the actual behaviour of an SFCL in terms of the temperature-dependent power-law relation between the electric field and the current density that is characteristic of high-temperature superconductors. Our results are compared to the step-resistance models for the sake of discussion and clarity of the conclusions. Both SFCL models were integrated into a power system model built to the UK power standard, to study the impact of these protection strategies on the performance of the overall electricity network. As a representative renewable energy source, a 90 MVA wind farm was considered for the simulations. Three fault conditions were simulated, and the fault current reductions predicted by both fault-current-limiting models were compared across multiple current measuring points and allocation strategies. We have shown that incorporating the E-J characteristics and thermal properties of the superconductor at the simulation level of electric power systems is crucial for estimating reliability and determining the optimal locations of resistive-type SFCLs in distributed power networks. Our results may help decision making by distribution network operators regarding investment in and promotion of SFCL technologies, as it is possible to determine the maximum number of SFCLs necessary to protect against different fault conditions at multiple locations.
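    The difference between a step-resistance model and the E-J power law can be illustrated in a few lines. The critical-current density, n-value, and geometry below are invented round numbers, and the thermal (temperature-dependent) part of the model is omitted for brevity, so this is a quasi-static sketch only.

```python
# Illustrative parameters (not from the paper):
E_c, J_c0, n = 1e-4, 1.5e7, 25      # V/m, A/m^2, power-law index
length, area = 50.0, 1e-6           # superconductor length (m) and cross-section (m^2)

def sfcl_resistance(current_a, j_c=J_c0):
    """Quasi-static resistance of a resistive SFCL element from the E-J power
    law E = E_c * (J / J_c)**n, with R = E * length / current."""
    j = abs(current_a) / area
    if j == 0.0:
        return 0.0
    e_field = E_c * (j / j_c) ** n
    return e_field * length / abs(current_a)

# Far below J_c the element is effectively lossless; above it, the
# resistance rises steeply and continuously (no predefined quench time):
r_normal_op = sfcl_resistance(0.2 * J_c0 * area)   # 20% of critical current
r_fault = sfcl_resistance(2.0 * J_c0 * area)       # 200% of critical current
```

    A step-resistance model would instead switch R from zero to a fixed value at a preset time; the power law lets the limiting action emerge from the current itself, which is the behaviour the paper argues matters for siting decisions.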

  13. A fault diagnosis system for PV power station based on global partitioned gradually approximation method

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.

    2016-08-01

    As solar photovoltaic (PV) power is applied extensively, more attention is being paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to detect and locate faults in PV panels. The PV array is divided into 16×16 blocks and numbered. On the basis of this modular processing of the PV array, the current values of each block are analyzed: the mean current value of each block is used to calculate a fault weight factor, a fault threshold is defined to determine faults, and shading is taken into account to reduce the probability of misjudgements. A fault diagnosis system is designed and implemented with LabVIEW, with functions including real-time data display, online checking, statistics, real-time prediction and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis is accurate and the system works well, confirming the validity and feasibility of the approach. The developed system will benefit the maintenance and management of large-scale PV arrays.
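    The block-wise screening step can be sketched as follows. The weight definition and both thresholds here are assumptions for illustration, not the paper's exact formulas: each block's mean current is normalised by the array-wide mean, blocks falling far below it are flagged as faulty, and mildly low blocks are attributed to shading to reduce misjudgements.

```python
def fault_weight_factors(block_mean_currents):
    """Normalise each block's mean current by the array-wide mean."""
    overall = sum(block_mean_currents.values()) / len(block_mean_currents)
    return {blk: i / overall for blk, i in block_mean_currents.items()}

def diagnose(block_mean_currents, fault_threshold=0.6, shade_threshold=0.85):
    """Classify each block: 'fault' if well below the array mean, 'shade' if
    only mildly below (reducing false fault calls), otherwise 'ok'."""
    result = {}
    for blk, w in fault_weight_factors(block_mean_currents).items():
        if w < fault_threshold:
            result[blk] = "fault"
        elif w < shade_threshold:
            result[blk] = "shade"
        else:
            result[blk] = "ok"
    return result

# Hypothetical mean currents (amperes) for four of the numbered blocks:
readings = {"B01": 8.1, "B02": 8.0, "B03": 3.1, "B04": 7.2}
status = diagnose(readings)
```

    In the full system this classification would run over all 16×16 blocks and feed the LabVIEW display and statistics functions.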

  14. Lembang fault plane identification using electrical resistivity method for disaster mitigation

    NASA Astrophysics Data System (ADS)

    Maulinadya, S.; Ramadhan, M. Lutfi; N. Wening, F.; Pinehas, D.; Widodo

    2017-07-01

    The Lembang Fault is an active fault running from west to east about 10 kilometers north of Bandung. It is a normal fault whose footwall rises 40-450 meters above the ground. Its proximity to Bandung, which is densely populated and frequently visited by tourists, makes the Lembang Fault a threat if it suddenly becomes active: its movement can cause earthquakes that can result in fatalities. Mitigation measures are therefore necessary, such as educating people about the Lembang Fault and its potential to cause disaster. The objective of this study is to find the Lembang Fault plane below the surface with the geoelectrical mapping method and the vertical electrical sounding method around Ciwarega and The Peak, Lembang (on the west side of the Lembang Fault). Both methods use electric current to measure rock resistivity: currents are injected into the ground and potential differences are measured, so that, according to Ohm's law, resistivity can be calculated and the resistivity distribution obtained. In this study, a high resistivity contrast of about 1400-5000 Ohm.m is obtained. This contrast can be caused by a lateral lithology difference resulting from the existence of a fault, which supports the presence of a fault in Lembang that could potentially cause disasters such as earthquakes.
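    The Ohm's-law step from injected current and measured potential difference to apparent resistivity can be sketched for one common electrode configuration. The Wenner geometry and the numbers below are assumptions for illustration; the abstract does not state which array the survey used.

```python
import math

def wenner_apparent_resistivity(spacing_m, delta_v, current_a):
    """Apparent resistivity for a Wenner array: rho_a = 2 * pi * a * (dV / I),
    where a is the equal electrode spacing. Other arrays use a different
    geometric factor in place of 2*pi*a."""
    return 2.0 * math.pi * spacing_m * delta_v / current_a

# Hypothetical reading: injecting 0.5 A across a 10 m spacing and measuring
# 12 V would indicate a unit in the high-resistivity range reported in the
# study (about 1400-5000 Ohm.m):
rho = wenner_apparent_resistivity(10.0, 12.0, 0.5)
```

    Repeating such measurements along profiles (mapping) and with increasing spacing (sounding) is what yields the lateral and vertical resistivity distribution from which the fault plane is inferred.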

  15. Off-fault plasticity in three-dimensional dynamic rupture simulations using a modal Discontinuous Galerkin method on unstructured meshes: Implementation, verification, and application

    NASA Astrophysics Data System (ADS)

    Wollherr, Stephanie; Gabriel, Alice-Agnes; Uphoff, Carsten

    2018-05-01

    The dynamics and potential size of earthquakes depend crucially on rupture transfers between adjacent fault segments. To accurately describe earthquake source dynamics, numerical models can account for realistic fault geometries and rheologies such as nonlinear inelastic processes off the slip interface. We present the implementation, verification, and application of off-fault Drucker-Prager plasticity in the open-source software SeisSol (www.seissol.org). SeisSol is based on an arbitrary high-order derivative modal Discontinuous Galerkin (ADER-DG) method using unstructured tetrahedral meshes specifically suited for complex geometries. Two implementation approaches are detailed, modelling plastic failure either by employing sub-elemental quadrature points or by switching to nodal basis coefficients. At fine fault discretizations the nodal basis approach is up to 6 times more efficient in terms of computational cost while yielding comparable accuracy. Both methods are verified in community benchmark problems and by three-dimensional numerical h- and p-refinement studies with heterogeneous initial stresses. We observe no spectral convergence for on-fault quantities with respect to a given reference solution, but rather discuss a limitation to low-order convergence for heterogeneous 3D dynamic rupture problems. For simulations including plasticity, a high fault resolution may be less crucial than commonly assumed, due to the regularization of peak slip rate and an increase of the minimum cohesive zone width. In large-scale dynamic rupture simulations based on the 1992 Landers earthquake, we observe high rupture complexity including reverse slip, direct branching, and dynamic triggering. The spatio-temporal distribution of rupture transfers is altered distinctively by plastic energy absorption, correlated with locations of geometrical fault complexity. Computational cost increases by 7% when accounting for off-fault plasticity in this demonstration application.
Our results imply that the combination of fully 3D dynamic modelling, complex fault geometries, and off-fault plastic yielding is important to realistically capture dynamic rupture transfers in natural fault systems.

  16. Fast Fourier and discrete wavelet transforms applied to sensorless vector control induction motor for rotor bar faults diagnosis.

    PubMed

    Talhaoui, Hicham; Menacer, Arezki; Kessal, Abdelhalim; Kechida, Ridha

    2014-09-01

    This paper presents new techniques to evaluate broken-rotor-bar faults in induction motors. The procedures are applied under closed-loop control. Electrical and mechanical variables are analyzed using the fast Fourier transform (FFT) and the discrete wavelet transform (DWT) at start-up and in steady state. The wavelet transform has proven to be an excellent mathematical tool for the detection of faults, particularly those of the broken-rotor-bar type; DWT can provide a local representation of the non-stationary current signals for both the healthy and the faulty machine. For sensorless control, a Luenberger observer is applied: the estimated rotor speed is analyzed, the effect of the faults on the speed pulsation is compensated, and a quadratic current appears and is used for fault detection. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
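    The FFT side of such a diagnosis rests on a classical signature: broken rotor bars modulate the stator current, producing sidebands at (1 ± 2s)·f_s around the supply frequency f_s, where s is the slip. The sketch below builds a synthetic current with that signature and reads the sideband amplitudes off the spectrum; the slip, amplitudes, and sampling parameters are invented, not measured machine data.

```python
import numpy as np

fs_supply, slip = 50.0, 0.04          # supply frequency (Hz) and slip (assumed)
sample_rate, duration = 1000.0, 10.0  # Hz, s
t = np.arange(0.0, duration, 1.0 / sample_rate)
current = np.sin(2 * np.pi * fs_supply * t)
# Broken-bar sidebands at (1 - 2s)*fs and (1 + 2s)*fs (46 and 54 Hz here):
current += 0.05 * np.sin(2 * np.pi * (1 - 2 * slip) * fs_supply * t)
current += 0.05 * np.sin(2 * np.pi * (1 + 2 * slip) * fs_supply * t)

spectrum = np.abs(np.fft.rfft(current)) / len(t)   # normalised magnitude
freqs = np.fft.rfftfreq(len(t), 1.0 / sample_rate)

def amplitude_at(f_hz):
    """Magnitude of the spectral bin closest to f_hz."""
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

lower = amplitude_at((1 - 2 * slip) * fs_supply)
upper = amplitude_at((1 + 2 * slip) * fs_supply)
```

    The DWT complements this by localising the same signature in time during start-up, where the current is non-stationary and a single FFT is blind.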

  17. Support vector machine based decision for mechanical fault condition monitoring in induction motor using an advanced Hilbert-Park transform.

    PubMed

    Ben Salem, Samira; Bacha, Khmais; Chaari, Abdelkader

    2012-09-01

    In this work we suggest an original fault signature based on an improved combination of the Hilbert and Park transforms. From this combination we create two fault signatures: the Hilbert modulus current space vector (HMCSV) and the Hilbert phase current space vector (HPCSV). These two fault signatures are subsequently analysed using the classical fast Fourier transform (FFT). The effects of mechanical faults on the HMCSV and HPCSV spectra are described, and the related frequencies are determined. The magnitudes of the spectral components relative to the studied faults (air-gap eccentricity and outer-raceway ball bearing defect) are extracted in order to develop the input vector necessary for training and testing the support vector machine, with the aim of automatically classifying the various states of the induction motor. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary. Following past simulator and forecast-model verification methods, we address the central challenge of applying spatial forecast verification to simulators: simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
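    The power-law smoothing idea can be sketched as follows: every simulated epicentre contributes to every cell of a rate map, with a contribution that decays as a power law of epicentral distance, so that off-fault cells still receive non-zero rate. The kernel form and its parameters below are illustrative assumptions, not the fitted ETAS values used in the study.

```python
import math

def power_law_rate(grid, events, r0_km=5.0, q=1.5):
    """Smooth simulated epicentres over a test region with an ETAS-like
    power-law kernel: each event adds (1 + r/r0)**(-q) to a cell at
    epicentral distance r. r0_km and q are illustrative, not fitted."""
    rates = []
    for gx, gy in grid:
        rate = 0.0
        for ex, ey in events:
            r = math.hypot(gx - ex, gy - ey)
            rate += (1.0 + r / r0_km) ** (-q)
        rates.append(rate)
    return rates

# A 10 km grid over a 100 km x 100 km test region, with two simulated
# epicentres lying on a modelled fault near (20, 20):
grid = [(x, y) for x in range(0, 100, 10) for y in range(0, 100, 10)]
events = [(20.0, 20.0), (22.0, 18.0)]
rates = power_law_rate(grid, events)
# Cells near the fault accumulate much higher rates than distant cells,
# yet distant cells remain scorable against off-fault observed epicentres.
```

    A receiver operating characteristic curve is then built by thresholding such a rate map and scoring hits and false alarms against the observed m>6.0 catalogue.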

  19. An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.

    PubMed

    Yang, Yifei; Tan, Minjia; Dai, Yuewei

    2017-01-01

    In practice, the fault-monitoring signals of ship power equipment usually provide few samples, and the data features are non-linear. This paper adopts the least squares support vector machine (LSSVM) to deal with the problem of fault pattern identification from small sample data. Meanwhile, in order to avoid the local extrema and poor convergence precision induced by optimizing the kernel-function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm improves the recognition probability and the search step length, which effectively addresses the slow search speed and low calculation accuracy of the standard CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.

  20. Robust fault detection of turbofan engines subject to adaptive controllers via a Total Measurable Fault Information Residual (ToMFIR) technique.

    PubMed

    Chen, Wen; Chowdhury, Fahmida N; Djuric, Ana; Yeh, Chih-Ping

    2014-09-01

    This paper provides a new design of robust fault detection for turbofan engines with adaptive controllers. The critical issue is that adaptive controllers can suppress the effects of faults so that the actual system outputs remain at their pre-specified values, making it difficult to detect faults and failures. To solve this problem, a Total Measurable Fault Information Residual (ToMFIR) technique, with the aid of a system transformation, is adopted to detect faults in turbofan engines with adaptive controllers. This design is thus a ToMFIR-redundancy-based robust fault detection. The ToMFIR is first introduced and existing results are summarized. The detailed design process of the ToMFIRs is then presented, and a turbofan engine model is simulated to verify the effectiveness of the proposed ToMFIR-based fault-detection strategy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    PubMed

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults, including stuck, outage, bias and loss-of-effectiveness faults. The upper bounds of the stuck, bias and loss-of-effectiveness faults are unknown. A new data-based FTC scheme is proposed, consisting of online estimations of the bounds and of a state-dependent function. The estimations are adjusted online to automatically compensate for the actuator faults, while the state-dependent function, solved using real system data, helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Using certification trails to achieve software fault tolerance

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Masson, Gerald M.

    1993-01-01

    A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, the results are accepted as correct, otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output, even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized and illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach is also compared to other approaches to fault tolerance. Because of space limitations, examples of the technique applied to the Huffman tree and convex hull problems have been omitted; these can be found in the full version of this paper.

  3. Adaptive Control Allocation for Fault Tolerant Overactuated Autonomous Vehicles

    DTIC Science & Technology

    2007-11-01

    Casavola, A.; Garone, E. (2007). Adaptive Control Allocation for Fault Tolerant Overactuated Autonomous Vehicles. RTO-MP-AVT-145. Control allocation problem (CAP): given a virtual input v(t

  4. Human problem solving performance in a fault diagnosis task

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1978-01-01

    It is proposed that humans in automated systems will be asked to assume the role of troubleshooter or problem solver, and that the problems they will be asked to solve in such systems will not be amenable to rote solution. The design of visual displays for problem solving in such situations is considered, and the results of two experimental investigations of human problem-solving performance in the diagnosis of faults in graphically displayed network problems are discussed. The effects of problem size, forced-pacing, computer aiding, and training are considered. Results indicate that human performance deviates from optimality as problem size increases. Forced-pacing appears to cause the human to adopt fairly brute-force strategies, compared with those adopted in self-paced situations. Computer aiding substantially lessens the number of mistaken diagnoses by performing the bookkeeping portions of the task.

  5. Saturating time-delay transformer for overcurrent protection. [Patent application

    DOEpatents

    Praeg, W.F.

    1975-12-18

    Electrical loads connected to dc supplies are protected from damage by overcurrent in the case of a load fault by connecting in series with the load a saturating transformer that detects a load fault and limits the fault current to a safe level for a period long enough to correct the fault or else disconnect the power supply.

  6. Saturating time-delay transformer for overcurrent protection

    DOEpatents

    Praeg, Walter F.

    1977-01-01

    Electrical loads connected to d-c supplies are protected from damage by overcurrent in the case of a load fault by connecting in series with the load a saturating transformer that detects a load fault and limits the fault current to a safe level for a period long enough to correct the fault or else disconnect the power supply.

  7. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    PubMed

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication-link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology containing a spanning tree whose root node is the active target system. The proposed algorithm is capable of simultaneously compensating for actuator bias faults, partial loss-of-effectiveness actuation faults, communication-link faults, model uncertainty, and external disturbances. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a multiple-robot-arm cooperative control test-bed was developed for real-time verification. Experiments on the networked robot arms were conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.

  8. Friction falls towards zero in quartz rock as slip velocity approaches seismic rates.

    PubMed

    Di Toro, Giulio; Goldsby, David L; Tullis, Terry E

    2004-01-29

    An important unsolved problem in earthquake mechanics is to determine the resistance to slip on faults in the Earth's crust during earthquakes. Knowledge of coseismic slip resistance is critical for understanding the magnitude of shear-stress reduction and hence the near-fault acceleration that can occur during earthquakes, which affects the amount of damage that earthquakes are capable of causing. In particular, a long-unresolved problem is the apparently low strength of major faults, which may be caused by low coseismic frictional resistance. The frictional properties of rocks at slip velocities up to 3 mm s⁻¹ and for slip displacements characteristic of large earthquakes have been recently simulated under laboratory conditions. Here we report data on quartz rocks that indicate an extraordinary progressive decrease in frictional resistance with increasing slip velocity above 1 mm s⁻¹. This reduction extrapolates to zero friction at seismic slip rates of approximately 1 m s⁻¹, and appears to be due to the formation of a thin layer of silica gel on the fault surface: it may explain the low strength of major faults during earthquakes.

  9. Multipoint Optimal Minimum Entropy Deconvolution and Convolution Fix: Application to vibration fault detection

    NASA Astrophysics Data System (ADS)

    McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    Minimum Entropy Deconvolution (MED) has been applied successfully to rotating-machine fault detection from vibration data; however, the method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal, which in some cases causes spurious impulses to be erroneously deconvolved. A further problem with the MED solution is that it is an iterative selection process and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single impulse, while in rotating-machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it achieves the target goal of multiple periodic impulses, it is still an iterative, non-optimal solution to the posed problem and only solves for a limited run of impulses. Ideally, the problem goal should target an impulse train as the output and should solve for the optimal filter directly in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA), which poses a deconvolution problem with an infinite impulse train as the goal and whose optimal filter solution can be computed directly. From experimental data on a gearbox with and without a gear-tooth chip, we show that MOMEDA and its deconvolution spectra, computed according to the period between the impulses, can be used to detect faults and study the health of rotating-machine elements effectively.
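    The non-iterative spirit of this approach can be sketched with a simplified least-squares variant: solve directly, via the normal equations, for the FIR filter whose output best matches an impulse train at the known fault period. This is an illustration of the direct-solve idea, not the authors' exact MOMEDA objective or normalisation, and the transmission path and noise below are invented.

```python
import numpy as np

def periodic_deconv_filter(x, filt_len, period):
    """Directly (no iteration) solve for the FIR filter whose output best
    matches a train of impulses spaced at the known fault period."""
    n_out = len(x) - filt_len + 1
    # Rows are delayed copies of the input: X0[l, k] = x[k + filt_len - 1 - l]
    X0 = np.array([x[filt_len - 1 - l : filt_len - 1 - l + n_out]
                   for l in range(filt_len)])
    target = np.zeros(n_out)
    target[filt_len - 1 :: period] = 1.0   # truncated "infinite" impulse train
    f = np.linalg.pinv(X0 @ X0.T) @ X0 @ target   # one normal-equation solve
    return f, X0.T @ f                            # filter and deconvolved output

# Synthetic test: one impulse per "rotation" (every 50 samples) smeared by a
# hypothetical transmission path, plus a little noise.
rng = np.random.default_rng(0)
n, period = 1000, 50
impulses = np.zeros(n)
impulses[::period] = 1.0
path = np.array([1.0, -0.6, 0.3, -0.1])
x = np.convolve(impulses, path)[:n] + 0.01 * rng.standard_normal(n)
f, y = periodic_deconv_filter(x, filt_len=30, period=period)
# y recovers a near-periodic impulse train at the fault period.
```

    Sweeping `period` and recording the strength of the recovered train yields a deconvolution spectrum of the kind the paper uses to detect the gear-tooth chip.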

  10. Certification of computational results

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.

    1993-01-01

    A conceptually novel and powerful technique to achieve fault detection and fault tolerance in hardware and software systems is described. When used for software fault detection, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared and if they agree the results are accepted as correct; otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance is formalized and realizations of it are illustrated by considering algorithms for the following problems: convex hull, sorting, and shortest path. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach is compared to other approaches to fault tolerance.
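
    For the sorting problem mentioned above, the two-phase scheme can be sketched as follows: phase 1 sorts and emits the sorting permutation as the certification trail; phase 2 is a simpler O(n) check that must flag an error even when the trail itself is corrupted. This is an illustrative sketch of the general idea, not the paper's exact realization.

```python
def sort_with_trail(a):
    # Phase 1: solve the problem and leave behind a certification trail
    # (here, the permutation that sorts the input).
    trail = sorted(range(len(a)), key=a.__getitem__)
    return [a[i] for i in trail], trail

def verify_with_trail(a, result, trail):
    # Phase 2: a less complex, linear-time check that accepts only a
    # correct result, and reports an error even for a corrupted trail.
    n = len(a)
    if len(result) != n or len(trail) != n:
        return False
    seen = [False] * n
    for i in trail:                                 # trail must be a permutation
        if not isinstance(i, int) or not 0 <= i < n or seen[i]:
            return False
        seen[i] = True
    if any(result[k] != a[trail[k]] for k in range(n)):
        return False
    return all(result[k] <= result[k + 1] for k in range(n - 1))
```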

  11. Intelligent alarming

    NASA Technical Reports Server (NTRS)

    Braden, W. B.

    1992-01-01

    This talk discusses the importance of providing a process operator with concise information about a process fault including a root cause diagnosis of the problem, a suggested best action for correcting the fault, and prioritization of the problem set. A decision tree approach is used to illustrate one type of approach for determining the root cause of a problem. Fault detection in several different types of scenarios is addressed, including pump malfunctions and pipeline leaks. The talk stresses the need for a good data rectification strategy and good process models along with a method for presenting the findings to the process operator in a focused and understandable way. A real time expert system is discussed as an effective tool to help provide operators with this type of information. The use of expert systems in the analysis of actual versus predicted results from neural networks and other types of process models is discussed.
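
    The decision-tree style of root-cause reasoning described above can be sketched as a walk over yes/no process checks that terminates in a diagnosis and a suggested best action. The checks, causes and actions below are hypothetical illustrations, not taken from the talk.

```python
# Toy decision tree for root-cause diagnosis of a pump fault.
# Leaves are (root cause, suggested best action) pairs.
TREE = {
    "question": "discharge_pressure_low",
    True: {
        "question": "motor_current_high",
        True:  ("impeller blockage", "shut down and inspect impeller"),
        False: ("suction leak", "check suction-side fittings"),
    },
    False: ("sensor drift", "recalibrate pressure transmitter"),
}

def diagnose(node, readings):
    # Descend the tree by answering each question from process readings.
    while isinstance(node, dict):
        node = node[readings[node["question"]]]
    return node  # (root cause, suggested best action)
```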

  12. Fast and accurate spectral estimation for online detection of partial broken bar in induction motors

    NASA Astrophysics Data System (ADS)

    Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti

    2018-01-01

    In this paper, an online, real-time system is presented for detecting a partial broken rotor bar (BRB) in inverter-fed squirrel-cage induction motors under light-load conditions. With minor modifications, this system can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components is improved by removing the high-amplitude fundamental frequency using an extended-Kalman-filter-based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and sensor cost are minimal, as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7-based embedded target ported through Simulink Real-Time. Evaluation of the detection threshold and of fault detectability under different conditions of load and fault severity is carried out with the empirical cumulative distribution function.
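
    The spectral signature such an estimator searches for sits at the classic twice-slip-frequency sidebands of the supply fundamental. A minimal helper (standard BRB theory, not the authors' code) computes those target frequencies from the estimated slip:

```python
def brb_sidebands(f_supply, slip, k=1):
    # Broken-rotor-bar sidebands appear at (1 ± 2ks) * f_supply,
    # where s is the per-unit slip and k the sideband order.
    return ((1 - 2 * k * slip) * f_supply,
            (1 + 2 * k * slip) * f_supply)
```

    For a 50 Hz supply at 2% slip, the first-order sidebands fall at 48 Hz and 52 Hz, which is why accurate slip estimation is needed to localize the fault component in the spectrum.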

  13. Adaptation of superconducting fault current limiter to high-speed reclosing

    NASA Astrophysics Data System (ADS)

    Koyama, T.; Yanabu, S.

    2009-10-01

    Using a high-temperature superconductor, we constructed and tested a model superconducting fault current limiter (SFCL). In some cases the superconductor may break because of excessive heat generation, so it is desirable to interrupt the current flowing through the superconductor early. We therefore proposed an SFCL based on an electromagnetic repulsion switch, a simple structure composed of a superconductor, a vacuum interrupter and a by-pass coil. With this equipment, the duration of current flow in the superconductor can easily be kept below 0.5 cycle. At the same time, the fault current is readily limited by the large reactance of the parallel coil. The electric power system imposes a high-speed reclosing duty after fault current interruption: once the fault current is interrupted, the back-up breaker is re-closed within 350 ms. The electromagnetic repulsion switch must therefore return to its former state, and the superconductor must recover to the superconducting state, before high-speed reclosing. We thus proposed an SFCL using an electromagnetic repulsion switch that incorporates our new reclosing function. We also studied the recovery time of the superconductor, because it must return to the superconducting state within 350 ms. In this paper, the recovery-time characteristics of the superconducting wire were investigated. We also combined the superconductor with the electromagnetic repulsion switch and carried out a performance test. As a result, high-speed reclosing within 350 ms was proven to be possible.

  14. Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil

    2010-01-01

    We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered on qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on the provided example scenarios, and discuss the issues faced, and lessons learned, from implementing the approach.
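
    The qualitative matching step can be sketched as prefix-matching the observed sequence of measurement deviations against model-predicted fault signatures, retaining only the faults still consistent with the observations. The signature table below is hypothetical, for illustration only.

```python
# Hypothetical fault signatures: each fault maps to the predicted sequence of
# (measurement, deviation direction) events it should produce.
SIGNATURES = {
    "battery_degraded": [("voltage", "-"), ("current", "-")],
    "relay_stuck_open": [("current", "-"), ("voltage", "0")],
    "sensor_bias":      [("voltage", "+")],
}

def consistent_faults(observed):
    """Keep the faults whose predicted deviation sequence begins with the
    deviations observed so far (qualitative event-based isolation)."""
    n = len(observed)
    return sorted(f for f, sig in SIGNATURES.items() if sig[:n] == observed)
```

    As more deviation events arrive, the candidate set shrinks; fault identification then refines the surviving candidates by estimating fault parameters.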

  15. Reduction of ground noise in the transmitter crowbar instrumentation system by the use of baluns and other noise rejection methods

    NASA Technical Reports Server (NTRS)

    Daeges, J.; Bhanji, A.

    1987-01-01

    Electrical noise interference in the transmitter crowbar monitoring instrumentation system creates false sensing of crowbar faults during a crowbar firing. One predominant source of noise interference is the conduction of currents in the instrumentation cable shields. Since these circulating ground noise currents produce noise that is similar to the crowbar fault sensing signals, such noise interference reduces the ability to determine true crowbar faults.

  16. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    DOE PAGES

    Engelmann, Christian; Hukerikar, Saurabh

    2017-09-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage and their performance & power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The resilience patterns and the design framework also enable exploration and evaluation of design alternatives and support optimization of the cost-benefit trade-offs among performance, protection coverage, and power consumption of resilience solutions. The overall goal of this work is to establish a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner despite frequent faults, errors, and failures of various types.

  17. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Hukerikar, Saurabh

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage and their performance & power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The resilience patterns and the design framework also enable exploration and evaluation of design alternatives and support optimization of the cost-benefit trade-offs among performance, protection coverage, and power consumption of resilience solutions. The overall goal of this work is to establish a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner despite frequent faults, errors, and failures of various types.

  18. Real-time fault diagnosis for propulsion systems

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Guo, Ten-Huei; Delaat, John C.; Duyar, Ahmet

    1991-01-01

    Current research toward real time fault diagnosis for propulsion systems at NASA-Lewis is described. The research is being applied to both air breathing and rocket propulsion systems. Topics include fault detection methods including neural networks, system modeling, and real time implementations.

  19. Fault current limiter and alternating current circuit breaker

    DOEpatents

    Boenig, Heinrich J.

    1998-01-01

    A solid-state circuit breaker and current limiter for a load served by an alternating current source having a source impedance, the solid-state circuit breaker and current limiter comprising a thyristor bridge interposed between the alternating current source and the load, the thyristor bridge having four thyristor legs and four nodes, with a first node connected to the alternating current source, and a second node connected to the load. A coil is connected from a third node to a fourth node, the coil having an impedance of a value calculated to limit the current flowing therethrough to a predetermined value. Control means are connected to the thyristor legs for limiting the alternating current flow to the load under fault conditions to a predetermined level, and for gating the thyristor bridge under fault conditions to quickly reduce alternating current flowing therethrough to zero and thereafter to maintain the thyristor bridge in an electrically open condition preventing the alternating current from flowing therethrough for a predetermined period of time.
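
    The limiting action described in the patent amounts to the bridge inserting the coil's impedance in series with the source during a fault; under that simplification, the limited current is just Ohm's law. The sketch below uses hypothetical voltage and impedance values, not figures from the patent.

```python
def limited_fault_current(v_source, z_source, z_coil):
    # With the thyristor bridge conducting, the coil appears in series with
    # the source impedance, so the fault current magnitude is limited to
    # |V| / |Z_source + Z_coil|. Impedances may be complex.
    return abs(v_source) / abs(z_source + z_coil)
```

    Sizing the coil impedance therefore sets the predetermined fault-current level the control means enforces before gating the bridge open.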

  20. Fault current limiter and alternating current circuit breaker

    DOEpatents

    Boenig, H.J.

    1998-03-10

    A solid-state circuit breaker and current limiter are disclosed for a load served by an alternating current source having a source impedance, the solid-state circuit breaker and current limiter comprising a thyristor bridge interposed between the alternating current source and the load, the thyristor bridge having four thyristor legs and four nodes, with a first node connected to the alternating current source, and a second node connected to the load. A coil is connected from a third node to a fourth node, the coil having an impedance of a value calculated to limit the current flowing therethrough to a predetermined value. Control means are connected to the thyristor legs for limiting the alternating current flow to the load under fault conditions to a predetermined level, and for gating the thyristor bridge under fault conditions to quickly reduce alternating current flowing therethrough to zero and thereafter to maintain the thyristor bridge in an electrically open condition preventing the alternating current from flowing therethrough for a predetermined period of time. 9 figs.

  1. Analysis Impact of Distributed Generation Injection to Profile of Voltage and Short-Circuit Fault in 20 kV Distribution Network System

    NASA Astrophysics Data System (ADS)

    Mulyadi, Y.; Sucita, T.; Rahmawan, M. D.

    2018-01-01

    This study was a case study at PT. PLN (Ltd.) in the APJ Bandung area, with the subject being the installation of distributed generation (DG) on 20-kV distribution feeders. The purpose of this study is to find out the effect of DG on changes in the voltage profile and the three-phase short-circuit fault current in the 20-kV distribution system, with load conditions considered balanced. The reason for this research is to know how far DG can improve the voltage profile of the feeder and to what degree DG can increase the three-phase short-circuit fault current on each bus. The method used in this study was to compare the simulation results for power flow and short-circuit faults from the ETAP Power System software with manual calculations. The power-flow simulation before DG installation gave a voltage drop at the end of the feeder of 2.515%, while the three-phase short-circuit fault current at the beginning of the feeder was 13.43 kA. After installation of DG with an injection of 50% DG power, the voltage drop at the end of the feeder was 1.715% and the fault current at the beginning of the network was 14.05 kA. With an injection of 90% DG power, the voltage drop at the end of the feeder was 1.06% and the fault current at the beginning of the network was 14.13 kA.
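
    The manual check used alongside such simulations is the standard bolted three-phase fault formula. As a rough consistency sketch: a fault current of about 13.43 kA on a 20 kV system implies a Thevenin impedance of roughly 0.86 Ω at the fault point (this impedance is back-calculated here, not a figure given in the study).

```python
def three_phase_fault_current_ka(v_ll_kv, z_thevenin_ohm):
    # Bolted three-phase fault: I_sc = V_LL / (sqrt(3) * |Z_th|).
    # With V_LL in kV and Z in ohms, the result is in kA.
    return v_ll_kv / (3 ** 0.5 * abs(z_thevenin_ohm))
```

    Adding DG lowers the effective Thevenin impedance seen from the fault (an extra source in parallel), which is why the simulated fault current rises with the DG injection level.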

  2. Coordinated Fault Tolerance for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack; Bosilca, George; et al.

    2013-04-08

    Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software packages currently available and widely used throughout the HEC domain, and (2) using fault information exchange and coordination to achieve holistic, system-wide fault tolerance, and understanding how to design and implement interfaces for integrating fault tolerance features across multiple layers of the software stack, from the application, math libraries, and programming language runtime to other common system software such as job schedulers, resource managers, and monitoring tools.

  3. Field based geothermal exploration: Structural controls in the Tarutung Basin/North Central Sumatra (Indonesia)

    NASA Astrophysics Data System (ADS)

    Nukman, M.; Moeck, I.

    2012-04-01

    The Tarutung Basin is one of several basins along the prominent Sumatra Fault System (SFS), which represents a dextral strike-slip fault zone segmented into individual fault strands. The basins are located at right-stepping transfers. The Tarutung Basin hosts geothermal manifestations such as hot springs and travertines, indicating a geothermal system with some decent potential in the subsurface. As part of geothermal exploration, the field geology is investigated, focusing on how the structural setting controls the distribution of thermal manifestations. A complex fault pattern is now newly mapped and evidences sinistral faults striking E-W (Silangkitang), normal faults striking SE-NW at the eastern strand of the Tarutung Basin (Sitompul) and normal faults striking NW-SE at the western strand of the basin (Sitaka). These structures form an angle greater than 45° with respect to the current maximum principal stress, which is oriented N-S. Secondary sinistral shear fractures identified as antithetic Riedel shears can be correlated with hot spring locations at Silangkitang, forming an angle of 50° with respect to the current maximum stress. The large angle of the normal-fault and antithetic Riedel shear trends with respect to the current maximum stress direction indicates that the structures have been rotated. Unidentified dextral strike-slip faults might exist at the eastern strand of the Tarutung Basin to accommodate the clockwise rotation between the eastern boundary of the basin and the NW-SE striking normal fault of Panabungan. Normal faults striking parallel to the SFS east of the basin are interpreted as dilatational jogs caused by clockwise-rotated block movement, with sinistral shear along ENE-WSW faults relative to the NW-SE fault trend. Silicified pyroclastics in association with large discharge at hot springs along these NW-SE striking normal faults support this hypothesis.
    As proposed by Nivinkovich (1976) and Nishimura (1986), Sumatra has rotated 20° clockwise over the last two million years due to an increase in the sea-floor spreading rate of the Indian-Australian plate. The combination of the regional clockwise rotation of Sumatra with local clockwise rotation caused by simple shear along the dextral SFS might generate the complex fault pattern which controls the flow of thermal water and the placement of hot springs. Acknowledgements: Deutscher Akademischer Austausch Dienst, DAAD. German Ministry for Education and Research, BMBF. Badan Geologi - KESDM Bandung, Indonesia.

  4. Low Insertion HVDC Circuit Breaker: Magnetically Pulsed Hybrid Breaker for HVDC Power Distribution Protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-01-09

    GENI Project: General Atomics is developing a direct current (DC) circuit breaker that could protect the grid from faults 100 times faster than its alternating current (AC) counterparts. Circuit breakers are critical elements in any electrical system. At the grid level, their main function is to isolate parts of the grid where a fault has occurred—such as a downed power line or a transformer explosion—from the rest of the system. DC circuit breakers must interrupt the system during a fault much faster than AC circuit breakers to prevent possible damage to cables, converters and other grid-level components. General Atomics’ high-voltage DC circuit breaker would react in less than 1/1,000th of a second to interrupt current during a fault, preventing potential hazards to people and equipment.

  5. The effects of transistor source-to-gate bridging faults in complex CMOS gates

    NASA Astrophysics Data System (ADS)

    Visweswaran, G. S.; Ali, Akhtar-Uz-Zaman M.; Lala, Parag K.; Hartmann, Carlos R. P.

    1991-06-01

    A study of the effect of gate-to-source bridging faults in the pull-up section of a complex CMOS gate is presented. The manifestation of these faults depends on the resistance value of the connection causing the bridging. It is shown that such faults manifest themselves either as stuck-at or stuck-open faults and can be detected by tests for stuck-at and stuck-open faults generated for the equivalent logic circuit. It is observed that for transistor channel lengths larger than 1 micron there exists a range of values of the bridging resistance for which the fault behaves as a pseudo-stuck-open fault.

  6. Latest Progress of Fault Detection and Localization in Complex Electrical Engineering

    NASA Astrophysics Data System (ADS)

    Zhao, Zheng; Wang, Can; Zhang, Yagang; Sun, Yi

    2014-01-01

    In research on complex electrical engineering, efficient fault detection and localization schemes are essential to quickly detect and locate faults so that appropriate and timely corrective, mitigating and maintenance actions can be taken. In this paper, under the current measurement precision of PMUs, we put forward a new type of fault detection and localization technology based on fault factor feature extraction. Extensive simulation experiments indicate that, even with disturbances from white Gaussian stochastic noise, the fault detection and localization results based on the fault factor feature extraction principle remain accurate and reliable, which also shows that the technology has strong anti-interference ability and great redundancy.

  7. Intelligent Automation Approach for Improving Pilot Situational Awareness

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    2004-01-01

    Automation in the aviation domain has been increasing for the past two decades. Pilot reaction to automation varies from highly favorable to highly critical depending on both the pilot's background and how effectively the automation is implemented. We describe a user-centered approach for automation that considers the pilot's tasks and his needs related to accomplishing those tasks. Further, we augment rather than replace how the pilot currently fulfills his goals, relying on redundant displays that offer the pilot an opportunity to build trust in the automation. Our prototype system automates the interpretation of hydraulic system faults of the UH-60 helicopter. We describe the problem with the current system and our methodology for resolving it.

  8. Formal and Legal Aspects of Buying and Commissioning Flats

    NASA Astrophysics Data System (ADS)

    Dubas, Sebastian; Nowotarski, Piotr; Milwicz, Roman

    2017-10-01

    Formal and legal aspects of buying flats and their reception are a very current topic and touch a wide group of buyers. Annually in Poland, a great number of flats are sold and put to use. However, a housing purchase requires knowledge of both the construction and the legal aspects each buyer has to encounter. The paper addresses these formal and legal aspects and analyses the accompanying procedure of purchase and reception of housing in Poland. The article presents the principles associated with the acquisition of a dwelling: the process of works reception, removal of detected faults, fault-free reception, transfer of ownership, warranties, guarantees and the possibilities of their enforcement. The contracting parties of the developer agreement are identified. In addition, the entities present in the course of the works, such as the general contractor, are mentioned, owing to his direct influence on the results of the contract terms between developer and buyer. The logical connection between the three parties (buyer-developer-general contractor) is shown and the direct and indirect dependencies are revealed. Existing laws and regulations that govern the relationship between the developer and the buyer of a dwelling are set out, showing the basic rights and responsibilities of each. The article also presents problems resulting from delays in the completion of works through the developer's fault and indicates possible legal paths to follow in order to claim one's rights. Because many of the formal and legal aspects discussed here have their origin in construction works and design issues, the author suggests increased quality control and efficient work organization in order to solve problems before they appear.

  9. A PC based time domain reflectometer for space station cable fault isolation

    NASA Technical Reports Server (NTRS)

    Pham, Michael; McClean, Marty; Hossain, Sabbir; Vo, Peter; Kouns, Ken

    1994-01-01

    Significant problems are faced by astronauts on orbit in the Space Station when trying to locate electrical faults in multi-segment avionics and communication cables. These problems necessitate the development of an automated portable device that will detect and locate cable faults using the pulse-echo technique known as Time Domain Reflectometry. A breadboard time domain reflectometer (TDR) circuit board was designed and developed at the NASA-JSC. The TDR board works in conjunction with a GRiD lap-top computer to automate the fault detection and isolation process. A software program was written to automatically display the nature and location of any possible faults. The breadboard system can isolate open circuit and short circuit faults within two feet in a typical space station cable configuration. Follow-on efforts planned for 1994 will produce a compact, portable prototype Space Station TDR capable of automated switching in multi-conductor cables for high fidelity evaluation. This device has many possible commercial applications, including commercial and military aircraft avionics, cable TV, telephone, communication, information and computer network systems. This paper describes the principle of time domain reflectometry and the methodology for on-orbit avionics utility distribution system repair, utilizing the newly developed device called the Space Station Time Domain Reflectometer (SSTDR).
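
    The pulse-echo calculation underlying TDR is simple: the fault distance is the one-way travel distance of a pulse whose round-trip delay is measured. The sketch below assumes a typical velocity factor of 0.66 (a property of the cable dielectric, not a figure from the paper).

```python
C = 2.998e8  # free-space speed of light, m/s

def fault_distance_m(round_trip_s, velocity_factor=0.66):
    # The pulse travels to the fault and back, hence the divide-by-two;
    # the velocity factor scales c down to the propagation speed in cable.
    return velocity_factor * C * round_trip_s / 2.0
```

    A 10 ns round-trip delay thus corresponds to roughly 1 m of cable, which makes the two-foot localization accuracy quoted above plausible for nanosecond-resolution timing.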

  10. Delay test generation for synchronous sequential circuits

    NASA Astrophysics Data System (ADS)

    Devadas, Srinivas

    1989-05-01

    We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan and synthesis algorithms are presented.

  11. Geometric Analyses of Rotational Faults.

    ERIC Educational Resources Information Center

    Schwert, Donald Peters; Peck, Wesley David

    1986-01-01

    Describes the use of analysis of rotational faults in undergraduate structural geology laboratories to provide students with applications of both orthographic and stereographic techniques. A demonstration problem is described, and an orthographic/stereographic solution and a reproducible black model demonstration pattern are provided. (TW)

  12. A SVM framework for fault detection of the braking system in a high speed train

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Li, Yan-Fu; Zio, Enrico

    2017-03-01

    In April 2015, the number of operating High Speed Trains (HSTs) in the world had reached 3603. An efficient, effective and very reliable braking system is evidently very critical for trains running at a speed around 300 km/h. Failure of a highly reliable braking system is a rare event and, consequently, informative recorded data on fault conditions are scarce. This renders the fault detection problem a classification problem with highly unbalanced data. In this paper, a Support Vector Machine (SVM) framework, including feature selection, feature vector selection, model construction and decision boundary optimization, is proposed for tackling this problem. Feature vector selection can largely reduce the data size and, thus, the computational burden. The constructed model is a modified version of the least square SVM, in which a higher cost is assigned to the error of classification of faulty conditions than the error of classification of normal conditions. The proposed framework is successfully validated on a number of public unbalanced datasets. Then, it is applied for the fault detection of braking systems in HST: in comparison with several SVM approaches for unbalanced datasets, the proposed framework gives better results.
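
    The cost-asymmetric idea, a larger penalty on misclassifying the rare faulty class, can be sketched with a linear least-squares SVM solved in the primal form. This is an illustrative simplification with made-up cost weights, not the paper's exact model.

```python
import numpy as np

def weighted_lssvm_linear(X, y, c_fault=10.0, c_normal=1.0, lam=1e-2):
    """Linear least-squares SVM in the primal, with a higher cost on errors
    for the rare faulty class. Labels y are +1 (faulty) / -1 (normal).
    Returns [w; b]; predict with sign([X, 1] @ wb)."""
    n, d = X.shape
    c = np.where(y > 0, c_fault, c_normal)          # per-sample error costs
    Xb = np.hstack([X, np.ones((n, 1))])            # absorb the bias term
    A = Xb.T @ (c[:, None] * Xb) + lam * np.eye(d + 1)
    return np.linalg.solve(A, Xb.T @ (c * y))       # weighted ridge solution
```

    Raising `c_fault` pushes the decision boundary away from the few faulty samples, trading some false alarms for fewer missed faults, the relevant trade-off when fault data are scarce.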

  13. Robust operative diagnosis as problem solving in a hypothesis space

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy H.

    1988-01-01

    This paper describes an approach that formulates diagnosis of physical systems in operation as problem solving in a hypothesis space. Such a formulation increases robustness by: (1) incremental hypothesis construction via dynamic inputs, (2) reasoning at a higher level of abstraction to construct hypotheses, and (3) partitioning the space by grouping fault hypotheses according to the type of physical system representation and the problem solving techniques used in their construction. It was implemented for a turbofan engine and hydraulic subsystem. Evaluation of the implementation on eight actual aircraft accident cases involving engine faults provided very promising results.

  14. Age constraints for the present fault configuration in the Imperial Valley, California: Evidence for northwestward propagation of the Gulf of California rift system

    NASA Technical Reports Server (NTRS)

    Larsen, Shawn; Reilinger, Robert

    1990-01-01

    Releveling and other geophysical data for the Imperial Valley of southern California suggest that the northern section of the Imperial-Brawley fault system, which includes the Mesquite Basin and Brawley Seismic Zone, is much younger than the 4 to 5 million year age of the valley itself. A minimum age of 3000 years is calculated for the northern segment of the Imperial fault from correlations between surface topography and geodetically observed seismic/interseismic vertical movements. A maximum age of 80,000 years is calculated from displacements in the crystalline basement along the Imperial fault, inferred from seismic refraction surveys. This young age supports recent interpretations of heat flow measurements, which also suggest that the current patterns of seismicity and faults in the Imperial Valley are not long lived. The current fault geometry and basement morphology suggest northwestward growth of the Imperial fault and migration of the Brawley Seismic Zone. It is suggested that this migration is a manifestation of the propagation of the Gulf of California rift system into the North American continent.

  15. An experimental study of the influence of stress history on fault slip during injection of supercritical CO2

    NASA Astrophysics Data System (ADS)

    Cuss, Robert J.; Wiseall, Andrew C.; Tamayo-Mas, Elena; Harrington, Jon F.

    2018-04-01

    The injection of supercritical CO2 into a depleted reservoir will alter the pore pressure of the basin, which, if sufficiently perturbed, could result in fault slip. Therefore, knowledge of the acceptable pressure limits is required in order to maintain fault stability. A two-part laboratory study was conducted on fully saturated kaolinite fault gouge to investigate this issue. Previously, we showed that fault slip occurred once pore pressure within the gouge was sufficient to overcome the normal stress acting on the fault. For kaolinite, this behaviour occurred at a pressure similar to the yield stress. The current study shows that following a slow reduction in the maximum principal stress, as would be expected through changes in effective stress, the reactivation pressure shows a stress memory. Consequently, the pressure necessary to initiate fault slip is similar to that required at the maximum stress encountered. Therefore, fault slip is at least partially controlled by the previous maximum stress and not the current stress state. During the slow reduction in normal stress, the flow characteristics of the fault remain unchanged until pore pressure exceeds shear stress, and do not increase significantly until it exceeds normal stress. This results in fault slip, which slows the rate of flow increase, as shear is an effective self-sealing mechanism. These observations lead to the conclusion that stress history is a vital parameter when considering fault stability.

  16. Study of Electromagnetic Repulsion Switch to High Speed Reclosing and Recover Time Characteristics of Superconductor

    NASA Astrophysics Data System (ADS)

    Koyama, Tomonori; Kaiho, Katsuyuki; Yamaguchi, Iwao; Yanabu, Satoru

    Using a high-temperature superconductor, we constructed and tested a model superconducting fault current limiter (SFCL). The superconductor and a vacuum interrupter acting as the commutation switch were connected in parallel through a bypass coil. When a fault current flows in this equipment, the superconductor quenches and the current is transferred to the parallel coil by the voltage drop across the superconductor. This large current in the parallel coil actuates the magnetic repulsion mechanism of the vacuum interrupter, and the current in the superconductor is broken. With this equipment, the current flow time in the superconductor can easily be minimized; at the same time, the fault current is readily limited by the large reactance of the parallel coil. Because this system has many merits, we introduced the electromagnetic repulsion switch. The electrical power system requires high-speed reclosing after a fault current is interrupted, so the SFCL should recover to the superconducting state before reclosing. However, the superconductor generates heat when it quenches, and recovering the superconducting state takes time; recovery time is therefore a key issue. In this paper, we studied the recovery time of the superconductor and propose an electromagnetic repulsion switch with a reclosing system.

  17. Faults in parts of north-central and western Houston metropolitan area, Texas

    USGS Publications Warehouse

    Verbeek, Earl R.; Ratzlaff, Karl W.; Clanton, Uel S.

    1979-01-01

    Hundreds of residential, commercial, and industrial structures in the Houston metropolitan area have sustained moderate to severe damage owing to their locations on or near active faults. Paved roads have been offset by faults at hundreds of locations, buried pipelines have been distorted by fault movements, and fault-induced gradient changes in drainage lines have raised concern among flood control engineers. Over 150 faults, many of them moving at rates of 0.5 to 2 cm/yr, have been mapped in the Houston area; the number of faults probably far exceeds this figure. This report includes a map of eight faults, in north-central and western Houston, at a scale useful for land-use planning. Seven of the faults are known to be active and have caused considerable damage to structures built on or near them. If the eighth fault is active, it may be of concern to new developments on the west side of Houston. A ninth feature shown on the map is regarded only as a possible fault, as an origin by faulting has not been firmly established. Seismic and drill-hole data for some 40 faults studied in detail by various investigators have verified connections between scarps at the land surface and growth faults in the shallow subsurface. Some scarps, then, are known to be the surface manifestations of faults that have geologically long histories of movement. The degree to which natural geologic processes contribute to current fault movement, however, is unclear, for some of man's activities may play a role in faulting as well. Evidence that current rates of fault movement far exceed average prehistoric rates, and that most offset of the land surface in the Houston area has occurred only within the last 50 years, indirectly suggests that fluid withdrawal may be accelerating or reinitiating movement on pre-existing faults.
    This conclusion, however, is based only on a coincidence in time between increased fault activity and increased rates of withdrawal of water, oil, and gas from subsurface sediments; no cause-and-effect relationship has been demonstrated. An alternative hypothesis is that natural fault movements are characterized by short-term episodicity and that Houston is experiencing the effects of a brief period of accelerated natural fault movement. Available data from monitored faults are insufficient to weigh the relative importance of natural vs. induced fault movements.

  18. Classification of Aircraft Maneuvers for Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data is a reasonable match to known examples of proper operation. In our domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to satisfy. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. We explain where this subsystem fits into our envisioned fault detection system, as well as experiments showing the promise of this classification subsystem.

  19. Use of Fuzzy Logic Systems for Assessment of Primary Faults

    NASA Astrophysics Data System (ADS)

    Petrović, Ivica; Jozsa, Lajos; Baus, Zoran

    2015-09-01

    In electric power systems, grid elements are often subjected to very complex and demanding disturbances or dangerous operating conditions. Determining the initial fault, or the cause of those states, is a difficult task. When a fault occurs, it is often imperative to disconnect the affected grid element from the grid. This paper contains an overview of the possibilities for using fuzzy logic in the assessment of primary faults in the transmission grid. The tool for this task is the SCADA system, which is based on information about currents, voltages, protection device events and circuit breaker statuses in the grid. The functional model, described with membership functions and fuzzy logic systems, is presented in the paper. As input data, the diagnostic system uses information on protection device tripping, circuit breaker states, and measurements of currents and voltages before and after faults.

  20. An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.

    PubMed

    Xiao, Bing; Yin, Shen

    2018-02-01

    This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent approach with a fast reconstruction property is developed, achieved by using an observer technique. The scheme is capable of precisely reconstructing the actual actuator fault. It is shown by Lyapunov stability analysis that the reconstruction error converges to zero in finite time. A reconstruction performance that is both precise and fast is thus provided for the actuator fault. The most important feature of the scheme is that it does not depend on the control law, the dynamic model of the actuator, the fault type, or its time profile. The reconstruction performance and capability of the proposed approach are further validated by simulation and experimental results.

  1. Design of on-board Bluetooth wireless network system based on fault-tolerant technology

    NASA Astrophysics Data System (ADS)

    You, Zheng; Zhang, Xiangqi; Yu, Shijie; Tian, Hexiang

    2007-11-01

    In this paper, Bluetooth wireless data transmission technology is applied to an on-board computer system to realize wireless data transmission between peripherals of the micro-satellite integrated electronic system. In view of the high reliability demanded of a micro-satellite, a design of a Bluetooth wireless network based on fault-tolerant technology is introduced. The reliability of two fault-tolerant systems is first estimated using a Markov model; then the structural design of the fault-tolerant system is introduced. Several protocols are established to make the system operate correctly, and some related problems are listed and analyzed, with emphasis on the fault auto-diagnosis system, active-standby switchover design and data-integrity processing.

  2. Investigation of DC hybrid circuit breaker based on high-speed switch and arc generator

    NASA Astrophysics Data System (ADS)

    Wu, Yifei; Rong, Mingzhe; Wu, Yi; Yang, Fei; Li, Mei; Zhong, Jianying; Han, Guohui; Niu, Chunping; Hu, Yang

    2015-02-01

    A new design of DC hybrid circuit breaker based on high-speed switch (HSS) and arc generator (AG), which can drastically profit from low heat loss in normal state and fast current breaking under fault state, is presented and analyzed in this paper. AG is designed according to the magnetic pinch effect of liquid metal. By utilizing the arc voltage generated across AG, the fault current is rapidly commutated from HSS into parallel connected branch. As a consequence, the arcless open of HSS is achieved. The post-arc conducting resume time (Δ tc) of AG and the commutation original voltage (Uc), two key factors in the commutation process, are investigated experimentally. Particularly, influences of the liquid metal channel diameter (Φ) of AG, fault current rate of rise (di/dt) and Uc on Δ tc are focused on. Furthermore, a suitable Uc is determined during the current commutation process, aiming at the reliable arcless open of HSS and short breaking time. Finally, the fault current breaking test is carried out for the current peak value of 11.8 kA, and the validity of the design is confirmed by the experimental results.

  3. Investigation of DC hybrid circuit breaker based on high-speed switch and arc generator.

    PubMed

    Wu, Yifei; Rong, Mingzhe; Wu, Yi; Yang, Fei; Li, Mei; Zhong, Jianying; Han, Guohui; Niu, Chunping; Hu, Yang

    2015-02-01

    A new design of DC hybrid circuit breaker based on high-speed switch (HSS) and arc generator (AG), which can drastically profit from low heat loss in normal state and fast current breaking under fault state, is presented and analyzed in this paper. AG is designed according to the magnetic pinch effect of liquid metal. By utilizing the arc voltage generated across AG, the fault current is rapidly commutated from HSS into parallel connected branch. As a consequence, the arcless open of HSS is achieved. The post-arc conducting resume time (Δ tc) of AG and the commutation original voltage (Uc), two key factors in the commutation process, are investigated experimentally. Particularly, influences of the liquid metal channel diameter (Φ) of AG, fault current rate of rise (di/dt) and Uc on Δ tc are focused on. Furthermore, a suitable Uc is determined during the current commutation process, aiming at the reliable arcless open of HSS and short breaking time. Finally, the fault current breaking test is carried out for the current peak value of 11.8 kA, and the validity of the design is confirmed by the experimental results.

  4. Digital system bus integrity

    NASA Technical Reports Server (NTRS)

    Eldredge, Donald; Hitt, Ellis F.

    1987-01-01

    This report summarizes and describes the results of a study of current or emerging multiplex data buses as applicable to digital flight systems, particularly with regard to civil aircraft. Technology for the pre-1995 and post-1995 timeframes has been delineated and critiqued relative to the requirements envisioned for those periods. The primary emphasis has been on assured airworthiness of the more prevalent bus types, with attention to attributes such as fault tolerance, environmental susceptibility, and problems under continuing investigation. Additionally, the capacity to certify systems relying on such buses has been addressed.

  5. Ultrareliable fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Webster, L. D.; Slykhouse, R. A.; Booth, L. A., Jr.; Carson, T. M.; Davis, G. J.; Howard, J. C.

    1984-01-01

    It is demonstrated that fault-tolerant computer systems, such as those on the Shuttles, based on redundant, independent operation are a viable alternative in fault-tolerant system design. The ultrareliable fault-tolerant control system (UFTCS) was developed and tested in laboratory simulations of a UH-1H helicopter. UFTCS includes asymptotically stable independent control elements in a parallel, cross-linked system environment. Static redundancy provides the fault tolerance. Polling is performed among the computers, with the results allowing for time-delay variations between channels within tight bounds. Based on laboratory and actual flight data for the helicopter, the probability of a fault in the first 10 hr of flight, given quintuple computer redundancy, was found to be 1 in 290 billion. Two weeks of untended Space Station operations would experience a fault probability of 1 in 24 million. Techniques for avoiding channel divergence problems are identified.

  6. Flux-lock type of superconducting fault current limiters: A comprehensive review

    NASA Astrophysics Data System (ADS)

    Badakhshan, M.; Mousavi G., S. M.

    2018-04-01

    Power systems must be developed and extended to meet the continuous growth in demand for electrical energy. This development, together with the integration of distributed generation (DG) units into power systems, results in higher system capacity; hence, the short-circuit current of the network keeps increasing. Since the discovery of high temperature superconducting (HTS) materials, superconducting fault current limiters (SFCLs) have attracted much attention all over the world. There are different types of SFCLs; the flux-lock type is an important category because of its fault-current-limiting characteristics. This paper presents a comprehensive review of research activities and applications of flux-lock type SFCLs in power systems.

  7. AC HTS Transmission Cable for Integration into the Future EHV Grid of the Netherlands

    NASA Astrophysics Data System (ADS)

    Zuijderduin, R.; Chevtchenko, O.; Smit, J. J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future grid must be capable of transmitting all the connected power. Power generation will be more decentralized, with, for instance, wind parks connected to the grid. Furthermore, future large-scale production units are expected to be installed near coastal regions. This creates potential grid issues, such as large amounts of power to be transmitted to consumers from west to east, and grid stability. High temperature superconductors (HTS) can help solve these grid problems. The advantages of integrating HTS components at Extra High Voltage (EHV) and High Voltage (HV) levels are numerous: more power with lower losses and fewer emissions, intrinsic fault current limiting capability, better control of power flow, reduced footprint, etc. Today's main obstacle is the relatively high price of HTS. Nevertheless, as the price goes down, initial market penetration for several HTS components (e.g. cables, fault current limiters) is expected by the year 2015. In this paper we present a design of an intrinsically compensated EHV HTS cable for future grid integration. The parameters of such a cable providing optimal power transmission in the future network are discussed.

  8. 77 FR 51722 - Airworthiness Directives; The Boeing Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-27

    ... that AD to prevent electrical energy from lightning, hot shorts, or fault current from entering the... that AD to prevent electrical energy from lightning, hot shorts, or fault current from entering the... specifically invite comments on the overall regulatory, economic, environmental, and energy aspects of this...

  9. An imbalance fault detection method based on data normalization and EMD for marine current turbines.

    PubMed

    Zhang, Milu; Wang, Tianzhen; Tang, Tianhao; Benbouzid, Mohamed; Diallo, Demba

    2017-05-01

    This paper proposes an imbalance fault detection method based on data normalization and Empirical Mode Decomposition (EMD) for a variable speed direct-drive Marine Current Turbine (MCT) system. The method is based on the MCT stator current under conditions of wave and turbulence. The goal of this method is to extract the blade imbalance fault feature, which is concealed by the supply frequency and the environment noise. First, a Generalized Likelihood Ratio Test (GLRT) detector is developed and the monitoring variable is selected by analyzing the relationships between the variables. Then, the selected monitoring variable is converted into a time series through data normalization, which makes the imbalance fault characteristic frequency a constant. Finally, the monitoring variable is filtered by the EMD method to eliminate the effect of turbulence. The experiments show that the proposed method is robust against turbulence, through comparisons across different fault severities and turbulence intensities. In comparison with other methods, the experimental results indicate the feasibility and efficacy of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
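
    The paper's GLRT detector operates on the MCT stator current; as a generic illustration of the idea, a one-sample Gaussian GLRT for a mean shift (nominal mean zero, unknown fault mean) can be sketched as follows. The noise level `sigma` and the chi-square threshold are illustrative assumptions, not values from the paper:

```python
def glrt_mean_shift(x, sigma=1.0, threshold=10.83):
    """GLRT sketch for H0: mean 0 vs H1: unknown nonzero mean, with
    known Gaussian noise level sigma.  The MLE of the fault mean is the
    sample mean, so 2*log GLR = n * xbar^2 / sigma^2; 10.83 is roughly
    the chi-square(1) quantile at p = 0.001.  Returns (fault?, statistic)."""
    n = len(x)
    xbar = sum(x) / n
    stat = n * xbar * xbar / sigma ** 2   # 2 * log generalized likelihood ratio
    return stat > threshold, stat
```

    Under H0 the statistic is asymptotically chi-square with one degree of freedom, which is what makes a fixed threshold meaningful.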

  10. A Self-Stabilizing Hybrid Fault-Tolerant Synchronization Protocol

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2015-01-01

    This paper presents a strategy for solving the Byzantine generals problem for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults of various severities, including any number of arbitrary (Byzantine) faulty nodes. The strategy consists of two parts: first, converting Byzantine faults into symmetric faults, and second, using a proven symmetric-fault-tolerant algorithm to solve the general case of the problem. A protocol (algorithm) is also presented that tolerates symmetric faults, provided that there are more good nodes than faulty ones. The solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that its interactions with other nodes are restricted to defined links and interfaces. The solution does not rely on assumptions about the initial state of the system, and no central clock nor centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. A mechanical verification of a proposed protocol is also presented. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period.
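
    The second part of the strategy relies on the fact that, once faults have been made symmetric, a strict majority of good nodes forces a common decision. A minimal sketch of that voting step (not the full protocol, which also performs the Byzantine-to-symmetric conversion and self-stabilization):

```python
from collections import Counter

def majority_vote(values):
    """One round of symmetric-fault agreement: because a symmetric
    faulty node sends the same (possibly wrong) value to everyone,
    every good node sees the same multiset of values, and a strict
    majority of good nodes yields the same decision at every node."""
    value, count = Counter(values).most_common(1)[0]
    if count > len(values) // 2:
        return value
    return None  # no strict majority; a real protocol would retry
```

    With more good nodes than faulty ones, the good nodes' common value always wins the vote, which is exactly the precondition stated in the abstract.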

  11. Simulative and experimental investigation on stator winding turn and unbalanced supply voltage fault diagnosis in induction motors using Artificial Neural Networks.

    PubMed

    Lashkari, Negin; Poshtan, Javad; Azgomi, Hamid Fekri

    2015-11-01

    The three-phase shift between line current and phase voltage of induction motors can be used as an efficient fault indicator to detect and locate inter-turn stator short-circuit (ITSC) faults. However, unbalanced supply voltage is one of the contributing factors that inevitably affect stator currents and therefore the three-phase shift. Thus, a method is needed that can identify whether the unbalance of the three currents is caused by an ITSC or a supply voltage fault. This paper presents a feedforward multilayer-perceptron Neural Network (NN), trained by back propagation, based on monitoring the negative sequence voltage and the three-phase shift. The data required for training and testing the NN are generated using a simulated model of the stator. Experimental results are presented to verify the superior accuracy of the proposed method. Copyright © 2015. Published by Elsevier Ltd.

  12. Asset surveillance system: apparatus and method

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2007-01-01

    System and method for providing surveillance of an asset comprised of numerically fitting at least one mathematical model to obtained residual data correlative to asset operation; storing at least one mathematical model in a memory; obtaining a current set of signal data from the asset; retrieving at least one mathematical model from the memory, using the retrieved mathematical model in a sequential hypothesis test for determining if the current set of signal data is indicative of a fault condition; determining an asset fault cause correlative to a determined indication of a fault condition; providing an indication correlative to a determined fault cause, and an action when warranted. The residual data can be mode partitioned, a current mode of operation can be determined from the asset, and at least one mathematical model can be retrieved from the memory as a function of the determined mode of operation.
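
    The "sequential hypothesis test" step above can be illustrated with a classic Wald sequential probability ratio test (SPRT) on residuals: evidence accumulates sample by sample until one hypothesis can be accepted. The fault mean `mu1` and the error rates `alpha`/`beta` below are illustrative assumptions, not details from the patented method:

```python
import math

def sprt(residuals, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald SPRT sketch on Gaussian residuals: H0 (nominal, mean mu0)
    vs H1 (fault, mean mu1).  Accumulates the log-likelihood ratio and
    stops at the first threshold crossing.  Returns 'fault', 'nominal',
    or 'undecided' if the data run out first."""
    upper = math.log((1 - beta) / alpha)  # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))  # accept H0 at or below this
    llr = 0.0
    for r in residuals:
        # log-likelihood-ratio increment for one Gaussian sample
        llr += (mu1 - mu0) * (r - (mu0 + mu1) / 2) / sigma ** 2
        if llr >= upper:
            return "fault"
        if llr <= lower:
            return "nominal"
    return "undecided"
```

    Unlike a fixed-sample test, the SPRT decides as soon as the accumulated evidence is strong enough, which suits online surveillance of a stream of residuals.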

  13. Planning effectiveness may grow on fault trees.

    PubMed

    Chow, C W; Haddad, K; Mannino, B

    1991-10-01

    The first step of a strategic planning process--identifying and analyzing threats and opportunities--requires subjective judgments. By using an analytical tool known as a fault tree, healthcare administrators can reduce the unreliability of subjective decision making by creating a logical structure for problem solving and decision making. A case study of 11 healthcare administrators showed that an analysis technique called prospective hindsight can add to a fault tree's ability to improve a strategic planning process.
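
    Although the paper applies fault trees to strategic planning judgments, the underlying logical structure is the standard one: basic events combined through AND/OR gates. A minimal sketch of top-event probability evaluation under the usual independence assumption (the tree encoding is an illustrative choice, not the paper's notation):

```python
def fault_tree_prob(node):
    """Probability of the top event of a fault tree, assuming independent
    basic events.  A node is either a float leaf probability or a tuple
    ('AND' | 'OR', [children])."""
    if isinstance(node, float):
        return node
    kind, children = node
    probs = [fault_tree_prob(c) for c in children]
    if kind == "AND":                 # all children must occur
        p = 1.0
        for q in probs:
            p *= q
        return p
    # OR: at least one child occurs -> complement of none occurring
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p
```

    For example, an OR gate over two 0.1 and 0.2 events gives 0.28, and AND-ing that with a 0.5 event gives 0.14.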

  14. High level organizing principles for display of systems fault information for commercial flight crews

    NASA Technical Reports Server (NTRS)

    Rogers, William H.; Schutte, Paul C.

    1993-01-01

    Advanced fault management aiding concepts for commercial pilots are being developed in a research program at NASA Langley Research Center. One aim of this program is to re-evaluate current design principles for display of fault information to the flight crew: (1) from a cognitive engineering perspective and (2) in light of the availability of new types of information generated by advanced fault management aids. The study described in this paper specifically addresses principles for organizing fault information for display to pilots based on their mental models of fault management.

  15. Foundations for Protecting Renewable-Rich Distribution Systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Abraham; Brahma, Sukumar; Ranade, Satish

    High proliferation of Inverter Interfaced Distributed Energy Resources (IIDERs) into the electric distribution grid introduces new challenges to the protection of such systems. This is because existing protection systems are designed with two assumptions: (1) the system is single-sourced, resulting in unidirectional fault current, and (2) fault currents are easily detectable due to much higher magnitudes compared to load currents. Because most renewables interface with the grid through inverters, and inverters restrict their current output to levels close to the full load current, both assumptions are no longer valid: the system becomes multi-sourced, and overcurrent-based protection does not work. The primary scope of this study is to analyze the response of a grid-tied inverter to different faults in the grid, leading to new guidelines on protecting renewable-rich distribution systems.

  16. Integrated Hardware and Software for No-Loss Computing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    When an algorithm is distributed across multiple threads executing on many distinct processors, the loss of one of those threads or processors can potentially result in the total loss of all the incremental results up to that point. When the implementation is massively hardware-distributed, the probability of a hardware failure during the course of a long execution is potentially high. Traditionally, this problem has been addressed by establishing checkpoints where the current state of some or all of the execution is saved. In the event of a failure, this state information can be used to recompute that point in the execution and resume the computation from there. A serious problem that arises when one distributes a problem across multiple threads and physical processors is that the likelihood increases of the algorithm failing through no fault of the scientist, as a result of hardware faults coupled with operating system problems. With good reason, scientists expect their computing tools to serve them and not the other way around. What is novel here is a unique combination of hardware and software that reformulates an application into a monolithic structure that can be monitored in real time and dynamically reconfigured in the event of a failure. This unique reformulation of hardware and software will provide advanced aeronautical technologies to meet the challenges of next-generation systems in aviation, for civilian and scientific purposes, in our atmosphere and in the atmospheres of other worlds. In particular, with respect to NASA's manned flight to Mars, this technology addresses the critical requirements for improving safety and increasing reliability of manned spacecraft.
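
    The traditional checkpoint-and-resume scheme described above can be sketched as follows; the pickle-based state format and the atomic write-then-rename step are illustrative choices, not details from the source:

```python
import os
import pickle
import tempfile

def run_with_checkpoints(total_steps, step_fn, path):
    """Resume a long computation from its last checkpoint after a crash.
    State is reloaded if a checkpoint file exists, and saved atomically
    after every step, so at most one step of work is ever lost."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            step, state = pickle.load(f)   # resume from the saved point
    else:
        step, state = 0, None              # fresh start
    while step < total_steps:
        state = step_fn(step, state)
        step += 1
        # write-then-rename so a crash never leaves a corrupt checkpoint
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "wb") as f:
            pickle.dump((step, state), f)
        os.replace(tmp, path)
    return state
```

    A rerun after completion simply reloads the final checkpoint and returns immediately, which is also what makes recovery after a mid-run crash transparent to the caller.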

  17. Neural network application to comprehensive engine diagnostics

    NASA Technical Reports Server (NTRS)

    Marko, Kenneth A.

    1994-01-01

    We have previously reported on the use of neural networks for the detection and identification of faults in complex microprocessor-controlled powertrain systems. The data analyzed in those studies consisted of the full spectrum of signals passing between the engine and the real-time microprocessor controller. The specific task of the classification system was to classify system operation as nominal or abnormal and to identify the fault present. The primary concern in earlier work was the identification of faults in sensors or actuators in the powertrain system as it was exercised over its full operating range. The use of data from a variety of sources, each contributing some potentially useful information to the classification task, is commonly referred to as sensor fusion and typifies the type of problem successfully addressed using neural networks. In this work we explore the application of neural networks to a different diagnostic problem: the diagnosis of faults in newly manufactured engines, and the utility of neural networks for process control.

  18. Second-order sliding mode control for DFIG-based wind turbines fault ride-through capability enhancement.

    PubMed

    Benbouzid, Mohamed; Beltran, Brice; Amirat, Yassine; Yao, Gang; Han, Jingang; Mangel, Hervé

    2014-05-01

    This paper deals with the fault ride-through capability assessment of a doubly fed induction generator-based wind turbine using a high-order sliding mode control. Indeed, it has recently been suggested that sliding mode control is a solution of choice to the fault ride-through problem. In this context, this paper proposes a second-order sliding mode as an improved solution that handles the classical sliding mode chattering problem. Indeed, the main and attractive features of high-order sliding modes are robustness against external disturbances, grid faults in particular, and chattering-free behavior (no extra mechanical stress on the wind turbine drive train). Simulations using the NREL FAST code on a 1.5-MW wind turbine are carried out to evaluate the ride-through performance of the proposed high-order sliding mode control strategy in case of grid frequency variations and unbalanced voltage sags. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Reliability of Fault Tolerant Control Systems. Part 2

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2000-01-01

    This paper reports Part II of a two-part effort intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and the high risks associated with decisions in redundancy management due to multiple sources of uncertainty and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that enhanced overall system reliability can be achieved with a control law of superior robustness, an estimator of higher resolution, and a less stringent control performance requirement.

  20. Modeling fault diagnosis as the activation and use of a frame system. [for pilot problem-solving rating

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.; Giffin, Walter C.; Rockwell, Thomas H.; Thomas, Mark

    1986-01-01

    Twenty pilots with instrument flight ratings were asked to perform a fault-diagnosis task for which they had relevant domain knowledge. The pilots were asked to think out loud as they requested and interpreted information. Performances were then modeled as the activation and use of a frame system. Cognitive biases, memory distortions and losses, and failures to correctly diagnose the problem were studied in the context of this frame system model.

  1. An Analytic Training Effectiveness Analysis for a CTEA Update

    DTIC Science & Technology

    1977-11-01

    minutes to insert a fault in the actual hardware and an additional 5 minutes to remove the fault. This manual insertion of faults, therefore, will cut an...organizational maintenance man for the performance of his tasks. Thus, feedback would have to be provided by an instructor observing every...use information regarding problem parameters and student performance (time, errors). The problems in handling such data manually are such that

  2. Global strike-slip fault distribution on Enceladus reveals mostly left-lateral faults

    NASA Astrophysics Data System (ADS)

    Martin, E. S.; Kattenhorn, S. A.

    2013-12-01

    Within the outer solar system, normal faults are a dominant tectonic feature; however, strike-slip faults have played a role in modifying the surfaces of many icy bodies, including Europa, Ganymede, and Enceladus. Large-scale tectonic deformation in icy shells develops in response to stresses caused by a range of mechanisms including polar wander, despinning, volume changes, orbital recession/decay, diurnal tides, and nonsynchronous rotation (NSR). Icy shells often preserve this record of tectonic deformation as patterns of fractures that can be used to identify the source of stress responsible for creating the patterns. Previously published work on Jupiter's moon Europa found that right-lateral strike-slip faults predominantly formed in the southern hemisphere and left-lateral strike-slip faults in the northern hemisphere. This pattern suggested they were formed in the past by stresses induced by diurnal tidal forcing, and were then rotated into their current longitudinal positions by NSR. We mapped the distribution of strike-slip faults on Enceladus and used kinematic indicators, including tailcracks and en echelon fractures, to determine their sense of slip. Tailcracks are secondary fractures that form as a result of concentrations of stress at the tips of slipping faults with geometric patterns dictated by the slip sense. A total of 31 strike-slip faults were identified, nine of which were right-lateral faults, all distributed in a seemingly random pattern across Enceladus's surface, in contrast to Europa. Additionally, there is a dearth of strike-slip faults within the tectonized terrains centered at 90°W and within the polar regions north and south of 60°N and 60°S, respectively. The lack of strike-slip faults in the north polar region may be explained, in part, by limited data coverage. The south polar terrain (SPT), characterized by the prominent tiger stripes and south polar dichotomy, yielded no discrete strike-slip faults. 
This does not suggest that the SPT is devoid of shear: previous work has indicated that the tiger stripes may be undergoing strike-slip motions and the surrounding regions may be experiencing shear. The fracture patterns and geologic activity within the SPT have been previously documented to be the result of stresses induced by both NSR and diurnal tidal deformation. As these same mechanisms are the main controls on strike-slip fault patterns on Europa, the lack of a match between strike-slip patterns on Europa and Enceladus is intriguing. The pattern of strike-slip faults on Enceladus suggests a different combination of stress mechanisms is required to produce the observed distributions. We will present models of global stress mechanisms to consider how the global-scale pattern of strike-slip faults on Enceladus may have been produced. This problem will be investigated further by measuring the angles at which tailcracks have formed on Enceladus. Tailcracks produced by simple shear form at 70.5° to the fault. Any deviation from this angle indicates some ratio of concomitant shear and dilation, which may provide insights into elucidating the stresses controlling strike-slip formation on Enceladus.

  3. A Novel Arc Fault Detector for Early Detection of Electrical Fires

    PubMed Central

    Yang, Kai; Zhang, Rencheng; Yang, Jianhong; Liu, Canhua; Chen, Shouhong; Zhang, Fujiang

    2016-01-01

    Arc faults can produce very high temperatures and can easily ignite combustible materials; thus, they represent one of the most important causes of electrical fires. The application of arc fault detection, as an emerging early fire detection technology, is required by the National Electrical Code to reduce the occurrence of electrical fires. However, the concealment, randomness and diversity of arc faults make them difficult to detect. To improve the accuracy of arc fault detection, a novel arc fault detector (AFD) is developed in this study. First, an experimental arc fault platform is built to study electrical fires. A high-frequency transducer and a current transducer are used to measure typical load signals of arc faults and normal states. After the common features of these signals are studied, high-frequency energy and current variations are extracted as an input eigenvector for use by an arc fault detection algorithm. Then, the detection algorithm based on a weighted least squares support vector machine is designed and successfully applied in a microprocessor. Finally, an AFD is developed. The test results show that the AFD can detect arc faults in a timely manner and interrupt the circuit power supply before electrical fires can occur. The AFD is not influenced by cross talk or transient processes, and the detection accuracy is very high. Hence, the AFD can be installed in low-voltage circuits to monitor circuit states in real-time to facilitate the early detection of electrical fires. PMID:27070618

  4. Robust fault tolerant control based on sliding mode method for uncertain linear systems with quantization.

    PubMed

    Hao, Li-Ying; Yang, Guang-Hong

    2013-09-01

    This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with, under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of design parameters. Compared with existing results, the derived inequality condition provides stronger fault tolerance and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a structural-acoustic model of a rocket fairing. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Fuzzy Inference System Approach for Locating Series, Shunt, and Simultaneous Series-Shunt Faults in Double Circuit Transmission Lines

    PubMed Central

    Swetapadma, Aleena; Yadav, Anamika

    2015-01-01

    Many schemes have been reported for shunt fault location estimation, but fault location estimation for series or open conductor faults has not been dealt with so far. Existing numerical relays only detect an open conductor (series) fault and indicate the faulty phase(s), but they are unable to locate the series fault, so the repair crew needs to patrol the complete line to find its location. In this paper, fuzzy based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series-shunt faults. The fault simulation studies and fault location algorithm have been developed using Matlab/Simulink. Synchronized phasors of voltage and current signals from both ends of the line have been used as input to the proposed fuzzy based fault location scheme. The percentage error in location is within 1% for series faults and within 5% for shunt faults for all the tested fault cases. Validation of the location error is done using a Chi-square test at both the 1% and 5% levels of significance. PMID:26413088

  6. Automatic Detection of Electric Power Troubles (ADEPT)

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint; Brady, Mike; Ford, Donnie

    1988-01-01

    ADEPT is an expert system that integrates knowledge from three different suppliers to offer an advanced fault-detection system, and is designed for two modes of operation: real-time fault isolation and simulated modeling. Real-time fault isolation of components is accomplished on a power system breadboard through the Fault Isolation Expert System (FIES II) interface with a rule system developed in-house. Faults are quickly detected and displayed, and the rules and chain of reasoning are optionally provided on a laser printer. This system consists of a simulated Space Station power module using direct-current power supplies for solar arrays on three power busses. For tests of the system's ability to locate faults inserted via switches, loads are configured by an INTEL microcomputer and the Symbolics artificial intelligence development system. As these loads are resistive in nature, Ohm's Law is used as the basis for rules by which faults are located. The three-bus system can correct faults automatically where there is a surplus of power available on any of the three busses. Techniques developed and used can be applied readily to other control systems requiring rapid intelligent decisions. Simulated modeling, used for theoretical studies, is implemented using a modified version of Kennedy Space Center's KATE (Knowledge-Based Automatic Test Equipment), FIES II windowing, and an ADEPT knowledge base. A load scheduler and a fault recovery system are currently under development to support both modes of operation.
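
    The FIES II rule base itself is not given in the abstract; as a minimal sketch of the Ohm's-law reasoning it describes for resistive loads (bus voltage, load resistance, and the 10% tolerance are hypothetical values, not ADEPT's):

```python
def check_load(bus_voltage, measured_current, nominal_resistance, tol=0.10):
    """Flag a resistive load whose measured current disagrees with
    Ohm's law: expected I = V / R. A deviation beyond `tol` suggests
    a short (current too high) or an open/degraded load (too low).
    """
    expected = bus_voltage / nominal_resistance
    if measured_current > expected * (1 + tol):
        return "short-circuit suspected"
    if measured_current < expected * (1 - tol):
        return "open or degraded load suspected"
    return "nominal"

# A 120 ohm load on a 120 V bus should draw about 1 A.
print(check_load(120.0, 1.02, 120.0))
print(check_load(120.0, 2.50, 120.0))
```

    A rule engine scales this idea up by evaluating one such check per configured load and chaining the results into a fault hypothesis.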

  7. Fault-tolerant cooperative output regulation for multi-vehicle systems with sensor faults

    NASA Astrophysics Data System (ADS)

    Qin, Liguo; He, Xiao; Zhou, D. H.

    2017-10-01

    This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed in three steps. A cooperative output regulation (COR) controller is designed based on the internal model principle when there are no sensor faults, and a sufficient condition for the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators is proposed to isolate and estimate sensor faults of each follower. Unlike current distributed fault diagnosis for multi-vehicle systems, the presented decentralised fault diagnosis scheme reduces the communication and computation load by using only the information of the vehicle itself. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, simulation results demonstrate the effectiveness of the FTCOR controller.
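
    The existence condition rests on a DARE solution. As a self-contained toy, not the paper's multi-vehicle model, the scalar DARE can be solved by fixed-point iteration (system numbers below are illustrative assumptions):

```python
def solve_dare_scalar(a, b, q, r, iters=200):
    """Fixed-point iteration of the scalar discrete-time algebraic
    Riccati equation:  P = a^2*P - (a*b*P)^2 / (r + b^2*P) + q.
    For a scalar stabilizable/detectable pair this contraction
    converges to the unique stabilizing solution.
    """
    p = q
    for _ in range(iters):
        p = a * a * p - (a * b * p) ** 2 / (r + b * b * p) + q
    return p

# Unstable toy plant x+ = 1.2 x + u, unit weights.
a, b, q, r = 1.2, 1.0, 1.0, 1.0
p = solve_dare_scalar(a, b, q, r)
k = b * p * a / (r + b * b * p)   # optimal feedback gain
print(abs(a - b * k) < 1.0)       # closed loop x+ = (a - b*k) x is stable
```

    In the matrix case the same role is played by a numerical DARE solver; the scalar recursion above is only meant to make the existence condition concrete.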

  8. Adaptive sensor-fault tolerant control for a class of multivariable uncertain nonlinear systems.

    PubMed

    Khebbache, Hicham; Tadjine, Mohamed; Labiod, Salim; Boulkroune, Abdesselem

    2015-03-01

    This paper deals with the active fault tolerant control (AFTC) problem for a class of multiple-input multiple-output (MIMO) uncertain nonlinear systems subject to sensor faults and external disturbances. The proposed AFTC method can tolerate three additive (bias, drift and loss of accuracy) and one multiplicative (loss of effectiveness) sensor faults. By employing backstepping technique, a novel adaptive backstepping-based AFTC scheme is developed using the fact that sensor faults and system uncertainties (including external disturbances and unexpected nonlinear functions caused by sensor faults) can be on-line estimated and compensated via robust adaptive schemes. The stability analysis of the closed-loop system is rigorously proven using a Lyapunov approach. The effectiveness of the proposed controller is illustrated by two simulation examples. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  9. SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis Smith; James Knudsen

    As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation (or, in this case, the failure) of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and of how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude their use on real-world problems.
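
    For identical, independent assemblies, the N-out-of-M gate the authors feed the thruster assemblies into reduces to a binomial sum; a minimal sketch (the per-assembly failure probability is an illustrative assumption, not a SAPHIRE input):

```python
from math import comb

def n_out_of_m_failure(p_fail, m, n_required):
    """Probability that fewer than `n_required` of `m` identical,
    independent assemblies survive, i.e., that an N-out-of-M
    success gate fails.
    """
    p_ok = 1.0 - p_fail
    # Sum P(exactly k assemblies survive) for k below the criterion.
    return sum(comb(m, k) * p_ok**k * p_fail**(m - k)
               for k in range(n_required))

# Illustrative phase: 5 thruster assemblies, at least 3 required,
# each failing with probability 0.05 during the phase.
print(n_out_of_m_failure(0.05, 5, 3))
```

    Fault tree tools evaluate the same gate via cut sets, which also accommodates non-identical assemblies and common-cause terms that this closed form ignores.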

  10. Simultaneous fault detection and control design for switched systems with two quantized signals.

    PubMed

    Li, Jian; Park, Ju H; Ye, Dan

    2017-01-01

    The problem of simultaneous fault detection and control design for switched systems with two quantized signals is presented in this paper. Dynamic quantizers are employed before the output is passed to the fault detector and before the control input is transmitted to the switched system. Taking the quantization errors into account, the robust performance for this kind of system is given. Furthermore, sufficient conditions for the existence of the fault detector/controller are presented in the framework of linear matrix inequalities, and the fault detector/controller gains and the supremum of the quantizer range are derived by a convex optimization method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Event-Triggered Fault Detection of Nonlinear Networked Systems.

    PubMed

    Li, Hongyi; Chen, Ziran; Wu, Ligang; Lam, Hak-Keung; Du, Haiping

    2017-04-01

    This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine the transmission of the signal. A fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for filtering analysis. Furthermore, sufficient conditions are represented in terms of sum of squares (SOSs) and can be solved by SOS tools in MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.

  12. Device for limiting single phase ground fault of mining machines

    NASA Astrophysics Data System (ADS)

    Fediuk, R. S.; Stoyushko, N. Yu; Yevdokimova, Yu G.; Smoliakov, A. K.; Batarshin, V. O.; Timokhin, R. A.

    2017-10-01

    The paper shows the causes and consequences of the single-phase ground fault. Among the variety of devices for limiting the single-phase ground fault current, it was found that the most effective are Petersen coils with different switching circuits. Measuring the network capacitance is of great importance in this case, and a number of capacitance-measurement options are presented. A closer look is taken at the device for limiting the single-phase short-circuit current, developed at the Far Eastern Federal University under the direction of Dr. G.E. Kuvshinov. The calculation of single-phase short-circuit currents in the electrical network, both without and with compensation of the capacitive current, is carried out. Simulation of a single-phase short circuit in a network with the proposed device is conducted.

  13. Optimal Management of Redundant Control Authority for Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Ju, Jianhong

    2000-01-01

    This paper is intended to demonstrate the feasibility of a solution to a fault tolerant control problem. It explains, through a numerical example, the design and operation of a novel scheme for fault tolerant control. The fundamental principle of the scheme was formalized in [5] based on the notion of normalized nonspecificity. The novelty lies in the use of a reliability criterion for redundancy management, which leads to a high overall system reliability.

  14. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: (a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; (b) dynamic propagation of earthquake ruptures along rough faults; (c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
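
    The slip-weakening friction law used in the experiments is the standard piecewise-linear relation; a minimal sketch (parameter values are illustrative, of the order used in SCEC benchmark problems, not taken from the paper):

```python
def slip_weakening_mu(slip, mu_s=0.6, mu_d=0.3, d_c=0.4):
    """Linear slip-weakening friction coefficient: drops from the
    static value mu_s to the dynamic value mu_d over the critical
    slip distance d_c (meters), then stays at mu_d.
    """
    if slip >= d_c:
        return mu_d
    return mu_s - (mu_s - mu_d) * slip / d_c

# Fault strength is then mu(slip) times the effective normal stress.
print(slip_weakening_mu(0.0))   # static friction, before weakening
print(slip_weakening_mu(1.0))   # fully weakened, slip beyond d_c
```

    The kink at d_c and the sign-dependent shear traction are what make enforcing such laws in a provably stable way nontrivial, which is the point of the paper's weak enforcement via penalties.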

  15. Adding Fault Tolerance to NPB Benchmarks Using ULFM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parchman, Zachary W; Vallee, Geoffroy R; Naughton III, Thomas J

    2016-01-01

    In the world of high-performance computing, fault tolerance and application resilience are becoming some of the primary concerns because of increasing hardware failures and memory corruptions. While the research community has been investigating various options, from system-level solutions to application-level solutions, standards such as the Message Passing Interface (MPI) are also starting to include such capabilities. The current proposal for MPI fault tolerance is centered around the User-Level Failure Mitigation (ULFM) concept, which provides means for fault detection and recovery of the MPI layer. This approach does not address application-level recovery, which is currently left to application developers. In this work, we present a modification of some of the benchmarks of the NAS parallel benchmark (NPB) suite to include support of the ULFM capabilities as well as application-level strategies and mechanisms for application-level failure recovery. As such, we present: (i) an application-level library to checkpoint and restore data, (ii) extensions of NPB benchmarks for fault tolerance based on different strategies, (iii) a fault injection tool, and (iv) some preliminary results that show the impact of such fault tolerant strategies on the application execution.

  16. Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults

    NASA Astrophysics Data System (ADS)

    Abdelrahman, E. M.; Essa, K. S.

    2015-02-01

    We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient related to the thickness and density contrast of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, after estimating the depth and dip angle, the amplitude coefficient is determined using a linear equation. This method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained when errors are given in the zero-anomaly distance and the anomaly value at the origin, and even when the origin is determined approximately. In the case of practical data (Bouguer anomaly over Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.

  17. Faults Discovery By Using Mined Data

    NASA Technical Reports Server (NTRS)

    Lee, Charles

    2005-01-01

    Fault discovery in complex systems consists of model based reasoning, fault tree analysis, rule based inference methods, and other approaches. Model based reasoning builds models for the systems either by mathematical formulation or by experimental modeling. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data fed back from the system, and decisions are made based on threshold values using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
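
    The mapping from a fault tree's threshold checks to an initial decision tree can be sketched as nested threshold rules; sensor names and limits below are hypothetical, not ISS telemetry:

```python
def diagnose(telemetry):
    """Threshold rules written as nested decisions, mirroring how a
    fault tree's threshold checks seed the initial decision tree.
    Keys and limits are hypothetical values for illustration.
    """
    if telemetry["pump_pressure"] < 180.0:        # psi, lower limit
        if telemetry["pump_current"] < 2.0:       # amps
            return "pump power loss"
        return "suspected leak"
    if telemetry["loop_temperature"] > 45.0:      # deg C, upper limit
        return "heat exchanger fault"
    return "nominal"

print(diagnose({"pump_pressure": 150.0, "pump_current": 1.2,
                "loop_temperature": 30.0}))
```

    A learned decision tree refines such hand-set thresholds from mission data, which is what lets the approach keep up with real-time feeds.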

  18. Inferring Fault Frictional and Reservoir Hydraulic Properties From Injection-Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Jagalur-Mohan, Jayanth; Jha, Birendra; Wang, Zheng; Juanes, Ruben; Marzouk, Youssef

    2018-02-01

    Characterizing the rheological properties of faults and the evolution of fault friction during seismic slip are fundamental problems in geology and seismology. Recent increases in the frequency of induced earthquakes have intensified the need for robust methods to estimate fault properties. Here we present a novel approach for estimation of aquifer and fault properties, which combines coupled multiphysics simulation of injection-induced seismicity with adaptive surrogate-based Bayesian inversion. In a synthetic 2-D model, we use aquifer pressure, ground displacements, and fault slip measurements during fluid injection to estimate the dynamic fault friction, the critical slip distance, and the aquifer permeability. Our forward model allows us to observe nonmonotonic evolutions of shear traction and slip on the fault resulting from the interplay of several physical mechanisms, including injection-induced aquifer expansion, stress transfer along the fault, and slip-induced stress relaxation. This interplay provides the basis for a successful joint inversion of induced seismicity, yielding well-informed Bayesian posterior distributions of dynamic friction and critical slip. We uncover an inverse relationship between dynamic friction and critical slip distance, which is in agreement with the small dynamic friction and large critical slip reported during seismicity on mature faults.

  19. Smart intimation and location of faults in distribution system

    NASA Astrophysics Data System (ADS)

    Hari Krishna, K.; Srinivasa Rao, B.

    2018-04-01

    Locating faults in the distribution system is one of the most complicated problems faced today. Identifying the location and severity of a fault within a short time is required to provide continuous power supply, but fault identification and the transfer of that information to the operator are the biggest challenges in the distribution network. This paper proposes a fault location method for the distribution system based on an Arduino Nano and a GSM module with a flame sensor. The main idea is to locate a fault in the distribution transformer by sensing the arc coming out of the fuse element. Well-operated transmission and distribution systems play a key role in uninterrupted power supply, so whenever a fault occurs the time taken to locate and eliminate it has to be reduced. Under faulty conditions, the system automatically sends an alert message to the operator about the abnormal condition near the transformer, the site code, and its exact location for possible power restoration.

  20. Detection of faults in rotating machinery using periodic time-frequency sparsity

    NASA Astrophysics Data System (ADS)

    Ding, Yin; He, Wangpeng; Chen, Binqiang; Zi, Yanyang; Selesnick, Ivan W.

    2016-11-01

    This paper addresses the problem of extracting periodic oscillatory features from vibration signals for detecting faults in rotating machinery. To extract the feature, we propose an approach in the short-time Fourier transform (STFT) domain, where the periodic oscillatory feature manifests itself as a relatively sparse grid. To estimate the sparse grid, we formulate an optimization problem using customized binary weights in the regularizer, where the weights are formulated to promote periodicity. To solve the proposed optimization problem, we develop an augmented Lagrangian majorization-minimization algorithm, which combines the split augmented Lagrangian shrinkage algorithm (SALSA) with majorization-minimization (MM) and is guaranteed to converge for both convex and non-convex formulations. As examples, the proposed approach is applied to simulated data and used as a tool for diagnosing faults in bearings and gearboxes on real data, and is compared to some state-of-the-art methods. The results show that the proposed approach can effectively detect and extract the periodic oscillatory features.
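
    The weighted STFT-domain algorithm itself is not reproduced here; the sparsity-promoting building block shared by SALSA/MM-type shrinkage methods is the soft-threshold operator, sketched below on plain numbers (the threshold value is an illustrative assumption):

```python
def soft_threshold(x, t):
    """Proximal operator of t*|x|: shrink x toward zero by t.
    Values with |x| <= t are set exactly to zero, which is how
    l1-regularized methods produce sparse coefficient grids.
    """
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

# For the denoising problem min_x 0.5*(y - x)^2 + lam*|x|, the
# solution is soft_threshold(y, lam): small (noise-like) entries are
# zeroed and large (feature) entries are kept, per STFT coefficient.
noisy = [0.05, -0.8, 0.02, 1.3, -0.04]
denoised = [soft_threshold(v, 0.1) for v in noisy]
print(denoised)
```

    The paper's binary weights would switch this shrinkage on or off per time-frequency bin so that only the periodically spaced bins survive.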

  1. Transparent Ada rendezvous in a fault tolerant distributed system

    NASA Technical Reports Server (NTRS)

    Racine, Roger

    1986-01-01

    There are many problems associated with distributing an Ada program over a loosely coupled communication network. Some of these problems involve the various aspects of the distributed rendezvous. The problems addressed involve supporting the delay statement in a selective call and supporting the else clause in a selective call. Most of these difficulties are compounded by the need for an efficient communication system. The difficulties are compounded even more by considering the possibility of hardware faults occurring while the program is running. With a hardware fault tolerant computer system, it is possible to design a distribution scheme and communication software which is efficient and allows Ada semantics to be preserved. An Ada design for the communications software of one such system will be presented, including a description of the services provided in the seven layers of an International Standards Organization (ISO) Open System Interconnect (OSI) model communications system. The system capabilities (hardware and software) that allow this communication system will also be described.

  2. Initial results on fault diagnosis of DSN antenna control assemblies using pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Smyth, P.; Mellstrom, J.

    1990-01-01

    Initial results obtained from an investigation using pattern recognition techniques for identifying fault modes in the Deep Space Network (DSN) 70 m antenna control loops are described. The overall background to the problem is described, and the motivation and potential benefits of this approach are outlined. In particular, an experiment is described in which fault modes were introduced into a state-space simulation of the antenna control loops. By training a multilayer feed-forward neural network on the simulated sensor output, classification rates of over 95 percent were achieved with a false alarm rate of zero on unseen test data. It is concluded that although the neural classifier has certain practical limitations at present, it also has considerable potential for problems of this nature.

  3. A Novel Transient Fault Current Sensor Based on the PCB Rogowski Coil for Overhead Transmission Lines

    PubMed Central

    Liu, Yadong; Xie, Xiaolei; Hu, Yue; Qian, Yong; Sheng, Gehao; Jiang, Xiuchen

    2016-01-01

    The accurate detection of high-frequency transient fault currents in overhead transmission lines is the basis of malfunction detection and diagnosis. This paper proposes a novel differential-winding printed circuit board (PCB) Rogowski coil for the detection of transient fault currents in overhead transmission lines. The interference mechanism of a sensor surrounding an overhead transmission line is analyzed to obtain a guideline for interference elimination. A differential-winding PCB Rogowski coil is then proposed, in which the branch and return lines of the PCB coil are strictly symmetrical, achieved by a joining structure of two semi-rings with collinear twisted-pair differential windings in each semi-ring. A series of tests was conducted, covering frequency response, linearity, and anti-interference performance, as well as a comparison with commercial sensors. Results show that the PCB Rogowski coil has good linearity and resistance to various external magnetic field interferences, enabling it to be widely applied in fault-current-collecting devices. PMID:27213402

  4. A Practical Study of the 66kV Fault Current Limiter (FCL) System with Rectifier

    NASA Astrophysics Data System (ADS)

    Tokuda, Noriaki; Matsubara, Yoshio; Yuguchi, Kyosuke; Ohkuma, Takeshi; Hobara, Natsuro; Takahashi, Yoshihisa

    A fault current limiter (FCL) is widely expected to suppress fault currents, and is particularly required for trunk power systems heavily interconnected by high-voltage transmission lines, such as the 500kV class system that constitutes the nucleus of the electric power network. By installing such an FCL in the power system, system interconnection becomes possible without raising the capacity of the circuit breakers, and facilities can be configured more efficiently, among other benefits. For these reasons, fault current limiters based on various principles of operation have been developed both in Japan and abroad. In this paper, we propose a new type of FCL system, consisting of solid-state diodes, a DC coil, and a bypass AC coil, and describe the specifications of the distribution power system and of a 66kV model for an island power system and a superconducting cable power system. We also present a practical study of the 66kV class, covering the test items and the future subjects of the rectifier-type FCL system.

  5. Fault Detection and Diagnosis In Hall-Héroult Cells Based on Individual Anode Current Measurements Using Dynamic Kernel PCA

    NASA Astrophysics Data System (ADS)

    Yao, Yuchen; Bao, Jie; Skyllas-Kazacos, Maria; Welch, Barry J.; Akhmetov, Sergey

    2018-04-01

    Individual anode current signals in aluminum reduction cells provide localized cell conditions in the vicinity of each anode, which contain more information than the conventionally measured cell voltage and line current. One common use of this measurement is to identify process faults that can cause significant changes in the anode current signals. While this method is simple and direct, it ignores the interactions between anode currents and other important process variables. This paper presents an approach that applies multivariate statistical analysis techniques to individual anode currents and other process operating data, for the detection and diagnosis of local process abnormalities in aluminum reduction cells. Specifically, since the Hall-Héroult process is time-varying with its process variables dynamically and nonlinearly correlated, dynamic kernel principal component analysis with moving windows is used. The cell is discretized into a number of subsystems, with each subsystem representing one anode and cell conditions in its vicinity. The fault associated with each subsystem is identified based on multivariate statistical control charts. The results show that the proposed approach is able to not only effectively pinpoint the problematic areas in the cell, but also assess the effect of the fault on different parts of the cell.
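
    The monitoring step described above — fitting kernel PCA on a moving window of normal operating data and flagging samples whose feature-space statistics exceed a control limit — can be sketched compactly. Below is a simplified, hypothetical numpy illustration using an RBF kernel and the squared prediction error (SPE) of a new sample; the paper's dynamic windowing, variable lagging, and diagnosis logic are omitted, and all data are synthetic.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_spe(window, x_new, n_pc=2, gamma=0.5):
    """Fit kernel PCA on a window of normal samples and return the squared
    prediction error (SPE) of a new sample in feature space."""
    n = len(window)
    K = rbf_kernel(window, window, gamma)
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_pc]            # top principal directions
    alphas = vecs[:, idx] / np.sqrt(vals[idx])     # normalized dual coefficients
    k = rbf_kernel(x_new[None, :], window, gamma).ravel()
    kc = k - k.mean() - K.mean(axis=0) + K.mean()  # centered test kernel row
    t = alphas.T @ kc                              # scores of the new sample
    k_self = 1.0 - 2.0 * k.mean() + K.mean()       # centered self-similarity
    return float(k_self - t @ t)

rng = np.random.default_rng(1)
window = 0.1 * rng.standard_normal((50, 4))        # normal operating data
spe_ok = kpca_spe(window, 0.1 * rng.standard_normal(4))
spe_fault = kpca_spe(window, np.full(4, 3.0))      # sample far from normal
print(spe_ok < spe_fault)
```

    A sample consistent with the window projects well onto the retained kernel principal components and yields a small SPE, whereas an abnormal sample produces a large one, which is the basis for the control-chart decision.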

  6. The Development of NASA's Fault Management Handbook

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine

    2011-01-01

    Disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns: (1) Often faults aren't addressed until nominal spacecraft design is fairly stable. (2) Design relegated to after-the-fact patchwork, Band-Aid approach. Progress is being made on a number of fronts outside of Handbook effort: (1) Processes, Practices and Tools being developed at some Centers and Institutions (2) Management recognition. Constellation FM roles, Discovery/New Frontiers mission reviews (3) Potential Technology solutions. New approaches could avoid many current pitfalls (3a) New FM architectures, including model-based approach integrated with NASA's MBSE (Model-Based System Engineering) efforts (3b) NASA's Office of the Chief Technologist: FM identified in seven of NASA's 14 Space Technology Roadmaps. Opportunity to coalesce and establish thrust area to progressively develop new FM techniques. FM Handbook will help ensure that future missions do not encounter same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start: (1) Still need Version 2 Agency-wide FM Handbook to expand Handbook to other areas, especially crewed missions. (2) Still need to reach out to other organizations to develop common understanding and vocabulary. Handbook doesn't/can't address all Workshop recommendations. Still need to identify how to address programmatic and infrastructure issues.

  7. An optimized implementation of a fault-tolerant clock synchronization circuit

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    1995-01-01

    A fault-tolerant clock synchronization circuit was designed and tested. A comparison to a previous design and the procedure followed to achieve the current optimization are included. The report also includes a description of the system and the results of tests performed to study the synchronization and fault-tolerant characteristics of the implementation.

  8. Overview of the Southern San Andreas Fault Model

    USGS Publications Warehouse

    Weldon, Ray J.; Biasi, Glenn P.; Wills, Chris J.; Dawson, Timothy E.

    2008-01-01

    This appendix summarizes the data and methodology used to generate the source model for the southern San Andreas fault. It is organized into three sections, 1) a section by section review of the geological data in the format of past Working Groups, 2) an overview of the rupture model, and 3) a manuscript by Biasi and Weldon (in review Bulletin of the Seismological Society of America) that describes the correlation methodology that was used to help develop the "geologic insight" model. The goal of the Biasi and Weldon methodology is to quantify the insight that went into developing all A faults; as such it is in concept consistent with all other A faults but applied in a more quantitative way. The most rapidly slipping fault and the only known source of M~8 earthquakes in southern California is the San Andreas fault. As such it plays a special role in the seismic hazard of California, and has received special attention in the current Working Group. The underlying philosophy of the current Working Group is to model the recurrence behavior of large, rapidly slipping faults like the San Andreas from observed data on the size, distribution and timing of past earthquakes with as few assumptions about underlying recurrence behavior as possible. In addition, we wish to carry the uncertainties in the data and the range of reasonable extrapolations from the data to the final model. To accomplish this for the Southern San Andreas fault we have developed an objective method to combine all of the observations of size, timing, and distribution of past earthquakes into a comprehensive set of earthquake scenarios that each represent a possible history of earthquakes for the past ~1400 years. The scenarios are then ranked according to their overall consistency with the data and then the frequencies of all of the ruptures permitted by the current Working Group's segmentation model are calculated.
We also present 30-yr conditional probabilities by segment and compare to previous results. A distinctive aspect of the current model is that the probability is higher at both ends of the fault and that the ends have a much greater fraction of smaller events. There is a significant difference in the likelihood of large (M 7.7-8.0) earthquakes along the fault from north to south, with large 1857-like events common on the northern half of the southern San Andreas fault but relatively few M 7.7-8.0 expected on the southern half.

  9. Project DAFNE - Drilling Active Faults in Northern Europe

    NASA Astrophysics Data System (ADS)

    Kukkonen, I. T.; Ask, M. S. V.; Olesen, O.

    2012-04-01

    We are currently developing a new ICDP project 'Drilling Active Faults in Northern Europe' (DAFNE), which aims at investigating, via scientific drilling, the tectonic and structural characteristics of postglacial (PG) faults in northern Fennoscandia, including their hydrogeology and associated deep biosphere [1, 2]. During the last stages of the Weichselian glaciation (ca. 9,000 - 15,000 years B.P.), reduced ice load and a glacially affected stress field resulted in active faulting in Fennoscandia, with fault scarps up to 160 km long and 30 m high. These postglacial (PG) faults are usually SE-dipping, SW-NE oriented thrusts and represent reactivated, pre-existing crustal discontinuities. Postglacial faulting indicates that glacio-isostatic compensation is not only a gradual viscoelastic phenomenon but also includes unexpectedly violent earthquakes, probably larger than other known earthquakes in stable continental regions. The research is anticipated to advance science in neotectonics, hydrogeology and deep biosphere studies, and to provide important information for nuclear waste and CO2 disposal, petroleum exploration on the Norwegian continental shelf, and studies of mineral resources in PG fault areas.
We expect that multidisciplinary research applying shallow and deep drilling of postglacial faults would provide significant scientific results through generating new data and models, namely: (1) Understanding PG fault genesis and controls of their locations; (2) Deep structure and depth extent of PG faults; (3) Textural, mineralogical and physical alteration of rocks in the PG faults; (4) State of stress and estimates of paleostress of PG faults; (5) Hydrogeology, hydrochemistry and hydraulic properties of PG faults; (6) Dating of tectonic reactivation(s) and temporal evolution of tectonic systems hosting PG faults; (7) Existence/non-existence of deep biosphere in PG faults; (8) Data useful for planning radioactive waste disposal in crystalline bedrock; (9) Data on rock stress changes in the periphery of the inland ice; (10) Stress pattern along the Norwegian continental margin in relation to the bending spreading ridge and Plio-Pleistocene erosion, uplift and sedimentation with implications for fluid migration and sealing properties of petroleum reservoirs. (11) Data useful in predicting future seismic activity in areas of current deglaciation due to ongoing climatic warming.

  10. Stochastic resonance in an underdamped system with FitzHugh-Nagumo potential for weak signal detection

    NASA Astrophysics Data System (ADS)

    López, Cristian; Zhong, Wei; Lu, Siliang; Cong, Feiyun; Cortese, Ignacio

    2017-12-01

    Vibration signals are widely used for bearing fault detection and diagnosis. When signals are acquired in the field, the faulty periodic signal is usually weak and concealed by noise. Various de-noising methods have been developed to extract the target signal from the raw signal. Stochastic resonance (SR) is a technique that departs from the traditional denoising process: the weak periodic fault signal is identified by adding an expression, the potential, to the raw signal and solving a differential equation problem. However, current SR methods have some deficiencies, such as limited filtering performance, the requirement of a low-frequency input signal, and a sequential search for optimum parameters. Consequently, in this study, we explore the application of SR based on the FitzHugh-Nagumo (FHN) potential to rolling bearing vibration signals. In addition, we improve the search for the SR optimum parameters by the use of particle swarm optimization (PSO). The effectiveness of the proposed method is verified using both simulated and real bearing data sets.

  11. Detection of broken rotor bar faults in induction motor at low load using neural network.

    PubMed

    Bessam, B; Menacer, A; Boumehraz, M; Cherif, H

    2016-09-01

    The knowledge of the broken rotor bar characteristic frequencies and amplitudes is of great importance for all related diagnostic methods. The monitoring of motor faults requires a high-resolution spectrum to separate different frequency components. The Discrete Fourier Transform (DFT) has been widely used to achieve these requirements; however, at low slip this technique cannot give good results. As a solution to these problems, this paper proposes an efficient technique based on a neural network approach and the Hilbert transform (HT) for broken rotor bar diagnosis in induction machines at low load. The Hilbert transform is used to extract the stator current envelope (SCE). Two features are selected from the SCE spectrum (the amplitude and frequency of the harmonic) and used as inputs for the neural network. The results obtained are convincing: the method is capable of detecting the correct number of broken rotor bars under different load conditions. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
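
    The envelope-extraction step is standard enough to sketch: a broken bar amplitude-modulates the supply current at twice the slip frequency, and the modulation appears as a low-frequency line in the spectrum of the Hilbert envelope. The sketch below uses an FFT-based analytic signal (the same construction scipy.signal.hilbert implements); all numeric values (50 Hz supply, 4 Hz modulation) are illustrative, not from the paper.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform (the construction behind scipy.signal.hilbert)."""
    N = len(x)                     # assumed even here
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0], h[N // 2] = 1.0, 1.0
    h[1:N // 2] = 2.0              # double positive frequencies
    return np.fft.ifft(X * h)

def envelope_spectrum(i_s, fs):
    """Stator current envelope (SCE) and its one-sided amplitude spectrum."""
    env = np.abs(analytic_signal(i_s))
    env = env - env.mean()                     # drop the DC line
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return freqs, spec

# 50 Hz supply current amplitude-modulated at 4 Hz (= 2*s*f, a broken-bar
# signature; values illustrative)
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
i_s = (1.0 + 0.05 * np.cos(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 50 * t)
freqs, spec = envelope_spectrum(i_s, fs)
peak_hz = freqs[np.argmax(spec)]
print(peak_hz)  # strongest envelope line sits at the 4 Hz modulation frequency
```

    The amplitude and frequency of that envelope line are exactly the two features the abstract describes feeding to the neural network.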

  12. Knowledge-based fault diagnosis system for refuse collection vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of the waste compactor truck to the local authority as well as to waste management companies. The company faces difficulty in acquiring knowledge from the expert when the expert is absent. To solve this problem, the knowledge from the expert can be stored in an expert system, which can provide the necessary support to the company when the expert is not available and makes the implementation of the process and tools more standardized and accurate. The knowledge input to the expert system is based on design guidelines and experience from the expert. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting of the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  13. An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Ronald C.

    Bioinformatics researchers are increasingly confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employs Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date.

  14. A divide and conquer approach to cope with uncertainty, human health risk, and decision making in contaminant hydrology

    NASA Astrophysics Data System (ADS)

    de Barros, Felipe P. J.; Bolster, Diogo; Sanchez-Vila, Xavier; Nowak, Wolfgang

    2011-05-01

    Assessing health risk in hydrological systems is an interdisciplinary field. It relies on the expertise in the fields of hydrology and public health and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties and variabilities present in hydrological, physiological, and human behavioral parameters. Despite significant theoretical advancements in stochastic hydrology, there is still a dire need to further propagate these concepts to practical problems and to society in general. Following a recent line of work, we use fault trees to address the task of probabilistic risk analysis and to support related decision and management problems. Fault trees allow us to decompose the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural divide and conquer approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance, and stage of analysis. Three differences are highlighted in this paper when compared to previous works: (1) The fault tree proposed here accounts for the uncertainty in both hydrological and health components, (2) system failure within the fault tree is defined in terms of risk being above a threshold value, whereas previous studies that used fault trees used auxiliary events such as exceedance of critical concentration levels, and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
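
    Point (3) above, the stochastic fault tree with dependent subsystems, lends itself to a small Monte Carlo sketch: when two basic events share an uncertain input, the top-event probability differs from the product of marginal probabilities that a classical independence-based fault tree would give. The module names and all numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical modules: B1 = contaminant reaches the well (transport),
# B2 = a person is exposed (behavior), B3 = dose exceeds the risk threshold.
conc = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # uncertain concentration
b1 = conc > 1.5                                     # transport module
b2 = rng.random(n) < 0.3                            # behavioral module
# The dose module reuses the *same* concentration sample, so B3 is
# correlated with B1 -- exactly the dependence a classical fault tree
# with independent subsystems cannot represent.
b3 = conc * rng.uniform(0.5, 1.5, size=n) > 2.0

top = b1 & b2 & b3                                  # AND gate: risk above threshold
p_mc = top.mean()                                   # dependence-aware estimate
p_indep = b1.mean() * b2.mean() * b3.mean()         # independence assumption
print(p_mc > p_indep)   # here the shared uncertainty raises the top-event probability
```

    Because high concentrations drive both the transport and dose modules at once, the independence assumption underestimates the top-event probability in this configuration; the modular structure also shows how each subtree's complexity can be refined separately, as the paper advocates.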

  15. Robust Fault Detection for Switched Fuzzy Systems With Unknown Input.

    PubMed

    Han, Jian; Zhang, Huaguang; Wang, Yingchun; Sun, Xun

    2017-10-03

    This paper investigates the fault detection problem for a class of switched nonlinear systems in the T-S fuzzy framework, with an unknown input considered in the systems. A novel fault detection unknown input observer design method is proposed. Based on the proposed observer, the unknown input can be removed from the fault detection residual. The weighted H∞ performance level is considered to ensure robustness. In addition, the weighted H₋ performance level is introduced, which can increase the sensitivity of the proposed detection method. To verify the proposed scheme, a numerical simulation example and an electromechanical system simulation example are provided at the end of this paper.

  16. Results of Gravity Fieldwork Conducted in March 2008 in the Moapa Valley Region of Clark County, Nevada

    USGS Publications Warehouse

    Scheirer, Daniel S.; Andreasen, Arne Dossing

    2008-01-01

    In March 2008, we collected gravity data along 12 traverses across newly-mapped faults in the Moapa Valley region of Clark County, Nevada. In areas crossed by these faults, the traverses provide better definition of the gravity field and, thus, the density structure, than prior gravity observations. Access problems prohibited complete gravity coverage along all of the planned gravity traverses, and we added and adjusted the locations of traverses to maximize our data collection. Most of the traverses exhibit isostatic gravity anomalies that have gradients characteristic of exposed or buried faults, including several of the newly-mapped faults.

  17. Robust adaptive fault-tolerant control for leader-follower flocking of uncertain multi-agent systems with actuator failure.

    PubMed

    Yazdani, Sahar; Haeri, Mohammad

    2017-11-01

    In this work, we study the flocking problem of multi-agent systems with uncertain dynamics subject to actuator failure and external disturbances. Under some standard assumptions, we propose a robust adaptive fault-tolerant protocol that compensates for actuator bias faults, partial loss of actuator effectiveness, model uncertainties, and external disturbances. Under the designed protocol, convergence of the agents' velocities to that of the virtual leader is guaranteed, while connectivity preservation of the network and collision avoidance among agents are ensured as well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Fault-Tolerant Control of ANPC Three-Level Inverter Based on Order-Reduction Optimal Control Strategy under Multi-Device Open-Circuit Fault.

    PubMed

    Xu, Shi-Zhou; Wang, Chun-Jie; Lin, Fang-Li; Li, Shi-Xiang

    2017-10-31

    A multi-device open-circuit fault is a common fault of the ANPC (Active Neutral-Point Clamped) three-level inverter and affects the operational stability of the whole system. To improve operational stability, this paper first summarizes the main existing solutions and analyzes all possible states of the multi-device open-circuit fault. Secondly, an order-reduction optimal control strategy is proposed to realize fault-tolerant control under multi-device open-circuit faults, based on the topology and control requirements of the ANPC three-level inverter. This strategy can handle faults in different operation states and can operate in an order-reduction state under specific open-circuit faults of specific device combinations, sacrificing control quality to give priority to stability. Finally, simulation and experiment prove the effectiveness of the proposed strategy.

  19. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  20. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    PubMed

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
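
    The just-in-time idea — build a fresh local model around each query from its nearest historical neighbors, then flag large output residuals — can be sketched in a few lines. This is a generic illustration of JITL with a local affine model, not the paper's JITL-DD design; the process model and all numbers are invented.

```python
import numpy as np

def jitl_residual(X_hist, y_hist, x_q, y_q, k=20):
    """For a query x_q, fit a local affine model on the k nearest historical
    samples and return the absolute output residual; a large residual flags
    a potential fault."""
    d = np.linalg.norm(X_hist - x_q, axis=1)
    idx = np.argsort(d)[:k]                      # k nearest neighbors
    A = np.c_[X_hist[idx], np.ones(k)]           # local affine regressors
    coef, *_ = np.linalg.lstsq(A, y_hist[idx], rcond=None)
    y_pred = np.r_[x_q, 1.0] @ coef
    return abs(y_q - y_pred)

# Nonlinear process y = sin(x0) + 0.5*x1^2 with mild noise (illustrative)
rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(500)

x_q = np.array([0.5, 0.5])
y_ok = np.sin(0.5) + 0.5 * 0.25                  # fault-free measurement
r_ok = jitl_residual(X, y, x_q, y_ok)
r_fault = jitl_residual(X, y, x_q, y_ok + 1.0)   # additive sensor-bias fault
print(r_ok < r_fault)
```

    Because the local model is refit at every query, the detector adapts online to operating-point changes, which is the property the abstract highlights for time-varying nonlinear processes.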

  1. An Indirect Adaptive Control Scheme in the Presence of Actuator and Sensor Failures

    NASA Technical Reports Server (NTRS)

    Sun, Joy Z.; Josh, Suresh M.

    2009-01-01

    The problem of controlling a system in the presence of unknown actuator and sensor faults is addressed. The system is assumed to have groups of actuators, and groups of sensors, with each group consisting of multiple redundant similar actuators or sensors. The types of actuator faults considered consist of unknown actuators stuck in unknown positions, as well as reduced actuator effectiveness. The sensor faults considered include unknown biases and outages. The approach employed for fault detection and estimation consists of a bank of Kalman filters based on multiple models, and subsequent control reconfiguration to mitigate the effect of biases caused by failed components as well as to obtain stability and satisfactory performance using the remaining actuators and sensors. Conditions for fault identifiability are presented, and the adaptive scheme is applied to an aircraft flight control example in the presence of actuator failures. Simulation results demonstrate that the method can rapidly and accurately detect faults and estimate the fault values, thus enabling safe operation and acceptable performance in spite of failures.
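
    The multiple-model idea at the core of the approach — run one Kalman filter per fault hypothesis and prefer the model whose innovations are most likely — can be shown on a scalar toy plant. This is a hypothetical sketch with two hypotheses (nominal vs. actuator stuck at zero) and invented plant parameters, not the paper's full bank or reconfiguration logic.

```python
import numpy as np

a, b, q, r = 0.9, 1.0, 0.01, 0.04     # scalar plant, process/measurement noise

def kf_log_likelihood(z, u, u_eff=None):
    """Run a scalar Kalman filter under one model hypothesis and return the
    summed log-likelihood of its innovations. u_eff=None -> nominal actuator;
    u_eff=c -> 'actuator stuck at c' fault model."""
    x, P, ll = 0.0, 1.0, 0.0
    for zk, uk in zip(z, u):
        ui = uk if u_eff is None else u_eff
        x, P = a * x + b * ui, a * a * P + q     # predict
        S = P + r                                # innovation variance
        nu = zk - x                              # innovation
        ll += -0.5 * (np.log(2 * np.pi * S) + nu * nu / S)
        K = P / S
        x, P = x + K * nu, (1 - K) * P           # update
    return ll

# Simulate a plant whose actuator is stuck at zero despite the commands:
rng = np.random.default_rng(7)
u_cmd = np.sin(0.3 * np.arange(100))
x, z = 0.0, []
for _ in u_cmd:
    x = a * x + b * 0.0 + rng.normal(0.0, np.sqrt(q))
    z.append(x + rng.normal(0.0, np.sqrt(r)))

ll_nominal = kf_log_likelihood(z, u_cmd)
ll_stuck = kf_log_likelihood(z, u_cmd, u_eff=0.0)
print(ll_stuck > ll_nominal)   # the stuck-actuator hypothesis explains the data better
```

    In a full bank, one such filter runs per hypothesized stuck position and effectiveness level, and the relative likelihoods drive both fault isolation and the subsequent control reconfiguration.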

  2. Experiences with explicit finite-difference schemes for complex fluid dynamics problems on STAR-100 and CYBER-203 computers

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.

    1982-01-01

    Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations, which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and database structure for three-dimensional computer codes, which will eliminate or reduce page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data at each step. As a result, the number of in-core grid points was increased by 50% to 150,000, with a 10% increase in execution time. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage-limited than compute-rate-limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.

  3. Maneuver Classification for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.

    2003-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, identifying all possible faulty and proper operating modes is clearly impossible. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system as well as experiments showing the promise of this classification subsystem.

  4. Classification of Aircraft Maneuvers for Fault Detection

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Koga, Dennis (Technical Monitor)

    2002-01-01

    Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to determine. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system as well as experiments showing the promise of this classification subsystem.

  5. Optimal Sensor Allocation for Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann

    2004-01-01

    Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration, etc.) to measure the system parameters. The efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes fault diagnosability, subject to specified weight, volume, power, and cost constraints, is therefore required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
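    The sensor-selection problem described above can be sketched as a small knapsack-style search: pick the sensor subset that covers the most faults without exceeding a cost budget. Everything in the sketch below is invented for illustration (sensor names, fault sets, costs), and it brute-forces subsets where the paper's multidimensional-knapsack instances require Lagrangian relaxation.

```python
from itertools import combinations

# Hypothetical catalog: the faults each candidate sensor can observe, and its
# cost. All names and numbers are illustrative, not from the paper.
SENSORS = {
    "temperature": {"cost": 2, "faults": {"overheat", "bearing_wear"}},
    "vibration":   {"cost": 3, "faults": {"bearing_wear", "imbalance", "misalignment"}},
    "pressure":    {"cost": 2, "faults": {"leak", "blockage"}},
    "current":     {"cost": 1, "faults": {"overheat", "stall"}},
}

def best_allocation(budget):
    """Exhaustive search over sensor subsets (fine for a tiny catalog; large
    instances need Lagrangian relaxation of the multidimensional knapsack)."""
    best_set, best_cov = set(), set()
    for r in range(len(SENSORS) + 1):
        for subset in combinations(SENSORS, r):
            cost = sum(SENSORS[s]["cost"] for s in subset)
            if cost > budget:
                continue
            covered = set()
            for s in subset:
                covered |= SENSORS[s]["faults"]
            if len(covered) > len(best_cov):
                best_set, best_cov = set(subset), covered
    return best_set, best_cov

chosen, covered = best_allocation(budget=5)
```

    Real designs would also weight faults by criticality and add separate weight, volume, and power budgets as extra knapsack dimensions.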

  6. Application of improved wavelet total variation denoising for rolling bearing incipient fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Jia, M. P.

    2018-06-01

    When an incipient fault appears in a rolling bearing, the fault feature is small and easily submerged in strong background noise. In this paper, wavelet total variation denoising based on kurtosis (Kurt-WATV) is studied, which can extract the incipient fault feature of the rolling bearing more effectively. The proposed algorithm contains four main steps: a) establish a sparse diagnosis model; b) represent the periodic impulses with a redundant wavelet dictionary; c) solve the joint optimization problem by the alternating direction method of multipliers (ADMM); d) select the optimal wavelet subbands and obtain the reconstructed signal using the kurtosis value as the criterion. The paper uses the overcomplete rational-dilation wavelet transform (ORDWT) as the dictionary and adjusts its control parameters to achieve concentration in the time-frequency plane. An incipient rolling-bearing fault is used as an example, and the results show the effectiveness and superiority of the proposed Kurt-WATV bearing fault diagnosis algorithm.
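    The kurtosis-as-selection-criterion idea above can be shown with a heavily simplified stand-in: single-level Haar soft-thresholding on synthetic impacts, keeping the reconstruction with maximum kurtosis (the most impulsive one). This is not the paper's ORDWT/ADMM solver, just a sketch of the selection principle.

```python
import math
import random

def haar(x):
    """One-level orthonormal Haar analysis (even-length input)."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def ihaar(a, d):
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return out

def soft(v, t):
    """Soft-thresholding, the proximal step behind TV/L1 denoising."""
    return [math.copysign(max(abs(u) - t, 0.0), u) for u in v]

def kurtosis(x):
    n = len(x)
    m = sum(x) / n
    s2 = sum((u - m) ** 2 for u in x) / n
    return (sum((u - m) ** 4 for u in x) / n) / (s2 * s2 + 1e-12)

def kurt_denoise(x, thresholds):
    """Threshold the detail band at each candidate level; keep the
    reconstruction whose kurtosis is largest (most impulsive)."""
    a, d = haar(x)
    return max((ihaar(a, soft(d, t)) for t in thresholds), key=kurtosis)

random.seed(0)
signal = [5.0 if i % 32 == 0 else 0.0 for i in range(256)]   # periodic impacts
noisy = [s + random.gauss(0.0, 0.5) for s in signal]
clean = kurt_denoise(noisy, [0.2 * k for k in range(1, 10)])
```

    The recovered signal keeps the impulsive bearing signature (high kurtosis) while the Gaussian background is shrunk.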

  7. Symposium on the Interface: Computing Science and Statistics (20th). Theme: Computationally Intensive Methods in Statistics Held in Reston, Virginia on April 20-23, 1988

    DTIC Science & Technology

    1988-08-20

    34 William A. Link, Patuxent Wildlife Research Center. "Increasing Reliability of Multiversion Fault-Tolerant Software Design by Modularization," Junryo 3... Multiversion Fault-Tolerant Software Design by Modularization, Junryo Miyashita, Department of Computer Science, California State University at San Bernardino. Fault... They shall be referred to as "multiversion fault-tolerant software design". One problem of developing multiple versions of a program is the high cost

  8. Structural system reliability calculation using a probabilistic fault tree analysis method

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computation-intensive calculations. A computer program has been developed to implement the PFTA.
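    The quantity PFTA estimates can be illustrated with plain Monte Carlo on a hypothetical three-event fault tree (the bottom-event probabilities below are invented). The paper's contribution is accelerating exactly this kind of estimate with approximation functions and adaptive importance sampling; this sketch shows only the baseline.

```python
import random

# Hypothetical fault tree: TOP = (A AND B) OR C, with assumed bottom-event
# failure probabilities. Exact answer: 0.005 + 0.01 - 0.00005 = 0.01495.
P = {"A": 0.05, "B": 0.10, "C": 0.01}

def top_event(state):
    return (state["A"] and state["B"]) or state["C"]

def system_failure_probability(n_samples=200_000, seed=1):
    """Plain Monte Carlo over bottom events; PFTA replaces this with an
    adaptive importance sampling scheme driven by a dominant sequence."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        state = {e: rng.random() < p for e, p in P.items()}
        failures += top_event(state)
    return failures / n_samples

estimate = system_failure_probability()
```

    For structural problems where each bottom event is an expensive finite-element evaluation, the approximation functions stand in for these cheap Bernoulli draws.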

  9. Automatic recloser circuit breaker integrated with GSM technology for power system notification

    NASA Astrophysics Data System (ADS)

    Lada, M. Y.; Khiar, M. S. A.; Ghani, S. A.; Nawawi, M. R. M.; Rahim, N. H.; Sinar, L. O. M.

    2015-05-01

    Lightning is one type of transient fault that typically causes the circuit breaker in the distribution board to trip on overload current detection. The instantaneous tripping of the circuit breaker clears the fault in the system. Unfortunately, most circuit breaker systems are manually operated, so the power line is only re-energized after the fault-clearing process is finished. Auto-reclose circuits are used on transmission lines to maintain a quality electricity supply to customers. In this project, an automatic reclose circuit breaker for low-voltage use is designed. The Auto Reclose Circuit Breaker (ARCB) trips if the current sensor detects a current exceeding the rated current of the miniature circuit breaker (MCB) used; the fault condition is then cleared automatically and the power line returned to its normal condition. A Global System for Mobile Communication (GSM) module sends an SMS to the person in charge whenever a trip occurs. If the overcurrent occurs three times, the system trips fully (open circuit) and at the same time sends an SMS to the person in charge. In this project, 1 A is set as the rated current, and any current exceeding 1 A causes the system to trip. The system also provides additional notifications for the user, such as an emergency light and a warning system.
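    The reclose-then-lock-out behaviour described above (trip on overcurrent, reclose, lock out after the third trip, notify over GSM) reduces to a small state machine. The sketch below is illustrative: `send_sms` is a hypothetical stand-in for the GSM module, and the 1 A rating mirrors the prototype.

```python
RATED_CURRENT = 1.0   # amperes, the rated current used in the project
MAX_TRIPS = 3

class AutoRecloser:
    """Sketch of the ARCB decision logic; send_sms is a hypothetical
    callback standing in for the GSM modem."""
    def __init__(self, send_sms):
        self.trips = 0
        self.locked_out = False
        self.send_sms = send_sms

    def on_current_sample(self, amps):
        if self.locked_out:
            return "locked_out"
        if amps <= RATED_CURRENT:
            return "closed"
        self.trips += 1
        if self.trips >= MAX_TRIPS:
            self.locked_out = True            # permanent open circuit
            self.send_sms("Breaker locked out after 3 trips")
            return "locked_out"
        self.send_sms(f"Trip {self.trips}: fault cleared, reclosing")
        return "reclosed"

messages = []
breaker = AutoRecloser(messages.append)
states = [breaker.on_current_sample(a) for a in (0.5, 2.0, 3.0, 1.4, 0.9)]
```

    A hardware implementation would add reclose delay timers; they are omitted here to keep the logic visible.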

  10. Assessment of the geothermal potential of fault zones in Germany by numerical modelling

    NASA Astrophysics Data System (ADS)

    Kuder, Jörg

    2017-04-01

    Fault zones with significantly better permeability than their host rocks can act as natural migration paths for ascending fluids that transport thermal energy from deep geological formations. Under these circumstances, fault zones are interesting for geothermal utilization, especially down to depths of about 7 km (Jung et al. 2002, Paschen et al. 2003). One objective of the joint project "The role of deep rooting fault zones for geothermal energy utilization", supported by the Federal Ministry for Economic Affairs and Energy, was the evaluation of the geothermal potential of fault zones in Germany by means of numerical modelling with COMSOL. To achieve this goal, a method was developed to estimate the potential of regionally generalized fault zones in a simple yet sound way. The main problem for the development of a numerical model is the lack of geological and hydrological data. To address this problem, the geothermal potential of a cube with 1 km side length containing a 20 m wide, 1000 m high, and 1000 m long fault zone was calculated as a unified model with varying parameter sets. The properties of the surrounding host rock and the fault zone are assumed homogeneous. The numerical models were calculated with a broad variety of fluid-flow, rock, and fluid property parameters for the depth intervals 3000-4000 m, 4000-5000 m, 5000-6000 m, and 6000-7000 m. The fluid parameters depend on temperature, salt load, and initial pressure. The porosity and permeability values are provided by the database of the geothermal information system (GeotIS). The results are summarized in a table of geothermal energy values modelled with the different parameter sets and depths. The geothermal potential of fault zones in Germany was then calculated on the basis of this table and information from the geothermal atlas of Germany (2016).
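    The scale of the numbers such a model produces can be anchored with a back-of-envelope heat-in-place estimate for the generalized fault-zone block described above (20 m x 1000 m x 1000 m). The rock properties and temperatures below are generic illustrative values, not project results.

```python
# Volumetric heat in place: E = rho * c * V * (T_rock - T_reinjection).
# Density and heat capacity are typical crystalline-rock values (assumed).
RHO_ROCK = 2700.0      # kg/m^3
C_ROCK = 850.0         # J/(kg K)

def heat_in_place_joules(volume_m3, t_rock_c, t_reinjection_c):
    return RHO_ROCK * C_ROCK * volume_m3 * (t_rock_c - t_reinjection_c)

# The generalized fault zone from the unified model: 20 m x 1000 m x 1000 m.
fault_zone_volume = 20.0 * 1000.0 * 1000.0   # m^3
# Illustrative temperatures, e.g. ~170 C rock at ~5 km, 70 C reinjection.
energy = heat_in_place_joules(fault_zone_volume, t_rock_c=170.0,
                              t_reinjection_c=70.0)
```

    This static estimate (about 4.6e15 J for the values above) ignores recharge by fluid flow, which is precisely what the coupled COMSOL models add.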

  11. A seismic fault recognition method based on ant colony optimization

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong

    2018-05-01

    Fault recognition is an important step in seismic interpretation, and although many methods exist for it, none can yet recognize faults precisely enough. To address this problem, we propose a new fault recognition method based on ant colony optimization that can locate faults precisely and extract them from the seismic section. First, seismic horizons are extracted by a connected-component labeling algorithm; second, candidate fault locations are determined from the horizontal endpoints of each horizon; third, the seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each block are treated as the nest and the food, respectively, for the ant colony optimization algorithm. In addition, the positive section is treated as an actual three-dimensional terrain by using the seismic amplitude as a height. The optimal route from nest to food computed by the ant colony in each block is then judged to be a fault. Finally, extensive comparative tests were performed on real seismic data, and the experimental results validate the availability and advancement of the proposed method.
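    The nest-to-food idea above can be sketched on a toy amplitude grid: ants walk from the top row to the bottom row, preferring low-amplitude cells (the fault trace), and pheromone reinforces the cheapest path. The grid, parameters, and update rule below are illustrative simplifications of a standard ant system, not the paper's exact algorithm.

```python
import random

random.seed(42)
ROWS, COLS = 12, 9
# Synthetic amplitude "terrain": high amplitude everywhere except a
# low-amplitude fault trace running down column 4 (stand-in data).
AMP = [[1.0] * COLS for _ in range(ROWS)]
for r in range(ROWS):
    AMP[r][4] = 0.05

def run_colony(n_ants=30, n_iters=25, rho=0.3):
    """Ants walk nest (top) -> food (bottom); attractiveness of a cell is
    pheromone tau times a 1/amplitude heuristic. Iteration-best reinforcement."""
    tau = [[1.0] * COLS for _ in range(ROWS)]
    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        iter_best = None
        for _ in range(n_ants):
            c = random.randrange(COLS)
            path, cost = [], 0.0
            for r in range(ROWS):
                opts = [x for x in (c - 1, c, c + 1) if 0 <= x < COLS]
                weights = [tau[r][x] / (0.01 + AMP[r][x]) for x in opts]
                c = random.choices(opts, weights=weights)[0]
                path.append(c)
                cost += AMP[r][c]
            if iter_best is None or cost < iter_best[0]:
                iter_best = (cost, path)
            if cost < best_cost:
                best_cost, best_path = cost, path
        tau = [[(1 - rho) * t for t in row] for row in tau]  # evaporation
        for r, c in enumerate(iter_best[1]):                 # reinforcement
            tau[r][c] += 1.0 / iter_best[0]
    return best_path, best_cost

path, cost = run_colony()
```

    The colony converges on the low-amplitude column, which is what the method then labels as the fault within each rectangular block.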

  12. Rupture dynamics with energy loss outside the slip zone

    USGS Publications Warehouse

    Andrews, D.J.

    2005-01-01

    Energy loss in a fault damage zone, outside the slip zone, contributes to the fracture energy that determines rupture velocity of an earthquake. A nonelastic two-dimensional dynamic calculation is done in which the slip zone is modeled as a fault plane and material off the fault is subject to a Coulomb yield condition. In a mode 2 crack-like solution in which an abrupt uniform drop of shear traction on the fault spreads from a point, Coulomb yielding occurs on the extensional side of the fault. Plastic strain is distributed with uniform magnitude along the fault, and it has a thickness normal to the fault proportional to propagation distance. Energy loss off the fault is also proportional to propagation distance, and it can become much larger than energy loss on the fault specified by the fault constitutive relation. The slip velocity function could be produced in an equivalent elastic problem by a slip-weakening friction law with breakdown slip Dc increasing with distance. Fracture energy G and equivalent Dc will be different in ruptures with different initiation points and stress drops, so they are not constitutive properties; they are determined by the dynamic solution that arrives at a particular point. Peak slip velocity is, however, a property of a fault location. Nonelastic response can be mimicked by imposing a limit on slip velocity on a fault in an elastic medium.

  13. Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai F.; Curran, Simon

    2009-01-01

    Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults exist. It is equally difficult to seed particular sensor faults. Therefore, research is underway to better understand the different fault modes seen in sensors and to model the faults. The fault models can then be used in simulated sensor fault scenarios to ensure that algorithms can distinguish between sensor faults and system faults. The paper illustrates the work with data collected from an electro-mechanical actuator in an aerospace setting, equipped with temperature, vibration, current, and position sensors. The most common sensor faults, such as bias, drift, scaling, and dropout were simulated and injected into the experimental data, with the goal of making these simulations as realistic as feasible. A neural network based classifier was then created and tested on both experimental data and the more challenging randomized data sequences. Additional studies were also conducted to determine sensitivity of detection and disambiguation efficacy to severity of fault conditions.
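    The fault modes named above (bias, drift, scaling, dropout) are simple to overlay on clean sensor data, which is the essence of the injection step. The function below is an illustrative sketch; parameter names and magnitudes are not from the paper.

```python
def inject_fault(samples, mode, magnitude=1.0, start=0):
    """Overlay one common sensor-fault mode onto clean data, beginning at
    sample index `start`. Modes mirror those listed in the abstract."""
    out = list(samples)
    span = max(len(out) - start, 1)
    for i in range(start, len(out)):
        if mode == "bias":
            out[i] += magnitude                      # constant offset
        elif mode == "drift":
            out[i] += magnitude * (i - start) / span  # error ramps up
        elif mode == "scaling":
            out[i] *= magnitude                      # gain error
        elif mode == "dropout":
            out[i] = samples[start]                  # sensor flat-lines
    return out

clean = [float(i % 5) for i in range(10)]
biased = inject_fault(clean, "bias", magnitude=0.5, start=5)
stuck = inject_fault(clean, "dropout", start=3)
```

    Injected traces like these, mixed with healthy ones, give a classifier labeled examples of sensor faults without having to seed faults in hardware.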

  14. Invariant protection of high-voltage electric motors of technological complexes at industrial enterprises at partial single-phase ground faults

    NASA Astrophysics Data System (ADS)

    Abramovich, B. N.; Sychev, Yu A.; Pelenev, D. N.

    2018-03-01

    Development results for the invariant protection of high-voltage motors against incomplete single-phase ground faults are presented in this article. It is established that existing current protections have low selectivity because the input signals decrease unacceptably when the short circuit occurs through a transient resistance. A structural-functional scheme and an algorithm of protective actions are developed in which the zero-sequence current signals of the protected connections are automatically corrected according to the degree of incompleteness of the ground fault. It is shown that automatic correction of the zero-sequence currents keeps the protection's sensitivity factor invariant under variations of the transient resistance at the fault location. Application of the invariant protection minimizes the damage in 6-10 kV electrical installations of industrial enterprises caused by interruption of consumers' power supply and system breakdown, owing to timely localization of ground-fault emergency modes.

  15. Avionic Air Data Sensors Fault Detection and Isolation by means of Singular Perturbation and Geometric Approach

    PubMed Central

    2017-01-01

    Singular Perturbations represent an advantageous theory to deal with systems characterized by a two-time scale separation, such as the longitudinal dynamics of aircraft which are called phugoid and short period. In this work, the combination of the NonLinear Geometric Approach and the Singular Perturbations leads to an innovative Fault Detection and Isolation system dedicated to the isolation of faults affecting the air data system of a general aviation aircraft. The isolation capabilities, obtained by means of the approach proposed in this work, allow for the solution of a fault isolation problem otherwise not solvable by means of standard geometric techniques. Extensive Monte-Carlo simulations, exploiting a high fidelity aircraft simulator, show the effectiveness of the proposed Fault Detection and Isolation system. PMID:28946673

  16. Experiment study on an inductive superconducting fault current limiter using no-insulation coils

    NASA Astrophysics Data System (ADS)

    Qiu, D.; Li, Z. Y.; Gu, F.; Huang, Z.; Zhao, A.; Hu, D.; Wei, B. G.; Huang, H.; Hong, Z.; Ryu, K.; Jin, Z.

    2018-03-01

    No-insulation (NI) coils made of 2G high temperature superconducting (HTS) tapes have been widely used in DC magnets because of their excellent engineering current density, thermal stability, and mechanical strength. However, few AC power devices use NI coils at present. In this paper, an NI coil is applied to an inductive superconducting fault current limiter (iSFCL) for the first time. A two-winding air-core iSFCL prototype was fabricated, composed of a primary copper winding and a secondary no-insulation winding wound with 2G HTS coated conductors. First, to verify the feasibility of using an NI coil as the secondary winding, the impedance of the prototype was measured at different currents and cycle counts; the results show that the impedance increases rapidly as the current rises. The iSFCL prototype was then tested on a 40 V rms/3.3 kA peak short-circuit experiment platform, and both its fault-current-limiting and recovery properties are discussed.

  17. Kinematic evolution of the junction of the San Andreas, Garlock, and Big Pine faults, California

    USGS Publications Warehouse

    Bohannon, Robert G.; Howell, David G.

    1982-01-01

    If the San Andreas fault with about 300 km of right slip, the Garlock fault with about 60 km of left slip, and the Big Pine fault with about 15 km of left slip are considered to have been contemporaneously active, a space problem at their high-angle junctions becomes apparent. Large crustal masses converge in the area of the junctions as a result of the simultaneous large displacements on the faults. We present here a model in which an early straight north-northwest–trending San Andreas deforms to its present bent configuration in response to a westward displacement of crust north of the Garlock fault. During this deformation, the crust north of the Garlock in the vicinity of the junction undergoes north-south shortening, while the fault junction migrates along the trace of the San Andreas fault to the southeast relative to its original position. As a result of this migration, the Mojave area is displaced to the east relative to the original junction position. We suggest a similar history in mirror image for the Big Pine fault and the areas of crust adjacent to it.

  18. Fault Tree Analysis.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L

    The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.

  19. Control and protection system for paralleled modular static inverter-converter systems

    NASA Technical Reports Server (NTRS)

    Birchenough, A. G.; Gourash, F.

    1973-01-01

    A control and protection system was developed for use with a paralleled 2.5-kWe-per-module static inverter-converter system. The control and protection system senses internal and external fault parameters such as voltage, frequency, current, and paralleling current unbalance. A logic system controls contactors to isolate defective power conditioners or loads. The system sequences contactor operation to automatically control parallel operation, startup, and fault isolation. Transient overload protection and fault checking sequences are included. The operation and performance of a control and protection system, with detailed circuit descriptions, are presented.

  20. Fault tolerant architectures for integrated aircraft electronics systems, task 2

    NASA Technical Reports Server (NTRS)

    Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.

    1984-01-01

    The architectural basis for an advanced fault tolerant on-board computer to succeed the current generation of fault tolerant computers is examined. The network error tolerant system architecture is studied with particular attention to intercluster configurations and communication protocols, and to refined reliability estimates. The diagnosis of faults, so that appropriate choices for reconfiguration can be made, is discussed. The analysis relates particularly to the recognition of transient faults in a system with tasks at many levels of priority. The demand driven data-flow architecture, which appears to have possible application in fault tolerant systems, is described, and work investigating the feasibility of automatic generation of aircraft flight control programs from abstract specifications is reported.

  1. Database and Map of Quaternary Faults and Folds in Peru and its Offshore Region

    USGS Publications Warehouse

    Machare, Jose; Fenton, Clark H.; Machette, Michael N.; Lavenu, Alain; Costa, Carlos; Dart, Richard L.

    2003-01-01

    This publication consists of a main map of Quaternary faults and folds of Peru, a table of Quaternary fault data, a regional inset map showing relative plate motion, and a second inset map of an enlarged area of interest in southern Peru. These maps and data compilation show evidence for activity of Quaternary faults and folds in Peru and its offshore regions of the Pacific Ocean. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. These data are accompanied by text databases that describe these features and document current information on their activity in the Quaternary.

  2. Fault Analysis and Detection in Microgrids with High PV Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Khatib, Mohamed; Hernandez Alvidrez, Javier; Ellis, Abraham

    In this report we focus on analyzing the behaviour of current-controlled PV inverters under faults in order to develop fault detection schemes for microgrids with high PV penetration. An inverter model suitable for steady-state fault studies is presented, and the impact of PV inverters on two protection elements is analyzed: the superimposed-quantities-based directional element and the negative-sequence directional element. Additionally, several non-overcurrent fault detection schemes for microgrids with high PV penetration are discussed. A detailed time-domain simulation study is presented to assess the performance of these fault detection schemes under different microgrid modes of operation.

  3. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    PubMed Central

    Wang, Huaqing; Ke, Yanliang; Song, Liuyang; Tang, Gang; Chen, Peng

    2016-01-01

    The traditional approaches to condition monitoring of roller bearings almost always operate under Shannon-sampling conditions, leading to a big-data problem. Compressed sensing (CS) theory provides a new solution to the big-data problem. However, vibration signals are insufficiently sparse, and it is difficult to achieve sparsity with conventional techniques, which impedes the application of CS theory. It is therefore of great significance to promote sparsity when applying CS theory to the fault diagnosis of roller bearings. To increase the sparsity of the vibration signals, a sparsity-promoting method, the tunable Q-factor wavelet transform, is used to decompose the analyzed signals into transient impact components and high-oscillation components. The former become sparser than the raw signals, with the noise eliminated, whereas the latter retain the noise. The decomposed transient impact components therefore replace the original signals for analysis. CS theory is applied to extract the fault features without complete reconstruction: the reconstruction is complete once the components at the frequencies of interest are detected, so fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that CS theory, assisted by the tunable Q-factor wavelet transform, can successfully extract the fault features from the compressed samples. PMID:27657063
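    The core CS step, recovering a sparse vector from far fewer random measurements than samples, can be sketched with iterative soft-thresholding (ISTA). The sparse vector below is a made-up stand-in for the transient impact component produced by the tunable-Q decomposition; the paper's pipeline is more elaborate.

```python
import math
import random

random.seed(7)
n, m = 48, 24   # 48-sample sparse signal, 24 compressed measurements
# Made-up 3-sparse "transient impact" vector (indices/amplitudes invented).
x_true = [0.0] * n
for idx, amp in [(5, 2.0), (20, -1.5), (41, 1.0)]:
    x_true[idx] = amp

# Random Gaussian measurement matrix and the compressed samples y = A x.
A = [[random.gauss(0.0, 1.0 / math.sqrt(m)) for _ in range(n)] for _ in range(m)]
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def ista(A, y, lam=0.02, step=0.15, iters=1500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(Ai[j] * x[j] for j in range(n)) - yi for Ai, yi in zip(A, y)]
        for j in range(n):
            v = x[j] - step * sum(A[i][j] * r[i] for i in range(m))
            x[j] = math.copysign(max(abs(v) - step * lam, 0.0), v)
    return x

x_hat = ista(A, y)
```

    With half as many measurements as samples, the three impact locations and amplitudes are recovered, which is the property the compressed diagnosis relies on.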

  4. Sinusoidal synthesis based adaptive tracking for rotating machinery fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    This paper presents a novel Sinusoidal Synthesis Based Adaptive Tracking (SSBAT) technique for vibration-based rotating machinery fault detection. The proposed SSBAT algorithm is an adaptive time series technique that makes use of both frequency and time domain information of vibration signals. Such information is incorporated in a time varying dynamic model. Signal tracking is then realized by applying adaptive sinusoidal synthesis to the vibration signal. A modified Least-Squares (LS) method is adopted to estimate the model parameters. In addition to tracking, the proposed vibration synthesis model is mainly used as a linear time-varying predictor. The health condition of the rotating machine is monitored by checking the residual between the predicted and measured signals. The SSBAT method takes advantage of the sinusoidal nature of vibration signals and transfers the nonlinear problem into a linear adaptive problem in the time domain based on a state-space realization. It has a low computation burden and does not need a priori knowledge of the machine under the no-fault condition, which makes the algorithm ideal for on-line fault detection. The method is validated using both numerical simulation and practical application data. Meanwhile, the fault detection results are compared with the commonly adopted autoregressive (AR) and autoregressive Minimum Entropy Deconvolution (ARMED) methods to verify the feasibility and performance of the SSBAT method.
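    The predict-then-check-residual idea can be sketched with a one-component least-squares sinusoid fit at a known shaft frequency; a fault's extra impacts inflate the residual. This is a deliberately minimal stand-in for the multi-harmonic adaptive SSBAT model, with all signal parameters invented.

```python
import math

def fit_sinusoid(samples, omega, dt):
    """Closed-form least-squares fit of a*sin(w t) + b*cos(w t)."""
    n = len(samples)
    s = [math.sin(omega * i * dt) for i in range(n)]
    c = [math.cos(omega * i * dt) for i in range(n)]
    ss = sum(x * x for x in s)
    cc = sum(x * x for x in c)
    sc = sum(x * y for x, y in zip(s, c))
    ys = sum(y * x for y, x in zip(samples, s))
    yc = sum(y * x for y, x in zip(samples, c))
    det = ss * cc - sc * sc           # normal-equation determinant
    return (ys * cc - yc * sc) / det, (yc * ss - ys * sc) / det

def residual_rms(samples, omega, dt):
    """RMS of measured-minus-synthesized signal: the health indicator."""
    a, b = fit_sinusoid(samples, omega, dt)
    r = [y - a * math.sin(omega * i * dt) - b * math.cos(omega * i * dt)
         for i, y in enumerate(samples)]
    return math.sqrt(sum(v * v for v in r) / len(r))

dt, omega = 0.001, 2 * math.pi * 50   # 50 Hz shaft speed, 1 kHz sampling
healthy = [2.0 * math.sin(omega * i * dt + 0.3) for i in range(400)]
faulty = [h + (0.8 if i % 100 == 0 else 0.0) for i, h in enumerate(healthy)]
```

    The healthy record synthesizes almost perfectly, while the periodic impacts leave an obvious residual, which is the trigger for a fault alarm.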

  5. The Curiosity Mars Rover's Fault Protection Engine

    NASA Technical Reports Server (NTRS)

    Benowitz, Ed

    2014-01-01

    The Curiosity Rover, currently operating on Mars, contains flight software onboard to autonomously handle aspects of system fault protection. Over 1000 monitors and 39 responses are present in the flight software. Orchestrating these behaviors is the flight software's fault protection engine. In this paper, we discuss the engine's design, responsibilities, and present some lessons learned for future missions.

  6. Effect of faulting on ground-water movement in the Death Valley Region, Nevada and California

    USGS Publications Warehouse

    Faunt, Claudia C.

    1997-01-01

    The current crustal stress field was combined with fault orientations to predict potential effects of faults on the regional groundwater flow regime. Numerous examples of faultcontrolled ground-water flow exist within the study area. Hydrologic data provided an independent method for checking some of the assumptions concerning preferential flow paths.

  7. A survey of NASA and military standards on fault tolerance and reliability applied to robotics

    NASA Technical Reports Server (NTRS)

    Cavallaro, Joseph R.; Walker, Ian D.

    1994-01-01

    There is currently increasing interest and activity in the area of reliability and fault tolerance for robotics. This paper discusses the application of Standards in robot reliability, and surveys the literature of relevant existing standards. A bibliography of relevant Military and NASA standards for reliability and fault tolerance is included.

  8. Incipient fault detection study for advanced spacecraft systems

    NASA Technical Reports Server (NTRS)

    Milner, G. Martin; Black, Michael C.; Hovenga, J. Mike; Mcclure, Paul F.

    1986-01-01

    A feasibility study to investigate the application of vibration monitoring to the rotating machinery of planned NASA advanced spacecraft components is described. Factors investigated include: (1) special problems associated with small, high RPM machines; (2) application across multiple component types; (3) microgravity; (4) multiple fault types; (5) eight different analysis techniques including signature analysis, high frequency demodulation, cepstrum, clustering, amplitude analysis, and pattern recognition are compared; and (6) small sample statistical analysis is used to compare performance by computation of probability of detection and false alarm for an ensemble of repeated baseline and faulted tests. Both detection and classification performance are quantified. Vibration monitoring is shown to be an effective means of detecting the most important problem types for small, high RPM fans and pumps typical of those planned for the advanced spacecraft. A preliminary monitoring system design and implementation plan is presented.
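    One of the analysis techniques listed above, cepstrum analysis, detects the repetition period of fault impacts as a peak quefrency. The sketch below applies it to a synthetic impact train; it uses a naive O(n^2) DFT purely for self-containment, where a real monitor would use an FFT.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def real_cepstrum(x):
    """Inverse DFT of the log magnitude spectrum; periodic spectral
    structure shows up as a peak at the corresponding quefrency."""
    n = len(x)
    logmag = [math.log(abs(v) + 1e-12) for v in dft(x)]
    return [sum(logmag[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

n, period = 256, 16
impacts = [1.0 if t % period == 0 else 0.0 for t in range(n)]  # fault impacts
cep = real_cepstrum(impacts)
peak_quefrency = max(range(2, n // 2), key=lambda q: cep[q])
```

    The peak quefrency lands at (a multiple of) the impact period, from which the defect frequency of a bearing or fan can be read off.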

  9. Diagnosing a Failed Proof in Fault-Tolerance: A Disproving Challenge Problem

    NASA Technical Reports Server (NTRS)

    Pike, Lee; Miner, Paul; Torres-Pomales, Wilfredo

    2006-01-01

    This paper proposes a challenge problem in disproving. We describe a fault-tolerant distributed protocol designed at NASA for use in a fly-by-wire system for next-generation commercial aircraft. An early design of the protocol contains a subtle bug that is highly unlikely to be caught in fault injection testing. We describe a failed proof of the protocol's correctness in a mechanical theorem prover (PVS) with a complex unfinished proof conjecture. We use a model checking suite (SAL) to generate a concrete counterexample to the unproven conjecture to demonstrate the existence of a bug. However, we argue that the effort required in our approach is too high and propose what conditions a better solution would satisfy. We carefully describe the protocol and bug to provide a challenging but feasible case study for disproving research.

  10. Automatic Generation Control Study in Two Area Reheat Thermal Power System

    NASA Astrophysics Data System (ADS)

    Pritam, Anita; Sahu, Sibakanta; Rout, Sushil Dev; Ganthia, Sibani; Prasad Ganthia, Bibhu

    2017-08-01

    Industrial pollution is degrading our living environment. An electric grid contains many vital components, such as generators, motors, transformers, and loads, and there is always some imbalance between the sending-end and receiving-end systems, which can make the system unstable. Such errors and faults should be corrected as soon as possible; otherwise they propagate further faults and reduce the efficiency of the whole power system. The main problem arising from such faults is frequency deviation, which causes instability and may permanently damage the system. The mechanism studied in this paper keeps the system stable and balanced by regulating the frequency at both the sending and receiving ends using automatic generation control with various controllers, taking a two-area reheat thermal power system as the test case.
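    The frequency-regulation mechanism can be illustrated with a minimal one-area load-frequency model under integral (AGC) control; the paper's system is two-area with reheat turbines, and all per-unit constants below are illustrative textbook-style values, not the paper's.

```python
# One-area load-frequency control, forward-Euler integration.
# Swing: 2H * df' = Pm - dPload - D * f ; governor: Tg * Pm' = u - f/R - Pm.
H, D, R, Tg, Ki = 5.0, 1.0, 0.05, 0.2, 7.0   # assumed per-unit constants
dt, steps = 0.01, 5000                       # 50 s of simulated time
d_load = 0.1                                 # 0.1 pu step load increase

f_dev = pm = integ = 0.0
history = []
for _ in range(steps):
    u = -Ki * integ                          # supplementary (AGC) control
    pm += dt * ((u - f_dev / R - pm) / Tg)   # governor/turbine response
    f_dev += dt * ((pm - d_load - D * f_dev) / (2 * H))   # swing equation
    integ += dt * f_dev                      # integral of frequency error
    history.append(f_dev)
```

    The frequency dips after the load step, then the integral action drives the deviation back to zero, which is exactly the behaviour AGC is tuned for in the two-area study.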

  11. Gain-Scheduled Fault Tolerance Control Under False Identification

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine (Technical Monitor)

    2006-01-01

    An active fault tolerant control (FTC) law is generally sensitive to false identification since the control gain is reconfigured for fault occurrence. In the conventional FTC law design procedure, dynamic variations due to false identification are not considered. In this paper, an FTC synthesis method is developed in order to consider possible variations of closed-loop dynamics under false identification into the control design procedure. An active FTC synthesis problem is formulated into an LMI optimization problem to minimize the upper bound of the induced-L2 norm which can represent the worst-case performance degradation due to false identification. The developed synthesis method is applied for control of the longitudinal motions of FASER (Free-flying Airplane for Subscale Experimental Research). The designed FTC law of the airplane is simulated for pitch angle command tracking under a false identification case.

  12. Application Study of a High Temperature Superconducting Fault Current Limiter for Electric Power System

    NASA Astrophysics Data System (ADS)

    Naito, Yuji; Shimizu, Iwao; Yamaguchi, Iwao; Kaiho, Katsuyuki; Yanabu, Satoru

    A Superconducting Fault Current Limiter (SFCL) using a high temperature superconductor was built and tested. The superconductor and a vacuum interrupter serving as a commutation switch are connected in parallel with a bypass coil. When a fault occurs and excessive current flows, the superconductor quenches first and the current is transferred to the bypass coil because of the voltage drop across the superconductor. At the same time, since the current flowing in the bypass coil generates a magnetic field, the commutation switch is immediately driven by an electromagnetic repulsion plate connected to the driving rod of the vacuum interrupter, and the superconductor is separated from the circuit. With the test model, the superconductor could be separated from the circuit by the motion of the vacuum interrupter within a half cycle of the fault current, transferring all of the current to the bypass coil. Since the commutation switch operates as part of the current limiting action in this test model, the circuit is a helpful basis for future SFCL development. Moreover, because the simple high-speed switch keeps the energy dissipated in the superconductor during the fault small, the burden on the superconductor is reduced compared with a conventional resistive SFCL, and the flexibility of SFCL design is considered to increase. Coordination with a circuit breaker was also considered; trial calculations of the parameters and operating energy were conducted, and the case of installing the SFCL in an electric power system is discussed.

  13. Orogen-scale uplift in the central Italian Apennines drives episodic behaviour of earthquake faults

    PubMed Central

    Cowie, P. A.; Phillips, R. J.; Roberts, G. P.; McCaffrey, K.; Zijerveld, L. J. J.; Gregory, L. C.; Faure Walker, J.; Wedmore, L. N. J.; Dunai, T. J.; Binnie, S. A.; Freeman, S. P. H. T.; Wilcken, K.; Shanks, R. P.; Huismans, R. S.; Papanikolaou, I.; Michetti, A. M.; Wilkinson, M.

    2017-01-01

    Many areas of the Earth’s crust deform by distributed extensional faulting and complex fault interactions are often observed. Geodetic data generally indicate a simpler picture of continuum deformation over decades but relating this behaviour to earthquake occurrence over centuries, given numerous potentially active faults, remains a global problem in hazard assessment. We address this challenge for an array of seismogenic faults in the central Italian Apennines, where crustal extension and devastating earthquakes occur in response to regional surface uplift. We constrain fault slip-rates since ~18 ka using variations in cosmogenic 36Cl measured on bedrock scarps, mapped using LiDAR and ground penetrating radar, and compare these rates to those inferred from geodesy. The 36Cl data reveal that individual faults typically accumulate meters of displacement relatively rapidly over several thousand years, separated by similar length time intervals when slip-rates are much lower, and activity shifts between faults across strike. Our rates agree with continuum deformation rates when averaged over long spatial or temporal scales (10^4 yr; 10^2 km) but over shorter timescales most of the deformation may be accommodated by <30% of the across-strike fault array. We attribute the shifts in activity to temporal variations in the mechanical work of faulting. PMID:28322311

  14. Factors That Affect Software Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.

    1991-01-01

    Software faults that infrequently affect software's output are dangerous. When a software fault causes frequent software failures, testing is likely to reveal the fault before the software is released; when the fault remains undetected during testing, it can cause disaster after the software is installed. A technique for predicting whether a particular piece of software is likely to reveal faults within itself during testing is found in [Voas91b]. A piece of software that is likely to reveal faults within itself during testing is said to have high testability; one that is not is said to have low testability. It is preferable to design software with higher testability from the outset, i.e., to create software with as high a degree of testability as possible, to avoid the problems of undetected faults that are associated with low testability. Information loss is a phenomenon that occurs during program execution that increases the likelihood that a fault will remain undetected. In this paper, we identify two broad classes of information loss, define them, and suggest ways of predicting the potential for information loss to occur. We do this in order to decrease the likelihood that faults will remain undetected during testing.

  15. Determining on-fault earthquake magnitude distributions from integer programming

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2018-02-01

    Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
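    The binary formulation can be sketched at toy scale with SciPy's mixed-integer solver (assuming SciPy ≥ 1.9 for `scipy.optimize.milp`). The catalog, per-event slip values, and slip bounds below are invented, and the paper's misfit minimization is reduced to a pure feasibility search within the slip-rate bounds; the real problem also encodes fault-length constraints and has millions of variables.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

slip = np.array([3.0, 2.0, 1.0, 1.0])   # slip per event (m), scaled from magnitude
n_ev, n_f = len(slip), 2
# Binary decision x[e, f] = 1 if event e is placed on fault f, flattened row-major.
A_assign = np.kron(np.eye(n_ev), np.ones((1, n_f)))  # each event on exactly one fault
A_slip = np.kron(slip.reshape(1, -1), np.eye(n_f))   # slip accumulated per fault

res = milp(
    c=np.zeros(n_ev * n_f),                          # feasibility: zero objective
    constraints=[
        LinearConstraint(A_assign, 1, 1),
        LinearConstraint(A_slip, [3.5, 2.5], [4.5, 3.5]),  # invented slip bounds
    ],
    integrality=np.ones(n_ev * n_f),
    bounds=Bounds(0, 1),
)
x = res.x.reshape(n_ev, n_f).round()
per_fault = slip @ x                                  # slip accumulated on each fault
print(per_fault)
```

The solver must put the 3 m and one 1 m event on the first fault to satisfy both bounds, mirroring how the bounding constraints steer event placement in the full problem.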

  17. Current state of active-fault monitoring in Taiwan

    NASA Astrophysics Data System (ADS)

    Hou, C.; Lin, C.; Chen, Y.; Liu, H.; Chen, C.; Lin, Y.; Chen, C.

    2008-12-01

    The earthquake is one of the major hazard sources in Taiwan, where an arc-continent collision is ongoing. For seismic hazard mitigation, the current state of each known active fault urgently needs to be understood. After the 1999 Chi-chi earthquake shocked Taiwan, the Central Geological Survey (CGS) of Taiwan aggressively promoted studies of active-fault activity. One of these tasks is the deployment of various monitoring networks covering all target areas where the earthquake-occurrence potential of active faults needs to be assessed. By the end of 2007, CGS had deployed over 1000 GPS campaign sites, 44 continuous GPS stations, and 42 leveling transects across the major active faults, with a total ground distance of 974 km. The campaign sites and leveling transects are measured once a year, and the resulting crustal deformation is used to derive fault slip models. Time-series analysis of the continuous GPS data further helps clarify the details of fault behavior. In addition, 12 down-hole strain meters, five stations monitoring liquid flux and geochemical proxies, and two water-table monitoring stations have been installed to seek possible anomalies related to earthquake activity, which may help discover reliable earthquake precursors.

  18. Optimal Placement of Dynamic Var Sources by Using Empirical Controllability Covariance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Huang, Weihong; Sun, Kai

    In this paper, the empirical controllability covariance (ECC), which is calculated around the considered operating condition of a power system, is applied to quantify the degree of controllability of system voltages under specific dynamic var source locations. An optimal dynamic var source placement method addressing fault-induced delayed voltage recovery (FIDVR) issues is further formulated as an optimization problem that maximizes the determinant of ECC. The optimization problem is effectively solved by the NOMAD solver, which implements the mesh adaptive direct search algorithm. The proposed method is tested on an NPCC 140-bus system and the results show that the proposed method with fault-specified ECC can solve the FIDVR issue caused by the most severe contingency with fewer dynamic var sources than the voltage sensitivity index (VSI)-based method. The proposed method with fault-unspecified ECC does not depend on the settings of the contingency and can address more FIDVR issues than the VSI method when placing the same number of SVCs under different fault durations. It is also shown that the proposed method can help mitigate voltage collapse.
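    The placement criterion itself, choosing locations whose combined controllability covariance has maximal determinant, can be sketched with stand-in matrices. The per-location ECCs below are synthetic positive-semidefinite matrices, not computed from any power-system model, and exhaustive search stands in for the NOMAD solver.

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)
n_states, n_candidates, k = 4, 6, 2
# Synthetic PSD stand-ins for per-location empirical controllability covariances.
eccs = []
for _ in range(n_candidates):
    M = rng.normal(size=(n_states, n_states))
    eccs.append(M @ M.T)

def logdet(S):
    """Log-determinant of the summed ECC for a candidate placement S."""
    return np.linalg.slogdet(sum(eccs[i] for i in S))[1]

# Exhaustive search over k-subsets: maximize the determinant of the summed ECC.
best = max(itertools.combinations(range(n_candidates), k), key=logdet)
print(best, logdet(best))
```

Working with the log-determinant avoids overflow and is monotone in the determinant, so the maximizer is unchanged.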

  19. A structural model decomposition framework for systems health management

    NASA Astrophysics Data System (ADS)

    Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
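    The submodel-extraction idea can be sketched in a few lines; the three-tank structure below is invented for illustration and is not the paper's algorithm. The key step is tracing backward from a variable of interest, collecting the equations needed to compute it, and cutting the traversal at measured variables, which then serve as local inputs of the submodel.

```python
# Each equation names the variable it computes and the variables it uses.
equations = {
    "e1": {"computes": "h1", "uses": ["u", "h1", "h2"]},
    "e2": {"computes": "h2", "uses": ["h1", "h2", "h3"]},
    "e3": {"computes": "h3", "uses": ["h2", "h3"]},
}
inputs = {"u"}

def extract_submodel(target, measured):
    """Equations needed to compute `target`, cutting dependencies at measured variables."""
    producer = {eq["computes"]: name for name, eq in equations.items()}
    submodel, stack, seen = set(), [target], set()
    while stack:
        v = stack.pop()
        if v in seen or v in inputs or (v in measured and v != target):
            continue                      # known inputs and measurements are cut points
        seen.add(v)
        name = producer[v]
        submodel.add(name)
        stack.extend(equations[name]["uses"])
    return submodel

print(extract_submodel("h1", measured={"h2"}))   # -> {'e1'}
print(extract_submodel("h1", measured=set()))    # -> all three equations
```

With the second tank level measured, the first tank's submodel shrinks to a single equation, which is exactly the scalability gain the abstract describes.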

  20. The Earth isn't flat: The (large) influence of topography on geodetic fault slip imaging.

    NASA Astrophysics Data System (ADS)

    Thompson, T. B.; Meade, B. J.

    2017-12-01

    While earthquakes both occur near and generate steep topography, most geodetic slip inversions assume that the Earth's surface is flat. We have developed a new boundary element tool, Tectosaur, with the capability to study fault and earthquake problems including complex fault system geometries, topography, material property contrasts, and millions of elements. Using Tectosaur, we study the model error induced by neglecting topography in both idealized synthetic fault models and for the cases of the Mw 7.3 Landers and Mw 8.0 Wenchuan earthquakes. Near the steepest topography, we find the use of flat Earth dislocation models may induce errors of more than 100% in the inferred slip magnitude and rake. In particular, neglecting topographic effects leads to an inferred shallow slip deficit. Thus, we propose that the shallow slip deficit observed in several earthquakes may be an artefact resulting from the systematic use of elastic dislocation models assuming a flat Earth. Finally, using this study as an example, we emphasize the dangerous potential for forward model errors to be amplified by an order of magnitude in inverse problems.
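    The closing point, that forward-model error is amplified in an inverse problem, can be seen in a few lines of linear algebra. The operator below is a generic ill-conditioned matrix standing in for elastic Green's functions; the sizes, singular values, and 1% perturbation are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Ill-conditioned "Green's function" stand-in with singular values 1, 0.1, 0.01.
U, _ = np.linalg.qr(rng.normal(size=(3, 3)))
V, _ = np.linalg.qr(rng.normal(size=(3, 3)))
G = U @ np.diag([1.0, 0.1, 0.01]) @ V.T

m_true = np.array([1.0, 1.0, 1.0])       # "true slip"
d = G @ m_true                           # noise-free synthetic data

# Forward-model error (e.g., neglecting topography): ~1% perturbation of G.
E = rng.normal(size=(3, 3))
E *= 0.01 * np.linalg.norm(G) / np.linalg.norm(E)
m_est = np.linalg.lstsq(G + E, d, rcond=None)[0]

forward_err = np.linalg.norm(E) / np.linalg.norm(G)               # 1% by construction
model_err = np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)
print(forward_err, model_err)
```

Because the operator's condition number is large, the recovered model error generically far exceeds the 1% operator error, which is the amplification mechanism the abstract warns about.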

  2. 30 CFR 75.801 - Grounding resistors.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... volts under fault conditions. The grounding resistor shall be rated for maximum fault current continuously and insulated from ground for a voltage equal to the phase-to-phase voltage of the system. ...
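    The sizing arithmetic behind such a requirement can be illustrated with a hedged sketch; the system voltage and target ground-fault current below are assumed for illustration, not taken from the regulation. The resistor value follows from the phase-to-neutral voltage, and its continuous rating from the maximum fault current it must carry.

```python
import math

V_ll = 950.0                       # assumed phase-to-phase system voltage (V)
I_fault_max = 25.0                 # assumed maximum ground-fault current (A)

V_ln = V_ll / math.sqrt(3)         # phase-to-neutral voltage across the resistor
R = V_ln / I_fault_max             # resistance that limits the ground-fault current
P_continuous = I_fault_max**2 * R  # continuous rating at maximum fault current (W)
print(round(V_ln, 1), round(R, 2), round(P_continuous, 1))
```

Note the regulation's separate requirement that the resistor's insulation be rated for the full phase-to-phase voltage, not just the phase-to-neutral voltage it normally sees.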

  4. Toward the Reliability of Fault Representation Methods in Finite Difference Schemes for Simulation of Shear Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Dalguer, L. A.; Day, S. M.

    2006-12-01

    Accuracy in finite difference (FD) solutions to spontaneous rupture problems is controlled principally by the scheme used to represent the fault discontinuity, and not by the grid geometry used to represent the continuum. We have numerically tested three fault representation methods: the Thick Fault (TF) proposed by Madariaga et al (1998), the Stress Glut (SG) described by Andrews (1999), and the Staggered-Grid Split-Node (SGSN) method proposed by Dalguer and Day (2006), each implemented in the fourth-order velocity-stress staggered-grid (VSSG) FD scheme. The TF and SG methods approximate the discontinuity through inelastic increments to stress components ("inelastic-zone" schemes) at a set of stress grid points taken to lie on the fault plane. With this type of scheme, the fault surface is indistinguishable from an inelastic zone with a thickness given by the spatial step dx for the SG model, and 2dx for the TF model. The SGSN method uses the traction-at-split-node (TSN) approach adapted to the VSSG FD scheme. This method represents the fault discontinuity by explicitly incorporating discontinuity terms at velocity nodes in the grid, with interactions between the "split nodes" occurring exclusively through the tractions (frictional resistance) acting between them. These tractions are in turn controlled by the jump conditions and a friction law. Our solutions to a 3D test problem show that the inelastic-zone TF and SG methods perform much more poorly than the SGSN formulation. The SG inelastic-zone method achieved solutions that are qualitatively meaningful and quantitatively reliable to within a few percent. The TF inelastic-zone method did not achieve qualitative agreement with the reference solutions to the 3D test problem, and proved sufficiently computationally inefficient that it was not feasible to explore convergence quantitatively. The SGSN method gives very accurate solutions, and is also very efficient. Reliable solution of the rupture time is reached with a median resolution of the cohesive zone of only ~2 grid points, and efficiency is competitive with the Boundary Integral (BI) method. The results presented here demonstrate that appropriate fault representation in a numerical scheme is crucial to reducing uncertainties in numerical simulations of earthquake source dynamics and ground motion, and is therefore important to improving our understanding of earthquake physics in general.

  5. Promise and problems in using stress triggering models for time-dependent earthquake hazard assessment

    NASA Astrophysics Data System (ADS)

    Cocco, M.

    2001-12-01

    Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with the seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress-trigger or stress-shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate, and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability may therefore be the most suitable tool to test the interaction between large magnitude earthquakes. Despite these important implications and stimulating perspectives, there are problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how, and whether, the induced stress perturbations modify the ratio of small to large magnitude earthquakes: we cannot distinguish between a change of this ratio in favor of small events or of large magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static stress step; however, we know that other processes contribute to the stressing history perturbing the faults, such as dynamic stress changes and post-seismic stress changes caused by viscoelastic relaxation or fluid flow. If, for instance, we believe that dynamic stress changes can trigger aftershocks or earthquakes years after the passing of the seismic waves through the fault, the prospect of calculating interaction probability is untenable. It is therefore clear that we have learned much about earthquake interaction by incorporating fault constitutive properties, resolving existing controversies but leaving open questions for future research.

  6. Expert systems for real-time monitoring and fault diagnosis

    NASA Technical Reports Server (NTRS)

    Edwards, S. J.; Caglayan, A. K.

    1989-01-01

    Methods for building real-time onboard expert systems were investigated, and the use of expert systems technology was demonstrated in improving the performance of current real-time onboard monitoring and fault diagnosis applications. The potential applications of the proposed research include an expert system environment allowing the integration of expert systems into conventional time-critical application solutions, a grammar for describing the discrete event behavior of monitoring and fault diagnosis systems, and their applications to new real-time hardware fault diagnosis and monitoring systems for aircraft.

  7. Fault connectivity, distributed shortening, and impacts on geologic-geodetic slip rate discrepancies in the central Mojave Desert, California

    NASA Astrophysics Data System (ADS)

    Selander, J.; Oskin, M. E.; Cooke, M. L.; Grette, K.

    2015-12-01

    Understanding off-fault deformation and distribution of displacement rates associated with disconnected strike-slip faults requires a three-dimensional view of fault geometries. We address problems associated with distributed faulting by studying the Mojave segment of the East California Shear Zone (ECSZ), a region dominated by northwest-directed dextral shear along disconnected northwest- southeast striking faults. We use a combination of cross-sectional interpretations, 3D Boundary Element Method (BEM) models, and slip-rate measurements to test new hypothesized fault connections. We find that reverse faulting acts as an important means of slip transfer between strike-slip faults, and show that the impacts of these structural connections on shortening, uplift, strike-slip rates, and off-fault deformation, help to reconcile the overall strain budget across this portion of the ECSZ. In detail, we focus on the Calico and Blackwater faults, which are hypothesized to together represent the longest linked fault system in the Mojave ECSZ, connected by a restraining step at 35°N. Across this restraining step the system displays a pronounced displacement gradient, where dextral offset decreases from ~11.5 to <2 km from south to north. Cross-section interpretations show that ~40% of this displacement is transferred from the Calico fault to the Harper Lake and Blackwater faults via a set of north-dipping thrust ramps. Late Quaternary dextral slip rates follow a similar pattern, where 1.4 +0.8/-0.4 mm/yr of slip along the Calico fault south of 35°N is distributed to the Harper Lake, Blackwater, and Tin Can Alley faults. BEM model results using revised fault geometries for the Mojave ECSZ show areas of uplift consistent with contractional structures, and fault slip-rates that more closely match geologic data. Overall, revised fault connections and addition of off-fault deformation greatly reduces the discrepancy between geodetic and geologic slip rates.

  8. Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elkhatib, Mohamed; Ellis, Abraham; Milan Biswal

    keywords: Microgrid Protection, Impedance Relay, Signal Processing-based Fault Detection, Networked Microgrids, Communication-Assisted Protection. In this report we address the challenge of designing an efficient protection system for inverter-dominated microgrids. These microgrids are characterised by limited fault-current capacity as a result of the current-limiting protection functions of inverters. Typically, inverters limit their fault contribution in a sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection can fail completely to detect faults in inverter-dominated microgrids. As part of this project, a detailed literature survey of existing and proposed microgrid protection schemes was conducted. The survey concluded that there is a gap in the available microgrid protection methods: the only credible protection solution available in the literature for low-fault inverter-dominated microgrids is the differential protection scheme, which represents a robust transmission-grade protection solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project: impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance-based protection schemes were developed. The first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a central protection unit that communicates with all feeder relays and locates the fault based on directional flags received from them; this approach could potentially be adapted to protect networked microgrids and dynamic-topology microgrids. Transient-based protection relies on analyzing high-frequency transients to detect and locate faults. This approach is very promising, but its implementation in the field faces several challenges. For example, high-frequency transients due to faults can be confused with transients due to other events such as capacitor switching. Additionally, while detecting faults by analyzing transients may be feasible, locating faults by analyzing transients is still an open question.

  9. Advanced diagnostic system for piston slap faults in IC engines, based on the non-stationary characteristics of the vibration signals

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Randall, Robert Bond; Peeters, Bart

    2016-06-01

    Artificial Neural Networks (ANNs) have the potential to solve the problem of automated diagnostics of piston slap faults, but the critical issue for their successful application is training the network with a large amount of data covering various engine conditions (different speed/load combinations in the normal condition, and different locations/levels of faults). On the other hand, the latest simulation technology provides a useful alternative: the effect of clearance changes may readily be explored without recourse to cutting metal, in order to create enough training data for the ANNs. In this paper, based on some existing simplified models of piston slap, advanced multi-body dynamic simulation software was used to simulate piston slap faults under different speed/load and clearance conditions, and the simulation models were validated and updated by a series of experiments. A three-stage network system is proposed to diagnose piston faults: fault detection, fault localisation, and fault severity identification. Multi-Layer Perceptron (MLP) networks were used in the detection and severity/prognosis stages, and a Probabilistic Neural Network (PNN) was used to identify which cylinder has faults. Finally, it was demonstrated that networks trained purely on simulated data can efficiently detect piston slap faults in real tests and identify the location and severity of the faults as well.
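    The probabilistic neural network used in the localisation stage is simple enough to sketch directly. The two-dimensional "vibration features" and class means below are synthetic inventions, not the paper's trained model: each cylinder class is scored by a sum of Gaussian kernels centred on its training samples, and the highest score wins.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic feature clusters standing in for faults in three cylinders.
means = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X_train = np.vstack([m + 0.3 * rng.normal(size=(20, 2)) for m in means])
y_train = np.repeat([0, 1, 2], 20)

def pnn_predict(x, sigma=0.5):
    """Probabilistic neural network: Parzen-window density score per class."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    kernel = np.exp(-d2 / (2 * sigma**2))
    scores = [kernel[y_train == c].sum() for c in range(3)]
    return int(np.argmax(scores))

print(pnn_predict(np.array([3.8, 0.2])))   # point near the cylinder-1 cluster
```

A PNN needs no iterative training, only the stored samples, which is one reason it suits the localisation stage where labeled simulated data are plentiful.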

  10. A Design of Finite Memory Residual Generation Filter for Sensor Fault Detection

    NASA Astrophysics Data System (ADS)

    Kim, Pyung Soo

    2017-04-01

    In the current paper, a residual generation filter with finite memory structure is proposed for sensor fault detection. The proposed finite memory residual generation filter provides the residual by real-time filtering of the fault vector using only the most recent finite measurements and inputs on the window. It is shown that the residual given by the proposed filter is exact for noise-free systems. The proposed residual generation filter is given a digital filter structure for amenability to hardware implementation. Finally, to illustrate the capability of the proposed filter, extensive simulations are performed for a discretized DC motor system with two types of sensor fault: an incipient soft bias-type fault and an abrupt bias-type fault. In particular, meaningful simulation results are given for the abrupt bias-type fault across diverse noise levels and window lengths.
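    The windowed-residual idea can be sketched for a scalar sensor; the constant-signal model, noise level, and bias magnitude below are invented, and a moving average stands in for the paper's filter. Because the prediction uses only the most recent N samples, an abrupt bias produces a residual spike within one window length.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10                                    # finite memory (window) length
bias_at, bias = 100, 0.5                  # abrupt bias-type fault at sample 100
z = 1.0 + 0.01 * rng.normal(size=200)     # scalar sensor, constant true value
z[bias_at:] += bias

residual = np.zeros_like(z)
for k in range(N, len(z)):
    pred = z[k - N:k].mean()              # prediction from the finite window only
    residual[k] = z[k] - pred

before = np.abs(residual[N:bias_at]).mean()
after = np.abs(residual[bias_at:bias_at + 5]).mean()
print(before, after)
```

Note the finite-memory trade-off visible here: once the window fills with post-fault samples the residual decays back toward the noise floor, so detection logic must act within the window length.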

  11. Functional requirements for an intelligent RPC. [remote power controller for spaceborne electrical distribution system

    NASA Technical Reports Server (NTRS)

    Aucoin, B. M.; Heller, R. P.

    1990-01-01

    An intelligent remote power controller (RPC) based on microcomputer technology can implement advanced functions for the accurate and secure detection of all types of faults on a spaceborne electrical distribution system. The intelligent RPC will implement conventional protection functions such as overcurrent, under-voltage, and ground fault protection. Advanced functions for the detection of soft faults, which cannot presently be detected, can also be implemented. Adaptive overcurrent protection changes overcurrent settings based on connected load. Incipient and high-impedance fault detection provides early detection of arcing conditions to prevent fires, and to clear and reconfigure circuits before soft faults progress to a hard-fault condition. Power electronics techniques can be used to implement fault current limiting to prevent voltage dips during hard faults. It is concluded that these techniques will enhance the overall safety and reliability of the distribution system.
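    The adaptive overcurrent function described above can be sketched in a few lines; the margin factor and currents are invented for illustration, not drawn from any RPC specification. The trip threshold tracks the connected load instead of a fixed rating, so a modest fault on a lightly loaded circuit still trips.

```python
def adaptive_overcurrent_trip(i_measured, i_connected_load, margin=1.5):
    """Trip when current exceeds a threshold scaled to the connected load."""
    return i_measured > margin * i_connected_load

# Lightly loaded circuit: a 12 A fault trips even though it may be far below
# the fixed pickup a conventional breaker would use.
print(adaptive_overcurrent_trip(12.0, 5.0))   # trips
print(adaptive_overcurrent_trip(6.0, 5.0))    # within margin, no trip
```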

  12. Switch failure diagnosis based on inductor current observation for boost converters

    NASA Astrophysics Data System (ADS)

    Jamshidpour, E.; Poure, P.; Saadate, S.

    2016-09-01

    Given the growing number of applications using DC-DC power converters, improving their reliability is the subject of an increasing number of studies. Especially in safety-critical applications, designing fault-tolerant converters is becoming mandatory. In this paper, a switch fault-tolerant DC-DC converter is studied. First, some of the fastest Fault Detection Algorithms (FDAs) are recalled. Then, a fast switch FDA is proposed that can detect both types of failure: an open-circuit fault as well as a short-circuit fault can be detected in less than one switching period. Second, a fault-tolerant converter that can be reconfigured under those types of fault is introduced. Hardware-In-the-Loop (HIL) results and experimental validations are given to verify the validity of the proposed switch fault-tolerant approach in the case of a single-switch DC-DC boost converter with one redundant switch.
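    The observation principle behind such FDAs can be sketched minimally (the gate pattern, slopes, and confirmation count below are invented; published FDAs are more elaborate): in a boost converter the inductor current must rise while the switch is commanded on and fall while it is off, so a slope that disagrees with the gate command for several consecutive samples flags a switch fault.

```python
def classify_switch_fault(gate, di, n_confirm=3):
    """gate: commanded switch states (True = on); di: measured inductor-current slopes.
    Open-circuit fault: current falls although the switch is commanded on.
    Short-circuit fault: current rises although the switch is commanded off."""
    open_cnt = short_cnt = 0
    for g, s in zip(gate, di):
        open_cnt = open_cnt + 1 if (g and s < 0) else 0
        short_cnt = short_cnt + 1 if (not g and s > 0) else 0
        if open_cnt >= n_confirm:
            return "open-circuit"
        if short_cnt >= n_confirm:
            return "short-circuit"
    return "healthy"

gate = ([True] * 3 + [False] * 3) * 3
healthy_di = [1 if g else -1 for g in gate]   # slope follows the command
open_di = [-1] * len(gate)                     # switch never conducts
short_di = [1] * len(gate)                     # input stays shorted through L
print(classify_switch_fault(gate, healthy_di))
print(classify_switch_fault(gate, open_di))
print(classify_switch_fault(gate, short_di))
```

Confirming over a few consecutive samples keeps the detection inside one switching period while rejecting single-sample measurement glitches.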

  13. Selection of test paths for solder joint intermittent connection faults under DC stimulus

    NASA Astrophysics Data System (ADS)

    Huakang, Li; Kehong, Lv; Jing, Qiu; Guanjun, Liu; Bailiang, Chen

    2018-06-01

    The selection of test paths for solder joint intermittent connection faults under direct-current stimulus is examined in this paper. According to the physical structure of the circuit, a network model is established first, in which network nodes represent test nodes and each path edge carries the number of intermittent connection faults in the path. Then, selection criteria for the test paths based on a node degree index are proposed, so that the solder joint intermittent connection faults are covered using fewer test paths. Finally, three circuits are selected to verify the method; to test whether an intermittent fault is covered by the test paths, the fault is simulated by a switch. The results show that the proposed method can detect the solder joint intermittent connection faults using fewer test paths, and the number of detection steps is greatly reduced without compromising fault coverage.
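    At toy scale the path-selection task reduces to a set cover; the circuit graph below is invented, and a plain greedy cover stands in for the paper's node-degree criteria.

```python
# Candidate test paths and the solder joints (intermittent fault sites) each exercises.
paths = {
    "P1": {"J1", "J2", "J3"},
    "P2": {"J3", "J4"},
    "P3": {"J4", "J5", "J6"},
    "P4": {"J2", "J5"},
}
to_cover = set().union(*paths.values())

selected = []
uncovered = set(to_cover)
while uncovered:
    # Greedy: pick the path covering the most still-uncovered joints.
    best = max(paths, key=lambda p: len(paths[p] & uncovered))
    selected.append(best)
    uncovered -= paths[best]
print(selected)
```

Here two of the four candidate paths suffice to cover all six joints, mirroring the paper's goal of full fault coverage with fewer test paths.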

  14. Adaptive Modulation for DFIG and STATCOM With High-Voltage Direct Current Transmission.

    PubMed

    Tang, Yufei; He, Haibo; Ni, Zhen; Wen, Jinyu; Huang, Tingwen

    2016-08-01

    This paper develops an adaptive modulation approach for power system control based on the approximate/adaptive dynamic programming method, namely, goal representation heuristic dynamic programming (GrHDP). In particular, we focus on the fault recovery problem of a doubly fed induction generator (DFIG)-based wind farm and a static synchronous compensator (STATCOM) with high-voltage direct current (HVDC) transmission. In this design, the online GrHDP-based controller provides three adaptive supplementary control signals to the DFIG controller, STATCOM controller, and HVDC rectifier controller, respectively. The mechanism is to observe the system states and their derivatives and then provide supplementary control to the plant according to the utility function. With the GrHDP design, the controller can adaptively develop an internal goal representation signal according to the observed power system states, thereby achieving more effective learning and modulation. Our control approach is validated on a wind power integrated benchmark system with two areas connected by HVDC transmission lines. Compared with classical direct HDP and proportional-integral control, our GrHDP approach demonstrates improved transient stability under system faults. Moreover, experiments under different system operating conditions with signal transmission delays are also carried out to further verify the effectiveness and robustness of the proposed approach.

  15. State reference design and saturated control of doubly-fed induction generators under voltage dips

    NASA Astrophysics Data System (ADS)

    Tilli, Andrea; Conficoni, Christian; Hashemi, Ahmad

    2017-04-01

In this paper, the problem of controlling the stator/rotor currents of a doubly-fed induction generator under faulty line voltage is addressed. Common grid faults cause a steep decline in the line voltage profile, commonly denoted as a voltage dip. This condition is critical for such machines, whose stator windings are directly connected to the grid. In this respect, solid methodological arguments from nonlinear control theory are exploited and applied to design a novel controller, whose main goal is to improve the system behaviour during voltage dips, endowing it with low-voltage ride-through capability, a fundamental feature required by modern Grid Codes. The proposed solution exploits both feedforward and feedback actions. The feedforward part relies on suitable reference trajectories for the system internal dynamics, which are designed to prevent large oscillations in the rotor currents and command voltages excited by line perturbations. The feedback part uses state measurements and is designed according to Linear Matrix Inequality (LMI) based saturated control techniques to further reduce oscillations, while explicitly accounting for the system constraints. Numerical simulations verify the benefits of the internal dynamics trajectory planning and the saturated state feedback action in improving the doubly-fed induction machine response under severe grid faults.

  16. Thin-skinned tectonics of upper Ojai Valley and Sulfur Mountain vicinity, Ventura basin, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huftile, G.J.

    1988-03-01

The Upper Ojai Valley is a tectonic depression between opposing reverse faults. The active, north-dipping San Cayetano fault forms its northern border and has 5.8 km of dip-slip displacement at the Silverthread oil field and 2.6 km of displacement west of Sisar Creek. The fault dies out farther west in Ojai Valley. The southern border is formed by the late Quaternary Sisar-Big-Canyon-Lion fault set, which dips south and merges into a decollement within the south-dipping, ductile Rincon Formation. Folds with north-dipping fold axes, including the Lion Mountain anticline and Reeves syncline, are probably Pliocene. During the late Quaternary, the Sulfur Mountain anticlinorium began forming as a fault-propagation fold, followed closely by the ramping of the south-dipping faults to the surface. One, the Lion fault, cuts the Pleistocene Saugus Formation. To the east, the San Cayetano fault overrides and folds the south-dipping faults. Cross-section balancing shows that the Miocene and younger rocks above the decollement are shortened 6.1 km more than the more competent rocks below. A solution to this bed-length problem is that the decollement becomes a ramp and merges at depth with the steeply south-dipping Oak Ridge fault. This implies that the Sisar, Big Canyon, and Lion faults are frontal thrusts to the Oak Ridge fault. Oil is produced primarily from Mohnian sands and shales north of the Big Canyon fault and from fractured Mohnian shale beneath the Sisar fault.

  17. Need for a denser geodetic network to get real constraints on the fault behavior along the Main Marmara Sea segments of the NAF, toward an optimized GPS network.

    NASA Astrophysics Data System (ADS)

    Klein, E.; Masson, F.; Duputel, Z.; Yavasoglu, H.; Agram, P. S.

    2016-12-01

Over the last two decades, the densification of GPS networks and the development of new radar satellites have offered an unprecedented opportunity to study crustal deformation due to faulting. Yet submarine strike-slip fault segments remain a major issue, especially where the landscape is unfavorable to SAR measurements. This is the case for the North Anatolian fault segments located in the Main Marmara Sea, which have remained unbroken since the Mw 7.4 Izmit earthquake of 1999, which ended an eastward-migrating seismic sequence of Mw > 7 earthquakes. As these segments lie directly offshore Istanbul, evaluation of the seismic hazard is critical. But a strong controversy remains over whether these segments are accumulating strain and are likely to experience a major earthquake, or are creeping; this controversy results both from the simplicity of current geodetic models and from the scarcity of geodetic data. We show that 2D infinite fault models cannot account for the complexity of the Marmara fault segments, but current geodetic data in the region west of Istanbul are also insufficient to invert for the coupling using a 3D fault geometry. We therefore implement a global optimization procedure aiming to identify the most favorable distribution of GPS stations to explore the strain accumulation. We present here the results of this procedure, which determines both the optimal number and the locations of the new stations. We show that a denser terrestrial survey network can indeed locally improve the resolution on the shallower part of the fault, even more efficiently with permanent stations. But data closer to the fault, obtainable only by submarine measurements, remain necessary to properly constrain the fault behavior and its potential along-strike coupling variations.
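The station-placement optimization described above can be illustrated with a greedy sketch: pick, one at a time, the candidate site whose addition most improves resolution on the fault patches. The distance-based resolution model, coordinates, and function names are hypothetical stand-ins for the study's actual procedure.

```python
# Greedy sketch of geodetic network design (hypothetical resolution model;
# the study's actual optimization and fault geometry are not reproduced).
import math

def resolution_gain(station, patch):
    """Toy sensitivity: nearer stations resolve a fault patch better."""
    d = math.dist(station, patch)
    return 1.0 / (1.0 + d * d)

def place_stations(candidates, patches, n_new):
    """Pick n_new sites that most improve total resolution on fault patches."""
    chosen, remaining = [], list(candidates)
    covered = [0.0] * len(patches)   # best resolution achieved per patch
    for _ in range(n_new):
        def score(site):
            # marginal improvement, favouring poorly resolved patches
            return sum(max(0.0, resolution_gain(site, p) - covered[i])
                       for i, p in enumerate(patches))
        best = max(remaining, key=score)
        remaining.remove(best)
        chosen.append(best)
        for i, p in enumerate(patches):
            covered[i] = max(covered[i], resolution_gain(best, p))
    return chosen

# Hypothetical candidate sites (km) along a coast at y = 10, with shallow
# fault patches on an offshore fault trace at y = 0.
candidates = [(x, 10.0) for x in range(0, 101, 10)]
patches = [(x, 0.0) for x in range(0, 101, 25)]
sites = place_stations(candidates, patches, n_new=3)
```

A real design study would score candidate networks by the model resolution matrix of a coupling inversion rather than by a distance proxy, but the greedy structure is the same.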

  18. Shallow Lunar Seismic Activity and the Current Stress State of the Moon

    NASA Technical Reports Server (NTRS)

    Watters, Thomas R.; Weber, Renee C.; Collins, Geoffrey C.; Johnson, Catherine L.

    2017-01-01

A vast, global network of more than 3200 lobate thrust fault scarps has been revealed in high resolution Lunar Reconnaissance Orbiter Camera (LROC) images. The fault scarps are very young, less than 50 Ma, based on their small scale and crisp appearance, crosscutting relations with small-diameter impact craters, and rates of infilling of associated small, shallow graben, and they may be actively forming today. The population of young thrust fault scarps provides a window into the recent stress state of the Moon and offers insight into the origin of global lunar stresses. The distribution of orientations of the fault scarps is non-random, inconsistent with isotropic stresses from late-stage global contraction as the sole source of stress. Modeling shows that tidal stresses contribute significantly to the current stress state of the lunar crust. Tidal stresses (orbital recession and diurnal tides) superimposed on stresses from global contraction result in non-isotropic compressional stress and may produce thrust faults consistent with lobate scarp orientations. At any particular point on the lunar surface, peak compressive stress will be reached at a certain time in the diurnal cycle. Coseismic slip events on currently active thrust faults are expected to be triggered when peak stresses are reached. Analysis of the timing of the 28 shallow moonquakes recorded by the Apollo seismic network shows that 19 indeed occur when the Moon is closer to apogee, while only 9 shallow events occur when the Moon is closer to perigee. Here we report efforts to refine the model for the current stress state of the Moon by investigating the contribution of polar wander. Progress on relocating the epicentral locations of the shallow moonquakes using an algorithm designed for sparse networks is also reported.

  19. Proceedings of Workshop XVI; The dynamic characteristics of faulting inferred from recordings of strong ground motion

    USGS Publications Warehouse

    Boatwright, John; Jacobson, Muriel L.

    1982-01-01

The strong ground motions radiated by earthquake faulting are controlled by the dynamic characteristics of the faulting process. Although this assertion seems self-evident, seismologists have only recently begun to derive and test quantitative relations between common measures of strong ground motion and the dynamic characteristics of faulting. Interest in this problem has increased dramatically in the past several years, however, resulting in a number of important advances. The research presented in this workshop is a significant part of this scientific development. Watching this development occur through the work of many scientists is exciting; being able to gather a number of these scientists together in one workshop is a remarkable opportunity.

  20. Fault detection in mechanical systems with friction phenomena: an online neural approximation approach.

    PubMed

    Papadimitropoulos, Adam; Rovithakis, George A; Parisini, Thomas

    2007-07-01

In this paper, the problem of fault detection in mechanical systems performing linear motion under the action of friction phenomena is addressed. The friction effects are modeled through the dynamic LuGre model. The proposed architecture is built upon an online neural network (NN) approximator, which requires only the system's position and velocity. The friction internal state is not assumed to be available for measurement. The neural fault detection methodology is analyzed with respect to its robustness and sensitivity properties. Rigorous fault detectability conditions and upper bounds on the detection time are also derived. Extensive simulation results showing the effectiveness of the proposed methodology are provided, including a real case study on an industrial actuator.
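The underlying idea of model-based fault detection can be sketched simply: run a fault-free model alongside the plant and declare a fault when the residual between them crosses a threshold. The sketch below is a minimal stand-in using viscous friction on a 1-D mass, not the paper's LuGre/NN scheme; all constants are assumed.

```python
# Minimal residual-based fault detector on a simulated 1-D mass with
# viscous friction (a stand-in for the LuGre + neural approximator scheme).
def simulate_and_detect(fault_time=5.0, dt=0.01, T=10.0, threshold=0.5):
    m, c = 1.0, 0.8            # mass, assumed viscous friction coefficient
    v = v_hat = 0.0            # true and model-predicted velocity
    t, detected_at = 0.0, None
    while t < T:
        u = 1.0                                  # constant drive force
        fault = 2.0 if t >= fault_time else 0.0  # injected actuator fault
        v += dt * (u + fault - c * v) / m        # plant (with fault)
        v_hat += dt * (u - c * v_hat) / m        # fault-free model
        if detected_at is None and abs(v - v_hat) > threshold:
            detected_at = t                      # residual crossed threshold
        t += dt
    return detected_at

t_detect = simulate_and_detect()
```

The detection delay depends on the fault size, the friction time constant, and the threshold; the paper derives rigorous bounds of exactly this kind for its NN-based residual.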

  1. Reliable High Performance Peta- and Exa-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G

    2012-04-02

As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) as well as the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems, both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact.
My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation. By synthesizing models of individual components into whole-system behavior models, my work is making it possible to automatically understand the behavior of arbitrary real-world systems and enable them to tolerate a wide range of system faults. My project follows a multi-pronged research strategy. Section II discusses my work on modeling the behavior of existing applications and systems: Section II.A discusses resilience in the face of soft faults, and Section II.B looks at techniques to tolerate performance faults. Finally, Section III presents an alternative approach that studies how a system should be designed from the ground up to make resilience natural and easy.

  2. Characteristics of newly found Quaternary fault, southern Korea, and its tectonic implication

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Kim, M. C.; Cheon, Y.; Ha, S.; Kang, H. C.; Choi, J. H.; Son, M.

    2017-12-01

This study introduces the detailed geometry and kinematics of a recently found Quaternary fault in southern Korea, named the Seooe Fault, and discusses its tectonic implications through a synthetic analysis with previous studies. The N-S striking Seooe Fault shows a top-to-the-east thrust geometry and cuts the Cretaceous Goseong Formation and overlying Quaternary deposits; its slip senses and associated minor folds in the hanging wall indicate an E-W compressional stress. The age of the lower part of the Quaternary deposits obtained by OSL dating indicates that the last movement of the fault occurred after 61 60 ka. The arcuate geometry of the main fault, showing an upward-decreasing dip angle, the reverse offset of the fault breccias, and reverse-sense indicators observed on neighboring N-S striking high-angle fractures indicate that this Quaternary fault was produced by the reactivation of a pre-existing fault under an E-W compressional stress field. Using the apparent vertical displacement of the fault and the attitudes of the cutting slope and the main fault surface, its minimum net displacement is calculated as 2.17 m. When this value is applied to an empirical equation relating maximum displacement to moment magnitude (Mw), the magnitude is estimated to reach about 6.7, assuming that this displacement was due to one seismic event. Most of the Quaternary faults in southern Korea are observed along major inherited fault zones, and their geometry and kinematics indicate that they were reactivated under an ENE-WSW or E-W compressional stress field, which is concordant with the characteristics of the Seooe Fault. In addition, focal mechanism solutions and geotechnical in-situ stress data in and around the Korean peninsula also support the current ENE-WSW or E-W regional compression.
On the basis of the regional stress trajectories in and around East Asia, the current stress field in the Korean peninsula is interpreted to have resulted from the combination of the westward shallow subduction of the Pacific Plate and the collision of the Indian and Eurasian continents, whereas the Philippine Sea Plate does not contribute to the crustal contraction because its high-angle subduction results in crustal extension of the back-arc region.
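The abstract does not name which empirical displacement-magnitude regression was used. One widely used candidate that reproduces the quoted value of about 6.7 is the Wells and Coppersmith (1994) maximum-displacement relation for reverse faults, sketched here under that assumption.

```python
import math

def moment_magnitude_from_max_displacement(md_m):
    """Wells & Coppersmith (1994) regression for reverse faults:
    Mw = 6.52 + 0.44 * log10(maximum displacement in metres).
    (An assumption: the abstract does not name the regression it used.)
    """
    return 6.52 + 0.44 * math.log10(md_m)

# Minimum net displacement reported for the Seooe Fault.
mw = moment_magnitude_from_max_displacement(2.17)
```

With 2.17 m of displacement this gives Mw of roughly 6.7, consistent with the abstract's single-event estimate.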

  3. Rotor Current Control of DFIG for Improving Fault Ride - Through Using a Novel Sliding Mode Control Approach

    NASA Astrophysics Data System (ADS)

    Cai, Guowei; Liu, Cheng; Yang, Deyou

    2013-11-01

The doubly fed induction generator (DFIG) has been recognized as the dominant technology used in wind power generation systems amid the rapid development of wind power. However, continued operation of a DFIG through a power system fault, or an inefficient fault ride-through (FRT) strategy, may cause destructive over-current in the rotor winding and a serious wind turbine generator tripping accident. A new rotor current control scheme in the rotor-side converter (RSC) is presented to enhance the FRT capability of grid-connected DFIGs. To cope with the strongly nonlinear nature of the DFIG and remain insensitive to variations in its parameters, a novel sliding mode controller was designed. The controller combines an extended state observer (ESO) with sliding-mode variable structure control theory. Simulations are carried out to verify the effectiveness of the proposed control approach under various types of grid disturbances. It is shown that the proposed controller provides better transient behaviour than classic proportional-integral control: it effectively reduces over-current in the RSC and suppresses the transient pulses of electromagnetic torque under power grid faults.
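The flavour of sliding-mode control used here can be illustrated on a toy scalar current loop: drive a sliding variable s to zero with a switching term whose gain exceeds the disturbance bound. The plant constants and gains below are assumptions, and the sketch omits the paper's extended state observer.

```python
# Scalar sliding-mode current controller on a toy first-order plant
# (illustrative only; not the paper's ESO-based rotor-current design).
def sgn(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def run_smc(T=1.0, dt=1e-4, i_ref=1.0):
    a, b = -5.0, 10.0              # assumed plant: di/dt = a*i + b*u + d(t)
    K = 3.0                        # switching gain > disturbance bound
    i, t = 0.0, 0.0
    while t < T:
        d = 1.0 if t > 0.5 else 0.0      # grid-fault-like disturbance
        s = i - i_ref                    # sliding surface
        u = (-a * i - K * sgn(s)) / b    # equivalent + switching control
        i += dt * (a * i + b * u + d)    # closed loop: di/dt = -K*sgn(s) + d
        t += dt
    return abs(i - i_ref)

final_err = run_smc()
```

Because the switching gain dominates the disturbance, the tracking error stays within a small chattering band even after the disturbance steps in at t = 0.5 s; this insensitivity to matched disturbances is the property the abstract exploits.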

  4. Research on resistance characteristics of YBCO tape under short-time DC large current impact

    NASA Astrophysics Data System (ADS)

    Zhang, Zhifeng; Yang, Jiabin; Qiu, Qingquan; Zhang, Guomin; Lin, Liangzhen

    2017-06-01

Research on the resistance characteristics of YBCO tape under short-time DC large-current impact is the foundation for developing a DC superconducting fault current limiter (SFCL) for voltage source converter-based high voltage direct current (VSC-HVDC) systems, one of the valid approaches to solving the problems of renewable energy integration. An SFCL can limit DC short-circuit currents and enhance the interrupting capabilities of DC circuit breakers. In this paper, the resistance features of bare YBCO tape under short-time DC large-current impacts are studied to find the resistance-temperature change rule and the maximum impact current. The influence of insulation on the resistance-temperature characteristics of YBCO tape is studied by comparison tests of bare tape and insulated tape at 77 K. The influence of operating temperature on the tape is also studied under subcooled liquid nitrogen conditions. For the current-impact security of YBCO tape, the critical current degradation and top temperature are analyzed and used as judgment criteria. The test results are helpful in developing SFCLs for VSC-HVDC.

  5. An improved low-voltage ride-through performance of DFIG based wind plant using stator dynamic composite fault current limiter.

    PubMed

    Gayen, P K; Chatterjee, D; Goswami, S K

    2016-05-01

In this paper, an enhanced low-voltage ride-through (LVRT) performance of a grid connected doubly fed induction generator (DFIG) is presented with the use of a stator dynamic composite fault current limiter (SDCFCL). This protection circuit comprises a suitable series resistor-inductor combination and a parallel bidirectional semiconductor switch. The SDCFCL provides a double benefit: reduction of the rotor induced open-circuit voltage, due to the increased value of the stator total inductance, and a concurrent increase of the rotor impedance. Both effects limit rotor-circuit over-current and over-voltage more securely than conventional schemes such as the dynamic rotor current limiter (RCL) during any type of fault. The proposed concept is validated through a simulation study of a grid integrated 2.0 MW DFIG. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Depth optimal sorting networks resistant to k passive faults

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piotrow, M.

In this paper, we study the problem of constructing a sorting network that is tolerant to faults and whose running time (i.e. depth) is as small as possible. We consider the scenario of worst-case comparator faults and follow the model of passive comparator failure proposed by Yao and Yao, in which a faulty comparator outputs directly its inputs without comparison. Our main result is the first construction of an N-input, k-fault-tolerant sorting network that is of an asymptotically optimal depth Θ(log N + k). That improves over the recent result of Leighton and Ma, whose network is of depth O(log N + k log log N / log k). Actually, we present a fault-tolerant correction network that can be added after any N-input sorting network to correct its output in the presence of at most k faulty comparators. Since the depth of the network is O(log N + k) and the constants hidden behind the "O" notation are not big, the construction can be of practical use. Developing the techniques necessary to show the main result, we construct a fault-tolerant network for the insertion problem. As a by-product, we get an N-input, O(log N)-depth INSERT-network that is tolerant to random faults, thereby answering a question posed by Ma in his PhD thesis. The results are based on a new notion of constant delay comparator networks, that is, networks in which each register is used (compared) only in a period of time of a constant length. Copies of such networks can be put one after another with only a constant increase in depth per copy.
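The Yao-Yao passive-failure model is easy to demonstrate: a faulty comparator simply forwards its inputs unchanged. The sketch below uses a small 4-input network; re-running the fault-free network after the faulty one stands in, for this single instance, for the dedicated O(log N + k) correction network the paper constructs.

```python
# Passive-fault comparator network demo (Yao-Yao model: a faulty
# comparator passes its inputs through without comparing them).
def run_network(network, values, faulty=frozenset()):
    v = list(values)
    for idx, (i, j) in enumerate(network):
        if idx in faulty:
            continue                    # passive fault: no comparison made
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

# 4-input sorting network (Batcher-style merge of two 2-element runs).
net = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
ok = run_network(net, [4, 3, 2, 1])                  # fault-free: sorted
bad = run_network(net, [4, 3, 2, 1], faulty={3})     # comparator (1,3) dead
# A fault-free correction pass (here simply re-running the network)
# repairs the nearly sorted output for this instance.
fixed = run_network(net, bad)
```

In the paper's setting the correction stage must itself survive up to k faulty comparators while adding only O(log N + k) depth, which is what makes the construction nontrivial.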

  7. Superconducting fault current limiter for railway transport

    NASA Astrophysics Data System (ADS)

    Fisher, L. M.; Alferov, D. F.; Akhmetgareev, M. R.; Budovskii, A. I.; Evsin, D. V.; Voloshin, I. F.; Kalinov, A. V.

    2015-12-01

A resistive switching superconducting fault current limiter (SFCL) for DC networks with a voltage of 3.5 kV and a nominal current of 2 kA has been developed. The SFCL consists of two series-connected units: a block of superconducting modules and a high-speed vacuum breaker with a total disconnection time of not more than 8 ms. The results of laboratory tests of the superconducting SFCL modules in current-limiting mode are presented. The recovery time of superconductivity is experimentally determined. The possibility of applying the SFCL at traction substations of Russian Railways is considered.

  8. Long-term changes to river regimes prior to late Holocene coseismic faulting, Canterbury, New Zealand

    NASA Astrophysics Data System (ADS)

    Campbell, Jocelyn K.; Nicol, Andrew; Howard, Matthew E.

    2003-09-01

Two sites are described from range front faults along the foothills of the Southern Alps of New Zealand, where apparently a period of 200-300 years of accelerated river incision preceded late Holocene coseismic ruptures, each probably in excess of Mw 7.5. They relate to separate fault segments and seismic events on a transpressive system associated with fault-driven folding, but both show similar evidence of off-plane aseismic deformation during the downcutting phase. The incision history is documented by the ages, relative elevations and profiles of degradation terraces. The surface dating is largely based on the weathering rind technique of McSaveney (McSaveney, M.J., 1992. A Manual for Weathering-rind Dating of Grey Sandstones of the Torlesse Supergroup, New Zealand. 92/4, Institute of Geological and Nuclear Sciences), supported by some consistent radiocarbon ages. On the Porters Pass Fault, drainage from Red Lakes has incised up to 12 m into late Pleistocene recessional outwash, but the oldest degradation terrace surface T I is dated at only 690±50 years BP. The upper terraces T I and T II converge uniformly downstream right across the fault trace, but by T III the terrace has a reversed gradient upstream. T II and T III break into multiple small terraces on the hanging wall only, close to the fault trace. Continued backtilting during incision caused T IV to diverge downstream relative to the older surfaces. Coseismic faulting displaced T V and all the older terraces by a metre-high reverse scarp and an uncertain right-lateral component. This event cannot be younger than a nearby ca. 500-year-old rock avalanche covering the trace. The second site in the middle reaches of the Waipara River valley involves the interaction of four faults associated with the Doctors Anticline. The main river and tributaries have incised steeply into a 2000-year-old mid-Holocene, broad degradation surface, downcutting as much as 55 m.
Beginning approximately 600 years ago, accelerating incision eventually attained rates in excess of 100 mm/year in those reaches closely associated with the Doctors Anticline and related thrust and transfer faults. All four faults ruptured, either synchronously or sequentially, between 250 and 400 years ago, when the river was close to 8 m above its present bed. Better cross-method checks on dating would eliminate some uncertainties, but the apparent similarities suggest a pattern of precursor events initiated by a period of base level drop extending for several kilometres across the structure, presumably in response to general uplift. Over time, deformation is concentrated close to the fault zone, causing tilting of degradation terraces, and, demonstrably in the Waipara case at least, coseismic rupture is preceded by a marked acceleration of the downcutting rate. The overall base level drop is an order of magnitude greater than the throw on the eventual fault scarp. The Ostler Fault (Van Dissen et al., 1993) demonstrates that current deformation is taking place on similar thrust-fault-driven folds in the Southern Alps. Regular re-levelling since 1966 has shown uplift rates of 1.0-1.5 mm/year at the crest of a 1-2 km half-wavelength anticline, but this case also illustrates the general problem of interpreting the significance of rates derived from geophysical monitoring relative to the long-term seismic cycle. If the geomorphic signals described can be shown to hold for other examples, then criteria for targeting faults approaching the end of the seismic cycle in some tectonic settings may be possible.

  9. Arc fault detection system

    DOEpatents

    Jha, Kamal N.

    1999-01-01

An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between the current flowing from secondary windings located in a current transformer coupled to the power supply side of a switchboard and the total current induced in secondary windings coupled to the load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and the power supply to remove the supply of power to the switchboard.
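The relay's trip condition reduces to comparing the supply-side current with the summed load-side currents: a sustained mismatch means current is escaping through an arc. A minimal sketch, with a pickup threshold that is illustrative rather than taken from the patent:

```python
# Differential-current trip logic sketched from the description above
# (the 5 A pickup setting is illustrative, not from the patent).
def differential_trip(supply_current_a, load_currents_a, pickup_a=5.0):
    """Trip the upstream breaker when supply-side current and the total
    load-side current differ by more than the pickup setting."""
    diff = abs(supply_current_a - sum(load_currents_a))
    return diff > pickup_a

normal = differential_trip(100.0, [60.0, 40.0])   # balanced: no trip
arc = differential_trip(100.0, [60.0, 25.0])      # 15 A unaccounted: trip
```

A nonzero pickup is what keeps the relay from tripping on measurement error and normal transformer mismatch while still catching arc-level leakage.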

  10. Electrically heated particulate filter diagnostic systems and methods

    DOEpatents

    Gonze, Eugene V [Pinckney, MI

    2009-09-29

A system that diagnoses regeneration of an electrically heated particulate filter is provided. The system includes a grid module that diagnoses a fault of the grid based on a current signal, a voltage signal, or both. A diagnostic module sets a fault status, generates a warning signal, or both, based on the fault of the grid.

  11. Comparison between wavelet and wavelet packet transform features for classification of faults in distribution system

    NASA Astrophysics Data System (ADS)

    Arvind, Pratul

    2012-11-01

The ability to identify and classify all ten types of faults in a distribution system is an important task for protection engineers. Unlike transmission systems, distribution systems have a complex configuration and are subjected to frequent faults. In the present work, an algorithm has been developed for identifying all ten types of faults in a distribution system by collecting current samples at the substation end. The samples are subjected to the wavelet packet transform and an artificial neural network in order to yield better classification results. A comparison of results between the wavelet transform and the wavelet packet transform is also presented, showing that the features extracted by the wavelet packet transform yield more promising results. The current samples are collected after simulating a 25 kV distribution system in PSCAD software.
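The per-band energies that a wavelet packet transform supplies to a classifier can be sketched in pure Python with Haar filters. This is a simplified stand-in: the study's wavelet choice, decomposition level, and neural network are not reproduced, and the test signal below is invented.

```python
# Pure-Python Haar wavelet packet energy features (a simplified stand-in
# for the paper's wavelet packet + neural network classifier).
import math

def haar_split(x):
    """One level of the orthonormal Haar filter pair."""
    r2 = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / r2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / r2 for i in range(0, len(x), 2)]
    return approx, detail

def wavelet_packet_energies(x, level):
    """Decompose x into 2**level frequency bands; return energy per band.
    The energy vector is the feature fed to a classifier (e.g. an ANN)."""
    bands = [list(x)]
    for _ in range(level):
        nxt = []
        for b in bands:
            a, d = haar_split(b)
            nxt.extend([a, d])
        bands = nxt
    return [sum(v * v for v in b) for b in bands]

# Toy "fault current" sample: 64 points of a sine with a transient spike.
sig = [math.sin(2 * math.pi * 3 * n / 64) for n in range(64)]
sig[20] += 4.0
features = wavelet_packet_energies(sig, level=3)   # 8 band energies
```

Because the Haar pair is orthonormal, the band energies sum to the signal energy, so the feature vector redistributes, rather than loses, the fault transient's energy across frequency bands.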

  12. OBIST methodology incorporating modified sensitivity of pulses for active analogue filter components

    NASA Astrophysics Data System (ADS)

    Khade, R. H.; Chaudhari, D. S.

    2018-03-01

In this paper, an oscillation-based built-in self-test method is used to diagnose catastrophic and parametric faults in integrated circuits. Sallen-Key low-pass filter and high-pass filter circuits with different gains are used to investigate defects. Variations in seven parameters of the operational amplifier (OP-AMP), namely gain, input impedance, output impedance, slew rate, input bias current, input offset current and input offset voltage, as well as catastrophic and parametric defects in components outside the OP-AMP, are introduced in the circuit, and the simulation results are analysed. The oscillator output signal is converted to pulses which are used to generate a signature of the circuit. The signature and pulse count change with the type of fault present in the circuit under test (CUT). The change in oscillation frequency is observed for fault detection. The designer has the flexibility to predefine the tolerance band of the cut-off frequency and the range of pulses for which the circuit should be accepted. The fault coverage depends upon the required tolerance band of the CUT. We propose a modification of the sensitivity of the pulse parameter to avoid test escapes and enhance yield. Results show that the method provides 100% fault coverage for catastrophic faults.
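The designer-defined acceptance decision (oscillation frequency inside a tolerance band and pulse signature inside a predefined range) can be sketched as a simple go/no-go check; all numeric limits below are illustrative assumptions, not values from the paper.

```python
# Go/no-go decision sketch for an oscillation-based self-test
# (band limits and nominal frequency are illustrative assumptions).
def obist_pass(freq_hz, pulse_count,
               f_nominal=10_000.0, f_tol=0.05,
               pulse_band=(95, 105)):
    """Accept the CUT only if the oscillation frequency lies inside the
    designer-chosen tolerance band and the pulse signature is in range."""
    in_freq = abs(freq_hz - f_nominal) <= f_tol * f_nominal
    in_pulse = pulse_band[0] <= pulse_count <= pulse_band[1]
    return in_freq and in_pulse

good = obist_pass(10_200.0, 101)   # small drift: accept
bad = obist_pass(11_500.0, 101)    # parametric fault shifted frequency
```

Tightening `f_tol` or `pulse_band` raises fault coverage at the cost of rejecting more good circuits, which is exactly the coverage/yield trade-off the abstract discusses.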

  13. The Eastern California Shear Zone as the northward extension of the southern San Andreas Fault

    USGS Publications Warehouse

    Thatcher, Wayne R.; Savage, James C.; Simpson, Robert W.

    2016-01-01

Cluster analysis offers an agnostic way to organize and explore features of the current GPS velocity field without reference to geologic information or physical models, using only information contained in the velocity field itself. We have used cluster analysis of the Southern California Global Positioning System (GPS) velocity field to determine the partitioning of Pacific-North America relative motion onto major regional faults. Our results indicate that the large-scale kinematics of the region is best described with two boundaries of high velocity gradient, one centered on the Coachella section of the San Andreas Fault and the Eastern California Shear Zone and the other defined by the San Jacinto Fault south of Cajon Pass and the San Andreas Fault farther north. The ~120 km long strand of the San Andreas between Cajon Pass and Coachella Valley (often termed the San Bernardino and San Gorgonio sections) is thus currently of secondary importance and carries lesser amounts of slip over most or all of its length. We show these first-order results are present in maps of the smoothed GPS velocity field itself. They are also generally consistent with currently available, loosely bounded geologic and geodetic fault slip rate estimates that alone do not provide useful constraints on the large-scale partitioning we show here. Our analysis does not preclude the existence of smaller blocks and more block boundaries in Southern California. However, attempts to identify smaller blocks along and adjacent to the San Gorgonio section were not successful.
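A toy version of velocity-field clustering: a tiny two-cluster k-means on synthetic station velocities. The numbers and the deterministic initialization are illustrative; the actual analysis involves hundreds of stations and more careful clustering.

```python
# Tiny k-means sketch on synthetic GPS velocities in mm/yr (illustrative;
# not the study's clustering procedure or data).
def kmeans2(points, iters=20):
    c = [points[0], points[-1]]            # deterministic initial centroids
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, ck)) for ck in c]
            groups[d.index(min(d))].append(p)
        c = [tuple(sum(col) / len(g) for col in zip(*g)) for g in groups if g]
        if len(c) < 2:                     # degenerate split: stop early
            break
    return groups

# Stations west of the main velocity gradient move fast with the Pacific
# plate; stations east of it move slowly with North America (toy numbers).
west = [(30.0 + i, 20.0 + i) for i in range(5)]
east = [(2.0 + i, 1.0 + i) for i in range(5)]
groups = kmeans2(west + east)
```

The cluster boundary recovered this way is a velocity-gradient boundary, which is how the study identifies block boundaries without any geologic prior.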

  14. The Eastern California Shear Zone as the northward extension of the southern San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Thatcher, W.; Savage, J. C.; Simpson, R. W.

    2016-04-01

Cluster analysis offers an agnostic way to organize and explore features of the current GPS velocity field without reference to geologic information or physical models, using only information contained in the velocity field itself. We have used cluster analysis of the Southern California Global Positioning System (GPS) velocity field to determine the partitioning of Pacific-North America relative motion onto major regional faults. Our results indicate that the large-scale kinematics of the region is best described with two boundaries of high velocity gradient, one centered on the Coachella section of the San Andreas Fault and the Eastern California Shear Zone and the other defined by the San Jacinto Fault south of Cajon Pass and the San Andreas Fault farther north. The ~120 km long strand of the San Andreas between Cajon Pass and Coachella Valley (often termed the San Bernardino and San Gorgonio sections) is thus currently of secondary importance and carries lesser amounts of slip over most or all of its length. We show these first-order results are present in maps of the smoothed GPS velocity field itself. They are also generally consistent with currently available, loosely bounded geologic and geodetic fault slip rate estimates that alone do not provide useful constraints on the large-scale partitioning we show here. Our analysis does not preclude the existence of smaller blocks and more block boundaries in Southern California. However, attempts to identify smaller blocks along and adjacent to the San Gorgonio section were not successful.

  15. Geology of the Elephanta Island fault zone, western Indian rifted margin, and its significance for understanding the Panvel flexure

    NASA Astrophysics Data System (ADS)

    Samant, Hrishikesh; Pundalik, Ashwin; D'souza, Joseph; Sheth, Hetu; Lobo, Keegan Carmo; D'souza, Kyle; Patel, Vanit

    2017-02-01

    The Panvel flexure is a 150-km long tectonic structure, comprising prominently seaward-dipping Deccan flood basalts, on the western Indian rifted margin. Given the active tectonic faulting beneath the Panvel flexure zone inferred from microseismicity, better structural understanding of the region is needed. The geology of Elephanta Island in the Mumbai harbour, famous for the ca. mid-6th century A.D. Hindu rock-cut caves in Deccan basalt (a UNESCO World Heritage site) is poorly known. We describe a previously unreported but well-exposed fault zone on Elephanta Island, consisting of two large faults dipping steeply east-southeast and producing easterly downthrows. Well-developed slickensides and structural measurements indicate oblique slip on both faults. The Elephanta Island fault zone may be the northern extension of the Alibag-Uran fault zone previously described. This and two other known regional faults (Nhava-Sheva and Belpada faults) indicate a progressively eastward step-faulted structure of the Panvel flexure, with the important result that the individual movements were not simply downdip but also oblique-slip and locally even rotational (as at Uran). An interesting problem is the normal faulting, block tectonics and rifting of this region of the crust for which seismological data indicate a normal thickness (up to 41.3 km). A model of asymmetric rifting by simple shear may explain this observation and the consistently landward dips of the rifted margin faults.

  16. The Source Inversion Validation (SIV) Initiative: A Collaborative Study on Uncertainty Quantification in Earthquake Source Inversions

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Schorlemmer, D.; Page, M.

    2012-04-01

    Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i - iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state-of-the-art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.

  17. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles.

    PubMed

    Jeon, Namju; Lee, Hyeongcheol

    2016-12-12

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed.
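
    The residual logic shared by both diagnosis levels reduces to comparing a measurement against a model prediction and thresholding the difference. A minimal sketch follows; the constant-current model, threshold value, and stuck-at-zero fault are hypothetical stand-ins, not the paper's parity equations.

```python
def detect_sensor_fault(measurements, predictions, threshold):
    """Return one fault flag per sample: True when the residual
    (measured minus model-predicted value) exceeds the threshold."""
    return [abs(m - p) > threshold for m, p in zip(measurements, predictions)]

# Hypothetical current-sensor trace: the model predicts a constant 10 A;
# a stuck-at-zero sensor fault appears from sample 5 onward.
pred = [10.0] * 10
meas = [10.1, 9.9, 10.05, 9.95, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0]
flags = detect_sensor_fault(meas, pred, threshold=1.0)
```

    In the paper's scheme these per-sensor flags are the raw material that the integration strategy combines across the high- and low-level diagnoses.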

  18. A review on data-driven fault severity assessment in rolling bearings

    NASA Astrophysics Data System (ADS)

    Cerrada, Mariela; Sánchez, René-Vinicio; Li, Chuan; Pacheco, Fannia; Cabrera, Diego; Valente de Oliveira, José; Vásquez, Rafael E.

    2018-01-01

    Health condition monitoring of rotating machinery is a crucial task to guarantee reliability in industrial processes. In particular, bearings are mechanical components used in most rotating devices and they represent the main source of faults in such equipment, which is why research activities on detecting and diagnosing their faults have increased. Fault detection aims at identifying whether or not the device is in a fault condition, and diagnosis is commonly oriented towards identifying the fault mode of the device after detection. An important step after fault detection and diagnosis is the analysis of the magnitude or the degradation level of the fault, because this supports the decision-making process in condition-based maintenance. However, few extensive works are devoted to analysing this problem, and some works tackle it only from the fault diagnosis point of view. Roughly speaking, fault severity is associated with the magnitude of the fault. In bearings, fault severity can be related to the physical size of the fault or to a general degradation of the component. Because the literature regarding the severity assessment of bearing damage is limited, this paper discusses the recent methods and techniques used to evaluate fault severity in the main components of rolling bearings, such as the inner race, outer race, and ball. The review is mainly focused on data-driven approaches such as signal processing for extracting the proper fault signatures associated with the damage degradation, and learning approaches that are used to identify degradation patterns with regard to health conditions. Finally, new challenges are highlighted in order to develop new contributions in this field.

  19. The influence of fault geometry and frictional contact properties on slip surface behavior and off-fault damage: insights from quasi-static modeling of small strike-slip faults from the Sierra Nevada, CA

    NASA Astrophysics Data System (ADS)

    Ritz, E.; Pollard, D. D.

    2011-12-01

    Geological and geophysical investigations demonstrate that faults are geometrically complex structures, and that the nature and intensity of off-fault damage is spatially correlated with geometric irregularities of the slip surfaces. Geologic observations of exhumed meter-scale strike-slip faults in the Bear Creek drainage, central Sierra Nevada, CA, provide insight into the relationship between non-planar fault geometry and frictional slip at depth. We investigate natural fault geometries in an otherwise homogeneous and isotropic elastic material with a two-dimensional displacement discontinuity method (DDM). Although the DDM is a powerful tool, frictional contact problems are beyond the scope of the elementary implementation because it allows interpenetration of the crack surfaces. By incorporating a complementarity algorithm, we are able to enforce appropriate contact boundary conditions along the model faults and include variable friction and frictional strength. This tool allows us to model quasi-static slip on non-planar faults and the resulting deformation of the surrounding rock. Both field observations and numerical investigations indicate that sliding along geometrically discontinuous or irregular faults may lead to opening of the fault and the formation of new fractures, affecting permeability in the nearby rock mass and consequently impacting pore fluid pressure. Numerical simulations of natural fault geometries provide local stress fields that are correlated to the style and spatial distribution of off-fault damage. We also show how varying the friction and frictional strength along the model faults affects slip surface behavior and consequently influences the stress distributions in the adjacent material.

  20. Managing systems faults on the commercial flight deck: Analysis of pilots' organization and prioritization of fault management information

    NASA Technical Reports Server (NTRS)

    Rogers, William H.

    1993-01-01

    In rare instances, flight crews of commercial aircraft must manage complex systems faults in addition to all their normal flight tasks. Pilot errors in fault management have been attributed, at least in part, to an incomplete or inaccurate awareness of the fault situation. The current study is part of a program aimed at assuring that the types of information potentially available from an intelligent fault management aiding concept developed at NASA Langley called 'Faultfinder' (see Abbott, Schutte, Palmer, and Ricks, 1987) are an asset rather than a liability: additional information should improve pilot performance and aircraft safety, but it should not confuse, distract, overload, mislead, or generally exacerbate already difficult circumstances.

  1. Study of Stand-Alone Microgrid under Condition of Faults on Distribution Line

    NASA Astrophysics Data System (ADS)

    Malla, S. G.; Bhende, C. N.

    2014-10-01

    The behavior of a stand-alone microgrid is analyzed under the condition of faults on distribution feeders. Since the battery is not able to maintain the dc-link voltage within limits during a fault, a resistive dump load control is presented to do so. An inverter control is proposed to maintain balanced voltages at the point of common coupling (PCC) under unbalanced load conditions and to reduce the voltage unbalance factor (VUF) at load points. The proposed inverter control also has the facility to protect itself from high fault currents. The existing maximum power point tracker (MPPT) algorithm is modified to limit the speed of the generator during a fault. Extensive simulation results using MATLAB/SIMULINK establish that the performance of the controllers is quite satisfactory under different fault conditions as well as unbalanced load conditions.

  2. Index of faults of Cretaceous and Cenozoic age in the eastern United States

    USGS Publications Warehouse

    Prowell, David C.

    1983-01-01

    The data in this report represent the presently available knowledge of fault characteristics and distribution. Clearly, as current investigations progress and as geologists become more aware of the evidence for Cenozoic faulting, the number of known Cenozoic faults will increase substantially. Until such time, the data that are shown here must be viewed conservatively because I believe they are not a totally representative collection of information at this scale. The data are useful in characterizing basic fault patterns in the region, but certain factors limit the usefulness of the map. Limitations of this information are discussed in the following text, and the reader should give them major consideration when using the map and fault table.

  3. The detection error of thermal test low-frequency cable based on M sequence correlation algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Dongliang; Ge, Zheyang; Tong, Xin; Du, Chunlin

    2018-04-01

    The problem of low accuracy and low efficiency in off-line detection of thermal test low-frequency cable faults can be addressed by designing a cable fault detection system based on an FPGA that exports an M-sequence code (linear feedback shift register sequence) as the pulse signal source. The design principle of the SSTDR (spread spectrum time-domain reflectometry) reflection method and the hardware of the on-line monitoring setup are discussed in this paper. Test data show that the detection error increases with the fault location along the thermal test low-frequency cable.
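
    The core of SSTDR is correlating the received waveform with the injected M-sequence: the lag of the correlation peak locates the reflection. A self-contained sketch with a 7-stage LFSR and an idealized, noiseless echo; the tap choice, delay, and reflection coefficient are illustrative, not the paper's hardware parameters.

```python
def lfsr_msequence(taps, nbits):
    """Maximal-length (M-)sequence from a Fibonacci LFSR, as +/-1 chips.
    With primitive feedback taps the period is 2**nbits - 1."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(1 if state[-1] else -1)  # output the oldest bit
        fb = 0
        for t in taps:                      # XOR the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]           # shift, feedback in front
    return seq

def circular_xcorr(ref, sig):
    """Circular cross-correlation of two equal-length sequences."""
    n = len(ref)
    return [sum(ref[i] * sig[(i + lag) % n] for i in range(n))
            for lag in range(n)]

# Taps (7, 6) give a full-period 127-chip M-sequence -- a standard choice.
probe = lfsr_msequence(taps=(7, 6), nbits=7)
delay = 23                                  # simulated round-trip delay, in chips
echo = [0.4 * probe[(i - delay) % len(probe)] for i in range(len(probe))]
corr = circular_xcorr(probe, echo)
est = max(range(len(corr)), key=corr.__getitem__)  # peak lag = estimated delay
```

    Multiplying the chip delay by the propagation velocity and chip duration (and halving for the round trip) would convert `est` into a distance to the fault.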

  4. Updating the USGS seismic hazard maps for Alaska

    USGS Publications Warehouse

    Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.

    2015-01-01

    The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.

  5. Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme

    PubMed Central

    Li, Shanbin; Sauter, Dominique; Xu, Bugong

    2011-01-01

    In this paper, the sensor data is transmitted only when the absolute value of the difference between the current sensor value and the previously transmitted one is greater than the given threshold value. Based on this send-on-delta scheme, which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves the resource utilization with graceful fault estimation performance degradation. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
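
    The send-on-delta rule itself is tiny: a sample is transmitted only when it differs from the last transmitted value by more than the threshold. A minimal sketch (trace values and the delta are made up for illustration):

```python
def send_on_delta(samples, delta):
    """Return the (index, value) pairs a send-on-delta sensor would
    transmit: the first sample, then any sample that moved more than
    `delta` from the previously transmitted value."""
    sent = []
    last = None
    for k, x in enumerate(samples):
        if last is None or abs(x - last) > delta:
            sent.append((k, x))
            last = x
    return sent

trace = [0.0, 0.05, 0.4, 0.45, 1.0, 1.02, 1.01, 0.2]
events = send_on_delta(trace, delta=0.3)
```

    Here 8 samples produce only 4 transmissions, which is exactly the resource saving the abstract describes; the filter on the receiving side must then cope with the irregular arrival times.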

  6. Experimental evaluation of the certification-trail method

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.

    1993-01-01

    Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. A comprehensive attempt to assess experimentally the performance and overall value of the method is reported. The method is applied to algorithms for the following problems: Huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which an approach using certification trails allows for significantly faster overall program execution time than a basic time-redundancy approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed. In both cases, the algorithm which performs answer-validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis which enable comparison between the certification-trail method and the time-redundancy approach were presented. The analysis reveals some substantial and sometimes surprising advantages for the certification-trail method. Finally, the work our group performed on the design and implementation of fault injection testbeds for experimental analysis of the certification trail technique is discussed. This work employs two distinct methodologies: software fault injection (modification of instruction, data, and stack segments of programs on a Sun Sparcstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed at logical zero/one values). Our results indicate the viability of the certification trail technique. It is also believed that the tools developed provide a solid base for additional exploration.
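
    The certification-trail idea can be illustrated with sorting: the first execution emits the answer plus extra data (here, the sorting permutation) that lets a much simpler, faster checker validate the answer instead of recomputing it. This sketch shows the general flavor of the technique, not the paper's specific implementations.

```python
def sort_with_trail(xs):
    """Sort and also emit a certification trail: the permutation that
    maps input positions to output positions."""
    order = sorted(range(len(xs)), key=xs.__getitem__)
    return [xs[i] for i in order], order

def validate(xs, ys, trail):
    """Linear-time check of the certified answer: ys must be ordered
    and must be exactly xs rearranged according to `trail`."""
    if len(trail) != len(xs) or len(ys) != len(xs):
        return False
    if any(a > b for a, b in zip(ys, ys[1:])):
        return False                   # claimed output is not ordered
    used = [False] * len(xs)
    for pos, i in enumerate(trail):
        if used[i] or ys[pos] != xs[i]:
            return False               # trail is not a permutation of the input
        used[i] = True
    return True

xs = [3, 1, 2]
ys, trail = sort_with_trail(xs)
ok = validate(xs, ys, trail)
```

    A transient fault that corrupts the output (or the trail) is caught by the validator, which runs in O(n) versus the O(n log n) of re-sorting; that asymmetry is the source of the speedup over plain time redundancy.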

  7. Research on fault characteristics about switching component failures for distribution electronic power transformers

    NASA Astrophysics Data System (ADS)

    Sang, Z. X.; Huang, J. Q.; Yan, J.; Du, Z.; Xu, Q. S.; Lei, H.; Zhou, S. X.; Wang, S. C.

    2017-11-01

    Protection is an essential part of power devices, especially those in the power grid, as failures may cause great losses to society. A study of voltage and current abnormalities in the power electronic devices of the Distribution Electronic Power Transformer (D-EPT) during switching-component failures is presented, along with the operational principles of the 10 kV rectifier, 10 kV/400 V DC-DC converter, and 400 V inverter in the D-EPT. Based on a discussion of the effects of voltage and current distortion, the fault characteristics and a fault diagnosis method for the D-EPT are introduced.

  8. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning.

    PubMed

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-06-17

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-valued Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was obtained using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults.
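
    The per-domain statistical feature sets mentioned above are ordinary summary statistics of the signal. A sketch of a few common time-domain features follows; the synthetic signals and the feature choice are illustrative, and the paper's full feature set and GDBM learning stage are not reproduced.

```python
import math

def time_features(x):
    """A few standard time-domain statistics used as bearing/gearbox
    fault signatures: mean, standard deviation, RMS, kurtosis, crest factor."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    rms = math.sqrt(sum(v * v for v in x) / n)
    kurt = (sum((v - mean) ** 4 for v in x) / n) / var ** 2
    crest = max(abs(v) for v in x) / rms
    return {"mean": mean, "std": std, "rms": rms,
            "kurtosis": kurt, "crest": crest}

# An impulsive (fault-like) signal has much higher kurtosis than a smooth one:
# periodic impacts from a bearing defect superposed on a weak sinusoid.
smooth = [math.sin(2 * math.pi * k / 32) for k in range(256)]
impulsive = [0.05 * s + (2.0 if k % 64 == 0 else 0.0)
             for k, s in enumerate(smooth)]
f_smooth, f_imp = time_features(smooth), time_features(impulsive)
```

    A pure sinusoid has kurtosis 1.5; impact trains push it far higher, which is why kurtosis-type statistics are a standard input to the learning stage.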

  9. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning

    PubMed Central

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-01-01

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-valued Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was obtained using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults. PMID:27322273

  10. Effect of off-fault low-velocity elastic inclusions on supershear rupture dynamics

    NASA Astrophysics Data System (ADS)

    Ma, Xiao; Elbanna, A. E.

    2015-10-01

    Heterogeneous velocity structures are expected to affect fault rupture dynamics. To quantitatively evaluate some of these effects, we examine a model of dynamic rupture on a frictional fault embedded in an elastic full space, governed by plane strain elasticity, with a pair of off-fault inclusions that have a lower rigidity than the background medium. We solve the elastodynamic problem using the Finite Element software Pylith. The fault operates under linear slip-weakening friction law. We initiate the rupture by artificially overstressing a localized region near the left edge of the fault. We primarily consider embedded soft inclusions with 20 per cent reduction in both the pressure wave and shear wave speeds. The embedded inclusions are placed at different distances from the fault surface and have different sizes. We show that the existence of a soft inclusion may significantly shorten the transition length to supershear propagation through the Burridge-Andrews mechanism. We also observe that supershear rupture is generated at pre-stress values that are lower than what is theoretically predicted for a homogeneous medium. We discuss the implications of our results for dynamic rupture propagation in complex velocity structures as well as supershear propagation on understressed faults.
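
    The linear slip-weakening friction law used on the fault has a compact closed form: strength falls linearly with accumulated slip until a critical slip distance is reached. A sketch with placeholder stress levels and critical distance (the paper's actual parameters are not given in the abstract):

```python
def slip_weakening_strength(slip, tau_s, tau_d, d_c):
    """Linear slip-weakening friction: fault strength drops linearly from
    the static level tau_s to the dynamic level tau_d over the critical
    slip distance d_c, then stays at tau_d."""
    if slip >= d_c:
        return tau_d
    return tau_s - (tau_s - tau_d) * (slip / d_c)

# Placeholder values: 10 MPa static strength, 4 MPa dynamic, d_c = 0.4 m.
tau = [slip_weakening_strength(s, tau_s=10.0, tau_d=4.0, d_c=0.4)
       for s in (0.0, 0.2, 1.0)]
```

    The stress drop (tau_s minus tau_d) released over d_c sets the fracture energy, which is what controls the supershear transition length the abstract discusses.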

  11. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
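
    A basic fault tree of the kind developed in the paper is just AND/OR gates over basic-event states. A minimal evaluator sketch; the robot-arm events and gate structure below are invented for illustration.

```python
def eval_gate(tree, node, basic):
    """Evaluate a fault tree given the truth values of the basic events.
    `tree` maps a gate name to ("AND" | "OR", [children]); leaf names are
    looked up directly in `basic`."""
    if node in basic:
        return basic[node]
    op, children = tree[node]
    vals = [eval_gate(tree, c, basic) for c in children]
    return all(vals) if op == "AND" else any(vals)

# Toy robot-arm tree: the arm fails if the joint fails, and the joint
# fails when the motor fails OR both redundant encoder channels fail.
tree = {
    "arm_failure": ("OR", ["joint_failure"]),
    "joint_failure": ("OR", ["motor_fault", "encoder_loss"]),
    "encoder_loss": ("AND", ["channel_a_fault", "channel_b_fault"]),
}
state = {"motor_fault": False, "channel_a_fault": True, "channel_b_fault": False}
top = eval_gate(tree, "arm_failure", state)          # one encoder channel down
state["channel_b_fault"] = True
top_both = eval_gate(tree, "arm_failure", state)     # both channels down
```

    Walking the tree this way also supports the resiliency analysis the abstract mentions: injecting a basic-event failure and checking whether the top event trips.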

  12. Sliding mode based fault detection, reconstruction and fault tolerant control scheme for motor systems.

    PubMed

    Mekki, Hemza; Benzineb, Omar; Boukhetala, Djamel; Tadjine, Mohamed; Benbouzid, Mohamed

    2015-07-01

    The fault-tolerant control problem belongs to the domain of complex control systems in which inter-control-disciplinary information and expertise are required. This paper proposes an improved fault detection, reconstruction and fault-tolerant control (FTC) scheme for motor systems (MS) with typical faults. For this purpose, a sliding mode controller (SMC) with an integral sliding surface is adopted. This controller can make the system output track the desired position reference signal in finite time and obtain a better dynamic response and anti-disturbance performance. However, this controller cannot deal directly with total system failures. An appropriate combination of the adopted SMC and a sliding mode observer (SMO) is therefore designed to detect and reconstruct the faults on-line, and also to give a sensorless control strategy which can achieve tolerance to a wide class of total additive failures. The closed-loop stability is proved using the Lyapunov stability theory. Simulation results in healthy and faulty conditions confirm the reliability of the suggested framework. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
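
    The switching idea behind SMC can be shown on a toy first-order plant with a matched constant disturbance: with an integral sliding surface and a switching gain exceeding the disturbance bound, the output reaches the reference despite the unknown disturbance. This is a minimal sketch with illustrative gains and plant; the paper's motor model, observer, and FTC logic are not reproduced.

```python
def sgn(v):
    return (v > 0) - (v < 0)

def simulate_smc(r=1.0, lam=2.0, k=1.0, d=0.5, dt=1e-3, steps=10_000):
    """Sliding mode control of the toy plant xdot = u + d, with d unknown
    to the controller. Integral sliding surface s = e + lam * integral(e);
    the switching gain k must exceed the disturbance bound |d|."""
    x, ie = 0.0, 0.0
    for _ in range(steps):
        e = x - r                      # tracking error
        ie += e * dt                   # integral of the error
        s = e + lam * ie               # integral sliding surface
        u = -lam * e - k * sgn(s)      # equivalent + switching control
        x += (u + d) * dt              # forward-Euler plant update
    return x

x_final = simulate_smc()
```

    With k = 1 > |d| = 0.5 the surface is reached in finite time, after which the error decays at rate lam; the residual error is only the discretization chattering band, of order k * dt.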

  13. New Fault Recognition Method for Rotary Machinery Based on Information Entropy and a Probabilistic Neural Network.

    PubMed

    Jiang, Quansheng; Shen, Yehu; Li, Hua; Xu, Fengyu

    2018-01-24

    Feature recognition and fault diagnosis play an important role in equipment safety and the stable operation of rotating machinery. In order to cope with the complexity of the vibration signals of rotating machinery, a feature fusion model based on information entropy and a probabilistic neural network is proposed in this paper. The new method first uses information entropy theory to extract three kinds of characteristic entropy from vibration signals, namely singular spectrum entropy, power spectrum entropy, and approximate entropy. Then the feature fusion model is constructed to classify and diagnose the fault signals. The proposed approach can combine comprehensive information from different aspects and is more sensitive to the fault features. Experimental results on simulated fault signals verified the better performance of the proposed approach. On real two-span rotor data, the fault detection accuracy of the new method is more than 10% higher compared with methods using the three kinds of information entropy separately. The new approach is proved to be an effective fault recognition method for rotating machinery.
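
    Of the three entropies, power spectrum entropy is the easiest to illustrate: the Shannon entropy of the normalized DFT power spectrum, low for a narrowband (periodic) signal and high for broadband content. A sketch using a direct O(n^2) DFT on synthetic signals; singular spectrum entropy and approximate entropy are not shown.

```python
import cmath
import math
import random

def power_spectrum_entropy(x):
    """Shannon entropy (nats) of the normalized DFT power spectrum:
    near zero for a pure tone (energy in one bin), large for noise."""
    n = len(x)
    power = []
    for k in range(n // 2):  # direct DFT, positive frequencies only
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power.append(abs(s) ** 2)
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    return -sum(p * math.log(p) for p in probs)

rng = random.Random(0)
tone = [math.sin(2 * math.pi * 8 * t / 128) for t in range(128)]   # narrowband
noise = [rng.gauss(0, 1) for _ in range(128)]                      # broadband
h_tone, h_noise = power_spectrum_entropy(tone), power_spectrum_entropy(noise)
```

    Faults that spread vibration energy across frequencies (impacts, rubs) therefore raise the power spectrum entropy relative to the healthy, nearly periodic signature.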

  14. An information transfer based novel framework for fault root cause tracing of complex electromechanical systems in the processing industry

    NASA Astrophysics Data System (ADS)

    Wang, Rongxi; Gao, Xu; Gao, Jianmin; Gao, Zhiyong; Kang, Jiani

    2018-02-01

    As one of the most important approaches for analyzing the mechanism of fault pervasion, fault root cause tracing is a powerful and useful tool for detecting the fundamental causes of faults so as to prevent any further propagation and amplification. To address the problems arising from the lack of systematic and comprehensive integration, a novel information-transfer-based data-driven framework for fault root cause tracing of complex electromechanical systems in the processing industry is proposed, taking into consideration the experience and qualitative analysis behind conventional fault root cause tracing methods. Firstly, an improved symbolic transfer entropy method is presented to construct a directed, weighted information model for a specific complex electromechanical system based on the information flow. Secondly, considering the feedback mechanisms in complex electromechanical systems, a method for determining the threshold values of weights is developed to explore the disciplines of fault propagation. Lastly, an iterative method is introduced to identify the fault development process. The fault root cause is traced by analyzing the changes in information transfer between the nodes along the fault propagation pathway. An actual fault root cause tracing application on a complex electromechanical system is used to verify the effectiveness of the proposed framework. A unique fault root cause is obtained regardless of the choice of the initial variable. Thus, the proposed framework can be flexibly and effectively used in fault root cause tracing for complex electromechanical systems in the processing industry, and it forms the foundation of system vulnerability analysis and condition prediction, as well as other engineering applications.
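
    Transfer entropy, the quantity behind the symbolic variant used in the paper, measures how much one symbol stream reduces uncertainty about another stream's next symbol. Below is a plug-in estimator sketch on already-symbolized binary sequences; the coupling x[t+1] = y[t] is a toy example, and the paper's symbolization scheme and improvements are not reproduced.

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (in nats) of T_{Y->X}: how much knowing y_t
    reduces uncertainty about x_{t+1} beyond what x_t already gives."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))    # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))          # (x_t, y_t)
    pairs_xx = Counter(zip(x[1:], x[:-1]))           # (x_{t+1}, x_t)
    singles = Counter(x[:-1])                        # x_t
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_cond_joint = c / pairs_xy[(x0, y0)]        # p(x1 | x0, y0)
        p_cond_self = pairs_xx[(x1, x0)] / singles[x0]  # p(x1 | x0)
        te += (c / n) * math.log(p_cond_joint / p_cond_self)
    return te

rng = random.Random(7)
y = [rng.randint(0, 1) for _ in range(4000)]
x = [0] + y[:-1]                 # x copies y with one step of lag
t_yx = transfer_entropy(x, y)    # should be near ln 2: y drives x
t_xy = transfer_entropy(y, x)    # should be near 0: x does not drive y
```

    The asymmetry t_yx >> t_xy is what gives the information model its edge directions, and tracking how these values change during a fault is the basis of the root-cause tracing.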

  15. A grid-doubling finite-element technique for calculating dynamic three-dimensional spontaneous rupture on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael

    2009-01-01

    We present a new finite-element technique for calculating dynamic 3-D spontaneous rupture on an earthquake fault, which can reduce the required computational resources by a factor of six or more, without loss of accuracy. The grid-doubling technique employs small cells in a thin layer surrounding the fault. The remainder of the modelling volume is filled with larger cells, typically two or four times as large as the small cells. In the resulting non-conforming mesh, an interpolation method is used to join the thin layer of smaller cells to the volume of larger cells. Grid-doubling is effective because spontaneous rupture calculations typically require higher spatial resolution on and near the fault than elsewhere in the model volume. The technique can be applied to non-planar faults by morphing, or smoothly distorting, the entire mesh to produce the desired 3-D fault geometry. Using our FaultMod finite-element software, we have tested grid-doubling with both slip-weakening and rate-and-state friction laws, by running the SCEC/USGS 3-D dynamic rupture benchmark problems. We have also applied it to a model of the Hayward fault, Northern California, which uses realistic fault geometry and rock properties. FaultMod implements fault slip using common nodes, which represent motion common to both sides of the fault, and differential nodes, which represent motion of one side of the fault relative to the other side. We describe how to modify the traction-at-split-nodes method to work with common and differential nodes, using an implicit time stepping algorithm.

  16. Centrifugal compressor fault diagnosis based on qualitative simulation and thermal parameters

    NASA Astrophysics Data System (ADS)

    Lu, Yunsong; Wang, Fuli; Jia, Mingxing; Qi, Yuanchen

    2016-12-01

    This paper concerns fault diagnosis of centrifugal compressors based on thermal parameters. An improved qualitative simulation (QSIM) based fault diagnosis method is proposed to diagnose the faults of a centrifugal compressor in a gas-steam combined-cycle power plant (CCPP). The qualitative models under normal and two faulty conditions have been built through an analysis of the principle of the centrifugal compressor. To solve the problem of qualitatively describing the observations of system variables, a qualitative trend extraction algorithm is applied to extract the trends of the observations. For qualitative state matching, a sliding-window-based matching strategy is proposed, which consists of variable operating-range constraints and qualitative constraints. The matching results are used to determine which QSIM model is more consistent with the running state of the system. The correct diagnosis of two typical faults, seal leakage and a stuck valve, in the centrifugal compressor has validated the targeted performance of the proposed method, showing the advantage of the fault root information contained in the thermal parameters.
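
    The qualitative trend extraction step can be sketched as labeling fixed windows of a measured series with the sign of their slope, producing the +/0/- symbols that QSIM models reason over. The pressure trace, window size, and threshold below are invented for illustration; the paper's algorithm is more elaborate.

```python
def qualitative_trends(series, window=3, eps=0.05):
    """Label each non-overlapping window by the sign of its average slope:
    '+' rising, '-' falling, '0' roughly steady (|slope| <= eps)."""
    labels = []
    for i in range(0, len(series) - window + 1, window):
        seg = series[i:i + window]
        slope = (seg[-1] - seg[0]) / (window - 1)
        labels.append('+' if slope > eps else '-' if slope < -eps else '0')
    return labels

# Hypothetical discharge-pressure trace: ramp up, hold, then drop
# (the drop is the kind of signature a seal leak would leave).
trace = [1.0, 1.2, 1.4, 1.41, 1.42, 1.40, 1.1, 0.8, 0.5]
trend = qualitative_trends(trace)
```

    The resulting symbol string is then matched, window by window, against the qualitative behaviors predicted by each QSIM fault model.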

  17. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square-root PDF along the space direction, which yields a function of time only and can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  18. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square-root PDF along the space direction, which yields a function of time only and can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
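The residual construction above can be sketched numerically: the square-root PDF of the output is integrated against a weighting function along the space direction, giving a scalar function of time that can be compared with its nominal (fault-free) value. The Gaussian PDF, linear weight, and trapezoidal quadrature below are illustrative assumptions, not the paper's B-spline model.

```python
import math

def sqrt_pdf_gaussian(y, mu, sigma):
    """Square root of a Gaussian PDF evaluated at y."""
    pdf = math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return math.sqrt(pdf)

def weighted_mean_value(mu, sigma, weight=lambda y: y, lo=-5.0, hi=5.0, n=2000):
    """Trapezoidal integral of weight(y) * sqrt(pdf(y)) over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        y = lo + i * h
        f = weight(y) * sqrt_pdf_gaussian(y, mu, sigma)
        total += f if 0 < i < n else 0.5 * f
    return total * h

# Residual: deviation of the weighted mean value from its nominal value.
nominal = weighted_mean_value(mu=0.0, sigma=1.0)
faulty = weighted_mean_value(mu=0.5, sigma=1.0)   # a fault shifts the PDF
residual = faulty - nominal
```

Because the integral collapses the space dimension, the residual is a function of time only, which is what lets a classical nonlinear filter operate on it.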

  19. Adaptively Adjusted Event-Triggering Mechanism on Fault Detection for Networked Control Systems.

    PubMed

    Wang, Yu-Long; Lim, Cheng-Chew; Shi, Peng

    2016-12-08

    This paper studies the problem of adaptively adjusted event-triggering mechanism-based fault detection for a class of discrete-time networked control systems (NCSs), with applications to aircraft dynamics. By taking into account the fault occurrence detection progress and the fault occurrence probability, and by introducing an adaptively adjusted event-triggering parameter, a novel event-triggering mechanism is proposed to achieve efficient utilization of the communication network bandwidth. Both the sensor-to-control-station and the control-station-to-actuator network-induced delays are taken into account. The event-triggered sensor and the event-triggered control station are utilized simultaneously to establish new network-based closed-loop models for the NCS subject to faults. Based on the established models, the event-triggered simultaneous design of the fault detection filter (FDF) and controller is presented. A new algorithm for handling the adaptively adjusted event-triggering parameter is proposed. Performance analysis verifies the effectiveness of the adaptively adjusted event-triggering mechanism and of the simultaneous design of the FDF and controller.
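An illustrative sketch (not the paper's exact triggering law) of an adaptively adjusted event-triggering mechanism: a sample is transmitted over the network only when its deviation from the last transmitted sample exceeds a relative threshold, and the threshold parameter is adapted, e.g. tightened when a fault is suspected so that the fault detection filter receives data more often. The threshold values and adaptation factor are assumptions.

```python
def event_triggered_transmissions(samples, sigma0=0.01, fault_flags=None):
    """Return indices of transmitted samples under the trigger condition
    (x_k - x_last)^2 > sigma_k * x_k^2, with sigma_k reduced on suspected fault."""
    fault_flags = fault_flags or [False] * len(samples)
    sent = [0]                      # the first sample is always transmitted
    x_last = samples[0]
    for k in range(1, len(samples)):
        sigma_k = sigma0 * (0.1 if fault_flags[k] else 1.0)  # adaptive part
        if (samples[k] - x_last) ** 2 > sigma_k * samples[k] ** 2:
            sent.append(k)
            x_last = samples[k]     # threshold is relative to last sent value
    return sent

normal = event_triggered_transmissions([1.0, 1.05, 1.1, 1.0, 0.95, 1.02])
faulty = event_triggered_transmissions([1.0, 1.05, 1.1, 1.0, 0.95, 1.02],
                                       fault_flags=[True] * 6)
```

Under normal operation few samples are sent, saving bandwidth; once the fault flag tightens the threshold, nearly every sample is transmitted.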

  20. Multi-sensor information fusion method for vibration fault diagnosis of rolling bearing

    NASA Astrophysics Data System (ADS)

    Jiao, Jing; Yue, Jianhai; Pei, Di

    2017-10-01

    Bearings are key elements in high-speed electric multiple units (EMUs), and any bearing defect can cause serious malfunction of an EMU at high operating speed. This paper presents a new method for bearing fault diagnosis based on least squares support vector machine (LS-SVM) in feature-level fusion and Dempster-Shafer (D-S) evidence theory in decision-level fusion, which addresses the low detection accuracy, the difficulty of extracting sensitive features, and the instability of single-sensor diagnosis in rolling bearing fault diagnosis. A wavelet de-noising technique was used to remove signal noise. LS-SVM was used for pattern recognition of the bearing vibration signals, and fusion was then performed according to D-S evidence theory to recognize the bearing fault. The results indicated that the data fusion method significantly improved the performance of the intelligent approach in rolling bearing fault detection and efficiently improved the accuracy of fault diagnosis.
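The decision-level fusion step uses Dempster's rule of combination, which merges basic probability assignments (masses) from two sensors over a frame of discernment. The rule below is the standard one; the fault classes and mass values are illustrative assumptions.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb            # mass assigned to disjoint sets
    norm = 1.0 - conflict                      # renormalize away the conflict
    return {k: v / norm for k, v in combined.items()}

OK, INNER, OUTER = "ok", "inner_race", "outer_race"
# Two sensors both lean towards an inner-race fault, with some ambiguity
# (mass on the full frame {OK, INNER, OUTER} represents "don't know").
s1 = {frozenset([INNER]): 0.6, frozenset([OUTER]): 0.1,
      frozenset([OK, INNER, OUTER]): 0.3}
s2 = {frozenset([INNER]): 0.5, frozenset([OK]): 0.2,
      frozenset([OK, INNER, OUTER]): 0.3}
fused = dempster_combine(s1, s2)
belief_inner = fused[frozenset([INNER])]
```

Fusing the two sensors raises the belief in the inner-race fault above what either sensor reports alone, which is the stabilizing effect the paper attributes to multi-sensor fusion.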

  1. Kinematics, mechanics, and potential earthquake hazards for faults in Pottawatomie County, Kansas, USA

    USGS Publications Warehouse

    Ohlmacher, G.C.; Berendsen, P.

    2005-01-01

    Many stable continental regions have subregions with poorly defined earthquake hazards. Analysis of minor structures (folds and faults) in these subregions can improve our understanding of the tectonics and earthquake hazards. Detailed structural mapping in Pottawatomie County has revealed a suite consisting of two uplifted blocks aligned along a northeast trend and surrounded by faults. The first uplift is located southwest of the second. The northwest and southeast sides of these uplifts are bounded by northeast-trending right-lateral faults. To the east, both uplifts are bounded by north-trending reverse faults, and the first uplift is bounded by a north-trending high-angle fault to the west. The structural suite occurs above a basement fault that is part of a series of north-northeast-trending faults that delineate the Humboldt Fault Zone of eastern Kansas, an integral part of the Midcontinent Rift System. The favored kinematic model is a contractional stepover (push-up) between echelon strike-slip faults. Mechanical modeling using the boundary element method supports the interpretation of the uplifts as contractional stepovers and indicates that an approximately east-northeast maximum compressive stress trajectory is responsible for the formation of the structural suite. This stress trajectory suggests potential activity during the Laramide Orogeny, which agrees with the age of kimberlite emplacement in adjacent Riley County. The current stress field in Kansas has a N85°W maximum compressive stress trajectory that could potentially produce earthquakes along the basement faults. Several epicenters of seismic events (

  2. Transient tracking of low and high-order eccentricity-related components in induction motors via TFD tools

    NASA Astrophysics Data System (ADS)

    Climente-Alarcon, V.; Antonino-Daviu, J.; Riera-Guasp, M.; Pons-Llinares, J.; Roger-Folch, J.; Jover-Rodriguez, P.; Arkkio, A.

    2011-02-01

    The present work is focused on the diagnosis of mixed eccentricity faults in induction motors via the study of currents demanded by the machine. Unlike traditional methods, based on the analysis of stationary currents (Motor Current Signature Analysis (MCSA)), this work provides new findings regarding the diagnosis approach proposed by the authors in recent years, which is mainly focused on the fault diagnosis based on the analysis of transient quantities, such as startup or plug stopping currents (Transient Motor Current Signature Analysis (TMCSA)), using suitable time-frequency decomposition (TFD) tools. The main novelty of this work is to prove the usefulness of tracking the transient evolution of high-order eccentricity-related harmonics in order to diagnose the condition of the machine, complementing the information obtained with the low-order components, whose transient evolution was well characterised in previous works. Tracking of high-order eccentricity-related harmonics during the transient, through their associated patterns in the time-frequency plane, may significantly increase the reliability of the diagnosis, since the set of fault-related patterns arising after application of the corresponding TFD tool is very unlikely to be caused by other faults or phenomena. Although there are different TFD tools which could be suitable for the transient extraction of these harmonics, this paper makes use of a Wigner-Ville distribution (WVD)-based algorithm in order to carry out the time-frequency decomposition of the startup current signal, since this is a tool showing an excellent trade-off between frequency resolution at both high and low frequencies. Several simulation results obtained with a finite element-based model and experimental results show the validity of this fault diagnosis approach under several faulty and operating conditions. 
Also, additional signals corresponding to the coexistence of eccentricity and other non-fault-related phenomena that complicate the diagnosis (fluctuating load torque) are included in the paper. Finally, a comparison with an alternative TFD tool applied in previous papers, the discrete wavelet transform (DWT), is also carried out in the contribution. The results are promising regarding the usefulness of the methodology for the reliable diagnosis of eccentricities and for their discrimination from other phenomena.

  3. Aircraft Fault Detection and Classification Using Multi-Level Immune Learning Detection

    NASA Technical Reports Server (NTRS)

    Wong, Derek; Poll, Scott; KrishnaKumar, Kalmanje

    2005-01-01

    This work is an extension of a recently developed software tool called MILD (Multi-level Immune Learning Detection), which implements a negative selection algorithm for anomaly and fault detection that is inspired by the human immune system. The immunity-based approach can detect a broad spectrum of known and unforeseen faults. We extend MILD by applying a neural network classifier to identify the pattern of fault detectors that are activated during fault detection. Consequently, MILD now performs fault detection and identification of the system under investigation. This paper describes the application of MILD to detect and classify faults of a generic transport aircraft augmented with an intelligent flight controller. The intelligent control architecture is designed to accommodate faults without the need to explicitly identify them. Adding knowledge about the existence and type of a fault will improve the handling qualities of a degraded aircraft and impact tactical and strategic maneuvering decisions. In addition, providing fault information to the pilot is important for maintaining situational awareness so that he can avoid performing an action that might lead to unexpected behavior - e.g., an action that exceeds the remaining control authority of the damaged aircraft. We discuss the detection and classification results of simulated failures of the aircraft's control system and show that MILD is effective at determining the problem with low false alarm and misclassification rates.

  4. Intelligent Gearbox Diagnosis Methods Based on SVM, Wavelet Lifting and RBR

    PubMed Central

    Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng

    2010-01-01

    Given the problems in intelligent gearbox diagnosis methods, it is difficult to obtain the desired information and a large enough sample size to study; therefore, we propose the application of various methods for gearbox fault diagnosis, including wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, a SVM could be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault; thus effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built based on the field rules summarized by experts to identify the detailed fault type. Results have shown that SVM is a powerful tool to accomplish gearbox fault pattern recognition when the sample size is small, whereas the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis. PMID:22399894

  5. Determining on-fault earthquake magnitude distributions from integer programming

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2018-01-01

    Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
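A toy version of the assignment problem described above, solved by exhaustive search instead of a mixed-integer solver: each synthetic earthquake is placed on exactly one fault so that the summed slip rate on each fault best matches its target. The magnitude-to-slip scaling, catalog, and targets are stand-ins, not the paper's scaling relations or constraints.

```python
from itertools import product

quake_slip = [2.0, 1.0, 1.0]        # slip (m) per event over the catalog span
span_kyr = 1.0                      # catalog duration; m/kyr equals mm/yr
targets = {"A": 3.0, "B": 1.0}      # target slip rates (mm/yr) per fault

def misfit(assignment):
    """Sum of squared slip-rate misfits for one earthquake-to-fault assignment."""
    rates = {f: 0.0 for f in targets}
    for slip, fault in zip(quake_slip, assignment):
        rates[fault] += slip / span_kyr
    return sum((rates[f] - targets[f]) ** 2 for f in targets)

# Enumerate all binary placements (feasible only at toy scale; the paper's
# >10^6-variable problem requires a real mixed-integer solver).
best = min(product(targets, repeat=len(quake_slip)), key=misfit)
```

Here the optimum puts the large event plus one small event on fault A and the remaining small event on fault B, driving the misfit to zero; slip-rate uncertainty bounds would enter as inequality constraints in the full formulation.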

  6. Intelligent gearbox diagnosis methods based on SVM, wavelet lifting and RBR.

    PubMed

    Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng

    2010-01-01

    Given the problems in intelligent gearbox diagnosis methods, it is difficult to obtain the desired information and a large enough sample size to study; therefore, we propose the application of various methods for gearbox fault diagnosis, including wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, a SVM could be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault; thus effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built based on the field rules summarized by experts to identify the detailed fault type. Results have shown that SVM is a powerful tool to accomplish gearbox fault pattern recognition when the sample size is small, whereas the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis.

  7. Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elkhatib, Mohamed; Ellis, Abraham; Biswal, Milan

    In this report we address the challenge of designing an efficient protection system for inverter-dominated microgrids. These microgrids are characterized by limited fault current capacity as a result of the current-limiting protection functions of inverters. Typically, inverters limit their fault contribution within a sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection can fail completely to detect faults in inverter-dominated microgrids. As part of this project, a detailed literature survey of existing and proposed microgrid protection schemes was conducted. The survey concluded that there is a gap in the available microgrid protection methods: the only credible protection solution available in the literature for low-fault inverter-dominated microgrids is the differential protection scheme, which represents a robust transmission-grade solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project: impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance-based protection schemes were developed. The first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a Central Protection Unit that communicates with all feeder relays to locate the fault based on directional flags received from the feeder relays. The latter approach could potentially be adapted to protect networked microgrids and dynamic-topology microgrids. Transient-based protection relies on analyzing high-frequency transients to detect and locate faults. This approach is very promising, but its implementation in the field faces several challenges. For example, high-frequency transients due to faults can be confused with transients due to other events such as capacitor switching. 
Additionally, while detecting faults by analyzing transients could be feasible, locating faults based on analyzing transients remains an open question.
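A hedged sketch of the impedance-trajectory idea: a feeder relay computes the apparent impedance Z = V/I from local phasors and flags a fault when |Z| drops inside a protective zone. The reach setting and phasor values below are illustrative; the report's schemes additionally use directional elements, pilot signals, or a central protection unit.

```python
def apparent_impedance(v_phasor, i_phasor):
    """Apparent impedance seen by the relay, from local voltage/current phasors."""
    return v_phasor / i_phasor

def in_zone(z, reach_ohms=5.0):
    """Under-impedance check: trip if |Z| falls inside the relay reach."""
    return abs(z) < reach_ohms

# Normal load: modest current, impedance well outside the zone.
z_load = apparent_impedance(complex(230, 0), complex(10, -2))

# Inverter-limited fault: the voltage collapses while the current barely
# rises, so |Z| still drops sharply even though an overcurrent relay would
# see little change -- the motivation for non-overcurrent protection.
z_fault = apparent_impedance(complex(30, 0), complex(12, -3))
```

This is why impedance-based detection survives the 1.1 per-unit current limit: the fault signature is carried mostly by the voltage depression, not the current magnitude.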

  8. Step-by-step design of a single phase 3.3 kV/200 a resistive type superconducting fault current limiter (R-SFCL) and cryostat

    NASA Astrophysics Data System (ADS)

    Kar, Soumen; Rao, V. V.

    2018-07-01

    In our first attempt to design a single-phase R-SFCL in India, we have chosen a typical medium-voltage rating (3.3 kVrms, 200 Arms, 1Φ) for the R-SFCL. The step-by-step design procedure for the R-SFCL involves conductor selection, time-dependent electro-thermal simulations, and recovery-time optimization after fault removal. In the numerical analysis, effective limitation of a 5 kA fault current by the medium-voltage R-SFCL is simulated. The maximum normal-state resistance and maximum temperature rise in the SFCL coil during current limitation are estimated using a one-dimensional energy balance equation. Further, a cryogenic system is conceptually designed for the aforesaid MV-level R-SFCL, considering inner and outer vessel materials, wall thickness, and thermal insulation. Finally, the total thermal load of the designed R-SFCL cryostat is calculated in order to select a suitable cryo-refrigerator for LN2 re-condensation.
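A rough sketch of the kind of lumped energy balance used to estimate the element temperature during current limitation: adiabatic heating m·c·dT/dt = i(t)²·R(T), integrated with forward Euler, where the fault current is self-limited by the growing element resistance. All numbers below (source voltage, line impedance, resistance model, heat capacity) are illustrative assumptions, not the paper's design values.

```python
import math

def limit_temperature(v_peak=4667.0, z_line=0.93, r0=2.0, alpha=0.004,
                      t0=77.0, heat_cap=500.0, freq=50.0, cycles=2.0):
    """Forward-Euler integration of m*c*dT/dt = i(t)^2 * R(T), with the
    fault current self-limited by the growing element resistance:
        i(t) = v(t) / (z_line + R(T)),  R(T) = r0 * (1 + alpha*(T - t0)).
    The element starts at t0 = 77 K (liquid nitrogen) and is assumed
    already quenched; heat_cap is the lumped m*c in J/K."""
    dt = 1e-5
    temp = t0
    for k in range(int(cycles / freq / dt)):
        v = v_peak * math.sin(2 * math.pi * freq * k * dt)
        r = r0 * (1.0 + alpha * (temp - t0))
        i = v / (z_line + r)
        temp += i * i * r * dt / heat_cap
    return temp

t_hot = limit_temperature()   # hot-spot estimate after two fault cycles
```

The positive feedback (temperature raises resistance, which limits current) is what keeps the temperature rise bounded; the real design also checks recovery time back to the superconducting state after fault clearance.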

  9. Fault Diagnosis of Induction Machines in a Transient Regime Using Current Sensors with an Optimized Slepian Window.

    PubMed

    Burriel-Valencia, Jordi; Puche-Panadero, Ruben; Martinez-Roman, Javier; Sapena-Bano, Angel; Pineda-Sanchez, Manuel

    2018-01-06

    The aim of this paper is to introduce a new methodology for the fault diagnosis of induction machines working in the transient regime, when time-frequency analysis tools are used. The proposed method relies on the use of the optimized Slepian window for performing the short time Fourier transform (STFT) of the stator current signal. It is shown that for a given sequence length of finite duration, the Slepian window has the maximum concentration of energy, greater than can be reached with a gated Gaussian window, which is usually used as the analysis window. In this paper, the use and optimization of the Slepian window for fault diagnosis of induction machines is theoretically introduced and experimentally validated through the test of a 3.15-MW induction motor with broken bars during the start-up transient. The theoretical analysis and the experimental results show that the use of the Slepian window can highlight the fault components in the current's spectrogram with a significant reduction of the required computational resources.
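A minimal sketch of the STFT-based transient analysis: the stator current is split into overlapping windowed segments and a DFT magnitude is taken of each. A Gaussian window stands in here for the paper's optimized Slepian window (computing Slepian sequences requires an eigenvalue solver); the window length, hop, and test signal are assumptions.

```python
import cmath
import math

def gaussian_window(n, sigma=0.4):
    """Gaussian analysis window of length n (stand-in for a Slepian window)."""
    c = (n - 1) / 2.0
    return [math.exp(-0.5 * ((k - c) / (sigma * c)) ** 2) for k in range(n)]

def stft(signal, win_len=32, hop=16):
    """Return a list of magnitude spectra (bins 0..win_len/2-1), one per frame."""
    win = gaussian_window(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = [signal[start + k] * win[k] for k in range(win_len)]
        spec = [abs(sum(seg[k] * cmath.exp(-2j * math.pi * m * k / win_len)
                        for k in range(win_len))) for m in range(win_len // 2)]
        frames.append(spec)
    return frames

# A current whose frequency steps from DFT bin 4 to bin 8 mid-signal: the
# spectrogram's peak bin should move accordingly, mimicking how a
# fault-related harmonic evolves during a start-up transient.
bins = [4] * 64 + [8] * 64
sig = [math.cos(2 * math.pi * b * t / 32) for t, b in enumerate(bins)]
frames = stft(sig)
```

The window choice matters because a more energy-concentrated window (the Slepian optimum) sharpens the fault component's track in the time-frequency plane for the same computational budget.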

  10. Contribution of Transverse Structures, Magma, and Crustal Fluids to Continental Rift Evolution: The East African Rift in Southern Kenya

    NASA Astrophysics Data System (ADS)

    Kattenhorn, S. A.; Muirhead, J.; Dindi, E.; Fischer, T. P.; Lee, H.; Ebinger, C. J.

    2013-12-01

    The Magadi rift in southern Kenya formed at ~7 Ma within Proterozoic rocks of the Mozambique orogenic belt, parallel to its contact with the Archean Tanzania craton. The rift is bounded to the west by the ~1600-m-high Nguruman border fault. The rift center is intensely dissected by normal faults, most of which offset ~1.4-0.8 Ma lavas. Current E-W extensional velocities are ~2-4 mm/yr. Published crustal tomography models from the rift center show narrow high velocity zones in the upper crust, interpreted as cooled magma intrusions. Local, surface-wave, and SKS-splitting measurements show a rift-parallel anisotropy interpreted to be the result of aligned melt zones in the lithosphere. Our field observations suggest that recent fault activity is concentrated at the rift center, consistent with the location of the 1998 seismic swarm that was associated with an inferred diking event. Fault zones are pervasively mineralized by calcite, likely from CO2-rich fluids. A system of fault-fed springs provides the sole fluid input for Lake Magadi in the deepest part of the basin. Many of these springs emanate from the Kordjya fault, a 50-km-long, NW-SE striking, transverse structure connecting a portion of the border fault system (the NW-oriented Lengitoto fault) to the current locus of strain and magmatism at the rift center. Sampled springs are warm (44.4°C) and alkaline (pH=10). Dissolved gas data (mainly N2-Ar-He) suggests two-component mixing (mantle and air), possibly indicating that fluids are delivered into the fault zone from deep sources, consistent with a dominant role of magmatism to the focusing of strain at the rift center. The Kordjya fault has developed prominent fault scarps (~150 m high) despite being oblique to the dominant ~N-S fault fabric, and has utilized an en echelon alignment of N-S faults to accommodate its motion. 
These N-S faults show evidence of sinistral-oblique motion and imply a bookshelf style of faulting to accommodate dextral-oblique motion along the Kordjya fault. Fault relationships imply that the NW-SE transverse structures represent recent activity in the rift, and have locally tilted Late Pleistocene sediments. Given the abundance of N-S striking faults in the rift, the tendency for fault activity along transverse features suggests a change in the rifting driving forces that are likely the result of an interplay between strain localization at the rift center, inherited crustal fabric (NW structures in the Mozambique belt), a possible counterclockwise rotation of stress related to interacting rift segments in southern Kenya, and an active hydrothermal fluid regime that facilitates faulting. By connecting the Lengitoto fault to the rift center, the Kordjya fault has effectively caused the Magadi rift to bypass the Nguruman border fault, which has been rendered inactive and thus no longer a contributor to the rifting process.

  11. Earthquake cycle simulations with rate-and-state friction and power-law viscoelasticity

    NASA Astrophysics Data System (ADS)

    Allison, Kali L.; Dunham, Eric M.

    2018-05-01

    We simulate earthquake cycles with rate-and-state fault friction and off-fault power-law viscoelasticity for the classic 2D antiplane shear problem of a vertical, strike-slip plate boundary fault. We investigate the interaction between fault slip and bulk viscous flow with experimentally-based flow laws for quartz-diorite and olivine for the crust and mantle, respectively. Simulations using three linear geotherms (dT/dz = 20, 25, and 30 K/km) produce different deformation styles at depth, ranging from significant interseismic fault creep to purely bulk viscous flow. However, they have almost identical earthquake recurrence interval, nucleation depth, and down-dip coseismic slip limit. Despite these similarities, variations in the predicted surface deformation might permit discrimination of the deformation mechanism using geodetic observations. Additionally, in the 25 and 30 K/km simulations, the crust drags the mantle; the 20 K/km simulation also predicts this, except within 10 km of the fault where the reverse occurs. However, basal tractions play a minor role in the overall force balance of the lithosphere, at least for the flow laws used in our study. Therefore, the depth-integrated stress on the fault is balanced primarily by shear stress on vertical, fault-parallel planes. Because strain rates are higher directly below the fault than far from it, stresses are also higher. Thus, the upper crust far from the fault bears a substantial part of the tectonic load, resulting in unrealistically high stresses. In the real Earth, this might lead to distributed plastic deformation or formation of subparallel faults. Alternatively, fault pore pressures in excess of hydrostatic and/or weakening mechanisms such as grain size reduction and thermo-mechanical coupling could lower the strength of the ductile fault root in the lower crust and, concomitantly, off-fault upper crustal stresses.
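The experimentally based flow laws referenced above take the standard power-law (dislocation) creep form; the sketch below shows how temperature, and hence the geotherm, enters the effective viscosity (symbols are the usual material constants, not values from the paper):

```latex
% Power-law creep: strain rate from stress, with Arrhenius temperature
% dependence (A, n, Q are material constants; R is the gas constant):
\dot{\varepsilon} = A \, \sigma^{n} \exp\!\left(-\frac{Q}{RT}\right),
\qquad
\eta_{\mathrm{eff}} = \frac{\sigma}{2\dot{\varepsilon}}
  = \frac{1}{2A}\,\sigma^{1-n}\exp\!\left(\frac{Q}{RT}\right).
% With a linear geotherm T(z) = T_0 + (dT/dz)\, z, a hotter geotherm
% (larger dT/dz) lowers the viscosity at a given depth, which is why the
% 30 K/km model deforms by bulk viscous flow where the 20 K/km model
% still exhibits interseismic fault creep.
```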

  12. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design

    PubMed Central

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei

    2016-01-01

    Abstract Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to a widely used protein design software OSPREY, to allow the original design framework to scale to the commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509

  13. Progress in Aluminum Electrolysis Control and Future Direction for Smart Aluminum Electrolysis Plant

    NASA Astrophysics Data System (ADS)

    Zhang, Hongliang; Li, Tianshuang; Li, Jie; Yang, Shuai; Zou, Zhong

    2017-02-01

    The industrial aluminum reduction cell is an electrochemistry reactor that operates under high temperatures and highly corrosive conditions. However, these conditions have restricted the measurement of key control parameters, making the control of aluminum reduction cells a difficult problem in the industry. Because aluminum electrolysis control systems have a significant economic influence, substantial research has been conducted on control algorithms, control systems and information systems for aluminum reduction cells. This article first summarizes the development of control systems and then focuses on the progress made since 2000, including alumina concentration control, temperature control and electrolyte molecular ratio control, fault diagnosis, cell condition prediction and control system expansion. Based on these studies, the concept of a smart aluminum electrolysis plant is proposed. The frame construction, key problems and current progress are introduced. Finally, several future directions are discussed.

  14. Secular Variation in Slip (Invited)

    NASA Astrophysics Data System (ADS)

    Cowgill, E.; Gold, R. D.

    2010-12-01

    Faults show temporal variations in slip rate at time scales ranging from the hours following a major rupture to the millions of years over which plate boundaries reorganize. One such behavior is secular variation in slip (SVS), which we define as a pulse of accelerated strain release along a single fault that occurs at a frequency that is > 1 order of magnitude longer than the recurrence interval of earthquakes within the pulse. Although numerous mechanical models have been proposed to explain SVS, it has proven much harder to measure long (5-500 kyr) records of fault displacement as a function of time. Such fault-slip histories may be obtained from morphochronologic data, which are measurements of offset and age obtained from faulted landforms. Here we describe slip-history modeling of morphochronologic data and show how this method holds promise for obtaining long records of fault slip. In detail we place SVS in the context of other types of time-varying fault-slip phenomena, explain the importance of measuring fault-slip histories, summarize models proposed to explain SVS, review current approaches for measuring SVS in the geologic record, and illustrate the slip-history modeling approach we advocate here using data from the active, left-slip Altyn Tagh fault in NW Tibet. In addition to SVS, other types of temporal variation in fault slip include post-seismic transients, discrepancies between geologic slip rates and those derived from geodetic and/or paleoseismic data, and single changes in slip rate resulting from plate reorganization. Investigating secular variation in slip is important for advancing understanding of long-term continental deformation, fault mechanics, and seismic risk. Mechanical models producing such behavior include self-driven mode switching, changes in pore-fluid pressure, viscoelasticity, postseismic reloading, and changes in local surface loads (e.g., ice sheets, large lakes, etc.) among others. 
However, a key problem in testing these models is the paucity of long records of fault slip. Paleoseismic data are unlikely to yield such histories because measurements of the slip associated with each event are generally unavailable and long records require large accumulated offsets, which can result in structural duplication or omission of the stratigraphic records of events. In contrast, morphochronologic data capture both the age and offset of individual piercing points, although this approach generally does not resolve individual earthquake events. Because the uncertainties in both age and offset are generally large (5-15%) for individual markers, SVS is best resolved by obtaining suites of such measurements, in which case the errors can be used to reduce the range of slip histories common to all such data points. A suite of such data from the central Altyn Tagh fault reveals a pulse of accelerated strain release in the mid Holocene, with ~20 m of slip being released from ~6.7 to ~5.9 ka at a short-term rate (~28 mm/yr) that is 3 times greater than the average rate (~9 mm/yr). We interpret this pulse to represent a cluster of two to six, Mw > 7.2 earthquakes. To our knowledge, this is the first possible earthquake cluster detected using morphochronologic techniques.

  15. Multiple resolution chirp reflectometry for fault localization and diagnosis in a high voltage cable in automotive electronics

    NASA Astrophysics Data System (ADS)

    Chang, Seung Jin; Lee, Chun Ku; Shin, Yong-June; Park, Jin Bae

    2016-12-01

    A multiple chirp reflectometry system with a fault estimation process is proposed to obtain multiple resolution and to measure the degree of a fault in a target cable. The multiple resolution algorithm can localize faults regardless of fault location. The time delay information, which is derived from the normalized cross-correlation between the incident signal and the bandpass-filtered reflected signals, is converted to a fault location and cable length. The in-phase and quadrature components are obtained by lowpass filtering the mixed signal of the incident and reflected signals. Based on these in-phase and quadrature components, the reflection coefficient is estimated by the proposed fault estimation process, including the mixing and filtering procedure. The measurement uncertainty for this experiment is also analyzed according to the Guide to the Expression of Uncertainty in Measurement. To verify the performance of the proposed method, we conduct comparative experiments to detect and measure faults under different conditions. The target cable length and fault position are designed to reflect the installation environment of the high-voltage cable in an actual vehicle. To simulate the degree of fault, termination impedances of 10 Ω, 30 Ω, 50 Ω, and 1 kΩ are used and estimated by the proposed method. The proposed method offers multiple resolution, which overcomes the blind-spot problem, and can assess the state of the fault.
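
The core of the localization step, a cross-correlation peak converted into a distance, can be sketched as follows. The sampling rate, propagation velocity, chirp parameters, and reflection coefficient are all assumed values, and plain (rather than bandpass-filtered, normalized) cross-correlation is used for brevity:

```python
import numpy as np

fs = 1.0e9                # sampling rate, 1 GS/s (assumed)
v_p = 2.0e8               # propagation velocity in the cable, m/s (assumed)
t = np.arange(2000) / fs  # 2 us record
f0, f1, T = 1e6, 50e6, t[-1]
incident = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))  # linear chirp

# Echo from a fault 10 m away: round-trip delay 2*10/v_p = 100 ns = 100 samples.
delay = 100
reflected = np.zeros_like(incident)
reflected[delay:] = 0.4 * incident[:-delay]

c = np.correlate(reflected, incident, mode='full')
lag = int(np.argmax(c)) - (len(incident) - 1)  # peak lag in samples
fault_distance = v_p * (lag / fs) / 2          # one-way distance, m
```

The chirp's thumbtack-shaped autocorrelation is what makes the peak sharp; the division by 2 converts the round-trip delay to a one-way distance.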

  16. Adaptive-gain fast super-twisting sliding mode fault tolerant control for a reusable launch vehicle in reentry phase.

    PubMed

    Zhang, Yao; Tang, Shengjing; Guo, Jie

    2017-11-01

    In this paper, a novel adaptive-gain fast super-twisting (AGFST) sliding mode attitude control synthesis is carried out for a reusable launch vehicle (RLV) subject to actuator faults and unknown disturbances. According to the fast nonsingular terminal sliding mode surface (FNTSMS) and the adaptive-gain fast super-twisting algorithm, an adaptive fault tolerant control law for attitude stabilization is derived to protect against actuator faults and unknown uncertainties. First, a second-order nonlinear control-oriented model for the RLV is established by the feedback linearization method. On this basis, a fast nonsingular terminal sliding mode (FNTSM) manifold is designed, which provides fast finite-time global convergence and avoids the singularity problem as well as the chattering phenomenon. Based on the merits of the standard super-twisting (ST) algorithm and a fast reaching law with adaptation, a novel AGFST algorithm is proposed for the finite-time fault tolerant attitude control problem of the RLV without any knowledge of the bounds of the uncertainties and actuator faults. The important features of the AGFST algorithm include not overestimating the control gains and a faster convergence speed than the standard ST algorithm. A formal proof of the finite-time stability of the closed-loop system is derived using the Lyapunov function technique. An estimate of the convergence time and an accurate expression for the convergence region are also provided. Finally, simulations are presented to illustrate the effectiveness and superiority of the proposed control scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
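
For a flavor of the underlying machinery, the sketch below drives a scalar sliding variable to zero with the *standard* super-twisting algorithm the paper builds on (not the paper's adaptive-gain AGFST variant); the gains, disturbance, and step size are assumptions chosen to satisfy the usual sufficient conditions:

```python
import math

# Standard super-twisting: u = -k1*sqrt(|s|)*sign(s) + w,  dw/dt = -k2*sign(s),
# applied to ds/dt = u + d(t) where the disturbance derivative is bounded.
k1, k2 = 2.0, 3.0          # gains satisfying the classic sufficient conditions
dt, steps = 1e-4, 50_000   # 5 s of simulated time, forward-Euler integration
s, w, time = 1.0, 0.0, 0.0
for _ in range(steps):
    d = 0.5 * math.sin(time)       # disturbance, |d'| <= 0.5 < k2
    sign = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sign + w
    s += (u + d) * dt
    w += -k2 * sign * dt
    time += dt
```

Note that the control signal is continuous (only its derivative switches), which is the feature that suppresses the chattering the abstract mentions.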

  17. Relationship between displacement and gravity change of Uemachi faults and surrounding faults of Osaka basin, Southwest Japan

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Kusumoto, S.; Itoh, Y.; Takemura, K.

    2011-12-01

    The Osaka basin, surrounded by the Rokko and Ikoma Ranges, is one of the typical Quaternary sedimentary basins in Japan. It has been filled by the Pleistocene Osaka Group and later sediments. Several large cities and metropolitan areas, such as Osaka and Kobe, are located in the Osaka basin. The basin is surrounded by E-W trending strike-slip faults and N-S trending reverse faults. The N-S trending, 42-km-long Uemachi faults traverse the central part of Osaka city. The Uemachi faults have been investigated for countermeasures against earthquake disaster. It is important to reveal detailed fault parameters, such as length, dip, and recurrence interval, for strong ground motion simulation and disaster prevention. For strong ground motion simulation, the fault model of the Uemachi faults consists of two parts, a northern and a southern part, because of the absence of basement displacement in the central part of the faults. The Ministry of Education, Culture, Sports, Science and Technology started a project to survey the Uemachi faults, and the Disaster Prevention Research Institute of Kyoto University carried out various surveys over the 3 years from 2009 to 2012. The results of the last year revealed higher activity on a branch fault than on the main faults in the central part (see poster of "Subsurface Flexure of Uemachi Fault, Japan" by Kitada et al., in this meeting). Kusumoto et al. (2001) reported, based on a dislocation model, that the surrounding faults can form a similar basement relief even without the Uemachi faults. We performed various parameter studies of dislocation and gravity change based on a simplified fault model designed from the distribution of the real faults. The model consisted of 7 faults, including the Uemachi faults. The dislocation and gravity change were calculated following Okada et al. (1985) and Okubo et al. (1993), respectively. 
The results show a basement displacement pattern similar to that of Kusumoto et al. (2001) and no characteristic gravity change pattern. Quantitative estimation remains a problem for future work.

  18. Estimating Tsunami Runup with Fault Plane Parameters

    NASA Astrophysics Data System (ADS)

    Sepulveda, I.; Liu, P. L. F.

    2016-12-01

    The forecasting of tsunami runup has often been done by solving numerical models. Their execution times, however, make them unsuitable for warning purposes. We offer an alternative method that provides an analytical relationship between the runup height, the fault plane parameters, and the characteristics of the coastal bathymetry. The method uses the model of Okada (1985) to estimate the coseismic deformation and the corresponding sea surface displacement (η(x,0)). Once the tsunami waves are generated, the Carrier & Greenspan (1958) solution (C&G) is adopted to yield analytical expressions for the shoreline elevation and velocity. Two types of problems are investigated. In the first, the bathymetry is modeled as a constant slope connected to a constant-depth region, where a seismic event occurs. This is a boundary value problem (BVP). In the second, the bathymetry is further simplified as a constant slope, on which a seismic event occurs. This is an initial value problem (IVP). Both problems are depicted in Figure 1. We derive runup solutions in terms of the fault parameters. The earthquake is associated with vertical coseismic seafloor displacements by using Okada's elastic model. In addition to the simplifications considered in Okada's model, we further assume (1) a strike parallel to the shoreline, (2) a very long rupture area, and (3) a fast earthquake, so that the surface elevation mimics the seafloor displacement. The tsunami origin is then modeled in terms of the fault depth (d), fault width (W), fault slip (s) and dip angle (δ). We describe the solution for the BVP. Madsen & Schaeffer (2010) utilized C&G to derive solutions for the shoreline elevation of sinusoidal waves imposed at the offshore boundary. A linear superposition of this solution represents any arbitrary incident wave. Furthermore, we can prescribe the boundary condition at the toe of the sloping beach by adopting the linear shallow water equations in the constant-depth area. 
By means of a dimensional analysis, the runup R is determined by Eq. 1. Kanoglu (2004) derived a non-dimensional expression for long wave runup originating over a sloping beach. In our work we determine an analytical expression for a sinusoidal initial condition. Following the same procedure as for the BVP, the expression for the runup R in the IVP is given by Eq. 2. The curves F1 and F2 are plotted in Figure 2.

  19. Performance analysis of microcomputer based differential protection of UHV lines under selective phase switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatti, A.A.

    1990-04-01

    This paper examines the effects of primary and secondary fault quantities, as well as of mutual couplings of neighboring circuits, on the sensitivity of operation and threshold settings of a microcomputer based differential protection of UHV lines under selective phase switching. Microcomputer based selective phase switching allows the disconnection of the minimum number of phases involved in a fault and requires the autoreclosing of these phases immediately after the extinction of the secondary arc. During a primary fault, a heavy current contribution to the healthy phases tends to cause an unwanted tripping. Faulty phases physically disconnected constitute an isolated fault which, being coupled to the system, affects the current and voltage levels of the healthy phases still retained in the system and may cause an unwanted tripping. The microcomputer based differential protection appears to have poor performance when applied to uncompensated lines employing selective pole switching.

  20. Controls on Early-Rift Geometry: New Perspectives From the Bilila-Mtakataka Fault, Malawi

    NASA Astrophysics Data System (ADS)

    Hodge, M.; Fagereng, Å.; Biggs, J.; Mdala, H.

    2018-05-01

    We use the ˜110-km long Bilila-Mtakataka fault in the amagmatic southern East African Rift, Malawi, to investigate the controls on early-rift geometry at the scale of a major border fault. Morphological variations along the 14 ± 8-m high scarp define six 10- to 40-km long segments, which are either foliation parallel or oblique to both the foliation and the current regional extension direction. As the scarp is neither consistently parallel to foliation nor well oriented for the current regional extension direction, we suggest that the segmented surface expression is related to the local reactivation of well-oriented weak shallow fabrics above a broadly continuous structure at depth. Using a geometrical model, the geometry of the best-fitting subsurface structure is consistent with the local strain field from recent seismicity. In conclusion, within this early rift, preexisting weaknesses only locally control border fault geometry in the subsurface.

  1. Intelligent fault-tolerant controllers

    NASA Technical Reports Server (NTRS)

    Huang, Chien Y.

    1987-01-01

    A system with fault tolerant controls is one that can detect, isolate, and estimate failures and perform necessary control reconfiguration based on this new information. Artificial intelligence (AI) is concerned with semantic processing, and it has evolved to include the topics of expert systems and machine learning. This research represents an attempt to apply AI to fault tolerant controls, hence, the name intelligent fault tolerant control (IFTC). A generic solution to the problem is sought, providing a system based on logic in addition to analytical tools, and offering machine learning capabilities. The advantages are that redundant system specific algorithms are no longer needed, that reasonableness is used to quickly choose the correct control strategy, and that the system can adapt to new situations by learning about its effects on system dynamics.

  2. Fault Identification Based on Nlpca in Complex Electrical Engineering

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping; Zhang, Jinfang

    2012-07-01

    The fault is inevitable in any complex systems engineering. The electric power system is an essentially nonlinear system and one of the most complex artificial systems in the world. In our research, based on real-time measurements from phasor measurement units, under the influence of white Gaussian noise (with a standard deviation of 0.01 and zero mean error), we mainly used nonlinear principal component analysis (NLPCA) to resolve the fault identification problem in complex electrical engineering. The simulation results show that the fault in complex electrical engineering usually corresponds to the variable with the maximum absolute coefficient in the first principal component. This research has significant theoretical value and engineering practical significance.
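
The identification rule described here (fault variable = largest-magnitude loading in the first principal component) can be sketched with linear PCA standing in for NLPCA; the channel count, noise level, and drifting channel below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(0.0, 0.01, size=(n, 4))  # four PMU channels, sigma = 0.01 noise
X[:, 2] += np.linspace(0.0, 1.0, n)     # channel 2 drifts: the simulated fault

Xc = X - X.mean(axis=0)                 # mean-center before PCA
eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc / (n - 1))
pc1 = eigvecs[:, np.argmax(eigvals)]    # first principal component loadings
fault_channel = int(np.argmax(np.abs(pc1)))
```

Because the drift variance dwarfs the sensor noise, the first principal component aligns almost entirely with the faulted channel, so `fault_channel` recovers index 2.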

  3. Breaking down barriers in cooperative fault management: Temporal and functional information displays

    NASA Technical Reports Server (NTRS)

    Potter, Scott S.; Woods, David D.

    1994-01-01

    At the highest level, the fundamental question addressed by this research is how to aid human operators engaged in dynamic fault management. In dynamic fault management there is some underlying dynamic process (an engineered or physiological process referred to as the monitored process - MP) whose state changes over time and whose behavior must be monitored and controlled. In these types of applications (dynamic, real-time systems), a vast array of sensor data is available to provide information on the state of the MP. Faults disturb the MP and diagnosis must be performed in parallel with responses to maintain process integrity and to correct the underlying problem. These situations frequently involve time pressure, multiple interacting goals, high consequences of failure, and multiple interleaved tasks.

  4. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoli; Liu, Xiuli

    2017-09-01

    Given the weak degradation characteristic information during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak characteristic information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract and subject the noisy part of the signal to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.
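
A minimal sketch of the SVD-denoising step alone: build a trajectory matrix, choose the denoising order from the cumulative (energy) contribution rate of the singular values, and map the truncated matrix back by anti-diagonal averaging. The LMD stage and the μ-SVD weighting are omitted, and the 0.9 threshold is an assumption:

```python
import numpy as np

def svd_denoise(x, energy=0.9):
    """Truncated-SVD denoising of a 1-D signal via its trajectory matrix."""
    n = len(x)
    m = n // 2
    H = np.lib.stride_tricks.sliding_window_view(x, m)  # (n-m+1, m) Hankel rows
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)                # cumulative contribution rate
    k = int(np.searchsorted(cum, energy)) + 1           # denoising order
    Hk = (U[:, :k] * s[:k]) @ Vt[:k]
    # Average along anti-diagonals to map the rank-k matrix back to a signal.
    y = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(Hk.shape[0]):
        y[i:i + m] += Hk[i]
        cnt[i:i + m] += 1
    return y / cnt

rng = np.random.default_rng(42)
t = np.arange(400)
clean = np.sin(2 * np.pi * 5 * t / 400)
noisy = clean + rng.normal(0.0, 0.2, t.size)
denoised = svd_denoise(noisy)
```

For a sinusoid the trajectory matrix is essentially rank 2, so the energy criterion retains two components and discards most of the noise subspace.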

  5. An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil

    2012-01-01

    Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.

  6. Growth and linkage of the quaternary Ubrique Normal Fault Zone, Western Gibraltar Arc: role on the along-strike relief segmentation

    NASA Astrophysics Data System (ADS)

    Jiménez-Bonilla, Alejandro; Balanya, Juan Carlos; Exposito, Inmaculada; Diaz-Azpiroz, Manuel; Barcos, Leticia

    2015-04-01

    Strain partitioning modes within migrating orogenic arcs may result in arc-parallel stretching that produces along-strike structural and topographic discontinuities. In the Western Gibraltar Arc, arc-parallel stretching has operated from the Lower Miocene up to recent times. In this study, we have reviewed the Colmenar Fault, located at the SW end of the Subbetic ranges, previously interpreted as a Middle Miocene low-angle normal fault. Our results allow to identify younger normal fault segments, to analyse their kinematics, growth and segment linkage, and to discuss its role on the structural and relief drop at regional scale. The Colmenar Fault is folded by post-Serravallian NE-SW buckle folds. Both the SW-dipping fault surfaces and the SW-plunging fold axes contribute to the structural relief drop toward the SW. Nevertheless, at the NW tip of the Colmenar Fault, we have identified unfolded normal faults cutting quaternary soils. They are grouped into a N110˚E striking brittle deformation band 15km long and until 3km wide (hereafter Ubrique Normal Fault Zone; UNFZ). The UNFZ is divided into three sectors: (a) The western tip zone is formed by normal faults which usually dip to the SW and whose slip directions vary between N205˚E and N225˚E. These segments are linked to each other by left-lateral oblique faults interpreted as transfer faults. (b) The central part of the UNFZ is composed of a single N115˚E striking fault segment 2,4km long. Slip directions are around N190˚E and the estimated throw is 1,25km. The fault scarp is well-conserved reaching up to 400m in its central part and diminishing to 200m at both segment terminations. This fault segment is linked to the western tip by an overlap zone characterized by tilted blocks limited by high-angle NNE-SSW and WNW-ESE striking faults interpreted as "box faults" [1]. (c) The eastern tip zone is formed by fault segments with oblique slip which also contribute to the downthrown of the SW block. 
This kinematic pattern seems to be related to other strike-slip fault systems developed to the E of the UNFZ. The structural revision together with updated kinematic data suggest that the Colmenar Fault is cut and downthrown by a younger normal fault zone, the UNFZ, which would have contributed to accommodate arc-parallel stretching until the Quaternary. This stretching provokes along-strike relief segmentation, being the UNFZ the main fault zone causing the final drop of the Subbetic ranges towards the SW within the Western Gibraltar Arc. Our results show displacement variations in each fault segment of the UNFZ, diminishing to their tips. This suggests fault segment linkage finally evolved to build the nearly continuous current fault zone. The development of current large through-going faults linked inside the UNFZ is similar to those ones simulated in some numerical modelling of rift systems [2]. Acknowledgements: RNM-415 and CGL-2013-46368-P [1]Peacock, D.C.P., Knipe, R.J., Sanderson, D.J., 2000. Glossary of normal faults. Journal Structural Geology, 22, 291-305. [2]Cowie, P.A., Gupta, S., Dawers, N.H., 2000. Implications of fault array evolution for synrift depocentre development: insights from a numerical fault growth model. Basin Research, 12, 241-261.

  7. Paleoseismic Investigation of the Ranong and Khlong Marui faults, Chumphon Province, Southern Thailand

    NASA Astrophysics Data System (ADS)

    Fenton, C. H.; Sutiwanich, C.

    2005-12-01

    The Ranong and Khlong Marui faults are northeast-southwest trending structures in the Isthmus of Kra, southern Thailand, that apparently link the extensional regimes of the Mergui Basin in the Andaman Sea and the Gulf of Thailand. These faults are depicted commonly as strike-slip faults, acting as conjugate structures to the dominant northwest-southeast trending strike-slip faults, in Southeast Asia. These faults are parallel to the predominant structural grain in the Carboniferous rocks of peninsular Thailand. In addition, they appear to be bounding structures for several Tertiary basins, including the onshore parts of the Surat Thani basin and the offshore Chumphon basin. Initial remote sensing studies showed that both faults have relatively subdued geomorphic expressions. Field reconnaissance investigations indicated a lack of youthful tectonic geomorphology along the Khlong Marui fault and ambiguous evidence for recent movement along the Ranong fault. Fault exposures along both fault trends and on minor parallel faults in the region indicated that, rather than predominantly strike-slip motion, these faults have experienced up-to-the-west reverse movement. Because of its more youthful geomorphic expression, several sites along the Ranong fault were chosen for paleoseismic trenching. Initial trench exposures indicate an absence of Holocene movement. Some exposures indicate the possibility of Late Tertiary-Early Holocene vertical movement. These investigations are currently ongoing and we hope to report our conclusions at the Fall Meeting.

  8. Fault ride-through enhancement using an enhanced field oriented control technique for converters of grid connected DFIG and STATCOM for different types of faults.

    PubMed

    Ananth, D V N; Nagesh Kumar, G V

    2016-05-01

    With the increase in electric power demand, transmission lines are forced to operate close to their full load and, owing to drastic changes in weather conditions and rising thermal stress, the system operates with a smaller security margin. To meet the increased power demand, a doubly fed induction generator (DFIG) based wind generation system is a good alternative. For improving power flow capability and increasing security, a STATCOM can be adopted. As per modern grid rules, a DFIG needs to operate without losing synchronism during severe grid faults, a capability called low voltage ride-through (LVRT). Hence, an enhanced field oriented control technique (EFOC) was adopted in the rotor-side converter of the DFIG to improve power transfer and to improve dynamic and transient stability. A STATCOM is coordinated with the system for better stability and enhanced operation during grid faults. In the EFOC technique, the rotor flux reference changes its value from synchronous speed to zero during the fault so as to inject current at the rotor slip frequency. In this process, the DC-offset component of the flux decomposition is controlled during symmetric and asymmetric faults. The offset decomposition of flux is oscillatory in conventional field oriented control, whereas EFOC aims to damp it quickly. This paper mitigates voltage dips and limits surge currents to enhance the operation of the DFIG during symmetrical and asymmetrical faults. System performance was compared, without and with a STATCOM, for different types of faults (single line-to-ground, double line-to-ground, and triple line-to-ground) occurring at the point of common coupling with a very small fault resistance of 0.001 Ω. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Arc fault detection system

    DOEpatents

    Jha, K.N.

    1999-05-18

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between the current flowing from secondary windings located in a current transformer coupled to the power supply side of a switchboard, and the total current induced in secondary windings coupled to the load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and the power supply to remove the supply of power to the switchboard. 1 fig.
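
The tripping logic amounts to comparing the supply-side and load-side currents against a pickup threshold; a toy sketch (the pickup value and current magnitudes are invented):

```python
def differential_trip(i_supply_a, i_load_a, pickup_a=5.0):
    """Trip when supply-side and load-side currents (amperes) differ by more
    than the pickup, i.e. current is leaving the protected zone via a fault."""
    return abs(i_supply_a - i_load_a) > pickup_a

# Healthy feeder: what goes in comes back out, no trip.
healthy = differential_trip(100.0, 99.8)
# Arc fault inside the zone diverts part of the current, so the relay trips.
faulted = differential_trip(100.0, 88.0)
```

In the patented system the trip signal energizes the relay's operating coil, which opens the upstream breaker.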

  10. Fault Tree Analysis Application for Safety and Reliability

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies root causes in system components, but when software is identified as a root cause, the tools do not build trees into the software component. No commercial software tools have been built specifically for the development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
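
The quantitative side of FTA reduces to combining independent basic-event probabilities through AND/OR gates; a small sketch with made-up event probabilities:

```python
def and_gate(probs):
    """All inputs must fail (independent basic events)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """At least one input fails (independent basic events)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical top event: (sensor fails AND backup fails) OR software fault.
p_top = or_gate([and_gate([0.1, 0.2]), 0.01])  # 1 - 0.98*0.99 = 0.0298
```

SFTA would extend the same gate structure downward into the software component instead of stopping at "software fault" as a leaf.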

  11. Sensor fault diagnosis of aero-engine based on divided flight status.

    PubMed

    Zhao, Zhen; Zhang, Jun; Sun, Yigang; Liu, Zhexu

    2017-11-01

    Fault diagnosis and safety analysis of an aero-engine, whose safety directly affects the flight safety of an aircraft, have attracted more and more attention in modern society. In this paper, the problem of sensor fault diagnosis is investigated for an aero-engine during the whole flight process. Considering that the aero-engine works in different statuses throughout the flight, a flight status division-based sensor fault diagnosis method is presented to improve fault diagnosis precision for the aero-engine. First, aero-engine status is partitioned according to normal sensor data from the whole flight process using a clustering algorithm. Based on that, a diagnosis model is built for each status using the principal component analysis algorithm. Finally, the sensors are monitored using the built diagnosis models by identifying the aero-engine status. The simulation result illustrates the effectiveness of the proposed method.
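
The pipeline described (partition normal data into statuses, fit one PCA model per status, monitor new samples via the residual) can be sketched as follows. The two simulated statuses, sensor model, and threshold are all invented, and per-status sample means stand in for the paper's clustering step:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_status(center, n=300):
    """Three correlated sensors driven by one latent operating factor."""
    t = rng.normal(0.0, 1.0, n)
    latent = np.column_stack([t, 2.0 * t, -t]) * 0.1
    return center + latent + rng.normal(0.0, 0.02, (n, 3))

# Normal data for two flight statuses (say, climb and cruise).
normal = {0: simulate_status(np.array([1.0, 2.0, 0.5])),
          1: simulate_status(np.array([3.0, 1.0, 1.5]))}
centroids = {k: X.mean(axis=0) for k, X in normal.items()}

# One PCA model (mean, retained loadings, SPE limit) per status.
models = {}
for k, X in normal.items():
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:1].T                       # keep one principal component
    spe = ((Xc - Xc @ P @ P.T)**2).sum(axis=1)   # squared prediction error
    models[k] = (mu, P, np.percentile(spe, 99))

def diagnose(x):
    """Identify the status, then flag a sensor fault if SPE exceeds the limit."""
    k = min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
    mu, P, limit = models[k]
    r = (x - mu) - (x - mu) @ P @ P.T
    return k, bool((r**2).sum() > limit)
```

A sample at a status 0 operating point should pass, while biasing one sensor off the learned correlation structure (e.g. adding 0.5 to the second channel) should trip the SPE limit for that status.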

  12. Tsunamis and splay fault dynamics

    USGS Publications Warehouse

    Wendt, J.; Oglesby, D.D.; Geist, E.L.

    2009-01-01

    The geometry of a fault system can have significant effects on tsunami generation, but most tsunami models to date have not investigated the dynamic processes that determine which path rupture will take in a complex fault system. To gain insight into this problem, we use the 3D finite element method to model the dynamics of a plate boundary/splay fault system. We use the resulting ground deformation as a time-dependent boundary condition for a 2D shallow-water hydrodynamic tsunami calculation. We find that if the stress distribution is homogeneous, rupture remains on the plate boundary thrust. When a barrier is introduced along the strike of the plate boundary thrust, rupture propagates to the splay faults and produces a significantly larger tsunami than in the homogeneous case. The results have implications for the dynamics of megathrust earthquakes, and also suggest that dynamic earthquake modeling may be a useful tool in tsunami research. Copyright 2009 by the American Geophysical Union.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S.; Peng, L.; Bronevetsky, G.

    As HPC systems approach exascale, their circuit features will shrink while their overall size grows, all at a fixed power limit. These trends imply that soft faults in electronic circuits will become an increasingly significant problem for applications that run on these systems, causing them to occasionally crash or, worse, silently return incorrect results. This is motivating extensive work on application resilience to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and resilience techniques. Effective use of such techniques requires a detailed understanding of (1) which vulnerable parts of the application are most worth protecting and (2) the performance and resilience impact of fault resilience mechanisms on the application. This paper presents FaultTelescope, a tool that combines these two and generates actionable insights by presenting, in an intuitive way, application vulnerabilities and the impact of fault resilience mechanisms on applications.

  14. Sensor fault diagnosis of aero-engine based on divided flight status

    NASA Astrophysics Data System (ADS)

    Zhao, Zhen; Zhang, Jun; Sun, Yigang; Liu, Zhexu

    2017-11-01

    Fault diagnosis and safety analysis of an aero-engine, whose safety directly affects the flight safety of an aircraft, have attracted more and more attention in modern society. In this paper, the problem of sensor fault diagnosis is investigated for an aero-engine during the whole flight process. Considering that the aero-engine works in different statuses throughout the flight, a flight status division-based sensor fault diagnosis method is presented to improve fault diagnosis precision for the aero-engine. First, aero-engine status is partitioned according to normal sensor data from the whole flight process using a clustering algorithm. Based on that, a diagnosis model is built for each status using the principal component analysis algorithm. Finally, the sensors are monitored using the built diagnosis models by identifying the aero-engine status. The simulation result illustrates the effectiveness of the proposed method.

  15. Distributed fault-tolerant time-varying formation control for high-order linear multi-agent systems with actuator failures.

    PubMed

    Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang

    2017-11-01

    This paper investigates the fault-tolerant time-varying formation control problems for high-order linear multi-agent systems in the presence of actuator failures. Firstly, a fully distributed formation control protocol is presented to compensate for the influences of both bias fault and loss of effectiveness fault. Using the adaptive online updating strategies, no global knowledge about the communication topology is required and the bounds of actuator failures can be unknown. Then an algorithm is proposed to determine the control parameters of the fault-tolerant formation protocol, where the time-varying formation feasible conditions and an approach to expand the feasible formation set are given. Furthermore, the stability of the proposed algorithm is proven based on the Lyapunov-like theory. Finally, two simulation examples are given to demonstrate the effectiveness of the theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Fault detection for singular switched linear systems with multiple time-varying delay in finite frequency domain

    NASA Astrophysics Data System (ADS)

    Zhai, Ding; Lu, Anyang; Li, Jinghao; Zhang, Qingling

    2016-10-01

    This paper deals with the problem of fault detection (FD) for continuous-time singular switched linear systems with multiple time-varying delays. The actuator fault is considered, and the system faults and unknown disturbances are assumed to lie in known frequency domains. Finite frequency performance indices are introduced to design switched FD filters which ensure that the filtering augmented systems, under a switching signal with average dwell time, are exponentially admissible and guarantee fault input sensitivity and disturbance robustness. By developing the generalised Kalman-Yakubovich-Popov lemma and using Parseval's theorem and the Fourier transform, finite frequency delay-dependent sufficient conditions for the existence of such a filter, which guarantee the finite-frequency H- and H∞ performance, are derived and formulated in terms of linear matrix inequalities. Four examples are provided to illustrate the effectiveness of the proposed finite frequency method.

  17. A Fault Location Algorithm for Two-End Series-Compensated Double-Circuit Transmission Lines Using the Distributed Parameter Line Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Ning; Gombos, Gergely; Mousavi, Mirrasoul J.

    A new fault location algorithm for two-end series-compensated double-circuit transmission lines utilizing unsynchronized two-terminal current phasors and local voltage phasors is presented in this paper. The distributed parameter line model is adopted to take into account the shunt capacitance of the lines. The mutual coupling between the parallel lines in the zero-sequence network is also considered. The boundary conditions under different fault types are used to derive the fault location formulation. The developed algorithm directly uses the local voltage phasors on the line side of the series compensation (SC) and metal oxide varistors (MOVs). However, when potential transformers are not installed on the line side of the SC and MOVs at the local terminal, these measurements can be calculated from the local terminal bus voltage and currents by estimating the voltages across the SC and MOVs. MATLAB SimPowerSystems is used to generate cases under diverse fault conditions to evaluate accuracy. The simulation results show that the proposed algorithm is qualified for practical implementation.
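
    The two-end principle behind such algorithms can be sketched with a single uncompensated line: propagate the voltage profile from each terminal using the distributed parameter (long-line) model, and the fault lies where the two profiles meet. The sketch below builds a self-consistent test case and recovers the fault distance by grid search; line constants and phasors are illustrative assumptions, and the paper's actual algorithm additionally handles series compensation, double circuits, and fault-type boundary conditions.

    ```python
    import numpy as np

    # Per-km line constants (illustrative values for a 50 Hz overhead line)
    w = 2 * np.pi * 50
    z = 0.03 + 1j * w * 1.0e-3        # series impedance, ohm/km
    y = 1j * w * 1.2e-8               # shunt admittance, S/km
    gamma = np.sqrt(z * y)            # propagation constant, 1/km
    Zc = np.sqrt(z / y)               # characteristic impedance, ohm
    L = 100.0                         # line length, km

    def v_profile(V, I, x):
        """Voltage x km into the line under the distributed parameter model."""
        return V * np.cosh(gamma * x) - Zc * I * np.sinh(gamma * x)

    # Build a consistent case with a fault d_true km from terminal S
    d_true = 40.0
    Vs, Is = 230e3 + 0j, 800.0 - 200j          # assumed sending-end phasors
    Vf = v_profile(Vs, Is, d_true)             # voltage at the fault point
    Ir = 600.0 + 150j                          # assumed receiving-end current
    # Solve the receiving-end voltage so both profiles meet at the fault
    Vr = (Vf + Zc * Ir * np.sinh(gamma * (L - d_true))) / np.cosh(gamma * (L - d_true))

    # Fault location: the distance where the two voltage profiles agree
    xs = np.linspace(0.0, L, 10001)
    mismatch = np.abs(v_profile(Vs, Is, xs) - v_profile(Vr, Ir, L - xs))
    d_est = xs[np.argmin(mismatch)]
    ```

    Because the profiles are propagated with cosh/sinh terms rather than a lumped impedance, the shunt capacitance of the line is accounted for, which is the motivation for the distributed parameter model in the paper.
    
    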

  18. Characteristic investigation and control of a modular multilevel converter-based HVDC system under single-line-to-ground fault conditions

    DOE PAGES

    Shi, Xiaojie; Wang, Zhiqiang; Liu, Bo; ...

    2014-05-16

    This paper presents the analysis and control of a modular multilevel converter (MMC)-based HVDC transmission system under three possible single-line-to-ground fault conditions, with special focus on the investigation of their different fault characteristics. Considering positive-, negative-, and zero-sequence components in both arm voltages and currents, the generalized instantaneous power of a phase unit is derived theoretically according to the equivalent circuit model of the MMC under unbalanced conditions. Based on this model, a novel double-line-frequency dc-voltage ripple suppression control is proposed. This controller, together with the negative- and zero-sequence current control, can enhance the overall fault-tolerant capability of the HVDC system without additional cost. To further improve the fault-tolerant capability, the operating performance of the HVDC system with and without single-phase switching is discussed and compared in detail. Lastly, simulation results from a three-phase MMC-HVDC system generated with MATLAB/Simulink are provided to support the theoretical analysis and proposed control schemes.
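
    Why unbalanced conditions produce a double-line-frequency ripple can be checked numerically: for balanced three-phase voltages and currents the 2f components of the per-phase powers cancel in the sum, but a negative-sequence current component leaves a net 100 Hz term. The amplitudes and phase angles below are arbitrary illustrative values, not MMC parameters from the paper.

    ```python
    import numpy as np

    f, w = 50.0, 2 * np.pi * 50.0
    t = np.linspace(0.0, 0.2, 20000, endpoint=False)   # exactly 10 cycles
    shift = np.array([0.0, -2 * np.pi / 3, 2 * np.pi / 3])  # phases a, b, c

    def three_phase_power(i_neg):
        """Total p(t) for balanced voltages, with optional negative-seq current."""
        p = np.zeros_like(t)
        for k in shift:
            v = np.cos(w * t + k)
            # negative sequence rotates the opposite way: phase -k
            i = 0.8 * np.cos(w * t + k - 0.3) + i_neg * np.cos(w * t - k + 0.5)
            p += v * i
        return p

    def ripple_at_2f(p):
        """Spectral magnitude of p(t) at twice the line frequency (100 Hz)."""
        spec = np.abs(np.fft.rfft(p)) / len(p)
        freqs = np.fft.rfftfreq(len(p), t[1] - t[0])
        return spec[np.argmin(np.abs(freqs - 2 * f))]
    ```

    With `i_neg = 0` the 100 Hz bin is numerically zero; with a 20 % negative-sequence current it is clearly nonzero, which is the ripple the proposed dc-voltage suppression control targets.
    
    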

  19. Graphical fault tree analysis for fatal falls in the construction industry.

    PubMed

    Chi, Chia-Fen; Lin, Syuan-Zih; Dewi, Ratna Sari

    2014-11-01

    The current study applied fault tree analysis to represent the causal relationships among the events and causes that contributed to fatal falls in the construction industry. Four hundred and eleven work-related fatalities in the Taiwanese construction industry were analyzed in terms of age, gender, experience, falling site, falling height, company size, and the causes of each fatality. Given that most fatal accidents involve multiple events, the current study coded up to three causes for each fall fatality. After Boolean algebra and minimal cut set analyses, the accident causes associated with each falling site can be presented as a fault tree providing an overview of the basic causes that could trigger fall fatalities in the construction industry. Graphical icons were designed for each falling site along with the associated accident causes to illustrate the fault tree in a graphical manner. A graphical fault tree can improve inter-disciplinary discussion of risk management and the communication of accident causation to first-line supervisors. Copyright © 2014 Elsevier Ltd. All rights reserved.
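
    The minimal cut set step mentioned above can be sketched as follows: expand the AND/OR gate tree into cut sets, then discard any cut set that contains another. The tiny fault tree used here is a hypothetical stand-in, not one of the study's trees.

    ```python
    from itertools import product

    # Hypothetical fall-fatality tree: the top event occurs if an unguarded
    # edge exists AND (no harness is worn OR the anchor point fails).
    # Gates are ("AND", [...]) or ("OR", [...]); leaves are basic-event names.
    tree = ("AND", ["unguarded_edge",
                    ("OR", ["no_harness", "anchor_failure"])])

    def cut_sets(node):
        """Expand a gate tree into a list of cut sets (frozensets of events)."""
        if isinstance(node, str):                  # basic event
            return [frozenset([node])]
        gate, children = node
        child_sets = [cut_sets(c) for c in children]
        if gate == "OR":                           # any alternative suffices
            return [cs for sets in child_sets for cs in sets]
        # AND gate: merge one cut set chosen from each child
        return [frozenset().union(*combo) for combo in product(*child_sets)]

    def minimal(sets):
        """Keep only cut sets containing no smaller cut set (minimal cut sets)."""
        return [s for s in sets if not any(t < s for t in sets)]

    mcs = minimal(cut_sets(tree))
    ```

    Each minimal cut set is a smallest combination of basic causes sufficient for a fatal fall, which is what makes the representation useful for prioritising interventions.
    
    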

  20. Automatic bearing fault diagnosis of permanent magnet synchronous generators in wind turbines subjected to noise interference

    NASA Astrophysics Data System (ADS)

    Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo

    2018-02-01

    An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subjected to low rotating speeds, speed fluctuations, and electrical device noise interference. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronously sampled vibration signal of the faulty bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Two types of faulty bearings with different fault sizes in a PMSG test rig are subjected to experiments to test the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
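
    The angular-domain resampling step (computed order tracking) can be sketched with interpolation: given a monotonic rotating-angle curve, the vibration signal is re-evaluated at uniform angle increments so that shaft-locked fault components become stationary despite speed fluctuation. The signals below are synthetic stand-ins for the PMSG measurements; the sampling rate, speed profile, and samples-per-revolution are illustrative assumptions.

    ```python
    import numpy as np

    fs = 1000.0                               # time-domain sampling rate, Hz
    t = np.arange(0, 2.0, 1.0 / fs)           # 2 s of data
    # Fluctuating shaft speed gives a nonlinear rotating-angle curve theta(t)
    speed_hz = 10.0 + 2.0 * np.sin(2 * np.pi * 0.5 * t)   # rev/s
    theta = 2 * np.pi * np.cumsum(speed_hz) / fs          # rad, monotonic
    vib = np.sin(8 * theta)                   # fault tone locked to shaft angle

    # Angular-domain resampling: choose uniform angle steps, then interpolate
    # the vibration signal onto that uniform-angle grid.
    samples_per_rev = 64
    theta_uniform = np.arange(theta[0], theta[-1],
                              2 * np.pi / samples_per_rev)
    vib_angular = np.interp(theta_uniform, theta, vib)
    ```

    In the resulting angular domain the eighth-order component appears at a constant order frequency, so envelope or spectral analysis can identify the bearing characteristic order even though the time-domain frequency wanders with speed.
    
    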
