Sample records for fault protection practices

  1. Protecting Against Faults in JPL Spacecraft

    NASA Technical Reports Server (NTRS)

    Morgan, Paula

    2007-01-01

    A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA's Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission-specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft in a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.

  2. Attitude control fault protection - The Voyager experience

    NASA Technical Reports Server (NTRS)

    Litty, E. C.

    1980-01-01

    The length of the Voyager mission and the communication delay caused by the distances involved made fault protection a necessary part of the Voyager Attitude and Articulation Control Subsystem (AACS) design. An overview of the Voyager attitude control fault protection is given and flight experiences relating to fault protection are provided.

  3. Methodology for Designing Fault-Protection Software

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin

    2006-01-01

    A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery, and has been successfully implemented in the Deep Impact spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notions of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion, categorized as no-opinion, acceptable, or unacceptable. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of a Symptom and its mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized, step-by-step fashion, relegating more system-level responses to later tiers. Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, a MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. The methodology is systematic and logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is established via a "top-down" fault-tree analysis and a "bottom-up" functional failure-modes-and-effects analysis. Through this process, the mitigation and recovery strategies for each Fault Containment Region scope the FP architecture in both width and depth.
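
    As a rough illustration of the Monitor -> Opinion -> Symptom -> tiered Response flow described above, the following sketch (with simplified, hypothetical names and thresholds; not the Deep Impact flight code) shows an opinion graduating into a symptom only after persistence filtering, and responses escalating by tier:

      from enum import Enum

      class Opinion(Enum):
          NO_OPINION = 0
          ACCEPTABLE = 1
          UNACCEPTABLE = 2

      def monitor(raw_value, low, high, valid):
          """A Monitor turns a raw measurement (RawOpinion) into an Opinion."""
          if not valid:                      # unusable data -> no opinion
              return Opinion.NO_OPINION
          return Opinion.ACCEPTABLE if low <= raw_value <= high else Opinion.UNACCEPTABLE

      def symptom_raised(opinions, persistence=3):
          """A Symptom (and hence an Alarm) is raised only after persistent unacceptable opinions."""
          recent = opinions[-persistence:]
          return len(recent) == persistence and all(o is Opinion.UNACCEPTABLE for o in recent)

      def respond(alarm_active, tier):
          """Tiered responses: local, device-level steps first, system-level action last."""
          if not alarm_active:
              return "no action"
          tiers = ["reset device", "swap to redundant device", "system safe mode"]
          return tiers[min(tier, len(tiers) - 1)]

      # Example: a temperature channel drifting out of its acceptable range.
      history = [monitor(t, low=-10.0, high=40.0, valid=True) for t in (20.0, 45.0, 46.0, 47.0)]
      alarm = symptom_raised(history)
      for tier in range(3):
          print(f"tier {tier}: {respond(alarm, tier)}")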

  4. MER surface fault protection system

    NASA Technical Reports Server (NTRS)

    Neilson, Tracy

    2005-01-01

    The Mars Exploration Rovers' surface fault protection design was influenced by the fact that the solar-powered rovers must recharge their batteries during the day to survive the night. The rovers needed to autonomously maintain thermal stability and initiate safe and reliable communication with orbiting assets or directly with Earth, all while maintaining energy balance. This paper will describe the system fault protection design for the surface phase of the mission.

  5. Staged-Fault Testing of Distance Protection Relay Settings

    NASA Astrophysics Data System (ADS)

    Havelka, J.; Malarić, R.; Frlan, K.

    2012-01-01

    In order to analyze the operation of the protection system during induced fault testing in the Croatian power system, a simulation using the CAPE software has been performed. The CAPE software (Computer-Aided Protection Engineering) is expert software intended primarily for relay protection engineers, which calculates current and voltage values during faults in the power system, so that relay protection devices can be properly set up. Once the accuracy of the simulation model had been confirmed, a series of simulations were performed in order to obtain the optimal fault location to test the protection system. The simulation results were used to specify the test sequence definitions for the end-to-end relay testing using advanced testing equipment with GPS synchronization for secondary injection in protection schemes based on communication. The objective of the end-to-end testing was to perform field validation of the protection settings, including verification of the circuit breaker operation, telecommunication channel time and the effectiveness of the relay algorithms. Once the end-to-end secondary injection testing had been completed, the induced fault testing was performed with three-end lines loaded and in service. This paper describes and analyses the test procedure, consisting of CAPE simulations, end-to-end test with advanced secondary equipment and staged-fault test of a three-end power line in the Croatian transmission system.

  6. The Curiosity Mars Rover's Fault Protection Engine

    NASA Technical Reports Server (NTRS)

    Benowitz, Ed

    2014-01-01

    The Curiosity Rover, currently operating on Mars, contains flight software onboard to autonomously handle aspects of system fault protection. Over 1000 monitors and 39 responses are present in the flight software. Orchestrating these behaviors is the flight software's fault protection engine. In this paper, we discuss the engine's design, responsibilities, and present some lessons learned for future missions.

  7. A Generalised Fault Protection Structure Proposed for Uni-grounded Low-Voltage AC Microgrids

    NASA Astrophysics Data System (ADS)

    Bui, Duong Minh; Chen, Shi-Lin; Lien, Keng-Yu; Jiang, Jheng-Lun

    2016-04-01

    This paper presents three main configurations of uni-grounded low-voltage AC microgrids. Transient situations of a uni-grounded low-voltage (LV) AC microgrid (MG) are simulated through various fault tests and operation transition tests between grid-connected and islanded modes. Based on the transient simulation results, available fault protection methods are proposed for main and back-up protection of a uni-grounded AC microgrid. In addition, the concept of a generalised fault protection structure for uni-grounded LVAC MGs is presented in the paper. The main contributions of the paper are: (i) definition of different uni-grounded LVAC MG configurations; (ii) analysis of the transient responses of a uni-grounded LVAC microgrid through line-to-line faults, line-to-ground faults, three-phase faults and a microgrid operation transition test; (iii) proposal of available fault protection methods for uni-grounded microgrids, such as non-directional or directional overcurrent protection, under/over voltage protection, differential current protection, voltage-restrained overcurrent protection, and other fault protection principles not based on phase currents and voltages (e.g. total harmonic distortion detection of currents and voltages, or the use of sequence components of current and voltage, such as 3I0 or 3V0 components); and (iv) development of a generalised fault protection structure with six individual protection zones suitable for different uni-grounded AC MG configurations.
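
    As an illustration of how some of the non-overcurrent criteria listed above can be combined at a single relay, the sketch below flags a fault from under/over-voltage, overcurrent, and voltage-THD checks; the thresholds and signal handling are illustrative assumptions, not values from the paper:

      import numpy as np

      def thd(samples, fs, f0=50.0, n_harmonics=10):
          """Total harmonic distortion of a sampled waveform (as a ratio)."""
          spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
          freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
          fund = spectrum[np.argmin(np.abs(freqs - f0))]
          harm = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(2, n_harmonics + 1)]
          return float(np.sqrt(np.sum(np.square(harm))) / fund)

      def fault_suspected(v_rms_pu, i_rms_pu, v_samples, fs,
                          v_low=0.88, v_high=1.10, i_pickup=1.2, thd_limit=0.08):
          """Trip flag combining under/over-voltage, overcurrent and voltage-THD criteria."""
          return (v_rms_pu < v_low or v_rms_pu > v_high      # under/over voltage
                  or i_rms_pu > i_pickup                     # overcurrent (may never appear with inverters)
                  or thd(v_samples, fs) > thd_limit)         # waveform distortion during the fault

      # Example: a sagged, distorted voltage with only a modest current rise.
      fs, f0 = 10_000, 50.0
      t = np.arange(0, 0.1, 1.0 / fs)
      v = 0.6 * np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 3 * f0 * t)
      print(fault_suspected(v_rms_pu=0.6, i_rms_pu=1.05, v_samples=v, fs=fs))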

  8. Advanced power system protection and incipient fault detection and protection of spaceborne power systems

    NASA Technical Reports Server (NTRS)

    Russell, B. Don

    1989-01-01

    This research concentrated on the application of advanced signal processing, expert system, and digital technologies for the detection and control of low grade, incipient faults on spaceborne power systems. The researchers have considerable experience in the application of advanced digital technologies and the protection of terrestrial power systems. This experience was used in the current contracts to develop new approaches for protecting the electrical distribution system in spaceborne applications. The project was divided into three distinct areas: (1) investigate the applicability of fault detection algorithms developed for terrestrial power systems to the detection of faults in spaceborne systems; (2) investigate the digital hardware and architectures required to monitor and control spaceborne power systems with full capability to implement new detection and diagnostic algorithms; and (3) develop a real-time expert operating system for implementing diagnostic and protection algorithms. Significant progress has been made in each of the above areas. Several terrestrial fault detection algorithms were modified to better adapt to spaceborne power system environments. Several digital architectures were developed and evaluated in light of the fault detection algorithms.

  9. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.

  10. Determining the Impact of Steady-State PV Fault Current Injections on Distribution Protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seuss, John; Reno, Matthew J.; Broderick, Robert Joseph

    This report investigates the fault current contribution from a single large PV system and the impact it has on existing distribution overcurrent protection devices. Assumptions are made about the modeling of the PV system under fault to perform exhaustive steady-state fault analyses throughout distribution feeder models. Each PV interconnection location is tested to determine how the size of the PV system affects the fault current measured by each protection device. This data is then searched for logical conditions that indicate whether a protection device has operated in a manner that will cause more customer outages due to the addition of the PV system. This is referred to as a protection issue, and four unique types of issues have been identified in the study. The PV system size at which any issue occurs is recorded to determine the feeder's PV hosting capacity limitations due to interference with protection settings. The analysis is carried out on six feeder models. The report concludes with a discussion of the prevalence and cause of each protection issue caused by PV system fault current.
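
    The exhaustive sweep described above can be pictured as nested loops over interconnection locations and PV sizes. In the sketch below the feeder data, fault study, and issue test are hypothetical stubs standing in for the report's detailed feeder models and protection checks:

      def run_fault_study(feeder, pv_bus, pv_size_kw):
          """Placeholder: return the fault current seen by each protection device (A)."""
          # A real study would re-solve the feeder model with the PV system in place.
          return {dev: 400.0 + 0.05 * pv_size_kw for dev in feeder["devices"]}

      def protection_issue(device, i_fault, baseline):
          """Placeholder: flag misoperation relative to the no-PV baseline."""
          return abs(i_fault - baseline[device]) / baseline[device] > 0.15

      def hosting_capacity(feeder, sizes_kw):
          """Smallest PV size at each bus that causes any protection issue (None if no issue)."""
          baseline = run_fault_study(feeder, pv_bus=None, pv_size_kw=0.0)
          limits = {}
          for bus in feeder["buses"]:                  # each candidate interconnection location
              for size in sizes_kw:                    # increasing PV system size
                  currents = run_fault_study(feeder, bus, size)
                  if any(protection_issue(d, i, baseline) for d, i in currents.items()):
                      limits[bus] = size               # first size that causes an issue
                      break
              else:
                  limits[bus] = None                   # no issue up to the largest size tested
          return limits

      feeder = {"buses": ["bus1", "bus2"], "devices": ["recloser_R1", "fuse_F3"]}
      print(hosting_capacity(feeder, sizes_kw=[500, 1000, 2000, 4000]))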

  11. Protection Relaying Scheme Based on Fault Reactance Operation Type

    NASA Astrophysics Data System (ADS)

    Tsuji, Kouichi

    The theories of operation of existing relays are roughly divided into two types: current differential relays based on Kirchhoff's first law, and impedance relays based on his second law. Kirchhoff's laws can be applied to formulate fault phenomena strictly, so the circuit equations are represented as nonlinear simultaneous equations in the variables fault point k and fault resistance Rf. This method has two defects: 1) a heavy computational burden for the iterative Newton-Raphson (N-R) calculation, and 2) relay operators cannot easily understand the principle of the numerical matrix operations. The new protection relay principle proposed in this paper focuses on the fact that the reactance component at the fault point is almost zero. Two reactances, Xf(S) and Xf(R), at both ends of a branch are calculated by solving linear equations. If the signs of Xf(S) and Xf(R) are not the same, it can be judged that the fault point lies within the branch. This reactance Xf corresponds to the difference in branch reactance between the actual fault point and an imaginary fault point. Relay engineers can therefore understand the fault location through the concept of "distance". Simulation results using this new method indicate highly precise estimation of fault locations compared with the inspected fault locations on operating transmission lines.
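
    A minimal single-phase sketch of the sign test is shown below, assuming the imaginary fault point is placed at each terminal of the branch so that Xf is the branch-reactance difference between the actual and imaginary fault points; the line data and measurements are illustrative, not taken from the paper:

      def apparent_reactance(v_terminal, i_terminal):
          """Reactance of the apparent impedance seen at one terminal (ohms)."""
          return (v_terminal / i_terminal).imag

      def fault_in_branch(x_hat, x_line):
          """Sign test: the fault lies inside the branch iff Xf(S) and Xf(R) differ in sign."""
          xf_s = x_hat            # actual fault coordinate minus imaginary point at S (x = 0)
          xf_r = x_hat - x_line   # actual fault coordinate minus imaginary point at R (x = X_line)
          return (xf_s > 0) != (xf_r > 0), xf_s, xf_r

      x_line = 0.8                          # branch reactance, S to R (ohms)
      i_s = 1000 + 0j                       # fault current phasor at terminal S (A)
      v_s = (0.05 + 0.30j) * i_s            # terminal voltage consistent with Z = 0.05 + j0.30 ohm
      x_hat = apparent_reactance(v_s, i_s)  # estimated fault coordinate from the S-end data
      internal, xf_s, xf_r = fault_in_branch(x_hat, x_line)
      print(f"x_hat={x_hat:.2f} ohm, Xf(S)={xf_s:.2f}, Xf(R)={xf_r:.2f}, internal fault={internal}")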

  12. MER Surface Phase; Blurring the Line Between Fault Protection and What is Supposed to Happen

    NASA Technical Reports Server (NTRS)

    Reeves, Glenn E.

    2008-01-01

    An assessment of the limitations of communication with the MER rovers and how such constraints drove the system design, flight software, and fault protection architecture, blurring the line between traditional fault protection and expected nominal behavior and requiring the most novel autonomous and semi-autonomous elements of the vehicle software, including communication, surface mobility, attitude knowledge acquisition, fault protection, and the activity arbitration service.

  13. Managing Space System Faults: Coalescing NASA's Views

    NASA Technical Reports Server (NTRS)

    Muirhead, Brian; Fesq, Lorraine

    2012-01-01

    Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet, recent studies have shown that the engineering "discipline" required to manage faults is not widely recognized nor evenly practiced within the NASA community. Attempts simply to name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults typically are unique to each organization, with little commonality in the architectures, processes and practices across the industry.

  14. Toward a Model-Based Approach to Flight System Fault Protection

    NASA Technical Reports Server (NTRS)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Development of approaches that enable modeling of FP concerns in the same model as the system hardware and software design enables establishment of formal relationships that have great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in FSW engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.

  15. A Novel Wide-Area Backup Protection Based on Fault Component Current Distribution and Improved Evidence Theory

    PubMed Central

    Zhang, Zhe; Kong, Xiangping; Yin, Xianggen; Yang, Zengli; Wang, Lijun

    2014-01-01

    In order to solve the problems of the existing wide-area backup protection (WABP) algorithms, the paper proposes a novel WABP algorithm based on the distribution characteristics of fault component current and improved Dempster/Shafer (D-S) evidence theory. When a fault occurs, slave substations transmit to the master substation the amplitudes of the fault component currents of the transmission lines closest to the faulted element. The master substation then identifies suspicious faulty lines according to the distribution characteristics of fault component current. After that, the master substation identifies the actual faulty line with improved D-S evidence theory, based on the action states of traditional protections and the directional components of these suspicious faulty lines. Simulation examples based on the IEEE 10-generator-39-bus system show that the proposed WABP algorithm has an excellent performance. The algorithm has a low requirement for sampling synchronization, small wide-area communication flow, and high fault tolerance. PMID:25050399
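
    The evidence-fusion step can be illustrated with a minimal Dempster-Shafer combination over candidate faulty lines; the frame of discernment and mass values below are made-up examples, not results from the paper:

      from itertools import product

      def dempster_combine(m1, m2):
          """Combine two mass functions whose focal elements are frozensets of line names."""
          combined, conflict = {}, 0.0
          for (a, ma), (b, mb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + ma * mb
              else:
                  conflict += ma * mb
          if conflict >= 1.0:
              raise ValueError("total conflict; evidence cannot be combined")
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      # Frame of discernment: three suspicious lines found from the fault component currents.
      # Evidence 1: action states of traditional protections; evidence 2: directional elements.
      L12, L23, L34 = frozenset({"L12"}), frozenset({"L23"}), frozenset({"L34"})
      theta = L12 | L23 | L34
      m_protection = {L23: 0.6, L12: 0.2, theta: 0.2}
      m_direction = {L23: 0.7, L34: 0.1, theta: 0.2}

      fused = dempster_combine(m_protection, m_direction)
      faulty = max(fused, key=fused.get)
      print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()}, "->", sorted(faulty))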

  16. Incipient fault detection and power system protection for spaceborne systems

    NASA Technical Reports Server (NTRS)

    Russell, B. Don; Hackler, Irene M.

    1987-01-01

    A program was initiated to study the feasibility of using advanced terrestrial power system protection techniques for spacecraft power systems. It was designed to enhance and automate spacecraft power distribution systems in the areas of safety, reliability and maintenance. The proposed power management/distribution system is described as well as security assessment and control, incipient and low current fault detection, and the proposed spaceborne protection system. It is noted that the intelligent remote power controller permits the implementation of digital relaying algorithms with both adaptive and programmable characteristics.

  17. Research on Fault Characteristics and Line Protections Within a Large-scale Photovoltaic Power Plant

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Zeng, Jie; Zhao, Wei; Zhong, Guobin; Xu, Qi; Luo, Pandian; Gu, Chenjie; Liu, Bohan

    2017-05-01

    Centralized photovoltaic (PV) systems have different fault characteristics from distributed PV systems due to their different system structures and controls. This makes the fault analysis and protection methods used in distribution networks with distributed PV unsuitable for a centralized PV power plant. Therefore, a consolidated expression for the fault current within a PV power plant under different controls was calculated, considering the fault response of the PV array. Then, supported by the fault current analysis and on-site testing data, the overcurrent relay (OCR) performance was evaluated in the collection system of an 850 MW PV power plant. The analysis reveals that the OCRs at the downstream side of overhead lines may malfunction. In this case, a new relay scheme was proposed using directional distance elements. In PSCAD/EMTDC, a detailed PV system model was built and verified using the on-site testing data. Simulation results indicate that the proposed relay scheme could effectively solve the problems under various fault scenarios and PV plant output levels.

  18. Space Station automated systems testing/verification and the Galileo Orbiter fault protection design/verification

    NASA Technical Reports Server (NTRS)

    Landano, M. R.; Easter, R. W.

    1984-01-01

    Aspects of Space Station automated systems testing and verification are discussed, taking into account several program requirements. It is found that these requirements lead to a number of issues and uncertainties which require study and resolution during the Space Station definition phase. Most, if not all, of the considered uncertainties have implications for the overall testing and verification strategy adopted by the Space Station Program. A description is given of the Galileo Orbiter fault protection design/verification approach. Attention is given to a mission description, an Orbiter description, the design approach and process, the fault protection design verification approach/process, and problems of 'stress' testing.

  19. Invariant protection of high-voltage electric motors of technological complexes at industrial enterprises at partial single-phase ground faults

    NASA Astrophysics Data System (ADS)

    Abramovich, B. N.; Sychev, Yu A.; Pelenev, D. N.

    2018-03-01

    The article presents the results of developing an invariant protection for high-voltage motors against incomplete single-phase ground faults. It is established that existing current protections have low selectivity of action because of an inadmissible decrease in input signals when the short circuit occurs through a transient resistance at the fault location. A structural-functional scheme and an algorithm of protective actions are developed in which the zero-sequence current signals of the protected connections are automatically corrected according to the degree of incompleteness of the ground fault. It is shown that automatic correction of the zero-sequence currents keeps the protection's sensitivity factor invariant under variation of the transient resistance at the point of damage. Application of the invariant protection minimizes damage in 6-10 kV electrical installations of industrial enterprises caused by interruption of consumers' power supply and system breakdown, owing to timely localization of ground-fault emergency modes.

  20. Fault Management Practice: A Roadmap for Improvement

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Oberhettinger, David

    2010-01-01

    Autonomous fault management (FM) is critical for deep space and planetary missions where the limited communication opportunities may prevent timely intervention by ground control. Evidence of pervasive architecture, design, and verification/validation problems with NASA FM engineering has been revealed both during technical reviews of spaceflight missions and in flight. These problems include FM design changes required late in the life-cycle, insufficient project insight into the extent of FM testing required, unexpected test results that require resolution, spacecraft operational limitations because certain functions were not tested, and in-flight anomalies and mission failures attributable to fault management. A recent NASA initiative has characterized the FM state-of-practice throughout the spacecraft development community and identified common NASA, DoD, and commercial concerns that can be addressed in the near term through the development of a FM Practitioner's Handbook and the formation of a FM Working Group. Initial efforts will focus on standardizing FM terminology, establishing engineering processes and tools, and training.

  1. The Fault Diverter - A Protective Device for High-Power Electron Tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, B.H.

    1957-08-01

    Fault diverters, or crowbars, have proven to be very effective protection against transient-induced power arcs within accelerator oscillator tubes. This device short circuits the oscillator-plate power supply in the event of an over-current, thus removing the power flow from the fault within a few microseconds. Ignitrons, thyratrons, and triggered spark gaps are used for this purpose. The power supply is protected from the short circuit either by a current-limiting device or a high-speed contactor which removes the system from the power lines within a few milliseconds. The fault diverters, and associated circuitry, used on several of the accelerators in Berkeley and Livermore are described. (auth)

  2. Fault Analysis in Solar Photovoltaic Arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown, at times, to prevent the fault current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. The other is fault evolution in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition" conditions. However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
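
    The central point that a current-limited PV string may never drive enough current to operate a series fuse can be shown with a back-of-the-envelope check; the short-circuit current, fuse rating, and clearing multiple below are illustrative assumptions, not values from the thesis:

      def pv_string_fault_current(isc_stc, irradiance_w_m2):
          """PV fault current scales roughly with irradiance and saturates near Isc."""
          return isc_stc * irradiance_w_m2 / 1000.0

      def fuse_clears(fault_current, fuse_rating, clearing_multiple=1.35):
          """Series fuses need well above rated current to operate quickly (multiple assumed)."""
          return fault_current >= clearing_multiple * fuse_rating

      isc_stc, fuse_rating = 9.0, 15.0           # string short-circuit current (A) and fuse rating (A)
      for irradiance in (1000, 400, 50):         # full sun, overcast, near dawn (W/m^2)
          i_fault = pv_string_fault_current(isc_stc, irradiance)
          print(f"{irradiance:4d} W/m^2: fault current {i_fault:4.1f} A, "
                f"fuse clears: {fuse_clears(i_fault, fuse_rating)}")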

  3. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.

  4. Development of the self-learning machine for creating models of microprocessor of single-phase earth fault protection devices in networks with isolated neutral voltage above 1000 V

    NASA Astrophysics Data System (ADS)

    Utegulov, B. B.; Utegulov, A. B.; Meiramova, S.

    2018-02-01

    The paper proposes the development of a self-learning machine for creating models of microprocessor-based single-phase earth fault protection devices in networks with an isolated neutral at voltages above 1000 V. Such a self-learning machine makes it possible to effectively implement mathematical models of the automatic adjustment of the settings of single-phase earth fault protection devices.

  5. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the basis for future regulations on the related hazard, similarly to other countries (e.g. the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e. able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria for assessing fault capability across the Italian territory. As for the chronology, the review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals, depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, detailed analysis of the large body of work dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g. exposed fault planes) or on indirect investigations (geophysical data) are insufficient, or even unreliable, for establishing the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. A test area in which active and capable faults can first be mapped on the basis of such a classical but still effective methodological approach is the central Apennines. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time

  6. Spacecraft fault tolerance: The Magellan experience

    NASA Technical Reports Server (NTRS)

    Kasuda, Rick; Packard, Donna Sexton

    1993-01-01

    Interplanetary and earth orbiting missions are now imposing unique fault tolerant requirements upon spacecraft design. Mission success is the prime motivator for building spacecraft with fault tolerant systems. The Magellan spacecraft had many such requirements imposed upon its design. Magellan met these requirements by building redundancy into all the major subsystem components and designing the onboard hardware and software with the capability to detect a fault, isolate it to a component, and issue commands to achieve a back-up configuration. This discussion is limited to fault protection, which is the autonomous capability to respond to a fault. The Magellan fault protection design is discussed, as well as the developmental and flight experiences and a summary of the lessons learned.

  7. On the design of fault-tolerant robotic manipulator systems

    NASA Technical Reports Server (NTRS)

    Tesar, Delbert

    1993-01-01

    Robotic systems are finding increasing use in space applications. Many of these devices are going to be operational on board the Space Station Freedom. Fault tolerance has been deemed necessary because of the criticality of the tasks and the inaccessibility of the systems to maintenance and repair. Design for fault tolerance in manipulator systems is an area within robotics that is without precedence in the literature. In this paper, we will attempt to lay down the foundations for such a technology. Design for fault tolerance demands new and special approaches to design, often at considerable variance from established design practices. These design aspects, together with reliability evaluation and modeling tools, are presented. Mechanical architectures that employ protective redundancies at many levels and have a modular architecture are then studied in detail. Once a mechanical architecture for fault tolerance has been derived, the chronological stages of operational fault tolerance are investigated. Failure detection, isolation, and estimation methods are surveyed, and such methods for robot sensors and actuators are derived. Failure recovery methods are also presented for each of the protective layers of redundancy. Failure recovery tactics often span all of the layers of a control hierarchy. Thus, a unified framework for decision-making and control, which orchestrates both the nominal redundancy management tasks and the failure management tasks, has been derived. The well-developed field of fault-tolerant computers is studied next, and some design principles relevant to the design of fault-tolerant robot controllers are abstracted. Conclusions are drawn, and a road map for the design of fault-tolerant manipulator systems is laid out with recommendations for a 10 DOF arm with dual actuators at each joint.

  8. Protecting your practice.

    PubMed

    Kerr, Joseph

    2015-01-01

    Dentists have much invested in their practices. They need to protect their practices from fraud and noncompliance. The author provides practical suggestions for how to significantly reduce the risk of fraud and theft in the practice without disrupting day-to-day operations. By adhering to nonintrusive policy and procedure changes, dental practice owners can reduce inherent risks of fraud and theft, while increasing financial and human resource knowledge regarding their practice. Practice owners with the appropriate policies and procedures benefit from significantly lower risk of loss from fraud and theft. Furthermore, they have a better understanding of their practice's finances and human resources. Copyright © 2015 American Dental Association. Published by Elsevier Inc. All rights reserved.

  9. The Consumer Protection Act: no-fault liability of health care providers.

    PubMed

    Slabbert, M Nöthling; Pepper, Michael S

    2011-11-01

    The introduction of no-fault or strict liability by the Consumer Protection Act 68 of 2008 (CPA) poses serious problems in the health care context. With a patient as a 'consumer' in terms of the CPA, health care practitioners may find themselves as 'suppliers' or 'retailers' as part of a supply chain, and potentially liable for harm and loss suffered by a patient in terms of the new no-fault liability provision. The claimant (patient) can sue anyone in the supply chain in terms of this provision, which places the health care practitioner who delivered the care in a very difficult position, as he or she is the most easily and often only identifiable person in the supply chain. Although the causal link between the harm suffered by the complainant will still need to be established on a balance of probabilities, the traditional common law obstacle requiring proof of negligence no longer applies. The article argues that this situation is unsatisfactory, as it places an increasingly onerous burden on certain health care practitioners.

  10. Novel WRM-based architecture of hybrid PON featuring online access and full-fiber-fault protection for smart grid

    NASA Astrophysics Data System (ADS)

    Li, Xingfeng; Gan, Chaoqin; Liu, Zongkang; Yan, Yuqi; Qiao, HuBao

    2018-01-01

    In this paper, a novel architecture of hybrid PON for smart grid is proposed by introducing a wavelength-routing module (WRM). Using conventional passive optical components, a WRM with M ports is designed. The symmetry and passivity of the WRM make it easy to integrate and very cheap in practice. Via the WRM, two types of network, based on different ONU-interconnection manners, can realize online access. Depending on optical switches and interconnecting fibers, full-fiber-fault protection and dynamic bandwidth allocation are realized in these networks. With the help of amplitude modulation, DPSK modulation and RSOA technology, wavelength triple-reuse is achieved. By injecting signals into the left and right branches of the access ring simultaneously, the transmission delay is decreased. Finally, the performance analysis and simulation of the network verify the feasibility of the proposed architecture.

  11. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
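
    A simplified sketch of the probabilistic summation described above is given below; the factored form (detection x isolation x response) and the numbers are illustrative assumptions rather than the paper's exact formulation:

      # Per-failure-mode effectiveness of fault management control loops,
      # weighted by failure probability. All values are made up for illustration.
      failure_modes = [
          # (name, probability of occurrence, detection coverage,
          #  isolation effectiveness, response effectiveness)
          ("thruster stuck open",  0.02, 0.99, 0.95, 0.90),
          ("star tracker dropout", 0.05, 0.95, 0.90, 0.98),
          ("memory SEU",           0.20, 0.90, 0.99, 0.99),
      ]

      def loop_effectiveness(detect, isolate, respond):
          """State estimation (detect, isolate) and state control (respond) act in series."""
          return detect * isolate * respond

      total_p = sum(p for _, p, *_ in failure_modes)
      protected = sum(p * loop_effectiveness(d, i, r) for _, p, d, i, r in failure_modes)
      print(f"probability-weighted FM effectiveness: {protected / total_p:.3f}")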

  12. Fault Injection Validation of a Safety-Critical TMR System

    NASA Astrophysics Data System (ADS)

    Irrera, Ivano; Madeira, Henrique; Zentai, Andras; Hergovics, Beata

    2016-08-01

    Digital systems and their software are the core technology for controlling and monitoring industrial systems in practically all activity domains. Functional safety standards such as the European standard EN 50128 for railway applications define the procedures and technical requirements for the development of software for railway control and protection systems. The validation of such systems is a highly demanding task. In this paper we discuss the use of fault injection techniques, which have been used extensively in several domains, particularly in the space domain, to complement the traditional procedures to validate a SIL (Safety Integrity Level) 4 system for railway signalling, implementing a TMR (Triple Modular Redundancy) architecture. The fault injection tool is based on JTAG technology. The results of our injection campaign showed a high degree of tolerance to most of the injected faults, but several cases of unexpected behaviour were also observed, helping to understand worst-case scenarios.
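
    The validation idea can be illustrated with a toy software model of a 2-out-of-3 voter exercised by injected bit flips; the real campaign injected faults into hardware channels through JTAG, so this is only a conceptual sketch with assumed fault distributions:

      import random

      def majority_vote(a, b, c):
          """Return the 2-out-of-3 value, or None if all three disagree."""
          if a == b or a == c:
              return a
          return b if b == c else None

      def channel(x, bitflip=None):
          """The computation each redundant channel performs, optionally corrupted."""
          result = x * 2 + 1
          return result ^ (1 << bitflip) if bitflip is not None else result

      random.seed(0)
      masked, not_masked = 0, 0
      for _ in range(10_000):
          x = random.randrange(1024)
          n_faulty = random.choice([0, 1, 1, 1, 2])            # mostly single faults, a few double
          faulty = set(random.sample(range(3), n_faulty))
          outs = [channel(x, bitflip=4 if i in faulty else None) for i in range(3)]
          if majority_vote(*outs) == channel(x):
              masked += 1                                      # voter hid the injected fault
          else:
              not_masked += 1                                  # unexpected behaviour worth investigating
      print(f"masked: {masked}, not masked: {not_masked}")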

  13. Cautious to a Fault: Self-Protection and the Trajectory of Marital Satisfaction

    PubMed Central

    Murray, Sandra L.; Holmes, John G.; Derrick, Jaye L.; Harris, Brianna; Griffin, Dale W.; Pinkus, Rebecca T.

    2012-01-01

    A contextual model of self-protection is proposed to explain when adhering to cautious “if-then” rules in daily interaction erodes marital satisfaction. People can self-protect against partner non-responsiveness by distancing when a partner seems rejecting, promoting a partner’s dependence when feeling unworthy, or by devaluing a partner in the face of costs. The model implies that being less trusting elicits self-protection, and that mismatches between self-protective practices and encountered risk accelerate declines in satisfaction. A longitudinal study of newlyweds revealed that the fit between self-protection practices and risk predicted declines in satisfaction over three years. When people self-protected more initially, satisfaction declined more in low-risk (i.e., low conflict, resilient partner) than high-risk relationships (i.e., high conflict, vulnerable partner). However, when people self-protected less initially, satisfaction declined more in high-risk than low-risk relationships. Process evidence was consistent with moderated mediation: In low-risk relationships only, being less trusting predicted higher levels of self-protective caution that forecast later declines in satisfaction. PMID:25013236

  14. Algorithm-Based Fault Tolerance Integrated with Replication

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Rennels, David

    2008-01-01

    In a proposed approach to programming and utilization of commercial off-the-shelf computing equipment, a combination of algorithm-based fault tolerance (ABFT) and replication would be utilized to obtain high degrees of fault tolerance without incurring excessive costs. The basic idea of the proposed approach is to integrate ABFT with replication such that the algorithmic portions of computations would be protected by ABFT, and the logical portions by replication. ABFT is an extremely efficient, inexpensive, high-coverage technique for detecting and mitigating faults in computer systems used for algorithmic computations, but does not protect against errors in logical operations surrounding algorithms.
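
    The classic checksum-augmented matrix multiplication illustrates how the algorithmic portion can be protected cheaply; this is a generic textbook ABFT sketch, not the specific scheme proposed in the report:

      import numpy as np

      def checksum_matmul(A, B):
          """Compute the product with an extra checksum row and column appended."""
          A_c = np.vstack([A, A.sum(axis=0)])                      # column-checksum row
          B_r = np.hstack([B, B.sum(axis=1, keepdims=True)])       # row-checksum column
          return A_c @ B_r                                          # (m+1) x (n+1) result

      def checksums_ok(C_full, tol=1e-9):
          """Verify that the data block still matches its checksum row and column."""
          C = C_full[:-1, :-1]
          return (np.allclose(C_full[-1, :-1], C.sum(axis=0), atol=tol) and
                  np.allclose(C_full[:-1, -1], C.sum(axis=1), atol=tol))

      rng = np.random.default_rng(1)
      A, B = rng.random((4, 3)), rng.random((3, 5))
      C_full = checksum_matmul(A, B)
      print("clean run passes check:", checksums_ok(C_full))
      C_full[1, 2] += 0.5                                           # inject a transient error
      print("injected error detected:", not checksums_ok(C_full))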

  15. A real-time, practical sensor fault-tolerant module for robust EMG pattern recognition.

    PubMed

    Zhang, Xiaorong; Huang, He

    2015-02-19

    Unreliability of surface EMG recordings over time is a challenge for applying the EMG pattern recognition (PR)-controlled prostheses in clinical practice. Our previous study proposed a sensor fault-tolerant module (SFTM) by utilizing redundant information in multiple EMG signals. The SFTM consists of multiple sensor fault detectors and a self-recovery mechanism that can identify anomaly in EMG signals and remove the recordings of the disturbed signals from the input of the pattern classifier to recover the PR performance. While the proposed SFTM has shown great promise, the previous design is impractical. A practical SFTM has to be fast enough, lightweight, automatic, and robust under different conditions with or without disturbances. This paper presented a real-time, practical SFTM towards robust EMG PR. A novel fast LDA retraining algorithm and a fully automatic sensor fault detector based on outlier detection were developed, which allowed the SFTM to promptly detect disturbances and recover the PR performance immediately. These components of SFTM were then integrated with the EMG PR module and tested on five able-bodied subjects and a transradial amputee in real-time for classifying multiple hand and wrist motions under different conditions with different disturbance types and levels. The proposed fast LDA retraining algorithm significantly shortened the retraining time from nearly 1 s to less than 4 ms when tested on the embedded system prototype, which demonstrated the feasibility of a nearly "zero-delay" SFTM that is imperceptible to the users. The results of the real-time tests suggested that the SFTM was able to handle different types of disturbances investigated in this study and significantly improve the classification performance when one or multiple EMG signals were disturbed. In addition, the SFTM could also maintain the system's classification performance when there was no disturbance. This paper presented a real-time, lightweight, and automatic
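
    The two SFTM ingredients described above, a per-channel sensor fault detector and retraining of the LDA classifier on the remaining channels, can be sketched as follows; the synthetic data, features, and z-score threshold are assumptions for illustration only:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      n_channels, n_classes, n_per_class = 6, 4, 200

      # Synthetic training features: one RMS-like feature per EMG channel per motion class.
      X_train = np.vstack([rng.normal(loc=c, scale=0.3, size=(n_per_class, n_channels))
                           for c in range(n_classes)])
      y_train = np.repeat(np.arange(n_classes), n_per_class)
      channel_mean, channel_std = X_train.mean(axis=0), X_train.std(axis=0)

      def disturbed_channels(x, z_threshold=4.0):
          """Outlier-based sensor fault detector: flag channels far from the training statistics."""
          return np.where(np.abs((x - channel_mean) / channel_std) > z_threshold)[0]

      def retrain_without(channels_to_drop):
          """Recovery: retrain the LDA classifier using only the healthy channels."""
          keep = [c for c in range(n_channels) if c not in set(channels_to_drop.tolist())]
          return LinearDiscriminantAnalysis().fit(X_train[:, keep], y_train), keep

      # A test sample from motion class 2 with channel 0 corrupted by a large artifact.
      x = rng.normal(loc=2.0, scale=0.3, size=n_channels)
      x[0] += 10.0
      bad = disturbed_channels(x)
      clf, keep = retrain_without(bad)
      print("disturbed channels:", bad, "-> predicted class:", clf.predict(x[keep].reshape(1, -1)))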

  16. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing was continued as an efficient mechanism for removal of uncorrelated faults and common-cause faults of variable span. Work on software reliability estimation methods based on non-random sampling was also continued, along with study of the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance testing scheme.

  17. Alternative validation practice of an automated faulting measurement method.

    DOT National Transportation Integrated Search

    2010-03-08

    A number of states have adopted profiler based systems to automatically measure faulting, : in jointed concrete pavements. However, little published work exists which documents the : validation process used for such automated faulting systems. This p...

  18. Fault Analysis and Detection in Microgrids with High PV Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Khatib, Mohamed; Hernandez Alvidrez, Javier; Ellis, Abraham

    In this report we focus on analyzing the behaviour of current-controlled PV inverters under faults in order to develop fault detection schemes for microgrids with high PV penetration. An inverter model suitable for steady-state fault studies is presented and the impact of PV inverters on two protection elements is analyzed. The studied protection elements are the superimposed-quantities-based directional element and the negative-sequence directional element. Additionally, several non-overcurrent fault detection schemes are discussed in this report for microgrids with high PV penetration. A detailed time-domain simulation study is presented to assess the performance of the presented fault detection schemes under different microgrid modes of operation.

  19. Reliability Improvement of Ground Fault Protection System Using an S-Type Horn Attachment Gap in AC Feeding System

    NASA Astrophysics Data System (ADS)

    Ajiki, Kohji; Morimoto, Hiroaki; Shimokawa, Fumiyuki; Sakai, Shinya; Sasaki, Kazuomi; Sato, Ryogo

    Contact wires used in the feeding system of an electric railroad are insulated by insulators. However, the insulation of an insulator sometimes breaks down because of dirt on its surface or contact with a bird. An insulator breakdown leads to a ground fault in the feeding system, which can cause electric shock to people and destruction of low-voltage electric equipment. To prevent damage from ground faults, S-type horns have until now been installed on the insulators of the negative feeder and the protective wire. However, a concrete pole can break down at the time of a ground fault because the spark-over voltage of the S-type horn is higher than the breakdown voltage of the concrete pole. Further, an S-type horn installed on a steel tube pole may fail to discharge, because the earth resistance of a steel tube pole is very small. We assumed that these problems could be solved by changing the power-frequency spark-over voltage of the S-type horn from 12kV to 3kV. Accordingly, we developed an attachment gap for this purpose, consisting of a gas gap arrester and a zinc oxide element. Through a dynamic current test and an artificial ground fault test, we confirmed that the attachment gap applied to the S-type horn could prevent these troubles at the time of a ground fault.

  20. Practical Methods for Estimating Software Systems Fault Content and Location

    NASA Technical Reports Server (NTRS)

    Nikora, A.; Schneidewind, N.; Munson, J.

    1999-01-01

    Over the past several years, we have developed techniques to discriminate between fault-prone software modules and those that are not, to estimate a software system's residual fault content, to identify those portions of a software system having the highest estimated number of faults, and to estimate the effects of requirements changes on software quality.

  1. Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elkhatib, Mohamed; Ellis, Abraham; Milan Biswal

    Keywords: Microgrid Protection, Impedance Relay, Signal Processing-based Fault Detection, Networked Microgrids, Communication-Assisted Protection. In this report we address the challenge of designing an efficient protection system for inverter-dominated microgrids. These microgrids are characterised by limited fault current capacity as a result of the current-limiting protection functions of inverters. Typically, inverters limit their fault contribution within a sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection could fail completely to detect faults in inverter-dominated microgrids. As part of this project a detailed literature survey of existing and proposed microgrid protection schemes was conducted. The survey concluded that there is a gap in the available microgrid protection methods. The only credible protection solution available in the literature for low-fault inverter-dominated microgrids is the differential protection scheme, which represents a robust transmission-grade protection solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project: impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance-based protection schemes were developed. The first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a Central Protection Unit that communicates with all feeder relays to locate the fault based on directional flags received from the feeder relays. The latter approach could potentially be adapted to protect networked microgrids and dynamic-topology microgrids. Transient-based protection relies on analyzing high-frequency transients to detect and locate faults. This approach is very promising but its implementation in the field faces several challenges. For example, high frequency transients

  2. Protection of Renewable-dominated Microgrids: Challenges and Potential Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elkhatib, Mohamed; Ellis, Abraham; Biswal, Milan

    In this report we address the challenge of designing an efficient protection system for inverter-dominated microgrids. These microgrids are characterised by limited fault current capacity as a result of the current-limiting protection functions of inverters. Typically, inverters limit their fault contribution within a sub-cycle time frame to as low as 1.1 per unit. As a result, overcurrent protection could fail completely to detect faults in inverter-dominated microgrids. As part of this project a detailed literature survey of existing and proposed microgrid protection schemes was conducted. The survey concluded that there is a gap in the available microgrid protection methods. The only credible protection solution available in the literature for low-fault inverter-dominated microgrids is the differential protection scheme, which represents a robust transmission-grade protection solution but at a very high cost. Two non-overcurrent protection schemes were investigated as part of this project: impedance-based protection and transient-based protection. Impedance-based protection depends on monitoring impedance trajectories at feeder relays to detect faults. Two communication-based impedance-based protection schemes were developed. The first scheme utilizes directional elements and pilot signals to locate the fault. The second scheme depends on a Central Protection Unit that communicates with all feeder relays to locate the fault based on directional flags received from the feeder relays. The latter approach could potentially be adapted to protect networked microgrids and dynamic-topology microgrids. Transient-based protection relies on analyzing high-frequency transients to detect and locate faults. This approach is very promising but its implementation in the field faces several challenges. For example, high-frequency transients due to faults can be confused with transients due to other events such as capacitor switching. Additionally, while detecting faults by analyzing
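
    The Central Protection Unit concept can be pictured as a simple lookup over directional flags reported by the feeder relays; the topology, relay names, and flag semantics below are simplified assumptions, not the report's implementation:

      # Each line section is bounded by two relays; a directional flag of +1 means the
      # relay sees the fault in the direction pointing into that section.
      SECTIONS = {
          "sec_AB": ("relay_A_fwd", "relay_B_rev"),
          "sec_BC": ("relay_B_fwd", "relay_C_rev"),
          "sec_CD": ("relay_C_fwd", "relay_D_rev"),
      }

      def locate_fault(directional_flags):
          """Return sections whose two bounding relays both point into the section."""
          return [sec for sec, (r1, r2) in SECTIONS.items()
                  if directional_flags.get(r1) == +1 and directional_flags.get(r2) == +1]

      # Example: a fault on sec_BC -- the relays at both of its ends point inwards,
      # while the remote relays only see power flowing toward the fault.
      flags = {"relay_A_fwd": +1, "relay_B_rev": -1,
               "relay_B_fwd": +1, "relay_C_rev": +1,
               "relay_C_fwd": -1, "relay_D_rev": -1}
      print("faulted section(s):", locate_fault(flags))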

  3. Experimental investigation into the fault response of superconducting hybrid electric propulsion electrical power system to a DC rail to rail fault

    NASA Astrophysics Data System (ADS)

    Nolan, S.; Jones, C. E.; Munro, R.; Norman, P.; Galloway, S.; Venturumilli, S.; Sheng, J.; Yuan, W.

    2017-12-01

    Hybrid electric propulsion aircraft are proposed to improve overall aircraft efficiency, enabling future rising demands for air travel to be met. The development of appropriate electrical power systems to provide thrust for the aircraft is a significant challenge due to the much higher required power generation capacity levels and the complexity of the aero-electrical power systems (AEPS). The efficiency and weight of the AEPS are critical to ensure that the benefits of hybrid propulsion are not mitigated by the electrical power train. Hence it is proposed that for larger aircraft (~200 passengers) superconducting power systems are used to meet target power densities. Central to the design of the hybrid propulsion AEPS is a robust and reliable electrical protection and fault management system. It is known from previous studies that the choice of protection system may have a significant impact on the overall efficiency of the AEPS. Hence an informed design process which considers the key trades between choice of cable and protection requirements is needed. To date, the response of a superconducting power system to a rail-to-rail fault on a voltage-source-converter-interfaced DC link has only been investigated using simulation models validated by theoretical values from the literature. This paper will present the experimentally obtained fault response of a variety of different types of superconducting tape for a rail-to-rail DC fault. The paper will then use these results as a platform to identify key trades between protection requirements and cable design, providing guidelines to enable future informed decisions to optimise hybrid propulsion electrical power system and protection design.

  4. A Practical Study of the 66kV Fault Current Limiter (FCL) System with Rectifier

    NASA Astrophysics Data System (ADS)

    Tokuda, Noriaki; Matsubara, Yoshio; Yuguchi, Kyosuke; Ohkuma, Takeshi; Hobara, Natsuro; Takahashi, Yoshihisa

    A fault current limiter (FCL) is widely expected to suppress fault currents, particularly in trunk power systems with heavily interconnected high-voltage transmission lines, such as the 500kV class systems that constitute the nucleus of the electric power network. By installing such an FCL in the power system, system interconnection becomes possible without the need to raise the capacity of the circuit breakers, and facilities can be configured more efficiently, among other benefits. For these reasons, fault current limiters based on various principles of operation have been developed both in Japan and abroad. In this paper, we propose a new type of FCL system, consisting of solid-state diodes, a DC coil and a bypass AC coil, and describe the specifications of the distribution power system and of a 66kV model for an island power system and a superconducting cable power system. We also present a practical study of the 66kV class, covering the test items and future subjects of the rectifier-type FCL system.

  5. On the Adaptive Protection of Microgrids: A Review on How to Mitigate Cyber Attacks and Communication Failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Hany F; Lashway, Christopher R; Mohammed, Osama A

    One main challenge in the practical implementation of a microgrid is the design of an adequate protection scheme in both grid-connected and islanded modes. Conventional overcurrent protection schemes face selectivity and sensitivity issues during grid and microgrid faults since the fault current level is different in both cases for the same relay. Various approaches have been implemented in the past to deal with this problem, yet the most promising ones are the implementation of adaptive protection techniques abiding by the IEC 61850 communication standard. This paper presents a critical review of existing adaptive protection schemes, the technical challenges for the use of classical protection techniques, and the need for an adaptive, smart protection system. However, the risk of communication link failures and cyber security threats remains a challenge in implementing a reliable adaptive protection scheme. A contingency is needed where a communication issue prevents the relay from adjusting to a lower current level during islanded mode. An adaptive protection scheme is proposed that utilizes energy storage (ES) and hybrid ES (HESS) already available in the network as a mechanism to source the higher fault current. Four common grid ES and HESS are reviewed for their suitability in feeding the fault while some solutions are proposed.
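
    The adaptive idea reviewed in the record above can be sketched in a few lines: a feeder relay switches its overcurrent pickup between grid-connected and islanded settings when a mode message arrives (for example over IEC 61850 communication), and falls back to the most sensitive setting if the communication link is lost. The settings, names, and fallback rule below are illustrative assumptions, not values from the paper.

      # Hypothetical pickup settings; an actual scheme would derive these from
      # fault studies of the specific microgrid.
      SETTINGS = {
          "grid_connected": {"pickup_A": 800.0},   # utility supplies high fault current
          "islanded":       {"pickup_A": 150.0},   # inverter-limited fault current
      }

      class AdaptiveOvercurrentRelay:
          def __init__(self):
              self.mode = "grid_connected"

          def on_mode_message(self, mode, comms_ok=True):
              # If the communication link is down, fall back to the islanded
              # (most sensitive) setting so in-zone faults are still detected.
              self.mode = mode if comms_ok else "islanded"

          def trip(self, measured_current_A):
              return measured_current_A > SETTINGS[self.mode]["pickup_A"]

      relay = AdaptiveOvercurrentRelay()
      relay.on_mode_message("islanded")
      print(relay.trip(220.0))   # True: 220 A exceeds the islanded pickup of 150 A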

  6. Fault tolerance in space-based digital signal processing and switching systems: Protecting up-link processing resources, demultiplexer, demodulator, and decoder

    NASA Technical Reports Server (NTRS)

    Redinbo, Robert

    1994-01-01

    Fault tolerance features in the first three major subsystems appearing in the next generation of communications satellites are described. These satellites will contain extensive but efficient high-speed processing and switching capabilities to support the low signal strengths associated with very small aperture terminals. The terminals' numerous data channels are combined through frequency division multiplexing (FDM) on the up-links and are protected individually by forward error-correcting (FEC) binary convolutional codes. The front-end processing resources, demultiplexer, demodulators, and FEC decoders extract all data channels which are then switched individually, multiplexed, and remodulated before retransmission to earth terminals through narrow beam spot antennas. Algorithm based fault tolerance (ABFT) techniques, which relate real number parity values with data flows and operations, are used to protect the data processing operations. The additional checking features utilize resources that can be substituted for normal processing elements when resource reconfiguration is required to replace a failed unit.
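
    The algorithm-based fault tolerance idea mentioned above can be illustrated with the classic real-number checksum scheme for a matrix product: checksum rows and columns are carried through the computation and re-verified afterwards, so a corrupted output element is detected without duplicating the whole computation. This is a generic ABFT sketch, not the satellite processors' actual implementation; sizes and tolerance are arbitrary.

      import numpy as np

      def abft_matmul(A, B, tol=1e-8):
          # Augment A with a column-sum row and B with a row-sum column; the
          # checksums propagate through the product and can be re-verified.
          Ac = np.vstack([A, A.sum(axis=0)])
          Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
          C = Ac @ Br
          data = C[:-1, :-1]
          row_ok = np.allclose(C[-1, :-1], data.sum(axis=0), atol=tol)
          col_ok = np.allclose(C[:-1, -1], data.sum(axis=1), atol=tol)
          return data, (row_ok and col_ok)

      A = np.random.rand(4, 3)
      B = np.random.rand(3, 5)
      C, ok = abft_matmul(A, B)
      print(ok, np.allclose(C, A @ B))   # True True when no fault is injected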

  7. Predictors of Sun-Protective Practices among Iranian Female College Students: Application of Protection Motivation Theory.

    PubMed

    Dehbari, Samaneh Rooshanpour; Dehdari, Tahereh; Dehdari, Laleh; Mahmoudi, Maryam

    2015-01-01

    Given the importance of sun protection in the prevention of skin cancer, this study was designed to determine predictors of sun-protective practices among a sample of Iranian female college students based on protection motivation theory (PMT) variables. In this cross-sectional study, a total of 201 female college students at Iran University of Medical Sciences were selected. Demographic and PMT variables were assessed with a 67-item questionnaire. Multiple linear regression was used to identify demographic and PMT variables that were associated with sun-protective practices and intention. One percent of participants always wore a hat with a brim, 3.5% always wore gloves, and 15.9% always wore sunglasses while outdoors. Only 10.9% regularly had their skin checked by a doctor. Perceived rewards, response efficacy, fear, self-efficacy and marital status were the five variables that predicted 39% of the variance in participants' intention to perform sun-protective practices. In addition, intention and response cost explained 31% of the variance in sun-protective practices. These predictive variables may be used to develop theory-based educational interventions to prevent skin cancer among college students.

  8. 30 CFR 75.814 - Electrical protection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... protection must not be dependent upon control power and may consist of a current transformer and overcurrent... restarting of the equipment. (b) Current transformers used for the ground-fault protection specified in... series with ground-fault current transformers. (c) Each ground-fault current device specified in...

  9. 30 CFR 75.814 - Electrical protection.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... protection must not be dependent upon control power and may consist of a current transformer and overcurrent... restarting of the equipment. (b) Current transformers used for the ground-fault protection specified in... series with ground-fault current transformers. (c) Each ground-fault current device specified in...

  10. 30 CFR 75.814 - Electrical protection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... protection must not be dependent upon control power and may consist of a current transformer and overcurrent... restarting of the equipment. (b) Current transformers used for the ground-fault protection specified in... series with ground-fault current transformers. (c) Each ground-fault current device specified in...

  11. 30 CFR 75.814 - Electrical protection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... protection must not be dependent upon control power and may consist of a current transformer and overcurrent... restarting of the equipment. (b) Current transformers used for the ground-fault protection specified in... series with ground-fault current transformers. (c) Each ground-fault current device specified in...

  12. 30 CFR 75.814 - Electrical protection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... protection must not be dependent upon control power and may consist of a current transformer and overcurrent... restarting of the equipment. (b) Current transformers used for the ground-fault protection specified in... series with ground-fault current transformers. (c) Each ground-fault current device specified in...

  13. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

    The extraction of fault features and diagnostic techniques for reciprocating compressors are currently among the most active research topics in the field of reciprocating machinery fault diagnosis. A large number of feature extraction and classification methods have been widely applied in the related research, but practical fault alarming and diagnostic accuracy have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarming and automatic diagnosis in practical engineering is therefore an urgent task. The typical mechanical faults of reciprocating compressors are presented in the paper, and existing data from an online monitoring system are used to extract 15 types of fault feature parameters in total; the sensitive relationships between faults and feature parameters are clarified using a distance evaluation technique, and sensitive characteristic parameters for different faults are obtained. On this basis, a method based on sensitive fault feature parameters and a support vector machine (SVM) is developed and applied to practical fault diagnosis. Improved early fault warning has been demonstrated by experiments and practical fault cases, and automatic SVM classification of the alarm data achieves better diagnostic accuracy.
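
    The classification stage described above can be sketched as follows: previously selected sensitive feature parameters are fed to a support vector machine that assigns fault classes. The feature matrix here is synthetic placeholder data; only the pipeline structure reflects the approach described in the record.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 5))        # 5 sensitive feature parameters (placeholder)
      y = rng.integers(0, 3, size=300)     # 3 fault classes (placeholder labels)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
      clf.fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))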

  14. Protection: clarifying the concept for use in nursing practice.

    PubMed

    Lorenz, Susan G

    2007-01-01

    The protection of patients is integral in any healthcare setting. Healthcare organizations are increasingly held accountable for preventable medical errors, attitudes toward safety, communication among all levels of providers, collaborative practices, and recognition of risks. The concept of protection is inherent in nursing practice. It provides a framework that further defines healthcare providers' roles in meeting these imperatives. The scope of protection is considered at both the global and the individual level. Nurses protect patients from environmental hazards, from themselves, and from any perceived threat. In this analysis of the phenomenon, the concept is clarified, and an evidence-based approach to protection is utilized for theory development and concept measurement.

  15. Characterization of the faulted behavior of digital computers and fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Miner, Paul S.

    1989-01-01

    A development status evaluation is presented for efforts conducted at NASA-Langley since 1977, toward the characterization of the latent fault in digital fault-tolerant systems. Attention is given to the practical, high speed, generalized gate-level logic system simulator developed, as well as to the validation methodology used for the simulator, on the basis of faultable software and hardware simulations employing a prototype MIL-STD-1750A processor. After validation, latency tests will be performed.

  16. NASA Spacecraft Fault Management Workshop Results

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn; McDougal, John; Barley, Bryan; Fesq, Lorraine; Stephens, Karen

    2010-01-01

    Fault Management is a critical aspect of deep-space missions. For the purposes of this paper, fault management is defined as the ability of a system to detect, isolate, and mitigate events that impact, or have the potential to impact, nominal mission operations. The fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and

  17. No-Fault Malpractice Insurance

    PubMed Central

    Bush, J. W.; Chen, M. M.; Bush, A. S.

    1975-01-01

    No-fault medical malpractice insurance has been proposed as an alternative to the present tort liability approach. Statistical examination of the concept of proximate cause reveals not only that the question of acceptable care, and therefore of fault, is unavoidable in identifying patients deserving compensation, but also that specifying fault in an individual case is scientifically untenable. A simple formula for a Coefficient of Causality clarifies the question of proximate cause in existing trial practices and suggests that many of the threats associated with malpractice suits arise from the structure of the tort-insurance system rather than from professional responsibility for medical injury. The concepts could provide the basis for a revised claims and compensation procedure. PMID:1146300

  18. Cassini Attitude and Articulation Control Subsystem Fault Protection Challenges During Saturn Proximal Orbits

    NASA Technical Reports Server (NTRS)

    Bates, David M.

    2015-01-01

    NASA's Cassini spacecraft, launched on October 15, 1997 and arriving at Saturn on June 30, 2004, is the largest and most ambitious interplanetary spacecraft in history. As the first spacecraft to achieve orbit at Saturn, Cassini has collected science data throughout its four-year prime mission (2004-08), and has since been approved for a first and second extended mission through 2017. As part of the final extended mission, Cassini will begin an aggressive and exciting campaign of high-inclination, low-altitude flybys within the innermost rings of Saturn, skimming Saturn's outer atmosphere, until the spacecraft is finally disposed of via planned impact with the planet. This final campaign, known as the proximal orbits, presents unique fault protection related challenges, the details of which are discussed in this paper.

  19. Workplace exposures and protective practices of Hispanic warehouse workers.

    PubMed

    Livaudais, Jennifer C; Thompson, Beti; Islas, Ilda; Ibarra, Genoveva; Anderson, Jennifer; Coronado, Gloria D

    2009-04-01

    This study was undertaken to assess workplace hazards and protective practices among Hispanic men and women working post-harvest in asparagus, apple and pear packaging warehouses. Three focus groups were conducted in July 2003 with 25 workers (20 women, 5 men) recruited from communities in the Yakima Valley, Washington. Focus group content informed the design of an in-person structured interview administered to 50 additional warehouse workers from August to November 2006. Focus group participants reported difficult working conditions, exposure to chemicals, adverse health effects and use of work and home protective practices to minimize exposures for themselves and their families. Structured interview participants reported few workplace exposures to chemicals although many reported engaging in workplace and home protective practices. Findings from this research can direct initial efforts to determine if and how interventions for warehouse workers may be designed to protect against hazardous workplace exposures.

  20. Use of Fuzzy Logic Systems for Assessment of Primary Faults

    NASA Astrophysics Data System (ADS)

    Petrović, Ivica; Jozsa, Lajos; Baus, Zoran

    2015-09-01

    In electric power systems, grid elements are often subjected to very complex and demanding disturbances or dangerous operating conditions. Determining the initial fault or the cause of those states is a difficult task. When a fault occurs, it is often imperative to disconnect the affected grid element from the grid. This paper contains an overview of possibilities for using fuzzy logic in the assessment of primary faults in the transmission grid. The tool for this task is the SCADA system, which provides information on currents, voltages, protection device events and circuit breaker status in the grid. A functional model described with membership functions and fuzzy logic systems is presented in the paper. As input data, the diagnostic system uses information on protection device tripping, circuit breaker states, and measurements of currents and voltages before and after faults.
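
    The fuzzy reasoning outlined above can be illustrated with a toy rule that combines crisp SCADA observations (relay trip, breaker state, voltage dip) into a confidence that a grid element has a primary fault. The membership shapes and the single min-rule are assumptions for illustration only, not the paper's model.

      def ramp(x, low, high):
          """Membership rising linearly from 0 at `low` to 1 at `high`."""
          if x <= low:
              return 0.0
          if x >= high:
              return 1.0
          return (x - low) / (high - low)

      def primary_fault_confidence(relay_tripped, breaker_open, voltage_dip_pu):
          mu_relay = 1.0 if relay_tripped else 0.0
          mu_breaker = 1.0 if breaker_open else 0.0
          mu_dip = ramp(voltage_dip_pu, 0.1, 0.5)   # deeper dip -> stronger evidence
          # Rule: IF relay tripped AND breaker open AND large voltage dip
          #       THEN the element has a primary fault (AND realised as min).
          return min(mu_relay, mu_breaker, mu_dip)

      print(primary_fault_confidence(True, True, 0.4))   # about 0.75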

  1. Series and parallel arc-fault circuit interrupter tests.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Dean; Fresquez, Armando J.; Gudgel, Bob

    2013-07-01

    While the 2011 National Electrical Code (NEC) only requires series arc-fault protection, some arc-fault circuit interrupter (AFCI) manufacturers are designing products to detect and mitigate both series and parallel arc-faults. Sandia National Laboratories (SNL) has extensively investigated the electrical differences of series and parallel arc-faults and has offered possible classification and mitigation solutions. As part of this effort, Sandia National Laboratories has collaborated with MidNite Solar to create and test a 24-string combiner box with an AFCI which detects, differentiates, and de-energizes series and parallel arc-faults. In the case of the MidNite AFCI prototype, series arc-faults are mitigated by opening the PV strings, whereas parallel arc-faults are mitigated by shorting the array. A range of different experimental series and parallel arc-fault tests with the MidNite combiner box were performed at the Distributed Energy Technologies Laboratory (DETL) at SNL in Albuquerque, NM. In all the tests, the prototype de-energized the arc-faults in the time period required by the arc-fault circuit interrupt testing standard, UL 1699B. The experimental tests confirm series and parallel arc-faults can be successfully mitigated with a combiner box-integrated solution.

  2. Analysis on Behaviour of Wavelet Coefficient during Fault Occurrence in Transformer

    NASA Astrophysics Data System (ADS)

    Sreewirote, Bancha; Ngaopitakkul, Atthapol

    2018-03-01

    The protection system for transformers plays a significant role in avoiding severe equipment damage when disturbances occur and in ensuring overall system reliability. One methodology widely used in protection schemes and algorithms is the discrete wavelet transform. However, the characteristics of the wavelet coefficients under fault conditions must be analyzed to ensure its effectiveness. This paper therefore presents a study and analysis of wavelet coefficient characteristics when a fault occurs in a transformer, in both the high- and low-frequency components obtained from the discrete wavelet transform. The effects of internal and external faults on the wavelet coefficients of both the faulted and the healthy phases are taken into consideration. The fault signals were simulated using a laboratory-scale experimental setup of a transmission line connected to a transformer, modelled after an actual system. The results, in terms of wavelet coefficients, show a clear difference between the wavelet characteristics in the high- and low-frequency components, which can be used to further design and improve detection and classification algorithms based on the discrete wavelet transform methodology in the future.
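
    The coefficient inspection described above can be reproduced in outline with PyWavelets: decompose a (synthetic) current signal and compare the energy of the finest-scale detail coefficients before and after a simulated fault instant. The db4 wavelet, sampling rate, and signal model are illustrative assumptions rather than the paper's experimental settings.

      import numpy as np
      import pywt

      fs = 10_000                               # samples per second (assumed)
      t = np.arange(0, 0.2, 1 / fs)
      current = np.sin(2 * np.pi * 50 * t)      # 50 Hz load current
      fault_at = len(t) // 2
      decay = np.exp(-200 * (t[fault_at:] - t[fault_at]))
      current[fault_at:] += 3.0 * decay * np.sin(2 * np.pi * 1000 * t[fault_at:])

      coeffs = pywt.wavedec(current, "db4", level=3)
      d1 = coeffs[-1]                           # finest-scale detail coefficients
      half = len(d1) // 2
      print("detail energy before fault:", float(np.sum(d1[:half] ** 2)))
      print("detail energy after fault: ", float(np.sum(d1[half:] ** 2)))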

  3. Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations

    NASA Astrophysics Data System (ADS)

    Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang

    2017-09-01

    Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI) conducted by the International Atomic Energy Agency (IAEA) on 18-20 November, 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from this workshop as well as several new contributions. A total of 17 papers have been selected on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's functions approaches) to the engineering application of simulated ground motion for the analysis of seismic response of structures. These contributions include applications to real earthquakes and descriptions of current practice to assess seismic hazard in terms of nuclear safety in low seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models to evaluate and understand the physical causes of observed and empirical data, as well as to predict ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.

  4. Assessment of radiation protection practices among radiographers in Lagos, Nigeria.

    PubMed

    Eze, Cletus Uche; Abonyi, Livinus Chibuzo; Njoku, Jerome; Irurhe, Nicholas Kayode; Olowu, Oluwabola

    2013-11-01

    Use of ionising radiation in diagnostic radiography could lead to hazards such as somatic and genetic damage. Compliance with safe work and radiation protection practices could mitigate such risks. The aim of the study was to assess the knowledge and radiation protection practices among radiographers in Lagos, Nigeria. The study was a prospective cross sectional survey. A convenience sampling technique was used to select four x-ray diagnostic centres in four tertiary hospitals in the Lagos metropolis. Data were analysed with Epi Info software, version 3.5.1. The average score on the assessment of knowledge was 73%. Most modern radiation protection instruments were lacking in all the centres studied. Application of shielding devices such as gonad shields for protection was neglected mostly in government hospitals. Most x-ray machines were quite old and evidence of quality assurance tests performed on such machines was lacking. Radiographers within the Lagos metropolis showed an excellent knowledge of radiation protection within the study period. Adherence to radiation protection practices among radiographers in the Lagos metropolis during the period studied was, however, poor. Radiographers in Lagos, Nigeria should embrace current trends in radiation protection and make more concerted efforts to apply their knowledge in protecting themselves and patients from the harmful effects of ionising radiation.

  5. Sun-protection practices among undergraduates at an Australian University.

    PubMed

    Shahnam, Adel; Samarawickrema, Indira; Ali, Sayed

    2017-05-01

    To assess the sun-protection practices of undergraduates at the Australian National University. We sent emails with links to the questionnaire on the use of five sun-protection practices in the last fortnight of the summer to 3341 randomly selected students aged 18-24 years in this cross-sectional study. The response rate was 19% and 507 students met the inclusion criteria. The sample consisted of 338 female and 169 male students with a mean age of 20.5 years (SD ± 1.9). Any method of sun protection was used always or often by 32% of respondents. The commonest method used was shade (58%) while the least common was wearing a hat (8%). Domestic students (44%) used sunglasses more than the international students (23%, P < 0.05) and female students used sunscreen (48%) and sunglasses (37%) more than male students (33% and 23% respectively) (P < 0.05). In the 22-24-year-old age group non-medical students (54%) used sunglasses more than the medical students (36%, P < 0.05). Only a third of the sample practiced any method of sun protection and there were significant differences in the practices between subgroups, suggesting they were at an increased risk of sun damage. © 2015 The Australasian College of Dermatologists.

  6. Discrete Wavelet Transform for Fault Locations in Underground Distribution System

    NASA Astrophysics Data System (ADS)

    Apisit, C.; Ngaopitakkul, A.

    2010-10-01

    In this paper, a technique for detecting faults in underground distribution systems is presented. The Discrete Wavelet Transform (DWT), based on travelling waves, is employed in order to detect the high frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is employed for calculating the distance of the fault from the sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations and faulty phases. The results show that the proposed technique provides satisfactory performance and will be very useful in the development of power system protection schemes.
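
    For the distance calculation step, a common single-ended travelling-wave relation is d = v(t2 - t1)/2, where t1 and t2 are the arrival times of the incident and first reflected wavefronts at the measuring bus. The worked example below uses this generic relation with assumed values; the paper's exact formulation may differ.

      v = 1.8e8      # wave propagation velocity in underground cable, m/s (assumed)
      t1 = 20e-6     # first peak arrival time, s
      t2 = 64e-6     # reflected peak arrival time, s

      distance_m = v * (t2 - t1) / 2
      print(f"estimated fault distance: {distance_m / 1000:.2f} km")   # 3.96 km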

  7. Implementation of an experimental fault-tolerant memory system

    NASA Technical Reports Server (NTRS)

    Carter, W. C.; Mccarthy, C. E.

    1976-01-01

    The experimental fault-tolerant memory system described in this paper has been designed to enable the modular addition of spares, to validate the theoretical fault-secure and self-testing properties of the translator/corrector, to provide a basis for experiments using the new testing and correction processes for recovery, and to determine the practicality of such systems. The hardware design and implementation are described, together with methods of fault insertion. The hardware/software interface, including a restricted single error correction/double error detection (SEC/DED) code, is specified. Procedures are carefully described which, (1) test for specified physical faults, (2) ensure that single error corrections are not miscorrections due to triple faults, and (3) enable recovery from double errors.

  8. Fault Tree Analysis Application for Safety and Reliability

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies a root cause in system components, but when software is identified as a root cause, it does not build trees into the software component. No commercial software tools have been built specifically for development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
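
    The quantitative core of fault tree analysis referenced above is the combination of basic-event probabilities through AND/OR gates into a top-event probability. The small sketch below shows that step for a hypothetical tree; the events and probabilities are invented for illustration.

      def p_and(probs):
          # Probability that all independent basic events occur.
          out = 1.0
          for p in probs:
              out *= p
          return out

      def p_or(probs):
          # Probability that at least one independent basic event occurs.
          out = 1.0
          for p in probs:
              out *= (1.0 - p)
          return 1.0 - out

      # Top event: (software fault AND watchdog miss) OR (hardware fault)
      p_top = p_or([p_and([1e-3, 1e-2]), 1e-5])
      print(f"top-event probability: {p_top:.2e}")   # about 2.0e-05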

  9. Reset Tree-Based Optical Fault Detection

    PubMed Central

    Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon

    2013-01-01

    In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267

  10. Galileo spacecraft power distribution and autonomous fault recovery

    NASA Technical Reports Server (NTRS)

    Detwiler, R. C.

    1982-01-01

    There is a trend in current spacecraft design to achieve greater fault tolerance through the implementation of on-board software dedicated to detecting and isolating failures. A combination of hardware and software is utilized in the Galileo power system for autonomous fault recovery. Galileo is a dual-spun spacecraft designed to carry a number of scientific instruments into a series of orbits around the planet Jupiter. In addition to its self-contained scientific payload, it will also carry a probe system which will be separated from the spacecraft some 150 days prior to Jupiter encounter. The Galileo spacecraft is scheduled to be launched in 1985. Attention is given to the power system, the fault protection requirements, and the power fault recovery implementation.

  11. Analysis of population inquiry on practices for ultraviolet radiation protection.

    PubMed

    Celaj, Stela; Deng, Jie; Murphy, Brendan L

    2017-10-15

    UV radiation exposure is one of the key modifiable risk factors for skin cancer. Hence, patient education regarding skin protection and sunscreen use is of tremendous importance to public health. To better understand patient practices regarding skin protection at a population level, we looked into the Internet search behavior of the US-based population. We investigated patient inquiries about United States Food and Drug Administration (FDA) announcements regarding sunscreen use by quantifying search terms such as "broad spectrum sunscreen", "sunscreen" and "sunblock" with Google Trends, a novel methodology for understanding internet search practices. Our findings show that "broad spectrum sunscreen" searches were significantly increased after the 2011 FDA announcements, which suggests increased public awareness regarding the importance of broad spectrum protection. It is encouraging that these preliminary results indicate that skin protection practices are being increasingly investigated by the general public; this approach may serve as a novel way to identify areas of improvement in patient education on reducing the risk of skin cancer.

  12. A fault-tolerant intelligent robotic control system

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Tso, Kam Sing

    1993-01-01

    This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.

  13. Verification of an IGBT Fusing Switch for Over-current Protection of the SNS HVCM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benwell, Andrew; Kemp, Mark; Burkhart, Craig

    2010-06-11

    An IGBT based over-current protection system has been developed to detect faults and limit the damage caused by faults in high voltage converter modulators. During normal operation, an IGBT enables energy to be transferred from storage capacitors to a H-bridge. When a fault occurs, the over-current protection system detects the fault, limits the fault current and opens the IGBT to isolate the remaining stored energy from the fault. This paper presents an experimental verification of the over-current protection system under applicable conditions.
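
    The detect-limit-open sequence described above can be caricatured as a sampled over-current check that commands the IGBT gate open once a threshold is crossed. The threshold, time step, and current ramp below are illustrative assumptions, not SNS HVCM parameters.

      FAULT_THRESHOLD_A = 1200.0    # assumed trip level
      DT = 1e-6                     # 1 microsecond sampling step (assumed)

      def protect(current_samples):
          for k, i_a in enumerate(current_samples):
              if i_a > FAULT_THRESHOLD_A:
                  # In hardware this is where the gate drive opens the IGBT,
                  # isolating the storage capacitors from the fault.
                  return {"fault_detected": True, "trip_time_s": k * DT}
          return {"fault_detected": False, "trip_time_s": None}

      # Simulated fault: current ramps from 400 A at 2 A per microsecond.
      samples = [400.0 + 2.0 * k for k in range(2000)]
      print(protect(samples))   # trips after roughly 400 microseconds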

  14. 76 FR 78945 - Summary of Commission Practice Relating to Administrative Protective Orders

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-20

    ... INTERNATIONAL TRADE COMMISSION Summary of Commission Practice Relating to Administrative Protective Orders AGENCY: U.S. International Trade Commission. ACTION: Summary of Commission practice relating to administrative protective orders. SUMMARY: Since February 1991, the U.S. International Trade...

  15. 75 FR 66127 - Summary of Commission Practice Relating to Administrative Protective Orders

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-27

    ... INTERNATIONAL TRADE COMMISSION Summary of Commission Practice Relating to Administrative Protective Orders AGENCY: U.S. International Trade Commission. ACTION: Summary of Commission practice relating to administrative protective orders. SUMMARY: Since February 1991, the U.S. International Trade...

  16. Protective practices against zoonotic infections among rural and slum communities from South Central Chile.

    PubMed

    Mason, Meghan R; Gonzalez, Marcelo; Hodges, James S; Muñoz-Zanzi, Claudia

    2015-07-28

    Despite well-recognized recommendations to reduce human exposure to zoonotic pathogens, the use of personal and herd-level protective practices is inconsistent in communities where human interactions with animals are common. This study assessed household-level participation in rodent- (extermination, proper food storage, trash disposal), occupational- (preventive veterinary care, boot-wearing, glove-wearing), and garden-associated (restricting animal access, boot-wearing, glove-wearing) protective practices in farms, villages, and slums in the Los Rios region, Chile, where zoonotic pathogens are endemic. Questionnaires administered at 422 households across 12 communities recorded household-level socio-demographic characteristics and participation in nine protective practices. Household inclusion in the analysis of occupational practices required having livestock and a household member with occupational exposure to livestock (n = 127), and inclusion in analysis of garden practices required having a garden and at least one animal (n = 233). The proportion of households participating in each protective practice was compared across community types through chi-square analyses. Mixed effects logistic regression assessed household-level associations between socio-demographic characteristics and participation in each protective practice. Most households (95.3 %) reported participation in rodent control, and a positive association between the number of rodent signs in a household and rodent extermination was observed (OR: 1.75, 95 % CI: 1.41, 2.16). Occupational protective practices were reported in 61.8 % of eligible households; household size (OR: 1.63, 95 % CI: 1.17, 5.84) and having children (OR: 0.22, 95 % CI: 0.06, 0.78) were associated with preventive veterinary care. Among eligible households, 73.8 % engaged in protective practices when gardening, and species diversity was positively associated with wearing boots (OR: 1.27, 95 % CI: 1.03, 1.56). Household

  17. Combining dynamical decoupling with fault-tolerant quantum computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, Hui Khoon; Preskill, John; Lidar, Daniel A.

    2011-07-15

    We study how dynamical decoupling (DD) pulse sequences can improve the reliability of quantum computers. We prove upper bounds on the accuracy of DD-protected quantum gates and derive sufficient conditions for DD-protected gates to outperform unprotected gates. Under suitable conditions, fault-tolerant quantum circuits constructed from DD-protected gates can tolerate stronger noise and have a lower overhead cost than fault-tolerant circuits constructed from unprotected gates. Our accuracy estimates depend on the dynamics of the bath that couples to the quantum computer and can be expressed either in terms of the operator norm of the bath's Hamiltonian or in terms of the power spectrum of bath correlations; we explain in particular how the performance of recursively generated concatenated pulse sequences can be analyzed from either viewpoint. Our results apply to Hamiltonian noise models with limited spatial correlations.

  18. Wavelet Based Protection Scheme for Multi Terminal Transmission System with PV and Wind Generation

    NASA Astrophysics Data System (ADS)

    Manju Sree, Y.; Goli, Ravi kumar; Ramaiah, V.

    2017-08-01

    A hybrid generation system is a part of a large power system in which a number of sources, usually attached to power electronic converters, and clustered loads can operate independently of the main power system. Designing a protection scheme against faults based on traditional overcurrent protection is difficult, since the fault current differs substantially between these modes of operation. This paper adopts a new approach for the detection and discrimination of faults for multi-terminal transmission line protection in the presence of hybrid generation. A transient current based protection scheme is developed with the discrete wavelet transform. Fault indices of all phase currents at all terminals are obtained by analyzing the detail coefficients of the current signals using the bior1.5 mother wavelet. This scheme is tested for different types of faults and is found effective for the detection and discrimination of faults with various fault inception angles and fault impedances.
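
    The per-terminal fault index described above can be sketched as the summed squared detail coefficients of each phase current; the record names the bior1.5 mother wavelet, while the synthetic signal and decomposition level here are assumptions for illustration.

      import numpy as np
      import pywt

      def fault_index(phase_current, wavelet="bior1.5", level=1):
          # Energy of the finest-scale detail coefficients of the current signal.
          detail = pywt.wavedec(phase_current, wavelet, level=level)[-1]
          return float(np.sum(detail ** 2))

      t = np.linspace(0, 0.1, 2000)
      healthy = np.sin(2 * np.pi * 50 * t)
      faulted = healthy + 0.8 * (t > 0.05) * np.sin(2 * np.pi * 900 * t)

      print("healthy index:", round(fault_index(healthy), 4))
      print("faulted index:", round(fault_index(faulted), 4))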

  19. Interacting faults

    NASA Astrophysics Data System (ADS)

    Peacock, D. C. P.; Nixon, C. W.; Rotevatn, A.; Sanderson, D. J.; Zuluaga, L. F.

    2017-04-01

    The way that faults interact with each other controls fault geometries, displacements and strains. Faults rarely occur individually but as sets or networks, with the arrangement of these faults producing a variety of different fault interactions. Fault interactions are characterised in terms of the following: 1) Geometry - the spatial arrangement of the faults. Interacting faults may or may not be geometrically linked (i.e. physically connected), when fault planes share an intersection line. 2) Kinematics - the displacement distributions of the interacting faults and whether the displacement directions are parallel, perpendicular or oblique to the intersection line. Interacting faults may or may not be kinematically linked, where the displacements, stresses and strains of one fault influences those of the other. 3) Displacement and strain in the interaction zone - whether the faults have the same or opposite displacement directions, and if extension or contraction dominates in the acute bisector between the faults. 4) Chronology - the relative ages of the faults. This characterisation scheme is used to suggest a classification for interacting faults. Different types of interaction are illustrated using metre-scale faults from the Mesozoic rocks of Somerset and examples from the literature.

  20. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.

  1. No-Fault Compensation for Adverse Events Following Immunization: A Review of Chinese Law And Practice.

    PubMed

    Fei, Lanfang; Peng, Zhou

    2017-02-01

    In 2005, China introduced an administrative no-fault one-time compensation scheme for adverse events following immunization (AEFI). The scheme aims to ensure fair compensation for those injured by adverse reactions following immunization. These individuals bear a significant burden for the benefits of widespread immunization. However, there is little empirical evidence of how the scheme has been implemented and how it functions in practice. The article aims to fill this gap. Based on an analysis of the legal basis of the scheme and of practical compensation cases, this article examines the structuring, function, and effects of the scheme; evaluates loopholes in the scheme; evaluates the extent to which the scheme has achieved its intended objectives; and discusses further development of the scheme. © The Author 2017. Published by Oxford University Press; all rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Saturating time-delay transformer for overcurrent protection

    DOEpatents

    Praeg, Walter F.

    1977-01-01

    Electrical loads connected to d-c supplies are protected from damage by overcurrent in the case of a load fault by connecting in series with the load a saturating transformer that detects a load fault and limits the fault current to a safe level for a period long enough to correct the fault or else disconnect the power supply.

  3. Technology transfer by means of fault tree synthesis

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.

    2012-12-01

    Since Fault Tree Analysis (FTA) attempts to model and analyze failure processes in engineering, it forms a common technique of good industrial practice. In contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a Knowledge Base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the latter stages of an industrial procedure with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage, where the fault has been detected, (ii) the formation of an interface made of input faults that might occur upstream, (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance, and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus proving the functionality of the developed methodology.

  4. 45 CFR 164.520 - Notice of privacy practices for protected health information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... DATA STANDARDS AND RELATED REQUIREMENTS SECURITY AND PRIVACY Privacy of Individually Identifiable Health Information § 164.520 Notice of privacy practices for protected health information. (a) Standard... 45 Public Welfare 1 2014-10-01 2014-10-01 false Notice of privacy practices for protected health...

  5. Methods to enhance seismic faults and construct fault surfaces

    NASA Astrophysics Data System (ADS)

    Wu, Xinming; Zhu, Zhihui

    2017-10-01

    Faults are often apparent as reflector discontinuities in a seismic volume. Numerous types of fault attributes have been proposed to highlight fault positions from a seismic volume by measuring reflection discontinuities. These attribute volumes, however, can be sensitive to noise and stratigraphic features that are also apparent as discontinuities in a seismic volume. We propose a matched filtering method to enhance a precomputed fault attribute volume, and simultaneously estimate fault strikes and dips. In this method, a set of efficient 2D exponential filters, oriented by all possible combinations of strike and dip angles, are applied to the input attribute volume to find the maximum filtering responses at all samples in the volume. These maximum filtering responses are recorded to obtain the enhanced fault attribute volume while the corresponding strike and dip angles, which yield the maximum filtering responses, are recorded to obtain volumes of fault strikes and dips. By doing this, we assume that a fault surface is locally planar, and a 2D smoothing filter will yield a maximum response if the smoothing plane coincides with a local fault plane. With the enhanced fault attribute volume and the estimated fault strike and dip volumes, we then compute oriented fault samples on the ridges of the enhanced fault attribute volume, and each sample is oriented by the estimated fault strike and dip. Fault surfaces can be constructed by directly linking the oriented fault samples with consistent fault strikes and dips. For complicated cases with missing fault samples and noisy samples, we further propose to use a perceptual grouping method to infer fault surfaces that reasonably fit the positions and orientations of the fault samples. We apply these methods to 3D synthetic and real examples and successfully extract multiple intersecting fault surfaces and complete fault surfaces without holes.
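
    A much simplified 2D analogue of the oriented matched filtering described above: an attribute image is averaged along a set of candidate fault orientations and, at each sample, the maximum response and the orientation that produced it are kept. The plain line average below stands in for the paper's oriented exponential filters, and the stripe image is a toy example.

      import numpy as np

      def oriented_max_response(attr, angles_deg, half_len=4):
          ny, nx = attr.shape
          best = np.zeros_like(attr)
          best_angle = np.zeros_like(attr)
          for ang in angles_deg:
              dy, dx = np.sin(np.radians(ang)), np.cos(np.radians(ang))
              resp = np.zeros_like(attr)
              for s in range(-half_len, half_len + 1):
                  yy = np.clip(np.arange(ny)[:, None] + int(round(s * dy)), 0, ny - 1)
                  xx = np.clip(np.arange(nx)[None, :] + int(round(s * dx)), 0, nx - 1)
                  resp += attr[yy, xx]
              resp /= (2 * half_len + 1)
              mask = resp > best
              best[mask] = resp[mask]
              best_angle[mask] = ang
          return best, best_angle

      attr = np.zeros((50, 50))
      attr[:, 25] = 1.0                               # a vertical "fault" stripe
      enhanced, strikes = oriented_max_response(attr, angles_deg=range(0, 180, 15))
      print(enhanced[25, 25], strikes[25, 25])        # 1.0 along the stripe, at 90 degrees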

  6. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  7. [The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].

    PubMed

    Liu, Hongbin

    2015-11-01

    In this paper, the traditional fault tree analysis method is presented, and detailed instructions are given for the characteristics of its application in medical equipment maintenance. Significant changes are made when the traditional fault tree analysis method is introduced into medical equipment maintenance: the logic symbols, logical analysis and calculation, and the complicated procedures are given up, and only the intuitive and practical fault tree diagram is kept. The fault tree diagram itself also differs: the fault tree is no longer a logical tree but a thinking tree for troubleshooting, the definition of the fault tree's nodes is different, and the composition of the fault tree's branches is also different.

  8. Control and protection system for paralleled modular static inverter-converter systems

    NASA Technical Reports Server (NTRS)

    Birchenough, A. G.; Gourash, F.

    1973-01-01

    A control and protection system was developed for use with a paralleled 2.5-kWe-per-module static inverter-converter system. The control and protection system senses internal and external fault parameters such as voltage, frequency, current, and paralleling current unbalance. A logic system controls contactors to isolate defective power conditioners or loads. The system sequences contactor operation to automatically control parallel operation, startup, and fault isolation. Transient overload protection and fault checking sequences are included. The operation and performance of a control and protection system, with detailed circuit descriptions, are presented.

  9. Foundations for Protecting Renewable-Rich Distribution Systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Abraham; Brahma, Sukumar; Ranade, Satish

    High proliferation of Inverter Interfaced Distributed Energy Resources (IIDERs) into the electric distribution grid introduces new challenges to protection of such systems. This is because the existing protection systems are designed with two assumptions: (1) the system is single-sourced, resulting in unidirectional fault current, and (2) fault currents are easily detectable due to much higher magnitudes compared to load currents. Because most renewables interface with the grid through inverters, and inverters restrict their current output to levels close to the full load currents, both these assumptions are no longer valid: the system becomes multi-sourced, and overcurrent-based protection does not work. The primary scope of this study is to analyze the response of a grid-tied inverter to different faults in the grid, leading to new guidelines on protecting renewable-rich distribution systems.

  10. Radiation in dental practice: awareness, protection and recommendations.

    PubMed

    Praveen, B N; Shubhasini, A R; Bhanushree, R; Sumsum, P S; Sushma, C N

    2013-01-01

    Radiation is the transmission of energy through space and matter. There are several forms of radiation, including ionizing and nonionizing. X-rays are the ionizing radiation used extensively in medical and dental practice. Even though they provide useful information and aid in diagnosis, they also have the potential to cause harmful effects. In dentistry, X-rays are mainly used for diagnostic purposes, and in a dental set-up the practicing dentist usually exposes, processes and interprets the radiograph. Even though such exposure is low, it is critical to reduce the exposure to the dental personnel and patients in order to prevent the harmful effects of radiation. Several radiation protection measures have been advocated to ameliorate these effects. A survey conducted in Bengaluru among practicing dentists revealed that radiation protection awareness was very low and the necessary measures taken to reduce the exposure were not adequate. The aim of the article is to review important parameters that must be taken into consideration in the clinical set-up to reduce radiation exposure to patients and dental personnel.

  11. MgB2-based superconductors for fault current limiters

    NASA Astrophysics Data System (ADS)

    Sokolovsky, V.; Prikhna, T.; Meerovich, V.; Eisterer, M.; Goldacker, W.; Kozyrev, A.; Weber, H. W.; Shapovalov, A.; Sverdun, V.; Moshchil, V.

    2017-02-01

    A promising solution to the fault current problem in power systems is the application of fast-operating nonlinear superconducting fault current limiters (SFCLs) with the capability of rapidly increasing their impedance, and thus limiting high fault currents. We report the results of experiments with models of inductive (transformer type) SFCLs based on ring-shaped bulk MgB2 prepared under high quasihydrostatic pressure (2 GPa) and by a hot pressing technique (30 MPa). It was shown that the SFCLs meet the main requirements for fault current limiters: they possess low impedance in the nominal regime of the protected circuit and can rapidly increase their impedance, limiting both the transient and the steady-state fault currents. The study of quenching currents of MgB2 rings (SFCL activation current) and AC losses in the rings shows that the quenching current density and critical current density determined from AC losses can be 10-20 times less than the critical current determined from the magnetization experiments.

  12. Basic research on machinery fault diagnostics: Past, present, and future trends

    NASA Astrophysics Data System (ADS)

    Chen, Xuefeng; Wang, Shibin; Qiao, Baijie; Chen, Qiang

    2018-06-01

    Machinery fault diagnosis has progressed over the past decades with the evolution of machinery in terms of complexity and scale. High-value machinery requires condition monitoring and fault diagnosis to guarantee its designed functions and performance throughout its lifetime. Research on machinery fault diagnostics has grown rapidly in recent years. This paper attempts to summarize and review the recent R&D trends in the basic research field of machinery fault diagnosis in terms of four main aspects: fault mechanism, sensor technique and signal acquisition, signal processing, and intelligent diagnostics. The review discusses the special contributions of Chinese scholars to machinery fault diagnostics. On the basis of the review of the basic theory of machinery fault diagnosis and its practical applications in engineering, the paper concludes with a brief discussion on the future trends and challenges in machinery fault diagnosis.

  13. Research on vibration signal analysis and extraction method of gear local fault

    NASA Astrophysics Data System (ADS)

    Yang, X. F.; Wang, D.; Ma, J. F.; Shao, W.

    2018-02-01

    Gears are the main connecting and power transmission parts in mechanical equipment. If a fault occurs, it directly affects the running state of the whole machine and can even endanger personal safety. Studying the extraction of gear fault signals and the fault diagnosis of gears therefore has important theoretical significance and practical value. In this paper, taking the local gear fault as the research object, a vibration model of the gear fault mechanism is set up, the vibration mechanism of the local gear fault is derived, and the similarities and differences between the vibration signals of healthy gears and gears with local faults are analyzed. In the MATLAB environment, a wavelet transform algorithm is used to denoise the fault signal, and the Hilbert transform is used to demodulate the fault vibration signal. The results show that the method can denoise the strongly noisy mechanical vibration signal and extract the local fault feature information from the fault vibration signal.
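
    The demodulation step mentioned above (Hilbert-transform envelope analysis) can be sketched on a synthetic gear signal whose mesh vibration is amplitude-modulated at the faulty shaft's rotation rate; the envelope spectrum then peaks at that modulation frequency. The frequencies and modulation depth are assumptions for illustration, and Python stands in for the MATLAB environment used in the paper.

      import numpy as np
      from scipy.signal import hilbert

      fs = 20_000
      t = np.arange(0, 1.0, 1 / fs)
      f_mesh, f_fault = 1200.0, 25.0          # gear-mesh and faulty-shaft frequencies
      signal = (1 + 0.5 * np.sin(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * f_mesh * t)

      envelope = np.abs(hilbert(signal))      # magnitude of the analytic signal
      spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
      freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
      print("dominant envelope frequency:", freqs[np.argmax(spectrum)], "Hz")   # ~25 Hz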

  14. Orion GN&C Fault Management System Verification: Scope And Methodology

    NASA Technical Reports Server (NTRS)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
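
    The verification idea of sampling dispersed failure scenarios against a safety bound can be illustrated with a plain Monte Carlo estimate, as in the Python sketch below. This is illustration only: the Orion methodology pairs a swarm search with rare-event sequential Monte Carlo, whereas the toy model here (dispersed parameters, severity metric, threshold) is entirely assumed.

        # Illustration: estimate the probability that a dispersed failure scenario
        # violates an assumed safety bound, with a simple confidence half-width.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000

        bias = rng.normal(0.0, 0.3, N)        # hypothetical sensor bias
        lag = rng.uniform(0.0, 0.5, N)        # hypothetical actuator lag, s
        t_fault = rng.uniform(0.0, 30.0, N)   # hypothetical fault onset time, s

        # Hypothetical scalar "severity" of each dispersed scenario.
        severity = 3.0 * np.abs(bias) + 1.5 * lag + 0.2 * np.maximum(0.0, 10.0 - t_fault)

        violations = severity > 3.0           # assumed safety bound
        p_hat = violations.mean()
        half_width = 1.96 * np.sqrt(p_hat * (1.0 - p_hat) / N)
        print(f"estimated violation probability: {p_hat:.2e} +/- {half_width:.1e}")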

  15. Influence of fault trend, fault bends, and fault convergence on shallow structure, geomorphology, and hazards, Hosgri strike-slip fault, offshore central California

    NASA Astrophysics Data System (ADS)

    Johnson, S. Y.; Watt, J. T.; Hartwell, S. R.

    2012-12-01

    We mapped a ~94-km-long portion of the right-lateral Hosgri Fault Zone from Point Sal to Piedras Blancas in offshore central California using high-resolution seismic reflection profiles, marine magnetic data, and multibeam bathymetry. The database includes 121 seismic profiles across the fault zone and is perhaps the most comprehensive reported survey of the shallow structure of an active strike-slip fault. These data document the location, length, and near-surface continuity of multiple fault strands, highlight fault-zone heterogeneity, and demonstrate the importance of fault trend, fault bends, and fault convergences in the development of shallow structure and tectonic geomorphology. The Hosgri Fault Zone is continuous through the study area passing through a broad arc in which fault trend changes from about 338° to 328° from south to north. The southern ~40 km of the fault zone in this area is more extensional, resulting in accommodation space that is filled by deltaic sediments of the Santa Maria River. The central ~24 km of the fault zone is characterized by oblique convergence of the Hosgri Fault Zone with the more northwest-trending Los Osos and Shoreline Faults. Convergence between these faults has resulted in the formation of local restraining and releasing fault bends, transpressive uplifts, and transtensional basins of varying size and morphology. We present a hypothesis that links development of a paired fault bend to indenting and bulging of the Hosgri Fault by a strong crustal block translated to the northwest along the Shoreline Fault. Two diverging Hosgri Fault strands bounding a central uplifted block characterize the northern ~30 km of the Hosgri Fault in this area. The eastern Hosgri strand passes through releasing and restraining bends; the releasing bend is the primary control on development of an elongate, asymmetric, "Lazy Z" sedimentary basin. The western strand of the Hosgri Fault Zone passes through a significant restraining bend and

  16. Trends in sunburns, sun protection practices, and attitudes toward sun exposure protection and tanning among US adolescents, 1998-2004.

    PubMed

    Cokkinides, Vilma; Weinstock, Martin; Glanz, Karen; Albano, Jessica; Ward, Elizabeth; Thun, Michael

    2006-09-01

    Sun exposure in childhood is an important risk factor for developing skin cancer as an adult. Despite extensive efforts to reduce sun exposure among the young, there are no population-based data on trends in sunburns and sun protection practices in the young. The aim of this study was to describe nationally representative trend data on sunburns, sun protection, and attitudes related to sun exposure among US youth. Cross-sectional telephone surveys of youth aged 11 to 18 years in 1998 (N = 1196) and in 2004 (N = 1613) were conducted using a 2-stage sampling process to draw population-based samples. The surveys asked identical questions about sun protection, number of sunburns experienced, and attitudes toward sun exposure. Time trends were evaluated using pooled logistic regression analysis. In 2004, 69% of subjects reported having been sunburned during the summer, not significantly less than in 1998 (72%). There was a significant decrease in the percentage of those aged 11 to 15 years who reported sunburns and a nonsignificant increase among the 16- to 18-year-olds. The proportion of youth who reported regular sunscreen use increased significantly from 31% to 39%. Little change occurred in other recommended sun protection practices. A small reduction in sunburn frequency and modest increases in sun protection practices were observed among youth between 1998 and 2004, despite widespread sun protection campaigns. Nevertheless, the decrease in sunburns among younger teens may be cause for optimism regarding future trends. Overall, there was rather limited progress in improving sun protection practices and reducing sunburns among US youth between 1998 and 2004.

  17. Model-Based Fault Diagnosis for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Green, Michael D.; Duyar, Ahmet; Litt, Jonathan S.

    1998-01-01

    Tests are described which, when used to augment the existing periodic maintenance and pre-flight checks of T700 engines, can greatly improve the chances of uncovering a problem compared to current practice. These test signals can be used to expose and differentiate between faults in various components by comparing the responses of particular engine variables to those expected. The responses can be processed on-line in a variety of ways which have been shown to reveal and identify faults. The combination of specific test signals and on-line processing methods provides an ad hoc approach to the isolation of faults which might not otherwise be detected during pre-flight checkout.
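
    The comparison of measured responses with expected responses amounts to residual-based fault detection, which can be sketched as below. The first-order engine model, the step test signal, and the detection threshold are assumptions for illustration and do not represent the T700 tests themselves.

        # Sketch: excite a nominal model and a "degraded" system with the same test
        # signal, form the residual, and flag a fault when it exceeds a threshold.
        import numpy as np

        def response(u, a=0.9, b=0.1):
            """First-order discrete model of an engine variable (assumed dynamics)."""
            y = np.zeros_like(u)
            for k in range(1, len(u)):
                y[k] = a * y[k - 1] + b * u[k - 1]
            return y

        u = np.ones(200)                                   # step test signal
        y_expected = response(u)                           # nominal model prediction
        y_measured = response(u, a=0.8) + 0.01 * np.random.randn(200)  # degraded engine

        residual = y_measured - y_expected
        fault_detected = np.max(np.abs(residual)) > 0.05   # threshold chosen for illustration
        print("fault detected" if fault_detected else "no fault detected")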

  18. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.
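
    The errors/faults measure can be illustrated with a toy injection loop: corrupt a simulated memory image one bit at a time, re-run a workload, and count how many injections propagate to an output error. Everything below is an assumed stand-in, not FTAPE's actual workload generator or injector.

        # Sketch: single-bit fault injection into a simulated memory image, with the
        # errors/faults ratio as the reported measure.
        import random

        random.seed(1)
        memory = [i % 251 for i in range(1024)]       # simulated memory image

        def workload(mem):
            """Toy workload that only reads the first half of memory."""
            return sum(mem[:512])

        golden = workload(memory)                     # fault-free reference result
        faults, errors = 200, 0
        for _ in range(faults):
            corrupted = list(memory)
            addr, bit = random.randrange(len(memory)), random.randrange(8)
            corrupted[addr] ^= (1 << bit)             # inject a single bit flip
            if workload(corrupted) != golden:         # did the fault manifest as an error?
                errors += 1

        print(f"errors/faults ratio: {errors}/{faults} = {errors / faults:.2f}")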

  19. Fault compaction and overpressured faults: results from a 3-D model of a ductile fault zone

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Miller, S. A.

    2003-10-01

    A model of a ductile fault zone is incorporated into a forward 3-D earthquake model to better constrain fault-zone hydraulics. The conceptual framework of the model fault zone was chosen such that two distinct parts are recognized. The fault core, characterized by a relatively low permeability, is composed of a coseismic fault surface embedded in a visco-elastic volume that can creep and compact. The fault core is surrounded by, and mostly sealed from, a high-permeability damaged zone. The model fault properties correspond explicitly to those of the coseismic fault core. Porosity and pore pressure evolve to account for the viscous compaction of the fault core, while stresses evolve in response to the applied tectonic loading and to shear creep of the fault itself. A small diffusive leakage is allowed in and out of the fault zone. Coseismically, porosity is created to account for frictional dilatancy. We show that, in the case of a 3-D fault model with no in-plane flow and constant fluid compressibility, pore pressures do not drop to hydrostatic levels after a seismic rupture, leading to an overpressured, weak fault. Since pore pressure plays a key role in the fault behaviour, we investigate coseismic hydraulic property changes. In the full 3-D model, pore pressures vary instantaneously by the poroelastic effect during the propagation of the rupture. Once the stress state stabilizes, pore pressures are incrementally redistributed in the failed patch. We show that the significant effect of pressure-dependent fluid compressibility in the no in-plane flow case becomes a secondary effect when the other spatial dimensions are considered, because in-plane flow with a near-lithostatically pressured neighbourhood equilibrates at a pressure much higher than hydrostatic levels, forming persistent high-pressure fluid compartments. If the observed faults are not all overpressured and weak, other mechanisms, not included in this model, must be at work in nature, which need to be

  20. Fault tolerant control laws

    NASA Technical Reports Server (NTRS)

    Ly, U. L.; Ho, J. K.

    1986-01-01

    A systematic procedure for the synthesis of fault tolerant control laws for actuator failures has been presented. Two design methods were used to synthesize fault tolerant controllers: the conventional LQ design method and a direct feedback controller design method, SANDY. The latter method is used primarily to streamline the full-state LQ feedback design into a practical, implementable output feedback controller structure. To achieve robustness to control actuator failure, the redundant surfaces are properly balanced according to their control effectiveness. A simple gain schedule based on the landing gear up/down logic, involving only three gains, was developed to handle three design flight conditions: Mach 0.25 and Mach 0.60 at 5,000 ft and Mach 0.90 at 20,000 ft. The fault tolerant control law developed in this study provides good stability augmentation and performance for the relaxed static stability aircraft. The augmented aircraft responses are found to be invariant to the presence of a failure. Furthermore, single-loop stability margins of +6 dB in gain and +30 deg in phase were achieved along with -40 dB/decade rolloff at high frequency.
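
    The conventional LQ step can be sketched as follows for a toy second-order model; the aircraft dynamics, weights, and single lumped actuator are assumptions for illustration, not the study's design model or the SANDY output-feedback structure.

        # Sketch: full-state LQ regulator synthesis via the continuous algebraic
        # Riccati equation, for an assumed second-order plant.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0],
                      [-2.0, -0.5]])      # assumed short-period-like dynamics
        B = np.array([[0.0],
                      [1.0]])             # redundant surfaces lumped into one input
        Q = np.diag([10.0, 1.0])          # state weighting (assumed)
        R = np.array([[1.0]])             # control weighting (assumed)

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)   # full-state feedback gain, u = -K x
        print("LQ gain K =", K)
        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))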

  1. Flight elements: Fault detection and fault management

    NASA Technical Reports Server (NTRS)

    Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

    1990-01-01

    Fault management for an intelligent computational system must be developed using a top-down, integrated engineering approach. The proposed approach integrates the overall environment, involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models, including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real-time intelligent fault detection and management system will be accomplished through several objectives: development of fault-tolerant/FDIR requirements and specifications at the systems level that carry through from conceptual design to implementation and mission operations; implementation of monitoring, diagnosis, and reconfiguration at all system levels, providing fault isolation and system integration; optimization of system operations to manage degraded system performance through system integration; and reduction of development and operations costs through the implementation of an intelligent real-time fault detection and fault management system and an information management system.

  2. Quantifying Anderson's fault types

    USGS Publications Warehouse

    Simpson, R.W.

    1997-01-01

    Anderson [1905] explained three basic types of faulting (normal, strike-slip, and reverse) in terms of the shape of the causative stress tensor and its orientation relative to the Earth's surface. Quantitative parameters can be defined which contain information about both shape and orientation [Célérier, 1995], thereby offering a way to distinguish fault-type domains on plots of regional stress fields and to quantify, for example, the degree of normal-faulting tendencies within strike-slip domains. This paper offers a geometrically motivated generalization of Angelier's [1979, 1984, 1990] shape parameters Φ and Ψ to new quantities named AΦ and AΨ. In their simple forms, AΦ varies from 0 to 1 for normal, 1 to 2 for strike-slip, and 2 to 3 for reverse faulting, and AΨ ranges from 0° to 60°, 60° to 120°, and 120° to 180°, respectively. After scaling, AΦ and AΨ agree to within 2% (or 1°), a difference of little practical significance, although AΨ has smoother analytical properties. A formulation distinguishing horizontal axes as well as the vertical axis is also possible, yielding an AΦ ranging from -3 to +3 and AΨ from -180° to +180°. The geometrically motivated derivation in three-dimensional stress space presented here may aid intuition and offers a natural link with traditional ways of plotting yield and failure criteria. Examples are given, based on models of Bird [1996] and Bird and Kong [1994], of the use of the Anderson fault parameters AΦ and AΨ for visualizing tectonic regimes defined by regional stress fields. Copyright 1997 by the American Geophysical Union.
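
    For a concrete feel for the 0-3 parameter, the sketch below evaluates the commonly quoted form AΦ = (n + 0.5) + (-1)^n (Φ - 0.5), with Φ = (σ2 - σ3)/(σ1 - σ3) and n = 0, 1, 2 for normal, strike-slip, and reverse regimes. Treat this expression and the example stresses as assumptions made here for illustration rather than a quotation from the abstract.

        # Sketch: Anderson fault-type parameter A_phi on the 0-3 scale (assumed form).
        def a_phi(s1, s2, s3, regime):
            """A_phi from principal stresses s1 >= s2 >= s3 and the faulting regime."""
            n = {"normal": 0, "strike-slip": 1, "reverse": 2}[regime]
            phi = (s2 - s3) / (s1 - s3)          # stress shape ratio
            return (n + 0.5) + (-1) ** n * (phi - 0.5)

        # Example: a strike-slip regime with phi = 0.5 plots at A_phi = 1.5,
        # the middle of the strike-slip band.
        print(a_phi(s1=100.0, s2=60.0, s3=20.0, regime="strike-slip"))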

  3. Study of Stand-Alone Microgrid under Condition of Faults on Distribution Line

    NASA Astrophysics Data System (ADS)

    Malla, S. G.; Bhende, C. N.

    2014-10-01

    The behavior of a stand-alone microgrid is analyzed under the condition of faults on distribution feeders. Since the battery is not able to maintain the dc-link voltage within limits during a fault, a resistive dump load control is presented to do so. An inverter control is proposed to maintain balanced voltages at the PCC under unbalanced load conditions and to reduce the voltage unbalance factor (VUF) at load points. The proposed inverter control also has the facility to protect itself from high fault current. The existing maximum power point tracking (MPPT) algorithm is modified to limit the speed of the generator during a fault. Extensive simulation results using MATLAB/SIMULINK establish that the performance of the controllers is quite satisfactory under different fault conditions as well as unbalanced load conditions.
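
    The voltage unbalance factor (VUF) that the inverter control seeks to reduce is conventionally the negative- to positive-sequence voltage ratio. The Python sketch below computes it from three assumed PCC phase voltage phasors using the symmetrical-component (Fortescue) transform; the example values are not simulation data from the paper.

        # Sketch: VUF = |V_negative| / |V_positive| * 100 %, from phase phasors.
        import numpy as np

        a = np.exp(2j * np.pi / 3)                       # 120-degree rotation operator

        def vuf(va, vb, vc):
            v_pos = (va + a * vb + a**2 * vc) / 3.0      # positive-sequence component
            v_neg = (va + a**2 * vb + a * vc) / 3.0      # negative-sequence component
            return 100.0 * abs(v_neg) / abs(v_pos)

        # Slightly unbalanced PCC voltages in per unit (assumed):
        va = 1.00 * np.exp(1j * 0.0)
        vb = 0.96 * np.exp(-1j * 2 * np.pi / 3)
        vc = 1.03 * np.exp(+1j * 2 * np.pi / 3)
        print(f"VUF = {vuf(va, vb, vc):.2f} %")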

  4. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    USGS Publications Warehouse

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; understanding the origin and distribution of clays in fault rocks is therefore of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that formation of fault rocks at those locations was governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, and potentially influence the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  5. dc Arc Fault Effect on Hybrid ac/dc Microgrid

    NASA Astrophysics Data System (ADS)

    Fatima, Zahra

    The advent of distributed energy resources (DER) and the reliability and stability problems of the conventional grid system have given rise to the widespread deployment of microgrids. Microgrids provide many advantages by incorporating renewable energy sources and increasing the reliability of the grid by isolating from the main grid in case of an outage. AC microgrids have been installed all over the world, but dc microgrids have been gaining interest due to the advantages they provide over ac microgrids. However, the entire power network backbone is still ac, and dc microgrids require expensive converters to connect to the ac power network. As a result, hybrid ac/dc microgrids are gaining more attention, as they combine the advantages of both ac and dc microgrids, such as direct integration of ac and dc systems with a minimum number of conversions, which increases efficiency by reducing energy losses. Although dc electric systems offer many advantages, such as no synchronization and no reactive power, successful implementation of dc systems requires appropriate protection strategies. One unique protection challenge brought by dc systems is dc arc faults. A dc arc fault is generated when there is a gap in a conductor due to insulation degradation and current bridges the gap, resulting in an arc with very high temperature. Such a fault, if it goes undetected and is not extinguished, can damage the entire system and cause fires. The purpose of this research is to study the effect of dc arc faults at different locations in the hybrid ac/dc microgrid and provide insight on the reliability of the grid components when impacted by arc faults at various locations in the grid. The impact of dc arc faults at different locations on the performance of the PV array, wind generation, and constant power loads (CPL) interfaced with dc/dc converters is studied. MATLAB/Simulink is used to model the hybrid ac/dc microgrid and the arc fault.

  6. Reconfigurable fault tolerant avionics system

    NASA Astrophysics Data System (ADS)

    Ibrahim, M. M.; Asami, K.; Cho, Mengu

    This paper presents the design of a reconfigurable avionics system based on a modern Static Random Access Memory (SRAM)-based Field Programmable Gate Array (FPGA) to be used in future generations of nano satellites. A major concern in satellite systems, and especially nano satellites, is to build robust systems with low power consumption profiles. The system is designed to be flexible by providing the capability of reconfiguring itself based on its orbital position. As Single Event Upsets (SEU) do not have the same severity and intensity in all orbital locations, having their maximum at the South Atlantic Anomaly (SAA) and the polar cusps, the system does not have to be fully protected all the time in its orbit. An acceptable level of protection against high-energy cosmic rays and charged particles roaming in space is provided within the majority of the orbit through software fault tolerance. Checkpointing and rollback, along with control-flow assertions, are used for that level of protection. In the minority part of the orbit where severe SEUs are expected, a reconfiguration of the system FPGA is initiated in which the processor systems are triplicated and protection through Triple Modular Redundancy (TMR) with feedback is provided. This technique of reconfiguring the system according to the level of the threat expected from SEU-induced faults helps in reducing the average dynamic power consumption of the system to one-third of its maximum. This technique can be viewed as smart protection through system reconfiguration. The system is built on the commercial version of the Xilinx Virtex-5 (XC5VLX50) FPGA on bulk silicon with 324 I/O. Simulations of orbit SEU rates were carried out using the SPENVIS web-based software package.
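
    The masking effect of TMR can be shown with a one-line bitwise majority voter, as in the sketch below. This is a generic illustration of 2-of-3 voting, not the feedback-TMR structure implemented on the Virtex-5.

        # Sketch: bitwise 2-of-3 majority voting over three redundant results.
        def tmr_vote(r1: int, r2: int, r3: int) -> int:
            """Majority of three redundant results, computed bit by bit."""
            return (r1 & r2) | (r1 & r3) | (r2 & r3)

        good = 0b1011_0010
        faulty = good ^ 0b0000_1000          # one channel hit by an SEU bit flip
        assert tmr_vote(good, good, faulty) == good
        print("voter output matches the fault-free result")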

  7. Do workplace and home protective practices protect farm workers? Findings from the For Healthy Kids Study

    PubMed Central

    Coronado, Gloria D.; Holte, Sarah E.; Vigoren, Eric M.; Griffith, William C; Barr, Dana B.; Faustman, Elaine M.; Thompson, Beti

    2013-01-01

    Objective To assess associations of protective workplace and home practices to pesticide exposure levels. Methods Using data from orchard workers in the Yakima Valley, Washington, we examined associations of workplace and home protective practices to (1) urinary metabolite concentrations of dimethylthiophosphate (DMTP) in adults and children aged 2–6; and (2) azinphos-methyl levels in house and vehicle dust. Results Data were from 95 orchard workers and 94 children. Contrary to expectation, adult farm workers who wore boots or washed hands using hand sanitizer had higher concentrations of DMTP than those who did not. Children who attended daycare had higher DMTP concentrations than children who did not. Conclusions Few workplace or home practices were associated with pesticide exposure levels; workers who used hand sanitizer had higher concentrations of DMTP, as did children who attended daycare. PMID:22772953

  8. Advanced microprocessor based power protection system using artificial neural network techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Z.; Kalam, A.; Zayegh, A.

    This paper describes an intelligent embedded microprocessor-based system for fault classification in power system protection using advanced 32-bit microprocessor technology. The paper demonstrates the development of a protective relay providing overcurrent protection schemes for fault detection. It also describes a method for power fault classification in a three-phase system based on the use of neural network technology. The proposed design is implemented and tested on a single-line three-phase power system in a power laboratory. Both the hardware and software development are described in detail.
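
    A neural-network fault classifier of the kind described can be sketched with synthetic data: per-phase RMS current features labelled by fault type, fed to a small multilayer perceptron. The features, labels, and network size below are assumptions for illustration and are unrelated to the hardware relay in the paper.

        # Sketch: classify single-line-to-ground fault type from per-phase currents.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)

        def make_samples(n):
            X, y = [], []
            for _ in range(n):
                fault = int(rng.integers(0, 4))           # 0: none, 1: A-g, 2: B-g, 3: C-g
                i = 1.0 + 0.05 * rng.standard_normal(3)   # load current, per unit
                if fault:
                    i[fault - 1] += rng.uniform(4.0, 8.0) # overcurrent on the faulted phase
                X.append(i)
                y.append(fault)
            return np.array(X), np.array(y)

        X_train, y_train = make_samples(2000)
        X_test, y_test = make_samples(500)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        clf.fit(X_train, y_train)
        print("test accuracy:", clf.score(X_test, y_test))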

  9. Saturating time-delay transformer for overcurrent protection. [Patent application

    DOEpatents

    Praeg, W.F.

    1975-12-18

    Electrical loads connected to dc supplies are protected from damage by overcurrent in the case of a load fault by connecting in series with the load a saturating transformer that detects a load fault and limits the fault current to a safe level for a period long enough to correct the fault or else disconnect the power supply.

  10. Method and system for controlling a permanent magnet machine during fault conditions

    DOEpatents

    Krefta, Ronald John; Walters, James E.; Gunawan, Fani S.

    2004-05-25

    Method and system for controlling a permanent magnet machine driven by an inverter is provided. The method allows for monitoring a signal indicative of a fault condition. The method further allows for generating during the fault condition a respective signal configured to maintain a field weakening current even though electrical power from an energy source is absent during said fault condition. The level of the maintained field-weakening current enables the machine to operate in a safe mode so that the inverter is protected from excess voltage.

  11. A review of recently active faults in Taiwan

    USGS Publications Warehouse

    Bonilla, Manuel G.

    1975-01-01

    high near Hualien, where an uplift of at least 0.6 m and probably more than 1 m occurred in the 1951 earthquake, and near and south of the 1946 faulting. Sudden uplifts can have serious consequences for installations near the shore. Investigation of this process, study of recently active faults, and continuing study of seismicity are necessary parts of a practical earthquake-hazard reduction program.

  12. Study of a phase-to-ground fault on a 400 kV overhead transmission line

    NASA Astrophysics Data System (ADS)

    Iagăr, A.; Popa, G. N.; Diniş, C. M.

    2018-01-01

    Power utilities need to supply their consumers with a high level of power quality. Because faults that occur on High-Voltage and Extra-High-Voltage transmission lines can cause serious damage in underlying transmission and distribution systems, it is important to examine each fault in detail. In this work we studied a phase-to-ground fault (on phase 1) of the 400 kV overhead transmission line Mintia-Arad. The Indactic® 650 fault analyzing system was used to record the history of the fault. The signals (analog and digital) recorded by the Indactic® 650 were visualized and analyzed with the Focus program. The summary fault report allowed evaluation of the behavior of the control and protection equipment and determination of the cause and location of the fault.

  13. Novel Directional Protection Scheme for the FREEDM Smart Grid System

    NASA Astrophysics Data System (ADS)

    Sharma, Nitish

    This research primarily deals with the design and validation of the protection system for a large-scale meshed distribution system. The large scale system simulation (LSSS) is a system-level PSCAD model which is used to validate component models for different time-scale platforms and to provide a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate cases of power system protection, renewable energy integration and storage, and load profiles. The protection of the FREEDM system against any abnormal condition is one of the important tasks. The addition of distributed generation and the power-electronic-based solid state transformer adds to the complexity of the protection. The FREEDM loop system has a fault current limiter, and in addition, the Solid State Transformer (SST) limits the fault current at 2.0 per unit. Former students at ASU had developed a protection scheme using fiber-optic cable; however, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team regarded the system as incompatible with the long distances involved. Hence, a new protection scheme based on wireless communication is presented in this thesis. The use of wireless communication is extended to protect the large-scale meshed distributed generation from any fault. The trip signal generated by the pilot protection system is used to trigger the FID (fault isolation device), which is an electronic circuit breaker (switching off/opening the FIDs). The trip signal must be received and accepted by the SST, and it must block the SST operation immediately. A comprehensive protection system for the large-scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. The validation of the protection system is performed by building a hardware model using commercial relays at the ASU power laboratory.
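
    The pilot-relaying principle behind such a scheme can be sketched with a textbook percentage-differential check: each end of a protected segment measures its current, the phasors are exchanged over the (here, wireless) pilot channel, and the relay trips when the differential current exceeds a restrained threshold, sending the trip to the FIDs and a block to the SST. The criterion and settings below are generic assumptions, not the thesis's actual algorithm.

        # Sketch: percentage-differential pilot element on complex current phasors (p.u.).
        def differential_trip(i_local, i_remote, pickup=0.2, slope=0.3):
            """Trip when current 'lost' inside the zone exceeds a restrained threshold."""
            i_diff = abs(i_local + i_remote)                  # differential current
            i_restraint = (abs(i_local) + abs(i_remote)) / 2  # through-current restraint
            return i_diff > max(pickup, slope * i_restraint)

        # Through (external) fault: current enters one end and leaves the other.
        print(differential_trip(2.0 + 0.0j, -2.0 + 0.0j))     # False -> no trip
        # Internal fault: both ends feed current into the zone.
        print(differential_trip(2.0 + 0.0j, 1.5 + 0.2j))      # True  -> trip FIDs, block SST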

  14. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    USGS Publications Warehouse

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  15. Active Fault Topography and Fault Outcrops in the Central Part of the Nukumi fault, the 1891 Nobi Earthquake Fault System, Central Japan

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Ueta, K.; Inoue, D.; Aoyagi, Y.; Yanagida, M.; Ichikawa, K.; Goto, N.

    2010-12-01

    It is important to evaluate the magnitude of earthquakes caused by multiple active faults, taking simultaneous rupture into account. The simultaneity of adjacent active faults is often decided on the basis of geometric distances, except where paleoseismic records are known. We have been studying the step area between the Nukumi fault and the Neodani fault, which appeared as consecutive ruptures in the 1891 Nobi earthquake, since 2009. The purpose of this study is to establish an innovative technique for evaluating the simultaneity of adjacent active faults, in addition to the paleoseismic record and the geometric distance. Geomorphological, geological, and reconnaissance microearthquake surveys were conducted. The present work is intended to clarify the distribution of tectonic geomorphology along the Nukumi fault and the Neodani fault through high-resolution interpretation of airborne LiDAR DEMs and aerial photographs, and through field surveys of outcrops and location surveys. The study area of this work is the southeastern Nukumi fault and the northwestern Neodani fault. We interpreted the DEM using shaded relief maps and stereoscopic bird's-eye views made from 2 m mesh DEM data obtained by the airborne laser scanner of Kokusai Kogyo Co., Ltd. The aerial photographic survey, for confirmation of the DEM interpretation, used 1/16,000 scale photos. As a result of the topographic survey, we found continuous tectonic topography, consisting of left-lateral displacement of ridge and valley lines and reverse scarplets, along the Nukumi fault and the Neodani fault. From Ogotani, 2 km southeast of Nukumi pass, which is located at the southeastern end of the surface rupture along the Nukumi fault according to a previous study, to Neooppa, 9 km southeast of Nukumi pass, we can interpret left-lateral topographies and small uphill-facing fault scarps on the terrace surface by detailed DEM investigation. These topographies are unrecognizable in the aerial photographic survey because of heavy vegetation. We have found several new

  16. The Chaîne des Puys and Limagne Fault World Heritage project: a view from a scientist

    NASA Astrophysics Data System (ADS)

    van Wyk de Vries, B.

    2013-12-01

    The development of the Chaîne des Puys and Limagne Fault World Heritage UNESCO project has taken about five years, since the Laboratoire Magmas et Volcans was approached by the local Auvergne government. Before this we had been working locally with organisations such as the Lemptégy volcano and Vulcania to help disseminate geoscience ideas to the general public; however, the UNESCO project has led us to do much more outreach. It has also stimulated our research and has taught us to better explain it to the lay person. In visiting other heritage projects, where we have exchanged ideas and best practice, we have been able to help other sites and improve what we do. These links are particularly important, as they can be used to help broaden the outlook of the general public and local actors, and increase further earth science literacy. I have noticed a strong increase in awareness of the volcanoes, and of volcanism, as a result of the Chaîne des Puys and Limagne Fault project. I think that, before, many locals considered the volcanoes only as their special back garden, for easy walks and views and leisure, or for that matter farming, mining and hunting. However, now there is a greater sense of pride rooted in the increased awareness of their geological significance in a historical and global context. While this effect is clear for the volcanoes, it is not yet apparent for the fault. The lay person has no clear concept of a rift and a fault. Thus, one of our major present challenges is to open the public's eyes to the fault. This is vital for the UNESCO project not only for educational reasons, but also because the fault scarp is a natural barrier that protects the rest of the property from urban development. Only if there is awareness of its nature and significance will it be an effective defence. To arrive at this goal, there is now a local government sponsored project to create and disseminate fault walks, fault viewpoints and fault information points. This is

  17. Fault Protection Design and Testing for the Cassini Spacecraft in a "Mixed" Thruster Configuration

    NASA Technical Reports Server (NTRS)

    Bates, David; Lee, Allan; Meakin, Peter; Weitl, Raquel

    2013-01-01

    NASA's Cassini spacecraft, which was launched on October 15, 1997 and arrived at Saturn on June 30, 2004, is the largest and most ambitious interplanetary spacecraft in history. In order to meet the challenging attitude control and navigation requirements of the orbit profile at Saturn, Cassini is equipped with a monopropellant-thruster-based Reaction Control System (RCS), a bipropellant Main Engine Assembly (MEA), and a Reaction Wheel Assembly (RWA). In 2008, after 11 years of reliable service, several RCS thrusters began to show signs of end-of-life degradation, which led the operations team to successfully perform the swap from the A-branch to the B-branch RCS system. If similar degradation begins to occur on any of the B-branch thrusters, Cassini might have to assume a "mixed" thruster configuration, in which a subset of both A- and B-branch thrusters would be designated as prime. The Cassini Fault Protection FSW was recently updated to handle this scenario. The design, implementation, and testing of this update are described in this paper.

  18. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The Fault Injection-based Automated Testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within such validation methodologies is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted by either seeding faults into memory or triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving insertion of faults. There is a common system interface which allows ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are demonstrated by two example experiments, each using a different fault-tolerance strategy.

  19. 75 FR 22844 - Construction Fall Protection Systems Criteria and Practices and Training Requirements; Extension...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-30

    ...] Construction Fall Protection Systems Criteria and Practices and Training Requirements; Extension of the Office of Management and Budget's (OMB) Approval of Information Collection (Paperwork) Requirements AGENCY... requirements contained in the construction standards on Fall Protection Systems Criteria and Practices (29 CFR...

  20. Prevalence and correlates of sun protection and skin self-examination practices among cutaneous malignant melanoma survivors.

    PubMed

    Manne, Sharon; Lessin, Stuart

    2006-10-01

    Little is known about the level of engagement in, and correlates of, sun protection and skin self-examination among individuals diagnosed with melanoma. Participants (N = 229) completed measures of skin self-examination and sun protection practices, knowledge, and attitudes. Approximately eighty-four percent of patients reported engaging in skin self-examination at least once in the past year. Engagement in sun protection practices was moderate. Self-exam practice was associated with gender, physician recommendation about self-exam, and perceived benefits and barriers of self-exam. Sun protection was associated with gender, age, medical status and health care access, physician recommendation, knowledge, and a number of psychological factors. Behavioral interventions to improve skin surveillance and sun protection may benefit from an emphasis on physician education regarding self-exam and sun protection, education regarding the efficacy of sunscreen and the risks associated with sunbathing, reducing perceived barriers to self-exam and sun protection, and reducing reliance on social influences on sun protection practices.

  1. 45 CFR 164.520 - Notice of privacy practices for protected health information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Health Information § 164.520 Notice of privacy practices for protected health information. (a) Standard... legal duties with respect to protected health information. (2) Exception for group health plans. (i) An... receive protected health information other than summary health information as defined in § 164.504(a) or...

  2. 45 CFR 164.520 - Notice of privacy practices for protected health information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Health Information § 164.520 Notice of privacy practices for protected health information. (a) Standard... legal duties with respect to protected health information. (2) Exception for group health plans. (i) An... receive protected health information other than summary health information as defined in § 164.504(a) or...

  3. Novel elastic protection against DDF failures in an enhanced software-defined SIEPON

    NASA Astrophysics Data System (ADS)

    Pakpahan, Andrew Fernando; Hwang, I.-Shyan; Yu, Yu-Ming; Hsu, Wu-Hsiao; Liem, Andrew Tanny; Nikoukar, AliAkbar

    2017-07-01

    Ever-increasing bandwidth demands on passive optical networks (PONs) are pushing the utilization of every fiber strand to its limit. This is mandating comprehensive protection until the end of the distribution drop fiber (DDF). Hence, it is important to provide refined protection with an advanced fault-protection architecture and recovery mechanism that is able to cope with various DDF failures. We propose a novel elastic protection against DDF failures that incorporates a software-defined networking (SDN) capability and a bus protection line to enhance the resiliency of the existing Service Interoperability in Ethernet Passive Optical Networks (SIEPON) system. We propose the addition of an integrated SDN controller and flow tables to the optical line terminal and optical network units (ONUs) in order to deliver various DDF protection scenarios. The proposed architecture enables flexible assignment of backup ONU(s) in pre/post-fault conditions depending on the PON traffic load. A transient backup ONU and multiple backup ONUs can be deployed in the pre-fault and post-fault scenarios, respectively. Our extensively discussed simulation results show that our proposed architecture provides better overall throughput and drop probability compared to the architecture with a fixed DDF protection mechanism. It does so while still maintaining overall QoS performance in terms of packet delay, mean jitter, packet loss, and throughput under various fault conditions.

  4. Development and Testing of Protection Scheme for Renewable-Rich Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brahma, Sukumar; Ranade, Satish; Elkhatib, Mohamed E.

    As the penetration of renewables increases in distribution systems, and as microgrids are conceived with high penetration of such generation connected through inverters, fault location and protection of microgrids need consideration. This report proposes averaged models that help simulate fault scenarios in renewable-rich microgrids, models for locating faults in such microgrids, and comments on the protection models that may be considered for microgrids. Simulation studies are reported to justify the models.

  5. Layered clustering multi-fault diagnosis for hydraulic piston pump

    NASA Astrophysics Data System (ADS)

    Du, Jun; Wang, Shaoping; Zhang, Haiyan

    2013-04-01

    Efficient diagnosis is very important for improving the reliability and performance of aircraft hydraulic piston pumps, and it is one of the key technologies in prognostic and health management systems. In practice, due to the harsh working environment and heavy working loads, multiple faults of an aircraft hydraulic pump may occur simultaneously after long periods of operation. However, most existing diagnosis methods can only distinguish pump faults that occur individually. Therefore, new methods need to be developed to realize effective diagnosis of simultaneous multiple faults in aircraft hydraulic pumps. In this paper, a new method based on a layered clustering algorithm is proposed to diagnose multiple faults of an aircraft hydraulic pump that occur simultaneously. Intensive failure mechanism analyses of the five main types of faults are carried out, and based on these analyses the optimal combination and layout of diagnostic sensors is attained. A three-layered diagnosis reasoning engine is designed according to the faults' risk priority numbers and the characteristics of different fault feature extraction methods. The most serious failures are first distinguished with individual signal processing. For the subtler faults, i.e., swash plate eccentricity and incremental clearance increases between piston and slipper, a clustering diagnosis algorithm based on the statistical average relative power difference (ARPD) is proposed. By effectively enhancing the fault features of these two faults, the ARPDs calculated from vibration signals are employed to complete the hypothesis testing. The ARPDs of the different faults follow different probability distributions. Compared with the classical fast Fourier transform-based spectrum diagnosis method, the experimental results demonstrate that the proposed algorithm can diagnose these simultaneously occurring faults with higher precision and reliability.

  6. Fault Analysis in a Grid Integrated DFIG Based Wind Energy System with NA CB_P Circuit for Ridethrough Capability and Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Swain, Snehaprava; Ray, Pravat Kumar

    2016-12-01

    In this paper a three-phase fault analysis is carried out on a DFIG-based grid-integrated wind energy system. A Novel Active Crowbar Protection (NACB_P) system is proposed to enhance the fault ride-through (FRT) capability of the DFIG for both symmetrical and unsymmetrical grid faults, and hence improve the power quality of the system. The protection scheme proposed here is designed with a capacitor in series with the resistor, unlike the conventional crowbar (CB), which has only resistors. The major function of the capacitor in the protection circuit is to eliminate the ripples generated in the rotor current and to protect the converter as well as the DC-link capacitor. It also compensates the reactive power required by the DFIG during the fault. Due to these advantages the proposed scheme enhances the FRT capability of the DFIG and also improves the power quality of the whole system. Experimentally, the fault analysis is performed on a 3 hp slip-ring induction generator; simulation results are obtained for a 1.7 MVA DFIG-based WECS under different types of grid faults in MATLAB/Simulink, and the functionality of the proposed scheme is verified.

  7. Hybrid routing technique for a fault-tolerant, integrated information network

    NASA Technical Reports Server (NTRS)

    Meredith, B. D.

    1986-01-01

    The evolutionary growth of the space station and the diverse activities onboard are expected to require a hierarchy of integrated, local area networks capable of supporting data, voice, and video communications. In addition, fault-tolerant network operation is necessary to protect communications between critical systems attached to the net and to relieve the valuable human resources onboard the space station of time-critical data system repair tasks. A key issue for the design of the fault-tolerant, integrated network is the development of a robust routing algorithm which dynamically selects the optimum communication paths through the net. A routing technique is described that adapts to topological changes in the network to support fault-tolerant operation and system evolvability.
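
    The core of such an adaptive routing algorithm is recomputing the optimum path over whatever topology survives a failure. The Python sketch below does this with plain Dijkstra on an assumed example graph; it illustrates the rerouting idea only and is not the hybrid routing technique of the paper.

        # Sketch: recompute the shortest route after a link failure.
        import heapq

        def shortest_path(graph, src, dst):
            """Dijkstra over an adjacency dict {node: {neighbor: cost}}."""
            dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    break
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in graph.get(u, {}).items():
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            if dst not in dist:
                return None                           # destination unreachable
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return list(reversed(path))

        net = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "D": 2},
               "C": {"A": 4, "D": 1}, "D": {"B": 2, "C": 1}}
        print("nominal route:", shortest_path(net, "A", "D"))
        del net["B"]["D"], net["D"]["B"]              # link B-D fails
        print("rerouted:     ", shortest_path(net, "A", "D"))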

  8. Reclosing operation characteristics of the flux-coupling type SFCL in a single-line-to ground fault

    NASA Astrophysics Data System (ADS)

    Jung, B. I.; Cho, Y. S.; Choi, H. S.; Ha, K. H.; Choi, S. G.; Chul, D. C.; Sung, T. H.

    2011-11-01

    The recloser used in distribution systems is a relay system that operates sequentially to protect power systems from transient and permanent faults. This reclosing operation of the recloser can improve the reliability and stability of the power supply. For coordination with the recloser, the superconducting fault current limiter (SFCL) must properly perform during the reclosing operation. This paper analyzed the reclosing operation characteristics of the three-phase flux-coupling type SFCL in the event of a ground fault. The fault current limiting characteristics according to the changing number of turns of the primary and secondary coils were examined. As the number of turns of the primary coil increased, the first maximum fault current decreased. Furthermore, the voltage across the quenched superconducting element also decreased. This means that the power burden of the superconducting element decreases with an increasing number of turns of the primary coil. The fault current limiting characteristic of the SFCL according to the reclosing time limited the fault current within 0.5 cycle (8 ms), which is shorter than the closing time of the recloser. In other words, the superconducting element returned to the superconducting state before the second fault and normally performed the fault current limiting operation. If the SFCL did not recover before the recloser's reclosing time, the normal current flowing in the transmission line after the recovery of the SFCL from the fault would be limited, causing losses. Therefore, a fast recovery time of the SFCL is critical to its coordination with the protection system.

  9. Enhanced fault-tolerant quantum computing in d-level systems.

    PubMed

    Campbell, Earl T

    2014-12-05

    Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.

  10. Non-Pilot Protection of the HVDC Grid

    NASA Astrophysics Data System (ADS)

    Badrkhani Ajaei, Firouz

    This thesis develops a non-pilot protection system for the next generation power transmission system, the High-Voltage Direct Current (HVDC) grid. The HVDC grid protection system is required to be (i) adequately fast to prevent damages and/or converter blocking and (ii) reliable to minimize the impacts of faults. This study is mainly focused on the Modular Multilevel Converter (MMC) -based HVDC grid since the MMC is considered as the building block of the future HVDC systems. The studies reported in this thesis include (i) developing an enhanced equivalent model of the MMC to enable accurate representation of its DC-side fault response, (ii) developing a realistic HVDC-AC test system that includes a five-terminal MMC-based HVDC grid embedded in a large interconnected AC network, (iii) investigating the transient response of the developed test system to AC-side and DC-side disturbances in order to determine the HVDC grid protection requirements, (iv) investigating the fault surge propagation in the HVDC grid to determine the impacts of the DC-side fault location on the measured signals at each relay location, (v) designing a protection algorithm that detects and locates DC-side faults reliably and sufficiently fast to prevent relay malfunction and unnecessary blocking of the converters, and (vi) performing hardware-in-the-loop tests on the designed relay to verify its potential to be implemented in hardware. The results of the off-line time domain transients studies in the PSCAD software platform and the real-time hardware-in-the-loop tests using an enhanced version of the RTDS platform indicate that the developed HVDC grid relay meets all technical requirements including speed, dependability, security, selectivity, and robustness. Moreover, the developed protection algorithm does not impose considerable computational burden on the hardware.
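
    As a flavour of what a non-pilot (local-measurement) criterion can look like, the sketch below trips on the rate of rise of the DC current combined with an undervoltage check, applied to an assumed fault waveform. This is a generic illustration of the principle only; the detection and location algorithm developed in the thesis is not reproduced here, and all thresholds and waveforms are assumptions.

        # Sketch: local di/dt plus undervoltage detection of a DC-side fault.
        import numpy as np

        fs = 100_000                                   # sampling rate, Hz (assumed)
        t = np.arange(0.0, 0.02, 1.0 / fs)
        i_dc = np.where(t < 0.01, 1.0,                 # pre-fault current, p.u.
                        np.minimum(15.0, 1.0 + 8000.0 * (t - 0.01)))   # fault ramp
        v_dc = np.where(t < 0.01, 1.0,
                        np.maximum(0.2, 1.0 - 120.0 * (t - 0.01)))     # voltage collapse

        didt = np.gradient(i_dc, 1.0 / fs)             # numerical di/dt, p.u./s
        trip = (didt > 2000.0) & (v_dc < 0.8)          # both criteria must hold (assumed)
        if trip.any():
            print(f"fault detected at t = {t[np.argmax(trip)] * 1e3:.2f} ms")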

  11. Improvement in operational characteristics of KEPCO’s line-commutation-type superconducting hybrid fault current limiter

    NASA Astrophysics Data System (ADS)

    Yim, S.-W.; Park, B.-C.; Jeong, Y.-T.; Kim, Y.-J.; Yang, S.-E.; Kim, W.-S.; Kim, H.-R.; Du, H.-I.

    2013-01-01

    A 22.9 kV class hybrid fault current limiter (FCL) developed by Korea Electric Power Corporation and LS Industrial Systems in 2006 operates using the line commutation mechanism and begins to limit the fault current after the first half-cycle. The first peak of the fault current is available for protective coordination in the power system. However, it also produces a large electromagnetic force and imposes a huge stress on power facilities such as the main transformer and gas-insulated switchgear. In this study, we improved the operational characteristics of the hybrid FCL in order to reduce the first peak of the fault current. While maintaining the structure of the hybrid FCL system, we developed a superconducting module that detects and limits the fault current during the first half-cycle. To maintain the protective coordination capacity, the hybrid FCL was designed to reduce the first peak value of the fault current by up to approximately 30%. The superconducting module was also designed to produce a minimum AC loss, generating a small, uniform magnetic field distribution during normal operation. Performance tests confirmed that when applied to the hybrid FCL, the superconducting module showed successful current limiting operation without any damage.

  12. Fault finder

    DOEpatents

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
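
    With synchronized clocks at the master and remote units, a standard two-ended arrival-time calculation gives the distance to the fault. The sketch below uses the generic relation d_master = (L + v * (t_master - t_remote)) / 2; the line length, surge propagation speed, and timestamps are assumed values, and the patent's exact computation may differ.

        # Sketch: two-ended fault location from synchronized surge arrival times.
        L = 120e3            # line length, m (assumed)
        v = 2.9e8            # surge propagation speed, m/s (assumed, ~0.97 c)
        t_master = 101.0e-6  # surge arrival time at the master unit, s (assumed)
        t_remote = 301.0e-6  # surge arrival time at the remote unit, s (assumed)

        d_master = (L + v * (t_master - t_remote)) / 2.0
        print(f"fault located {d_master / 1e3:.1f} km from the master unit")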

  13. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    NASA Astrophysics Data System (ADS)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  14. Discovering the Complexity of Capable Faults in Northern Chile

    NASA Astrophysics Data System (ADS)

    Gonzalez, G.; del Río, I. A.; Rojas Orrego, C., Sr.; Astudillo, L. A., Sr.

    2017-12-01

    Great crustal earthquakes (Mw > 7.0) in the upper plate of subduction zones are relatively uncommon and less well documented. We hypothesize that crustal earthquakes are poorly represented in the instrumental record because they have long recurrence intervals. In northern Chile, the extreme long-term aridity permits extraordinary preservation of landforms related to fault activity, making this region a primary target for understanding how upper plate faults work at subduction zones. To understand how these faults relate to crustal seismicity in the long term, we have conducted a detailed palaeoseismological study. We performed a palaeoseismological survey integrating trench logging and UAV-based photogrammetry. Optically stimulated luminescence (OSL) age determinations were carried out to date deposits linked to faulting. In this contribution we present the case study of two primary faults located in the Coastal Cordillera of northern Chile between Iquique (21°S) and Antofagasta (24°S). We estimate the maximum moment magnitude of earthquakes generated on these upper plate faults, their recurrence interval, and the fault-slip rate. We conclude that the studied upper plate faults show complex kinematics on geological timescales. Faults seem to change their kinematics from normal (extension) to reverse (compression) or from normal to transcurrent (compression) according to the stage of the subduction earthquake cycle. Normal displacement is related to coseismic stages and compression is linked to the interseismic period. As a result of this complex interaction, these faults are capable of generating Mw 7.0 earthquakes, with recurrence times on the order of thousands of years, during every stage of the subduction earthquake cycle.

  15. A fuzzy decision tree for fault classification.

    PubMed

    Zio, Enrico; Baraldi, Piero; Popescu, Irina C

    2008-02-01

    In plant accident management, control room operators are required to identify the causes of an accident based on the different patterns of evolution that develop in the monitored process variables. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work proposes a fuzzy approach to fault classification that relies on fuzzy if-then rules inferred from the clustering of available pre-classified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
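
    A minimal sketch of this kind of fuzzy if-then classification is given below (Python). The membership functions, variable names, and fault classes are illustrative assumptions, not the rule base inferred in the paper, and the clustering step that produces the rules is omitted.

      # Illustrative fuzzy if-then fault classification (not the paper's rule base).
      def tri(x, a, b, c):
          """Triangular membership function peaking at b on the interval [a, c]."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      # Linguistic terms for two monitored variables (hypothetical ranges).
      LOW = lambda x: tri(x, -0.5, 0.0, 0.5)
      HIGH = lambda x: tri(x, 0.5, 1.0, 1.5)

      # Each rule reads: "IF flow is LOW AND pressure is HIGH THEN fault is 'valve_stuck'".
      RULES = [
          ((("flow", LOW), ("pressure", HIGH)), "valve_stuck"),
          ((("flow", HIGH), ("pressure", LOW)), "line_leak"),
          ((("flow", LOW), ("pressure", LOW)), "pump_degraded"),
      ]

      def classify(measurements):
          """Fire every rule (min for AND), aggregate per class (max), pick the best class."""
          scores = {}
          for antecedent, fault in RULES:
              strength = min(mf(measurements[var]) for var, mf in antecedent)
              scores[fault] = max(scores.get(fault, 0.0), strength)
          return max(scores, key=scores.get), scores

      print(classify({"flow": 0.1, "pressure": 0.9}))  # -> ('valve_stuck', ...)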

  16. Fault zone hydrogeology

    NASA Astrophysics Data System (ADS)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults on fluid flow patterns remains a challenge and requires a multidisciplinary research effort from structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types, from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations, and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and

  17. Interactions between Polygonal Normal Faults and Larger Normal Faults, Offshore Nova Scotia, Canada

    NASA Astrophysics Data System (ADS)

    Pham, T. Q. H.; Withjack, M. O.; Hanafi, B. R.

    2017-12-01

    Polygonal faults, small normal faults with polygonal arrangements that form in fine-grained sedimentary rocks, can influence ground-water flow and hydrocarbon migration. Using well and 3D seismic-reflection data, we have examined the interactions between polygonal faults and larger normal faults on the passive margin of offshore Nova Scotia, Canada. The larger normal faults strike approximately E-W to NE-SW. Growth strata indicate that the larger normal faults were active in the Late Cretaceous (i.e., during the deposition of the Wyandot Formation) and during the Cenozoic. The polygonal faults were also active during the Cenozoic because they affect the top of the Wyandot Formation, a fine-grained carbonate sedimentary rock, and the overlying Cenozoic strata. Thus, the larger normal faults and the polygonal faults were both active during the Cenozoic. The polygonal faults far from the larger normal faults have a wide range of orientations. Near the larger normal faults, however, most polygonal faults have preferred orientations, either striking parallel or perpendicular to the larger normal faults. Some polygonal faults nucleated at the tip of a larger normal fault, propagated outward, and linked with a second larger normal fault. The strike of these polygonal faults changed as they propagated outward, ranging from parallel to the strike of the original larger normal fault to orthogonal to the strike of the second larger normal fault. These polygonal faults hard-linked the larger normal faults at and above the level of the Wyandot Formation but not below it. We argue that the larger normal faults created stress-enhancement and stress-reorientation zones for the polygonal faults. Numerous small, polygonal faults formed in the stress-enhancement zones near the tips of larger normal faults. Stress-reorientation zones surrounded the larger normal faults far from their tips. Fewer polygonal faults are present in these zones, and, more importantly, most polygonal faults

  18. 14 CFR 29.863 - Flammable fluid fire protection.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Design and Construction Fire Protection... sources, including electrical faults, overheating of equipment, and malfunctioning of protective devices...

  19. Fault diagnosis

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision-aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision-aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about the behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  20. High-Threshold Fault-Tolerant Quantum Computation with Analog Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Fukui, Kosuke; Tomita, Akihisa; Okamoto, Atsushi; Fujii, Keisuke

    2018-04-01

    To implement fault-tolerant quantum computation with continuous variables, the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important technological element. However, it is still challenging to experimentally generate the GKP qubit with the 14.8 dB squeezing level required by existing fault-tolerant quantum computation schemes. To reduce this requirement, we propose a high-threshold fault-tolerant quantum computation scheme with GKP qubits using topologically protected measurement-based quantum computation with the surface code. By harnessing analog information contained in the GKP qubits, we apply analog quantum error correction to the surface code. Furthermore, we develop a method to prevent the squeezing level from decreasing during the construction of the large-scale cluster states for the topologically protected, measurement-based quantum computation. We numerically show that the required squeezing level can be relaxed to less than 10 dB, which is within the reach of current experimental technology. Hence, this work can considerably alleviate this experimental requirement and take a step closer to the realization of large-scale quantum computation.

  1. Results from the NASA Spacecraft Fault Management Workshop: Cost Drivers for Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; McDougal, John; Barley, Bryan; Stephens Karen; Fesq, Lorraine M.

    2010-01-01

    Fault Management, the detection of and response to in-flight anomalies, is a critical aspect of deep-space missions. Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for five missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that four out of the five missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. This paper summarizes the

  2. Validation of Helicopter Gear Condition Indicators Using Seeded Fault Tests

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula; Brandon, E. Bruce

    2013-01-01

    A "seeded fault test," in support of a rotorcraft condition-based maintenance (CBM) program, is an experiment in which a component is tested with a known fault while health monitoring data are collected. These tests are performed at operating conditions comparable to the operating conditions the component would be exposed to while installed on the aircraft. Performance of seeded fault tests is one method used to provide evidence that a Health Usage Monitoring System (HUMS) can replace current maintenance practices required for aircraft airworthiness. Actual in-service experience of the HUMS detecting a component fault is another validation method. This paper will discuss a hybrid validation approach that combines in-service data with seeded fault tests. For this approach, existing in-service HUMS flight data from a naturally occurring component fault will be used to define a component seeded fault test. An example, using spiral bevel gears as the targeted component, will be presented. Since the U.S. Army has begun to develop standards for using seeded fault tests for HUMS validation, the hybrid approach will be mapped to the steps defined within their Aeronautical Design Standard Handbook for CBM. This paper will step through those defined processes and identify additional steps that may be required when using component test rig fault tests to demonstrate helicopter condition indicator (CI) performance. The discussion within this paper will provide the reader with a better appreciation for the challenges faced when defining a seeded fault test for HUMS validation.

  3. A New Kinematic Model for Polymodal Faulting: Implications for Fault Connectivity

    NASA Astrophysics Data System (ADS)

    Healy, D.; Rizzo, R. E.

    2015-12-01

    Conjugate, or bimodal, fault patterns dominate the geological literature on shear failure. Based on Anderson's (1905) application of the Mohr-Coulomb failure criterion, these patterns have been interpreted from all tectonic regimes, including normal, strike-slip and thrust (reverse) faulting. However, a fundamental limitation of the Mohr-Coulomb failure criterion - and others that assume faults form parallel to the intermediate principal stress - is that only plane strain can result from slip on the conjugate faults. However, deformation in the Earth is widely accepted as being three-dimensional, with truly triaxial stresses and strains. Polymodal faulting, with three or more sets of faults forming and slipping simultaneously, can generate three-dimensional strains from truly triaxial stresses. Laboratory experiments and outcrop studies have verified the occurrence of the polymodal fault patterns in nature. The connectivity of polymodal fault networks differs significantly from conjugate fault networks, and this presents challenges to our understanding of faulting and an opportunity to improve our understanding of seismic hazards and fluid flow. Polymodal fault patterns will, in general, have more connected nodes in 2D (and more branch lines in 3D) than comparable conjugate (bimodal) patterns. The anisotropy of permeability is therefore expected to be very different in rocks with polymodal fault patterns in comparison to conjugate fault patterns, and this has implications for the development of hydrocarbon reservoirs, the genesis of ore deposits and the management of aquifers. In this contribution, I assess the published evidence and models for polymodal faulting before presenting a novel kinematic model for general triaxial strain in the brittle field.

  4. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  5. Fault Tree Analysis.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L

    The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.

  6. Soft-Fault Detection Technologies Developed for Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2004-01-01

    The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large, electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern, digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed includes the detection of "soft faults," those not typically covered by current systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems (see the preceding photograph). Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.

  7. 14 CFR 25.863 - Flammable fluid fire protection.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Design and Construction Fire Protection § 25..., including electrical faults, overheating of equipment, and malfunctioning of protective devices. (4) Means...

  8. Fault-tolerant cooperative output regulation for multi-vehicle systems with sensor faults

    NASA Astrophysics Data System (ADS)

    Qin, Liguo; He, Xiao; Zhou, D. H.

    2017-10-01

    This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed in three steps. A cooperative output regulation (COR) controller is designed based on the internal model principle for the case with no sensor faults. A sufficient condition for the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators is proposed to isolate and estimate sensor faults of each follower. Unlike current distributed fault diagnosis schemes for multi-vehicle systems, the presented decentralised fault diagnosis scheme reduces the communication and computation load by using only the information of the vehicle itself. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, simulation results demonstrate the effectiveness of the FTCOR controller.
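
    As a rough illustration of the residual-generation step only, the Python/NumPy sketch below runs a simple observer for one follower and flags a sensor fault when the residual r = y - C*x_hat exceeds a threshold. The system matrices, observer gain, and threshold are invented for the example; the DARE-based COR controller, the fault-matching estimators, and the multi-vehicle coordination from the paper are not reproduced.

      import numpy as np

      # Toy follower model and observer (matrices and gain are illustrative).
      A = np.array([[1.0, 0.1],
                    [0.0, 1.0]])
      B = np.array([[0.0],
                    [0.1]])
      C = np.array([[1.0, 0.0]])
      L = np.array([[0.5],
                    [0.5]])          # observer gain, chosen so A - L C is stable
      THRESHOLD = 0.05               # residual magnitude treated as a sensor fault

      def observer_step(x_hat, u, y):
          """One step of residual generation: predict, compare with the measurement,
          correct the estimate, and flag a possible sensor fault."""
          residual = y - C @ x_hat
          x_hat_next = A @ x_hat + B @ u + L @ residual
          fault_suspected = bool(np.abs(residual).max() > THRESHOLD)
          return x_hat_next, residual, fault_suspected

      x_hat = np.zeros((2, 1))
      y_measured = np.array([[0.2]])                    # e.g. a biased sensor reading
      x_hat, r, suspect = observer_step(x_hat, np.array([[0.0]]), y_measured)
      print(suspect)                                     # True for this residual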

  9. 'My child did not like using sun protection': practices and perceptions of child sun protection among rural black African mothers.

    PubMed

    Kunene, Zamantimande; Albers, Patricia N; Lucas, Robyn M; Banwell, Cathy; Mathee, Angela; Wright, Caradee Y

    2017-08-25

    Photodamage is partially mitigated by darker skin pigmentation, but immune suppression, photoaging and cataracts occur among individuals with all skin types. To assess practices and acceptability to Black African mothers of sun protection equipment for their children living in a rural area, participants were recruited at the time of their child's 18-month vaccinations. Mothers completed a baseline questionnaire on usual sun behaviours and sun protection practices. They were then provided with sun protection equipment and advice. A follow-up questionnaire was administered two weeks later. Mothers reported that during the week prior to the baseline questionnaire, children spent on average less than 1 hour of time outdoors (most often spent in the shade). Most mothers (97%) liked the sun protection equipment. However, many (78 of 86) reported that their child did not like any of the sun protection equipment and two-thirds stated that the sun protection equipment was not easy to use. Among Black Africans in rural northern South Africa, we found a mismatch between parental preferences and child acceptance for using sun protection when outdoors. A better understanding of the health risks of incidental excess sun exposure and potential benefits of sun protection is required among Black Africans.

  10. Multiple Embedded Processors for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  11. Fault-tolerant reactor protection system

    DOEpatents

    Gaubatz, Donald C.

    1997-01-01

    A reactor protection system having four divisions, with quad redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Each division performs independently of the others (asynchronous operation). All communications between the divisions are asynchronous. Each chassis substitutes its own spare sensor reading in the 2/3 vote if a sensor reading from one of the other chassis is faulty or missing. Therefore the presence of at least two valid sensor readings in excess of a set point is required before terminating the output to the hardware logic of a scram inhibition signal even when one of the four sensors is faulty or when one of the divisions is out of service.
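
    The two-level voting logic described above can be summarised in a few lines of Python. This is only a synchronous software sketch of the scheme, with a made-up set point and readings; the actual system implements the divisional vote in asynchronous microprocessor chassis and the final 2/4 vote in a hardware logic panel.

      SET_POINT = 100.0     # hypothetical scram set point for one parameter

      def divisional_trip(readings, set_point=SET_POINT):
          """2/3 vote with spare: a division votes to scram only when at least two
          valid sensor readings exceed the set point.  `readings` holds the four
          values known to the division; None marks a faulty or missing reading,
          for which the spare reading is effectively substituted."""
          valid = [r for r in readings if r is not None]
          return sum(1 for r in valid if r > set_point) >= 2

      def reactor_scram(divisional_votes):
          """2/4 vote across the four divisional scram signals."""
          return sum(divisional_votes) >= 2

      # One sensor is faulty (None), but two valid readings exceed the set point,
      # so each division still trips and the 2/4 vote initiates the scram.
      readings = [None, 104.2, 103.8, 97.5]
      votes = [divisional_trip(readings) for _ in range(4)]
      print(reactor_scram(votes))   # True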

  12. Multi-faults decoupling on turbo-expander using differential-based ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Hongguang; Li, Ming; Li, Cheng; Li, Fucai; Meng, Guang

    2017-09-01

    This paper addresses the decoupling of multiple faults in a turbo-expander rotor system using Differential-based Ensemble Empirical Mode Decomposition (DEEMD). DEEMD is an improved version of DEMD that resolves the problem of mode mixing. The nonlinear behaviors of the turbo-expander, considering a temperature gradient, with crack, rub-impact, and pedestal looseness faults are investigated respectively, so that a baseline for the multi-fault decoupling can be established. DEEMD is then applied to the vibration signals of the rotor system with coupled faults acquired by numerical simulation, and the results indicate that DEEMD can successfully decouple the coupled faults and is more efficient than EEMD. DEEMD is also applied to the vibration signal of a misalignment fault coupled with a rub-impact fault, obtained during adjustment of the experimental system. The conclusion shows that DEEMD can decompose practical multi-fault signals, and the industrial prospects of DEEMD are verified as well.
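
    The ensemble-averaging idea behind ensemble EMD can be sketched as below (Python/NumPy). The `emd` argument stands for a standard empirical mode decomposition routine returning a fixed number of IMFs; it, and DEEMD's differential-based refinement, are not implemented here, so the snippet only shows how white-noise ensembles are averaged to suppress mode mixing.

      import numpy as np

      def ensemble_emd(signal, emd, n_ensemble=100, noise_std=0.2, n_imfs=6, seed=0):
          """Average the IMFs of many noise-added copies of `signal` so that the added
          white noise cancels out and mode mixing is reduced.  `emd` is a placeholder
          for a standard EMD implementation returning an array of shape
          (n_imfs, len(signal))."""
          rng = np.random.default_rng(seed)
          acc = np.zeros((n_imfs, signal.size))
          scale = noise_std * signal.std()
          for _ in range(n_ensemble):
              noisy = signal + scale * rng.standard_normal(signal.size)
              acc += emd(noisy, n_imfs=n_imfs)
          return acc / n_ensemble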

  13. Protection goals in environmental risk assessment: a practical approach.

    PubMed

    Garcia-Alonso, Monica; Raybould, Alan

    2014-12-01

    Policy protection goals are set up in most countries to minimise harm to the environment, humans and animals caused by human activities. Decisions on whether to approve new agricultural products, like pesticides or genetically modified (GM) crops, take into account these policy protection goals. To support decision-making, applications for approval of commercial uses of GM crops usually comprise an environmental risk assessment (ERA). These risk assessments are analytical tools, based on science, that follow a conceptual model that includes a problem formulation step where policy protection goals are considered. However, in most countries, risk assessors face major problems in that policy protection goals set in the legislation are stated in very broad terms and are too ambiguous to be directly applicable in ERAs. This means that risk assessors often have to interpret policy protection goals without clear guidance on what effects would be considered harmful. In this paper we propose a practical approach that may help risk assessors to translate policy protection goals into unambiguous (i.e., operational) protection goals and to establish relevant assessment endpoints and risk hypotheses that can be used in ERAs. Examples are provided to show how this approach can be applied to two areas of environmental concern relevant to the ERAs of GM crops.

  14. Fault Management Design Strategies

    NASA Technical Reports Server (NTRS)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems, and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM), and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis for developing a framework to design and implement FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for the determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

  15. Fault geometries in basement-induced wrench faulting under different initial stress states

    NASA Astrophysics Data System (ADS)

    Naylor, M. A.; Mandl, G.; Supesteijn, C. H. K.

    Scaled sandbox experiments were used to generate models for relative ages, dip, strike and three-dimensional shape of faults in basement-controlled wrench faulting. The basic fault sequence runs from early en échelon Riedel shears and splay faults through 'lower-angle' shears to P shears. The Riedel shears are concave upwards and define a tulip structure in cross-section. In three dimensions, each Riedel shear has a helicoidal form. The sequence of faults and three-dimensional geometry are rationalized in terms of the prevailing stress field and Coulomb-Mohr theory of shear failure. The stress state in the sedimentary overburden before wrenching begins has a substantial influence on the fault geometries and on the final complexity of the fault zone. With the maximum compressive stress (σ1) initially parallel to the basement fault (transtension), Riedel shears are only slightly en échelon, sub-parallel to the basement fault, steeply dipping with a reduced helicoidal aspect. Conversely, with σ1 initially perpendicular to the basement fault (transpression), Riedel shears are strongly oblique to the basement fault strike, have lower dips and an exaggerated helicoidal form; the final fault zone is both wide and complex. We find good agreement between the models and both mechanical theory and natural examples of wrench faulting.

  16. Fault zone property near Xinfengjiang Reservoir using dense, across-fault seismic array

    NASA Astrophysics Data System (ADS)

    Lee, M. H. B.; Yang, H.; Sun, X.

    2017-12-01

    Properties of fault zones are important to the understanding of the earthquake process. Around a fault is a damage zone characterised by lower seismic velocity. This is detectable as a low-velocity zone, and measuring it provides physical properties of the fault zone that are otherwise difficult to sample directly. A dense, across-fault array of short-period seismometers was deployed on an inactive fault near Xinfengjiang Reservoir, and local events were manually picked. By computing synthetic arrival times, we were able to constrain the parameters of the fault zone. Preliminary results show that the fault zone is around 350 m wide with a P and S velocity increase of around 10%. The fault is geologically inferred, and this result suggests that it may instead be a geological layer. The other possibility is that the higher velocity is caused by a combination of fault-zone healing and fluid intrusion. Whilst the result was not able to tell us the nature of the fault, it demonstrated that this method is able to derive properties of a fault zone.

  17. Experimental Demonstration of Fault-Tolerant State Preparation with Superconducting Qubits.

    PubMed

    Takita, Maika; Cross, Andrew W; Córcoles, A D; Chow, Jerry M; Gambetta, Jay M

    2017-11-03

    Robust quantum computation requires encoding delicate quantum information into degrees of freedom that are hard for the environment to change. Quantum encodings have been demonstrated in many physical systems by observing and correcting storage errors, but applications require not just storing information; we must accurately compute even with faulty operations. The theory of fault-tolerant quantum computing illuminates a way forward by providing a foundation and collection of techniques for limiting the spread of errors. Here we implement one of the smallest quantum codes in a five-qubit superconducting transmon device and demonstrate fault-tolerant state preparation. We characterize the resulting code words through quantum process tomography and study the free evolution of the logical observables. Our results are consistent with fault-tolerant state preparation in a protected qubit subspace.

  18. Detection of Rooftop Cooling Unit Faults Based on Electrical Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Peter R.; Laughman, C R.; Leeb, S B.

    Non-intrusive load monitoring (NILM) is accomplished by sampling voltage and current at high rates and reducing the resulting start transients or harmonic contents to concise "signatures". Changes in these signatures can be used to detect, and in many cases directly diagnose, equipment and component faults associated with roof-top cooling units. Use of the NILM for fault detection and diagnosis (FDD) is important because (1) it complements other FDD schemes that are based on thermo-fluid sensors and analyses and (2) it is minimally intrusive (one measuring point in the relatively protected confines of the control panel) and therefore inherently reliable. This paper describes changes in the power signatures of fans and compressors that were found, experimentally and theoretically, to be useful for fault detection.

  19. Evolving transpressional strain fields along the San Andreas fault in southern California: implications for fault branching, fault dip segmentation and strain partitioning

    NASA Astrophysics Data System (ADS)

    Bergh, Steffen; Sylvester, Arthur; Damte, Alula; Indrevær, Kjetil

    2014-05-01

    The San Andreas fault in southern California records only a few large-magnitude earthquakes in historic time, and the recent activity is confined primarily to irregular and discontinuous strike-slip and thrust fault strands at shallow depths of ~5-20 km. Despite this fact, slip along the San Andreas fault is calculated at c. 35 mm/yr, based on c. 160 km of total right-lateral displacement for the southern segment of the fault in the last c. 8 Ma. Field observations also reveal complex fault strands and multiple events of deformation. The presently diffuse high-magnitude crustal movements may be explained by the deformation being largely distributed along more gently dipping reverse faults in fold-thrust belts, in contrast to regions to the north where deformation is less partitioned and localized to narrow strike-slip fault zones. In the Mecca Hills of the Salton trough, transpressional deformation of an uplifted segment of the San Andreas fault in the last ca. 4.0 My is expressed by very complex fault-oblique and fault-parallel (en echelon) folding, zones of uplift (fold-thrust belts), basement-involved reverse and strike-slip faults, and accompanying multiple and pervasive cataclasis and conjugate fracturing of Miocene to Pleistocene sedimentary strata. Our structural analysis of the Mecca Hills addresses the kinematic nature of the San Andreas fault and the mechanisms of uplift and strain-stress distribution along bent fault strands. The San Andreas fault and subsidiary faults define a wide spectrum of kinematic styles, from steep localized strike-slip faults, to moderately dipping faults related to oblique en echelon folds, and gently dipping faults distributed in fold-thrust belt domains. Therefore, the San Andreas fault is not a through-going, steep strike-slip crustal structure, which is commonly assumed as the basis for crustal modeling and earthquake rupture models. The fault trace was steep initially, but was later deformed and modified in multiple phases by oblique en echelon folding

  20. Practical example of the infrastructure protection against rock fall

    NASA Astrophysics Data System (ADS)

    Jirásko, Daniel; Vaníček, Ivan

    2017-09-01

    The protection of transport infrastructure against rock falls is one of the sensitive issues for the Czech Republic. Rock falls, like other geo-hazards typical for the Czech Republic such as landslides and floods, can have a negative impact on the safety and security of this infrastructure. One practical example of how to reduce the risk of rock fall is described in the paper. Great care is devoted to visual inspection, which enables the identification of places with a high potential for failure. With the help of numerical modelling, the extent of the negative impact of a rock fall is estimated. Protection measures are approached in two basic ways. The first utilizes the results of numerical modelling for the optimal design of protection measures, and the second focuses on monitoring rock blocks with a high potential for instability, together with wireless transfer of the measured results. After quick evaluation, e.g. comparison with warning values, protection measures, mostly connected with closure of the potentially affected sector, can be recommended.

  1. Rule-based fault diagnosis of hall sensors and fault-tolerant control of PMSM

    NASA Astrophysics Data System (ADS)

    Song, Ziyou; Li, Jianqiu; Ouyang, Minggao; Gu, Jing; Feng, Xuning; Lu, Dongbin

    2013-07-01

    Hall sensors are widely used for estimating the rotor phase of permanent magnet synchronous motors (PMSM). Rotor position is an essential parameter of the PMSM control algorithm, so Hall sensor faults can be dangerous, yet there is scarcely any research focusing on fault diagnosis and fault-tolerant control of Hall sensors used in PMSMs. From this standpoint, the Hall sensor faults which may occur during PMSM operation are theoretically analyzed. According to the analysis results, a fault diagnosis algorithm for the Hall sensors, based on three rules, is proposed to classify the fault phenomena accurately. Rotor phase estimation algorithms based on one or two Hall sensors are then developed to form the fault-tolerant control algorithm. The fault diagnosis algorithm can detect 60 Hall fault phenomena in total, and all detections can be completed within 1/138 of a rotor rotation period. The fault-tolerant control algorithm achieves smooth torque production, which means the same control effect as the normal control mode (with three Hall sensors). Finally, a PMSM bench test verifies the accuracy and rapidity of the fault diagnosis and fault-tolerant control strategies. The fault diagnosis algorithm can detect all Hall sensor faults promptly, and the fault-tolerant control algorithm allows the PMSM to operate under failure conditions of one or two Hall sensors. In addition, the transitions between healthy control and fault-tolerant control conditions are smooth, without any additional noise and harshness. The proposed algorithms can deal with the Hall sensor faults of PMSMs in real applications, and can be used to realize the fault diagnosis and fault-tolerant control of PMSMs.
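
    The abstract does not reproduce the three diagnosis rules themselves, so the Python sketch below only illustrates the kind of rule-based check involved, assuming three Hall sensors spaced 120 electrical degrees apart (so the combined codes 0b000 and 0b111 never occur and healthy operation steps through a fixed six-state sequence). The commutation sequence used here is one common convention and may not match a particular motor.

      VALID_STATES = {1, 2, 3, 4, 5, 6}                 # 0b000 and 0b111 are impossible
      SEQ_FWD = {1: 3, 3: 2, 2: 6, 6: 4, 4: 5, 5: 1}    # one common 120-degree sequence
      SEQ_REV = {v: k for k, v in SEQ_FWD.items()}

      def hall_state(h_a, h_b, h_c):
          """Pack the three Hall signals into a 3-bit state code."""
          return (h_a << 2) | (h_b << 1) | h_c

      def check_transition(prev_state, new_state):
          """Return None if the transition looks healthy, otherwise a fault label."""
          if new_state not in VALID_STATES:
              return "invalid_code"      # e.g. one sensor stuck low or stuck high
          if prev_state in VALID_STATES and new_state not in (
                  prev_state, SEQ_FWD[prev_state], SEQ_REV[prev_state]):
              return "sequence_error"    # skipped or impossible step
          return None

      print(check_transition(hall_state(1, 0, 1), hall_state(0, 0, 0)))  # invalid_code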

  2. The susitna glacier thrust fault: Characteristics of surface ruptures on the fault that initiated the 2002 denali fault earthquake

    USGS Publications Warehouse

    Crone, A.J.; Personius, S.F.; Craw, P.A.; Haeussler, P.J.; Staft, L.A.

    2004-01-01

    The 3 November 2002 Mw 7.9 Denali fault earthquake sequence initiated on the newly discovered Susitna Glacier thrust fault and caused 48 km of surface rupture. Rupture of the Susitna Glacier fault generated scarps on ice of the Susitna and West Fork glaciers and on tundra and surficial deposits along the southern front of the central Alaska Range. Based on detailed mapping, 27 topographic profiles, and field observations, we document the characteristics and slip distribution of the 2002 ruptures and describe evidence of pre-2002 ruptures on the fault. The 2002 surface faulting produced structures that range from simple folds on a single trace to complex thrust-fault ruptures and pressure ridges on multiple, sinuous strands. The deformation zone is locally more than 1 km wide. We measured a maximum vertical displacement of 5.4 m on the south-directed main thrust. North-directed backthrusts have more than 4 m of surface offset. We measured a well-constrained near-surface fault dip of about 19° at one site, which is considerably less than seismologically determined values of 35°-48°. Surface-rupture data yield an estimated magnitude of Mw 7.3 for the fault, which is similar to the seismological value of Mw 7.2. Comparison of field and seismological data suggests that the Susitna Glacier fault is part of a large positive flower structure associated with northwest-directed transpressive deformation on the Denali fault. Prehistoric scarps are evidence of previous rupture of the Susitna Glacier fault, but additional work is needed to determine if past failures of the Susitna Glacier fault have consistently induced rupture of the Denali fault.

  3. Fault Identification Based on Nlpca in Complex Electrical Engineering

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping; Zhang, Jinfang

    2012-07-01

    Faults are inevitable in any complex engineered system. The electric power system is essentially a nonlinear system and one of the most complex artificial systems in the world. In our research, based on real-time measurements from phasor measurement units, and under the influence of white Gaussian noise (with a standard deviation of 0.01 and zero mean error), we mainly used nonlinear principal component analysis (NLPCA) to resolve the fault identification problem in complex electrical engineering. The simulation results show that a fault in complex electrical engineering usually corresponds to the variable with the maximum absolute coefficient in the first principal component. This research has significant theoretical value and practical engineering significance.
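
    The reported criterion, namely that the faulted variable is the one with the largest-magnitude loading in the first principal component, can be illustrated with ordinary linear PCA on synthetic data, as in the NumPy sketch below. The paper itself applies nonlinear PCA (NLPCA), which is not reproduced here, and the data are invented.

      import numpy as np

      rng = np.random.default_rng(1)
      n_samples, n_vars = 200, 8
      X = 0.01 * rng.standard_normal((n_samples, n_vars))   # Gaussian noise, sigma = 0.01
      X[:, 5] += np.linspace(0.0, 1.0, n_samples)           # inject a fault on variable 5

      Xc = X - X.mean(axis=0)                                # centre the measurements
      _, _, vt = np.linalg.svd(Xc, full_matrices=False)      # rows of vt are PC loadings
      first_pc = vt[0]
      print("suspected faulted variable:", int(np.argmax(np.abs(first_pc))))   # -> 5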

  4. Fault-tolerant reactor protection system

    DOEpatents

    Gaubatz, D.C.

    1997-04-15

    A reactor protection system is disclosed having four divisions, with quad redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Each division performs independently of the others (asynchronous operation). All communications between the divisions are asynchronous. Each chassis substitutes its own spare sensor reading in the 2/3 vote if a sensor reading from one of the other chassis is faulty or missing. Therefore the presence of at least two valid sensor readings in excess of a set point is required before terminating the output to the hardware logic of a scram inhibition signal even when one of the four sensors is faulty or when one of the divisions is out of service. 16 figs.

  5. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method, an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). This classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a Hidden Markov Model. First, the dynamic observation vector in the measurement space is processed by DTW to obtain an error vector containing the fault features of the system under test. A SOFM network is then used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aero-engines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
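
    Of the three stages (DTW, SOFM, HMM), only the dynamic time warping step is sketched below in Python, since it is the most self-contained; the SOFM quantizer and HMM classifier are omitted, and the sequences are assumed one-dimensional.

      import numpy as np

      def dtw_distance(a, b):
          """Classic dynamic-programming DTW alignment cost between two 1-D sequences."""
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      # Zero cost: the second sequence is just a time-stretched copy of the first.
      print(dtw_distance([0, 1, 2, 3, 2], [0, 0, 1, 2, 3, 3, 2]))   # 0.0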

  6. Dynamic test input generation for multiple-fault isolation

    NASA Technical Reports Server (NTRS)

    Schaefer, Phil

    1990-01-01

    Recent work in Causal Reasoning has provided practical techniques for multiple fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle. Using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.

  7. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1987-01-01

    Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.
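
    The voter at the heart of the N-version scheme can be written in a few lines. The Python sketch below is not the experiment's harness, and the three "versions" are stand-ins for independently written implementations.

      from collections import Counter

      def vote(outputs):
          """Return the output produced by a majority of versions, or raise if the
          versions disagree without a majority."""
          winner, count = Counter(outputs).most_common(1)[0]
          if count > len(outputs) // 2:
              return winner
          raise RuntimeError("no majority among versions")

      # Three independently written versions of the same routine would be run here;
      # the lambdas merely stand in for them (the third contains a seeded bug).
      versions = [lambda x: x * x, lambda x: x ** 2, lambda x: x * x + 1]
      print(vote([v(3) for v in versions]))   # 9 -- the faulty version is outvoted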

  8. Fault evolution in volcanic tuffs and quartz-rich eolian sandstone as mechanical analogs for faulting in Martian pyroclastic deposits

    NASA Astrophysics Data System (ADS)

    Okubo, C. H.

    2014-12-01

    In order to establish a foundation for studies of faulting in Martian rocks and soils in volcanic terrain, the distribution of brittle strain around faults within the North Menan Butte Tuff in the eastern Snake River Plain, Idaho and the Joe Lott Tuff Member of the Mount Belknap Volcanics, Utah, has been recently described. These studies employed a combination of macroscopic and microscopic observations, including measurements of in situ permeability as a proxy for non-localized brittle deformation of the host rock. In areas where the tuff retained its primary granular nature at the time of deformation, initial plastic yielding in both tuffs occurred along deformation bands. Both compactional and dilational types of deformation bands were observed, and faulting occurred along clusters of deformation bands. Where secondary alteration processes imparted a massive texture to the tuff, brittle deformation was accommodated along fractures. Host-rock permeability exhibits little variation from non-deformed values in the North Menan Butte Tuff, whereas host rock permeability is reduced by roughly an order of magnitude through compaction alone (no alteration) in the Joe Lott Tuff. To create a bridge between these observations in tuff and the more substantial body of work centered on deformation band formation and faulting in quartz-rich sandstones, the same techniques employed in the North Menan Butte Tuff and the Joe Lott Tuff have also been applied to a kilometer-scale fault in the Jurassic Navajo Sandstone in the Waterpocket Fold, Utah. These observations demonstrate that the manifestation of strain and evolution of faulting in the Mars-analog tuffs are comparable to that in quartz-rich sandstones. Therefore, current understanding of brittle deformation in quartz-rich sandstones can be used to inform investigations into fault growth within porous tuffs on Mars. A discussion of these observations, practical limitations, and directions for future work are presented here.

  9. The Fault Block Model: A novel approach for faulted gas reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ursin, J.R.; Moerkeseth, P.O.

    1994-12-31

    The Fault Block Model was designed for the development of gas production from Sleipner Vest. The reservoir consists of marginal marine sandstone of the Hugine Formation. Modeling of highly faulted and compartmentalized reservoirs is severely impeded by the nature and extent of known and undetected faults and, in particular, their effectiveness as flow barriers. The model presented is efficient and, for highly faulted reservoirs, superior to other models (i.e., grid-based simulators) because it minimizes the effect of major undetected faults and geological uncertainties. In this article the authors present the Fault Block Model as a new tool to better understand the implications of geological uncertainty in faulted gas reservoirs with good productivity, with respect to uncertainty in well coverage and optimum gas recovery.

  10. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  11. Fiber Bragg grating sensor for fault detection in high voltage overhead transmission lines

    NASA Astrophysics Data System (ADS)

    Moghadas, Amin

    2011-12-01

    A fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in a magnetostrictive material, which is then detected by fiber Bragg grating (FBG) sensors. The fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signals. Any surge in the magnetic field relates to an increased fault current at a certain location. Also, the fault location can be precisely determined with an artificial neural network (ANN) algorithm, which can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG sensors and can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems.
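
    The detection logic implied above is that a fault current raises the magnetic field, strains the magnetostrictive element, and shifts the Bragg wavelength (lambda_B = 2 * n_eff * Lambda), so a shift beyond a threshold indicates a fault near that sensor. A minimal Python sketch of this thresholding follows; the nominal wavelength and threshold are illustrative assumptions, and the ANN-based fault location and classification are not shown.

      NOMINAL_WL_NM = 1550.000      # unstrained Bragg wavelength of each FBG (assumed)
      SHIFT_THRESHOLD_NM = 0.050    # hypothetical trip threshold on the wavelength shift

      def faulted_sensors(measured_wavelengths_nm):
          """Return the indices of FBG sensors whose reflected Bragg peak has shifted
          beyond the threshold, indicating a fault-level current near that sensor."""
          return [i for i, wl in enumerate(measured_wavelengths_nm)
                  if abs(wl - NOMINAL_WL_NM) > SHIFT_THRESHOLD_NM]

      print(faulted_sensors([1550.002, 1550.001, 1550.210, 1550.004]))   # [2]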

  12. Fiber Bragg Grating Sensor for Fault Detection in Radial and Network Transmission Lines

    PubMed Central

    Moghadas, Amin A.; Shadaram, Mehdi

    2010-01-01

    In this paper, a fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in a magnetostrictive material, which is then detected by a Fiber Bragg Grating (FBG). The fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signal. Any surge in the magnetic field relates to an increased fault current at a certain location. Also, the fault location can be precisely determined with an artificial neural network (ANN) algorithm, which can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG and can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems. PMID:22163416

  13. Fault reactivation: The Picuris-Pecos fault system of north-central New Mexico

    NASA Astrophysics Data System (ADS)

    McDonald, David Wilson

    The PPFS is a N-trending fault system extending over 80 km in the Sangre de Cristo Mountains of northern New Mexico. Precambrian basement rocks are offset 37 km in a right-lateral sense; however, this offset includes dextral strike-slip (Precambrian), mostly normal dip-slip (Pennsylvanian), mostly reverse dip-slip (Early Laramide), limited strike-slip (Late Laramide) and mostly normal dip-slip (Cenozoic). The PPFS is broken into at least 3 segments by the NE-trending Embudo fault and by several Laramide age NW-trending tear faults. These segments are (from N to S): the Taos, the Picuris, and the Pecos segments. On the east side of the Picuris segment in the Picuris Mountains, the Oligocene-Miocene age Miranda graben developed and represents a complex extension zone south of the Embudo fault. Regional analysis of remotely sensed data and geologic maps indicate that lineaments subparallel to the trace of the PPFS are longer and less frequent than lineaments that trend orthogonal to the PPFS. Significant cross cutting faults and subtle changes in fault trends in each segment are clear in the lineament data. Detailed mapping in the eastern Picuris Mountains showed that the favorably oriented Picuris segment was not reactivated in the Tertiary development of the Rio Grande rift. Segmentation of the PPFS and post-Laramide annealing of the Picuris segment are interpreted to have resulted in the development of the subparallel La Serna fault. The Picuris segment of the PPFS is offset by several E-ESE trending faults. These faults are Late Cenozoic in age and interpreted to be related to the uplift of the Picuris Mountains and the continuing sinistral motion on the Embudo fault. Differential subsidence within the Miranda graben caused the development of several synthetic and orthogonal faults between the bounding La Serna and Miranda faults. Analysis of over 10,000 outcrop scale brittle structures reveals a strong correlation between faults and fracture systems. The dominant

  14. [Child protection: Pediatricians' knowledge and practices in Brittany].

    PubMed

    Ayou, C; Gauducheau, E; Arrieta, A; Roussey, M; Marichal, M; Vabres, N; Balençon, M

    2018-04-01

    On 5 March 2007, the law concerning the child protection system was reformed. Since then, child protection services have been responsible for the management of child abuse and neglect. Reporting and asking for child protection is now easier for every physician by submitting a "preoccupying information" form. A study conducted in 2014 in the general practitioners (GP) in the Ille-et-Vilaine department showed that they were quite unfamiliar with the child protection updates and that they needed special training. We wished to study the knowledge and practices of the pediatricians in Brittany and compare these results to the previous study. An anonymous postal investigation was conducted between May and July 2014. The questionnaire was referred to the previous study so the results would be comparable. A total of 134 pediatricians (including 99 women) of the 316 pediatricians of Brittany answered our questionnaire regarding their activity and their knowledge about child abuse and neglect. These results were analyzed and compared to the data of GPs in Ille-et-Vilaine. Only 4.4 % of the pediatricians obtained more than 80 % correct answers and 12 % of the pediatricians obtained fewer than 50 % correct answers. Among the pediatricians, 41 % of them had not reported a single time since 2007. The pediatricians who obtained better results (P<0.001) had more training and were more often in contact with child abuse and neglect than the GPs. The most common reasons that clinicians gave for not reporting were lack of a return from social services after a report, lack of training and the fear of making a misdiagnosis. Indeed only 9 % had received feedback from social services. To make reporting easier, 92 % of the pediatricians would like training and 97 % found a simple practice guide on child abuse useful. Child protection is a neglected subject, including by pediatricians. To fight against professional denial and isolation, a substantial effort is still needed

  15. Stafford fault system: 120 million year fault movement history of northern Virginia

    USGS Publications Warehouse

    Powars, David S.; Catchings, Rufus D.; Horton, J. Wright; Schindler, J. Stephen; Pavich, Milan J.

    2015-01-01

    The Stafford fault system, located in the mid-Atlantic coastal plain of the eastern United States, provides the most complete record of fault movement during the past ~120 m.y. across the Virginia, Washington, District of Columbia (D.C.), and Maryland region, including displacement of Pleistocene terrace gravels. The Stafford fault system is close to and aligned with the Piedmont Spotsylvania and Long Branch fault zones. The dominant southwest-northeast trend of strong shaking from the 23 August 2011, moment magnitude Mw 5.8 Mineral, Virginia, earthquake is consistent with the connectivity of these faults, as seismic energy appears to have traveled along the documented and proposed extensions of the Stafford fault system into the Washington, D.C., area. Some other faults documented in the nearby coastal plain are clearly rooted in crystalline basement faults, especially along terrane boundaries. These coastal plain faults are commonly assumed to have undergone relatively uniform movement through time, with average slip rates from 0.3 to 1.5 m/m.y. However, there were higher rates during the Paleocene–early Eocene and the Pliocene (4.4–27.4 m/m.y), suggesting that slip occurred primarily during large earthquakes. Further investigation of the Stafford fault system is needed to understand potential earthquake hazards for the Virginia, Maryland, and Washington, D.C., area. The combined Stafford fault system and aligned Piedmont faults are ~180 km long, so if the combined fault system ruptured in a single event, it would result in a significantly larger magnitude earthquake than the Mineral earthquake. Many structures most strongly affected during the Mineral earthquake are along or near the Stafford fault system and its proposed northeastward extension.

  16. A situational crime prevention analysis of Anglican clergy's child protective practices.

    PubMed

    Guerzoni, Michael Andre

    2018-03-01

    To date, a predominant focus within the field of 'clerical collar crime' has revolved around institutional-level church responses to child sexual abuse events, survivors and offenders. Comparatively, little attention has been directed towards the micro-level and in particular, examining clerical responses to child sexual abuse. This article presents empirical findings concerning the 'everyday' child protective practices of Anglican clergy in the Diocese of Tasmania, Australia. Research data was acquired through open-ended qualitative interviews conducted with a sample of 34 clergy in a broader study of clerical culture, habitus and life amidst the 'church abuse crisis'. The framework of Situational Crime Prevention is employed to evaluate the feasibility of clergy's child-safe practices and comment on how these practices could be further altered through professional development. Research findings demonstrate that clergy possess an active awareness of risk, and execute a series of protective measures to minimise both sexual interactions with children and allegations of impropriety. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. ECOSYSTEM RESTORATION: MANAGEMENT PRACTICES FOR PROTECTING AND ENHANCING AQUATIC RESOURCES

    EPA Science Inventory

    This poster describes research that addresses the question: Which management practices are most successful for protection and restoration of ecological resources? The Ecosystem Restoration Research Program of EPA/ORD is designed to conduct basic and applied field research to eva...

  18. Study on Practical Application of Turboprop Engine Condition Monitoring and Fault Diagnostic System Using Fuzzy-Neuro Algorithms

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Lim, Semyeong; Kim, Keunwoo

    2013-03-01

    The Neural Networks is mostly used to engine fault diagnostic system due to its good learning performance, but it has a drawback due to low accuracy and long learning time to build learning data base. This work builds inversely a base performance model of a turboprop engine to be used for a high altitude operation UAV using measuring performance data, and proposes a fault diagnostic system using the base performance model and artificial intelligent methods such as Fuzzy and Neural Networks. Each real engine performance model, which is named as the base performance model that can simulate a new engine performance, is inversely made using its performance test data. Therefore the condition monitoring of each engine can be more precisely carried out through comparison with measuring performance data. The proposed diagnostic system identifies firstly the faulted components using Fuzzy Logic, and then quantifies faults of the identified components using Neural Networks leaned by fault learning data base obtained from the developed base performance model. In leaning the measuring performance data of the faulted components, the FFBP (Feed Forward Back Propagation) is used. In order to user's friendly purpose, the proposed diagnostic program is coded by the GUI type using MATLAB.

  19. Fault-scale controls on rift geometry: the Bilila-Mtakataka Fault, Malawi

    NASA Astrophysics Data System (ADS)

    Hodge, M.; Fagereng, A.; Biggs, J.; Mdala, H. S.

    2017-12-01

    Border faults that develop during initial stages of rifting determine the geometry of rifts and passive margins. At outcrop and regional scales, it has been suggested that border fault orientation may be controlled by reactivation of pre-existing weaknesses. Here, we perform a multi-scale investigation on the influence of anisotropic fabrics along a major developing border fault in the southern East African Rift, Malawi. The 130 km long Bilila-Mtakataka fault has been proposed to have slipped in a single MW 8 earthquake with 10 m of normal displacement. The fault is marked by an 11±7 m high scarp with an average trend that is oblique to the current plate motion. Variations in scarp height are greatest at lithological boundaries and where the scarp switches between following and cross-cutting high-grade metamorphic foliation. Based on the scarp's geometry and morphology, we define 6 geometrically distinct segments. We suggest that the segments link to at least one deeper structure that strikes parallel to the average scarp trend, an orientation consistent with the kinematics of an early phase of rift initiation. The slip required on a deep fault(s) to match the height of the current scarp suggests multiple earthquakes along the fault. We test this hypothesis by studying the scarp morphology using high-resolution satellite data. Our results suggest that during the earthquake(s) that formed the current scarp, the propagation of the fault toward the surface locally followed moderately-dipping foliation well oriented for reactivation. In conclusion, although well oriented pre-existing weaknesses locally influence shallow fault geometry, large-scale border fault geometry appears primarily controlled by the stress field at the time of fault initiation.

  20. Tectono-stratigraphic evolution of normal fault zones: Thal Fault Zone, Suez Rift, Egypt

    NASA Astrophysics Data System (ADS)

    Leppard, Christopher William

    The evolution of linkage of normal fault populations to form continuous, basin bounding normal fault zones is recognised as an important control on the stratigraphic evolution of rift-basins. This project aims to investigate the temporal and spatial evolution of normal fault populations and associated syn-rift deposits from the initiation of early-formed, isolated normal faults (rift-initiation) to the development of a through-going fault zone (rift-climax) by documenting the tectono-stratigraphic evolution of the Sarbut EI Gamal segment of the exceptionally well-exposed Thai fault zone, Suez Rift, Egypt. A number of dated stratal surfaces mapped around the syn-rift depocentre of the Sarbut El Gamal segment allow constraints to be placed on the timing and style of deformation, and the spatial variability of facies along this segment of the fault zone. Data collected indicates that during the first 3.5 My of rifting the structural style was characterised by numerous, closely spaced, short (< 3 km), low displacement (< 200 m) synthetic and antithetic normal faults within 1 - 2 km of the present-day fault segment trace, accommodating surface deformation associated with the development of a fault propagation monocline above the buried, pre-cursor strands of the Sarbut El Gamal fault segment. The progressive localisation of displacement onto the fault segment during rift-climax resulted in the development of a major, surface-breaking fault 3.5 - 5 My after the onset of rifting and is recorded by the death of early-formed synthetic and antithetic faults up-section, and thickening of syn-rift strata towards the fault segment. The influence of intrabasinal highs at the tips of the Sarbut EI Gamal fault segment on the pre-rift sub-crop level, combined with observations from the early-formed structures and coeval deposits suggest that the overall length of the fault segment was fixed from an early stage. The fault segment is interpreted to have grown through rapid lateral

  1. Fault kinematics and localised inversion within the Troms-Finnmark Fault Complex, SW Barents Sea

    NASA Astrophysics Data System (ADS)

    Zervas, I.; Omosanya, K. O.; Lippard, S. J.; Johansen, S. E.

    2018-04-01

    The areas bounding the Troms-Finnmark Fault Complex are affected by complex tectonic evolution. In this work, the history of fault growth, reactivation, and inversion of major faults in the Troms-Finnmark Fault Complex and the Ringvassøy Loppa Fault Complex is interpreted from three-dimensional seismic data, structural maps and fault displacement plots. Our results reveal eight normal faults bounding rotated fault blocks in the Troms-Finnmark Fault Complex. Both the throw-depth and displacement-distance plots show that the faults exhibit complex configurations of lateral and vertical segmentation with varied profiles. Some of the faults were reactivated by dip-linkages during the Late Jurassic and exhibit polycyclic fault growth, including radial, syn-sedimentary, and hybrid propagation. Localised positive inversion is the main mechanism of fault reactivation occurring at the Troms-Finnmark Fault Complex. The observed structural styles include folds associated with extensional faults, folded growth wedges and inverted depocentres. Localised inversion was intermittent with rifting during the Middle Jurassic-Early Cretaceous at the boundaries of the Troms-Finnmark Fault Complex to the Finnmark Platform. Additionally, tectonic inversion was more intense at the boundaries of the two fault complexes, affecting Middle Triassic to Early Cretaceous strata. Our study shows that localised folding is either a product of compressional forces or of lateral movements in the Troms-Finnmark Fault Complex. Regional stresses due to the uplift in the Loppa High and halokinesis in the Tromsø Basin are likely additional causes of inversion in the Troms-Finnmark Fault Complex.

  2. Methodologies for Adaptive Flight Envelope Estimation and Protection

    NASA Technical Reports Server (NTRS)

    Tang, Liang; Roemer, Michael; Ge, Jianhua; Crassidis, Agamemnon; Prasad, J. V. R.; Belcastro, Christine

    2009-01-01

    This paper reports the latest development of several techniques for adaptive flight envelope estimation and protection system for aircraft under damage upset conditions. Through the integration of advanced fault detection algorithms, real-time system identification of the damage/faulted aircraft and flight envelop estimation, real-time decision support can be executed autonomously for improving damage tolerance and flight recoverability. Particularly, a bank of adaptive nonlinear fault detection and isolation estimators were developed for flight control actuator faults; a real-time system identification method was developed for assessing the dynamics and performance limitation of impaired aircraft; online learning neural networks were used to approximate selected aircraft dynamics which were then inverted to estimate command margins. As off-line training of network weights is not required, the method has the advantage of adapting to varying flight conditions and different vehicle configurations. The key benefit of the envelope estimation and protection system is that it allows the aircraft to fly close to its limit boundary by constantly updating the controller command limits during flight. The developed techniques were demonstrated on NASA s Generic Transport Model (GTM) simulation environments with simulated actuator faults. Simulation results and remarks on future work are presented.

  3. Pseudo-fault signal assisted EMD for fault detection and isolation in rotating machines

    NASA Astrophysics Data System (ADS)

    Singh, Dheeraj Sharan; Zhao, Qing

    2016-12-01

    This paper presents a novel data driven technique for the detection and isolation of faults, which generate impacts in a rotating equipment. The technique is built upon the principles of empirical mode decomposition (EMD), envelope analysis and pseudo-fault signal for fault separation. Firstly, the most dominant intrinsic mode function (IMF) is identified using EMD of a raw signal, which contains all the necessary information about the faults. The envelope of this IMF is often modulated with multiple vibration sources and noise. A second level decomposition is performed by applying pseudo-fault signal (PFS) assisted EMD on the envelope. A pseudo-fault signal is constructed based on the known fault characteristic frequency of the particular machine. The objective of using external (pseudo-fault) signal is to isolate different fault frequencies, present in the envelope . The pseudo-fault signal serves dual purposes: (i) it solves the mode mixing problem inherent in EMD, (ii) it isolates and quantifies a particular fault frequency component. The proposed technique is suitable for real-time implementation, which has also been validated on simulated fault and experimental data corresponding to a bearing and a gear-box set-up, respectively.

  4. Fault tree applications within the safety program of Idaho Nuclear Corporation

    NASA Technical Reports Server (NTRS)

    Vesely, W. E.

    1971-01-01

    Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure or accident causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.

  5. Integration of InSAR and GIS in the Study of Surface Faults Caused by Subsidence-Creep-Fault Processes in Celaya, Guanajuato, Mexico

    NASA Astrophysics Data System (ADS)

    Avila-Olivera, Jorge A.; Farina, Paolo; Garduño-Monroy, Victor H.

    2008-05-01

    In Celaya city, Subsidence-Creep-Fault Processes (SCFP) began to become visible at the beginning of the 1980s with the sprouting of the crackings that gave rise to the surface faults "Oriente" and "Poniente". At the present time, the city is being affected by five surface faults that display a preferential NNW-SSE direction, parallel to the regional faulting system "Taxco-San Miguel de Allende". In order to study the SCFP in the city, the first step was to obtain a map of surface faults, by integrating in a GIS field survey and an urban city plan. The following step was to create a map of the current phreatic level decline in city with the information of deep wells and using the "kriging" method in order to obtain a continuous surface. Finally the interferograms maps resulted of an InSAR analysis of 9 SAR images covering the time interval between July 12 of 2003 and May 27 of 2006 were integrated to a GIS. All the maps generated, show how the surface faults divide the city from North to South, in two zones that behave in a different way. The difference of the phreatic level decline between these two zones is 60 m; and the InSAR study revealed that the Western zone practically remains stable, while sinkings between the surface faults "Oriente" and "Universidad Pedagógica" are present, as well as in portions NE and SE of the city, all of these sinkings between 7 and 10 cm/year.

  6. Integration of InSAR and GIS in the Study of Surface Faults Caused by Subsidence-Creep-Fault Processes in Celaya, Guanajuato, Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila-Olivera, Jorge A.; Instituto de Investigaciones Metalurgicas, Universidad Michoacana de San Nicolas de Hidalgo, C.U., 58030 Morelia, Michoacan; Farina, Paolo

    2008-05-07

    In Celaya city, Subsidence-Creep-Fault Processes (SCFP) began to become visible at the beginning of the 1980s with the sprouting of the crackings that gave rise to the surface faults 'Oriente' and 'Poniente'. At the present time, the city is being affected by five surface faults that display a preferential NNW-SSE direction, parallel to the regional faulting system 'Taxco-San Miguel de Allende'. In order to study the SCFP in the city, the first step was to obtain a map of surface faults, by integrating in a GIS field survey and an urban city plan. The following step was to create amore » map of the current phreatic level decline in city with the information of deep wells and using the 'kriging' method in order to obtain a continuous surface. Finally the interferograms maps resulted of an InSAR analysis of 9 SAR images covering the time interval between July 12 of 2003 and May 27 of 2006 were integrated to a GIS. All the maps generated, show how the surface faults divide the city from North to South, in two zones that behave in a different way. The difference of the phreatic level decline between these two zones is 60 m; and the InSAR study revealed that the Western zone practically remains stable, while sinkings between the surface faults 'Oriente' and 'Universidad Pedagogica' are present, as well as in portions NE and SE of the city, all of these sinkings between 7 and 10 cm/year.« less

  7. Protection of autonomous microgrids using agent-based distributed communication

    DOE PAGES

    Cintuglu, Mehmet H.; Ma, Tan; Mohammed, Osama A.

    2016-04-06

    This study presents a real-time implementation of autonomous microgrid protection using agent-based distributed communication. Protection of an autonomous microgrid requires special considerations compared to large scale distribution net-works due to the presence of power converters and relatively low inertia. In this work, we introduce a practical overcurrent and a frequency selectivity method to overcome conventional limitations. The proposed overcurrent scheme defines a selectivity mechanism considering the remedial action scheme (RAS) of the microgrid after a fault instant based on feeder characteristics and the location of the intelligent electronic devices (IEDs). A synchrophasor-based online frequency selectivity approach is proposed to avoidmore » pulse loading effects in low inertia microgrids. Experimental results are presented for verification of the pro-posed schemes using a laboratory based microgrid. The setup was composed of actual generation units and IEDs using IEC 61850 protocol. The experimental results were in excellent agreement with the proposed protection scheme.« less

  8. Protection of autonomous microgrids using agent-based distributed communication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cintuglu, Mehmet H.; Ma, Tan; Mohammed, Osama A.

    This study presents a real-time implementation of autonomous microgrid protection using agent-based distributed communication. Protection of an autonomous microgrid requires special considerations compared to large scale distribution net-works due to the presence of power converters and relatively low inertia. In this work, we introduce a practical overcurrent and a frequency selectivity method to overcome conventional limitations. The proposed overcurrent scheme defines a selectivity mechanism considering the remedial action scheme (RAS) of the microgrid after a fault instant based on feeder characteristics and the location of the intelligent electronic devices (IEDs). A synchrophasor-based online frequency selectivity approach is proposed to avoidmore » pulse loading effects in low inertia microgrids. Experimental results are presented for verification of the pro-posed schemes using a laboratory based microgrid. The setup was composed of actual generation units and IEDs using IEC 61850 protocol. The experimental results were in excellent agreement with the proposed protection scheme.« less

  9. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate the software reliability measures, most of which have the same following agreements: 1) it is a common phenomenon that during the testing phase, the fault detection rate always changes; 2) as a result of imperfect debugging, fault removal has been related to a fault re-introduction rate. But there are few SRGMs in the literature that differentiate between fault detection and fault removal, i.e. they seldom consider the imperfect fault removal efficiency. But in practical software developing process, fault removal efficiency cannot always be perfect, i.e. the failures detected might not be removed completely and the original faults might still exist and new faults might be introduced meanwhile, which is referred to as imperfect debugging phenomenon. In this study, a model aiming to incorporate fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to consider the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results exhibit that the model can give a better fitting and predictive performance.

  10. A testing-coverage software reliability model considering fault removal efficiency and error generation

    PubMed Central

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate the software reliability measures, most of which have the same following agreements: 1) it is a common phenomenon that during the testing phase, the fault detection rate always changes; 2) as a result of imperfect debugging, fault removal has been related to a fault re-introduction rate. But there are few SRGMs in the literature that differentiate between fault detection and fault removal, i.e. they seldom consider the imperfect fault removal efficiency. But in practical software developing process, fault removal efficiency cannot always be perfect, i.e. the failures detected might not be removed completely and the original faults might still exist and new faults might be introduced meanwhile, which is referred to as imperfect debugging phenomenon. In this study, a model aiming to incorporate fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to consider the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results exhibit that the model can give a better fitting and predictive performance. PMID:28750091

  11. Fault detection and diagnosis in asymmetric multilevel inverter using artificial neural network

    NASA Astrophysics Data System (ADS)

    Raj, Nithin; Jagadanand, G.; George, Saly

    2018-04-01

    The increased component requirement to realise multilevel inverter (MLI) fallout in a higher fault prospect due to power semiconductors. In this scenario, efficient fault detection and diagnosis (FDD) strategies to detect and locate the power semiconductor faults have to be incorporated in addition to the conventional protection systems. Even though a number of FDD methods have been introduced in the symmetrical cascaded H-bridge (CHB) MLIs, very few methods address the FDD in asymmetric CHB-MLIs. In this paper, the gate-open circuit FDD strategy in asymmetric CHB-MLI is presented. Here, a single artificial neural network (ANN) is used to detect and diagnose the fault in both binary and trinary configurations of the asymmetric CHB-MLIs. In this method, features of the output voltage of the MLIs are used as to train the ANN for FDD method. The results prove the validity of the proposed method in detecting and locating the fault in both asymmetric MLI configurations. Finally, the ANN response to the input parameter variation is also analysed to access the performance of the proposed ANN-based FDD strategy.

  12. Fuzzy logic based on-line fault detection and classification in transmission line.

    PubMed

    Adhikari, Shuma; Sinha, Nidul; Dorendrajit, Thingam

    2016-01-01

    This study presents fuzzy logic based online fault detection and classification of transmission line using Programmable Automation and Control technology based National Instrument Compact Reconfigurable i/o (CRIO) devices. The LabVIEW software combined with CRIO can perform real time data acquisition of transmission line. When fault occurs in the system current waveforms are distorted due to transients and their pattern changes according to the type of fault in the system. The three phase alternating current, zero sequence and positive sequence current data generated by LabVIEW through CRIO-9067 are processed directly for relaying. The result shows that proposed technique is capable of right tripping action and classification of type of fault at high speed therefore can be employed in practical application.

  13. Fault Management Guiding Principles

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of the mission type: deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of supporting FM systems increase in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well the maturity of FM as an engineering discipline, which lags behind the maturity of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, the relationship between FM designs and mission risk, and the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  14. Fault-Tree Compiler

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    Fault-Tree Compiler (FTC) program, is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.

  15. A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hursey, Joshua J; Naughton, III, Thomas J; Vallee, Geoffroy R

    The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. The MPI does not provide standardized fault tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.

  16. Drive and protection circuit for converter module of cascaded H-bridge STATCOM

    NASA Astrophysics Data System (ADS)

    Wang, Xuan; Yuan, Hongliang; Wang, Xiaoxing; Wang, Shuai; Fu, Yongsheng

    2018-04-01

    Drive and protection circuit is an important part of power electronics, which is related to safe and stable operation issues in the power electronics. The drive and protection circuit is designed for the cascaded H-bridge STATCOM. This circuit can realize flexible dead-time setting, operation status self-detection, fault priority protection and detailed fault status uploading. It can help to improve the reliability of STATCOM's operation. Finally, the proposed circuit is tested and analyzed by power electronic simulation software PSPICE (Simulation Program with IC Emphasis) and a series of experiments. Further studies showed that the proposed circuit can realize drive and control of H-bridge circuit, meanwhile it also can realize fast processing faults and have advantage of high reliability.

  17. Experimental study on propagation of fault slip along a simulated rock fault

    NASA Astrophysics Data System (ADS)

    Mizoguchi, K.

    2015-12-01

    Around pre-existing geological faults in the crust, we have often observed off-fault damage zone where there are many fractures with various scales, from ~ mm to ~ m and their density typically increases with proximity to the fault. One of the fracture formation processes is considered to be dynamic shear rupture propagation on the faults, which leads to the occurrence of earthquakes. Here, I have conducted experiments on propagation of fault slip along a pre-cut rock surface to investigate the damaging behavior of rocks with slip propagation. For the experiments, I used a pair of metagabbro blocks from Tamil Nadu, India, of which the contacting surface simulates a fault of 35 cm in length and 1cm width. The experiments were done with the similar uniaxial loading configuration to Rosakis et al. (2007). Axial load σ is applied to the fault plane with an angle 60° to the loading direction. When σ is 5kN, normal and shear stresses on the fault are 1.25MPa and 0.72MPa, respectively. Timing and direction of slip propagation on the fault during the experiments were monitored with several strain gauges arrayed at an interval along the fault. The gauge data were digitally recorded with a 1MHz sampling rate and 16bit resolution. When σ is 4.8kN is applied, we observed some fault slip events where a slip nucleates spontaneously in a subsection of the fault and propagates to the whole fault. However, the propagation speed is about 1.2km/s, much lower than the S-wave velocity of the rock. This indicates that the slip events were not earthquake-like dynamic rupture ones. More efforts are needed to reproduce earthquake-like slip events in the experiments. This work is supported by the JSPS KAKENHI (26870912).

  18. Toward a Model-Based Approach for Flight System Fault Protection

    NASA Technical Reports Server (NTRS)

    Day, John; Meakin, Peter; Murray, Alex

    2012-01-01

    Use SysML/UML to describe the physical structure of the system This part of the model would be shared with other teams - FS Systems Engineering, Planning & Execution, V&V, Operations, etc., in an integrated model-based engineering environment Use the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the FP domain This extends the UML/SysML languages to contain our FP concepts Use UML/SysML, along with our profile, to capture FP concepts and relationships in the model Generate typical FP engineering products (the FMECA, Fault Tree, MRD, V&V Matrices)

  19. A comparative study of sensor fault diagnosis methods based on observer for ECAS system

    NASA Astrophysics Data System (ADS)

    Xu, Xing; Wang, Wei; Zou, Nannan; Chen, Long; Cui, Xiaoli

    2017-03-01

    The performance and practicality of electronically controlled air suspension (ECAS) system are highly dependent on the state information supplied by kinds of sensors, but faults of sensors occur frequently. Based on a non-linearized 3-DOF 1/4 vehicle model, different methods of fault detection and isolation (FDI) are used to diagnose the sensor faults for ECAS system. The considered approaches include an extended Kalman filter (EKF) with concise algorithm, a strong tracking filter (STF) with robust tracking ability, and the cubature Kalman filter (CKF) with numerical precision. We propose three filters of EKF, STF, and CKF to design a state observer of ECAS system under typical sensor faults and noise. Results show that three approaches can successfully detect and isolate faults respectively despite of the existence of environmental noise, FDI time delay and fault sensitivity of different algorithms are different, meanwhile, compared with EKF and STF, CKF method has best performing FDI of sensor faults for ECAS system.

  20. Surface faulting along the Superstition Hills fault zone and nearby faults associated with the earthquakes of 24 November 1987

    USGS Publications Warehouse

    Sharp, R.V.

    1989-01-01

    The M6.2 Elmore Desert Ranch earthquake of 24 November 1987 was associated spatially and probably temporally with left-lateral surface rupture on many northeast-trending faults in and near the Superstition Hills in western Imperial Valley. Three curving discontinuous principal zones of rupture among these breaks extended northeastward from near the Superstition Hills fault zone as far as 9km; the maximum observed surface slip, 12.5cm, was on the northern of the three, the Elmore Ranch fault, at a point near the epicenter. Twelve hours after the Elmore Ranch earthquake, the M6.6 Superstition Hills earthquake occurred near the northwest end of the right-lateral Superstition Hills fault zone. We measured displacements over 339 days at as many as 296 sites along the Superstition Hills fault zone, and repeated measurements at 49 sites provided sufficient data to fit with a simple power law. The overall distributions of right-lateral displacement at 1 day and the estimated final slip are nearly symmetrical about the midpoint of the surface rupture. The average estimated final right-lateral slip for the Superstition Hills fault zone is ~54cm. The average left-lateral slip for the conjugate faults trending northeastward is ~23cm. The southernmost ruptured member of the Superstition Hills fault zone, newly named the Wienert fault, extends the known length of the zone by about 4km. -from Authors

  1. Fault tolerant computing: A preamble for assuring viability of large computer systems

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1977-01-01

    The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.

  2. Current Fault Management Trends in NASA's Planetary Spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.

    2009-01-01

    The key product of this three-day workshop is a NASA White Paper that documents lessons learned from previous missions, recommended best practices, and future opportunities for investments in the fault management domain. This paper summarizes the findings and recommendations that are captured in the White Paper.

  3. Intelligent Method for Diagnosing Structural Faults of Rotating Machinery Using Ant Colony Optimization

    PubMed Central

    Li, Ke; Chen, Peng

    2011-01-01

    Structural faults, such as unbalance, misalignment and looseness, etc., often occur in the shafts of rotating machinery. These faults may cause serious machine accidents and lead to great production losses. This paper proposes an intelligent method for diagnosing structural faults of rotating machinery using ant colony optimization (ACO) and relative ratio symptom parameters (RRSPs) in order to detect faults and distinguish fault types at an early stage. New symptom parameters called “relative ratio symptom parameters” are defined for reflecting the features of vibration signals measured in each state. Synthetic detection index (SDI) using statistical theory has also been defined to evaluate the applicability of the RRSPs. The SDI can be used to indicate the fitness of a RRSP for ACO. Lastly, this paper also compares the proposed method with the conventional neural networks (NN) method. Practical examples of fault diagnosis for a centrifugal fan are provided to verify the effectiveness of the proposed method. The verification results show that the structural faults often occurring in the centrifugal fan, such as unbalance, misalignment and looseness states are effectively identified by the proposed method, while these faults are difficult to detect using conventional neural networks. PMID:22163833

  4. Intelligent method for diagnosing structural faults of rotating machinery using ant colony optimization.

    PubMed

    Li, Ke; Chen, Peng

    2011-01-01

    Structural faults, such as unbalance, misalignment and looseness, etc., often occur in the shafts of rotating machinery. These faults may cause serious machine accidents and lead to great production losses. This paper proposes an intelligent method for diagnosing structural faults of rotating machinery using ant colony optimization (ACO) and relative ratio symptom parameters (RRSPs) in order to detect faults and distinguish fault types at an early stage. New symptom parameters called "relative ratio symptom parameters" are defined for reflecting the features of vibration signals measured in each state. Synthetic detection index (SDI) using statistical theory has also been defined to evaluate the applicability of the RRSPs. The SDI can be used to indicate the fitness of a RRSP for ACO. Lastly, this paper also compares the proposed method with the conventional neural networks (NN) method. Practical examples of fault diagnosis for a centrifugal fan are provided to verify the effectiveness of the proposed method. The verification results show that the structural faults often occurring in the centrifugal fan, such as unbalance, misalignment and looseness states are effectively identified by the proposed method, while these faults are difficult to detect using conventional neural networks.

  5. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system throughmore » fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on

  6. Critical fault patterns determination in fault-tolerant computer systems

    NASA Technical Reports Server (NTRS)

    Mccluskey, E. J.; Losq, J.

    1978-01-01

    The method proposed tries to enumerate all the critical fault-patterns (successive occurrences of failures) without analyzing every single possible fault. The conditions for the system to be operating in a given mode can be expressed in terms of the static states. Thus, one can find all the system states that correspond to a given critical mode of operation. The next step consists in analyzing the fault-detection mechanisms, the diagnosis algorithm and the process of switch control. From them, one can find all the possible system configurations that can result from a failure occurrence. Thus, one can list all the characteristics, with respect to detection, diagnosis, and switch control, that failures must have to constitute critical fault-patterns. Such an enumeration of the critical fault-patterns can be directly used to evaluate the overall system tolerance to failures. Present research is focused on how to efficiently make use of these system-level characteristics to enumerate all the failures that verify these characteristics.

  7. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    NASA Astrophysics Data System (ADS)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvector of the observed or calculated gravity gradient tensor on a profile and investigating its properties through numerical simulations. From numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. It was shown that the eigenvector of the gravity gradient tensor for estimating fault dips is determined by fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponded to conventional fault dip estimations by geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.

  8. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    NASA Astrophysics Data System (ADS)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may result not to be completely appropriate to explain shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, recalling the range of the most frequent dip angles for active reverse faults that occurs in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly-spaced precuts. To test the repeatability of the processes and to have a statistically valid dataset we replicate each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation method (D.I.C.), in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly-formed faults and whether and at what stage the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation. Then we compared

  9. Common Practice Lightning Strike Protection Characterization Technique to Quantify Damage Mechanisms on Composite Substrates

    NASA Technical Reports Server (NTRS)

    Szatkowski, George N.; Dudley, Kenneth L.; Koppen, Sandra V.; Ely, Jay J.; Nguyen, Truong X.; Ticatch, Larry A.; Mielnik, John J.; Mcneill, Patrick A.

    2013-01-01

    To support FAA certification airworthiness standards, composite substrates are subjected to lightning direct-effect electrical waveforms to determine performance characteristics of the lightning strike protection (LSP) conductive layers used to protect composite substrates. Test results collected from independent LSP studies are often incomparable due to variability in test procedures & applied practices at different organizations, which impairs performance correlations between different LSP data sets. Under a NASA supported contract, The Boeing Company developed technical procedures and documentation as guidance in order to facilitate a test method for conducting universal common practice lightning strike protection test procedures. The procedures obtain conformity in future lightning strike protection evaluations to allow meaningful performance correlations across data sets. This universal common practice guidance provides the manufacturing specifications to fabricate carbon fiber reinforced plastic (CFRP) test panels, including finish, grounding configuration, and acceptable methods for pretest nondestructive inspection (NDI) and posttest destructive inspection. The test operations guidance elaborates on the provisions contained in SAE ARP5416 to address inconsistencies in the generation of damage protection performance data, so as to provide for maximum achievable correlation across capable lab facilities. In addition, the guidance details a direct effects test bed design to aid in quantification of the multi-physical phenomena surrounding a lightning direct attachment supporting validation data requirements for the development of predictive computational modeling. The lightning test bed is designed to accommodate a repeatable installation procedure to secure the test panel and eliminate test installation uncertainty. It also facilitates a means to capture the electrical waveform parameters in 2 dimensions, along with the mechanical displacement and thermal

  10. Mapping fault-controlled volatile migration in equatorial layered deposits on Mars

    NASA Astrophysics Data System (ADS)

    Okubo, C. H.

    2006-12-01

    Research in terrestrial settings shows that clastic sedimentary deposits are productive host rocks for underground volatile reservoirs because of their high porosity and permeability. Within such reservoirs, faults play an important role in controlling pathways for volatile migration, because faults act as either barriers or conduits. Therefore faults are important volatile concentrators, which means that evidence of geochemical, hydrologic and biologic processes are commonly concentrated at these locations. Accordingly, faulted sedimentary deposits on Mars are plausible areas to search for evidence of past volatile activity and associated processes. Indeed, evidence for volatile migration through layered sedimentary deposits on Mars has been documented in detail by the Opportunity rover in Meridiani Planum. Thus evidence for past volatile- driven processes that could have occurred within the protective depths of these deposits may now exposed at the surface and more likely found around faults. Owing to the extensive distribution of layered deposits on Mars, a major challenge in looking for and investigating evidence of past volatile processes in these deposits is identifying and prioritizing study areas. Toward this end, this presentation details initial results of a multiyear project to develop quantitative maps of latent pathways for fault-controlled volatile migration through the layered sedimentary deposits on Mars. Available MOC and THEMIS imagery are used to map fault traces within equatorial layered deposits, with an emphasis on proposed regions for MSL landing sites. These fault maps define regions of interest for stereo imaging by HiRISE and identify areas to search for existing MOC stereo coverage. Stereo coverage of identified areas of interest allows for the construction of digital elevation models and ultimately extraction of fault plane and displacement vector orientations. These fault and displacement data will be fed through numerical modeling

  11. Scissoring Fault Rupture Properties along the Median Tectonic Line Fault Zone, Southwest Japan

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Nishizaka, N.; Onishi, K.; Sakamoto, J.; Takahashi, K.

    2017-12-01

    The Median Tectonic Line fault zone (hereinafter MTLFZ) is the longest and most active fault zone in Japan. The MTLFZ is a 400-km-long trench parallel right-lateral strike-slip fault accommodating lateral slip components of the Philippine Sea plate oblique subduction beneath the Eurasian plate [Fitch, 1972; Yeats, 1996]. Complex fault geometry evolves along the MTLFZ. The geomorphic and geological characteristics show a remarkable change through the MTLFZ. Extensional step-overs and pull-apart basins and a pop-up structure develop in western and eastern parts of the MTLFZ, respectively. It is like a "scissoring fault properties". We can point out two main factors to form scissoring fault properties along the MTLFZ. One is a regional stress condition, and another is a preexisting fault. The direction of σ1 anticlockwise rotate from N170°E [Famin et al., 2014] in the eastern Shikoku to Kinki areas and N100°E [Research Group for Crustral Stress in Western Japan, 1980] in central Shikoku to N85°E [Onishi et al., 2016] in western Shikoku. According to the rotation of principal stress directions, the western and eastern parts of the MTLFZ are to be a transtension and compression regime, respectively. The MTLFZ formed as a terrain boundary at Cretaceous, and has evolved with a long active history. The fault style has changed variously, such as left-lateral, thrust, normal and right-lateral. Under the structural condition of a preexisting fault being, the rupture does not completely conform to Anderson's theory for a newly formed fault, as the theory would require either purely dip-slip motion on the 45° dipping fault or strike-slip motion on a vertical fault. The fault rupture of the 2013 Barochistan earthquake in Pakistan is a rare example of large strike-slip reactivation on a relatively low angle dipping fault (thrust fault), though many strike-slip faults have vertical plane generally [Avouac et al., 2014]. In this presentation, we, firstly, show deep subsurface

  12. Practical aspects of handling data protection and data security.

    PubMed

    Louwerse, C P

    1991-01-01

    Looking at practical applications of health care information systems, we must conclude that in the field of data protection there is still too large a gap between what is feasible and necessary on the one hand, and what is achieved in actual implementations on the other. To illustrate this point, we sketch the actual data protection measures in a large hospital information system and describe the effects of changes affecting the system, such as the increasing use of personal computers and the growing intensity of use of the system. Trends in the development of new and additional systems are indicated, a summary of possible weak points and gaps in security is given, and some suggestions for improvement are made.

  13. Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty

    NASA Astrophysics Data System (ADS)

    Woo, G.

    2005-12-01

    high deductible is in force, this requires estimation of the epistemic uncertainty on fault geometry and activity. Transport infrastructure insurance is of practical interest in seismically active countries. On the North Anatolian Fault in Turkey, there is uncertainty over an unbroken segment between the eastern end of the Düzce Fault and Bolu, which may have ruptured during the 1944 earthquake. Existing hazard maps may simply use a question mark to flag such uncertainty. However, a far more informative type of hazard map might express spatial variations in the confidence level associated with a fault map. Through such visual guidance, an insurance risk analyst would be better placed to price earthquake cover, allowing for epistemic uncertainty.

  14. Clustering for unsupervised fault diagnosis in nuclear turbine shut-down transients

    NASA Astrophysics Data System (ADS)

    Baraldi, Piero; Di Maio, Francesco; Rigamonti, Marco; Zio, Enrico; Seraoui, Redouane

    2015-06-01

    Empirical methods for fault diagnosis usually entail a process of supervised training based on a set of examples of signal evolutions "labeled" with the corresponding, known classes of fault. However, in practice, the signals collected during plant operation may be, very often, "unlabeled", i.e., the information on the corresponding type of occurred fault is not available. To cope with this practical situation, in this paper we develop a methodology for the identification of transient signals showing similar characteristics, under the conjecture that operational/faulty transient conditions of the same type lead to similar behavior in the measured signals evolution. The methodology is founded on a feature extraction procedure, which feeds a spectral clustering technique, embedding the unsupervised fuzzy C-means (FCM) algorithm, which evaluates the functional similarity among the different operational/faulty transients. A procedure for validating the plausibility of the obtained clusters is also propounded based on physical considerations. The methodology is applied to a real industrial case, on the basis of 148 shut-down transients of a Nuclear Power Plant (NPP) steam turbine.
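
    As a rough illustration of this kind of unsupervised pipeline (not the authors' implementation), the Python sketch below extracts simple summary features from transient signals, performs a basic spectral embedding from an RBF affinity matrix, and groups the embedded transients with a plain fuzzy C-means step; the feature definitions, kernel width, and cluster count are all assumed values.

      import numpy as np

      def extract_features(transients):
          """transients: list of (T, n_signals) arrays -> (N, n_features) matrix."""
          feats = []
          for x in transients:
              feats.append(np.hstack([x.mean(axis=0), x.std(axis=0), x[-1] - x[0]]))
          return np.asarray(feats)

      def spectral_embedding(X, sigma=1.0, n_dims=2):
          """Embed samples using eigenvectors of the normalized graph Laplacian."""
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          W = np.exp(-d2 / (2.0 * sigma ** 2))          # RBF affinity between transients
          D = np.diag(W.sum(axis=1))
          L = D - W                                     # unnormalized Laplacian
          Dinv = np.diag(1.0 / np.sqrt(np.diag(D)))
          vals, vecs = np.linalg.eigh(Dinv @ L @ Dinv)
          return vecs[:, 1:n_dims + 1]                  # drop the trivial eigenvector

      def fuzzy_cmeans(X, c=3, m=2.0, n_iter=100, seed=0):
          """Plain fuzzy C-means; returns the membership matrix U of shape (N, c)."""
          rng = np.random.default_rng(seed)
          U = rng.dirichlet(np.ones(c), size=len(X))
          for _ in range(n_iter):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
              dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              U = 1.0 / (dist ** (2.0 / (m - 1.0)))
              U /= U.sum(axis=1, keepdims=True)
          return U

      # Example with three synthetic "transient families" (hypothetical data):
      rng = np.random.default_rng(0)
      transients = [rng.normal(loc=k, size=(100, 2)) for k in (0, 3, 6) for _ in range(10)]
      U = fuzzy_cmeans(spectral_embedding(extract_features(transients), sigma=2.0), c=3)
      print(U.argmax(axis=1))        # hard cluster labels derived from fuzzy memberships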

  15. Protection - Principles and practice.

    NASA Technical Reports Server (NTRS)

    Graham, G. S.; Denning, P. J.

    1972-01-01

    The protection mechanisms of computer systems control the access to objects, especially information objects. The principles of protection system design are formalized as a model (theory) of protection. Each process has a unique identification number which is attached by the system to each access attempted by the process. Details of system implementation are discussed, taking into account the storing of the access matrix, aspects of efficiency, and the selection of subjects and objects. Two systems which have protection features incorporating all the elements of the model are described.
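
    To make the access-matrix idea concrete, here is a minimal, hypothetical Python sketch of the reference-monitor check described above: the matrix maps (subject, object) pairs to sets of rights, and each access attempt carries the process's unique identification number. The subjects, objects, and rights are invented for illustration.

      from typing import Dict, Set, Tuple

      AccessMatrix = Dict[Tuple[str, str], Set[str]]   # (subject, object) -> rights

      matrix: AccessMatrix = {
          ("proc_42", "payroll_file"): {"read"},
          ("proc_42", "printer"): {"write"},
          ("proc_7", "payroll_file"): {"read", "write", "own"},
      }

      def check_access(subject_id: str, obj: str, right: str, m: AccessMatrix) -> bool:
          """Every access attempt carries the subject's unique id; the monitor
          grants the access only if the right appears in the matrix entry."""
          return right in m.get((subject_id, obj), set())

      assert check_access("proc_7", "payroll_file", "write", matrix)
      assert not check_access("proc_42", "payroll_file", "write", matrix)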

  16. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    PubMed

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
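
    The sampling-based side of this comparison is easy to sketch. The following Python example (an illustration, not the tree or the closed-form approximation from the article) propagates lognormal basic-event uncertainties through a small fault tree, Top = (A OR B) AND (C OR D), by Monte Carlo, and also shows the Wilks-style order-statistic bound: with 59 independent samples, the sample maximum bounds the 95th percentile with at least 95% confidence. All event data are assumed.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 10_000

      # Lognormal basic events given as (median, error factor EF = p95/p50).
      basic = {"A": (1e-3, 3.0), "B": (5e-4, 5.0), "C": (2e-3, 3.0), "D": (1e-3, 10.0)}

      def sample_lognormal(median, ef, size):
          # sigma chosen so that the 95th percentile equals median * EF
          sigma = np.log(ef) / 1.645
          return rng.lognormal(mean=np.log(median), sigma=sigma, size=size)

      p = {k: sample_lognormal(m, ef, N) for k, (m, ef) in basic.items()}

      def gate_or(*args):
          """OR gate for independent events: 1 - prod(1 - p_i)."""
          prod = np.ones_like(args[0])
          for a in args:
              prod *= (1.0 - a)
          return 1.0 - prod

      top = gate_or(p["A"], p["B"]) * gate_or(p["C"], p["D"])   # AND of two ORs

      print("mean  :", top.mean())
      print("median:", np.median(top))
      print("p95   :", np.quantile(top, 0.95))

      # Wilks-style 95/95 upper bound: the largest of 59 random samples bounds
      # the 95th percentile with at least 95% confidence.
      print("Wilks 95/95 bound (59 samples):", np.max(top[:59]))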

  17. Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting

    NASA Technical Reports Server (NTRS)

    Bergman, Eric A.; Solomon, Sean C.

    1987-01-01

    The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform, which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compressional jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.

  18. V&V of Fault Management: Challenges and Successes

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Costello, Ken; Ohi, Don; Lu, Tiffany; Newhouse, Marilyn

    2013-01-01

    This paper describes the results of a special breakout session of the NASA Independent Verification and Validation (IV&V) Workshop held in the fall of 2012, entitled "V&V of Fault Management: Challenges and Successes." The NASA IV&V Program is in a unique position to interact with projects across all of the NASA development domains. Using this unique opportunity, the IV&V program convened a breakout session to enable IV&V teams to share their challenges and successes with respect to the V&V of Fault Management (FM) architectures and software. The presentations and discussions provided practical examples of pitfalls encountered while performing V&V of FM, including the lack of consistent designs for implementing fault monitors and the fact that FM information is not centralized but scattered among many diverse project artifacts. The discussions also solidified the need for an early commitment to developing FM in parallel with the spacecraft systems, as well as clearly defining FM terminology within a project.

  19. Pattern Recognition Application of Support Vector Machine for Fault Classification of Thyristor Controlled Series Compensated Transmission Lines

    NASA Astrophysics Data System (ADS)

    Yashvantrai Vyas, Bhargav; Maheshwari, Rudra Prakash; Das, Biswarup

    2016-06-01

    Application of series compensation in extra high voltage (EHV) transmission lines makes the protection job difficult for engineers, owing to alterations in system parameters and measurements. The problem is amplified by the inclusion of electronically controlled compensation such as thyristor controlled series compensation (TCSC), as it produces harmonics and rapid changes in system parameters during faults associated with TCSC control. This paper presents a pattern-recognition-based fault type identification approach using a support vector machine. The scheme uses only a half cycle of post-fault data from the three phase currents to accomplish the task. The change in current signal features during a fault is used as the discriminatory measure. The developed scheme is tested over a large set of fault data with variation in system and fault parameters. These fault cases have been generated with PSCAD/EMTDC on a 400 kV, 300 km transmission line model. With its improved accuracy and speed, the developed algorithm is well suited for implementation on TCSC-compensated lines.
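
    A schematic version of such a classifier is shown below. It is only a sketch: the per-phase current-change features and the synthetic "fault cases" stand in for the PSCAD/EMTDC simulations described in the abstract, and the SVM hyperparameters are arbitrary.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      def synth_features(fault_type, n):
          """Feature vector = change in RMS current of phases A, B, C plus a
          zero-sequence indicator over the half cycle after fault inception."""
          base = {"AG": [3, 0, 0, 1], "BG": [0, 3, 0, 1], "CG": [0, 0, 3, 1],
                  "AB": [3, 3, 0, 0], "ABG": [3, 3, 0, 1], "ABC": [3, 3, 3, 0]}[fault_type]
          return np.array(base) + 0.3 * rng.standard_normal((n, 4))

      labels = ["AG", "BG", "CG", "AB", "ABG", "ABC"]
      X = np.vstack([synth_features(t, 200) for t in labels])
      y = np.repeat(labels, 200)

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
      clf.fit(Xtr, ytr)
      print("held-out accuracy:", clf.score(Xte, yte))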

  20. Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.

    2010-05-30

    Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults and optimizing building control systems. However, in spite of good progress in developing tools for HVAC diagnostics, methods to detect faults in HVAC systems are still generally undeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain error. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both knowledge of first-principles modeling and empirical results to analyze the system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that exist in air handling unit diagnostics due to practical constraints can generally be effectively addressed through the proposed approach.
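
    As a toy illustration of the Bayesian-updating idea (not the first-principles models of the study), the sketch below sequentially updates the probability of a single air-handling-unit fault hypothesis from a stream of model-minus-measurement residuals, using assumed Gaussian residual models for the healthy and faulty cases.

      import numpy as np
      from scipy.stats import norm

      def update(prior_fault, residual, sigma_ok=0.5, bias_fault=2.0, sigma_fault=1.0):
          """Posterior P(fault | residual) under assumed Gaussian residual models."""
          l_fault = norm.pdf(residual, loc=bias_fault, scale=sigma_fault)
          l_ok = norm.pdf(residual, loc=0.0, scale=sigma_ok)
          num = l_fault * prior_fault
          return num / (num + l_ok * (1.0 - prior_fault))

      rng = np.random.default_rng(3)
      residuals = rng.normal(loc=2.0, scale=1.0, size=20)   # simulated faulty AHU

      p = 0.05   # assumed prior probability of, say, a stuck-damper fault
      for r in residuals:
          p = update(p, r)
      print("posterior fault probability:", p)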

  1. Earthquake Nucleation and Fault Slip: Possible Experiments on a Natural Fault

    NASA Astrophysics Data System (ADS)

    Germanovich, L. N.; Murdoch, L. C.; Garagash, D.; Reches, Z.; Martel, S. J.; Johnston, M. J.; Ebenhack, J.; Gwaba, D.

    2011-12-01

    High-resolution deformation and seismic observations are usually made only near the Earth's surface, kilometers away from where earthquakes nucleate on active faults, and are limited by inverse-cube-distance attenuation and ground noise. We have developed an experimental approach that aims at reactivating faults in situ using thermal techniques and fluid injection, which modify the in-situ stresses and the fault strength until the fault slips. Mines where in-situ stresses are sufficient to drive faulting present an opportunity to conduct such experiments. The former Homestake gold mine in South Dakota is a good example. During our recent field work in the Homestake mine, we found a large fault that intersects multiple mine levels. The size and distinct structure of this fault make it a promising target for in-situ reactivation, which would likely be localized on a crack-like patch. Slow patch propagation, moderated by the injection rate and the rate of change of the background stresses, may become unstable, leading to the nucleation of a dynamic earthquake rupture. Our analyses for the Homestake fault conditions indicate that this transition occurs for a patch size of ~1 m. This represents a fundamental limitation for laboratory experiments and necessitates larger-scale field tests of ~10-100 m. The opportunity to observe earthquake nucleation on the Homestake Fault is feasible because slip could be initiated at a pre-defined location and time, with instrumentation placed as close as a few meters from the nucleation site. Designing the experiment requires a detailed assessment of the state of stress in the vicinity of the fault. This is being conducted by simulating changes in pore pressure and effective stresses accompanying dewatering of the mine, and by evaluating in-situ stress measurements in light of a regional stress field modified by local perturbations caused by the mine workings.

  2. Fault-zone structure and weakening processes in basin-scale reverse faults: The Moonlight Fault Zone, South Island, New Zealand

    NASA Astrophysics Data System (ADS)

    Alder, S.; Smith, S. A. F.; Scott, J. M.

    2016-10-01

    The >200 km long Moonlight Fault Zone (MFZ) in southern New Zealand was an Oligocene basin-bounding normal fault zone that reactivated in the Miocene as a high-angle reverse fault (present dip angle 65°-75°). Regional exhumation in the last c. 5 Ma has resulted in deep exposures of the MFZ that present an opportunity to study the structure and deformation processes that were active in a basin-scale reverse fault at basement depths. Syn-rift sediments are preserved only as thin fault-bound slivers. The hanging wall and footwall of the MFZ are mainly greenschist facies quartzofeldspathic schists that have a steeply-dipping (55°-75°) foliation subparallel to the main fault trace. In more fissile lithologies (e.g. greyschists), hanging-wall deformation occurred by the development of foliation-parallel breccia layers up to a few centimetres thick. Greyschists in the footwall deformed mainly by folding and formation of tabular, foliation-parallel breccias up to 1 m wide. Where the hanging-wall contains more competent lithologies (e.g. greenschist facies metabasite) it is laced with networks of pseudotachylyte that formed parallel to the host rock foliation in a damage zone extending up to 500 m from the main fault trace. The fault core contains an up to 20 m thick sequence of breccias, cataclasites and foliated cataclasites preserving evidence for the progressive development of interconnected networks of (partly authigenic) chlorite and muscovite. Deformation in the fault core occurred by cataclasis of quartz and albite, frictional sliding of chlorite and muscovite grains, and dissolution-precipitation. Combined with published friction and permeability data, our observations suggest that: 1) host rock lithology and anisotropy were the primary controls on the structure of the MFZ at basement depths and 2) high-angle reverse slip was facilitated by the low frictional strength of fault core materials. Restriction of pseudotachylyte networks to the hanging-wall of the

  3. AGSM Functional Fault Models for Fault Isolation Project

    NASA Technical Reports Server (NTRS)

    Harp, Janicce Leshay

    2014-01-01

    This project implements functional fault models to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  4. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    PubMed

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for actuator bias faults, partial loss-of-effectiveness actuation faults, communication link faults, model uncertainty, and external disturbances. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed for cooperative control of multiple robot arms is developed for real-time verification. Experiments on the networked robot arms are conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.

  5. Seismic images and fault relations of the Santa Monica thrust fault, West Los Angeles, California

    USGS Publications Warehouse

    Catchings, R.D.; Gandhok, G.; Goldman, M.R.; Okaya, D.

    2001-01-01

    In May 1997, the US Geological Survey (USGS) and the University of Southern California (USC) acquired high-resolution seismic reflection and refraction images on the grounds of the Wadsworth Veterans Administration Hospital (WVAH) in the city of Los Angeles (Fig. 1a,b). The objective of the seismic survey was to better understand the near-surface geometry and faulting characteristics of the Santa Monica fault zone. In this report, we present seismic images, an interpretation of those images, and a comparison of our results with results from studies by Dolan and Pratt (1997), Pratt et al. (1998) and Gibbs et al. (2000). The Santa Monica fault is one of several northeast-southwest-trending, north-dipping, reverse faults that extend through the Los Angeles metropolitan area (Fig. 1a). Through much of the area, the Santa Monica fault trends subparallel to the Hollywood fault, but the two faults apparently join into a single fault zone to the southwest and to the northeast (Dolan et al., 1995). The Santa Monica and Hollywood faults may be part of a larger fault system that extends from the Pacific Ocean to the Transverse Ranges. Crook et al. (1983) refer to this fault system as the Malibu Coast-Santa Monica-Raymond-Cucamonga fault system. They suggest that these faults have not formed a contiguous zone since the Pleistocene and conclude that each of the faults should be treated as a separate fault with respect to seismic hazards. However, Dolan et al. (1995) suggest that the Hollywood and Santa Monica faults are capable of generating Mw 6.8 and Mw 7.0 earthquakes, respectively. Thus, regardless of whether the overall fault system is connected and capable of rupturing in one event, each of the faults individually presents a sizable earthquake hazard to the Los Angeles metropolitan area. If, however, these faults are connected, and they were to rupture along a continuous fault rupture, the resulting hazard would be even greater. Although the Santa Monica fault represents

  6. Model-based fault detection and isolation for intermittently active faults with application to motion-based thruster fault detection and isolation for spacecraft

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2008-01-01

    The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The models are used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when said faults, if present, would have an effect on the said actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.
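
    The core residual-comparison idea can be sketched briefly. In the hypothetical Python example below, the angular-rate change predicted under each candidate fault hypothesis (nominal, or one thruster stuck off) is compared with measurements, and the hypothesis with the smallest accumulated squared residual is selected; the thruster-to-rate mapping and noise levels are invented, and the patent's derivative-estimation filter is not reproduced.

      import numpy as np

      def predicted_rate_change(cmd, fault_mode):
          """Angular-rate change predicted from thruster commands (length 4) under
          a fault hypothesis: 'none' is nominal, 'stuck_off_i' zeroes thruster i."""
          eff = np.ones(4)
          if fault_mode.startswith("stuck_off_"):
              eff[int(fault_mode[-1])] = 0.0
          B = np.array([[ 1, -1,  0,  0],      # made-up thruster-to-rate mapping
                        [ 0,  0,  1, -1],
                        [ 1,  1, -1, -1]], dtype=float)
          return B @ (eff * cmd)

      def isolate_fault(cmds, measured, modes):
          """Return the fault mode whose predictions best match the measurements."""
          scores = {}
          for mode in modes:
              residual = [measured[k] - predicted_rate_change(cmds[k], mode)
                          for k in range(len(cmds))]
              scores[mode] = float(np.sum(np.square(residual)))
          return min(scores, key=scores.get), scores

      # Simulate a spacecraft with thruster 2 stuck off and isolate the fault.
      rng = np.random.default_rng(0)
      cmds = [rng.integers(0, 2, 4).astype(float) for _ in range(30)]
      meas = [predicted_rate_change(c, "stuck_off_2") + 0.01 * rng.standard_normal(3)
              for c in cmds]
      modes = ["none"] + [f"stuck_off_{i}" for i in range(4)]
      print(isolate_fault(cmds, meas, modes)[0])     # expected: stuck_off_2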

  7. Seismic Hazard and Fault Length

    NASA Astrophysics Data System (ADS)

    Black, N. M.; Jackson, D. D.; Mualchin, L.

    2005-12-01

    If mx is the largest earthquake magnitude that can occur on a fault, then what is mp, the largest magnitude that should be expected during the planned lifetime of a particular structure? Most approaches to these questions rely on an estimate of the Maximum Credible Earthquake, obtained by regression (e.g. Wells and Coppersmith, 1994) of fault length (or area) against magnitude. Our work differs in two ways. First, we modify the traditional approach to measuring fault length to allow for hidden fault complexity and multi-fault rupture. Second, we use a magnitude-frequency relationship to calculate the largest magnitude expected to occur within a given time interval. Often fault length is poorly defined and multiple faults rupture together in a single event. Therefore, we need to expand the definition of a mapped fault length to obtain a more accurate estimate of the maximum magnitude. In previous work, we compared fault length vs. rupture length for post-1975 earthquakes in Southern California and found that mapped fault length and rupture length are often unequal; in several cases rupture broke beyond the previously mapped fault traces. To expand the geologic definition of fault length we outlined several guidelines: 1) if a fault truncates at young Quaternary alluvium, the fault line should be inferred underneath the younger sediments; 2) faults striking within 45° of one another should be treated as a continuous fault line; and 3) a step-over can link together faults at least 5 km apart. These definitions were applied to fault lines in Southern California. For example, many of the along-strike fault lines in the Mojave Desert are treated as a single fault trending from the Pinto Mountain fault to the Garlock fault. In addition, the Rose Canyon and Newport-Inglewood faults are treated as a single fault line. We used these more generous fault lengths, and the Wells and Coppersmith regression, to estimate the maximum magnitude (mx) for the major faults in
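
    For orientation, the kind of length-to-magnitude conversion referred to here can be sketched as follows, assuming the commonly quoted Wells and Coppersmith (1994) all-slip-type surface-rupture-length regression M = 5.08 + 1.16 log10(L); the coefficients should be checked against the original paper, and the linked 80 km fault in the example is hypothetical.

      import math

      def max_magnitude_from_length(length_km, a=5.08, b=1.16):
          """Estimate a maximum magnitude from a (possibly extended) fault length,
          using an assumed rupture-length regression of the Wells & Coppersmith form."""
          return a + b * math.log10(length_km)

      # Example: two mapped 40 km faults whose traces strike within 45 degrees of
      # one another and are linked by a step-over are treated as one 80 km fault
      # line under the guidelines above.
      print(round(max_magnitude_from_length(40), 2))   # ~6.9 for a single 40 km trace
      print(round(max_magnitude_from_length(80), 2))   # ~7.3 for the linked trace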

  8. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    PubMed

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhanced system reliability and availability. Moreover, knowledge of fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high-resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequencies. Then, an amplitude estimator of the fault characteristic frequencies has been proposed, and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, generated with a coupled electromagnetic circuits approach in which air-gap eccentricity emulates bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
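
    To show the flavor of subspace-based spectral estimation (a one-dimensional stand-in for the multi-dimensional MUSIC and amplitude estimator developed in the paper), the sketch below builds a sample covariance from a simulated stator current, takes its noise subspace, and evaluates the MUSIC pseudospectrum; the supply and bearing-fault frequencies are assumed values.

      import numpy as np
      from scipy.signal import find_peaks

      fs = 1000.0                       # sampling frequency, Hz
      t = np.arange(0, 1.0, 1.0 / fs)
      f_supply, f_fault = 50.0, 86.0    # hypothetical characteristic frequencies
      x = (np.sin(2 * np.pi * f_supply * t)
           + 0.05 * np.sin(2 * np.pi * f_fault * t)
           + 0.02 * np.random.default_rng(0).standard_normal(t.size))

      def music_spectrum(x, n_sources, m=60, fs=1.0):
          """Pseudospectrum from the noise subspace of the sample covariance."""
          N = len(x)
          X = np.array([x[i:i + m] for i in range(N - m)])   # overlapping snapshots
          R = (X.T @ X) / X.shape[0]
          w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
          En = V[:, : m - 2 * n_sources]           # noise subspace (2 per real sinusoid)
          freqs = np.linspace(0.0, fs / 2.0, 2000)
          spec = np.empty_like(freqs)
          for i, f in enumerate(freqs):
              a = np.exp(-2j * np.pi * f / fs * np.arange(m))   # steering vector
              spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
          return freqs, spec

      freqs, spec = music_spectrum(x, n_sources=2, fs=fs)
      idx, _ = find_peaks(spec)
      top = idx[np.argsort(spec[idx])[-2:]]
      print("two strongest peaks (Hz):", np.sort(freqs[top]))   # expect ~50 and ~86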

  9. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov Chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed

  10. A data-driven multiplicative fault diagnosis approach for automation processes.

    PubMed

    Hao, Haiyang; Zhang, Kai; Ding, Steven X; Chen, Zhiwen; Lei, Yaguo

    2014-09-01

    This paper presents a new data-driven method for diagnosing multiplicative key performance degradation in automation processes. Different from the well-established additive fault diagnosis approaches, the proposed method aims at identifying those low-level components which increase the variability of process variables and cause performance degradation. Based on process data, features of multiplicative fault are extracted. To identify the root cause, the impact of fault on each process variable is evaluated in the sense of contribution to performance degradation. Then, a numerical example is used to illustrate the functionalities of the method and Monte-Carlo simulation is performed to demonstrate the effectiveness from the statistical viewpoint. Finally, to show the practical applicability, a case study on the Tennessee Eastman process is presented. Copyright © 2013. Published by Elsevier Ltd.
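
    As a deliberately small illustration of the underlying idea, ranking variables by how much their variability grows relative to a reference data set, the sketch below compares per-variable variances between a "normal" and a "degraded" synthetic data set; the paper's actual multiplicative-fault features and contribution indices are more elaborate than this variance ratio.

      import numpy as np

      rng = np.random.default_rng(5)
      ref = rng.standard_normal((2000, 4))              # normal operation
      faulty = ref.copy()
      faulty[:, 2] *= 1.8                               # component 2 becomes noisier

      ratios = faulty.var(axis=0) / ref.var(axis=0)
      for idx in np.argsort(ratios)[::-1]:              # rank by variability increase
          print(f"variable {idx}: variance ratio = {ratios[idx]:.2f}")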

  11. Encoder fault analysis system based on Moire fringe error signal

    NASA Astrophysics Data System (ADS)

    Gao, Xu; Chen, Wei; Wan, Qiu-hua; Lu, Xin-ran; Xie, Chun-yu

    2018-02-01

    To address faults and erroneous codes that arise in the practical application of photoelectric shaft encoders, a fast and accurate encoder fault analysis system is developed from the standpoint of Moire fringe photoelectric signal processing. A DSP28335 is selected as the core processor, and a high-speed serial A/D converter acquisition card is used. A temperature-measuring circuit based on the AD7420 is also designed. Discrete Moire fringe error-signal data are collected at different temperatures and sent to the host computer through wireless transmission. The error signal quality index and the fault type are displayed on the host computer based on the error signal identification method. The error signal quality can be used to diagnose the state of erroneous codes through the human-machine interface.

  12. An Investigative Redesign of the ECG and EMG Signal Conditioning Circuits for Two-fault Tolerance and Circuit Improvement

    NASA Technical Reports Server (NTRS)

    Obrien, Edward M.

    1991-01-01

    An investigation was undertaken to make the electrocardiography (ECG) and electromyography (EMG) signal conditioning circuits two-fault tolerant and to update the circuitry. The present signal conditioning circuits provide at least one level of subject protection against electrical shock hazard, but at a level of 100 micro-A (for voltages of up to 200 V). However, it is necessary to provide catastrophic fault tolerance protection for the astronauts and to provide protection at a current level of less than 100 micro-A. For this study, protection at the 10 micro-A level was sought. This is the generally accepted value below which no possibility of microshock exists; only the possibility of macroshock exists in the case of the signal conditioners. However, this extra amount of protection is desirable. The initial part deals with current-limiter circuits, followed by an investigation into the signal conditioner specifications and circuit design.

  13. Application of Phasor Measurement Units for Protection of Distribution Networks with High Penetration of Photovoltaic Sources

    NASA Astrophysics Data System (ADS)

    Meskin, Matin

    Integration of distributed generation (DG) units at the distribution level is increasing as a reasonable alternative to costly network expansion for meeting growth in demand. This integration brings many advantages to consumers and power grids, but it also gives rise to new challenges in protection and control. Recent research has brought to light the negative effects of DG units on short circuit currents and overcurrent (OC) protection systems in distribution networks. Change in the direction of fault current flow, increase or decrease of fault current magnitude, protection blinding, feeder sympathetic tripping, nuisance tripping of interrupting devices, and the disruption of coordination between protective devices are some potential impacts of DG unit integration. Among the various types of DG units, the integration of renewable energy resources into the electric grid has grown rapidly in recent years. In particular, the interconnection of photovoltaic (PV) sources to medium voltage (MV) distribution networks has increased sharply in the last decade. In this work, the effect of PV sources on conventional OC relays in MV distribution networks is shown. It is demonstrated that PV output fluctuation, due to changes in solar radiation, causes the magnitude and direction of the current to change haphazardly. These variations may result in poor operation of OC relays, the main protective devices in MV distribution networks. In other words, due to the bi-directional power flow and the fluctuation of current magnitude in the presence of PV sources, a specific setting of OC relays is difficult to realize, and OC relays may trip under normal conditions. To improve OC relay operation, a voltage-dependent overcurrent protection is proposed. Although this new method prevents OC relay maloperation, its ability to detect earth faults and high impedance faults is poor. Thus, a

  14. Frictional heterogeneities on carbonate-bearing normal faults: Insights from the Monte Maggio Fault, Italy

    NASA Astrophysics Data System (ADS)

    Carpenter, B. M.; Scuderi, M. M.; Collettini, C.; Marone, C.

    2014-12-01

    Observations of heterogeneous and complex fault slip are often attributed to the complexity of fault structure and/or spatial heterogeneity of fault frictional behavior. Such complex slip patterns have been observed for earthquakes on normal faults throughout central Italy, where many of the Mw 6 to 7 earthquakes in the Apennines nucleate at depths where the lithology is dominated by carbonate rocks. To explore the relationship between fault structure and heterogeneous frictional properties, we studied the exhumed Monte Maggio Fault, located in the northern Apennines. We collected intact specimens of the fault zone, including the principal slip surface and hanging wall cataclasite, and performed experiments at a normal stress of 10 MPa under saturated conditions. Experiments designed to reactivate slip between the cemented principal slip surface and cataclasite show a 3 MPa stress drop as the fault surface fails, then velocity-neutral frictional behavior and significant frictional healing. Overall, our results suggest that (1) earthquakes may readily nucleate in areas of the fault where the slip surface separates massive limestone and are likely to propagate in areas where fault gouge is in contact with the slip surface; (2) postseismic slip is more likely to occur in areas of the fault where gouge is present; and (3) high rates of frictional healing and low creep relaxation observed between solid fault surfaces could lead to significant aftershocks in areas of low stress drop.

  15. Interagency collaboration between child protection and mental health services: practices, attitudes and barriers.

    PubMed

    Darlington, Yvonne; Feeney, Judith A; Rixon, Kylie

    2005-10-01

    The aim of this paper is to examine some of the factors that facilitate and hinder interagency collaboration between child protection services and mental health services in cases where there is a parent with a mental illness and there are protection concerns for the child(ren). The paper reports on agency practices, worker attitudes and experiences, and barriers to effective collaboration. A self-administered, cross-sectional survey was developed and distributed via direct mail or via line supervisors to workers in statutory child protection services, adult mental health services, child and youth mental health services, and Suspected Child Abuse and Neglect (SCAN) Teams. There were 232 completed questionnaires returned, with an overall response rate of 21%. Thirty-eight percent of respondents were statutory child protection workers, 39% were adult mental health workers, 16% were child and youth mental health workers, and 4% were SCAN Team medical officers (with 3% missing data). Analysis revealed that workers were engaging in a moderate amount of interagency contact, but that they were unhappy with the support provided by their agency. Principal components analysis and multivariate analysis of variance (MANOVA) on items assessing attitudes toward other workers identified four factors, which differed in rates of endorsement: inadequate training, positive regard for child protection workers, positive regard for mental health workers, and mutual mistrust (from highest to lowest level of endorsement). The same procedure identified the relative endorsement of five factors extracted from items about potential barriers: inadequate resources, confidentiality, gaps in interagency processes, unrealistic expectations, and professional knowledge domains and boundaries. Mental health and child protection professionals believe that collaborative practice is necessary; however, their efforts are hindered by a lack of supportive structures and practices at the organizational level.

  16. Power flow analysis and optimal locations of resistive type superconducting fault current limiters.

    PubMed

    Zhang, Xiuchang; Ruiz, Harold S; Geng, Jianzhao; Shen, Boyang; Fu, Lin; Zhang, Heng; Coombs, Tim A

    2016-01-01

    In conventional approaches to the integration of resistive-type superconducting fault current limiters (SFCLs) into electric distribution networks, SFCL models largely rely on the insertion of a step or exponential resistance that is determined by a predefined quenching time. In this paper, we expand the scope of these models by considering the actual behaviour of an SFCL in terms of its temperature dynamics and the power-law dependence between the electric field and the current density that is characteristic of high-temperature superconductors. Our results are compared to the step-resistance models for the sake of discussion and clarity of the conclusions. Both SFCL models were integrated into a power system model built to the UK power standard, in order to study the impact of these protection strategies on the performance of the overall electricity network. As a representative renewable energy source, a 90 MVA wind farm was considered for the simulations. Three fault conditions were simulated, and the fault current reductions predicted by both fault-current-limiting models were compared across multiple current measuring points and allocation strategies. We show that incorporating the E-J characteristics and thermal properties of the superconductor at the simulation level of electric power systems is crucial for estimating reliability and determining the optimal locations of resistive-type SFCLs in distributed power networks. Our results may help decision making by distribution network operators regarding investment in and promotion of SFCL technologies, as it is possible to determine the maximum number of SFCLs necessary to protect against different fault conditions at multiple locations.
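
    The power-law element mentioned here is straightforward to sketch. The example below evaluates the usual E-J relation E = E_c (J/J_c)^n for a resistive SFCL element at a few currents; the tape geometry, J_c and n are assumed values, and the thermal feedback on J_c that the paper includes is omitted.

      import numpy as np

      E_C = 1e-4        # V/m, conventional critical-field criterion (1 uV/cm)
      N_VAL = 21        # power-law exponent, typical order for HTS tapes (assumed)
      JC = 1.5e8        # A/m^2, assumed critical current density
      AREA = 1e-6       # m^2, assumed conductor cross-section (Ic = JC*AREA = 150 A)
      LENGTH = 50.0     # m, assumed tape length

      def sfcl_voltage(i_amps):
          """Voltage across the superconducting element for a given current."""
          j = i_amps / AREA
          e_field = E_C * np.sign(j) * (np.abs(j) / JC) ** N_VAL
          return e_field * LENGTH

      for i in (50.0, 140.0, 160.0, 200.0):     # below and above the critical current
          v = sfcl_voltage(i)
          print(f"I = {i:6.1f} A  ->  V = {v:10.3e} V, R_eff = {v / i:10.3e} ohm")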

  17. Advanced Protection & Service Restoration for FREEDM Systems

    NASA Astrophysics Data System (ADS)

    Singh, Urvir

    A smart electric power distribution system (FREEDM system) that incorporates DERs (Distributed Energy Resources), SSTs (Solid State Transformers, which can limit the fault current to twice the rated current) and RSC (Reliable & Secure Communication) capabilities has been studied in this work in order to develop appropriate protection and service restoration techniques. First, a solution is proposed that enables conventional protective devices to provide effective protection for FREEDM systems. Results show that although this scheme can provide the required protection, it can be quite slow. Using the FREEDM system's communication capabilities, a communication-assisted overcurrent (O/C) protection scheme is proposed, and results show that by using communication (blocking signals) very fast operating times are achieved, thereby mitigating the problem of the conventional O/C scheme. Using the FREEDM system's DGI (Distributed Grid Intelligence) capability, an automated FLISR (Fault Location, Isolation & Service Restoration) scheme is proposed that is based on the concept of 'software agents' and uses less data than conventional centralized approaches. Test results illustrate that this scheme is able to provide a globally optimal system reconfiguration for service restoration.

  18. Practicing chemical process safety: a look at the layers of protection.

    PubMed

    Sanders, Roy E

    2004-11-11

    This presentation will review a few public perceptions of safety in chemical plants and refineries, and will compare these plant workplace risks to those of some more traditional occupations. The central theme of this paper is to provide a "within-the-fence" view of many of the process safety practices that world-class plants perform to proactively protect people, property, and profits as well as the environment. It behooves each chemical plant and refinery to tell its story in an image-rich presentation that stresses stewardship and process safety. Such a program can reassure the company's employees and help convince the community that the many layers of safety protection within our plants are effective and protect all from harm.

  19. Late Quaternary faulting along the Death Valley-Furnace Creek fault system, California and Nevada

    USGS Publications Warehouse

    Brogan, George E.; Kellogg, Karl; Slemmons, D. Burton; Terhune, Christina L.

    1991-01-01

    The Death Valley-Furnace Creek fault system, in California and Nevada, has a variety of impressive late Quaternary neotectonic features that record a long history of recurrent earthquake-induced faulting. Although no neotectonic features of unequivocal historical age are known, paleoseismic features from multiple late Quaternary events of surface faulting are well developed throughout the length of the system. Comparison of scarp heights to amount of horizontal offset of stream channels and the relationships of both scarps and channels to the ages of different geomorphic surfaces demonstrate that Quaternary faulting along the northwest-trending Furnace Creek fault zone is predominantly right lateral, whereas that along the north-trending Death Valley fault zone is predominantly normal. These observations are compatible with tectonic models of Death Valley as a northwest-trending pull-apart basin. The largest late Quaternary scarps along the Furnace Creek fault zone, with vertical separation of late Pleistocene surfaces of as much as 64 m (meters), are in Fish Lake Valley. Despite the predominance of normal faulting along the Death Valley fault zone, vertical offset of late Pleistocene surfaces along the Death Valley fault zone apparently does not exceed about 15 m. Evidence for four to six separate late Holocene faulting events along the Furnace Creek fault zone and three or more late Holocene events along the Death Valley fault zone are indicated by rupturing of Q1B (about 200-2,000 years old) geomorphic surfaces. Probably the youngest neotectonic feature observed along the Death Valley-Furnace Creek fault system, possibly historic in age, is vegetation lineaments in southernmost Fish Lake Valley. Near-historic faulting in Death Valley, within several kilometers south of Furnace Creek Ranch, is represented by (1) a 2,000-year-old lake shoreline that is cut by sinuous scarps, and (2) a system of young scarps with free-faceted faces (representing several faulting

  20. Slicken 1.0: Program for calculating the orientation of shear on reactivated faults

    NASA Astrophysics Data System (ADS)

    Xu, Hong; Xu, Shunshan; Nieto-Samaniego, Ángel F.; Alaniz-Álvarez, Susana A.

    2017-07-01

    The slip vector on a fault is an important parameter in the study of the movement history of a fault and its faulting mechanism. Although many graphical programs exist to represent shear stress (or slickenline) orientations on faults, programs that quantitatively calculate the orientation of fault slip for a given stress field are scarce. Consequently, we developed Slicken 1.0, a software program to rapidly calculate the orientation of maximum shear stress on any fault plane. For this direct method of calculating the resolved shear stress on a planar surface, the input data are the unit vector normal to the involved plane, the unit vectors of the three principal stress axes, and the stress ratio. The advantage of this program is that the vertical or horizontal principal stresses are not necessarily required. Owing to its nimble design using Java SE 8.0, it runs on most operating systems with the corresponding Java VM. The software will be practical for geoscience students, geologists and engineers, and will help resolve a deficiency in field geology and in structural and engineering geology.
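
    The basic calculation that Slicken 1.0 performs can be re-derived in a few lines: form a reduced stress tensor from the principal axes and the stress ratio, take the traction on the plane, and remove its normal component. The Python sketch below is a hedged re-implementation of that geometry, not the program itself; the example stress field and fault plane are hypothetical. Only relative stress magnitudes matter for the direction, so sigma1 = 1, sigma2 = phi, sigma3 = 0 is used.

      import numpy as np

      def max_shear_direction(n, s1_axis, s2_axis, s3_axis, phi):
          """n and the s*_axis arguments are unit 3-vectors; phi = (s2 - s3)/(s1 - s3)."""
          n = np.asarray(n, float) / np.linalg.norm(n)
          axes = [np.asarray(a, float) / np.linalg.norm(a)
                  for a in (s1_axis, s2_axis, s3_axis)]
          mags = [1.0, phi, 0.0]                       # reduced principal magnitudes
          sigma = sum(m * np.outer(a, a) for m, a in zip(mags, axes))
          traction = sigma @ n                         # traction vector on the plane
          shear = traction - (traction @ n) * n        # remove the normal component
          return shear / np.linalg.norm(shear)

      # Example: vertical sigma1, N-S sigma2, E-W sigma3, phi = 0.5, and a plane
      # dipping 60 degrees to the east (coordinates x = east, y = north, z = up).
      dip = np.radians(60.0)
      normal = np.array([-np.sin(dip), 0.0, np.cos(dip)])
      print(max_shear_direction(normal, [0, 0, 1], [0, 1, 0], [1, 0, 0], 0.5))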

  1. Audio-frequency magnetotelluric imaging of the Hijima fault, Yamasaki fault system, southwest Japan

    NASA Astrophysics Data System (ADS)

    Yamaguchi, S.; Ogawa, Y.; Fuji-Ta, K.; Ujihara, N.; Inokuchi, H.; Oshiman, N.

    2010-04-01

    An audio-frequency magnetotelluric (AMT) survey was undertaken at ten sites along a transect across the Hijima fault, a major segment of the Yamasaki fault system, Japan. The data were subjected to dimensionality analysis, following which two-dimensional inversions for the TE and TM modes were carried out. The resulting resistivity model is characterized by (1) a clear resistivity boundary that coincides with the downward projection of the surface trace of the Hijima fault, (2) a resistive zone (>500 Ω m) that corresponds to Mesozoic sediment, and (3) two highly conductive zones (30-40 Ω m), one shallow and one deep, along the fault. The shallow conductive zone is a common feature of the Yamasaki fault system, whereas the deep conductor is a newly discovered feature at depths of 800-1,800 m to the southwest of the fault. The conductor is truncated by the Hijima fault to the northeast, and its upper boundary is the resistive zone. Both conductors are interpreted to represent a combination of clay minerals and a fluid network within a fault-related fracture zone. In terms of the development of the fluid networks, the fault core of the Hijima fault and the highly resistive zone may play important roles as barriers to fluid flow on the northeast and upper sides of the conductive zones, respectively.

  2. Best management practices for creating a community wildfire protection plan

    Treesearch

    Pamela J. Jakes; Christine Esposito; Sam Burns; Antony S. Cheng; Kristen C. Nelson; Victoria E. Sturtevant; Daniel R. Williams

    2012-01-01

    A community wildfire protection plan (CWPP) is a means of bringing local solutions to wildland fire management. In developing and implementing CWPPs, communities assume a leadership role in reducing wildfire risk on federal and nonfederal land. In this publication, we identify best management practices for CWPP development and implementation based on the experiences...

  3. Quantifying Vertical Exhumation in Intracontinental Strike-Slip Faults: the Garlock fault zone, southern California

    NASA Astrophysics Data System (ADS)

    Chinn, L.; Blythe, A. E.; Fendick, A.

    2012-12-01

    New apatite fission-track ages show varying rates of vertical exhumation at the eastern terminus of the Garlock fault zone. The Garlock fault zone is a 260 km long east-northeast striking strike-slip fault with as much as 64 km of sinistral offset. The Garlock fault zone terminates in the east in the Avawatz Mountains, at the intersection with the dextral Southern Death Valley fault zone. Although motion along the Garlock fault west of the Avawatz Mountains is considered purely strike-slip, uplift and exhumation of bedrock in the Avawatz Mountains south of the Garlock fault, as recently as 5 Ma, indicates that transpression plays an important role at this location and is perhaps related to a restricting bend as the fault wraps around and terminates southeastward along the Avawatz Mountains. In this study we complement extant thermochronometric ages from within the Avawatz core with new low temperature fission-track ages from samples collected within the adjacent Garlock and Southern Death Valley fault zones. These thermochronometric data indicate that vertical exhumation rates vary within the fault zone. Two Miocene ages (10.2 (+5.0/-3.4) Ma, 9.0 (+2.2/-1.8) Ma) indicate at least ~3.3 km of vertical exhumation at ~0.35 mm/yr, assuming a 30°C/km geothermal gradient, along a 2 km transect parallel and adjacent to the Mule Spring fault. An older Eocene age (42.9 (+8.7/-7.3) Ma) indicates ~3.3 km of vertical exhumation at ~0.08 mm/yr. These results are consistent with published exhumation rates of 0.35 mm/yr between ~7 and ~4 Ma and 0.13 mm/yr between ~15 and ~9 Ma, as determined by apatite fission-track and U-Th/He thermochronometry in the hanging-wall of the Mule Spring fault. Similar exhumation rates on both sides of the Mule Spring fault support three separate models: 1) Thrusting is no longer active along the Mule Spring fault, 2) Faulting is dominantly strike-slip at the sample locations, or 3) Miocene-present uplift and exhumation is below detection levels

  4. Aftershocks of the 2014 South Napa, California, Earthquake: Complex faulting on secondary faults

    USGS Publications Warehouse

    Hardebeck, Jeanne L.; Shelly, David R.

    2016-01-01

    We investigate the aftershock sequence of the 2014 MW6.0 South Napa, California, earthquake. Low-magnitude aftershocks missing from the network catalog are detected by applying a matched-filter approach to continuous seismic data, with the catalog earthquakes serving as the waveform templates. We measure precise differential arrival times between events, which we use for double-difference event relocation in a 3D seismic velocity model. Most aftershocks are deeper than the mainshock slip, and most occur west of the mapped surface rupture. While the mainshock coseismic and postseismic slip appears to have occurred on the near-vertical, strike-slip West Napa fault, many of the aftershocks occur in a complex zone of secondary faulting. Earthquake locations in the main aftershock zone, near the mainshock hypocenter, delineate multiple dipping secondary faults. Composite focal mechanisms indicate strike-slip and oblique-reverse faulting on the secondary features. The secondary faults were moved towards failure by Coulomb stress changes from the mainshock slip. Clusters of aftershocks north and south of the main aftershock zone exhibit vertical strike-slip faulting more consistent with the West Napa Fault. The northern aftershocks correspond to the area of largest mainshock coseismic slip, while the main aftershock zone is adjacent to the fault area that has primarily slipped postseismically. Unlike most creeping faults, the zone of postseismic slip does not appear to contain embedded stick-slip patches that would have produced on-fault aftershocks. The lack of stick-slip patches along this portion of the fault may contribute to the low productivity of the South Napa aftershock sequence.

  5. Fault linkage and continental breakup

    NASA Astrophysics Data System (ADS)

    Cresswell, Derren; Lymer, Gaël; Reston, Tim; Stevenson, Carl; Bull, Jonathan; Sawyer, Dale; Morgan, Julia

    2017-04-01

    The magma-poor rifted margin off the west coast of Galicia (NW Spain) has provided some of the key observations in the development of models describing the final stages of rifting and continental breakup. In 2013, we collected a 68 x 20 km 3D seismic survey across the Galicia margin, NE Atlantic. Processing through to 3D Pre-stack Time Migration (12.5 m bin-size) and 3D depth conversion reveals the key structures, including an underlying detachment fault (the S detachment), and the intra-block and inter-block faults. These data reveal multiple phases of faulting, which overlap spatially and temporally, have thinned the crust to between zero and a few km thickness, producing 'basement windows' where crustal basement has been completely pulled apart and sediments lie directly on the mantle. Two approximately N-S trending fault systems are observed: 1) a margin proximal system of two linked faults that are the upward extension (breakaway faults) of the S; in the south they form one surface that splays northward to form two faults with an intervening fault block. These faults were thus demonstrably active at one time rather than sequentially. 2) An oceanward relay structure that shows clear along strike linkage. Faults within the relay trend NE-SW and heavily dissect the basement. The main block bounding faults can be traced from the S detachment through the basement into, and heavily deforming, the syn-rift sediments where they die out, suggesting that the faults propagated up from the S detachment surface. Analysis of the fault heaves and associated maps at different structural levels show complementary fault systems. The pattern of faulting suggests a variation in main tectonic transport direction moving oceanward. This might be interpreted as a temporal change during sequential faulting, however the transfer of extension between faults and the lateral variability of fault blocks suggests that many of the faults across the 3D volume were active at least in part

  6. Interagency Collaboration between Child Protection and Mental Health Services: Practices, Attitudes and Barriers

    ERIC Educational Resources Information Center

    Darlington, Yvonne; Feeney, Judith A.; Rixon, Kylie

    2005-01-01

    Objective: The aim of this paper is to examine some of the factors that facilitate and hinder interagency collaboration between child protection services and mental health services in cases where there is a parent with a mental illness and there are protection concerns for the child(ren). The paper reports on agency practices, worker attitudes and…

  7. Off-fault tip splay networks: a genetic and generic property of faults indicative of their long-term propagation, and a major component of off-fault damage

    NASA Astrophysics Data System (ADS)

    Perrin, C.; Manighetti, I.; Gaudemer, Y.

    2015-12-01

    Faults grow over the long term by accumulating displacement and lengthening, i.e., propagating laterally. We use fault maps and fault propagation evidence available in the literature to examine geometrical relations between parent faults and off-fault splays. The population includes 47 worldwide crustal faults with lengths from millimeters to thousands of kilometers and of different slip modes. We show that fault splays form adjacent to any propagating fault tip, whereas they are absent at non-propagating fault ends. Independent of parent fault length, slip mode, context, etc., tip splay networks have a similar fan shape widening in the direction of long-term propagation, a similar relative length and width (~30 and ~10% of parent fault length, respectively), and a similar range of mean angles to the parent fault (10-20°). Tip splays more commonly develop on one side only of the parent fault. We infer that tip splay networks are a genetic and generic property of faults, indicative of their long-term propagation. We suggest that they represent the most recent damage off the parent fault, formed during the most recent phase of fault lengthening. The scaling relation between parent fault length and the width of the tip splay network implies that damage zones enlarge as parent fault length increases. Elastic properties of host rocks might thus be modified at large distances away from a fault, up to 10% of its length. During an earthquake, a significant fraction of coseismic slip and stress is dissipated into the permanent damage zone that surrounds the causative fault. We infer that coseismic dissipation might occur away from a rupture zone as far as a distance of 10% of the length of its causative fault. Coseismic deformations and stress transfers might thus be significant in broad regions about principal rupture traces. This work has been published in Comptes Rendus Geoscience under doi:10.1016/j.crte.2015.05.002 (http://www.sciencedirect.com/science/article/pii/S1631071315000528).

  8. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    PubMed

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss of effectiveness. The upper bounds of stuck faults, bias faults and loss of effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of the online estimations of the bounds and a state-dependent function. The estimations are adjusted online to compensate automatically the actuator faults. The state-dependent function solved by using real system data helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Porosity variations in and around normal fault zones: implications for fault seal and geomechanics

    NASA Astrophysics Data System (ADS)

    Healy, David; Neilson, Joyce; Farrell, Natalie; Timms, Nick; Wilson, Moyra

    2015-04-01

    Porosity forms the building blocks for permeability, exerts a significant influence on the acoustic response of rocks to elastic waves, and fundamentally influences rock strength. Yet published studies of porosity around fault zones or in faulted rock are relatively rare and are far outnumbered by studies of fault zone permeability. We present new data from detailed studies of porosity variations around normal faults in sandstone and limestone. We have developed an integrated approach to porosity characterisation in faulted rock exploiting different techniques to understand variations in the data. From systematic samples taken across exposed normal faults in limestone (Malta) and sandstone (Scotland), we combine digital image analysis on thin sections (optical and electron microscopy), core plug analysis (He porosimetry) and mercury injection capillary pressure (MICP) measurements. Our sampling includes representative material from undeformed protoliths and fault rocks from the footwall and hanging wall. Fault-related porosity can produce anisotropic permeability with a 'fast' direction parallel to the slip vector in a sandstone-hosted normal fault. Undeformed sandstones in the same unit exhibit maximum permeability in a sub-horizontal direction parallel to lamination in dune-bedded sandstones. Fault-related deformation produces anisotropic pores and pore networks with long axes aligned sub-vertically and this controls the permeability anisotropy, even under confining pressures up to 100 MPa. Fault-related porosity also has interesting consequences for the elastic properties and velocity structure of normal fault zones. Relationships between texture, pore type and acoustic velocity have been well documented in undeformed limestone. We have extended this work to include the effects of faulting on carbonate textures, pore types and P- and S-wave velocities (Vp, Vs) using a suite of normal fault zones in Malta, with displacements ranging from 0.5 to 90 m. Our results show a

  10. On Identifiability of Bias-Type Actuator-Sensor Faults in Multiple-Model-Based Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.

    2012-01-01

    This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
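
    To make the bank-of-filters idea above concrete, the following is a minimal, hypothetical sketch (the system matrices, bias values, and scoring rule are illustrative assumptions, not the paper's formulation): each Kalman filter assumes a different actuator/sensor bias pattern, and the hypothesis whose filter yields the smallest cumulative normalized innovation is selected.

        import numpy as np

        A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (assumed toy system)
        B = np.array([[0.0], [0.1]])             # actuator input matrix
        C = np.array([[1.0, 0.0]])               # sensor matrix
        Q = 1e-4 * np.eye(2)                     # process noise covariance
        R = np.array([[1e-2]])                   # measurement noise covariance

        def run_filter(ys, us, act_bias, sens_bias):
            """Kalman filter tuned to one bias hypothesis; returns the cumulative
            normalized innovation (smaller = better match to the data)."""
            x, P, score = np.zeros((2, 1)), np.eye(2), 0.0
            for y, u in zip(ys, us):
                x = A @ x + B * (u + act_bias)             # predict with hypothesized actuator bias
                P = A @ P @ A.T + Q
                S = C @ P @ C.T + R
                v = np.array([[y]]) - (C @ x + sens_bias)  # innovation vs hypothesized sensor bias
                score += (v.T @ np.linalg.inv(S) @ v).item()
                K = P @ C.T @ np.linalg.inv(S)
                x = x + K @ v
                P = (np.eye(2) - K @ C) @ P
            return score

        # simulate a plant with a true actuator bias of 0.5 and no sensor bias
        rng = np.random.default_rng(0)
        x, ys, us = np.zeros((2, 1)), [], []
        for k in range(200):
            u = np.sin(0.05 * k)
            x = A @ x + B * (u + 0.5) + rng.multivariate_normal([0, 0], Q).reshape(2, 1)
            ys.append((C @ x).item() + rng.normal(0, np.sqrt(R[0, 0])))
            us.append(u)

        hypotheses = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]   # (actuator bias, sensor bias)
        scores = {h: run_filter(ys, us, *h) for h in hypotheses}
        print(min(scores, key=scores.get))   # expected to select (0.5, 0.0)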

  11. Fault Injection Campaign for a Fault Tolerant Duplex Framework

    NASA Technical Reports Server (NTRS)

    Sacco, Gian Franco; Ferraro, Robert D.; von Allmen, Paul; Rennels, Dave A.

    2007-01-01

    Fault tolerance is an effective approach for avoiding or reducing the damage caused by a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is software developed by the UCLA group [1, 2] that uses a fault-tolerant approach and allows two replicas of the same process to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas and restarts the two replica processes if an inconsistency in their computation is detected. This approach is very cost efficient and can be adopted to control processes on spacecraft, where the fault rate produced by cosmic rays is not very high.
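
    A toy sketch of the duplex idea described above (purely illustrative; none of the names or mechanisms below are the DF's actual API): two replicas compute the same step, a monitor compares their outputs, commits a checkpoint on agreement, and rolls both replicas back to the last checkpoint on disagreement.

        import random

        def step(state):
            """One deterministic computation step of the replicated process."""
            return state + 1

        def maybe_corrupt(value, p=0.01):
            """Inject a transient fault (e.g., a cosmic-ray bit flip) with probability p."""
            return value ^ 0x10 if random.random() < p else value

        checkpoint = 0               # last state both replicas agreed on
        state_a = state_b = 0
        for _ in range(1000):
            out_a = maybe_corrupt(step(state_a))
            out_b = maybe_corrupt(step(state_b))
            if out_a == out_b:
                state_a = state_b = checkpoint = out_a   # agreement: commit and continue
            else:
                state_a = state_b = checkpoint           # disagreement: restart from checkpoint
        print("final agreed state:", checkpoint)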

  12. Radiation protection for surgeons and anesthetists: practices and knowledge before and after training.

    PubMed

    Brun, A; Mor, R Alcaraz; Bourrelly, M; Dalivoust, G; Gazazian, G; Boufercha, R; Lehucher-Michel, M P; Sari-Minodier, I

    2018-03-01

    The use of radiological activity in the operating room (OR) and a regulatory decrease of the eye lens dose warrant an assessment of how medical staff are protected from radiation. This study aims to evaluate practices and knowledge in radiation protection (RP) for OR doctors before and after training. A descriptive study of surgeons and anesthetists in a French public hospital center was conducted in 2016. An ad hoc questionnaire concerning occupational practices and knowledge about RP was distributed before and one month after RP training. Among 103 doctors attending the training, 90 answered the questionnaire before the training. Results showed a lack of both knowledge and good practice in RP. Most of the participants (86.7%) had never been trained in RP and recognized insufficient knowledge. Most of them (92.2%) wore a lead apron, 50.0% a thyroid shield, 5.6% lead glasses, 53.3% a passive dosimeter and 17.8% an electronic dosimeter. None of them benefitted from collective protective equipment such as a ceiling-suspended screen. The questionnaire following the training was completed by only 35 doctors. A comparison of results before and after the training showed an improvement in knowledge (scores of correct responses: 5.5/16 before and 9.5/16 after training) but not in RP good practices (scores of correct responses: 3.2/7 before and 3.3/7 after training). One training session appears to be insufficient to improve the application of the safety rules when x-rays are used. Communication about RP among anesthetists and surgeons needs to be improved, for example through training renewal, workstation analysis in the OR related to x-ray use, and occupational medical follow-up. In addition, radiological risks in the OR, such as the risk of radiation-induced cataract, need to be given better consideration. It is necessary to encourage the use of dosimeters and protective equipment and to strengthen access to lead glasses and collective protective equipment, such as ceiling-suspended screens. All these

  13. The San Andreas Fault and a Strike-slip Fault on Europa

    NASA Technical Reports Server (NTRS)

    1998-01-01

    The mosaic on the right of the south polar region of Jupiter's moon Europa shows the northern 290 kilometers (180 miles) of a strike-slip fault named Astypalaea Linea. The entire fault is about 810 kilometers (500 miles) long, the size of the California portion of the San Andreas fault on Earth which runs from the California-Mexico border north to the San Francisco Bay.

    The left mosaic shows the portion of the San Andreas fault near California's San Francisco Bay that has been scaled to the same size and resolution as the Europa image. Each covers an area approximately 170 by 193 kilometers (105 by 120 miles). The red line marks the once active central crack of the Europan fault (right) and the line of the San Andreas fault (left).

    A strike-slip fault is one in which two crustal blocks move horizontally past one another, similar to two opposing lanes of traffic. The overall motion along the Europan fault seems to have followed a continuous narrow crack along the entire length of the feature, with a path resembling steps on a staircase, crossing zones which have been pulled apart. The images show that about 50 kilometers (30 miles) of displacement have taken place along the fault. Opposite sides of the fault can be reconstructed like a puzzle, matching the shape of the sides as well as older individual cracks and ridges that had been broken by its movements.

    Bends in the Europan fault have allowed the surface to be pulled apart. This pulling-apart along the fault's bends created openings through which warmer, softer ice from below Europa's brittle ice shell surface, or frozen water from a possible subsurface ocean, could reach the surface. This upwelling of material formed large areas of new ice within the boundaries of the original fault. A similar pulling apart phenomenon can be observed in the geological trough surrounding California's Salton Sea, and in Death Valley and the Dead Sea. In those cases, the pulled apart regions can include upwelled

  14. Dual-quaternion based fault-tolerant control for spacecraft formation flying with finite-time convergence.

    PubMed

    Dong, Hongyang; Hu, Qinglei; Ma, Guangfu

    2016-03-01

    Results of developing a control system for spacecraft formation proximity operations between a target and a chaser are presented. In particular, a coupled model based on dual quaternions is employed to describe the proximity problem of spacecraft formation, and a nonlinear adaptive fault-tolerant feedback control law is developed to enable the chaser spacecraft to track the position and attitude of the target even when its actuators experience faults. The multiple-task capability of the proposed control system is further demonstrated in the presence of disturbances and parametric uncertainties. In addition, the practical finite-time stability of the closed-loop system is guaranteed theoretically under the designed control law. Numerical simulation of the proposed method is presented to demonstrate its advantages with respect to interference suppression, fast tracking, fault tolerance, and practical finite-time stability. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Complex Paleotopography and Faulting near the Elsinore Fault, Coyote Mountains, southern California

    NASA Astrophysics Data System (ADS)

    Brenneman, M. J.; Bykerk-Kauffman, A.

    2012-12-01

    The Coyote Mountains of southern California are bounded on the southwest by the Elsinore Fault, an active dextral fault within the San Andreas Fault zone. According to Axen and Fletcher (1998) and Dorsey and others (2011), rocks exposed in these mountains comprise a portion of the hanging wall of the east-vergent Salton Detachment Fault, which was active from the late Miocene-early Pliocene to ca. 1.1-1.3 Ma. Detachment faulting was accompanied by subsidence, resulting in deposition of a thick sequence of marine and nonmarine sedimentary rocks. Regional detachment faulting and subsidence ceased with the inception of the Elsinore Fault, which has induced uplift of the Coyote Mountains. Detailed geologic mapping in the central Coyote Mountains supports the above interpretation and adds some intriguing details. New discoveries include a buttress unconformity at the base of the Miocene/Pliocene section that locally cuts across strata at an angle so high that it could be misinterpreted as a fault. We thus conclude that the syn-extension strata were deposited on a surface with very rugged topography. We also discovered that locally-derived nonmarine gravel deposits exposed near the crest of the range, previously interpreted as part of the Miocene Split Mountain Group by Winker and Kidwell (1996), unconformably overlie units of the marine Miocene/Pliocene Imperial Group and must therefore be Pliocene or younger. The presence of such young gravel deposits on the crest of the range provides evidence for its rapid uplift. Additional new discoveries flesh out details of the structural history of the range. We mapped just two normal faults, both of which were relatively minor, thus supporting Axen and Fletcher's assertion that the hanging wall block of the Salton Detachment Fault had not undergone significant internal deformation during extension. We found abundant complex synthetic and antithetic strike-slip faults throughout the area, some of which offset Quaternary alluvial

  16. Safeguarding and Protecting Children in Maternity Services: Implications for Practice

    ERIC Educational Resources Information Center

    Lazenbatt, Anne; Greer, Jean

    2009-01-01

    This article debates the issues involved in safeguarding and protecting children in maternity services and offers implications for professional practice. Midwives and other staff who work as members of the maternity team have a safeguarding role to play in the identification of babies and children who have been abused, or are at risk of abuse, and…

  17. Fault Interaction and Stress Accumulation in Chaman Fault System, Balouchistan, Pakistan, Since 1892

    NASA Astrophysics Data System (ADS)

    Riaz, M. S.; Shan, B.; Xiong, X.; Xie, Z.

    2017-12-01

    The curved, left-lateral Chaman fault, approximately 1000 km long, forms the western boundary of the Indian plate. The Chaman fault is an active fault and is also the locus of many catastrophic earthquakes. Since the inception of strike-slip movement at 20-25 Ma along the western collision boundary between the Indian and Eurasian plates, the average geologically constrained slip rate of 24 to 35 mm/yr accounts for a total displacement of 460±10 km along the Chaman fault system (Beun et al., 1979; Lawrence et al., 1992). Based on earthquake triggering theory, the change in Coulomb failure stress (ΔCFS) either delays (stress shadow) or advances (positive stress) the occurrence of subsequent earthquakes. Several major earthquakes have occurred in the Chaman fault system, yet the region is poorly studied in terms of earthquake/fault interaction and hazard assessment. To address this, we analyzed the earthquake catalog and collected significant earthquakes with M ≥ 6.2 since 1892. We then computed the evolution of ΔCFS in the Chaman fault system by integrating coseismic static and postseismic viscoelastic relaxation stress transfer since 1892, using the code PSGRN/PSCMP (Wang et al., 2006). For the postseismic stress transfer simulation, we adopted a linear Maxwell rheology to calculate the viscoelastic effects. Our results show that three out of four earthquakes were triggered by preceding earthquakes. The 1892 earthquake (Mw 6.8), which occurred on the northern segment of the Chaman fault, did not influence the 1935 earthquake, which occurred on the Ghazaband fault, a parallel fault 20 km east of the Chaman fault. The 1935 earthquake (Mw 7.7) significantly loaded both ends of its rupture with positive stress (ΔCFS ≥ 0.01 MPa), which later triggered the 1975 earthquake on the Chaman fault, with 23% of its rupture length experiencing ΔCFS ≥ 0.01 MPa, and the 1990 earthquake, with 58% of its rupture length where ΔCFS ≥ 0
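
    For reference, a minimal sketch of the Coulomb failure stress change used in such triggering arguments (this is the textbook expression, not the PSGRN/PSCMP computation; the effective friction coefficient and stress values below are assumptions): ΔCFS = Δτ + μ'Δσn, with Δσn positive for unclamping.

        def delta_cfs(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
            """Coulomb failure stress change (MPa) on a receiver fault patch."""
            return d_shear_mpa + mu_eff * d_normal_mpa

        # a patch with +0.008 MPa shear stress change and +0.01 MPa unclamping
        # exceeds the 0.01 MPa triggering threshold quoted in the abstract
        print(delta_cfs(0.008, 0.01) >= 0.01)   # True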

  18. Fluid involvement in normal faulting

    NASA Astrophysics Data System (ADS)

    Sibson, Richard H.

    2000-04-01

    Evidence of fluid interaction with normal faults comes from their varied role as flow barriers or conduits in hydrocarbon basins and as hosting structures for hydrothermal mineralisation, and from fault-rock assemblages in exhumed footwalls of steep active normal faults and metamorphic core complexes. These last suggest involvement of predominantly aqueous fluids over a broad depth range, with implications for fault shear resistance and the mechanics of normal fault reactivation. A general downwards progression in fault rock assemblages (high-level breccia-gouge (often clay-rich) → cataclasites → phyllonites → mylonite → mylonitic gneiss with the onset of greenschist phyllonites occurring near the base of the seismogenic crust) is inferred for normal fault zones developed in quartzo-feldspathic continental crust. Fluid inclusion studies in hydrothermal veining from some footwall assemblages suggest a transition from hydrostatic to suprahydrostatic fluid pressures over the depth range 3-5 km, with some evidence for near-lithostatic to hydrostatic pressure cycling towards the base of the seismogenic zone in the phyllonitic assemblages. Development of fault-fracture meshes through mixed-mode brittle failure in rock-masses with strong competence layering is promoted by low effective stress in the absence of thoroughgoing cohesionless faults that are favourably oriented for reactivation. Meshes may develop around normal faults in the near-surface under hydrostatic fluid pressures to depths determined by rock tensile strength, and at greater depths in overpressured portions of normal fault zones and at stress heterogeneities, especially dilational jogs. Overpressures localised within developing normal fault zones also determine the extent to which they may reutilise existing discontinuities (for example, low-angle thrust faults). Brittle failure mode plots demonstrate that reactivation of existing low-angle faults under vertical σ1 trajectories is only likely if

  19. An improved fault detection classification and location scheme based on wavelet transform and artificial neural network for six phase transmission line using single end data only.

    PubMed

    Koley, Ebha; Verma, Khushaboo; Ghosh, Subhojit

    2015-01-01

    Restrictions on right of way and increasing power demand have boosted the development of six-phase transmission. It offers a viable alternative for transmitting more power without major modification of the existing structure of three-phase double-circuit transmission systems. In spite of these advantages, the low acceptance of six-phase systems is attributed to the unavailability of a proper protection scheme. The complexity arising from the large number of possible faults in six-phase lines makes protection quite challenging. The proposed work presents a hybrid wavelet transform and modular artificial neural network based fault detector, classifier and locator for six-phase lines using single end data only. The standard deviations of the approximation coefficients of the voltage and current signals obtained using the discrete wavelet transform are applied as input to the modular artificial neural network for fault classification and location. The proposed scheme has been tested for all 120 types of shunt faults with variation in location, fault resistance, and fault inception angle. The effect of variation in power system parameters, viz. short-circuit capacity of the source and its X/R ratio, voltage, frequency, and CT saturation, has also been investigated. The results confirm the effectiveness and reliability of the proposed protection scheme, which makes it ideal for real-time implementation.
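
    A hypothetical sketch of the feature-extraction step described above (the wavelet choice, decomposition level, network size, and toy data are all assumptions, not the paper's settings): the standard deviation of the discrete-wavelet-transform approximation coefficients of each voltage/current channel is used as the input feature for a small neural network.

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPClassifier

        def dwt_features(signals, wavelet="db4", level=4):
            """signals: (n_channels, n_samples) voltage/current waveforms from one relay
            location; returns one std-of-approximation-coefficients feature per channel."""
            return np.array([np.std(pywt.wavedec(s, wavelet, level=level)[0]) for s in signals])

        # toy training set: 12 channels (6 voltages + 6 currents), 3 synthetic classes
        rng = np.random.default_rng(1)
        X = np.array([dwt_features(rng.normal(scale=1 + cls, size=(12, 512)))
                      for cls in (0, 1, 2) for _ in range(30)])
        y = np.repeat([0, 1, 2], 30)
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, y)
        print(clf.score(X, y))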

  20. Sexual Harassment Preventive/Protective Practices at U.S. Colleges and Universities.

    ERIC Educational Resources Information Center

    Hobson, Charles J.; Guziewicz, Jennifer

    2002-01-01

    Reports on a survey concerning thirteen recommended sexual harassment preventive/protective practices at U.S. colleges and universities. A majority of responding institutions had formal sexual harassment policies, offered counseling to student victims, and investigated all complaints. Relatively fewer schools provided student access to faculty…

  1. Fault zone structure from topography: signatures of en echelon fault slip at Mustang Ridge on the San Andreas Fault, Monterey County, California

    USGS Publications Warehouse

    DeLong, Stephen B.; Hilley, George E.; Rymer, Michael J.; Prentice, Carol

    2010-01-01

    We used high-resolution topography to quantify the spatial distribution of scarps, linear valleys, topographic sinks, and oversteepened stream channels formed along an extensional step over on the San Andreas Fault (SAF) at Mustang Ridge, California. This location provides detail of both creeping fault landform development and complex fault zone kinematics. Here, the SAF creeps 10–14 mm/yr slower than at locations ∼20 km along the fault in either direction. This spatial change in creep rate is coincident with a series of en echelon oblique-normal faults that strike obliquely to the SAF and may accommodate the missing deformation. This study presents a suite of analyses that are helpful for proper mapping of faults in locations where high-resolution topographic data are available. Furthermore, our analyses indicate that two large subsidiary faults near the center of the step over zone appear to carry significant distributed deformation based on their large apparent vertical offsets, the presence of associated sag ponds and fluvial knickpoints, and the observation that they are rotating a segment of the main SAF. Several subsidiary faults in the southeastern portion of Mustang Ridge are likely less active; they have few associated sag ponds and have older scarp morphologic ages and subdued channel knickpoints. Several faults in the northwestern part of Mustang Ridge, though relatively small, are likely also actively accommodating active fault slip based on their young morphologic ages and the presence of associated sag ponds.

  2. Fault diagnosis of power transformer based on fault-tree analysis (FTA)

    NASA Astrophysics Data System (ADS)

    Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu

    2017-05-01

    Power transformers are important equipment in power plants and substations, and as the link between power transmission and distribution they form an important hub of the power system. Their performance directly affects the reliability and stability of the power system. This paper first groups power transformer faults into five categories according to fault type, then divides power transformer faults into three stages along the time dimension, uses routine DGA analysis and infrared diagnostic criteria to establish the running state of the power transformer, and finally, according to the needs of power transformer fault diagnosis, constructs a power transformer fault tree by stepwise refinement from the general to the specific.

  3. Large earthquakes and creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  4. A novel KFCM based fault diagnosis method for unknown faults in satellite reaction wheels.

    PubMed

    Hu, Di; Sarosh, Ali; Dong, Yun-Feng

    2012-03-01

    Reaction wheels are one of the most critical components of the satellite attitude control system; therefore, correct diagnosis of their faults is essential for efficient operation of these spacecraft. Known faults in any of the subsystems are often diagnosed by supervised learning algorithms; however, this approach fails to work correctly when a new or unknown fault occurs. In such cases an unsupervised learning algorithm becomes essential for obtaining the correct diagnosis. Kernel Fuzzy C-Means (KFCM) is one such unsupervised algorithm, although it has its own limitations; in this paper a novel method is proposed for conditioning the KFCM method (C-KFCM) so that it can be used effectively for fault diagnosis of both known and unknown faults, as in satellite reaction wheels. The C-KFCM approach involves determination of exact class centers from the data of known faults; in this way a discrete number of fault classes is determined at the start. Similarity parameters are derived and determined for each fault data point. Thereafter, depending on the similarity threshold, each data point is assigned a class label. The high-similarity points fall into one of the 'known-fault' classes while the low-similarity points are labeled as 'unknown faults'. Simulation results show that, compared to supervised algorithms such as neural networks, the C-KFCM method can effectively cluster historical fault data (as in reaction wheels) and diagnose faults to an accuracy of more than 91%. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
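
    A toy sketch of the labeling step described above (the Gaussian-kernel similarity, its width, and the threshold are illustrative assumptions, not the paper's C-KFCM formulation): each sample is compared with the known-fault class centers and labeled as an unknown fault when its best similarity falls below the threshold.

        import numpy as np

        def label_samples(samples, class_centers, sigma=1.0, threshold=0.5):
            labels = []
            for x in samples:
                # Gaussian-kernel similarity of the sample to each known class center
                sims = np.exp(-np.sum((class_centers - x) ** 2, axis=1) / (2 * sigma ** 2))
                best = int(np.argmax(sims))
                labels.append(best if sims[best] >= threshold else -1)   # -1 = unknown fault
            return labels

        centers = np.array([[0.0, 0.0], [5.0, 5.0]])            # centers learned from known-fault data
        samples = np.array([[0.2, -0.1], [4.8, 5.3], [10.0, -8.0]])
        print(label_samples(samples, centers))                   # [0, 1, -1]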

  5. The SCEC 3D Community Fault Model (CFM-v5): An updated and expanded fault set of oblique crustal deformation and complex fault interaction for southern California

    NASA Astrophysics Data System (ADS)

    Nicholson, C.; Plesch, A.; Sorlien, C. C.; Shaw, J. H.; Hauksson, E.

    2014-12-01

    Southern California represents an ideal natural laboratory to investigate oblique deformation in 3D owing to its comprehensive datasets, complex tectonic history, evolving components of oblique slip, and continued crustal rotations about horizontal and vertical axes. As the SCEC Community Fault Model (CFM) aims to accurately reflect this 3D deformation, we present the results of an extensive update to the model by using primarily detailed fault trace, seismic reflection, relocated hypocenter and focal mechanism nodal plane data to generate improved, more realistic digital 3D fault surfaces. The results document a wide variety of oblique strain accommodation, including various aspects of strain partitioning and fault-related folding, sets of both high-angle and low-angle faults that mutually interact, significant non-planar, multi-stranded faults with variable dip along strike and with depth, and active mid-crustal detachments. In places, closely-spaced fault strands or fault systems can remain surprisingly subparallel to seismogenic depths, while in other areas, major strike-slip to oblique-slip faults can merge, such as the S-dipping Arroyo Parida-Mission Ridge and Santa Ynez faults with the N-dipping North Channel-Pitas Point-Red Mountain fault system, or diverge with depth. Examples of the latter include the steep-to-west-dipping Laguna Salada-Indiviso faults with the steep-to-east-dipping Sierra Cucapah faults, and the steep southern San Andreas fault with the adjacent NE-dipping Mecca Hills-Hidden Springs fault system. In addition, overprinting by steep predominantly strike-slip faulting can segment which parts of intersecting inherited low-angle faults are reactivated, or result in mutual cross-cutting relationships. The updated CFM 3D fault surfaces thus help characterize a more complex pattern of fault interactions at depth between various fault sets and linked fault systems, and a more complex fault geometry than typically inferred or expected from

  6. Identifying Conventionally Sub-Seismic Faults in Polygonal Fault Systems

    NASA Astrophysics Data System (ADS)

    Fry, C.; Dix, J.

    2017-12-01

    Polygonal Fault Systems (PFS) are prevalent in hydrocarbon basins globally and represent potential fluid pathways. However, the characterization of these pathways is subject to the limitations of conventional 3D seismic imaging, which is only capable of resolving features on a decametre scale horizontally and a metre scale vertically. While outcrop and core examples can identify smaller features, they are limited by the extent of the exposures. The disparity between these scales can allow smaller faults to be lost in a resolution gap, which could mean potential pathways are left unseen. Here the focus is upon PFS from within the London Clay, a common bedrock that is tunnelled into and bears construction foundations for much of London. It is a continuation of the Ieper Clay, where PFS were first identified, and is found to approach the seafloor within the Outer Thames Estuary. This allows for the direct analysis of PFS surface expressions, via the use of high resolution 1 m bathymetric imaging in combination with high resolution seismic imaging. Through use of these datasets, surface expressions of over 1500 faults within the London Clay have been identified, with the smallest fault measuring 12 m and the largest 612 m in length. The displacements over these faults, established from both bathymetric and seismic imaging, range from 30 cm to a couple of metres, scales that would typically be sub-seismic for conventional basin seismic imaging. The orientations and dimensions of the faults within this network have been directly compared to 3D seismic data of the Ieper Clay from the offshore Dutch sector, where it exists approximately 1 km below the seafloor. These have typical PFS attributes with lengths of hundreds of metres to kilometres and throws of tens of metres, an order of magnitude larger than those identified in the Outer Thames Estuary. The similar orientations and polygonal patterns within both locations indicate that the smaller faults exist within the typical PFS structure but are

  7. Influence of fault steps on rupture termination of strike-slip earthquake faults

    NASA Astrophysics Data System (ADS)

    Li, Zhengfang; Zhou, Bengang

    2018-03-01

    A statistical analysis was completed on the rupture data of 29 historical strike-slip earthquakes across the world. The purpose of this study is to examine the effects of fault steps on the rupture termination of these events. The results show good correlations of the type and length of steps with the seismic rupture, and a poor correlation between the number of steps and the seismic rupture. For different magnitude intervals, the smallest widths of the fault steps (Lt) that can terminate the rupture propagation are variable: Lt = 3 km for Ms 6.5-6.9, Lt = 4 km for Ms 7.0-7.5, Lt = 6 km for Ms 7.5-8.0, and Lt = 8 km for Ms 8.0-8.5. Dilational fault steps are easier to rupture through than compressional fault steps. The smallest fault-step width required for rupture arrest can be used as an indicator to judge the scale of the rupture termination of seismic faults. This is helpful for research on fault segmentation, as well as for estimating the magnitude of potential earthquakes, and is thus of significance for the assessment of seismic risks.
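
    The magnitude-dependent thresholds above can be written as a simple lookup (the half-open interval boundaries are an assumed reading of the overlapping ranges quoted in the abstract):

        def min_terminating_step_width_km(ms):
            """Smallest fault-step width Lt (km) reported to arrest rupture for a given Ms."""
            if 6.5 <= ms < 7.0:
                return 3.0
            if 7.0 <= ms < 7.5:
                return 4.0
            if 7.5 <= ms < 8.0:
                return 6.0
            if 8.0 <= ms <= 8.5:
                return 8.0
            return None   # outside the magnitude range analysed in the study

        # e.g., a step at least 8 km wide would be expected to arrest an Ms 8.2 rupture
        print(min_terminating_step_width_km(8.2))   # 8.0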

  8. Experimental Investigation for Fault Diagnosis Based on a Hybrid Approach Using Wavelet Packet and Support Vector Classification

    PubMed Central

    Li, Pengfei; Jiang, Yongying; Xiang, Jiawei

    2014-01-01

    To deal with the difficulty of obtaining a large number of fault samples under practical conditions for mechanical fault diagnosis, a hybrid method that combines wavelet packet decomposition and support vector classification (SVC) is proposed. The wavelet packet is employed to decompose the vibration signal to obtain the energy ratio in each frequency band. Taking the energy ratios as feature vectors, the pattern recognition results are obtained by the SVC. The rolling bearing and gear fault diagnostic results on a typical experimental platform show that the present approach is robust to noise, has higher classification accuracy and, thus, provides a better way to diagnose mechanical faults when only small numbers of fault samples are available. PMID:24688361
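
    A hypothetical sketch of the pipeline described above (the wavelet, decomposition level, and synthetic two-class data are assumptions, not the paper's experimental settings): band energy ratios from a wavelet packet decomposition are used as the feature vector for an SVC.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def energy_ratio_features(signal, wavelet="db4", level=3):
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
            energies = np.array([np.sum(node.data ** 2) for node in wp.get_level(level, "natural")])
            return energies / energies.sum()            # energy ratio in each frequency band

        # toy data: class 0 is low-frequency dominated, class 1 is high-frequency dominated
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 1024, endpoint=False)
        def make_sample(cls):
            f = 20 if cls == 0 else 200
            return np.sin(2 * np.pi * f * t) + 0.2 * rng.normal(size=t.size)

        X = np.array([energy_ratio_features(make_sample(c)) for c in (0, 1) for _ in range(20)])
        y = np.repeat([0, 1], 20)
        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.score(X, y))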

  9. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.

  10. Research on the fault diagnosis of bearing based on wavelet and demodulation

    NASA Astrophysics Data System (ADS)

    Li, Jiapeng; Yuan, Yu

    2017-05-01

    As one of the most commonly used machine parts, antifriction bearings are extensively used in mechanical equipment. Vibration signal analysis is one of the methods to monitor and diagnose the running status of antifriction bearings. Therefore, using wavelet analysis for denoising is of great importance in engineering practice. This paper first presents the basic theory of wavelet analysis, covering wavelet transformation, decomposition and reconstruction. In addition, LabVIEW software was adopted to conduct wavelet denoising and demodulation of the collected antifriction bearing vibration signal. Combined with Hilbert envelope demodulation analysis, the fault characteristic frequencies of the denoised signal were extracted for fault diagnosis, which serves as a reference for wavelet denoising and demodulation of vibration signals in engineering practice.
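
    A minimal sketch of the envelope demodulation step described above, written in Python rather than LabVIEW (the sampling rate, resonance frequency, and fault characteristic frequency are assumptions): the Hilbert envelope of a resonance that is amplitude-modulated at the bearing fault frequency is computed, and the peak of the envelope spectrum recovers that frequency.

        import numpy as np
        from scipy.signal import hilbert

        fs = 12000                        # sampling rate, Hz (assumed)
        t = np.arange(0, 1.0, 1.0 / fs)
        bpfo = 107.0                      # assumed outer-race fault characteristic frequency, Hz
        # simulated fault signal: a 3 kHz resonance amplitude-modulated at the fault frequency
        signal = (1 + np.cos(2 * np.pi * bpfo * t)) * np.sin(2 * np.pi * 3000 * t)
        signal += 0.3 * np.random.default_rng(0).normal(size=t.size)

        envelope = np.abs(hilbert(signal))                         # Hilbert envelope
        spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(envelope.size, 1.0 / fs)
        print(freqs[np.argmax(spectrum)])                          # expected to be close to bpfo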

  11. Detection of CMOS bridging faults using minimal stuck-at fault test sets

    NASA Technical Reports Server (NTRS)

    Ijaz, Nabeel; Frenzel, James F.

    1993-01-01

    The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.

  12. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently proposed concept for fault detection in complex structures, is applied; it consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive thresholds are adopted for fault detection because of the limited prior knowledge of normal operational conditions and fault conditions. To isolate multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. Extensive experimental results validate that the proposed method can localize both single and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.

  13. Fault Identification by Unsupervised Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Nandan, S.; Mannu, U.

    2012-12-01

    Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover, such as cities, deserts or vegetation, or to capture changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture. Many disastrous events have been attributed to these blind faults. Faults and earthquakes are closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For better seismic risk evaluation it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine differences between the faults reconstructed by deterministic assignment in K-means and probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillion et al (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions. While Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientations of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and constrain the fault
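
    A hypothetical sketch of the clustering step described above (the synthetic catalog, number of clusters, and plane fitting via SVD are illustrative assumptions; no EM variant is shown): hypocenters are grouped with K-means and a plane is fitted to each cluster to recover a candidate fault-plane orientation.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)

        def synthetic_fault(normal, n=200):
            """Hypocenters scattered about a plane through the origin with the given normal."""
            normal = np.asarray(normal, dtype=float)
            normal /= np.linalg.norm(normal)
            pts = rng.uniform(-10, 10, size=(n, 3))
            pts -= np.outer(pts @ normal, normal)            # project points onto the plane
            return pts + 0.2 * rng.normal(size=(n, 3))       # add hypocenter location error

        hypocenters = np.vstack([synthetic_fault([1.0, 0.0, 0.3]),
                                 synthetic_fault([0.0, 1.0, -0.2]) + [30.0, 0.0, 0.0]])

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(hypocenters)
        for k in range(2):
            cluster = hypocenters[labels == k]
            _, _, vt = np.linalg.svd(cluster - cluster.mean(axis=0))
            print("cluster", k, "plane normal ~", np.round(vt[-1], 2))   # last row = fitted plane normal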

  14. Seismicity and recent faulting in eastern California and western and central Nevada: A preliminary report

    NASA Technical Reports Server (NTRS)

    Abdel-Gawad, M. (Principal Investigator); Silverstein, J.; Tubbesing, L.

    1973-01-01

    The author has identified the following significant results. ERTS-1 imagery covering the eastern California-Nevada seismic belt were utilized to study the fault pattern in relation to the distribution of earthquake epicenters and Quaternary volcanic rocks. Many suspected faults not previously mapped were identified. These include several suspected shear zones in Nevada, faults showing evidence of recent breakage, and major lineaments. Highly seismic areas are generally characterized by Holocene faulting and Quaternary volcanic activity. However, several major fault segments showing evidence of recent breakage are associated with little or no seismicity. The tectonic pattern strongly suggests that the eastern California-Nevada seismic belt coincides with a major crustal rift associated with zones of lateral shear. New data on potentially active fault zones have direct practical applications in national and local earthquake hazard reduction programs. Positive contacts have been made with Kern and Ventura Counties to make results of this investigation available for application to their earthquake hazards definition projects.

  15. How fault evolution changes strain partitioning and fault slip rates in Southern California: Results from geodynamic modeling

    NASA Astrophysics Data System (ADS)

    Ye, Jiyang; Liu, Mian

    2017-08-01

    In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults (<2 Ma). The initiation of these young faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.

  16. Fault-tolerant linear optical quantum computing with small-amplitude coherent States.

    PubMed

    Lund, A P; Ralph, T C; Haselgrove, H L

    2008-01-25

    Quantum computing using two coherent states as a qubit basis is a proposed alternative architecture with lower overheads but has been questioned as a practical way of performing quantum computing due to the fragility of diagonal states with large coherent amplitudes. We show that, using error correction, only small amplitudes (α > 1.2) are required for fault-tolerant quantum computing. We study fault tolerance under the effects of small amplitudes and loss using a Monte Carlo simulation. The first encoding level resources are orders of magnitude lower than the best single photon scheme.

  17. Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures

    NASA Technical Reports Server (NTRS)

    Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.

    1990-01-01

    A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.

  18. The engine fuel system fault analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei

    2017-05-01

    To improve the reliability of the engine fuel system, the typical fault factors of the engine fuel system were analyzed from structural and functional points of view. The fault characteristics were obtained by building the fuel system fault tree. By applying the failure mode and effects analysis (FMEA) method, several factors of the key component, the fuel regulator, were obtained, including the fault modes, the fault causes, and the fault effects. All of this lays the foundation for subsequent development of a fault diagnosis system.

  19. Comparative study of superconducting fault current limiter both for LCC-HVDC and VSC-HVDC systems

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Geon; Khan, Umer Amir; Lim, Sung-Woo; Shin, Woo-ju; Seo, In-Jin; Lee, Bang-Wook

    2015-11-01

    High Voltage Direct Current (HVDC) systems have been evaluated as the optimum solution for renewable energy transmission and long-distance power grid connections. In spite of their various advantages, HVDC systems are still regarded as less reliable than AC systems due to their vulnerability to power system faults. Furthermore, unlike for AC systems, optimum protection and switching devices have not yet been fully developed. Therefore, in order to enhance the reliability of HVDC systems, power system faults must be mitigated and reliable fault current limiting and switching devices should be developed. In this paper, in order to mitigate HVDC faults in both Line Commutated Converter HVDC (LCC-HVDC) and Voltage Source Converter HVDC (VSC-HVDC) systems, the application of a resistive superconducting fault current limiter (SFCL), known as an optimum solution for coping with power system faults, was considered. First, point-to-point simulation models for the LCC-HVDC and VSC-HVDC systems were developed, and the fault current characteristics under fault conditions were analyzed. Second, the application of the SFCL to each type of HVDC system was studied and the resulting modified fault current characteristics were compared. Consequently, it was deduced that applying an AC-SFCL to a point-to-point LCC-HVDC system is a desirable solution to mitigate fault current stresses and to prevent commutation failure in HVDC power systems interconnected with an AC grid.

  20. Where's the Hayward Fault? A Green Guide to the Fault

    USGS Publications Warehouse

    Stoffer, Philip W.

    2008-01-01

    This report describes self-guided field trips to one of North America's most dangerous earthquake faults: the Hayward Fault. Locations were chosen because of their easy access using mass transit and/or their significance relating to the natural and cultural history of the East Bay landscape. This field-trip guidebook was compiled to help commemorate the 140th anniversary of an estimated M 7.0 earthquake that occurred on the Hayward Fault at approximately 7:50 AM, October 21st, 1868. Although many reports and on-line resources have been compiled about the science and engineering associated with earthquakes on the Hayward Fault, this report has been prepared to serve as an outdoor guide to the fault for the interested public and for educators. The first chapter is a general overview of the geologic setting of the fault. This is followed by ten chapters of field trips to selected areas along the fault, or in the vicinity, where landscape, geologic, and man-made features that have relevance to understanding the nature of the fault and its earthquake history can be found. A glossary is provided to define and illustrate scientific terms used throughout this guide. A 'green' theme helps conserve resources and promotes use of public transportation, where possible. Although access to all locations described in this guide is possible by car, alternative suggestions are provided. To help conserve paper, this guidebook is available on-line only; however, select pages or chapters (field trips) within this guide can be printed separately to take along on an excursion. The discussions in this paper highlight transportation alternatives to visit selected field trip locations. In some cases, combinations, such as a ride on BART and a bus, can be used instead of automobile transportation. For other locales, bicycles can be an alternative means of transportation. Transportation descriptions on selected pages are intended to help guide fieldtrip planners or participants choose trip

  1. Sun Protection Practices and Sun Exposure among Children with a Parental History of Melanoma

    PubMed Central

    Glenn, Beth A.; Lin, Tiffany; Chang, L. Cindy; Okada, Ashley; Wong, Weng Kee; Glanz, Karen; Bastani, Roshan

    2014-01-01

    Background First-degree relatives of melanoma survivors have a substantially higher lifetime risk for melanoma than individuals with no family history. Exposure to ultraviolet radiation is the primary modifiable risk factor for the disease. Reducing UV exposure through sun protection may be particularly important for children with a parental history of melanoma. Nonetheless, limited prior research has investigated sun protection practices and sun exposure among these children. Methods The California Cancer Registry was used to identify melanoma survivors eligible to participate in a survey to assess their children's sun protection practices and sun exposure. The survey was administered by mail, telephone, or web to Latino and non-Latino white melanoma survivors with at least one child (0–17 years; N = 324). Results Sun exposure was high and the rate of sunburn was equivalent to or higher than estimates from average risk populations. Use of sun protection was suboptimal. Latino children were less likely to wear sunscreen and hats and more likely to wear sunglasses, although these differences disappeared in adjusted analyses. Increasing age of the child was associated with lower sun protection and higher risk for sunburn whereas higher objective risk for melanoma predicted improved sun protection and a higher risk for sunburns. Perception of high barriers to sun protection was the strongest modifiable correlate of sun protection. Conclusions Interventions to improve sun protection and reduce sun exposure and sunburns in high risk children are needed. Impact Intervening in high risk populations may help reduce the burden of melanoma in the U.S. PMID:25587110

  2. Tidal Fluctuations in a Deep Fault Extending Under the Santa Barbara Channel, California

    NASA Astrophysics Data System (ADS)

    Garven, G.; Stone, J.; Boles, J. R.

    2013-12-01

    Faults are known to strongly affect deep groundwater flow, and exert a profound control on petroleum accumulation, migration, and natural seafloor seepage from coastal reservoirs within the young sedimentary basins of southern California. In this paper we focus on major fault structure permeability and compressibility in the Santa Barbara Basin, where unique submarine and subsurface instrumentation provide the hydraulic characterization of faults in a structurally complex system. Subsurface geologic logs, geophysical logs, fluid P-T-X data, seafloor seep discharge patterns, fault mineralization petrology, isotopic data, fluid inclusions, and structural models help characterize the hydrogeological nature of faults in this seismically-active and young geologic terrain. Unique submarine gas flow data from a natural submarine seep area of the Santa Barbara Channel help constrain fault permeability k ~ 30 millidarcys for large-scale upward migration of methane-bearing formation fluids along one of the major fault zones. At another offshore site near Platform Holly, pressure-transducer time-series data from a 1.5 km deep exploration well in the South Ellwood Field demonstrate a strong ocean tidal component, due to vertical fault connectivity to the seafloor. Analytical models from classic hydrologic papers by Jacob-Ferris-Bredehoeft-van der Kamp-Wang can be used to extract large-scale fault permeability and compressibility parameters, based on tidal signal amplitude attenuation and phase shift at depth. For the South Ellwood Fault, we estimate k ~ 38 millidarcys (hydraulic conductivity K~ 3.6E-07 m/s) and specific storage coefficient Ss ~ 5.5E-08 m-1. The tidal-derived hydraulic properties also suggest a low effective porosity for the fault zone, n ~ 1 to 3%. Results of forward modeling with 2-D finite element models illustrate significant lateral propagation of the tidal signal into highly-permeable Monterey Formation. The results have important practical implications

  3. Evaluation of Cepstrum Algorithm with Impact Seeded Fault Data of Helicopter Oil Cooler Fan Bearings and Machine Fault Simulator Data

    DTIC Science & Technology

    2013-02-01

    of a bearing must be put into practice. There are many potential methods, the most traditional being the use of statistical time-domain features...accelerate degradation to test multiple bearings to gain statistical relevance and extrapolate results to scale for field conditions. Temperature...as time statistics, frequency estimation to improve the fault frequency detection. For future investigations, one can further explore the

  4. "HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes

    NASA Astrophysics Data System (ADS)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    We apply the concept of "Highly Optimized Tolerance" (HOT) for the investigation of spatio-temporal seismicity evolution, in particular mechanisms associated with largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible in a conceptually comparable way for the development, evolution and maintenance of younger fault structures and primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy; whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and associated seismic response pattern within the HOT concept, and outline fundamental differences between this novel interpretation to more orthodox viewpoints like the criticality concept. The discussion is flanked by numerical simulations of a

  5. Fault creep rates of the Chaman fault (Afghanistan and Pakistan) inferred from InSAR

    NASA Astrophysics Data System (ADS)

    Barnhart, William D.

    2017-01-01

    The Chaman fault is the major strike-slip structural boundary between the India and Eurasia plates. Despite sinistral slip rates similar to the North America-Pacific plate boundary, no major (>M7) earthquakes have been documented along the Chaman fault, indicating that the fault either creeps aseismically or is at a late stage in its seismic cycle. Recent work with remotely sensed interferometric synthetic aperture radar (InSAR) time series documented a heterogeneous distribution of fault creep and interseismic coupling along the entire length of the Chaman fault, including a ~125 km long creeping segment and a ~95 km long locked segment within the region documented in this study. Here I present additional InSAR time series results from the Envisat and ALOS radar missions spanning the southern and central Chaman fault in an effort to constrain the locking depth, dip, and slip direction of the Chaman fault. I find that the fault deviates little from a vertical geometry and accommodates little to no fault-normal displacement. Peak documented creep rates on the fault are 9-12 mm/yr, accounting for 25-33% of the total motion between India and Eurasia, and locking depths in creeping segments are commonly shallower than 500 m. The magnitude of the 1892 Chaman earthquake is well predicted by the total area of the 95 km long coupled segment. To a first order, the heterogeneous distribution of aseismic creep combined with consistently shallow locking depths suggests that the southern and central Chaman fault may only produce small to moderate earthquakes (

  6. Aftershocks illuminate the 2011 Mineral, Virginia, earthquake causative fault zone and nearby active faults

    USGS Publications Warehouse

    Horton, J. Wright; Shah, Anjana K.; McNamara, Daniel E.; Snyder, Stephen L.; Carter, Aina M

    2015-01-01

    Deployment of temporary seismic stations after the 2011 Mineral, Virginia (USA), earthquake produced a well-recorded aftershock sequence. The majority of aftershocks are in a tabular cluster that delineates the previously unknown Quail fault zone. Quail fault zone aftershocks range from ~3 to 8 km in depth and are in a 1-km-thick zone striking ~036° and dipping ~50°SE, consistent with a 028°, 50°SE main-shock nodal plane having mostly reverse slip. This cluster extends ~10 km along strike. The Quail fault zone projects to the surface in gneiss of the Ordovician Chopawamsic Formation just southeast of the Ordovician–Silurian Ellisville Granodiorite pluton tail. The following three clusters of shallow (<3 km) aftershocks illuminate other faults. (1) An elongate cluster of early aftershocks, ~10 km east of the Quail fault zone, extends 8 km from Fredericks Hall, strikes ~035°–039°, and appears to be roughly vertical. The Fredericks Hall fault may be a strand or splay of the older Lakeside fault zone, which to the south spans a width of several kilometers. (2) A cluster of later aftershocks ~3 km northeast of Cuckoo delineates a fault near the eastern contact of the Ordovician Quantico Formation. (3) An elongate cluster of late aftershocks ~1 km northwest of the Quail fault zone aftershock cluster delineates the northwest fault (described herein), which is temporally distinct, dips more steeply, and has a more northeastward strike. Some aftershock-illuminated faults coincide with preexisting units or structures evident from radiometric anomalies, suggesting tectonic inheritance or reactivation.

  7. Paleoseismicity of two historically quiescent faults in Australia: Implications for fault behavior in stable continental regions

    USGS Publications Warehouse

    Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.

    2003-01-01

    Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can produce, and have produced, unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.

  8. Faulting processes in active faults - Evidences from TCDP and SAFOD drill core samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janssen, C.; Wirth, R.; Wenk, H. -R.

    The microstructures, mineralogy and chemistry of representative samples collected from the cores of the San Andreas Fault drill hole (SAFOD) and the Taiwan Chelungpu-Fault Drilling project (TCDP) have been studied using optical microscopy, TEM, SEM, XRD and XRF analyses. SAFOD samples provide a transect across undeformed host rock, the fault damage zone and currently active deforming zones of the San Andreas Fault. TCDP samples are retrieved from the principal slip zone (PSZ) and from the surrounding damage zone of the Chelungpu Fault. Substantial differences exist in the clay mineralogy of SAFOD and TCDP fault gouge samples. Amorphous material has been observed in SAFOD as well as TCDP samples. In line with previous publications, we propose that melt, observed in TCDP black gouge samples, was produced by seismic slip (melt origin) whereas amorphous material in SAFOD samples was formed by comminution of grains (crush origin) rather than by melting. Dauphiné twins in quartz grains of SAFOD and TCDP samples may indicate high seismic stress. The differences in the crystallographic preferred orientation of calcite between SAFOD and TCDP samples are significant. Microstructures resulting from dissolution–precipitation processes were observed in both faults but are more frequently found in SAFOD samples than in TCDP fault rocks. As already described for many other fault zones, clay-gouge fabrics are quite weak in SAFOD and TCDP samples. Clay-clast aggregates (CCAs), proposed to indicate frictional heating and thermal pressurization, occur in material taken from the PSZ of the Chelungpu Fault, as well as within and outside of the SAFOD deforming zones, indicating that these microstructures were formed over a wide range of slip rates.

  9. Misbehaving Faults: The Expanding Role of Geodetic Imaging in Unraveling Unexpected Fault Slip Behavior

    NASA Astrophysics Data System (ADS)

    Barnhart, W. D.; Briggs, R.

    2015-12-01

    Geodetic imaging techniques enable researchers to "see" details of fault rupture that cannot be captured by complementary tools such as seismology and field studies, thus providing increasingly detailed information about surface strain, slip kinematics, and how an earthquake may be transcribed into the geological record. For example, the recent Haiti, Sierra El Mayor, and Nepal earthquakes illustrate the fundamental role of geodetic observations in recording blind ruptures where purely geological and seismological studies provided incomplete views of rupture kinematics. Traditional earthquake hazard analyses typically rely on sparse paleoseismic observations and incomplete mapping, simple assumptions of slip kinematics from Andersonian faulting, and earthquake analogs to characterize the probabilities of forthcoming ruptures and the severity of ground accelerations. Spatially dense geodetic observations in turn help to identify where these prevailing assumptions regarding fault behavior break down and highlight new and unexpected kinematic slip behavior. Here, we focus on three key contributions of space geodetic observations to the analysis of co-seismic deformation: identifying near-surface co-seismic slip where no easily recognized fault rupture exists; discerning non-Andersonian faulting styles; and quantifying distributed, off-fault deformation. The 2013 Balochistan strike-slip earthquake in Pakistan illuminates how space geodesy precisely images non-Andersonian behavior and off-fault deformation. Through analysis of high-resolution optical imagery and DEMs, evidence emerges that a single fault may slip as both a strike-slip and dip-slip fault across multiple seismic cycles. These observations likewise enable us to quantify on-fault deformation, which accounts for ~72% of the displacements in this earthquake. Nonetheless, the spatial distribution of on- and off-fault deformation in this event is highly variable, a complicating factor for comparisons

  10. Fuzzy-Wavelet Based Double Line Transmission System Protection Scheme in the Presence of SVC

    NASA Astrophysics Data System (ADS)

    Goli, Ravikumar; Shaik, Abdul Gafoor; Tulasi Ram, Sankara S.

    2015-06-01

    Increasing the power transfer capability and efficient utilization of available transmission lines, improving power system controllability and stability, power oscillation damping and voltage compensation have driven the development of Flexible AC Transmission System (FACTS) devices in recent decades. Shunt FACTS devices can have adverse effects on distance protection in both steady-state and transient periods. Severe under-reaching of the relay, caused by current injection at the point of connection to the system, is the most important of these problems; current absorption by the compensator, conversely, leads to relay overreach. This work presents an efficient method for fault detection, classification and location based on wavelet transforms and a fuzzy logic technique, which is almost independent of fault impedance, fault distance and fault inception angle. The proposed protection scheme is found to be fast, reliable and accurate for various types of faults on transmission lines with and without a static VAR compensator (SVC) at different locations and with various inception angles.
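
    A minimal sketch of the wavelet-based detection stage described above, assuming a sampled phase-current waveform; the wavelet family ('db4'), decomposition level, and threshold are illustrative choices, not parameters from the paper, and the fuzzy-logic classification and location stages are only indicated in comments.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def detail_energy(signal, wavelet="db4", level=3):
        """Energy of the finest-scale detail coefficients of a current waveform."""
        coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
        finest_detail = coeffs[-1]          # d1 band, where a fault transient shows up
        return float(np.sum(finest_detail ** 2))

    def detect_fault(signal, threshold=1e-3):
        """Crude detection stage: a fault transient injects high-frequency content
        and raises the detail-band energy above the threshold. In the scheme
        described above, a fuzzy-logic stage would then use such wavelet features
        to classify the fault type and estimate its location."""
        return detail_energy(signal) > threshold
    ```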

  11. Fault strength in Marmara region inferred from the geometry of the principal stress axes and fault orientations: A case study for the Prince's Islands fault segment

    NASA Astrophysics Data System (ADS)

    Pinar, Ali; Coskun, Zeynep; Mert, Aydin; Kalafat, Dogan

    2015-04-01

    The general consensus based on historical earthquake data points out that the last major moment release on the Prince's Islands fault was in 1766, which in turn signals increased seismic risk for the Istanbul metropolitan area considering that most of the 20 mm/yr GPS-derived slip rate for the region is accommodated by that fault segment. The orientation of the Prince's Islands fault segment overlaps with the NW-SE direction of the maximum principal stress axis derived from the focal mechanism solutions of the large and moderate sized earthquakes that occurred in the Marmara region. As such, the NW-SE trending fault segment translates the motion between the two E-W trending branches of the North Anatolian fault zone; one extending from the Gulf of Izmit towards the Çınarcık basin and the other extending between offshore Bakırköy and Silivri. The basic relation between the orientation of the maximum and minimum principal stress axes, the shear and normal stresses, and the orientation of a fault provides a clue to the strength of a fault, i.e., its frictional coefficient. Here, the angle between the fault normal and the maximum compressive stress axis is a key parameter: a maximum compressive stress that is nearly fault-normal or fault-parallel may be a necessary and sufficient condition for fault creep. That relation also implies that when the trend of the sigma-1 axis is close to the strike of the fault, the shear stress acting on the fault plane approaches zero. On the other hand, the ratio between the shear and normal stresses acting on a fault plane is a measure of the frictional coefficient of the fault. Accordingly, the geometry between the Prince's Islands fault segment and the maximum principal stress axis matches a weak fault model. In the frame of this presentation we analyze seismological data acquired in the Marmara region and interpret the results in conjunction with the above-mentioned weak fault model.
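
    The relations alluded to above are the standard expressions for the shear and normal stress on a plane whose normal makes an angle theta with the maximum compressive stress (a textbook result, not reproduced from the paper):

    ```latex
    \tau = \tfrac{1}{2}\,(\sigma_1 - \sigma_3)\sin 2\theta, \qquad
    \sigma_n = \tfrac{1}{2}\,(\sigma_1 + \sigma_3) + \tfrac{1}{2}\,(\sigma_1 - \sigma_3)\cos 2\theta, \qquad
    \mu \approx \frac{\tau}{\sigma_n}
    ```

    When sigma-1 is nearly fault-normal (theta near 0) or nearly fault-parallel (theta near 90°), the shear stress tau goes to zero, which is why a fault in this orientation can slip only if it is frictionally weak.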

  12. A new method of converter transformer protection without commutation failure

    NASA Astrophysics Data System (ADS)

    Zhang, Jiayu; Kong, Bo; Liu, Mingchang; Zhang, Jun; Guo, Jianhong; Jing, Xu

    2018-01-01

    With the development of AC/DC hybrid transmission technology, the converter transformer serves as the node of AC-DC conversion in HVDC transmission, and its reliable, safe and stable operation plays an important role in DC transmission. As a common problem of DC transmission, commutation failure poses a serious threat to the safe and stable operation of the power grid. Based on the commutation relation between the AC bus voltage of the converter station and the output DC voltage of the converter, a generalized transformation ratio is defined, and a new method of converter transformer protection based on this generalized transformation ratio is put forward. The method uses the generalized ratio for on-line monitoring of faulty or abnormal commutation components, uses the current characteristics of the valve-side bushing CT to identify converter transformer faults accurately, and is not influenced by the presence of commutation failure. Fault analysis and EMTDC/PSCAD simulation show that the protection operates correctly under various converter fault conditions.

  13. A phase angle based diagnostic scheme to planetary gear faults diagnostics under non-stationary operational conditions

    NASA Astrophysics Data System (ADS)

    Feng, Ke; Wang, Kesheng; Ni, Qing; Zuo, Ming J.; Wei, Dongdong

    2017-11-01

    The planetary gearbox is a critical component of rotating machinery. It is widely used in wind turbines, aerospace, and transmission systems in heavy industry. It is therefore important to monitor planetary gearboxes, especially for fault diagnostics, during operation. In practice, however, the operational conditions of a planetary gearbox are often characterized by variations in rotational speed and load, which complicate fault diagnosis from the measured vibrations. In this paper, phase angle data extracted from measured planetary gearbox vibrations are used for fault detection under non-stationary operational conditions. Together with sample entropy, fault diagnosis of the planetary gearbox is implemented. The proposed scheme is explained and demonstrated in both simulation and experimental studies. The scheme proves effective and offers advantages for fault diagnosis of planetary gearboxes under non-stationary operational conditions.
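
    A minimal sketch of the two ingredients named above: extracting an instantaneous phase angle from a vibration record (here via the Hilbert transform, one common choice) and computing sample entropy of the resulting series. The parameter values (m, r) are conventional defaults, not the settings used in the paper.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def phase_angle(vibration):
        """Unwrapped instantaneous phase of a vibration record."""
        return np.unwrap(np.angle(hilbert(np.asarray(vibration, dtype=float))))

    def sample_entropy(x, m=2, r=None):
        """Sample entropy of a 1-D series (plain, unoptimized implementation)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        r = 0.2 * np.std(x) if r is None else r

        def match_count(length):
            # Count template pairs whose Chebyshev distance is within r.
            templates = np.array([x[i:i + length] for i in range(n - length)])
            count = 0
            for i in range(len(templates) - 1):
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += int(np.sum(dist <= r))
            return count

        b, a = match_count(m), match_count(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else float("inf")

    # A healthy gearbox tends to give a more regular phase series (lower entropy)
    # than one with a local gear fault, even when speed and load vary.
    ```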

  14. Loading of the San Andreas fault by flood-induced rupture of faults beneath the Salton Sea

    USGS Publications Warehouse

    Brothers, Daniel; Kilb, Debi; Luttrell, Karen; Driscoll, Neal W.; Kent, Graham

    2011-01-01

    The southern San Andreas fault has not experienced a large earthquake for approximately 300 years, yet the previous five earthquakes occurred at ~180-year intervals. Large strike-slip faults are often segmented by lateral stepover zones. Movement on smaller faults within a stepover zone could perturb the main fault segments and potentially trigger a large earthquake. The southern San Andreas fault terminates in an extensional stepover zone beneath the Salton Sea—a lake that has experienced periodic flooding and desiccation since the late Holocene. Here we reconstruct the magnitude and timing of fault activity beneath the Salton Sea over several earthquake cycles. We observe coincident timing between flooding events, stepover fault displacement and ruptures on the San Andreas fault. Using Coulomb stress models, we show that the combined effect of lake loading, stepover fault movement and increased pore pressure could increase stress on the southern San Andreas fault to levels sufficient to induce failure. We conclude that rupture of the stepover faults, caused by periodic flooding of the palaeo-Salton Sea and by tectonic forcing, had the potential to trigger earthquake rupture on the southern San Andreas fault. Extensional stepover zones are highly susceptible to rapid stress loading and thus the Salton Sea may be a nucleation point for large ruptures on the southern San Andreas fault.
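
    For reference, the quantity evaluated in such Coulomb stress models is typically the change in Coulomb failure stress on a receiver fault; the standard form below includes the pore-pressure term relevant to the lake-loading argument above (this is the general expression, not the authors' specific parameterization):

    ```latex
    \Delta \mathrm{CFS} = \Delta\tau + \mu\,\bigl(\Delta\sigma_n + \Delta p\bigr)
    ```

    Here Delta-tau is the shear stress change in the slip direction, Delta-sigma_n the normal stress change (unclamping positive), Delta-p the pore pressure change, and mu the friction coefficient; a positive Delta-CFS moves the receiver fault closer to failure.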

  15. Software-implemented fault insertion: An FTMP example

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1987-01-01

    This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of time of insertion and system workload. For the fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, software-implemented fault insertion can be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification and recovery. Although the software-inserted faults do not map directly to hardware-inserted faults, experiments show software-implemented fault insertion is capable of emulating hardware fault insertion, with greater ease and automation.
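
    The basic idea of software-implemented fault insertion can be illustrated with a generic sketch: corrupt a stored value (for example by flipping a random bit) and measure how long a periodic checker takes to notice. This is only an illustration of the concept, not the FTMP mechanism described in the report, and a real experiment would vary insertion time and system workload as the abstract notes.

    ```python
    import random
    import time

    def flip_random_bit(word, width=32):
        """Return `word` with one randomly chosen bit inverted (the injected fault)."""
        return word ^ (1 << random.randrange(width))

    def run_injection_trial(value=0xDEADBEEF, checker_period_s=0.001):
        """Corrupt a copy of `value`, then poll a parity-style checker until the
        corruption is detected; return the detection latency in seconds."""
        reference_parity = bin(value).count("1") % 2
        corrupted = flip_random_bit(value)            # software-inserted fault
        t_inject = time.monotonic()
        while True:                                   # periodic error-detection task
            if bin(corrupted).count("1") % 2 != reference_parity:
                return time.monotonic() - t_inject    # single-bit flip always trips parity
            time.sleep(checker_period_s)

    print(f"detected after {run_injection_trial() * 1e3:.3f} ms")
    ```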

  16. Development, Interaction and Linkage of Normal Fault Segments along the 100-km Bilila-Mtakataka Fault, Malawi

    NASA Astrophysics Data System (ADS)

    Fagereng, A.; Hodge, M.; Biggs, J.; Mdala, H. S.; Goda, K.

    2016-12-01

    Faults grow through the interaction and linkage of isolated fault segments. Continuous fault systems are those where segments interact, link and may slip synchronously, whereas non-continuous fault systems comprise isolated faults. As seismic moment is related to fault length (Wells and Coppersmith, 1994), understanding whether a fault system is continuous or not is critical in evaluating seismic hazard. Maturity may be a control on fault continuity: immature, low displacement faults are typically assumed to be non-continuous. Here, we study two overlapping, 20 km long, normal fault segments of the N-S striking Bilila-Mtakataka fault, Malawi, in the southern section of the East African Rift System. Despite the fault's relative immaturity, previous studies concluded that the Bilila-Mtakataka fault is continuous for its entire 100 km length, with the most recent event equating to an Mw8.0 earthquake (Jackson and Blenkinsop, 1997). We explore whether segment geometry and the relationship to pre-existing high-grade metamorphic foliation have influenced segment interaction and fault development. Fault geometry and scarp height are constrained by DEMs derived from SRTM, Pleiades and `Structure from Motion' photogrammetry using a UAV, alongside direct field observations. The segment strikes differ on average by 10°, but up to 55° at their adjacent tips. The southern segment is sub-parallel to the foliation, whereas the northern segment is highly oblique to the foliation. Geometrical surface discontinuities suggest two isolated faults; however, displacement-length profiles and Coulomb stress change models suggest segment interaction, with potential for linkage at depth. Further work must be undertaken on other segments to assess the continuity of the entire fault and to determine whether an earthquake larger than the maximum instrumentally recorded event (1910 M7.4 Rukwa) is possible.

  17. Global strike-slip fault distribution on Enceladus reveals mostly left-lateral faults

    NASA Astrophysics Data System (ADS)

    Martin, E. S.; Kattenhorn, S. A.

    2013-12-01

    Within the outer solar system, normal faults are a dominant tectonic feature; however, strike-slip faults have played a role in modifying the surfaces of many icy bodies, including Europa, Ganymede, and Enceladus. Large-scale tectonic deformation in icy shells develops in response to stresses caused by a range of mechanisms including polar wander, despinning, volume changes, orbital recession/decay, diurnal tides, and nonsynchronous rotation (NSR). Icy shells often preserve this record of tectonic deformation as patterns of fractures that can be used to identify the source of stress responsible for creating the patterns. Previously published work on Jupiter's moon Europa found that right-lateral strike-slip faults predominantly formed in the southern hemisphere and left-lateral strike-slip faults in the northern hemisphere. This pattern suggested they were formed in the past by stresses induced by diurnal tidal forcing, and were then rotated into their current longitudinal positions by NSR. We mapped the distribution of strike-slip faults on Enceladus and used kinematic indicators, including tailcracks and en echelon fractures, to determine their sense of slip. Tailcracks are secondary fractures that form as a result of concentrations of stress at the tips of slipping faults with geometric patterns dictated by the slip sense. A total of 31 strike-slip faults were identified, nine of which were right-lateral faults, all distributed in a seemingly random pattern across Enceladus's surface, in contrast to Europa. Additionally, there is a dearth of strike-slip faults within the tectonized terrains centered at 90°W and within the polar regions north and south of 60°N and 60°S, respectively. The lack of strike-slip faults in the north polar region may be explained, in part, by limited data coverage. The south polar terrain (SPT), characterized by the prominent tiger stripes and south polar dichotomy, yielded no discrete strike-slip faults. This does not suggest that

  18. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    NASA Astrophysics Data System (ADS)

    Benesh, Nathan Philip

    This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. Using 3D seismic reflection data and new

  19. Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda; Whitman, Gerek

    2016-01-01

    Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IV&V) Program, with Software Assurance Research Program support, extracted FM architectures across the IV&V portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IV&V projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. Additionally, the role FM has with regard to strengthened security requirements, with potential to advance overall asset protection of flight software systems, is being addressed with the development of an adverse conditions database encompassing flight software vulnerabilities. Capitalizing on the established framework, this TR suite provides assurance capability for a variety of FM architectures and varied development approaches. Research results are being disseminated across NASA, other agencies, and the

  20. Risk-Significant Adverse Condition Awareness Strengthens Assurance of Fault Management Systems

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda

    2017-01-01

    As spaceflight systems increase in complexity, Fault Management (FM) systems are ranked high in risk-based assessment of software criticality, emphasizing the importance of establishing highly competent domain expertise to provide assurance. Adverse conditions (ACs) and specific vulnerabilities encountered by safety- and mission-critical software systems have been identified through efforts to reduce the risk posture of software-intensive NASA missions. Acknowledgement of potential off-nominal conditions and analysis to determine software system resiliency are important aspects of hazard analysis and FM. A key component of assuring FM is an assessment of how well software addresses susceptibility to failure through consideration of ACs. Focus on significant risk predicted through experienced analysis conducted at the NASA Independent Verification & Validation (IV&V) Program enables the scoping of effective assurance strategies with regard to overall asset protection of complex spaceflight as well as ground systems. Research efforts sponsored by NASA's Office of Safety and Mission Assurance (OSMA) defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domain/component, causal fault, and other key characteristics. Vulnerability in off-nominal situations, architectural design weaknesses, and unexpected or undesirable system behaviors in reaction to faults are curtailed with the awareness of ACs and risk-significant scenarios modeled for analysts through this database. Integration within the Enterprise Architecture at NASA IV&V enables interfacing with other tools and datasets, technical support, and accessibility across the Agency. This paper discusses the development of an improved workflow process utilizing
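
    A minimal sketch of the kind of repository and query described above, using an in-memory SQLite table with the fields named in the record (project, mission type, domain/component, causal fault); the schema and rows are illustrative stand-ins, not the actual NASA database.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE adverse_conditions (
            id INTEGER PRIMARY KEY,
            project TEXT, mission_type TEXT, domain_component TEXT,
            causal_fault TEXT, description TEXT)
    """)
    conn.executemany(
        "INSERT INTO adverse_conditions VALUES (NULL, ?, ?, ?, ?, ?)",
        [("ExampleOrbiter", "Earth Orbiter", "GN&C", "sensor dropout",
          "attitude estimate diverges during safe-mode entry"),
         ("ExampleLander", "Deep Space Robotic", "Power", "stuck relay",
          "battery state of charge misreported after fault response")])

    # Query pattern: find ACs relevant to a given mission type and component.
    for row in conn.execute(
            "SELECT project, causal_fault, description FROM adverse_conditions "
            "WHERE mission_type = ? AND domain_component = ?",
            ("Earth Orbiter", "GN&C")):
        print(row)
    ```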

  2. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  4. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
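
    A minimal sketch of one of the fault classes listed above, a sensor bias fault model: while the fault is active, the simulated controller sees the true reading plus a constant offset, which is how such faults are commonly imposed when evaluating FDD algorithms. This is illustrative code only, not the OpenStudio measures themselves.

    ```python
    def biased_reading(true_value, bias, fault_active):
        """Sensor bias fault model: the controller sees truth plus an offset
        whenever the fault is active (e.g., an economizer RH or supply-air
        temperature sensor, as in the report)."""
        return true_value + bias if fault_active else true_value

    # Example: a +2 K supply-air temperature sensor bias switched on at hour 2.
    for hour, t_supply in enumerate([12.8, 13.0, 13.1, 12.9]):
        sensed = biased_reading(t_supply, bias=2.0, fault_active=(hour >= 2))
        print(f"hour {hour}: true {t_supply:.1f} C, sensed {sensed:.1f} C")
    ```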

  5. Illite authigenesis during faulting and fluid flow - a microstructural study of fault rocks

    NASA Astrophysics Data System (ADS)

    Scheiber, Thomas; Viola, Giulio; van der Lelij, Roelant; Margreth, Annina

    2017-04-01

    Authigenic illite can form synkinematically during slip events along brittle faults. In addition, it can crystallize as a result of fluid flow and associated mineral alteration processes in hydrothermal environments. K-Ar dating of illite-bearing fault rocks has recently become a common tool to constrain the timing of fault activity. However, to fully interpret the derived age spectra in terms of deformation ages, a careful investigation of the fault deformation history and architecture at the outcrop-scale, ideally followed by a detailed mineralogical analysis of the illite-forming processes at the micro-scale, is indispensable. Here we integrate this methodological approach by presenting microstructural observations from the host rock immediately adjacent to dated fault gouges from two sites located in the Rolvsnes granodiorite (Bømlo, western Norway). This granodiorite experienced multiple episodes of brittle faulting and fluid-induced alteration, starting in the Mid Ordovician (Scheiber et al., 2016). Fault gouges are predominantly associated with normal faults accommodating mainly E-W extension. K-Ar dating of illites separated from representative fault gouges constrains deformation and alteration due to fluid ingress from the Permian to the Cretaceous, with a cluster of ages for the finest (<0.1 µm) fraction in the early to middle Jurassic. At site one, high-resolution thin section structural mapping reveals a complex deformation history characterized by several coexisting types of calcite veins and seven different generations of cataclasite, two of which contain a significant amount of authigenic and undoubtedly deformation-related illite. At site two, fluid ingress along and adjoining the fault core induced pervasive alteration of the host granodiorite. Quartz is crosscut by calcite veinlets whereas plagioclase, K-feldspar and biotite are almost completely replaced by the main alteration products kaolin, quartz and illite. Illite-bearing micro
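
    For context, the K-Ar ages referred to above follow from the standard K-Ar age equation (general geochronology, not a formula specific to this study):

    ```latex
    t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{\lambda}{\lambda_{e}}\,\frac{{}^{40}\mathrm{Ar}^{*}}{{}^{40}\mathrm{K}}\right)
    ```

    where lambda is the total decay constant of 40K, lambda_e the partial constant for decay to 40Ar, and 40Ar* the radiogenic argon retained since illite crystallization.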

  6. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    USGS Publications Warehouse

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M < ~3) earthquakes in southern California, the east San Francisco Bay, and the aftershock sequence of the 1989 Loma Prieta earthquake. I quantify the degree of mechanism variability on a range of length scales by comparing the hypocentral distance between every pair of events and the angular difference between their focal mechanisms. Closely spaced earthquakes (small interhypocentral distances) have similar focal mechanisms, implying that although faults of many orientations may or may not be present, only similarly oriented fault planes produce earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2-50 km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  7. Spatiotemporal patterns of fault slip rates across the Central Sierra Nevada frontal fault zone

    NASA Astrophysics Data System (ADS)

    Rood, Dylan H.; Burbank, Douglas W.; Finkel, Robert C.

    2011-01-01

    Patterns in fault slip rates through time and space are examined across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38 and 39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and 10Be surface exposure dating, mean fault slip rates are defined, and by utilizing markers of different ages (generally, ~20 ka and ~150 ka), rates through time and interactions among multiple faults are examined over 10^4-10^5 year timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm/yr (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm/yr along the West Fork of the Carson River at Woodfords. Data permit rates that are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km between the northern Mono Basin (1.3 +0.6/-0.3 mm/yr at the Lundy Canyon site) and the Bridgeport Basin (0.3 ± 0.1 mm/yr). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin is indicative of a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection
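
    The mean slip rates above come from dividing the measured offset of a dated geomorphic marker by its exposure age; the sketch below shows that arithmetic with made-up numbers chosen only to give a rate of the right order (the actual offsets are not given in the record).

    ```python
    def slip_rate_mm_per_yr(offset_m, age_ka):
        """Mean fault slip rate from a displaced, dated geomorphic marker."""
        return offset_m * 1e3 / (age_ka * 1e3)   # m -> mm, ka -> yr

    # Hypothetical example: a ~20 ka moraine crest offset ~6 m across the fault.
    print(f"{slip_rate_mm_per_yr(6.0, 20.0):.1f} mm/yr")   # ~0.3 mm/yr
    ```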

  8. Spatiotemporal Patterns of Fault Slip Rates Across the Central Sierra Nevada Frontal Fault Zone

    NASA Astrophysics Data System (ADS)

    Rood, D. H.; Burbank, D.; Finkel, R. C.

    2010-12-01

    We examine patterns in fault slip rates through time and space across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38-39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and Be-10 surface exposure dating, we define mean fault slip rates, and by utilizing markers of different ages (generally, ~20 ka and ~150 ka), we examine rates through time and interactions among multiple faults over 10-100 ky timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm/yr (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm/yr along the West Fork of the Carson River at Woodfords. Our data permit rates that are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km between the northern Mono Basin (1.3 +0.6/-0.3 mm/yr at the Lundy Canyon site) and the Bridgeport Basin (0.3 ± 0.1 mm/yr). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin reflects a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection, extension is accommodated within a diffuse zone of

  9. Late Quaternary Faulting in Southeastern Louisiana: A Natural Laboratory for Understanding Shallow Faulting in Deltaic Materials

    NASA Astrophysics Data System (ADS)

    Dawers, N. H.; McLindon, C.

    2017-12-01

    A synthesis of late Quaternary faults within the Mississippi River deltaic plain aims to provide a more accurate assessment of regional and local fault architecture, and interactions between faulting, sediment loading, salt withdrawal and compaction. This effort was initiated by the New Orleans Geological Society and has resulted in access to industry 3d seismic reflection data, as well as fault trace maps, and various types of well data and biostratigraphy. An unexpected outgrowth of this project is a hypothesis that gravity-driven normal faults in deltaic settings may be good candidates for shallow aseismic and slow-slip phenomena. The late Quaternary fault population is characterized by several large, highly segmented normal fault arrays: the Baton Rouge-Tepetate fault zone, the Lake Pontchartrain-Lake Borgne fault zone, the Golden Meadow fault zone (GMFZ), and a major counter-regional salt withdrawal structure (the Bay Marchand-Timbalier Bay-Caillou Island salt complex and West Delta fault zone) that lies just offshore of southeastern Louisiana. In comparison to the other, more northerly fault zones, the GMFZ is still significantly salt-involved. Salt structures segment the GMFZ with fault tips ending near or within salt, resulting in highly localized fault and compaction related subsidence separated by shallow salt structures, which are inherently buoyant and virtually incompressible. At least several segments within the GMFZ are characterized by marsh breaks that formed aseismically over timescales of days to months, such as near Adams Bay and Lake Enfermer. One well-documented surface rupture adjacent to a salt dome propagated over a 3 day period in 1943. We suggest that Louisiana's coastal faults make excellent analogues for deltaic faults in general, and propose that a series of positive feedbacks keep them active in the near surface. These include differential sediment loading and compaction, weak fault zone materials, high fluid pressure, low elastic

  10. Fault-tolerant software - Experiment with the SIFT operating system. [Software Implemented Fault Tolerance computer]

    NASA Technical Reports Server (NTRS)

    Brunelle, J. E.; Eckhardt, D. E., Jr.

    1985-01-01

    Results are presented of an experiment conducted in the NASA Avionics Integrated Research Laboratory (AIRLAB) to investigate the implementation of fault-tolerant software techniques on fault-tolerant computer architectures, in particular the Software Implemented Fault Tolerance (SIFT) computer. The N-version programming and recovery block techniques were implemented on a portion of the SIFT operating system. The results indicate that, to effectively implement fault-tolerant software design techniques, system requirements will be impacted and suggest that retrofitting fault-tolerant software on existing designs will be inefficient and may require system modification.
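
    The two techniques named above are classical software fault-tolerance patterns; the sketch below illustrates them generically (simple Python stand-ins, not the SIFT operating-system code).

    ```python
    def n_version(inputs, versions):
        """N-version programming: run independently developed versions of the
        same routine and accept the majority result."""
        results = [v(inputs) for v in versions]
        return max(set(results), key=results.count)

    def recovery_block(inputs, primary, alternates, acceptance_test):
        """Recovery block: try the primary routine; if its result fails the
        acceptance test, fall back to the alternates in order."""
        for routine in [primary] + list(alternates):
            result = routine(inputs)
            if acceptance_test(inputs, result):
                return result
        raise RuntimeError("all alternates failed the acceptance test")

    # Example: three 'versions' of a squaring routine, one of them faulty.
    versions = [lambda x: x * x, lambda x: x ** 2, lambda x: x * x + 1]
    print(n_version(3, versions))                        # majority vote -> 9
    print(recovery_block(3, versions[2], versions[:2],
                         lambda x, y: y == x * x))       # primary rejected -> 9
    ```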

  11. The distribution of deformation in parallel fault-related folds with migrating axial surfaces: comparison between fault-propagation and fault-bend folding

    NASA Astrophysics Data System (ADS)

    Salvini, Francesco; Storti, Fabrizio

    2001-01-01

    In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures are controlled by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity, depending on their history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds above ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterland of the crest-to-forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

  12. Reliability Through Life of Internal Protection Devices in Small-Cell ABSL Batteries

    NASA Technical Reports Server (NTRS)

    Neubauer, Jeremy; Ng, Ka Lok; Bennetti, Andrea; Pearson, Chris; Rao, Gopal

    2007-01-01

    This viewgraph presentation reviews a reliability analysis of small cell protection batteries. The contents include: 1) The s-p Topology; 2) Cell Level Protection Devices; 3) Battery Level Fault Protection; 4) Large Cell Comparison; and 5) Battery Level Testing and Results.

  13. Fault pattern at the northern end of the Death Valley - Furnace Creek fault zone, California and Nevada

    NASA Technical Reports Server (NTRS)

    Liggett, M. A. (Principal Investigator); Childs, J. F.

    1974-01-01

    The author has identified the following significant results. The pattern of faulting associated with the termination of the Death Valley-Furnace Creek Fault Zone in northern Fish Lake Valley, Nevada was studied in ERTS-1 MSS color composite imagery and color IR U-2 photography. Imagery analysis was supported by field reconnaissance and low altitude aerial photography. The northwest-trending right-lateral Death Valley-Furnace Creek Fault Zone changes northward to a complex pattern of discontinuous dip slip and strike slip faults. This fault pattern terminates to the north against an east-northeast trending zone herein called the Montgomery Fault Zone. No evidence for continuation of the Death Valley-Furnace Creek Fault Zone is recognized north of the Montgomery Fault Zone. Penecontemporaneous displacement in the Death Valley-Furnace Creek Fault Zone, the complex transitional zone, and the Montgomery Fault Zone suggests that the systems are genetically related. Mercury mineralization appears to have been localized along faults recognizable in ERTS-1 imagery within the transitional zone and the Montgomery Fault Zone.

  14. Study on the Evaluation Method for Fault Displacement: Probabilistic Approach Based on Japanese Earthquake Rupture Data - Principal fault displacements -

    NASA Astrophysics Data System (ADS)

    Kitada, N.; Inoue, N.; Tonagi, M.

    2016-12-01

    The purpose of Probabilistic Fault Displacement Hazard Analysis (PFDHA) is to estimate fault displacement values and the extent of their impact. There are two types of fault displacement related to an earthquake fault: principal fault displacement and distributed fault displacement. Distributed fault displacement should be evaluated for important facilities, such as nuclear installations. PFDHA estimates both principal and distributed fault displacement. For estimation, PFDHA uses distance-displacement functions, which are constructed from field measurement data. We constructed slip-distance relations for principal fault displacement based on Japanese strike-slip and reverse-slip earthquakes in order to apply them to Japan, a subduction setting. However, observed displacement data are sparse, especially for reverse faults. Takao et al. (2013) estimated the relation using all fault types together (reverse and strike-slip). Since Takao et al. (2013), several inland earthquakes have occurred in Japan, so here we estimate distance-displacement functions separately for the strike-slip and reverse fault types, adding the new fault displacement data. To normalize the slip function data, several criteria have been proposed by previous researchers. We normalized the principal fault displacement data using several of these methods and compared the resulting slip-distance functions. Normalizing the Japanese reverse-fault data by total fault length did not show a particular trend in the slip-distance relation. For segmented data, the slip-distance relationship indicated a trend similar to that of strike-slip faults. We will also discuss the relation between principal fault displacement distributions and source fault character. According to the slip distribution function of Petersen et al. (2011), the normalized displacement of strike-slip faults decreases toward the fault ends. However, the Japanese strike-slip fault data do not decrease as strongly toward the end of the fault
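
    A minimal sketch of the normalization step described above: positions along a rupture are normalized by total fault length and displacements by the maximum displacement, so that profiles from different earthquakes can be stacked into a single slip-distance function. The sample profile below is invented for illustration and is not from the study's data set.

    ```python
    import numpy as np

    def normalize_profile(positions_km, displacements_m, fault_length_km):
        """Normalize a surface-displacement profile: x/L in [0, 1] and D/Dmax."""
        x_norm = np.asarray(positions_km, dtype=float) / fault_length_km
        d = np.asarray(displacements_m, dtype=float)
        return x_norm, d / d.max()

    # Hypothetical principal-fault profile for a 40-km-long rupture.
    x, d = normalize_profile([0, 5, 10, 20, 30, 35, 40],
                             [0.0, 0.8, 1.6, 2.1, 1.5, 0.6, 0.0],
                             fault_length_km=40.0)
    print(np.round(x, 2), np.round(d, 2))
    ```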

  15. Fault Current Distribution and Pole Earth Potential Rise (EPR) Under Substation Fault

    NASA Astrophysics Data System (ADS)

    Nnassereddine, M.; Rizk, J.; Hellany, A.; Nagrial, M.

    2013-09-01

    New high-voltage (HV) substations are fed by transmission lines. The position of these lines necessitates earthing design to ensure safety compliance of the system. Conductive structures such as steel or concrete poles are widely used in HV transmission mains. The earth potential rise (EPR) generated by a fault at the substation could result in an unsafe condition. This article discusses EPR under substation fault. Pole EPR under substation fault is assessed with and without consideration of mutual impedance. Split factor determination with and without the mutual impedance of the line is also discussed. Furthermore, a simplified formula to compute the pole grid current under substation fault is included, and the n factor, which determines the number of poles that require earthing assessment under substation fault, is introduced. A case study is shown.
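
    The quantities discussed above combine in a simple way: the current that actually enters the pole earthing system is the fault current reduced by a split factor, and the EPR is that grid current multiplied by the earthing impedance. The sketch below shows this generic relationship with made-up values; it is not the simplified formula or the n factor derived in the article.

    ```python
    def pole_epr_volts(fault_current_a, split_factor, earth_impedance_ohm):
        """EPR = (split factor x fault current) x earthing impedance."""
        grid_current = split_factor * fault_current_a   # share entering the earth grid
        return grid_current * earth_impedance_ohm

    # Hypothetical values: 10 kA substation fault, 0.6 split factor, 5 ohm pole earth.
    print(f"{pole_epr_volts(10_000, 0.6, 5.0):,.0f} V")   # 30,000 V
    ```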

  16. Award ER25750: Coordinated Infrastructure for Fault Tolerance Systems Indiana University Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumsdaine, Andrew

    2013-03-08

    The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults have typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.
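
    The coordination idea described above, fault information published by one subsystem and consumed by others, can be sketched with a toy publish/subscribe bus: a network library reports a failed link, and a subscribed scheduler excludes the affected node from future allocations. This is an illustration of the concept only, not Open MPI or CIFTS code.

    ```python
    from collections import defaultdict

    class FaultBus:
        """Toy system-wide fault-information bus."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, event_type, handler):
            self._subscribers[event_type].append(handler)

        def publish(self, event_type, **details):
            for handler in self._subscribers[event_type]:
                handler(details)

    bus = FaultBus()
    healthy_nodes = {"n0", "n1", "n2", "n3"}

    # The "job scheduler" reacts to faults it could not observe directly.
    bus.subscribe("link_failure", lambda event: healthy_nodes.discard(event["node"]))

    # The "network library" reports a fault it detected.
    bus.publish("link_failure", node="n2", peer="n3")
    print(sorted(healthy_nodes))   # n2 excluded from further allocations
    ```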

  17. Stress sensitivity of fault seismicity: A comparison between limited-offset oblique and major strike-slip faults

    USGS Publications Warehouse

    Parsons, T.; Stein, R.S.; Simpson, R.W.; Reasenberg, P.A.

    1999-01-01

    We present a new three-dimensional inventory of the southern San Francisco Bay area faults and use it to calculate stress applied principally by the 1989 M = 7.1 Loma Prieta earthquake and to compare fault seismicity rates before and after 1989. The major high-angle right-lateral faults exhibit a different response to the stress change than do minor oblique (right-lateral/thrust) faults. Seismicity on oblique-slip faults in the southern Santa Clara Valley thrust belt increased where the faults were unclamped. The strong dependence of seismicity change on normal stress change implies a high coefficient of static friction. In contrast, we observe that faults with significant offset (>50-100 km) behave differently; microseismicity on the Hayward fault diminished where right-lateral shear stress was reduced and where it was unclamped by the Loma Prieta earthquake. We observe a similar response on the San Andreas fault zone in southern California after the Landers earthquake sequence. Additionally, the offshore San Gregorio fault shows a seismicity rate increase where right-lateral/oblique shear stress was increased by the Loma Prieta earthquake despite also being clamped by it. These responses are consistent with either a low coefficient of static friction or high pore fluid pressures within the fault zones. We can explain the different behavior of the two styles of faults if those with large cumulative offset become impermeable through gouge buildup; coseismically pressurized pore fluids could be trapped and negate imposed normal stress changes, whereas in more limited offset faults, fluids could rapidly escape. The difference in behavior between minor and major faults may explain why frictional failure criteria that apply intermediate coefficients of static friction can be effective in describing the broad distributions of aftershocks that follow large earthquakes, since many of these events occur both inside and outside major fault zones.

  18. Knowledge, perceptions and practices of healthcare workers regarding the use of respiratory protection equipment at Iran hospitals.

    PubMed

    Honarbakhsh, Marzieh; Jahangiri, Mehdi; Ghaem, Haleh

    2018-01-01

    Using appropriate respiratory protection equipment (RPE) is very important to protect healthcare workers (HCWs) against respiratory hazards. The aim of this study was to identify the level of knowledge, perceptions and practices of HCWs on using RPE. This cross-sectional study was conducted with 284 employees of educational hospitals affiliated to Shiraz University of Medical Sciences. The study's instrument was a self-made questionnaire that comprised four components: demographic inquiries and questions designed to assess the knowledge, perceptions and practice of HCWs regarding RPE. Collected data were analysed using SPSS software version 21. Average scores of knowledge, perceptions and practice of HCWs on using RPE were 66.50% ± 11.93%, 80.32% ± 10.05% and 70.12% ± 20.51%, respectively. Significant associations were observed between knowledge and age, job experience, history of respirator use, marital status and the risk of respiratory hazards in the workplace; between perceptions and age and education; and between practice and education. The studied HCWs had positive perceptions and a moderate level of knowledge and practice regarding the use of RPE. Full implementation of a respiratory protection program in the hospitals would help to improve the knowledge, perceptions and practices of HCWs regarding RPE.

  19. Deformation associated with continental normal faults

    NASA Astrophysics Data System (ADS)

    Resor, Phillip G.

    Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by satellite radar interferometry (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ˜20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

  20. Fault healing promotes high-frequency earthquakes in laboratory experiments and on natural faults

    USGS Publications Warehouse

    McLaskey, Gregory C.; Thomas, Amanda M.; Glaser, Steven D.; Nadeau, Robert M.

    2012-01-01

    Faults strengthen or heal with time in stationary contact and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip

  1. Relationship between displacement and gravity change of Uemachi faults and surrounding faults of Osaka basin, Southwest Japan

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Kusumoto, S.; Itoh, Y.; Takemura, K.

    2011-12-01

    The Osaka basin, surrounded by the Rokko and Ikoma Ranges, is one of the typical Quaternary sedimentary basins in Japan. The Osaka basin has been filled by the Pleistocene Osaka Group and later sediments. Several large cities and metropolitan areas, such as Osaka and Kobe, are located in the Osaka basin. The basin is surrounded by E-W trending strike-slip faults and N-S trending reverse faults. The N-S trending, 42-km-long Uemachi faults traverse the central part of Osaka city. The Uemachi faults have been investigated for countermeasures against earthquake disaster. It is important to reveal detailed fault parameters, such as length, dip and recurrence interval, for strong ground motion simulation and disaster prevention. For strong ground motion simulation, the fault model of the Uemachi faults consists of two parts, a northern and a southern part, because there is no basement displacement in the central part of the faults. The Ministry of Education, Culture, Sports, Science and Technology started a project to survey the Uemachi faults. The Disaster Prevention Research Institute of Kyoto University carried out various surveys over the 3 years from 2009 to 2012. The results of the last year revealed higher fault activity of the branch fault than of the main faults in the central part (see the poster "Subsurface Flexure of Uemachi Fault, Japan" by Kitada et al., in this meeting). Kusumoto et al. (2001) reported that, based on a dislocation model, the surrounding faults can form similar basement relief without the Uemachi faults. We performed various parameter studies of dislocation and gravity change based on a simplified fault model, which was designed from the distribution of the real faults. The model consisted of 7 faults, including the Uemachi faults. The dislocation and gravity change were calculated based on Okada et al. (1985) and Okubo et al. (1993), respectively. The results show a basement displacement pattern similar to the
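
    The abstract pairs a dislocation model (Okada et al., 1985) with a gravity-change calculation (Okubo et al., 1993); neither formulation is reproduced here. As a hedged stand-in, the sketch below approximates the gravity change over a broad basement offset with an infinite-slab (Bouguer) term; the density contrast is an assumed, illustrative value.

    ```python
    # Minimal sketch (not the Okada/Okubo formulation used in the study):
    # approximate the gravity change over a laterally extensive basement offset
    # with an infinite-slab (Bouguer) term, delta_g ~= 2*pi*G*delta_rho*delta_h.
    # The density contrast is an illustrative assumption.

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    DELTA_RHO = 400.0    # basement/sediment density contrast, kg/m^3 (assumed)

    def slab_gravity_change_microgal(uplift_m):
        """Gravity change (microGal) for a broad basement uplift of uplift_m metres."""
        delta_g_si = 2.0 * math.pi * G * DELTA_RHO * uplift_m   # m/s^2
        return delta_g_si * 1e8                                 # 1 microGal = 1e-8 m/s^2

    for uplift in (0.1, 0.5, 1.0):   # metres of basement displacement
        print(f"uplift {uplift:4.1f} m -> ~{slab_gravity_change_microgal(uplift):5.1f} microGal")
    ```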

  2. 14 CFR 29.1357 - Circuit protective devices.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... faults or serious malfunction of the system or connected equipment. (b) The protective and control devices in the generating system must be designed to de-energize and disconnect faulty power sources and...

  3. Certificates of Confidentiality: Protecting Human Subject Research Data in Law and Practice.

    PubMed

    Wolf, Leslie E; Patel, Mayank J; Williams Tarver, Brett A; Austin, Jeffrey L; Dame, Lauren A; Beskow, Laura M

    2015-01-01

    The federal Certificate of Confidentiality plays an important role in research on sensitive topics by authorizing researchers to refuse to disclose identifiable research data in response to subpoenas in any legal setting. However, there is little known about how effective Certificates are in practice. This article draws on our legal and empirical research on this topic to fill this information gap. It includes a description of the purpose of Certificates, their legislative and regulatory history, and a summary of the few reported and unreported cases that have dealt with Certificates. In addition, we outline other statutory confidentiality protections, compare them to the Certificate's protections, and analyze some of the vulnerabilities of a Certificate's protections. This analysis allows us to make specific recommendations for strengthening the protections afforded to research data. © 2015 American Society of Law, Medicine & Ethics, Inc.

  4. 3D Dynamic Rupture Simulations along the Wasatch Fault, Utah, Incorporating Rough-fault Topography

    NASA Astrophysics Data System (ADS)

    Withers, Kyle; Moschetti, Morgan

    2017-04-01

    Studies have found that the Wasatch Fault has experienced successive large magnitude (>Mw 7.2) earthquakes, with an average recurrence interval near 350 years. To date, no large magnitude event has been recorded along the fault, with the last rupture along the Salt Lake City segment occurring 1300 years ago. Because of this, as well as the lack of strong ground motion records in basins and from normal-faulting earthquakes worldwide, seismic hazard in the region is not well constrained. Previous numerical simulations have modeled deterministic ground motion in the heavily populated regions of Utah, near Salt Lake City, but were primarily restricted to low frequencies (≤1 Hz). Our goal is to better assess broadband ground motions from the Wasatch Fault Zone. Here, we extend deterministic ground motion prediction to higher frequencies (≤5 Hz) in this region by using physics-based spontaneous dynamic rupture simulations along a normal fault with characteristics derived from geologic observations. We use a summation-by-parts finite difference code (Waveqlab3D) with rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) and include off-fault plasticity to simulate ruptures > Mw 6.5. Geometric complexity along fault planes has previously been shown to generate broadband sources with spectral energy matching that of observations. We investigate the impact of varying the hypocenter location, as well as the influence that multiple realizations of rough-fault topography have on the rupture process and resulting ground motion. We utilize Waveqlab3D's computational efficiency to model wave propagation to a significant distance from the fault with media heterogeneity at both long and short spatial wavelengths. These simulations generate a synthetic dataset of ground motions to compare with GMPEs, in terms of both the median and the interevent and intraevent variability.
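
    Rough-fault topography following a self-similar fractal distribution, as used in these simulations, can be illustrated by spectral synthesis of a 1-D profile. The sketch below is a generic illustration, not the Waveqlab3D input; the grid size, wavelength band, and amplitude-to-wavelength ratio are assumptions.

    ```python
    # Minimal sketch: generate a 1-D self-similar fault-roughness profile by
    # spectral synthesis (power spectrum P(k) ~ k^(-3), i.e. Hurst exponent H = 1),
    # band-limited between a minimum wavelength and the fault length.
    # Grid size, wavelength band, and amplitude scaling are illustrative assumptions.

    import numpy as np

    def self_similar_profile(n=2048, fault_length=40e3, min_wavelength=100.0,
                             amp_to_length=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        dx = fault_length / n
        k = np.fft.rfftfreq(n, d=dx)                 # spatial frequencies (1/m)
        amp = np.zeros_like(k)
        band = (k > 1.0 / fault_length) & (k < 1.0 / min_wavelength)
        amp[band] = k[band] ** -1.5                  # amplitude ~ k^(-1.5) -> P(k) ~ k^(-3)
        phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
        profile = np.fft.irfft(amp * np.exp(1j * phase), n=n)
        # scale so the RMS roughness is amp_to_length times the profile length
        profile *= amp_to_length * fault_length / (profile.std() + 1e-30)
        return np.arange(n) * dx, profile

    x, h = self_similar_profile()
    print(f"RMS roughness: {h.std():.1f} m over a {x[-1] / 1e3:.0f} km profile")
    ```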

  5. Time-varying singular value decomposition for periodic transient identification in bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Shangbin; Lu, Siliang; He, Qingbo; Kong, Fanrang

    2016-09-01

    For rotating machines, bearing defects are generally represented as periodic transient impulses in acquired signals. The extraction of transient features from signals has been a key issue for fault diagnosis. However, the background noise reduces identification performance of periodic faults in practice. This paper proposes a time-varying singular value decomposition (TSVD) method to enhance the identification of periodic faults. The proposed method is inspired by the sliding window method. By applying singular value decomposition (SVD) to the signal under a sliding window, we can obtain a time-varying singular value matrix (TSVM). Each column in the TSVM is occupied by the singular values of the corresponding sliding window, and each row represents the intrinsic structure of the raw signal, namely the time-singular-value-sequence (TSVS). Theoretical and experimental analyses show that the frequency of the TSVS is exactly twice that of the corresponding intrinsic structure. Moreover, the signal-to-noise ratio (SNR) of the TSVS is improved significantly in comparison with the raw signal. The proposed method takes advantage of the TSVS in noise suppression and feature extraction to enhance fault frequency for diagnosis. The effectiveness of the TSVD is verified by means of simulation studies and applications to diagnosis of bearing faults. Results indicate that the proposed method is superior to traditional methods for bearing fault diagnosis.
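
    A minimal sketch of the sliding-window construction behind the TSVD follows: each window is embedded in a Hankel-style matrix, its singular values form one column of the TSVM, and each row is a TSVS. The window and embedding sizes, and the synthetic impulse signal, are illustrative assumptions rather than the paper's settings.

    ```python
    # Minimal sketch of a sliding-window SVD: collect the singular values of each
    # window into a time-varying singular value matrix (TSVM); each row of the
    # TSVM is a time-singular-value-sequence (TSVS). Parameters are illustrative.

    import numpy as np

    def tsvm(signal, window=64, embed=8, step=1):
        """Return an (embed, n_windows) matrix of singular values over sliding windows."""
        cols = []
        for start in range(0, len(signal) - window + 1, step):
            seg = signal[start:start + window]
            # embed the window segment into a Hankel-style matrix before the SVD
            hankel = np.array([seg[i:i + window - embed + 1] for i in range(embed)])
            cols.append(np.linalg.svd(hankel, compute_uv=False))
        return np.array(cols).T

    # synthetic bearing-like signal: periodic transient impulses buried in noise
    fs, n = 2000, 4000
    t = np.arange(n) / fs
    impulses = ((t * 10) % 1.0 < 0.01).astype(float)          # ~10 Hz fault impulses
    signal = impulses * np.sin(2 * np.pi * 400 * t) \
             + 0.5 * np.random.default_rng(1).normal(size=n)

    S = tsvm(signal)
    print("TSVM shape:", S.shape)    # rows: TSVS, columns: sliding windows
    ```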

  6. Fault-tolerant processing system

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L. (Inventor)

    1996-01-01

    A fault-tolerant fiber-optic interconnect, or backplane, serves as a via for data transfer between modules. Fault tolerance algorithms are embedded in the backplane by dividing the backplane into a read bus and a write bus and placing a redundancy management unit (RMU) between the read bus and the write bus so that all data transmitted by the write bus is subjected to the fault tolerance algorithms before the data is passed for distribution to the read bus. The RMU provides both backplane control and fault tolerance.

  7. Fault-Tree Compiler Program

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1992-01-01

    FTC, the Fault-Tree Compiler program, is a reliability-analysis software tool used to calculate the probability of the top event of a fault tree. Five different types of gates are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language of FTC is easy to understand and use. The program supports a hierarchical fault-tree-definition feature that simplifies the description of the tree and reduces execution time. The solution technique is implemented in FORTRAN, and the user interface in Pascal. Written to run on a DEC VAX computer operating under the VMS operating system.
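
    To illustrate the kind of computation such a tool performs (not the FTC implementation itself, which is FORTRAN/Pascal), the sketch below evaluates the top-event probability of a small tree with AND, OR, EXCLUSIVE OR, INVERT, and M-of-N gates, assuming independent basic events and an "exactly one input" reading of EXCLUSIVE OR.

    ```python
    # Minimal sketch of top-event probability evaluation for a fault tree with
    # AND / OR / XOR / INVERT / M-of-N gates over *independent* basic events.
    # The tree, probabilities, and the XOR interpretation are illustrative assumptions.

    from itertools import combinations

    def prob(node, p):
        """node: ('basic', name), (gate, [children]), or ('MOFN', m, [children])."""
        kind = node[0]
        if kind == 'basic':
            return p[node[1]]
        if kind == 'INVERT':
            return 1.0 - prob(node[1][0], p)
        if kind == 'MOFN':
            m, children = node[1], node[2]
            probs = [prob(c, p) for c in children]
            total = 0.0
            for k in range(m, len(probs) + 1):          # at least m children occur
                for subset in combinations(range(len(probs)), k):
                    term = 1.0
                    for i, q in enumerate(probs):
                        term *= q if i in subset else (1.0 - q)
                    total += term
            return total
        child_probs = [prob(c, p) for c in node[1]]
        if kind == 'AND':
            out = 1.0
            for q in child_probs:
                out *= q
            return out
        if kind == 'OR':
            out = 1.0
            for q in child_probs:
                out *= (1.0 - q)
            return 1.0 - out
        if kind == 'XOR':                                # exactly one child event occurs
            total = 0.0
            for i in range(len(child_probs)):
                term = 1.0
                for j, q in enumerate(child_probs):
                    term *= q if j == i else (1.0 - q)
                total += term
            return total
        raise ValueError(f"unknown gate {kind}")

    p = {'pump_a': 1e-3, 'pump_b': 1e-3, 'valve': 5e-4}
    tree = ('OR', [('AND', [('basic', 'pump_a'), ('basic', 'pump_b')]),
                   ('basic', 'valve')])
    print(f"top-event probability ~ {prob(tree, p):.3e}")
    ```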

  8. Surface fault rupture during the Mw 7.8 Kaikoura earthquake, New Zealand, with specific comment on the Kekerengu Fault - one of the country's fastest slipping onland active faults

    NASA Astrophysics Data System (ADS)

    Van Dissen, Russ; Little, Tim

    2017-04-01

    The Mw 7.8 Kaikoura earthquake of 14 November, 2016 (NZDT) was a complex event. It involved ground-surface (or seafloor) fault rupture on at least a dozen onland or offshore faults, and subsurface rupture on a handful of additional faults. Most of the surface ruptures involved previously known (or suspected) active faults, as well as surface rupture on at least two hitherto unrecognised active faults. The southwest to northeast extent of surface fault rupture, as generalised by two straight-line segments, is approximately 180 km, though this is a minimum for the collective length of surface rupture due to multiple overlapping faults with various orientations. Surface rupture displacements on specific faults involved in the Kaikoura Earthquake span approximately two orders of magnitude. For example, maximum surface displacement on the Heaver's Creek Fault is cm- to dm-scale in size; whereas maximum surface displacement on the nearby Kekerengu Fault is approximately 10-12 m (predominantly in a dextral sense). The Kekerengu Fault has a Late Pleistocene slip rate of 20-26 mm/yr, and is possibly the second fastest slipping onland fault in New Zealand, behind the Alpine Fault. Located in the northeastern South Island of New Zealand, the Kekerengu Fault - along with the Hope Fault to the southwest and the Needles Fault offshore to the northeast - comprise the fastest slipping elements of the Pacific-Australian plate boundary in this part of the country. In January 2016 (about ten months prior to the Kaikoura earthquake) three paleo-earthquake investigation trenches were excavated across pronounced traces of the Kekerengu Fault at two locations. These were the first such trenches dug and evaluated across the fault. All three trenches displayed abundant evidence of past surface fault ruptures (three surface ruptures in the last approximately 1,200 years, four now including the 2016 rupture). An interesting aspect of the 2016 rupture is that two of the trenches

  9. Hospital respiratory protection practices in 6 U.S. states: A public health evaluation study

    PubMed Central

    Peterson, Kristina; Novak, Debra; Stradtman, Lindsay; Wilson, David; Couzens, Lance

    2015-01-01

    Background: Lessons learned from the influenza A (H1N1) virus revealed a need to better understand hospitals’ respiratory protection programmatic practice gaps. This article reports findings from a multistate assessment of hospitals’ adherence to the Occupational Safety and Health Administration’s respiratory protection program (RPP) requirements and the Centers for Disease Control and Prevention’s infection control guidance. Methods: Onsite surveys were conducted in 98 acute care hospitals in 6 U.S. states, including >1,500 hospital managers, unit managers, and health care workers. Descriptive statistics were used to assess hospital adherence. Results: Most acute care hospitals adhere to requirements for initial medical evaluations, fit testing, training, and recommended respiratory protection when in close contact with patients who have suspected or confirmed seasonal influenza. Low hospital adherence was found for respiratory protection with infectious diseases requiring airborne precautions, aerosol-generating procedures with seasonal influenza, and checking of the respirator’s user seal. Hospitals’ adherence was also low with follow-up program evaluations, medical re-evaluations, and respirator maintenance. Conclusion: Efforts should be made to closely examine ways of strengthening hospitals’ RPPs to ensure the program’s ongoing effectiveness and workers’ proper selection and use of respiratory protection. Implications for improved RPPs and practice are discussed. PMID:25564126

  10. How Do Normal Faults Grow?

    NASA Astrophysics Data System (ADS)

    Jackson, C. A. L.; Bell, R. E.; Rotevatn, A.; Tvedt, A. B. M.

    2015-12-01

    Normal faulting accommodates stretching of the Earth's crust and is one of the fundamental controls on landscape evolution and sediment dispersal in rift basins. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins; thus, assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain

  11. System for Collecting, Analyzing, and Using Information on Institutional Consumer Protection Practices: Accreditation User Guide. Improving The Consumer Protection Function in Postsecondary Education.

    ERIC Educational Resources Information Center

    Jung, Steven M.; And Others

    This accreditation user guide describes: (1) a set of institutional conditions, policies, and practices that are potentially abusive to students, and (2) a system for collecting, analyzing, and using quantitative data on these conditions, policies, and practices. Possible uses include: (1) setting consumer protection standards and goals that…

  12. Ontology and Knowledgebase of Fractures and Faults

    NASA Astrophysics Data System (ADS)

    Aydin, A.; Zhong, J.

    2007-12-01

    Fractures and faults are related to many societal and industrial problems, including oil and gas exploration and production, CO2 sequestration, and waste isolation. Therefore, an ontology focusing on fractures and faults is desirable to facilitate sound education and communication among this highly diverse community. We developed an ontology for this field. Some high-level classes in our ontology include geological structure, deformation mechanism, and property or factor. Throughout the ontology we emphasize the relationships among the classes, such as structures being formed by mechanisms and properties affecting which mechanism will occur. At this stage, there are about 1,000 classes, referencing about 150 articles or textbooks and supplemented by about 350 photographs, diagrams, and illustrations. With limited time and resources, we chose a simple application for our ontology - transforming it into a knowledgebase made of a series of web pages. Each web page corresponds to one class in the ontology, with discussion, figures, links to subclasses and related concepts, as well as references. We believe that our knowledgebase is a valuable resource for finding information about fractures and faults, both for practicing geologists and for students who are interested in the related issues, whether in application or in education and training.

  13. Mechanics of slip and fracture along small faults and simple strike-slip fault zones in granitic rock

    NASA Astrophysics Data System (ADS)

    Martel, Stephen J.; Pollard, David D.

    1989-07-01

    We exploit quasi-static fracture mechanics models for slip along pre-existing faults to account for the fracture structure observed along small exhumed faults and small segmented fault zones in the Mount Abbot quadrangle of California and to estimate stress drop and shear fracture energy from geological field measurements. Along small strike-slip faults, cracks that splay from the faults are common only near fault ends. In contrast, many cracks splay from the boundary faults at the edges of a simple fault zone. Except near segment ends, the cracks preferentially splay into a zone. We infer that shear displacement discontinuities (slip patches) along a small fault propagated to near the fault ends and caused fracturing there. Based on elastic stress analyses, we suggest that slip on one boundary fault triggered slip on the adjacent boundary fault, and that the subsequent interaction of the slip patches preferentially led to the generation of fractures that splayed into the zones away from segment ends and out of the zones near segment ends. We estimate the average stress drops for slip events along the fault zones as ˜1 MPa and the shear fracture energy release rate during slip as 5 × 10² - 2 × 10⁴ J/m². This estimate is similar to those obtained from shear fracture of laboratory samples, but orders of magnitude less than those for large fault zones. These results suggest that the shear fracture energy release rate increases as the structural complexity of fault zones increases.

  14. How do normal faults grow?

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette

    2016-04-01

    Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins; thus, assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate

  15. Late Holocene earthquakes on the Toe Jam Hill fault, Seattle fault zone, Bainbridge Island, Washington

    USGS Publications Warehouse

    Nelson, A.R.; Johnson, S.Y.; Kelsey, H.M.; Wells, R.E.; Sherrod, B.L.; Pezzopane, S.K.; Bradley, L.A.; Koehler, R. D.; Bucknam, R.C.

    2003-01-01

    Five trenches across a Holocene fault scarp yield the first radiocarbon-measured earthquake recurrence intervals for a crustal fault in western Washington. The scarp, the first to be revealed by laser imagery, marks the Toe Jam Hill fault, a north-dipping backthrust to the Seattle fault. Folded and faulted strata, liquefaction features, and forest soil A horizons buried by hanging-wall-collapse colluvium record three, or possibly four, earthquakes between 2500 and 1000 yr ago. The most recent earthquake is probably the 1050-1020 cal. (calibrated) yr B.P. (A.D. 900-930) earthquake that raised marine terraces and triggered a tsunami in Puget Sound. Vertical deformation estimated from stratigraphic and surface offsets at trench sites suggests late Holocene earthquake magnitudes near M7, corresponding to surface ruptures >36 km long. Deformation features recording poorly understood latest Pleistocene earthquakes suggest that they were smaller than late Holocene earthquakes. Postglacial earthquake recurrence intervals based on 97 radiocarbon ages, most on detrital charcoal, range from ∼12,000 yr to as little as a century or less; corresponding fault-slip rates are 0.2 mm/yr for the past 16,000 yr and 2 mm/yr for the past 2500 yr. Because the Toe Jam Hill fault is a backthrust to the Seattle fault, it may not have ruptured during every earthquake on the Seattle fault. But the earthquake history of the Toe Jam Hill fault is at least a partial proxy for the history of the rest of the Seattle fault zone.

  16. Depth optimal sorting networks resistant to k passive faults

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piotrow, M.

    In this paper, we study the problem of constructing a sorting network that is tolerant to faults and whose running time (i.e., depth) is as small as possible. We consider the scenario of worst-case comparator faults and follow the model of passive comparator failure proposed by Yao and Yao, in which a faulty comparator outputs directly its inputs without comparison. Our main result is the first construction of an N-input, k-fault-tolerant sorting network that is of an asymptotically optimal depth Θ(log N + k). That improves over the recent result of Leighton and Ma, whose network is of depth O(log N + k log log N / log k). Actually, we present a fault-tolerant correction network that can be added after any N-input sorting network to correct its output in the presence of at most k faulty comparators. Since the depth of the network is O(log N + k) and the constants hidden behind the "O" notation are not big, the construction can be of practical use. Developing the techniques necessary to show the main result, we construct a fault-tolerant network for the insertion problem. As a by-product, we get an N-input, O(log N)-depth INSERT-network that is tolerant to random faults, thereby answering a question posed by Ma in his PhD thesis. The results are based on a new notion of constant delay comparator networks, that is, networks in which each register is used (compared) only in a period of time of a constant length. Copies of such networks can be put one after another with only a constant increase in depth per copy.
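
    The passive-fault model is easy to demonstrate in code: a faulty comparator leaves its inputs unchanged. The sketch below exercises a plain odd-even transposition sorter under k passive faults; it only illustrates the fault model and is not the depth-optimal Θ(log N + k) construction described in the abstract.

    ```python
    # Minimal sketch of the passive-fault model (Yao & Yao): a faulty comparator
    # simply passes its inputs through. The network is a plain odd-even
    # transposition sorter, used only to illustrate the fault model.

    import random

    def odd_even_transposition_network(n):
        """Return the comparator network as a list of layers of (i, j) pairs."""
        return [[(i, i + 1) for i in range(layer % 2, n - 1, 2)] for layer in range(n)]

    def run_network(network, values, faulty=frozenset()):
        v = list(values)
        cid = 0
        for layer in network:
            for (i, j) in layer:
                if cid not in faulty and v[i] > v[j]:   # a passive comparator does nothing
                    v[i], v[j] = v[j], v[i]
                cid += 1
        return v

    n, k = 8, 2
    net = odd_even_transposition_network(n)
    n_comparators = sum(len(layer) for layer in net)
    rng = random.Random(0)

    failures = 0
    for trial in range(200):
        data = [rng.randint(0, 9) for _ in range(n)]
        faulty = frozenset(rng.sample(range(n_comparators), k))
        if run_network(net, data, faulty) != sorted(data):
            failures += 1
    print(f"{failures}/200 random inputs left unsorted with {k} passive faults")
    ```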

  17. Active faults in Africa: a review

    NASA Astrophysics Data System (ADS)

    Skobelev, S. F.; Hanon, M.; Klerkx, J.; Govorova, N. N.; Lukina, N. V.; Kazmin, V. G.

    2004-03-01

    The active fault database and Map of active faults in Africa, in scale of 1:5,000,000, were compiled according to the ILP Project II-2 "World Map of Major Active Faults". The data were collected in the Royal Museum of Central Africa, Tervuren, Belgium, and in the Geological Institute, Moscow, where the final edition was carried out. Active faults of Africa form three groups. The first group is represented by thrusts and reverse faults associated with compressed folds in the northwest Africa. They belong to the western part of the Alpine-Central Asian collision belt. The faults disturb only the Earth's crust and some of them do not penetrate deeper than the sedimentary cover. The second group comprises the faults of the Great African rift system. The faults form the known Western and Eastern branches, which are rifts with abnormal mantle below. The deep-seated mantle "hot" anomaly probably relates to the eastern volcanic branch. In the north, it joins with the Aden-Red Sea rift zone. Active faults in Egypt, Libya and Tunis may represent a link between the East African rift system and Pantellerian rift zone in the Mediterranean. The third group included rare faults in the west of Equatorial Africa. The data were scarce, so that most of the faults of this group were identified solely by interpretation of space imageries and seismicity. Some longer faults of the group may continue the transverse faults of the Atlantic and thus can penetrate into the mantle. This seems evident for the Cameron fault line.

  18. Fault imprint in clay units: magnetic fabric, structural and mineralogical signature

    NASA Astrophysics Data System (ADS)

    Moreno, Eva; Homberg, Catherine; Schnyder, Johann; Person, Alain; du Peloux, Arthur; Dick, Pierre

    2014-05-01

    Fault-induced deformations in clay units can be difficult to decipher because strain markers are not always visible at outcrop scale or using geophysical methods. Previous studies have indicated that the anisotropy of magnetic susceptibility (AMS) provides a powerful and rapid technique to investigate tectonic deformation in clay units even when they appear quite homogeneous and undeformed at the outcrop scale (Lee et al. 1990, Mattei et al. 1997). We report here a study based on AMS, structural analysis and magnetic and clay mineralogy from two boreholes (TF1 and ASM1) drilled horizontally in the Experimental Station of Tournemire of the Institute for Radiological Protection and Nuclear Safety (IRSN) in Aveyron (France). The boreholes intersect a N-S trending strike-slip fault from west to east. The AMS study indicates the evolution of the magnetic fabric from the undeformed host rock to the fault core. Also, all the fractures cutting the studied interval of the core have been measured, as well as the slip vectors, which are generally well preserved. In the two boreholes, the undeformed sediments outside the fault zone are characterized by an oblate fabric, a sub-vertical minimum susceptibility axis (k3) perpendicular to the bedding plane and without magnetic lineation. Within the fault zone, a tilt in the bedding plane has been observed in the two boreholes TF1 and ASM1. In addition, in the TF1 core, the fault area presents a tectonic fabric characterized by a triaxial AMS ellipsoid. Moreover, the magnetic lineation increases and k3 switches from a vertical to a sub-horizontal plane. This kind of fabric has not been observed in borehole ASM1. The structural analysis of the individual fractures making up the fault zone indicates a complex tectonic history with a different imprint in the two fault segments cut by the two boreholes. The large majority of fractures correspond to dextral strike-slip faults but normal and reverse movements were observed and are more or less

  19. Construction of testing facilities and verifying tests of a 22.9 kV/630 A class superconducting fault current limiter

    NASA Astrophysics Data System (ADS)

    Yim, S.-W.; Yu, S.-D.; Kim, H.-R.; Kim, M.-J.; Park, C.-R.; Yang, S.-E.; Kim, W.-S.; Hyun, O.-B.; Sim, J.; Park, K.-B.; Oh, I.-S.

    2010-11-01

    We have constructed and completed the preparation for a long-term operation test of a superconducting fault current limiter (SFCL) in a Korea Electric Power Corporation (KEPCO) test grid. The SFCL, with a rating of 22.9 kV/630 A, 3-phase, has been connected to the 22.9 kV test grid equipped with reclosers and other protection devices in the Gochang Power Testing Center of KEPCO. The main goals of the test are the verification of SFCL performance and protection coordination studies. A line-commutation type SFCL was fabricated and installed for this project, and the superconducting components were cooled by a cryo-cooler to 77 K in sub-cooled liquid nitrogen pressurized by 3 bar of helium gas. The verification test includes unmanned long-term operation with and without loads, and fault tests. Since the test site is 170 km away from the laboratory, we will adopt unmanned operation with real-time remote monitoring and control using high-speed internet. For the fault tests, we will apply fault currents up to around 8 kArms to the SFCL using an artificial fault generator. The fault tests may allow us not only to confirm the current limiting capability of the SFCL, but also to adjust the SFCL-recloser coordination, such as resetting over-current relay parameters. This paper describes the construction of the testing facilities and discusses the plans for the verification tests.

  20. Robust In-Flight Sensor Fault Diagnostics for Aircraft Engine Based on Sliding Mode Observers

    PubMed Central

    Chang, Xiaodong; Huang, Jinquan; Lu, Feng

    2017-01-01

    For a sensor fault diagnostic system of aircraft engines, the health performance degradation is an inevitable interference that cannot be neglected. To address this issue, this paper investigates an integrated on-line sensor fault diagnostic scheme for a commercial aircraft engine based on a sliding mode observer (SMO). In this approach, one sliding mode observer is designed for engine health performance tracking, and another for sensor fault reconstruction. Both observers are employed in in-flight applications. The results of the former SMO are analyzed for post-flight updating the baseline model of the latter. This idea is practical and feasible since the updating process does not require the algorithm to be regulated or redesigned, so that ground-based intervention is avoided, and the update process is implemented in an economical and efficient way. With this setup, the robustness of the proposed scheme to the health degradation is much enhanced and the latter SMO is able to fulfill sensor fault reconstruction over the course of the engine life. The proposed sensor fault diagnostic system is applied to a nonlinear simulation of a commercial aircraft engine, and its effectiveness is evaluated in several fault scenarios. PMID:28398255
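
    The "equivalent output injection" idea behind SMO-based fault reconstruction can be shown on a toy scalar system: once the observer reaches the sliding surface, the low-pass-filtered switching term tracks the unknown fault. The sketch below uses an actuator-style additive fault and assumed gains; it is not the engine model or the sensor-fault scheme of the paper.

    ```python
    # Minimal sketch of fault reconstruction with a sliding mode observer on the
    # scalar system x' = a*x + u + f(t), y = x. Low-pass filtering the switching
    # injection approximates f(t) (the "equivalent output injection").
    # All constants are illustrative assumptions.

    import math

    a, dt, T = -1.0, 1e-3, 10.0
    rho, tau = 5.0, 0.02          # switching gain and filter time constant (assumed)

    x, xhat, f_est = 0.0, 0.5, 0.0
    for step in range(int(T / dt)):
        t = step * dt
        u = math.sin(t)
        f = 1.0 if 4.0 < t < 7.0 else 0.0      # injected fault, unknown to the observer

        # plant and observer (forward Euler integration)
        x += dt * (a * x + u + f)
        e = x - xhat                            # output error, y = x
        v = rho * math.copysign(1.0, e)         # discontinuous injection term
        xhat += dt * (a * xhat + u + v)

        # low-pass filter the injection to recover the equivalent signal ~ f(t)
        f_est += dt / tau * (v - f_est)

        if step % 2000 == 0:
            print(f"t={t:4.1f}s  true fault={f:.1f}  reconstructed~{f_est:5.2f}")
    ```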

  1. Robust In-Flight Sensor Fault Diagnostics for Aircraft Engine Based on Sliding Mode Observers.

    PubMed

    Chang, Xiaodong; Huang, Jinquan; Lu, Feng

    2017-04-11

    For a sensor fault diagnostic system of aircraft engines, the health performance degradation is an inevitable interference that cannot be neglected. To address this issue, this paper investigates an integrated on-line sensor fault diagnostic scheme for a commercial aircraft engine based on a sliding mode observer (SMO). In this approach, one sliding mode observer is designed for engine health performance tracking, and another for sensor fault reconstruction. Both observers are employed in in-flight applications. The results of the former SMO are analyzed for post-flight updating the baseline model of the latter. This idea is practical and feasible since the updating process does not require the algorithm to be regulated or redesigned, so that ground-based intervention is avoided, and the update process is implemented in an economical and efficient way. With this setup, the robustness of the proposed scheme to the health degradation is much enhanced and the latter SMO is able to fulfill sensor fault reconstruction over the course of the engine life. The proposed sensor fault diagnostic system is applied to a nonlinear simulation of a commercial aircraft engine, and its effectiveness is evaluated in several fault scenarios.

  2. Letter - Response to 'How to protect general practice from child protection'

    PubMed

    Lang, Danny

    2011-08-18

    The bad news is that the BJGP published under such a negative and provocative title without balanced debate, in contrast to the recent BMJ head-to-head debate 'Has child protection become a form of madness'.1,2 One interesting comparison made in that debate is that Sweden and Finland spend 50% more of their gross domestic product on children and families than we do in the UK; we spend 200% more than they do on social problems. In my view these figures help us understand societal attitudes that have knock-on effects through all services for children and families, including general practice. The current UK GP contract is certainly not child and family friendly and it is unlikely that current NHS changes will improve matters. Mike Fitzpatrick's latest letter over-emphasises the personal while flagging up the real difficulties for general practice.3 My work as a frontline GP and my statutory role as a named doctor for safeguarding children allows me to understand these difficulties, and indeed the importance of making vulnerable adults safe that he highlights in the same Journal.4 The good news is that the great majority of children grow up well and happy in their families and GP practices support them in this. I emphasise this when delivering safeguarding training to GPs. In doing this work GPs can develop a sense of why something is going wrong. With appropriate use of their knowledge, skills, and information, GPs can address the needs of children and their families across the spectrum of vulnerability and need through to abuse and neglect. There is no question that this is challenging, but in the UK all professionals are obliged by statute to fulfil our professional roles and responsibilities. Those specific to general practice are on pages 60-63 of Working Together to Safeguard Children 2012.5 GP colleagues in Cornwall show appreciation of the importance of safeguarding, but many feel there is too much guidance and insufficient resources. The majority of the practices

  3. Fault Model Development for Fault Tolerant VLSI Design

    DTIC Science & Technology

    1988-05-01

    BRIDGING FAULTS: A bridging fault in a digital circuit connects two or more conducting paths of the circuit. The resistance... Melvin Breuer and Arthur Friedman, "Diagnosis and Reliable Design of Digital Systems", Computer Science Press, Inc., 1976. [Chandramouli, 1983] R...
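
    A wired-AND bridging fault is straightforward to simulate at the gate level: the two shorted nets both take the AND of their fault-free values, and a test vector detects the fault if any primary output changes. The circuit and the wired-AND assumption in the sketch below are illustrative only.

    ```python
    # Minimal sketch of a (zero-resistance, wired-AND) bridging fault: two nets in
    # a small gate-level circuit are shorted so both carry the AND of their
    # fault-free values. The example circuit is an illustrative assumption.

    from itertools import product

    def circuit(a, b, c, bridge=False):
        n1 = a and b          # AND gate output
        n2 = b or c           # OR gate output
        if bridge:            # bridging fault between nets n1 and n2 (wired-AND)
            n1 = n2 = n1 and n2
        return (n1, not n2)   # primary outputs

    # find input vectors that detect the bridge (fault-free output != faulty output)
    tests = [v for v in product([False, True], repeat=3)
             if circuit(*v) != circuit(*v, bridge=True)]
    print("detecting input vectors (a, b, c):", tests)
    ```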

  4. Heterogeneity in the Fault Damage Zone: a Field Study on the Borrego Fault, B.C., Mexico

    NASA Astrophysics Data System (ADS)

    Ostermeijer, G.; Mitchell, T. M.; Dorsey, M. T.; Browning, J.; Rockwell, T. K.; Aben, F. M.; Fletcher, J. M.; Brantut, N.

    2017-12-01

    The nature and distribution of damage around faults, and its impacts on fault zone properties has been a hot topic of research over the past decade. Understanding the mechanisms that control the formation of off fault damage can shed light on the processes during the seismic cycle, and the nature of fault zone development. Recent published work has identified three broad zones of damage around most faults based on the type, intensity, and extent of fracturing; Tip, Wall, and Linking damage. Although these zones are able to adequately characterise the general distribution of damage, little has been done to identify the nature of damage heterogeneity within those zones, often simplifying the distribution to fit log-normal linear decay trends. Here, we attempt to characterise the distribution of fractures that make up the wall damage around seismogenic faults. To do so, we investigate an extensive two dimensional fracture network exposed on a river cut platform along the Borrego Fault, BC, Mexico, 5m wide, and extending 20m from the fault core into the damage zone. High resolution fracture mapping of the outcrop, covering scales ranging three orders of magnitude (cm to m), has allowed for detailed observations of the 2D damage distribution within the fault damage zone. Damage profiles were obtained along several 1D transects perpendicular to the fault and micro-damage was examined from thin-sections at various locations around the outcrop for comparison. Analysis of the resulting fracture network indicates heterogeneities in damage intensity at decimetre scales resulting from a patchy distribution of high and low intensity corridors and clusters. Such patchiness may contribute to inconsistencies in damage zone widths defined along 1D transects and the observed variability of fracture densities around decay trends. How this distribution develops with fault maturity and the scaling of heterogeneities above and below the observed range will likely play a key role in

  5. A fault-based model for crustal deformation, fault slip-rates and off-fault strain rate in California

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2016-01-01

    We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19–22 mm/yr along Santa Cruz to the North Coast, 25–28 mm/yr along the central California creeping segment to the Carrizo Plain, 20–22 mm/yr along the Mojave, and 20–24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7–16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7–9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2–6.9 mm/yr across its east–west transects, which is an ∼1 mm/yr increase over the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1–5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88×10¹⁹ N·m/yr, with fast straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10¹⁹
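
    The inversion style described, geodetic data plus geologic slip-rate constraints fit in a least-squares sense, can be sketched generically as a damped linear system G m = d. In the sketch below the elastic-dislocation Green's functions are random placeholders and all rates are illustrative; it shows only the bookkeeping, not the UCERF3 model.

    ```python
    # Minimal sketch: solve G m = d in a damped least-squares sense, where d stacks
    # GPS velocities and geologic slip-rate constraints, m holds fault slip rates,
    # and G holds elastic-dislocation kernels. Kernels and rates are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_faults = 60, 5

    G_gps = rng.normal(size=(n_obs, n_faults))            # placeholder elastic kernels
    true_rates = np.array([22.0, 8.0, 5.0, 2.0, 15.0])    # mm/yr, illustrative
    d_gps = G_gps @ true_rates + rng.normal(scale=0.5, size=n_obs)

    # geologic slip-rate constraints enter as extra weighted rows
    geologic_prior = np.array([20.0, 9.0, 5.0, 3.0, 14.0])
    w_geo = 0.3
    G = np.vstack([G_gps, w_geo * np.eye(n_faults)])
    d = np.concatenate([d_gps, w_geo * geologic_prior])

    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    for i, rate in enumerate(m):
        print(f"fault {i}: {rate:5.1f} mm/yr (geologic prior {geologic_prior[i]:.1f})")
    ```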

  6. Voltage Based Detection Method for High Impedance Fault in a Distribution System

    NASA Astrophysics Data System (ADS)

    Thomas, Mini Shaji; Bhaskar, Namrata; Prakash, Anupama

    2016-09-01

    High-impedance faults (HIFs) on distribution feeders cannot be detected by conventional protection schemes, as HIFs are characterized by their low fault current level and waveform distortion due to the nonlinearity of the ground return path. This paper proposes a method to identify HIFs in a distribution system and isolate the faulty section, to reduce downtime. This method is based on voltage measurements along the distribution feeder and utilizes the sequence components of the voltages. Three models of high-impedance faults have been considered, and source-side and load-side breaking of the conductor have been studied in this work to capture a wide range of scenarios. The effect of neutral grounding of the source-side transformer is also accounted for in this study. The results show that the algorithm detects HIFs accurately and rapidly. Thus, the faulty section can be isolated and service can be restored to the rest of the consumers.
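
    The sequence components the method monitors are obtained from the standard Fortescue transform of the three phase voltages. A minimal sketch follows; the phasor values are illustrative assumptions, and only the transform itself is taken as standard.

    ```python
    # Minimal sketch: Fortescue transform giving the zero-, positive- and negative-
    # sequence components of three-phase voltage phasors. Phasor values are
    # illustrative, e.g. a slightly unbalanced feeder during a downed-conductor HIF.

    import numpy as np

    a = np.exp(2j * np.pi / 3)                 # 120-degree rotation operator
    A = np.array([[1, 1, 1],
                  [1, a, a**2],
                  [1, a**2, a]]) / 3.0

    def sequence_components(va, vb, vc):
        """Return (V0, V1, V2) from the phase voltage phasors."""
        return A @ np.array([va, vb, vc])

    va = 1.00 * np.exp(1j * 0.0)
    vb = 0.92 * np.exp(-1j * 2 * np.pi / 3)
    vc = 1.05 * np.exp(+1j * 2 * np.pi / 3)

    v0, v1, v2 = sequence_components(va, vb, vc)
    print(f"|V0|={abs(v0):.3f}  |V1|={abs(v1):.3f}  |V2|={abs(v2):.3f}")
    ```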

  7. Strike-slip fault propagation and linkage via work optimization with application to the San Jacinto fault, California

    NASA Astrophysics Data System (ADS)

    Madden, E. H.; McBeck, J.; Cooke, M. L.

    2013-12-01

    Over multiple earthquake cycles, strike-slip faults link to form through-going structures, as demonstrated by the continuous nature of the mature San Andreas fault system in California relative to the younger and more segmented San Jacinto fault system nearby. Despite its immaturity, the San Jacinto system accommodates between one third and one half of the slip along the boundary between the North American and Pacific plates. It therefore poses a significant seismic threat to southern California. Better understanding of how the San Jacinto system has evolved over geologic time and of current interactions between faults within the system is critical to assessing this seismic hazard accurately. Numerical models are well suited to simulating kilometer-scale processes, but models of fault system development are challenged by the multiple physical mechanisms involved. For example, laboratory experiments on brittle materials show that faults propagate and eventually join (hard-linkage) by both opening-mode and shear failure. In addition, faults interact prior to linkage through stress transfer (soft-linkage). The new algorithm GROW (GRowth by Optimization of Work) accounts for this complex array of behaviors by taking a global approach to fault propagation while adhering to the principles of linear elastic fracture mechanics. This makes GROW a powerful tool for studying fault interactions and fault system development over geologic time. In GROW, faults evolve to minimize the work (or energy) expended during deformation, thereby maximizing the mechanical efficiency of the entire system. Furthermore, the incorporation of both static and dynamic friction allows GROW models to capture fault slip and fault propagation in single earthquakes as well as over consecutive earthquake cycles. GROW models with idealized faults reveal that the initial fault spacing and the applied stress orientation control fault linkage propensity and linkage patterns. These models allow the gains in

  8. Using concatenated quantum codes for universal fault-tolerant quantum gates.

    PubMed

    Jochym-O'Connor, Tomas; Laflamme, Raymond

    2014-01-10

    We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.

  9. Active faulting on the Wallula fault zone within the Olympic-Wallowa lineament, Washington State, USA

    USGS Publications Warehouse

    Sherrod, Brian; Blakely, Richard J.; Lasher, John P.; Lamb, Andrew P.; Mahan, Shannon; Foit, Franklin F.; Barnett, Elizabeth

    2016-01-01

    The Wallula fault zone is an integral feature of the Olympic-Wallowa lineament, an ∼500-km-long topographic lineament oblique to the Cascadia plate boundary, extending from Vancouver Island, British Columbia, to Walla Walla, Washington. The structure and past earthquake activity of the Wallula fault zone are important because of nearby infrastructure, and also because the fault zone defines part of the Olympic-Wallowa lineament in south-central Washington and suggests that the Olympic-Wallowa lineament may have a structural origin. We used aeromagnetic and ground magnetic data to locate the trace of the Wallula fault zone in the subsurface and map a quarry exposure of the Wallula fault zone near Finley, Washington, to investigate past earthquakes along the fault. We mapped three main packages of rocks and unconsolidated sediments in an ∼10-m-high quarry exposure. Our mapping suggests at least three late Pleistocene earthquakes with surface rupture, and an episode of liquefaction in the Holocene along the Wallula fault zone. Faint striae on the master fault surface are subhorizontal and suggest reverse dextral oblique motion for these earthquakes, consistent with dextral offset on the Wallula fault zone inferred from offset aeromagnetic anomalies associated with ca. 8.5 Ma basalt dikes. Magnetic surveys show that the Wallula fault actually lies 350 m to the southwest of the trace shown on published maps, passes directly through deformed late Pleistocene or younger deposits exposed at Finley quarry, and extends uninterrupted over 120 km.

  10. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
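
    The core diagnosability notion, that fault hypotheses are distinguishable only if their sets of observable fault-signature sequences differ, can be sketched with toy data. The signatures below and the naive union used to merge multi-fault signatures are illustrative simplifications, not the event-based framework of the paper.

    ```python
    # Minimal sketch: single faults and fault pairs are distinguishable only if
    # the sets of observable fault-signature sequences they can produce differ.
    # Signatures and the naive merge rule are illustrative assumptions.

    from itertools import combinations

    # possible observable signature sequences per single fault (assumed example)
    signatures = {
        'f1': {('+r1', '+r2'), ('+r1',)},
        'f2': {('+r2',)},
        'f3': {('+r1', '+r2')},
    }

    def multi_fault_signatures(fault_set):
        """Naively merge signatures of a fault set (union of member signatures)."""
        merged = set()
        for f in fault_set:
            merged |= signatures[f]
        return merged

    candidates = [frozenset(c) for r in (1, 2) for c in combinations(signatures, r)]
    ambiguous = [(x, y) for x, y in combinations(candidates, 2)
                 if multi_fault_signatures(x) == multi_fault_signatures(y)]

    print("fault hypotheses:", [sorted(c) for c in candidates])
    print("indistinguishable pairs:", [(sorted(x), sorted(y)) for x, y in ambiguous])
    ```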

  11. Fault structure and mechanics of the Hayward Fault, California from double-difference earthquake locations

    USGS Publications Warehouse

    Waldhauser, F.; Ellsworth, W.L.

    2002-01-01

    The relationship between small-magnitude seismicity and large-scale crustal faulting along the Hayward Fault, California, is investigated using a double-difference (DD) earthquake location algorithm. We used the DD method to determine high-resolution hypocenter locations of the seismicity that occurred between 1967 and 1998. The DD technique incorporates catalog travel time data and relative P and S wave arrival time measurements from waveform cross correlation to solve for the hypocentral separation between events. The relocated seismicity reveals a narrow, near-vertical fault zone at most locations. This zone follows the Hayward Fault along its northern half and then diverges from it to the east near San Leandro, forming the Mission trend. The relocated seismicity is consistent with the idea that slip from the Calaveras Fault is transferred over the Mission trend onto the northern Hayward Fault. The Mission trend is not clearly associated with any mapped active fault as it continues to the south and joins the Calaveras Fault at Calaveras Reservoir. In some locations, discrete structures adjacent to the main trace are seen, features that were previously hidden in the uncertainty of the network locations. The fine structure of the seismicity suggests that the fault surface on the northern Hayward Fault is curved or that the events occur on several substructures. Near San Leandro, where the more westerly striking trend of the Mission seismicity intersects with the surface trace of the (aseismic) southern Hayward Fault, the seismicity remains diffuse after relocation, with strong variation in focal mechanisms between adjacent events indicating a highly fractured zone of deformation. The seismicity is highly organized in space, especially on the northern Hayward Fault, where it forms horizontal, slip-parallel streaks of hypocenters of only a few tens of meters width, bounded by areas almost absent of seismic activity. During the interval from 1984 to 1998, when digital
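
    The double-difference idea can be sketched on a toy problem: minimise the residual between observed and predicted differential travel times for an event pair by perturbing one event's location. The sketch below assumes a uniform velocity, fixes the second event, and uses a finite-difference Jacobian; it is an illustration, not hypoDD.

    ```python
    # Minimal sketch of the double-difference residual r = (t_i - t_j)^obs - (t_i - t_j)^pred
    # for one event pair at common stations, solved by Gauss-Newton for the
    # location of event i. Velocity, geometry, and locations are illustrative.

    import numpy as np

    V = 5.8                                        # km/s, assumed uniform velocity
    stations = np.array([[0.0, 30.0], [25.0, -5.0], [-20.0, 15.0], [10.0, -25.0]])

    def travel_times(src):
        return np.linalg.norm(stations - src, axis=1) / V

    true_i, true_j = np.array([1.0, 2.0]), np.array([0.0, 0.0])   # km
    obs_dd = travel_times(true_i) - travel_times(true_j)          # "observed" differences

    def residuals(xy_i):
        return obs_dd - (travel_times(xy_i) - travel_times(true_j))

    est = np.array([4.0, -3.0])                    # poor starting location for event i
    for _ in range(8):                             # Gauss-Newton iterations
        r = residuals(est)
        J = np.zeros((len(stations), 2))
        for k in range(2):                         # finite-difference Jacobian
            step = np.zeros(2)
            step[k] = 1e-4
            J[:, k] = (residuals(est + step) - r) / 1e-4
        dm, *_ = np.linalg.lstsq(J, -r, rcond=None)
        est += dm
    print("recovered relative location of event i:", np.round(est, 3), "km (true: 1, 2)")
    ```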

  12. Gaps in governance: protective mechanisms used by nurse leaders when policy and practice are misaligned.

    PubMed

    Knight, Kaye M; Kenny, Amanda; Endacott, Ruth

    2015-04-09

    Due to large geographical distances, the telephone is central to enabling rural Australian communities to access care from their local health service. While there is a history of rural nurses providing care via the telephone, it has been a highly controversial practice that is not routinely documented and little is known about how the practice is governed. The lack of knowledge regarding governance extends to the role of Directors of Nursing as clinical leaders charged with the responsibility of ensuring practice safety, quality, regulation and risk management. The purpose of this study was to identify clinical governance processes related to managing telephone presentations, and to explore Directors of Nursing perceptions of processes and clinical practices related to the management of telephone presentations to health services in rural Victoria, Australia. Qualitative documentary analysis and semi structured interviews were used in the study to examine the content of health service policies and explore the perceptions of Directors of Nursing in eight rural health services regarding policy content and enactment when people telephone rural health services for care. Participants were purposively selected for their knowledge and leadership role in governance processes and clinical practice. Data from the interviews were analysed using framework analysis. The process of analysis resulted in the identification of five themes. The majority of policies reviewed provided little guidance for managing telephone presentations. The Directors of Nursing perceived policy content and enactment to be largely inadequate. When organisational structures failed to provide appropriate governance for the context, the Directors of Nursing engaged in protective mechanisms to support rural nurses who manage telephone presentations. Rural Directors of Nursing employed intuitive behaviours to protect rural nurses practicing within a clinical governance context that is inadequate for the

  13. Solar system fault detection

    DOEpatents

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.
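
    A minimal sketch of the idea in the abstract, sensor outputs combined by predetermined logic into fault indications, is given below; the sensor names, thresholds, and combination rules are illustrative assumptions rather than the patented apparatus.

    ```python
    # Minimal sketch: individual sensor conditions are combined by predetermined
    # logic into fault indications for an active solar system. Names, thresholds,
    # and rules are illustrative assumptions.

    def evaluate_faults(readings):
        pump_on       = readings["pump_commanded"]
        flow_present  = readings["flow_lpm"] > 1.0
        collector_hot = readings["collector_out_c"] - readings["collector_in_c"] > 8.0
        insolation_ok = readings["irradiance_wm2"] > 400.0

        return {
            # pump commanded on but no measurable flow -> pump or plumbing fault
            "no_flow_fault": pump_on and not flow_present,
            # good sun and flow, but no temperature rise across the collector
            "collector_fault": insolation_ok and flow_present and not collector_hot,
            # sun available but the controller never started the pump
            "controller_fault": insolation_ok and not pump_on,
        }

    sample = {"pump_commanded": True, "flow_lpm": 0.2,
              "collector_out_c": 41.0, "collector_in_c": 39.5, "irradiance_wm2": 750.0}
    for name, active in evaluate_faults(sample).items():
        print(f"{name:17s}: {'FAULT' if active else 'ok'}")
    ```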

  14. Solar system fault detection

    DOEpatents

    Farrington, Robert B.; Pruett, Jr., James C.

    1986-01-01

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  15. Structural evolution of fault zones in sandstone by multiple deformation mechanisms: Moab fault, southeast Utah

    USGS Publications Warehouse

    Davatzes, N.C.; Eichhubl, P.; Aydin, A.

    2005-01-01

    Faults in sandstone are frequently composed of two classes of structures: (1) deformation bands and (2) joints and sheared joints. Whereas the former structures are associated with cataclastic deformation, the latter ones represent brittle fracturing, fragmentation, and brecciation. We investigated the distribution of these structures, their formation, and the underlying mechanical controls for their occurrence along the Moab normal fault in southeastern Utah through the use of structural mapping and numerical elastic boundary element modeling. We found that deformation bands occur everywhere along the fault, but with increased density in contractional relays. Joints and sheared joints only occur at intersections and extensional relays. In all locations, joints consistently overprint deformation bands. Localization of joints and sheared joints in extensional relays suggests that their distribution is controlled by local variations in stress state that are due to mechanical interaction between the fault segments. This interpretation is consistent with elastic boundary element models that predict a local reduction in mean stress and least compressive principal stress at intersections and extensional relays. The transition from deformation band to joint formation along these sections of the fault system likely resulted from the combined effects of changes in remote tectonic loading, burial depth, fluid pressure, and rock properties. In the case of the Moab fault, we conclude that the structural heterogeneity in the fault zone is systematically related to the geometric evolution of the fault, the local state of stress associated with fault slip, and the remote loading history. Because the type and distribution of structures affect fault permeability and strength, our results predict systematic variations in these parameters with fault evolution. © 2004 Geological Society of America.

  16. Development of direct dating methods of fault gouges: Deep drilling into Nojima Fault, Japan

    NASA Astrophysics Data System (ADS)

    Miyawaki, M.; Uchida, J. I.; Satsukawa, T.

    2017-12-01

    It is crucial to develop a direct dating method for fault gouges for the assessment of recent fault activity in site evaluations for nuclear power plants. Such a method would be useful in regions without Late Pleistocene overlying sediments. In order to estimate the age of the latest fault slip event, it is necessary to use fault gouges which have experienced frictional heating high enough for age resetting. Frictional heating is expected to be greater at depth, because the heat generated by fault movement depends on the shear stress. Therefore, we should determine a reliable age-resetting depth, as fault gouges from the ground surface have likely been dated as older than the actual age of the latest fault movement due to incomplete resetting. In this project, we target the Nojima fault, which triggered the 1995 Kobe earthquake in Japan. Samples are collected from various depths (300-1,500 m) by trenching and drilling to investigate age-resetting conditions and depths using several methods, including electron spin resonance (ESR) and optically stimulated luminescence (OSL), which are applicable to ages later than the Late Pleistocene. The preliminary results by the ESR method show approx. 1.1 Ma1) at the ground surface and 0.15-0.28 Ma2) at 388 m depth, respectively, indicating that samples from deeper depths preserve a younger age. In contrast, the OSL method gave approx. 2,200 yr1) at the ground surface. Although further consideration is still needed as there is a large margin of error, this result indicates that the age-resetting depth of OSL is relatively shallow due to the high thermosensitivity of OSL compared to ESR. In the future, we plan to carry out further investigations of fault gouges from various depths up to approx. 1,500 m to verify the use of these direct dating methods. 1) Kyoto University, 2017. FY27 Commissioned for the disaster presentation on nuclear facilities (Drilling

  17. Adaptive Neural Networks Decentralized FTC Design for Nonstrict-Feedback Nonlinear Interconnected Large-Scale Systems Against Actuator Faults.

    PubMed

    Li, Yongming; Tong, Shaocheng

    The problem of active fault-tolerant control (FTC) is investigated for the large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper consist of unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable state problem. Neural networks (NNs) are used to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties and unknown interconnected terms can be solved. By combining the adaptive backstepping design principle with the combination Nussbaum gain function property, a novel NN adaptive output-feedback FTC approach is developed. The proposed FTC controller can guarantee that all signals in all subsystems are bounded, and the tracking errors for each subsystem converge to a small neighborhood of zero. Finally, numerical results of practical examples are presented to further demonstrate the effectiveness of the proposed control strategy.

  18. 45 CFR 164.520 - Notice of privacy practices for protected health information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... organized health care arrangement; (2) The joint notice meets the implementation specifications in paragraph... 45 Public Welfare 1 2011-10-01 2011-10-01 false Notice of privacy practices for protected health information. 164.520 Section 164.520 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE...

  19. 45 CFR 164.520 - Notice of privacy practices for protected health information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... organized health care arrangement; (2) The joint notice meets the implementation specifications in paragraph... 45 Public Welfare 1 2010-10-01 2010-10-01 false Notice of privacy practices for protected health information. 164.520 Section 164.520 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE...

  20. Late Quaternary Faulting along the San Juan de los Planes Fault Zone, Baja California Sur, Mexico

    NASA Astrophysics Data System (ADS)

    Busch, M. M.; Coyan, J. A.; Arrowsmith, J.; Maloney, S. J.; Gutierrez, G.; Umhoefer, P. J.

    2007-12-01

    As a result of continued distributed deformation in the Gulf Extensional Province along an oblique-divergent plate margin, active normal faulting is well manifest in southeastern Baja California. By characterizing normal-fault-related deformation along the San Juan de los Planes fault zone (SJPFZ) southwest of La Paz, Baja California Sur, we contribute to understanding the patterns and rates of faulting along the southwest gulf-margin fault system. The geometry, history, and rate of faulting provide constraints on the relative significance of gulf-margin deformation as compared to axial system deformation. The SJPFZ is a major north-trending structure in the southern Baja margin along which we focused our field efforts. These investigations included: a detailed strip map of the active fault zone, including delineation of active scarp traces and geomorphic surfaces on the hanging wall and footwall; fault scarp profiles; analysis of bedrock structures to better understand how the pattern and rate of strain varied during the development of this fault zone; and a gravity survey across the San Juan de los Planes basin to determine basin geometry and fault behavior. The map covers a N-S swath from the Gulf of California in the north to San Antonio in the south, an area ~45 km long and ~1-4 km wide. Bedrock along the SJPFZ varies from Cretaceous Las Cruces Granite in the north to Cretaceous Buena Mujer Tonalite in the south and is scarred by shear zones and brittle faults. The active scarp-forming fault juxtaposes bedrock in the footwall against Late Quaternary sandstone-conglomerate. This ~20 m wide zone is highly fractured bedrock infused with carbonate. The northern ~12 km of the SJPFZ, trending 200°, preserves discontinuous scarps 1-2 km long and 1-3 m high in Quaternary units. The scarps are separated by stretches of bedrock embayed by hundreds-of-meters-wide tongues of Quaternary sandstone-conglomerate, implying a low Quaternary slip rate. Further south, ~2 km north of the

  1. Hayward Fault, California Interferogram

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image of California's Hayward fault is an interferogram created using a pair of images taken by Synthetic Aperture Radar (SAR), combined to measure changes in the surface that may have occurred between the times the two images were taken.

    The images were collected by the European Space Agency's Remote Sensing satellites ERS-1 and ERS-2 in June 1992 and September 1997 over the central San Francisco Bay in California.

    The radar image data are shown as a gray-scale image, with the interferometric measurements that show the changes rendered in color. Only the urbanized area could be mapped with these data. The color changes from orange tones to blue tones across the Hayward fault (marked by a thin red line) show about 2-3 centimeters (0.8-1.1 inches) of gradual displacement or movement of the southwest side of the fault. The block west of the fault moved horizontally toward the northwest during the 63 months between the acquisition of the two SAR images. This fault movement is called aseismic creep because the fault moved slowly without generating an earthquake.

    Scientists are using SAR interferometry along with other data collected on the ground to monitor this fault motion in an attempt to estimate the probability of an earthquake on the Hayward fault, which last had a major earthquake of magnitude 7 in 1868. This analysis indicates that the northern part of the Hayward fault is creeping all the way from the surface to a depth of 12 kilometers (7.5 miles), suggesting that the potential for a large earthquake on the northern Hayward fault might be less than previously thought. The blue area to the west (lower left) of the fault near the center of the image appears to have moved upward relative to the nearby yellow and orange areas by about 2 centimeters (0.8 inches). The cause of this apparent motion is not yet confirmed, but the rise of groundwater levels during the time between the images may have caused the reversal of a small portion of the subsidence that

  2. Identification of active fault using analysis of derivatives with vertical second based on gravity anomaly data (Case study: Seulimeum fault in Sumatera fault system)

    NASA Astrophysics Data System (ADS)

    Hududillah, Teuku Hafid; Simanjuntak, Andrean V. H.; Husni, Muhammad

    2017-07-01

    Gravity is a non-destructive geophysical technique with numerous applications in engineering and environmental fields, such as locating fault zones. The purpose of this study is to map the Seulimeum fault system in Iejue, Aceh Besar (Indonesia) using the gravity technique, to correlate the result with the geological map, and to understand the trend pattern of the fault system. An estimation of the subsurface geological structure of the Seulimeum fault has been made using gravity field anomaly data. The gravity anomaly data used in this study are from Topex and are processed up to the free-air correction. The next processing steps apply the Bouguer and terrain corrections to obtain the complete Bouguer anomaly, which is topographically dependent. Subsurface modeling is done using the Gav2DC for Windows software. The results show a low residual gravity value in the northern half compared to the southern part of the study area, indicating the pattern of the fault zone. The residual gravity was successfully correlated with the geological map, which shows the existence of the Seulimeum fault in the study area. The study of earthquake records can be used to differentiate active from non-active fault elements, and it gives an indication that the delineated fault elements are active.
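
    The processing chain named in this abstract (free-air, Bouguer, and terrain corrections combined into a complete Bouguer anomaly) follows the standard gravity reduction sequence. A minimal sketch with the usual textbook constants is given below; the station values are illustrative and not taken from the study.

```python
# Minimal sketch of the standard gravity reduction chain mentioned above:
# free-air, Bouguer slab, and terrain corrections combined into a complete
# Bouguer anomaly. Constants are the usual textbook values; station data are
# illustrative only.

def free_air_correction_mgal(elev_m: float) -> float:
    return 0.3086 * elev_m            # mGal, positive above the datum

def bouguer_slab_correction_mgal(elev_m: float, density_gcc: float = 2.67) -> float:
    return 0.04193 * density_gcc * elev_m   # infinite-slab approximation, mGal

def complete_bouguer_anomaly(obs_g_mgal: float, normal_g_mgal: float,
                             elev_m: float, terrain_corr_mgal: float,
                             density_gcc: float = 2.67) -> float:
    return (obs_g_mgal - normal_g_mgal
            + free_air_correction_mgal(elev_m)
            - bouguer_slab_correction_mgal(elev_m, density_gcc)
            + terrain_corr_mgal)

if __name__ == "__main__":
    # Hypothetical station: observed and normal gravity in mGal, 120 m elevation.
    cba = complete_bouguer_anomaly(978123.4, 978150.0, elev_m=120.0,
                                   terrain_corr_mgal=0.8)
    print(f"complete Bouguer anomaly: {cba:.2f} mGal")
```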

  3. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
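
    A minimal sketch of the graph-based diagnosis idea advocated above: components and fault-propagation paths form a directed graph, and an observed failure is traced back to candidate root causes. The component names, graph, and ranking heuristic are hypothetical, not the authors' method.

```python
# Minimal sketch of graph-based fault diagnosis in the spirit of the abstract
# above: components and their dependencies form a directed graph, and an
# observed failure is traced back to candidate root causes. The component
# names and graph are hypothetical.

import networkx as nx

# Edge u -> v means "a fault in u can propagate to v".
g = nx.DiGraph()
g.add_edges_from([
    ("power_supply", "disk_controller"),
    ("power_supply", "cpu_board"),
    ("disk_controller", "file_system"),
    ("cpu_board", "file_system"),
    ("file_system", "telemetry_archive"),
])

def candidate_causes(graph: nx.DiGraph, failed_component: str) -> set[str]:
    """Everything upstream of the observed failure is a diagnosis candidate."""
    return nx.ancestors(graph, failed_component) | {failed_component}

def rank_by_coverage(graph: nx.DiGraph, symptoms: set[str]) -> list[tuple[str, int]]:
    """Prefer single causes whose downstream reach covers the most symptoms."""
    scores = {}
    for node in graph.nodes:
        reach = nx.descendants(graph, node) | {node}
        scores[node] = len(reach & symptoms)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    symptoms = {"file_system", "telemetry_archive"}
    print(candidate_causes(g, "telemetry_archive"))
    print(rank_by_coverage(g, symptoms)[:3])
```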

  4. Unscientific health practice and disciplinary and consumer protection litigation.

    PubMed

    Freckelton, Ian

    2011-06-01

    Contemporary health professions expect their practitioners to provide evidence-based health care. This requires health care to have a foundation in the scholarly literature and a scientifically valid methodology. However, there are many instances of registered and unregistered practitioners either providing assessment and treatment that does not conform to such requirements or making representations about likely efficacy that are unjustifiable by reference to peer-reviewed clinical knowledge. Sometimes such conduct is predatory and deliberately exploitative; at other times it is simply misconceived on the part of practitioners who regard themselves as medical pioneers. This editorial situates such conduct within unscientific and unorthodox health practice. It surveys recent consumer protection and disciplinary decisions to evaluate the role of the law in regulating such conduct. It argues in favour of an assertive legal response to protect vulnerable patients or potential patients against forms of treatment and promises of outcomes that are unscientific and deceptive.

  5. Fault geometric complexity and how it may cause temporal slip-rate variation within an interacting fault system

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; Arrowsmith, Ramon

    2010-05-01

    Slip-rates along individual faults may differ as a function of measurement time scale; short-term slip-rates may be higher than the long-term rate and vice versa. For example, vertical slip-rates along the Wasatch Fault, Utah are 1.7+/-0.5 mm/yr since 6 ka, <0.6 mm/yr since 130 ka, and 0.5-0.7 mm/yr since 10 Ma (Friedrich et al., 2003). Following conventional earthquake recurrence models like the characteristic earthquake model, this observation implies that the driving strain accumulation rates may have changed over the respective time scales as well. While potential explanations for such slip-rate variations may be found, for example, in the reorganization of plate tectonic motion or mantle flow dynamics, causing changes in the crustal velocity field over long spatial wavelengths, no single geophysical explanation exists. Temporal changes in earthquake rate (i.e., event clustering) due to elastic interactions within a complex fault system may present an alternative explanation that requires neither variations in strain accumulation rate nor changes in fault constitutive behavior for frictional sliding. In the present study, we explore this scenario and investigate how fault geometric complexity, fault segmentation and fault (segment) interaction affect the seismic behavior and slip-rate along individual faults while keeping tectonic stressing rate and frictional behavior constant in time. For that, we used FIMozFric--a physics-based numerical earthquake simulator based on Okada's (1992) formulations for internal displacements and strains due to shear and tensile faults in a half-space. Faults are divided into a large number of equal-sized fault patches which communicate via elastic interaction, allowing implementation of geometrically complex, non-planar faults. Each patch is assigned a static and a dynamic friction coefficient. The difference between those values is a function of depth--corresponding to the temperature-dependence of velocity-weakening that is

  6. Performance analysis of microcomputer based differential protection of UHV lines under selective phase switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatti, A.A.

    1990-04-01

    This paper examines the effects of primary and secondary fault quantities, as well as of mutual couplings of neighboring circuits, on the sensitivity of operation and threshold settings of a microcomputer-based differential protection of UHV lines under selective phase switching. Microcomputer-based selective phase switching allows the disconnection of the minimum number of phases involved in a fault and requires the autoreclosing of these phases immediately after the extinction of the secondary arc. During a primary fault, a heavy current contribution to the healthy phases tends to cause unwanted tripping. Faulty phases that are physically disconnected constitute an isolated fault which, being coupled to the system, affects the current and voltage levels of the healthy phases still retained in the system and may cause unwanted tripping. The microcomputer-based differential protection appears to have poor performance when applied to uncompensated lines employing selective pole switching.
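
    For readers unfamiliar with differential protection, the sketch below shows a generic percentage (biased) differential criterion of the kind such elements evaluate; it is not the specific microcomputer algorithm analyzed in the paper, and the phasor values and settings are hypothetical.

```python
# Generic percentage (biased) differential criterion, given here only to
# illustrate the kind of operate/restraint comparison a line differential
# element performs; it is not the specific microcomputer algorithm analyzed
# in the paper. Phasor values and settings are hypothetical.

import cmath

def differential_trips(i_local: complex, i_remote: complex,
                       pickup_a: float = 0.2, slope: float = 0.3) -> bool:
    """Trip if operate current exceeds pickup plus a slope of restraint current."""
    i_operate = abs(i_local + i_remote)            # through-fault currents cancel
    i_restraint = (abs(i_local) + abs(i_remote)) / 2.0
    return i_operate > pickup_a + slope * i_restraint

if __name__ == "__main__":
    # Load condition: currents nearly equal and opposite at the two line ends.
    print(differential_trips(cmath.rect(1.0, 0.0), cmath.rect(1.0, cmath.pi)))  # False
    # Internal fault: both ends feed current into the line.
    print(differential_trips(cmath.rect(3.0, 0.1), cmath.rect(2.5, -0.2)))      # True
```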

  7. Assurance of Fault Management: Risk-Significant Adverse Condition Awareness

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda

    2016-01-01

    Fault Management (FM) systems are ranked high in risk-based assessments of criticality within flight software, emphasizing the importance of establishing highly competent domain expertise to provide assurance for NASA projects, especially as spaceflight systems continue to increase in complexity. Insight into specific characteristics of FM architectures embedded within the safety- and mission-critical software systems analyzed by the NASA Independent Verification and Validation (IV&V) Program has been enhanced with an FM Technical Reference (TR) suite. Benefits are aimed beyond the IV&V community, at those who seek ways to efficiently and effectively provide software assurance to reduce the FM risk posture of NASA and other space missions. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. The role FM has in the overall asset protection of flight software systems is being addressed with the development of an adverse condition (AC) database encompassing flight software vulnerabilities. Identification of potential off-nominal conditions and analysis to determine how a system responds to these conditions are important aspects of hazard analysis and fault management. Understanding what ACs the mission may face, and ensuring they are prevented or addressed, is the responsibility of the assurance team, which necessarily should have insight into ACs beyond those defined by the project itself. Research efforts sponsored by NASA's Office of Safety and Mission Assurance defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs, and allowing queries based on project, mission

  8. Large-scale splay faults on a strike-slip fault system: The Yakima Folds, Washington State

    USGS Publications Warehouse

    Pratt, Thomas L.

    2012-01-01

    The Yakima Folds (YF) comprise anticlines above reverse faults cutting flows of the Miocene Columbia River Basalt Group of central Washington State. The YF are bisected by the ~1100-km-long Olympic-Wallowa Lineament (OWL), which is an alignment of topographic features including known faults. There is considerable debate about the origin and earthquake potential of both the YF and OWL, which lie near six major dams and a large nuclear waste storage site. Here I show that the trends of the faults forming the YF relative to the OWL match remarkably well the trends of the principal stress directions at the end of a vertical strike-slip fault. This comparison and the termination of some YF against the OWL are consistent with the YF initially forming as splay faults caused by an along-strike decrease in the amount of strike-slip on the OWL. The hypothesis is that the YF faults initially developed as splay faults in the early to mid Miocene under NNW-oriented principal compressive stress, but the anticlines subsequently grew with thrust motion after the principal compressive stress direction rotated to N-S or NNE after the mid-Miocene. A seismic profile across one of the YF anticlines shows folding at about 7 km depth, indicating deformation of sub-basalt strata. The seismic profile and the hypothesized relationship between the YF and the OWL suggest that the structures are connected in the middle or lower crust, and that the faults forming the YF are large-scale splay faults associated with a major strike-slip fault system.

  9. Balancing theory and practicality: engaging non-ethicists in ethical decision making related to radiological protection.

    PubMed

    Martinez, Nicole; Wueste, Daniel

    2016-12-01

    This paper discusses an approach for engaging radiation protection professionals in the ethical aspects of decision-making, with discussion on how this approach fits in with the existing system of radiological protection. It explores finding common ground between ethical and scientific theory, how to present relevant moral theory in accessible language, and provides a practical framework for dealing with real-world problems. Although establishing the ethical theory behind the system of radiological protection is an important ongoing endeavour within the community, it is equally important to communicate this information in a way that is useful to non-ethicists. Discussion of both ethical theory and a useful strategy for applying the theory makes ethics more accessible to those working in the field by providing them with the knowledge and confidence to apply ethical principles in decisions and practice.

  10. The Relationship between Sun Protection Policy and Associated Practices in a National Sample of Early Childhood Services in Australia

    ERIC Educational Resources Information Center

    Ettridge, Kerry A.; Bowden, Jacqueline A.; Rayner, Joanne M.; Wilson, Carlene J.

    2011-01-01

    Limiting exposure to sunlight during childhood can significantly reduce the risk of skin cancer. This was the first national study to assess the sun protection policies and practices of early childhood services across Australia. It also examined the key predictors of services' sun protection practices. In 2007, 1017 respondents completed a…

  11. Rapid recovery from transient faults in the fault-tolerant processor with fault-tolerant shared memory

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Butler, Bryan P.

    1990-01-01

    The Draper fault-tolerant processor with fault-tolerant shared memory (FTP/FTSM), which is designed to allow application tasks to continue execution during the memory alignment process, is described. Processor performance is not affected by memory alignment. In addition, the FTP/FTSM incorporates a hardware scrubber device to perform the memory alignment quickly during unused memory access cycles. The FTP/FTSM architecture is described, followed by an estimate of the time required for channel reintegration.

  12. Research on fault characteristics about switching component failures for distribution electronic power transformers

    NASA Astrophysics Data System (ADS)

    Sang, Z. X.; Huang, J. Q.; Yan, J.; Du, Z.; Xu, Q. S.; Lei, H.; Zhou, S. X.; Wang, S. C.

    2017-11-01

    Protection is an essential part of power devices, especially those in the power grid, as failures may cause great losses to society. A study of the voltage and current abnormalities in the power electronic devices of a Distribution Electronic Power Transformer (D-EPT) during switching-component failures is presented, along with the operational principles of the 10 kV rectifier, 10 kV/400 V DC-DC converter and 400 V inverter in the D-EPT. Based on the discussion of the effects of voltage and current distortion, the fault characteristics and a fault diagnosis method for the D-EPT are introduced.

  13. Reconnaissance study of late Quaternary faulting along the Cerro Goden fault zone, western Puerto Rico

    USGS Publications Warehouse

    Mann, P.; Prentice, C.S.; Hippolyte, J.-C.; Grindlay, N.R.; Abrams, L.J.; Lao-Davila, D.

    2005-01-01

    The Cerro Goden fault zone is associated with a curvilinear, continuous, and prominent topographic lineament in western Puerto Rico. The fault varies in strike from northwest to west. In its westernmost section, the fault is ~500 m south of an abrupt, curvilinear mountain front separating the 270- to 361-m-high La Cadena de San Francisco range from the Río Añasco alluvial valley. The Quaternary fault of the Añasco Valley is in alignment with the bedrock fault mapped by D. McIntyre (1971) in the Central La Plata quadrangle sheet east of Añasco Valley. Previous workers have postulated that the Cerro Goden fault zone continues southeast from the Añasco Valley and merges with the Great Southern Puerto Rico fault zone of south-central Puerto Rico. West of the Añasco Valley, the fault continues offshore into the Mona Passage (Caribbean Sea) where it is characterized by offsets of seafloor sediments estimated to be of late Quaternary age. Using both 1:18,500 scale air photographs taken in 1936 and 1:40,000 scale photographs taken by the U.S. Department of Agriculture in 1986, we identified geomorphic features suggestive of Quaternary fault movement in the Añasco Valley, including aligned and deflected drainages, apparently offset terrace risers, and mountain-facing scarps. Many of these features suggest right-lateral displacement. Mapping of Paleogene bedrock units in the uplifted La Cadena range adjacent to the Cerro Goden fault zone reveals the main tectonic events that have culminated in late Quaternary normal-oblique displacement across the Cerro Goden fault. Cretaceous to Eocene rocks of the La Cadena range exhibit large folds with wavelengths of several kms. The orientation of folds and analysis of fault striations within the folds indicate that the folds formed by northeast-southwest shortening in present-day geographic coordinates. The age of deformation is well constrained as late Eocene-early Oligocene by an angular unconformity separating folded, deep

  14. Fault-slip inversions: Their importance in terms of strain, heterogeneity, and kinematics of brittle deformation

    NASA Astrophysics Data System (ADS)

    Riller, U.; Clark, M. D.; Daxberger, H.; Doman, D.; Lenauer, I.; Plath, S.; Santimano, T.

    2017-08-01

    Heterogeneous deformation is intrinsic in natural deformation, but often underestimated in the analysis and interpretation of mesoscopic brittle shear faults. Based on the analysis of 11,222 faults from two distinct tectonic settings, the Central Andes in Argentina and the Sudbury area in Canada, interpolation of principal strain directions and scaled analogue modelling, we revisit controversial issues of fault-slip inversions, collectively adhering to heterogeneous deformation. These issues include the significance of inversion solutions in terms of (1) strain or paleo-stress; (2) displacement, notably plate convergence; (3) local versus far-field deformation; (4) strain perturbations and (5) spacing between stations of fault-slip data acquisition. Furthermore, we highlight the value of inversions for identifying the kinematics of master fault zones in the absence of displaced geological markers. A key result of our assessment is that fault-slip inversions relate to local strain, not paleo-stress, and thus can aid in inferring the kinematics of master faults. Moreover, strain perturbations caused by mechanical anomalies of the deforming upper crust significantly influence local principal strain directions. Thus, differently oriented principal strain axes inferred from fault-slip inversions in a given region may not point to regional deformation caused by successive and distinct deformation regimes. This outcome calls into question the common practice of separating heterogeneous fault-slip data sets into apparently homogeneous subsets. Finally, the fact that displacement vectors and principal strains are rarely co-linear defies the use of brittle fault data as proxy for estimating directions of plate-scale motions.
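
    Most fault-slip inversion schemes rest on the Wallace-Bott assumption that slip on a plane parallels the resolved shear traction of a uniform (reduced) stress or strain tensor. A minimal sketch of that resolution step is shown below; the tensor and fault orientation are hypothetical and the sketch is not the authors' inversion code.

```python
# Minimal sketch of the Wallace-Bott assumption that underlies most fault-slip
# inversion schemes: the predicted slip direction on a plane is the direction
# of the resolved shear traction of a given (reduced) tensor. The tensor and
# fault orientation below are hypothetical.

import numpy as np

def resolved_shear_direction(stress: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Unit vector of the shear traction acting on a plane with the given normal."""
    n = normal / np.linalg.norm(normal)
    traction = stress @ n
    shear = traction - (traction @ n) * n      # remove the normal component
    return shear / np.linalg.norm(shear)

if __name__ == "__main__":
    # Reduced stress tensor in an E-N-Up frame (principal axes along the frame).
    sigma = np.diag([-3.0, -1.0, -2.0])        # compression taken as negative
    fault_normal = np.array([1.0, 1.0, 0.5])   # some oblique plane
    print(resolved_shear_direction(sigma, fault_normal))
```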

  15. Do mesoscale faults near the tip of an active strike-slip fault indicate regional or local stress?

    NASA Astrophysics Data System (ADS)

    Yamaji, Atsushi

    2017-04-01

    Fault-slip analysis has been used in Japan since the Great Tohoku Earthquake (2011) to judge the stability of fractures in the foundations of nuclear power plants. If a fault-slip datum from a fracture surface can be explained by the present stress condition, the fracture is considered at risk of being reactivated as a fault, so it is important to understand the relative significance of regional and local stresses. To answer the question of whether mesoscale faults indicate regional or local stress, fault-slip data were collected from the walls of a trenching site on the Nojima Fault in central Japan, an active, dextral, strike-slip fault. The fault gave rise to the 1995 Kobe earthquake, which killed more than 6000 people. The trench was placed near the fault tip, where the fault produces compressional and extensional local stress conditions on its two sides. A segment of the fault, which ruptured the surface in 1995, bounded Cretaceous granite and latest Pliocene sediments in the trench. Stress inversion of the data from the mesoscale faults observed in the trench revealed both local stresses. The present WNW-ESE regional compression was found on the compressive side but not on the extensional side, probably because local extension surpassed the regional compression; instead, the regional N-S compression of the Early Pleistocene was found on the extensional side. The lesson from this project is that fault-slip analysis reveals both regional and local stresses, and that local stress sometimes masks the regional one. This work was supported by a science project of "Drilling into Fault Damage Zone" (awarded to A. Lin) of the Secretariat of Nuclear Regulation Authority (Japan).

  16. The implication of gouge mineralogy evolution on fault creep: an example from The North Anatolian Fault, Turkey

    NASA Astrophysics Data System (ADS)

    Kaduri, M.; Gratier, J. P.; Renard, F.; Cakir, Z.; Lasserre, C.

    2015-12-01

    Aseismic creep is found at shallow depth along several sections of major active faults, such as the North Anatolian Fault in Turkey, the San Andreas Fault in California (USA), the Longitudinal Valley Fault in Taiwan, the Haiyuan Fault in China, and the El Pilar Fault in Venezuela. Identifying the mechanisms controlling creep and their evolution in time and space represents a major challenge for predicting the mechanical evolution of active faults, the interplay between creep and earthquakes, and the link between short-term geodetic observations and the geological setting. Hence, studying how the initial rock evolves into damaged rock, and then into gouge, is one of the key questions for understanding the origin of fault creep. To address this question, we collected samples from a dozen well-preserved fault outcrops along creeping and locked sections of the North Anatolian Fault. We used various methods such as microscopic and geological observations, EPMA and XRD analysis, combined with image processing, to characterize their mineralogy and strain. We conclude that (1) there is a clear correlation between creep localization and gouge composition: the locked sections of the fault are mostly composed of massive limestone, whereas the creeping sections comprise clay gouges with 40-80% low-friction minerals such as smectite, saponite, and kaolinite that facilitate creep; (2) the fault gouge shows two main structures that evolve with displacement: anastomosing cleavage develops during the first stage of displacement, and increasing displacement leads to the development of layering oblique or sub-parallel to the fault; and (3) the fault gouge results from a progressive evolution of the initial volcanic rocks, including dissolution of soluble species that move at least partially toward the damage zones, and alteration transformations by fluid flow that weaken the gouge and strengthen the damage zone.

  17. Comparison between wavelet and wavelet packet transform features for classification of faults in distribution system

    NASA Astrophysics Data System (ADS)

    Arvind, Pratul

    2012-11-01

    The ability to identify and classify all ten types of faults in a distribution system is an important task for protection engineers. Unlike transmission systems, distribution systems have a complex configuration and are subjected to frequent faults. In the present work, an algorithm has been developed for identifying all ten types of faults in a distribution system by collecting current samples at the substation end. The samples are subjected to a wavelet packet transform and an artificial neural network in order to yield better classification results. A comparison of results between the wavelet transform and the wavelet packet transform is also presented, showing that the features extracted from the wavelet packet transform yield promising results. It should also be noted that the current samples are collected after simulating a 25 kV distribution system in PSCAD software.
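
    A minimal sketch of the feature pipeline described above, assuming the pywt and scikit-learn libraries: current windows are decomposed with a wavelet packet transform, per-node energies form the feature vector, and a neural network classifies the fault type. The signal data, labels, and settings are hypothetical.

```python
# Minimal sketch of the feature pipeline described above: current samples are
# decomposed with a wavelet packet transform and the per-node energies feed a
# neural-network classifier. Signal data, labels, and settings are
# hypothetical; pywt and scikit-learn are assumed to be available.

import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wpt_energy_features(signal: np.ndarray, wavelet: str = "db4",
                        level: int = 3) -> np.ndarray:
    """Energy of each terminal wavelet-packet node, used as the feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(np.square(node.data)) for node in nodes])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical training set: 100 current windows, labels 0..9 standing in
    # for the ten distribution-system fault types mentioned in the abstract.
    signals = rng.normal(size=(100, 512))
    labels = rng.integers(0, 10, size=100)
    X = np.vstack([wpt_energy_features(s) for s in signals])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
```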

  18. Dating faults by quantifying shear heating

    NASA Astrophysics Data System (ADS)

    Maino, Matteo; Casini, Leonardo; Langone, Antonio; Oggiano, Giacomo; Seno, Silvio; Stuart, Finlay

    2017-04-01

    Dating brittle and brittle-ductile faults is crucial for developing seismic models and for understanding the geological evolution of a region. Improving the geochronological approaches to absolute fault dating, and their accuracy, is therefore a key objective for the geological community. Direct dating of ancient faults may be attained by exploiting the thermal effects associated with deformation. Heat generated during faulting (i.e., shear heating) is perhaps the best signal that provides a link between time and activity of a fault. However, other mechanisms not instantaneously related to fault motion can generate heating (advection, upwelling of hot fluids), making it difficult to determine whether the thermal signal corresponds to the timing of fault movement. Recognizing the contribution of shear heating is a fundamental prerequisite for dating fault motion through thermochronometric techniques; therefore, a comprehensive thermal characterization of the fault zone is needed. Several methods have been proposed to assess radiometric ages of faulting from either newly grown crystals on fault gouges or surfaces (e.g. Ar/Ar dating), or thermochronometric resetting of existing minerals (e.g. zircon and apatite fission tracks). In this contribution we show two cases of brittle and brittle-ductile faulting, one shallow thrust from the SW Alps and one HT, pseudotachylite-bearing fault zone in Sardinia. We applied, in both examples, a multidisciplinary approach that integrates field and micro-structural observations, petrographical characterization, geochemical and mineralogical analyses, fluid inclusion microthermometry and numerical modeling with thermochronometric dating of the two fault zones. We used zircon (U-Th)/He thermochronometry to estimate the temperatures experienced by the shallow Alpine thrust. The ZHe thermochronometer has a closure temperature (Tc) of 180°C. Consequently, it is ideally suited to dating large heat-producing faults that were

  19. Apparent stress, fault maturity and seismic hazard for normal-fault earthquakes at subduction zones

    USGS Publications Warehouse

    Choy, G.L.; Kirby, S.H.

    2004-01-01

    The behavior of apparent stress for normal-fault earthquakes at subduction zones is derived by examining the apparent stress (τa = μEs/Mo, where Es is radiated energy and Mo is seismic moment) of all globally distributed shallow (depth, ?? 1 MPa) are also generally intraslab, but occur where the lithosphere has just begun subduction beneath the overriding plate. They usually occur in cold slabs near trenches where the direction of plate motion across the trench is oblique to the trench axis, or where there are local contortions or geometrical complexities of the plate boundary. Lower τa (< 1 MPa) is associated with events occurring at the outer rise (OR) complex (between the OR and the trench axis), as well as with intracrustal events occurring just landward of the trench. The average apparent stress of intraslab-normal-fault earthquakes is considerably higher than the average apparent stress of interplate-thrust-fault earthquakes. In turn, the average τa of strike-slip earthquakes in intraoceanic environments is considerably higher than that of intraslab-normal-fault earthquakes. The variation of average τa with focal mechanism and tectonic regime suggests that the level of τa is related to fault maturity. Lower stress drops are needed to rupture mature faults such as those found at plate interfaces that have been smoothed by large cumulative displacements (from hundreds to thousands of kilometres). In contrast, immature faults, such as those on which intraslab-normal-fault earthquakes generally occur, are found in cold and intact lithosphere in which total fault displacement has been much less (from hundreds of metres to a few kilometres). Also, faults on which high-τa oceanic strike-slip earthquakes occur are predominantly intraplate or at evolving ends of transforms. At subduction zones, earthquakes occurring on immature faults are likely to be more hazardous as they tend to generate higher amounts of radiated energy per unit of moment than
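
    For clarity, the apparent-stress definition referred to in this abstract (with the symbols restored from the garbled record) is the standard one:

```latex
\[
  \tau_a = \mu \, \frac{E_s}{M_0}
\]
% \tau_a : apparent stress, \mu : shear modulus (rigidity),
% E_s : radiated seismic energy, M_0 : seismic moment
```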

  20. The Impact of Solution-Focused Training on Professionals' Beliefs, Practices and Burnout of Child Protection Workers in Tenerife Island

    ERIC Educational Resources Information Center

    Medina, Antonio; Beyebach, Mark

    2014-01-01

    This paper presents the first results of a large-scale research project on the child protection services in Tenerife, Spain. In Study 1, the professional beliefs and practices of 152 child protection workers, as measured by a Professional Beliefs and Practices Questionnaire, were correlated with their scores on the Maslach Burnout Inventory.…

  1. Evolving geometrical heterogeneities of fault trace data

    NASA Astrophysics Data System (ADS)

    Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari

    2010-08-01

    We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
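
    The circular statistics used above to quantify fault-trace disorder can be sketched as follows, treating segment azimuths as axial (0-180 degree) data; the azimuth values and the exact estimator conventions are illustrative, not the authors' code.

```python
# Minimal sketch of the segment-orientation statistics used above: segment
# azimuths are treated as axial (0-180 deg) data and summarized by a circular
# mean and circular standard deviation. Azimuths below are hypothetical.

import numpy as np

def axial_circular_stats(azimuths_deg: np.ndarray) -> tuple[float, float]:
    """Mean direction and circular standard deviation for axial data (0-180 deg)."""
    theta = np.deg2rad(azimuths_deg) * 2.0          # double angles for axial data
    c, s = np.cos(theta).mean(), np.sin(theta).mean()
    r = np.hypot(c, s)                              # mean resultant length
    mean_deg = (np.rad2deg(np.arctan2(s, c)) / 2.0) % 180.0
    circ_std_deg = np.rad2deg(np.sqrt(-2.0 * np.log(r))) / 2.0
    return mean_deg, circ_std_deg

if __name__ == "__main__":
    segment_azimuths = np.array([118.0, 122.0, 115.0, 130.0, 121.0, 119.0])
    mean_dir, disorder = axial_circular_stats(segment_azimuths)
    print(f"mean direction: {mean_dir:.1f} deg, circular std: {disorder:.1f} deg")
```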

  2. Microstructural investigations on carbonate fault core rocks in active extensional fault zones from the central Apennines (Italy)

    NASA Astrophysics Data System (ADS)

    Cortinovis, Silvia; Balsamo, Fabrizio; Storti, Fabrizio

    2017-04-01

    The study of the microstructural and petrophysical evolution of cataclasites and gouges has a fundamental impact on both the hydraulic and frictional properties of fault zones. In the last decades, growing attention has been paid to the characterization of carbonate fault core rocks due to the nucleation and propagation of coseismic ruptures in carbonate successions (e.g., the Umbria-Marche 1997, L'Aquila 2009, and Amatrice 2016 earthquakes in the Central Apennines, Italy). Among several physical parameters, grain size and shape in fault core rocks are expected to control sliding along the slip surfaces in active fault zones, thus influencing the propagation of coseismic ruptures during earthquakes. Nevertheless, the role of grain size and shape distribution evolution in controlling the weakening or strengthening behavior in seismogenic fault zones is still not fully understood, also because a comprehensive database from natural fault cores is still missing. In this contribution, we present a preliminary study of seismogenic extensional fault zones in the Central Apennines, combining detailed field mapping with grain size and microstructural analysis of fault core rocks. Field mapping was aimed at describing the structural architecture of the fault systems and the along-strike variations in fault rock distribution and fracturing. In the laboratory, we used a Malvern Mastersizer 3000 granulometer to obtain a precise grain size characterization of loose fault rocks, combined with sieving for the coarser size classes. In addition, we employed image analysis on thin sections to quantify grain shape and size in cemented fault core rocks. The studied fault zones consist of an up to 5-10 m-thick fault core, where most of the slip is accommodated, surrounded by a tens-of-meters-wide fractured damage zone. Fault core rocks consist of (1) loose to partially cemented breccias characterized by different grain size (from several cm up to mm) and variable grain shape (from very angular to sub

  3. Grid Integrated Type 3 Wind Systems - Modeling, and Line Protection Performance Analysis using the RTDS

    NASA Astrophysics Data System (ADS)

    Jain, Rishabh

    In this thesis, the line protection elements and their supervisory elements are analyzed in the context of Type 3 (Doubly Fed Induction Generator based) grid-integrated wind turbine systems. The underlying converter and controller design algorithms and topologies are discussed. A detailed controller for the Type 3 wind turbine system is designed and integrated into the grid using the RTDS. An alternative to the conventional PLL for tracking the rotor frequency is designed and implemented. A comparative analysis of the performance of an averaged model and the corresponding switching model is presented. After completing the WT model design, the averaged model is used to build an aggregate 10-generator equivalent model tied to a 230 kV grid via a 22 kV collector. This model is a valuable asset for understanding the dynamics and the unfaulted and faulted behavior of aggregated and single-turbine Type 3 WT systems. The model is then utilized to analyze the response of conventional protection schemes (line current differential and mho distance elements) and their respective supervisory elements in modern commercial protection relays in real time by hardware-in-the-loop simulation using the RTDS. Differences in the behavior of these elements compared to conventional power systems are noted. Faults are analyzed from the relay's perspective and the reasons for the observed behavior are presented. Challenges associated with sequence components and relay sensitivity are discussed, and alternative practices to circumvent these issues are recommended.

  4. Active fault databases and seismic hazard calculations: a compromise between science and practice. Review of case studies from Spain.

    NASA Astrophysics Data System (ADS)

    Garcia-Mayordomo, Julian; Martin-Banda, Raquel; Insua-Arevalo, Juan Miguel; Alvarez-Gomez, Jose Antonio; Martinez-Diaz, Jose Jesus

    2017-04-01

    Since the Quaternary Active Faults Database of Iberia (QAFI) was released in February 2012, a number of studies aimed at producing seismic hazard assessments have made use of it. We will present a summary of the shortcomings and advantages that were encountered when QAFI was considered in different seismic hazard studies. These include the production of the new official seismic hazard map of Spain, performed in view of the foreseen adoption of Eurocode-8 during 2017. The QAFI database was considered a complementary source of information for designing the seismogenic source-zone models used in the calculations, particularly for the estimation of the maximum magnitude distribution in each zone, as well as for assigning the predominant rupture mechanism based on style of faulting. We will also review the different results obtained by other studies that considered QAFI faults as independent seismogenic sources as opposed to source-zones, revealing, on one hand, the crucial importance of data reliability and, on the other, the strong influence that ground-motion attenuation models have on the actual impact of fault sources on hazard results. Finally, we will briefly present the updated version of the database (QAFI v.3, 2015), which includes an original scheme for evaluating the reliability of fault seismic parameters, specifically devised to facilitate decision-making by seismic hazard practitioners.
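
    One common way a fault database feeds a maximum-magnitude estimate is through a log-linear rupture-length scaling relation. The sketch below illustrates that step only; the default coefficients are placeholders to be replaced by the published regression appropriate to the fault style, and the fault lengths are hypothetical.

```python
# Minimal sketch of estimating a maximum-magnitude distribution for a source
# zone from the lengths of mapped faults, the kind of use the abstract above
# describes for the QAFI database. The length-to-magnitude regression is a
# generic log-linear scaling; the default coefficients are illustrative only
# and should be replaced by the published relation for the relevant fault style.

import numpy as np

def magnitude_from_rupture_length(length_km: np.ndarray,
                                  a: float = 5.08, b: float = 1.16) -> np.ndarray:
    """Log-linear scaling M = a + b*log10(L); coefficients supplied by the user."""
    return a + b * np.log10(length_km)

if __name__ == "__main__":
    # Hypothetical mapped fault lengths (km) inside one seismogenic source zone.
    fault_lengths = np.array([12.0, 25.0, 40.0, 8.0])
    magnitudes = magnitude_from_rupture_length(fault_lengths)
    print("per-fault Mmax:", np.round(magnitudes, 2))
    print("zone Mmax:", round(float(magnitudes.max()), 2))
```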

  5. Collaborative Protection and Control Schemes for Shipboard Electrical Systems

    DTIC Science & Technology

    2007-03-26

    VSCs ) for fault current limiting and interruption. Revisions needed on the VSCs to perform these functions have been identified, and feasibility of this...disturbances very fast - less than 3-4 ms [3]. Next section summarizes the details of the agent-based protection scheme that uses the VSC as the...fault currents. In our previous work [2, 3], it has been demonstrated that this new functionality for the VSC can be achieved by proper selection of

  6. Advanced Ground Systems Maintenance Functional Fault Models For Fault Isolation Project

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M. (Compiler)

    2014-01-01

    This project implements functional fault models (FFM) to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  7. Ultrareliable fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Webster, L. D.; Slykhouse, R. A.; Booth, L. A., Jr.; Carson, T. M.; Davis, G. J.; Howard, J. C.

    1984-01-01

    It is demonstrated that fault-tolerant computer systems based on redundant, independent operation, such as those on the Shuttle, are a viable alternative in fault-tolerant system designs. The ultrareliable fault-tolerant control system (UFTCS) was developed and tested in laboratory simulations of a UH-1H helicopter. UFTCS includes asymptotically stable independent control elements in a parallel, cross-linked system environment. Static redundancy provides the fault tolerance. Polling is performed among the computers, with the results allowing for time-delay channel variations within tight bounds. When compared with the laboratory and actual flight data for the helicopter, the probability of a fault during the first 10 hr of flight, given quintuple computer redundancy, was found to be 1 in 290 billion. Two weeks of untended Space Station operations would experience a fault probability of 1 in 24 million. Techniques for avoiding channel divergence problems are identified.
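
    The cross-channel polling described above can be illustrated with a minimal majority-voting sketch: each redundant channel proposes a value, channels are compared within a tolerance, and dissenters are flagged. The channel values and tolerance are hypothetical, and the scheme is a simplification of the UFTCS design.

```python
# Minimal sketch of majority voting among redundant channels, a simplification
# of the cross-channel polling idea described above. Channel values and the
# agreement tolerance are hypothetical.

def vote(channels: dict[str, float], tol: float = 0.01):
    """Return (voted value, faulty channels) by pairwise agreement counting."""
    names = list(channels)
    agreement = {
        n: sum(abs(channels[n] - channels[m]) <= tol for m in names if m != n)
        for n in names
    }
    best = max(agreement, key=agreement.get)        # channel agreeing with most others
    majority = [n for n in names if abs(channels[n] - channels[best]) <= tol]
    voted = sum(channels[n] for n in majority) / len(majority)
    faulty = [n for n in names if n not in majority]
    return voted, faulty

if __name__ == "__main__":
    readings = {"ch1": 10.001, "ch2": 10.000, "ch3": 13.7, "ch4": 9.999, "ch5": 10.002}
    print(vote(readings))   # ~10.0 voted value, ['ch3'] flagged as faulty
```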

  8. Creating an automated chiller fault detection and diagnostics tool using a data fault library.

    PubMed

    Bailey, Margaret B; Kreider, Jan F

    2003-07-01

    Reliable, automated detection and diagnosis of abnormal behavior within vapor compression refrigeration cycle (VCRC) equipment is extremely desirable for equipment owners and operators. The specific type of VCRC equipment studied in this paper is a 70-ton helical rotary, air-cooled chiller. The fault detection and diagnostic (FDD) tool developed as part of this research analyzes chiller operating data and detects faults by recognizing trends or patterns existing within the data. The FDD method incorporates a neural network (NN) classifier to infer the current state given a vector of observables. The FDD method therefore relies upon the availability of normal and fault empirical data for training purposes, and a fault library of empirical data is assembled. This paper presents procedures for conducting sophisticated fault experiments on chillers that simulate air-cooled condenser, refrigerant, and oil-related faults. The experimental processes described here are not well documented in the literature and therefore will provide the interested reader with a useful guide. In addition, the authors provide evidence, based on both thermodynamics and empirical data analysis, that chiller performance is significantly degraded during fault operation. The chiller's performance degradation is successfully detected and classified by the NN FDD classifier as discussed in the paper's final section.
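
    A minimal sketch of the classifier side of this FDD approach, assuming scikit-learn: a neural network is trained on a library of labeled normal and faulty operating data and then infers the state from a vector of observables. The observables, fault classes, and data below are hypothetical, not the paper's fault library.

```python
# Minimal sketch of the FDD approach described above: a neural-network
# classifier is trained on a library of labeled normal and faulty chiller
# operating data, then infers the current state from a vector of observables.
# The observables, fault labels, and data are hypothetical.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Hypothetical fault library: rows of [condenser approach K, evaporator
# approach K, compressor power kW, refrigerant subcooling K].
X_normal = rng.normal([3.0, 2.0, 55.0, 5.0], [0.3, 0.2, 2.0, 0.5], size=(200, 4))
X_cond_fouling = rng.normal([6.5, 2.0, 62.0, 5.0], [0.4, 0.2, 2.5, 0.5], size=(200, 4))
X_low_charge = rng.normal([3.0, 4.5, 50.0, 1.5], [0.3, 0.3, 2.0, 0.4], size=(200, 4))

X = np.vstack([X_normal, X_cond_fouling, X_low_charge])
y = np.array(["normal"] * 200 + ["condenser_fouling"] * 200 + ["low_charge"] * 200)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0))
clf.fit(X, y)
# Classify a new observation vector.
print(clf.predict([[6.2, 2.1, 61.0, 4.8]]))   # likely 'condenser_fouling'
```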

  9. The buried active faults in southeastern China as revealed by the relocated background seismicity and fault plane solutions

    NASA Astrophysics Data System (ADS)

    Zhu, A.; Wang, P.; Liu, F.

    2017-12-01

    Southeastern mainland China corresponds to the South China block, which is characterized by moderate historical seismicity and a low strain rate. Most faults are buried under thick Quaternary deposits, so it is difficult to detect and locate them using routine geological methods. Only a few have been identified as active in the late Quaternary, which leads to a relatively high potential seismic risk for this region because earthquakes can occur at unexpected locations. We performed both hypoDD and tomoDD relocations of the background seismicity from 2000 to 2016 to investigate the buried faults. Some buried active faults are revealed by the relocated seismicity and the velocity structure, with no geologically known faults corresponding to them and no evidence of surface activity ever observed. The geometries of the faults are obtained by analyzing the hypocentral distribution pattern and focal mechanisms. The focal mechanism solutions indicate that all the revealed faults are dominated by strike-slip mechanisms, some with thrust components, whereas previous fault investigation and detection results show that most of the Quaternary faults in southeastern China are dominated by normal movement. This suggests that there may be two fault systems in the deep and shallow tectonic regimes: the revealed faults may constitute the deep system that acts as the seismogenic faults, while the normal faults at shallow depth cannot generate destructive earthquakes. The variation in the Curie-point depths agrees well with the structural plane of the revealed active faults, suggesting that the faults may have changed the deep structure.

  10. Protecting the GEO Environment: Policies and Practices

    NASA Technical Reports Server (NTRS)

    Johnson, Nicholas L.; McKay, Gordon A. (Technical Monitor)

    1999-01-01

    The geosynchronous orbital regime has long been recognized as a unique space resource, dictating special measures to ensure its continuing use for future generations. During the past 20 years a variety of national and international policies have been developed to preserve this environment. A review of current practices involving the deployment and disposal of geosynchronous spacecraft, associated upper stages and apogee kick motors, and geosynchronous orbit transfer objects indicates both positive and negative trends. Most spacecraft operators are indeed performing end-of-mission maneuvers, but the boost altitudes normally fall short of policy guidelines. Russia, a major operator in geosynchronous orbit, maneuvers only 1 in 3 spacecraft out of the region, while China has never refired a spacecraft above GEO. The viability of voluntary protection measures for this regime depends upon the responsible actions of the aerospace community as a whole.

  11. Utilization of Supercapacitors in Adaptive Protection Applications for Resiliency against Communication Failures: A Size and Cost Optimization Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Hany F; El Hariri, Mohamad; Elsayed, Ahmed

    Microgrids' adaptive protection techniques rely on communication signals from the point of common coupling to adjust the corresponding relays' settings for either grid-connected or islanded modes of operation. However, during communication outages or in the event of a cyberattack, relay settings are not changed, and adaptive protection schemes are rendered unsuccessful. Due to their fast response, supercapacitors, which are present in the microgrid to feed pulse loads, could also be utilized to enhance the resiliency of adaptive protection schemes to communication outages. Proper sizing of the supercapacitors is therefore important in order to maintain stable system operation and also regulate the protection scheme's cost. This paper presents a two-level optimization scheme for minimizing the supercapacitor size along with optimizing its controllers' parameters. The latter leads to a reduction of the supercapacitor fault-current contribution and an increase in that of other AC resources in the microgrid in the extreme case of a fault occurring simultaneously with a pulse load. It was also shown that the size of the supercapacitor can be reduced if the pulse load is temporarily disconnected during the transient fault period. Simulations showed that the resulting supercapacitor size and the optimized controller parameters from the proposed two-level optimization scheme fed sufficient fault current for different types of faults while minimizing the cost of the protection scheme.
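
    The two-level structure described above (an outer search over supercapacitor size with an inner optimization of controller parameters for each candidate size) can be sketched as a nested optimization. The cost functions below are placeholders standing in for the paper's simulation-based metrics, and all numbers are hypothetical.

```python
# Minimal sketch of a two-level optimization of the kind described above: an
# outer search over supercapacitor size, with an inner optimization of
# controller gains for each candidate size. The cost functions are
# placeholders; they stand in for the simulation-based metrics in the paper.

import numpy as np
from scipy.optimize import minimize, minimize_scalar

def inner_controller_cost(gains: np.ndarray, size_f: float) -> float:
    """Placeholder for a fault-current / voltage-regulation penalty from simulation."""
    kp, ki = gains
    return (kp - 2.0 / size_f) ** 2 + (ki - 0.5) ** 2 + 0.1 / size_f

def outer_cost(size_f: float) -> float:
    """Supercapacitor cost plus the best achievable controller performance."""
    inner = minimize(inner_controller_cost, x0=np.array([1.0, 1.0]),
                     args=(size_f,), method="Nelder-Mead")
    return 3.0 * size_f + inner.fun     # hypothetical cost-per-farad weighting

if __name__ == "__main__":
    result = minimize_scalar(outer_cost, bounds=(0.5, 10.0), method="bounded")
    print(f"optimal size ~ {result.x:.2f} F, total cost ~ {result.fun:.2f}")
```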

  12. Controls on Patterns of Repeated Fault Rupture: Examples From the Denali and Bear River Faults

    NASA Astrophysics Data System (ADS)

    Schwartz, D. P.; Hecker, S.

    2013-12-01

    A requirement for estimating seismic hazards is assigning magnitudes to earthquake sources. This relies on anticipating rupture length and slip along faults. Fundamental questions include whether the lengths of past surface ruptures can be reasonably determined from fault zone characteristics and whether the variability in length and slip during repeated faulting can be constrained. To address these issues, we look at rupture characteristics and their possible controls from examples in very different tectonic settings: the high slip rate (≥15 mm/yr) Denali fault system, Alaska, and the recently activated Bear River normal fault, Wyoming-Utah. The 2002 rupture of the central Denali fault (CDF) is associated with two noteworthy geometric features. First, rupture initiated where the Susitna Glacier thrust fault (SG) intersects the CDF at depth, near the apex of a structurally complex restraining bend along the Denali. Paleoseismic data show that for the past 700 years the timing of large surface ruptures on the Denali fault west of the 2002 rupture has been distinct from those along the CDF. For the past ~6 ka the frequency of SG-to-Denali ruptures has been ~1:12, indicating that this complexity of the 2002 rupture has not been common. Second, rupture propagated off one strike-slip fault (CDF) onto another (the Totschunda fault, TF), an occurrence that has seldom been observed. LiDAR mapping of the intersection shows direct connectivity of the two faults--the CDF simply branches into both the TF and the eastern Denali fault (EDF). Differences in the timing of earthquakes during the past 700-800 years at sites surrounding this intersection, and estimates of accumulated slip from slip rates, indicate that for the 2002 rupture sufficient strain had accumulated on the TF to favor its failure. In contrast, the penultimate CDF rupture, with the same slip distribution as in 2002, appears to have stopped at or near the branch point, implying that neither the TF nor the EDF

  13. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Vouk, Mladen A.

    1989-01-01

    Twenty functionally equivalent programs were built and tested in a multiversion software experiment. Following unit testing, all programs were subjected to an extensive system test. In the process, sixty-one distinct faults were identified among the versions. Less than 12 percent of the faults exhibited varying degrees of positive correlation. The common-cause (or similar) faults spanned as many as 14 components. However, a majority of these faults were trivial, and easily detected by proper unit and/or system testing. Only two of the seven similar faults were difficult faults, and both were caused by specification ambiguities. One of these faults exhibited a variable identical-and-wrong response span, i.e., a response span which varied with the testing conditions and input data. Techniques that could have been used to avoid the faults are discussed. For example, it was determined that back-to-back testing of 2-tuples could have been used to eliminate about 90 percent of the faults. In addition, four of the seven similar faults could have been detected by using back-to-back testing of 5-tuples. It is believed that most, if not all, similar faults could have been avoided had the specifications been written using more formal notation, had the unit testing phase been subject to more stringent standards and controls, and had better tools for measuring the quality and adequacy of the test data (e.g., coverage) been used.
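
    The back-to-back testing mentioned above runs an N-tuple of functionally equivalent versions on the same inputs and flags any test case on which they disagree, so no independent oracle is needed. The sketch below illustrates the idea in Python; the three toy versions, the seeded fault, and the input range are hypothetical and stand in for the experiment's programs.

      def version_a(x):
          return x * x

      def version_b(x):
          return x ** 2

      def version_c(x):
          # hypothetical seeded fault: wrong sign for negative inputs
          return x * x if x >= 0 else -(x * x)

      def back_to_back(versions, test_inputs):
          """Return the inputs (and outputs) on which the version tuple disagrees."""
          discrepancies = []
          for x in test_inputs:
              outputs = [v(x) for v in versions]
              if len(set(outputs)) > 1:   # any disagreement flags the test case
                  discrepancies.append((x, outputs))
          return discrepancies

      print(back_to_back([version_a, version_b, version_c], range(-3, 4)))  # flags x = -3, -2, -1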

  14. Development of Hydrologic Characterization Methodology of Faults: Outline of the Project in Berkeley, California

    NASA Astrophysics Data System (ADS)

    Goto, J.; Miwa, T.; Tsuchi, H.; Karasaki, K.

    2009-12-01

    The Nuclear Waste Management Organization of Japan (NUMO) will start a three-stage program for selecting an HLW and TRU waste repository site once volunteer municipalities come forward. It is recognized from experience with site characterization programs around the world that the hydrologic properties of faults are among the most important parameters in the early stage of such a program. Numerous faults of interest are expected to exist in an investigation area of several tens of square kilometers. It is, however, impossible to characterize all these faults within a limited time and budget. This raises a problem for repository design and safety assessment: we may have to accept unrealistic or overly conservative results if a single model or parameter set is used for all the faults in the area. We therefore seek to develop an efficient and practical methodology for characterizing the hydrologic properties of faults. This project is a five-year program started in 2007, comprising the development of the basic methodology through a literature study and its verification through field investigations. The literature study aims to classify faults by correlating their geological features with hydraulic properties, to identify the most efficient technologies for fault characterization, and to develop a work flow diagram. The field investigation starts with the selection of a site and fault(s), followed by analysis of existing site data, surface geophysics, geological mapping, trenching, water sampling, a series of borehole investigations, and modeling/analyses. Based on the results of the field investigations, we plan to develop a systematic hydrologic characterization methodology for faults. A classification method that correlates combinations of geological features (rock type, fault displacement, fault type, position in a fault zone, fracture zone width, damage zone width) with the widths of high-permeability zones around a fault zone was proposed through a survey of available documents of the site

  15. Frictional and hydraulic behaviour of carbonate fault gouge during fault reactivation - An experimental study

    NASA Astrophysics Data System (ADS)

    Delle Piane, Claudio; Giwelli, Ausama; Clennell, M. Ben; Esteban, Lionel; Nogueira Kiewiet, Melissa Cristina D.; Kiewiet, Leigh; Kager, Shane; Raimon, John

    2016-10-01

    We present a novel experimental approach devised to test the hydro-mechanical behaviour of different structural elements of carbonate fault rocks during experimental re-activation. Experimentally faulted core plugs were subjected to triaxial tests under water-saturated conditions simulating depletion processes in reservoirs. Different fault zone structural elements were created by shearing initially intact travertine blocks (nominal size: 240 × 110 × 150 mm) to a maximum displacement of 20 and 120 mm under different normal stresses. Meso- and microstructural features of these samples and the thickness-to-displacement ratios of their deformation zones allowed us to classify them as experimentally created damage zones (displacement of 20 mm) and fault cores (displacement of 120 mm). Following direct shear testing, cylindrical plugs with a diameter of 38 mm were drilled across the slip surface and re-activated in a conventional triaxial configuration while monitoring the permeability and frictional behaviour of the samples as a function of applied stress. All re-activation experiments on faulted plugs showed a consistent frictional response consisting of an initial fast hardening followed by apparent yield up to a friction coefficient of approximately 0.6, attained at around 2 mm of displacement. Permeability in the re-activation experiments shows an exponential decay with increasing mean effective stress. The rate of permeability decline with mean effective stress is higher in the fault core plugs than in the simulated damage zone ones. It can be concluded that the presence of gouge in un-cemented carbonate faults gives them a sealing character and that leakage cannot be achieved by renewed movement on the fault plane alone, at least not within the range of slip measurable with our apparatus (i.e. approximately 7 mm of cumulative displacement). Additionally, it is shown that under sub-seismic slip rates re-activated carbonate faults remain strong and no frictional

  16. Study on the Evaluation Method for Fault Displacement: Probabilistic Approach Based on Japanese Earthquake Rupture Data - Distributed fault displacements -

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Tonagi, M.

    2016-12-01

    Distributed fault displacements in Probabilistic Fault Displacement Hazard Analysis (PFDHA) play an important role in the evaluation of critical facilities such as nuclear installations. In Japan, nuclear installations must be sited where there is no possibility of displacement occurring on active faults during an earthquake. Youngs et al. (2003) defined distributed faulting as displacement on other faults, shears, or fractures in the vicinity of the principal rupture, in response to the principal faulting. Other researchers have treated data on distributed faulting around principal faults and modeled it according to their own definitions (e.g. Petersen et al., 2011; Takao et al., 2013). We compiled Japanese fault displacement data and constructed slip-distance relationships for different fault types. In the case of reverse faults, the slip-distance relationship on the footwall shows a different trend from that on the hanging wall. Process zones and damage zones have been studied as weak structures around principal faults; their density and number decrease rapidly with distance from the principal faults. We contrasted the trend of these zones with that of the distributed slip-distance distributions. Subsurface FEM simulations were carried out to investigate the distribution of stress around principal faults; the results show a trend similar to the field observations. This research was part of the 2014-2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  17. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture.

    PubMed

    Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing

    2017-01-14

    In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these harsh environments, the staff generally lack professional knowledge, and the sites receive little attention. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logical structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to learn the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be rapidly diagnosed with high precision, while one-symptom-to-two-faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.
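
    As a rough illustration of the symptom-to-fault mapping described above, the sketch below fuzzifies raw sensor symptoms into membership degrees and trains a small neural network on them. The symptom definitions, membership bounds, sample data, and fault labels are hypothetical, not the paper's rule base, and scikit-learn's MLPClassifier stands in for the fuzzy neural network.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def fuzzify(value, low, high):
          """Simple ramp membership: 0 below low, 1 above high, linear in between."""
          return float(np.clip((value - low) / (high - low), 0.0, 1.0))

      def symptoms(dissolved_oxygen_mg_l, seconds_without_data):
          return [
              fuzzify(6.0 - dissolved_oxygen_mg_l, 0.0, 4.0),   # "oxygen reading abnormally low"
              fuzzify(seconds_without_data, 10.0, 300.0),       # "node communication lost"
          ]

      # hypothetical training samples: (DO in mg/L, seconds without data) -> fault label
      X = [symptoms(6.5, 0), symptoms(2.0, 5), symptoms(6.2, 600), symptoms(1.8, 650), symptoms(3.0, 20)]
      y = ["no_fault", "aerator_fault", "network_fault", "aerator_and_network", "aerator_fault"]

      clf = MLPClassifier(hidden_layer_sizes=(6,), max_iter=5000, random_state=0).fit(X, y)
      print(clf.predict([symptoms(2.2, 400)]))   # diagnose a new observation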

  18. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture

    PubMed Central

    Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing

    2017-01-01

    In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these harsh environments, the staff generally lack professional knowledge, and the sites receive little attention. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logical structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to learn the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be rapidly diagnosed with high precision, while one-symptom-to-two-faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT. PMID:28098822

  19. Geometry and architecture of faults in a syn-rift normal fault array: The Nukhul half-graben, Suez rift, Egypt

    NASA Astrophysics Data System (ADS)

    Wilson, Paul; Gawthorpe, Rob L.; Hodgetts, David; Rarity, Franklin; Sharp, Ian R.

    2009-08-01

    The geometry and architecture of a well exposed syn-rift normal fault array in the Suez rift is examined. At pre-rift level, the Nukhul fault consists of a single zone of intense deformation up to 10 m wide, with a significant monocline in the hanging wall and much more limited folding in the footwall. At syn-rift level, the fault zone is characterised by a single discrete fault zone less than 2 m wide, with damage zone faults up to approximately 200 m into the hanging wall, and with no significant monocline developed. The evolution of the fault from a buried structure with associated fault-propagation folding, to a surface-breaking structure with associated surface faulting, has led to enhanced bedding-parallel slip at lower levels that is absent at higher levels. Strain is enhanced at breached relay ramps and bends inherited from pre-existing structures that were reactivated during rifting. Damage zone faults observed within the pre-rift show ramp-flat geometries associated with contrasts in competency of the layers cut and commonly contain zones of scaly shale or clay smear. Damage zone faults within the syn-rift are commonly very straight, and may be discrete fault planes with no visible fault rock at the scale of observation, or contain relatively thin and simple zones of scaly shale or gouge. The geometric and architectural evolution of the fault array is interpreted to be the result of (i) the evolution from distributed trishear deformation during upward propagation of buried fault tips to surface faulting after faults breach the surface; (ii) differences in deformation response between lithified pre-rift units that display high competence contrasts during deformation, and unlithified syn-rift units that display low competence contrasts during deformation; and (iii) the history of segmentation, growth and linkage of the faults that make up the fault array. This has important implications for fluid flow in fault zones.

  20. Fault trees and sequence dependencies

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.

    1990-01-01

    One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
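
    As a concrete illustration of why sequence dependencies need a Markov solution, the sketch below evaluates a cold-spare arrangement, in which the spare can only fail after the primary has failed, and compares it with a static AND gate that ignores the ordering. The failure rates, mission time, and use of SciPy's matrix exponential are assumptions for illustration, not values from the fault-tolerant parallel processor models.

      import numpy as np
      from scipy.linalg import expm

      lam_primary, lam_spare = 1e-3, 1e-3   # assumed failure rates per hour
      t = 1000.0                            # assumed mission time in hours

      # States: 0 = primary up (spare cold), 1 = primary failed / spare up, 2 = both failed
      Q = np.array([[-lam_primary, lam_primary, 0.0],
                    [0.0,          -lam_spare,  lam_spare],
                    [0.0,           0.0,        0.0]])
      p_cold_spare = (np.array([1.0, 0.0, 0.0]) @ expm(Q * t))[2]   # Markov solution

      # A static AND gate treats the two failures as independent and concurrent,
      # which overstates the failure probability for the same rates.
      p_static_and = (1 - np.exp(-lam_primary * t)) * (1 - np.exp(-lam_spare * t))
      print(p_cold_spare, p_static_and)   # ~0.26 versus ~0.40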

  1. Physical fault tolerance of nanoelectronics.

    PubMed

    Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N

    2011-04-29

    The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.

  2. Enacting Firm, Fair and Friendly Practice: A Model for Strengths-Based Child Protection Relationships?

    PubMed

    Oliver, Carolyn; Charles, Grant

    2016-06-01

    Strengths-based solution-focused approaches are gaining ground in statutory child protection work, but few studies have asked front line practitioners how they navigate the complex worker-client relationships such approaches require. This paper describes one component of a mixed-methods study in a large Canadian statutory child protection agency in which 225 workers described how they applied the ideas of strengths-based practice in their daily work. Interviews with twenty-four practitioners were analysed using an interpretive description approach. Only four interviewees appeared to successfully enact a version of strengths-based practice that closely mirrored those described by key strengths-based child protection theorists and was fully congruent with their mandated role. They described navigating a shifting balance of collaboration and authority in worker-client relationships based on transparency, impartial judgement, attentiveness to the worker-client interaction and the value that clients were fellow human beings. Their accounts extend current conceptualisations of the worker-client relationship in strengths-based child protection work and are congruent with current understandings of effective mandated relationships. They provide what may be a useful model to help workers understand and navigate relationships in which they must reconcile their own authority and expertise with genuine support for the authority and expertise of their clients.

  3. Characterizing the effects of intermittent faults on a processor for dependability enhancement strategy.

    PubMed

    Wang, Chao Saul; Fu, Zhong-Chuan; Chen, Hong-Song; Wang, Dong-Sheng

    2014-01-01

    As semiconductor technology scales into the nanometer regime, intermittent faults have become an increasing threat. This paper focuses on the effects of intermittent faults on NET versus REG on one hand and the implications for dependability strategy on the other. First, the vulnerability characteristics of representative units in OpenSPARC T2 are revealed, and in particular, the highly sensitive modules are identified. Second, an arch-level dependability enhancement strategy is proposed, showing that events such as core/strand running status and core-memory interface events can serve as detectable symptoms. A simple watchdog can be deployed to detect application running status (IEXE event). The SDC (silent data corruption) rate is then evaluated, demonstrating the strategy's potential. Third and last, the effects of traditional protection schemes in the target CMT on intermittent faults are quantitatively studied in terms of the contribution of each trap type, demonstrating the necessity of taking this factor into account in the strategy.

  4. A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.

    PubMed

    Xue, Xiaoming; Zhou, Jianzhong

    2017-01-01

    To further improve diagnosis accuracy and efficiency, a hybrid fault diagnosis approach based on mixed-domain state features, which systematically blends statistical analysis and artificial intelligence techniques, is proposed in this work for rolling element bearings. To simplify the fault diagnosis problem, the execution of the proposed method is divided into three steps, i.e., preliminary fault detection, fault type recognition, and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment is made by a statistical analysis method based on permutation entropy. If a fault exists, the following two processes, based on artificial intelligence approaches, are performed to further recognize the fault type and then identify the fault degree. For these two subsequent steps, mixed-domain state features containing time-domain, frequency-domain and multi-scale features are extracted to represent the fault characteristics under different working conditions. As a powerful time-frequency analysis method, fast EEMD is employed to obtain the multi-scale features. Furthermore, due to information redundancy and submergence in the original feature space, a novel manifold learning method (modified LGPCA) is introduced to obtain low-dimensional representations of the high-dimensional feature space. Finally, two cases, each with 12 working conditions, are used to evaluate the performance of the proposed method, with vibration signals measured from an experimental rolling element bearing test bench. The analysis results show the effectiveness and superiority of the proposed method, whose diagnostic approach is well suited to practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
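
    For the preliminary health check described above, permutation entropy measures how disordered the vibration signal's ordinal patterns are; a noticeable drop in entropy can indicate that regular, fault-related structure has appeared. The sketch below is a minimal implementation with assumed parameters and synthetic signals; it is not the paper's feature pipeline.

      import math
      import numpy as np

      def permutation_entropy(x, order=3, delay=1):
          """Normalized Bandt-Pompe permutation entropy of a 1-D signal."""
          x = np.asarray(x, dtype=float)
          n = len(x) - (order - 1) * delay
          patterns = [tuple(np.argsort(x[i:i + order * delay:delay])) for i in range(n)]
          _, counts = np.unique(patterns, axis=0, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum() / math.log2(math.factorial(order)))

      rng = np.random.default_rng(0)
      t = np.arange(0, 1, 1 / 2000)
      broadband = rng.normal(size=t.size)                                    # healthy-like, high complexity
      periodic = np.sin(2 * np.pi * 60 * t) + 0.1 * rng.normal(size=t.size)  # regular, fault-like structure
      print(permutation_entropy(broadband), permutation_entropy(periodic))   # the periodic signal scores lower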

  5. The relationship of near-surface active faulting to megathrust splay fault geometry in Prince William Sound, Alaska

    NASA Astrophysics Data System (ADS)

    Finn, S.; Liberty, L. M.; Haeussler, P. J.; Northrup, C.; Pratt, T. L.

    2010-12-01

    We interpret regionally extensive, active faults beneath Prince William Sound (PWS), Alaska, to be structurally linked to deeper megathrust splay faults, such as the one that ruptured in the 1964 M9.2 earthquake. Western PWS in particular is unique; the locations of active faulting offer insights into the transition at the southern terminus of the previously subducted Yakutat slab to Pacific plate subduction. Newly acquired high-resolution marine seismic data show three seismic facies related to Holocene and older Quaternary to Tertiary strata. These sediments are cut by numerous high-angle normal faults in the hanging wall of the megathrust splay. Crustal-scale seismic reflection profiles show splay faults emerging from 20 km depth between the Yakutat block and North American crust and surfacing as the Hanning Bay and Patton Bay faults. A distinct boundary beneath Hinchinbrook Entrance coincides with a systematic change in fault trend from N30E in southwestern PWS to N70E in northeastern PWS. The fault trend change beneath Hinchinbrook Entrance may occur gradually or abruptly, and there is evidence for similar deformation near Montague Strait Entrance. Landward of the surface expressions of the splay fault, we observe subsidence, faulting, and landslides that record deformation associated with the 1964 and older megathrust earthquakes. Surface exposures of Tertiary rocks throughout PWS, along with new apatite-helium dates, suggest long-term and regional uplift with localized, fault-controlled subsidence.

  6. Biometrics IRB best practices and data protection

    NASA Astrophysics Data System (ADS)

    Boehnen, Christopher; Bolme, David; Flynn, Patrick

    2015-05-01

    The collection of data from human subjects for biometrics research in the United States requires the development of a data collection protocol that is reviewed by a Human Subjects Institutional Review Board (IRB). The IRB reviews the protocol for risks and approves it if it meets the criteria for approval specified in the relevant Federal regulations (45 CFR 46). Many other countries operate similar mechanisms for the protection of human subjects. IRBs review protocols for safety, confidentiality, and for minimization of risk associated with identity disclosure. Since biometric measurements are potentially identifying, IRB scrutiny of biometrics data collection protocols can be expected to be thorough. This paper discusses the intricacies of IRB best practices within the worldwide biometrics community. This is important because research decisions involving human subjects are made at a local level and do not set a precedent for decisions made by another IRB board. In many cases, what one board approves is not approved by another board, resulting in significant inconsistencies that prove detrimental to both researchers and human subjects. Furthermore, the level of biometrics expertise may be low on IRBs, which can contribute to the unevenness of reviews. This publication will suggest possible best practices for designing and seeking IRB approval for human subjects research involving biometrics measurements. The views expressed are the opinions of the authors.

  7. Fault-tolerant quantum error detection.

    PubMed

    Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher

    2017-10-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.

  8. Fault-tolerant quantum error detection

    PubMed Central

    Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher

    2017-01-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889

  9. Expert System Detects Power-Distribution Faults

    NASA Technical Reports Server (NTRS)

    Walters, Jerry L.; Quinn, Todd M.

    1994-01-01

    Autonomous Power Expert (APEX) computer program is prototype expert-system program detecting faults in electrical-power-distribution system. Assists human operators in diagnosing faults and deciding what adjustments or repairs needed for immediate recovery from faults or for maintenance to correct initially nonthreatening conditions that could develop into faults. Written in Lisp.

  10. Characterizing the Iron Wash fault: A fault line scarp in Utah

    NASA Astrophysics Data System (ADS)

    Kozaci, O.; Ostenaa, D.; Goodman, J.; Zellman, M.; Hoeft, J.; Sowers, J. M.; Retson, T.

    2015-12-01

    The Iron Wash fault (IWF) is an approximately 30 mile-long, NW-SE trending structure, oriented perpendicular to the San Rafael Monocline near Green River in Utah. The IWF exhibits well-expressed geomorphic features such as a linear escarpment with consistently north-side-down displacement. The fault coincides with an abrupt change in the dip angle of the San Rafael Monocline along its eastern margin. The IWF is exposed in incised drainages where Jurassic Navajo sandstone (oldest) and Lower Carmel Formation (older) are juxtaposed against Jurassic Entrada sandstone (younger) and Quaternary alluvium (youngest). To assess the recency of activity of the IWF we performed detailed geomorphic mapping and a paleoseismic trenching investigation. A benched trench was excavated into a Quaternary fluvial terrace remnant across the mapped trace of the IWF. The uppermost gravel units and overlying colluvium are exposed in the trench across the projection of the fault. In addition, we mapped in detail the basal contact of the Quaternary gravel deposit in relation to the adjacent fault exposures to show the geometry of the basal contact near and across the fault. We find no evidence of vertical displacement of these Quaternary gravels. A preliminary U-series date on calcite cementing the unfaulted fluvial gravels and OSL dating of a sand lens within the unfaulted fluvial gravels yielded approximately 304,000 years and 78,000 years, respectively. These preliminary results from independent dating methods constrain the timing of the last activity of the IWF to greater than 78,000 years before present, suggesting that the IWF is not an active structure. Its distinct geomorphic expression is most likely the result of differential erosion, forming a fault-line scarp.

  11. Geochemical and microstructural evidence for interseismic changes in fault zone permeability and strength, Alpine Fault, New Zealand

    NASA Astrophysics Data System (ADS)

    Boulton, Carolyn; Menzies, Catriona D.; Toy, Virginia G.; Townend, John; Sutherland, Rupert

    2017-01-01

    Oblique dextral motion on the central Alpine Fault in the last circa 5 Ma has exhumed garnet-oligoclase facies mylonitic fault rocks from ~35 km depth. During exhumation, deformation, accompanied by fluid infiltration, has generated complex lithological variations in fault-related rocks retrieved during Deep Fault Drilling Project (DFDP-1) drilling at Gaunt Creek, South Island, New Zealand. Lithological, geochemical, and mineralogical results reveal that the fault comprises a core of highly comminuted cataclasites and fault gouges bounded by a damage zone containing cataclasites, protocataclasites, and fractured mylonites. The fault core-alteration zone extends ~20-30 m from the principal slip zone (PSZ) and is characterized by alteration of primary phases to phyllosilicate minerals. Alteration associated with distinct mineral phases occurred proximal to the brittle-to-plastic transition (T ≤ 300-400°C, 6-10 km depth) and at shallow depths (T = 20-150°C, 0-3 km depth). Within the fault core-alteration zone, fractures have been sealed by precipitation of calcite and phyllosilicates. This sealing has decreased fault-normal permeability and increased rock mass competency, potentially promoting interseismic strain buildup.

  12. Tremor, the curious third wheel of fault motion (Invited)

    NASA Astrophysics Data System (ADS)

    Vidale, J. E.

    2009-12-01

    that marks tremor. The tremor moments in the Cascadia population follow the frequency scaling laid down by this lecture's namesake, the Gutenberg-Richter distribution, just like normal earthquakes. More practically, tremor is loading dangerous faults, such as near Seattle and Los Angeles, at unsteady rates, perhaps allowing estimations of variable levels of danger. Tremor probably silhouettes where the fault is locked, with alarming implications in some places. We can hope that tremor evolves through the earthquake cycle of the locked zone above in a way revealing the approach of feared events. It is a rare phenomenon that attracts so much inquiry, yet remains so obscure. No doubt, by AGU time, our seismic flashlights will have illuminated a few more juicy tidbits, and with luck bring us closer to the secret of tremor, which still lingers in an unlit corner.

  13. "3D_Fault_Offsets," a Matlab Code to Automatically Measure Lateral and Vertical Fault Offsets in Topographic Data: Application to San Andreas, Owens Valley, and Hope Faults

    NASA Astrophysics Data System (ADS)

    Stewart, N.; Gaudemer, Y.; Manighetti, I.; Serreau, L.; Vincendeau, A.; Dominguez, S.; Mattéo, L.; Malavieille, J.

    2018-01-01

    Measuring fault offsets preserved at the ground surface is of primary importance to recover earthquake and long-term slip distributions and understand fault mechanics. The recent explosion of high-resolution topographic data, such as Lidar and photogrammetric digital elevation models, offers an unprecedented opportunity to measure dense collections of fault offsets. We have developed a new Matlab code, 3D_Fault_Offsets, to automate these measurements. In topographic data, 3D_Fault_Offsets mathematically identifies and represents nine of the most prominent geometric characteristics of common sublinear markers along faults (especially strike slip) in 3-D, such as the streambed (minimum elevation), top, free face and base of channel banks or scarps (minimum Laplacian, maximum gradient, and maximum Laplacian), and ridges (maximum elevation). By calculating best fit lines through the nine point clouds on either side of the fault, the code computes the lateral and vertical offsets between the piercing points of these lines onto the fault plane, providing nine lateral and nine vertical offset measures per marker. Through a Monte Carlo approach, the code calculates the total uncertainty on each offset. It then provides tools to statistically analyze the dense collection of measures and to reconstruct the prefaulted marker geometry in the horizontal and vertical planes. We applied 3D_Fault_Offsets to remeasure previously published offsets across 88 markers on the San Andreas, Owens Valley, and Hope faults. We obtained 5,454 lateral and vertical offset measures. These automatic measures compare well to prior ones, field and remote, while their rich record provides new insights on the preservation of fault displacements in the morphology.
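
    The offset computation described above can be reduced to fitting a 3-D line through each marker point cloud on either side of the fault and differencing the piercing points of those lines on the fault plane. The sketch below shows that core step; the point clouds, the vertical fault plane at x = 0, and the choice of y as the along-strike axis are hypothetical and stand in for 3D_Fault_Offsets' nine markers and its Monte Carlo uncertainty treatment.

      import numpy as np

      def fit_line(points):
          """Least-squares 3-D line through a point cloud: returns (centroid, unit direction)."""
          centroid = points.mean(axis=0)
          _, _, vt = np.linalg.svd(points - centroid)
          return centroid, vt[0]

      def piercing_point(centroid, direction, plane_point, plane_normal):
          """Intersection of the line centroid + t*direction with the given plane."""
          t = np.dot(plane_point - centroid, plane_normal) / np.dot(direction, plane_normal)
          return centroid + t * direction

      # hypothetical streambed picks (x, y, z in metres) on either side of a vertical fault at x = 0
      side_a = np.array([[-5.0, 10.0, 101.0], [-10.0, 11.0, 102.0], [-15.0, 12.0, 103.0]])
      side_b = np.array([[5.0, 18.0, 100.0], [10.0, 19.0, 101.0], [15.0, 20.0, 102.0]])
      plane_point, plane_normal = np.zeros(3), np.array([1.0, 0.0, 0.0])

      pa = piercing_point(*fit_line(side_a), plane_point, plane_normal)
      pb = piercing_point(*fit_line(side_b), plane_point, plane_normal)
      print("lateral offset:", pb[1] - pa[1], "vertical offset:", pb[2] - pa[2])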

  14. Guide of good practices for occupational radiological protection in plutonium facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-06-01

    This Technical Standard (TS) does not contain any new requirements. Its purpose is to provide guides to good practice, update existing reference material, and discuss practical lessons learned relevant to the safe handling of plutonium. The technical rationale is given to allow US Department of Energy (DOE) health physicists to adapt the recommendations to similar situations throughout the DOE complex. Generally, DOE contractor health physicists will be responsible for implementing radiation protection activities at DOE facilities and DOE health physicists will be responsible for oversight of those activities. This guidance is meant to be useful for both efforts. This TS replaces PNL-6534, Health Physics Manual of Good Practices for Plutonium Facilities, by providing more complete and current information and by emphasizing the situations that are typical of DOE's current plutonium operations: safe storage, decontamination, and decommissioning (environmental restoration); and weapons disassembly.

  15. Simulation of Co-Seismic Off-Fault Stress Effects: Influence of Fault Roughness and Pore Pressure Coupling

    NASA Astrophysics Data System (ADS)

    Fälth, B.; Lund, B.; Hökmark, H.

    2017-12-01

    Aiming at improved safety assessment of geological nuclear waste repositories, we use dynamic 3D earthquake simulations to estimate the potential for co-seismic off-fault distributed fracture slip. Our model comprises a 12.5 x 8.5 km strike-slip fault embedded in a full space continuum where we apply a homogeneous initial stress field. In the reference case (Case 1) the fault is planar and oriented optimally for slip, given the assumed stress field. To examine the potential impact of fault roughness, we also study cases where the fault surface has undulations with self-similar fractal properties. In both the planar and the undulated cases the fault has homogeneous frictional properties. In a set of ten rough fault models (Case 2), the fault friction is equal to that of Case 1, meaning that these models generate lower seismic moments than Case 1. In another set of ten rough fault models (Case 3), the fault dynamic friction is adjusted such that seismic moments on par with that of Case 1 are generated. For the propagation of the earthquake rupture we adopt the linear slip-weakening law and obtain Mw 6.4 in Case 1 and Case 3, and Mw 6.3 in Case 2 (35 % lower moment than Case 1). During rupture we monitor the off-fault stress evolution along the fault plane at 250 m distance and calculate the corresponding evolution of the Coulomb Failure Stress (CFS) on optimally oriented hypothetical fracture planes. For the stress-pore pressure coupling, we assume Skempton's coefficient B = 0.5 as a base case value, but also examine the sensitivity to variations of B. We observe the following: (I) The CFS values, and thus the potential for fracture slip, tend to increase with the distance from the hypocenter. This is in accordance with results by other authors. (II) The highest CFS values are generated by quasi-static stress concentrations around fault edges and around large scale fault bends, where we obtain values of the order of 10 MPa. (III) Locally, fault roughness may have a
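
    The Coulomb Failure Stress change monitored in these simulations combines the shear stress change on a receiver plane with the friction-weighted change in effective normal stress, where the undrained pore-pressure response is tied to the mean stress change through Skempton's coefficient B. The sketch below evaluates that bookkeeping for a single hypothetical receiver plane; the stress values and the tension-positive sign convention are assumptions for illustration, not outputs of the rupture models.

      # Hypothetical co-seismic stress changes (MPa) resolved on a receiver fracture plane,
      # using a tension-positive convention (compression is negative).
      d_tau = 2.0        # shear stress change in the slip direction
      d_sigma_n = -1.5   # fault-normal stress change
      d_mean = -1.0      # mean stress change (trace of the stress-change tensor / 3)
      mu = 0.6           # friction coefficient on the receiver plane (assumed)
      B = 0.5            # Skempton's coefficient (the base case quoted above)

      d_pore = -B * d_mean                     # undrained pore-pressure rise under increased compression
      d_cfs = d_tau + mu * (d_sigma_n + d_pore)
      print(d_cfs)                             # positive values bring the plane closer to failure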

  16. Subsurface Resistivity Structures in and Around Strike-Slip Faults - Electromagnetic Surveys and Drillings Across Active Faults in Central Japan -

    NASA Astrophysics Data System (ADS)

    Omura, K.; Ikeda, R.; Iio, Y.; Matsuda, T.

    2005-12-01

    Electrical resistivity is an important property for investigating the structure of active faults. Because pore fluids strongly affect the electrical properties of rocks, subsurface electrical resistivity can be an indicator of the existence of fluids and the distribution of pores. The fracture zone of a fault is expected to have low resistivity due to its high porosity and small grain size. In particular, a strike-slip fault has a nearly vertical fracture zone, which should be detectable by an electrical survey across the fault. We performed electromagnetic surveys across strike-slip active faults in central Japan. At the same faults, we also drilled boreholes into the fault zones and carried out downhole logging. We applied MT or CSAMT methods to five faults: the Nojima fault, which ruptured the surface in the 1995 Great Kobe earthquake (M=7.2); the western Nagano Ohtaki area (1984 Nagano-ken Seibu earthquake, M=6.8, where the fault did not reach the surface); the Neodani fault, which ruptured in the 1891 Nobi earthquake (M=8.0); the Atera fault, which appears to have been dislocated by the 1586 Tensyo earthquake (M=7.9); and the Gofukuji fault, which is considered to have last been active about 1200 years ago. The sampling frequencies of the electric and magnetic fields were 2-1024 Hz (10 frequencies) for the CSAMT survey and 0.00055-384 Hz (40 frequencies) for the MT survey. The electromagnetic data were processed by standard methods and inverted to 2-D resistivity structures along transects of the faults. Results of the surveys were compared with downhole electrical logging data and observational descriptions of drilled cores. The fault plane of each fault was recognized as a low-resistivity region or as a boundary between relatively low and high resistivity regions, except for the Gofukuji fault, which was located in a relatively high resistivity region. During the very long time elapsed since its last earthquake, the properties of the fracture zone of the Gofukuji fault might have changed from the low-resistivity properties observed for

  17. Preliminary results from fault-slip analysis of the Pärvie neotectonic postglacial fault zone, northern Sweden

    NASA Astrophysics Data System (ADS)

    Backstrom, Ann; Viola, Giulio; Rantakokko, Nina; Jonsson, Erik; Ask, Maria

    2013-04-01

    Our study aims at constraining the paleostress field evolution of neotectonic postglacial faulting in northern Sweden. Postglacial faulting is a special type of intraplate faulting triggered by the retreat of continental glaciers and by the induced changes of the local stress field. We investigated the longest known post-glacial fault (PGF) in Scandinavia, the Pärvie PGF. It is 155 km long and consists of a series of 3-10 m high fault scarps developed in several rock types such as mafic and felsic meta-volcanic rocks and, in the north, Archean granites and gneisses. Most of the scarps trend north-northeast and dip steeply to the west. A smaller sibling fault to the east (the Lansjärv PGF) displaces postglacial sediments. It is interpreted as resulting from a great earthquake (M≤8.2) at the end of, or just after, the last glaciation (~10 ky B.P.). Microseismic activity is still present along the Pärvie fault zone. Unfortunately, the stress history of the Pärvie PGF before the last glaciation is poorly known. To reconstruct its stress history, we have performed fault-slip analysis. Fault slip data have been collected from two profiles across the Pärvie PGF, in the Corruvagge valley and in Kamasjaure in the north, and at Stora Sjöfallet in the southern part of the fault zone. Cross-cutting relationships, fracture mineralization and structural features of the brittle overprint of the rocks have been used to suggest a conceptual model of the brittle history of the fault. Ca. 40 kinematically constrained fault planes were used in the inversion study in addition to ca. 1060 fractures. Preliminary results indicate that the oldest generation of fractures is coated by pink plagioclase and clinoamphibole. The key mineral epidote is prominent along cataclastic structures. Rarely, multiple kinematic indicators are identified along the same fracture, indicating polyphase reactivation. Epidote coating is found along fractures from all the computed stress fields, indicating that

  18. Along fault friction and fluid pressure effects on the spatial distribution of fault-related fractures

    NASA Astrophysics Data System (ADS)

    Maerten, Laurent; Maerten, Frantz; Lejri, Mostfa

    2018-03-01

    Whatever the processes involved in the natural fracture development in the subsurface, fracture patterns are often affected by the local stress field during propagation. This homogeneous or heterogeneous local stress field can be of mechanical and/or tectonic origin. In this contribution, we focus on the fracture-pattern development where active faults perturb the stress field, and are affected by fluid pressure and sliding friction along the faults. We analyse and geomechanically model two fractured outcrops in UK (Nash Point) and in France (Les Matelles). We demonstrate that the observed local radial joint pattern is best explained by local fluid pressure along the faults and that observed fracture pattern can only be reproduced when fault friction is very low (μ < 0.2). Additionally, in the case of sub-vertical faults, we emphasize that the far field horizontal stress ratio does not affect stress trajectories, or fracture patterns, unless fault normal displacement (dilation or contraction) is relatively large.

  19. Fault detection and isolation for complex system

    NASA Astrophysics Data System (ADS)

    Jing, Chan Shi; Bayuaji, Luhur; Samad, R.; Mustafa, M.; Abdullah, N. R. H.; Zain, Z. M.; Pebrianti, Dwi

    2017-07-01

    Fault Detection and Isolation (FDI) is a method to monitor, identify, and pinpoint the type and location of faults in a complex multiple-input multiple-output (MIMO) non-linear system. A two-wheel robot is used as the complex system in this study. The aim of the research is to design and construct a Fault Detection and Isolation algorithm. The proposed method for fault identification is a hybrid technique that combines a Kalman filter and an Artificial Neural Network (ANN). The Kalman filter processes the data from the system's sensors and indicates faults in the sensor readings; error prediction is based on the fault magnitude and the time at which the fault occurs. The Artificial Neural Network (ANN) is then used to determine the type of fault and to isolate the fault in the system.
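
    The detection half of such a scheme often reduces to monitoring the Kalman filter's innovation (the difference between a measurement and its prediction) and flagging samples where it grows improbably large. The sketch below shows that idea for a one-dimensional random-walk state with an injected sensor bias; the model, noise levels, threshold, and fault are assumptions for illustration rather than the paper's robot model.

      import numpy as np

      np.random.seed(0)
      F, H, Q, R = 1.0, 1.0, 1e-4, 1e-2   # random-walk state, direct measurement (assumed model)
      x_est, P = 0.0, 1.0
      threshold = 3.0                      # flag residuals beyond 3 sigma (assumption)

      true_state = 0.0
      for k in range(200):
          true_state += np.random.normal(0, np.sqrt(Q))
          z = true_state + np.random.normal(0, np.sqrt(R))
          if 120 <= k < 140:
              z += 0.8                     # injected sensor bias fault
          x_pred, P_pred = F * x_est, F * P * F + Q       # predict
          nu, S = z - H * x_pred, H * P_pred * H + R      # innovation and its variance
          if abs(nu) / np.sqrt(S) > threshold:
              print(f"step {k}: possible sensor fault, residual {nu:.3f}")
          K = P_pred * H / S                              # update
          x_est, P = x_pred + K * nu, (1 - K * H) * P_pred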

  20. Research on Fault Rate Prediction Method of T/R Component

    NASA Astrophysics Data System (ADS)

    Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu

    2017-07-01

    T/R components are an important part of large phased-array radar antennas; because of their large numbers and high fault rate, fault prediction for them is of practical significance. To address the problems of the traditional grey model GM(1,1) in practical operation, a discrete grey model is established in this paper based on the original model: an optimization factor is introduced to optimize the background value, and a linear term is added to the prediction model, yielding an improved discrete grey model with linear regression. Finally, an example is simulated and compared with other models. The results show that the proposed method has higher accuracy, a simple solution procedure, and a wider scope of application.
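
    For reference, the classical GM(1,1) model that the paper takes as its starting point fits an exponential to the accumulated series and differences it back to produce forecasts. The sketch below implements that baseline (not the improved discrete model); the yearly fault counts are hypothetical.

      import numpy as np

      def gm11_forecast(x, n_ahead=3):
          """Classical GM(1,1) grey forecast of a short, positive time series."""
          x = np.asarray(x, dtype=float)
          x1 = np.cumsum(x)                                  # accumulated generating operation
          z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]    # develop coefficient a, grey input b
          k = np.arange(len(x) + n_ahead)
          x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
          return np.diff(x1_hat, prepend=0.0)                # inverse accumulation -> forecasts of x

      faults_per_year = [12, 14, 15, 18, 21]                 # hypothetical T/R module fault counts
      print(gm11_forecast(faults_per_year, n_ahead=2))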

  1. Parallel Fault Strands at 9-km Depth Resolved on the Imperial Fault, Southern California

    NASA Astrophysics Data System (ADS)

    Shearer, P. M.

    2001-12-01

    The Imperial Fault is one of the most active faults in California with several M>6 events during the 20th century and geodetic results suggesting that it currently carries almost 80% of the total plate motion between the Pacific and North American plates. We apply waveform cross-correlation to a group of ~1500 microearthquakes along the Imperial Fault and find that about 25% of the events form similar event clusters. Event relocation based on precise differential times among events in these clusters reveals multiple streaks of seismicity up to 5 km in length that are at a nearly constant depth of ~9 km but are spaced about 0.5 km apart in map view. These multiples are unlikely to be a location artifact because they are spaced more widely than the computed location errors and different streaks can be resolved within individual similar event clusters. The streaks are parallel to the mapped surface rupture of the 1979 Mw=6.5 Imperial Valley earthquake. No obvious temporal migration of the event locations is observed. Limited focal mechanism data for the events within the streaks are consistent with right-lateral slip on vertical fault planes. The seismicity not contained in similar event clusters cannot be located as precisely; our locations for these events scatter between 7 and 11 km depth, but it is possible that their true locations could be much more tightly clustered. The observed streaks have some similarities to those previously observed in northern California along the San Andreas and Hayward faults (e.g., Rubin et al., 1999; Waldhauser et al., 1999); however those streaks were imaged within a single fault plane rather than the multiple faults resolved on the Imperial Fault. The apparent constant depth of the Imperial streaks is similar to that seen in Hawaii at much shallower depth by Gillard et al. (1996). Geodetic results (e.g., Lyons et al., 2001) suggest that the Imperial Fault is currently slipping at 45 mm/yr below a locked portion that extends to ~10

  2. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see § 410...

  3. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see § 410...

  4. Faults Discovery By Using Mined Data

    NASA Technical Reports Server (NTRS)

    Lee, Charles

    2005-01-01

    Fault discovery in complex systems can rely on model-based reasoning, fault tree analysis, rule-based inference, and other approaches. Model-based reasoning builds models of the system either from mathematical formulations or from experimental models. Fault Tree Analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze faults. When an error occurs, fault diagnosis is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made based on threshold values using fault trees. Since these decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
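
    The decision-tree idea amounts to learning threshold-style rules from labelled telemetry so that new samples can be screened automatically. The sketch below uses scikit-learn for that step; the feature columns, sample values, and fault labels are hypothetical and are not ISS data.

      from sklearn.tree import DecisionTreeClassifier, export_text

      # hypothetical telemetry: [temperature_C, normalized_bus_voltage] -> fault label
      X = [[28.0, 1.02], [27.5, 0.99], [35.2, 1.01], [36.0, 0.50], [29.1, 0.48], [34.8, 0.97]]
      y = ["nominal", "nominal", "overtemp", "overtemp_and_sensor", "sensor_fault", "overtemp"]

      clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(export_text(clf, feature_names=["temperature_C", "bus_voltage_norm"]))  # learned threshold rules
      print(clf.predict([[30.0, 0.45]]))                                            # classify a new sample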

  5. InSAR observations of strain accumulation and fault creep along the Chaman Fault system, Pakistan and Afghanistan

    NASA Astrophysics Data System (ADS)

    Fattahi, Heresh; Amelung, Falk

    2016-08-01

    We use 2004-2011 Envisat synthetic aperture radar imagery and InSAR time series methods to estimate the contemporary rates of strain accumulation in the Chaman Fault system in Pakistan and Afghanistan. At 29°N we find long-term slip rates of 16 ± 2.3 mm/yr for the Ghazaband Fault and of 8 ± 3.1 mm/yr for the Chaman Fault. This makes the Ghazaband Fault one of the most hazardous faults of the plate boundary zone. We further identify a 340 km long segment displaying aseismic surface creep along the Chaman Fault, with a maximum surface creep rate of 8.1 ± 2 mm/yr. The observation that the Chaman Fault accommodates only 30% of the relative plate motion between India and Eurasia implies that the remainder is accommodated south and east of the Katawaz block microplate.

  6. Fault lubrication during earthquakes.

    PubMed

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m s(-1)) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, elucidating constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s(-1). The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.

  7. Fault tolerant filtering and fault detection for quantum systems driven by fields in single photon states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Qing, E-mail: qing.gao.chance@gmail.com; Dong, Daoyi, E-mail: daoyidong@gmail.com; Petersen, Ian R., E-mail: i.r.petersen@gmai.com

    The purpose of this paper is to solve the fault tolerant filtering and fault detection problem for a class of open quantum systems driven by a continuous-mode bosonic input field in single photon states when the systems are subject to stochastic faults. Optimal estimates of both the system observables and the fault process are simultaneously calculated and characterized by a set of coupled recursive quantum stochastic differential equations.

  8. Transforming incomplete fault tree to Ishikawa diagram as an alternative method for technology transfer

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.

    2012-12-01

    Fault Tree Analysis (FTA) can be used for technology transfer when the relevant problem (called the 'top event' in FTA) is solved in a technology centre and the results are diffused to interested parties (usually Small and Medium Enterprises - SMEs) that do not have the proper equipment and the required know-how to solve the problem on their own. Nevertheless, there is a significant drawback in this procedure: the information usually provided by the SMEs to the technology centre, about production conditions and corresponding quality characteristics of the product, and (sometimes) the relevant expertise in the Knowledge Base of this centre, may be inadequate to form a complete fault tree. Since such cases are quite frequent in practice, we have developed a methodology for transforming an incomplete fault tree into an Ishikawa diagram, which is more flexible and less strict in establishing causal chains because it uses a surface phenomenological level with a limited number of fault categories. On the other hand, such an Ishikawa diagram can be extended to simulate a fault tree as the relevant knowledge increases. An implementation of this transformation, referring to the anodization of aluminium, is presented.
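
    In data terms, the transformation can be thought of as flattening the known branches of an incomplete fault tree onto the cause-category ribs of an Ishikawa diagram, deferring the strict gate logic until more knowledge is available. The sketch below illustrates one such flattening; the event names, the fixed 6M-style categories, and the expert mapping are hypothetical and are not the paper's anodization case study.

      # incomplete fault tree: top event plus the intermediate events and basic causes known so far
      fault_tree = {
          "top_event": "coating defect on anodized aluminium part",
          "branches": {
              "bath chemistry out of spec": ["electrolyte temperature drift", "acid concentration low"],
              "pretreatment inadequate": ["degreasing step skipped"],   # branch known to be incomplete
          },
      }

      ishikawa = {c: [] for c in ["Materials", "Methods", "Machines", "Measurements", "People", "Environment"]}
      category_of = {                      # assumed expert mapping from intermediate events to categories
          "bath chemistry out of spec": "Materials",
          "pretreatment inadequate": "Methods",
      }
      for branch, causes in fault_tree["branches"].items():
          ishikawa[category_of.get(branch, "Methods")].extend(causes)

      for rib, causes in ishikawa.items():
          print(rib, "->", causes)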

  9. Laboratory scale micro-seismic monitoring of rock faulting and injection-induced fault reactivation

    NASA Astrophysics Data System (ADS)

    Sarout, J.; Dautriat, J.; Esteban, L.; Lumley, D. E.; King, A.

    2017-12-01

    The South West Hub CCS project in Western Australia aims to evaluate the feasibility and impact of geosequestration of CO2 in the Lesueur sandstone formation. Part of this evaluation focuses on the feasibility and design of a robust passive seismic monitoring array. Micro-seismicity monitoring can be used to image the injected CO2 plume, or any geomechanical fracture/fault activity; and thus serve as an early warning system by measuring low-level (unfelt) seismicity that may precede potentially larger (felt) earthquakes. This paper describes laboratory deformation experiments replicating typical field scenarios of fluid injection in faulted reservoirs. Two pairs of cylindrical core specimens were recovered from the Harvey-1 well at depths of 1924 m and 2508 m. In each specimen a fault is first generated at the in situ stress, pore pressure and temperature by increasing the vertical stress beyond the peak in a triaxial stress vessel at CSIRO's Geomechanics & Geophysics Lab. The faulted specimen is then stabilized by decreasing the vertical stress. The freshly formed fault is subsequently reactivated by brine injection and increase of the pore pressure until slip occurs again. This second slip event is then controlled in displacement and allowed to develop for a few millimeters. The micro-seismic (MS) response of the rock during the initial fracturing and subsequent reactivation is monitored using an array of 16 ultrasonic sensors attached to the specimen's surface. The recorded MS events are relocated in space and time, and correlate well with the 3D X-ray CT images of the specimen obtained post-mortem. The time evolution of the structural changes induced within the triaxial stress vessel is therefore reliably inferred. The recorded MS activity shows that, as expected, the increase of the vertical stress beyond the peak led to an inclined shear fault. The injection of fluid and the resulting increase in pore pressure led first to a reactivation of the pre

  10. Final Technical Report: PV Fault Detection Tool.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Bruce Hardison; Jones, Christian Birk

    The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.
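
    As a loose illustration of the detect-and-classify idea described above (not the actual FDT logic), the sketch below applies simple threshold rules to a measured-versus-expected comparison; all signal names, thresholds and fault classes are invented for the example.

      # Minimal rule-based sketch: flag a fault and assign a coarse class by
      # comparing measured output against an expected (modelled) output.
      def classify_pv_fault(p_measured, p_expected, i_string, i_string_expected):
          ratio = p_measured / p_expected if p_expected > 0 else 0.0
          if ratio < 0.05:
              return "catastrophic fault (array offline)"
          if i_string_expected > 0 and i_string / i_string_expected < 0.7:
              return "string fault (low current)"
          if ratio < 0.85:
              return "degradation fault (sustained underperformance)"
          return "no fault detected"

      print(classify_pv_fault(p_measured=1.2, p_expected=25.0,
                              i_string=0.4, i_string_expected=8.0))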

  11. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    NASA Astrophysics Data System (ADS)

    Yim, Keun Soo

    program states that included dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.

  12. Distributed bearing fault diagnosis based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
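
    A minimal sketch of the envelope-spectrum analysis mentioned above is given below, using a Hilbert-transform envelope on a synthetic vibration signal; the sampling rate, carrier resonance and 120 Hz modulation are invented for the example and stand in for a real bearing fault frequency.

      # Envelope spectrum of a synthetic bearing-like vibration signal.
      import numpy as np
      from scipy.signal import hilbert

      fs = 20_000                                   # sampling frequency, Hz (assumed)
      t = np.arange(0, 1.0, 1 / fs)
      # Carrier resonance amplitude-modulated at a 120 Hz "fault" rate, plus noise.
      x = (1 + 0.5 * np.cos(2 * np.pi * 120 * t)) * np.sin(2 * np.pi * 3_000 * t)
      x += 0.1 * np.random.randn(t.size)

      envelope = np.abs(hilbert(x))                 # demodulate the resonance band
      spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
      freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

      low = freqs < 500                             # inspect the low-frequency band
      peak = freqs[low][np.argmax(spectrum[low])]
      print(f"dominant envelope frequency = {peak:.1f} Hz")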

  13. High stresses stored in fault zones: example of the Nojima fault (Japan)

    NASA Astrophysics Data System (ADS)

    Boullier, Anne-Marie; Robach, Odile; Ildefonse, Benoît; Barou, Fabrice; Mainprice, David; Ohtani, Tomoyuki; Fujimoto, Koichiro

    2018-04-01

    During the last decade pulverized rocks have been described on outcrops along large active faults and attributed to damage related to a propagating seismic rupture front. Questions remain concerning the maximal lateral distance from the fault plane and maximal depth for dynamic damage to be imprinted in rocks. In order to address these questions, a representative core sample of granodiorite located 51.3 m from the Nojima fault (Japan) that was drilled after the Hyogo-ken Nanbu (Kobe) earthquake is studied by using electron backscattered diffraction (EBSD) and high-resolution X-ray Laue microdiffraction. Although located outside of the Nojima damage fault zone and macroscopically undeformed, the sample shows pervasive microfractures and local fragmentation. These features are attributed to the first stage of seismic activity along the Nojima fault characterized by laumontite as the main sealing mineral. EBSD mapping was used in order to characterize the crystallographic orientation and deformation microstructures in the sample, and X-ray microdiffraction was used to measure elastic strain and residual stresses on each point of the mapped quartz grain. Both methods give consistent results on the crystallographic orientation and show small and short wavelength misorientations associated with laumontite-sealed microfractures and alignments of tiny fluid inclusions. Deformation microstructures in quartz are symptomatic of the semi-brittle faulting regime, in which low-temperature brittle plastic deformation and stress-driven dissolution-deposition processes occur conjointly. This deformation occurred at a 3.7-11.1 km depth interval as indicated by the laumontite stability domain. Residual stresses are calculated, via Hooke's law, from the deviatoric elastic strain tensor measured using X-ray Laue microdiffraction. The modal value of the von Mises stress distribution is at 100 MPa and the mean at 141 MPa. Such stress values are comparable to the peak strength of a
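
    A simplified, isotropic version of that stress calculation is sketched below: deviatoric stress from a deviatoric elastic strain tensor via Hooke's law, then the von Mises equivalent stress. The strain values and the single shear modulus are illustrative; a faithful calculation would use the measured strains and the anisotropic stiffness of quartz.

      # Von Mises residual stress from a deviatoric elastic strain tensor
      # (isotropic approximation of Hooke's law; numbers are illustrative).
      import numpy as np

      G = 44e9  # approximate shear modulus of quartz, Pa

      # Hypothetical deviatoric (trace-free) elastic strain tensor, dimensionless.
      eps_dev = np.array([[ 8e-4,  2e-4, 0.0 ],
                          [ 2e-4, -3e-4, 1e-4],
                          [ 0.0,   1e-4, -5e-4]])

      sigma_dev = 2.0 * G * eps_dev                        # isotropic Hooke's law
      von_mises = np.sqrt(1.5 * np.sum(sigma_dev * sigma_dev))

      print(f"von Mises residual stress = {von_mises / 1e6:.0f} MPa")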

  14. Subaru FATS (fault tracking system)

    NASA Astrophysics Data System (ADS)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.

  15. Influence of mineralogy and microstructures on strain localization and fault zone architecture of the Alpine Fault, New Zealand

    NASA Astrophysics Data System (ADS)

    Ichiba, T.; Kaneki, S.; Hirono, T.; Oohashi, K.; Schuck, B.; Janssen, C.; Schleicher, A.; Toy, V.; Dresen, G.

    2017-12-01

    The Alpine Fault on New Zealand's South Island is an oblique, dextral strike-slip fault that accommodated the majority of displacement between the Pacific and the Australian Plates and presents the biggest seismic hazard in the region. Along its central segment, the hanging wall comprises greenschist and amphibolite facies Alpine Schists. Exhumation from 35 km depth, along a SE-dipping detachment, led to mylonitization, which was subsequently overprinted by brittle deformation and finally resulted in the fault's 1 km wide damage zone. The geomechanical behavior of a fault is affected by the internal structure of its fault zone. Consequently, studying processes controlling fault zone architecture allows assessing the seismic hazard of a fault. Here we present the results of a combined microstructural (SEM and TEM), mineralogical (XRD) and geochemical (XRF) investigation of outcrop samples originating from several locations along the Alpine Fault, the aim of which is to evaluate the influence of mineralogical composition, alteration and pre-existing fabric on strain localization and to identify the controls on the fault zone architecture, particularly the locus of brittle deformation in P, T and t space. Field observations reveal that the fault's principal slip zone (PSZ) is either a thin (< 1 cm to < 7 cm) layered structure or a relatively thick (10s cm) package lacking a detectable macroscopic fabric. Lithological and related rheological contrasts are widely assumed to govern strain localization. However, our preliminary results suggest that qualitative mineralogical composition has only minor impact on fault zone architecture. Quantities of individual mineral phases differ markedly between fault damage zone and fault core at specific sites, but the quantitative composition of identical structural units, such as the fault core, is similar in all samples. This indicates that the degree of strain localization at the Alpine Fault might be controlled by small initial

  16. Outer Rise Faulting And Mantle Serpentinization

    NASA Astrophysics Data System (ADS)

    Ranero, C. R.; Phipps Morgan, J.; McIntosh, K.; Reichert, C.

    Dehydration of serpentinized mantle of the downgoing slab has been proposed to cause both intermediate depth earthquakes (50-300 km) and arc volcanism at subduction zones. It has been suggested that most of this serpentinization occurs beneath the outer rise, where normal faulting earthquakes due to bending cut > 20 km deep into the lithosphere, allowing seawater to reach and react with underlying mantle. However, little is known about flexural faulting at convergent margins; about how many normal faults cut across the crust and how deeply they penetrate into the mantle; about the true potential of faults as conduits for fluid flow and how much water can be added through this process. We present evidence that pervasive flexural faulting may cut deep into the mantle and that the amount of faulting varies dramatically along strike at subduction zones. Flexural faulting increases towards the trench axis, indicating that active extension occurs in a broad area. Multibeam bathymetry of the Pacific margin of Costa Rica and Nicaragua shows a remarkable variation in the amount of flexural faulting along the incoming ocean plate. Several parameters seem to control lateral variability. Off south Costa Rica thick crust of the Cocos Ridge flexes little, and little to no faulting develops near the trench. Off central Costa Rica, normal thickness crust with magnetic anomalies striking oblique to the trench displays small offset faults (~200 m) striking similar to the original seafloor fabric. Off northern Costa Rica, magnetic anomalies strike perpendicular to the trench axis, and a few ~100m-offset faults develop parallel to the trench. Further north, across the Nicaraguan margin, magnetic anomalies strike parallel to the trench and the most widespread faulting develops entering the trench. Multichannel seismic reflection images in this area show a pervasive set of trenchward dipping reflections that cross the ~6 km thick crust and extend into the mantle to depths of at

  17. Fault tolerant operation of switched reluctance machine

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. The adjustable speed drive system (ASDS) provides excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications not only as a driving force but also as an electric auxiliary system for replacing bulky and low efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace, automotive applications and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of SRM is developed via simulation and experimental study, providing necessary knowledge for fault detection and post-fault management. Lumped parameter models are established for fast real time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and

  18. The Effect of Power Protection Equipment on Explosion Hazards and on the Reliability of Power Supply to Longwall Systems

    NASA Astrophysics Data System (ADS)

    Boron, Sergiusz

    2017-06-01

    Operational safety of electrical machines and equipment depends, inter alia, on the hazards resulting from their use and on the scope of applied protective measures. The use of insufficient protection against existing hazards leads to reduced operational safety, particularly under fault conditions. On the other hand, excessive (in relation to existing hazards) level of protection may compromise the reliability of power supply. This paper analyses the explosion hazard created by earth faults in longwall power supply systems and evaluates existing protection equipment from the viewpoint of its protective performance, particularly in the context of explosion hazards, and also assesses its effect on the reliability of power supply.

  19. Protecting single-photon entanglement with practical entanglement source

    NASA Astrophysics Data System (ADS)

    Zhou, Lan; Ou-Yang, Yang; Wang, Lei; Sheng, Yu-Bo

    2017-06-01

    Single-photon entanglement (SPE) is important for quantum communication and quantum information processing. However, SPE is sensitive to photon loss. In this paper, we discuss a linear optical amplification protocol for protecting SPE. Different from previous protocols, we exploit a practical spontaneous parametric down-conversion (SPDC) source to realize the amplification, because an ideal entanglement source is unavailable in current quantum technology. Moreover, we prove that amplification using the entanglement generated from the SPDC source as an auxiliary is better than amplification assisted by single photons. The reason is that the vacuum state from the SPDC source will not affect the amplification, so that it can be eliminated automatically. This protocol may be useful in future long-distance quantum communications.

  20. Borjomi-Kazbegi Fault: Does it Exist?

    NASA Astrophysics Data System (ADS)

    Martin, R. J.; O'Connor, T.; Adamia, S.; Szymanski, E.; Krasovec, M.

    2012-12-01

    The Caucasus region has long been considered to be an example of indenture tectonics. The proposed Borjomi-Kazbegi sinistral fault is considered the western boundary of the actively indenting wedge. However, an improved seismic network density has led to recent unpublished observations noting a lack of seismicity on the proposed Borjomi-Kazbegi fault. These new observations call into question the existence of the fault, and with it, the tectonic model of the region. To clarify this anomaly, geologic and geophysical field research was carried out on the proposed Borjomi-Kazbegi fault during the summers of 2005 and 2006. Since the Borjomi-Kazbegi fault is also proposed to be a major crustal structure, a multi-disciplinary approach was utilized for this investigation. Precise GPS instrumentation was used to map multiple local geologic marker beds across the proposed line of the fault, and gravimetric and magnetic surveys were used to map deeper structures. The results showed no evidence of a strike slip fault. Localized marker beds, which included lithologic contacts, structural folds, Quaternary lava deposits and several sills, continue uninterrupted across the proposed fault zone. Data from the gravimetric and magnetic surveys also show no discontinuity across the proposed fault line. In addition, the newly collected geophysical data agree with the results of gravity and magnetic surveys carried out during the Soviet period. The Soviet data have more extensive areal coverage, and also show no evidence of a major strike slip fault in the region. Currently, the field observations support a model that suggests active shortening in the Borjomi region is accommodated predominantly by thrust faulting.

  1. Vibration Sensor Data Denoising Using a Time-Frequency Manifold for Machinery Fault Diagnosis

    PubMed Central

    He, Qingbo; Wang, Xiangxiang; Zhou, Qiang

    2014-01-01

    Vibration sensor data from a mechanical system are often associated with important measurement information useful for machinery fault diagnosis. However, in practice the existence of background noise makes it difficult to identify the fault signature from the sensing data. This paper introduces the time-frequency manifold (TFM) concept into sensor data denoising and proposes a novel denoising method for reliable machinery fault diagnosis. The TFM signature reflects the intrinsic time-frequency structure of a non-stationary signal. The proposed method intends to realize data denoising by synthesizing the TFM using time-frequency synthesis and phase space reconstruction (PSR) synthesis. Due to the merits of the TFM in noise suppression and resolution enhancement, the denoised signal would have satisfactory denoising effects, as well as inherent time-frequency structure keeping. Moreover, this paper presents a clustering-based statistical parameter to evaluate the proposed method, and also presents a new diagnostic approach, called frequency probability time series (FPTS) spectral analysis, to show its effectiveness in fault diagnosis. The proposed TFM-based data denoising method has been employed to deal with a set of vibration sensor data from defective bearings, and the results verify that for machinery fault diagnosis the method is superior to two traditional denoising methods. PMID:24379045

  2. Sinusoidal synthesis based adaptive tracking for rotating machinery fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    This paper presents a novel Sinusoidal Synthesis Based Adaptive Tracking (SSBAT) technique for vibration-based rotating machinery fault detection. The proposed SSBAT algorithm is an adaptive time series technique that makes use of both frequency and time domain information of vibration signals. Such information is incorporated in a time varying dynamic model. Signal tracking is then realized by applying adaptive sinusoidal synthesis to the vibration signal. A modified Least-Squares (LS) method is adopted to estimate the model parameters. In addition to tracking, the proposed vibration synthesis model is mainly used as a linear time-varying predictor. The health condition of the rotating machine is monitored by checking the residual between the predicted and measured signal. The SSBAT method takes advantage of the sinusoidal nature of vibration signals and transfers the nonlinear problem into a linear adaptive problem in the time domain based on a state-space realization. It has a low computation burden and does not need a priori knowledge of the machine under the no-fault condition, which makes the algorithm ideal for on-line fault detection. The method is validated using both numerical simulation and practical application data. Meanwhile, the fault detection results are compared with the commonly adopted autoregressive (AR) and autoregressive Minimum Entropy Deconvolution (ARMED) methods to verify the feasibility and performance of the SSBAT method.
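
    The residual-based monitoring idea can be illustrated with the toy sketch below: a small set of sinusoids is fitted to a healthy vibration record by least squares, and the fit is reused as a predictor whose residual grows when a new tone appears. The frequencies, signals and residual check are invented for the example and are not the SSBAT model itself.

      # Least-squares sinusoid fit on healthy data, residual check on new data.
      import numpy as np

      fs = 1_000.0
      t = np.arange(0, 1.0, 1 / fs)
      freqs = [29.5, 59.0, 88.5]          # assumed shaft-related frequencies, Hz

      def design(t, freqs):
          """Design matrix: constant plus sine/cosine pairs at tracked frequencies."""
          cols = [np.ones_like(t)]
          for f in freqs:
              cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
          return np.column_stack(cols)

      healthy = np.sin(2 * np.pi * 29.5 * t) + 0.05 * np.random.randn(t.size)
      faulty = healthy + 0.4 * np.sin(2 * np.pi * 147.5 * t)   # extra "fault" tone

      A = design(t, freqs)
      coef, *_ = np.linalg.lstsq(A, healthy, rcond=None)       # LS parameter estimate

      for name, x in [("healthy", healthy), ("faulty", faulty)]:
          residual = x - A @ coef                              # prediction residual
          print(name, "residual RMS =", round(float(np.std(residual)), 3))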

  3. Fault-Related Sanctuaries

    NASA Astrophysics Data System (ADS)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (like ground shaking and coseismic surface ruptures, gas and flame emissions, strong underground rumbling). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult site and content of relative myths suggest that specific points along the trace of active faults have been noticed in the past and worshiped as special `sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25000 BC. The cult itself was later reconverted into various different divinities, while the `sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

  4. Fault creep and persistent asperities on the western section of the North Anatolian Fault, Turkey

    NASA Astrophysics Data System (ADS)

    Floyd, M.; Reilinger, R. E.; Ergintav, S.; Karabulut, H.; Vernant, P.; Konca, A. O.; Dogan, U.; Cetin, S.; Cakir, Z.; Mencin, D.; Bilham, R. G.; King, R. W.

    2017-12-01

    We interpret new geodetic and seismic observations along the western section of the North Anatolian Fault (NAF) in Turkey as evidence for persistent asperities on the fault surface. Analysis of geodetic and seismic observations of seven segments of the fault at different stages of the earthquake cycle suggests that areas of the fault surface that are accumulating strain (i.e. asperities) are deficient in interseismic seismicity and earthquake aftershocks compared to areas between asperities that are failing at least in part by fault creep. From west to east, these segments include the 2014 M6.9 Gokceada earthquake and 1912 M7.4 Ganos earthquake segments, the Sea of Marmara and Princes' Islands seismic "gaps", the 1999 M7.6/7.2 Izmit/Duzce earthquake segments, and the 1944 M7.4 Ismetpasa segment, which remains actively creeping. Aspects of each segment contribute to our interpretation of overall fault behavior. The most well-defined distribution of coseismic slip in relation to pre- and post-earthquake seismicity is for the 2014 Gokceada event. The most complete set of geodetic observations (pre-, co-, and short- and long-term post-seismic) comes from the 1999 Izmit and Duzce events. Simple three-layer elastic models including a middle layer that is fully locked between earthquakes, and shallow and deeper layers that are allowed to creep, can account for these observations of the deformation cycle. Recent observations from InSAR, creepmeters and small-aperture GPS profiles indicate ongoing surface and shallow fault creep rates, as allowed by the upper layer of the three-layer model. Conceptually, creep in the deeper layer represents the deep healing of the fault following the earthquake. For the Izmit and Duzce earthquake segments, healing from prior earthquakes was complete before the 1999 sequence. More generally, the consistent pattern of strain accumulation along the full length of the NAF, including the long eastern segments that ruptured in major earthquakes in

  5. 36 CFR 230.43 - Cost-share assistance-prohibited practices.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... practice without fault of the landowner or recurring practices as noted in this subpart; (3) Capital investments or capital improvements not related to FLEP practices, purchase of land or any interest in land...

  6. Contrasting fault fluids along high-angle faults: a case study from Southern Apennines (Italy)

    NASA Astrophysics Data System (ADS)

    Sinisi, Rosa; Petrullo, Angela Vita; Agosta, Fabrizio; Paternoster, Michele; Belviso, Claudia; Grassa, Fausto

    2016-10-01

    This work focuses on two fault-controlled deposits, the Atella and Rapolla travertines, which are associated with high-angle extensional faults of the Bradano Trough, southern Apennines (Italy). The Atella travertine is along a NW-SE striking, deep-seated extensional fault, already described in the literature, which crosscuts both Apulian carbonates and the overlying foredeep basin infill. The Rapolla travertine is on top of a NE-SW striking, shallow-seated fault, here described for the first time, which is interpreted as a tear fault associated with a shallow thrust displacing only the foredeep basin infill. The results of structural, sedimentological, mineralogical, and C and O isotope analyses are here reported and discussed to assess the provenance of mineralizing fluids, and to evaluate the control exerted by the aforementioned extensional faults on deep, mantle-derived and shallow, meteoric fluids. Sedimentological analysis is consistent with five lithofacies in the studied travertines, which likely formed in a typical lacustrine depositional environment. Mineralogical analyses show that the travertines mainly consist of calcite, and minor quartz, feldspar and clay minerals, indicative of a terrigenous supply during travertine precipitation. The isotope signature of the two studied travertines shows different provenance for the mineralizing fluids. At the Atella site, the δ13CPDB values range between + 5.2 and + 5.7‰ and the δ18OPDB values between - 9.0 and - 7.3‰, which are consistent with a mantle-derived CO2 component in the fluid. In contrast, at the Rapolla site the δ13CPDB values vary from - 2.7 to + 1.5‰ and the δ18OPDB values from - 6.8 to - 5.4‰, suggesting a mixed CO2 source with both biogenic-derived and mantle-derived fluids. The results of structural analyses conducted along the footwall damage zone of the fault exposed at the Rapolla site show that the whole damage zone, in which fractures and joints likely channeled the mixed fluids, acted

  7. RAID Unbound: Storage Fault Tolerance in a Distributed Environment

    NASA Technical Reports Server (NTRS)

    Ritchie, Brian

    1996-01-01

    Mirroring, data replication, backup, and more recently, redundant arrays of independent disks (RAID) are all technologies used to protect and ensure access to critical company data. A new set of problems has arisen as data becomes more and more geographically distributed. Each of the technologies listed above provides important benefits; but each has failed to adapt fully to the realities of distributed computing. The key to data high availability and protection is to take the technologies' strengths and 'virtualize' them across a distributed network. RAID and mirroring offer high data availability, while data replication and backup provide strong data protection. If we take these concepts at a very granular level (defining user, record, block, file, or directory types) and then liberate them from the physical subsystems with which they have traditionally been associated, we have the opportunity to create highly scalable, network-wide storage fault tolerance. The network becomes the virtual storage space in which the traditional concepts of data high availability and protection are implemented without their corresponding physical constraints.

  8. Ground-Penetrating Radar Investigations Across the Sawmill Branch Fault Near Charleston, South Carolina

    NASA Astrophysics Data System (ADS)

    Dura-Gomez, I.; Addison, A.; Knapp, C. C.; Talwani, P.; Chapman, A.

    2005-12-01

    During the 1886 Charleston earthquake, two parallel tabby walls of Fort Dorchester broke left-laterally, and a strike of ~N25°W was inferred for the causative Sawmill Branch fault. To better define this fault, which does not have any surface expression, we planned to cut trenches across it. However, as Fort Dorchester is a protected archeological site, we were required to locate the fault accurately away from the fort, before permission could be obtained to cut short trenches. The present GPR investigations were planned as a preliminary step to determine locations for trenching. A pulseEKKO 100 GPR was used to collect data along eight profiles (varying in length from 10 m to 30 m) that were run across the projected strike of the fault, and one 50 m long profile that was run parallel to it. The locations of the profiles were obtained using a total station. To capture the signature of the fault, sixteen common-offset (COS) lines were acquired by using different antennas (50, 100 and 200 MHz) and stacking 64 times to increase the signal-to-noise ratio. The locations of trees and stumps were recorded. In addition, two common-midpoint (CMP) tests were carried out, and gave an average velocity of about 0.097 m/ns. Processing included the subtraction of the low frequency "wow" on the trace (dewow), automatic gain control (AGC) and the application of bandpass filters. The signals using the 50 MHz, 100 MHz and 200 MHz antennas were found to penetrate up to about 30 meters, 20 meters and 12 meters respectively. Vertically offset reflectors and disruptions of the electrical signal were used to infer the location of the fault(s). Comparisons of the locations of these disruptions on various lines were used to infer the presence of a N30°W fault zone. We plan to confirm these locations by cutting shallow trenches.
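
    A simple consequence of the CMP-derived velocity is the time-to-depth conversion sketched below; the two-way travel times are invented for the example, while the 0.097 m/ns velocity is the value quoted above.

      # Convert GPR two-way travel time (ns) to reflector depth (m).
      velocity = 0.097   # m/ns, average velocity from the common-midpoint tests

      def depth_from_twt(twt_ns, v=velocity):
          """Depth of a reflector assuming a constant velocity half-space."""
          return v * twt_ns / 2.0

      for twt in (100, 250, 600):        # hypothetical reflector arrivals, ns
          print(f"TWT {twt:4d} ns  ->  depth = {depth_from_twt(twt):.1f} m")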

  9. 30 CFR 18.50 - Protection against external arcs and sparks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Protection against external arcs and sparks. 18... and Design Requirements § 18.50 Protection against external arcs and sparks. Provision shall be made... of that of one power conductor unless a ground-fault tripping relay is used, in which case the...

  10. 30 CFR 18.50 - Protection against external arcs and sparks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Protection against external arcs and sparks. 18... and Design Requirements § 18.50 Protection against external arcs and sparks. Provision shall be made... of that of one power conductor unless a ground-fault tripping relay is used, in which case the...

  11. The Maradi fault zone: 3-D imagery of a classic wrench fault in Oman

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neuhaus, D.

    1993-09-01

    The Maradi fault zone extends for almost 350 km in a north-northwest-south-southeast direction from the Oman Mountain foothills into the Arabian Sea, thereby dissecting two prolific hydrocarbon provinces, the Ghaba and Fahud salt basins. During its major Late Cretaceous period of movement, the Maradi fault zone acted as a left-lateral wrench fault. An early exploration campaign based on two-dimensional seismic targeted at fractured Cretaceous carbonates had mixed success and resulted in the discovery of one producing oil field. The structural complexity, rapidly varying carbonate facies, and uncertain fracture distribution prevented further drilling activity. In 1990 a three-dimensional (3-D) seismic survey covering some 500 km² was acquired over the transpressional northern part of the Maradi fault zone. The good data quality and the focusing power of 3-D has enabled stunning insight into the complex structural style of a "textbook" wrench fault, even at deeper levels and below reverse faults hitherto unexplored. Subtle thickness changes within the carbonate reservoir and the unconformably overlying shale seal provided the tool for the identification of possible shoals and depocenters. Horizon attribute maps revealed in detail the various structural components of the wrench assemblage and highlighted areas of increased small-scale faulting/fracturing. The results of four recent exploration wells will be demonstrated and their impact on the interpretation discussed.

  12. Seismic fault zone trapped noise

    NASA Astrophysics Data System (ADS)

    Hillers, G.; Campillo, M.; Ben-Zion, Y.; Roux, P.

    2014-07-01

    Systematic velocity contrasts across and within fault zones can lead to head and trapped waves that provide direct information on structural units that are important for many aspects of earthquake and fault mechanics. Here we construct trapped waves from the scattered seismic wavefield recorded by a fault zone array. The frequency-dependent interaction between the ambient wavefield and the fault zone environment is studied using properties of the noise correlation field. A critical frequency fc ≈ 0.5 Hz defines a threshold above which the in-fault scattered wavefield has increased isotropy and coherency compared to the ambient noise. The increased randomization of in-fault propagation directions produces a wavefield that is trapped in a waveguide/cavity-like structure associated with the low-velocity damage zone. Dense spatial sampling allows the resolution of a near-field focal spot, which emerges from the superposition of a collapsing, time reversed wavefront. The shape of the focal spot depends on local medium properties, and a focal spot-based fault normal distribution of wave speeds indicates a ˜50% velocity reduction consistent with estimates from a far-field travel time inversion. The arrival time pattern of a synthetic correlation field can be tuned to match properties of an observed pattern, providing a noise-based imaging tool that can complement analyses of trapped ballistic waves. The results can have wide applicability for investigating the internal properties of fault damage zones, because mechanisms controlling the emergence of trapped noise have less limitations compared to trapped ballistic waves.

  13. Control model design to limit DC-link voltage during grid fault in a dfig variable speed wind turbine

    NASA Astrophysics Data System (ADS)

    Nwosu, Cajethan M.; Ogbuka, Cosmas U.; Oti, Stephen E.

    2017-08-01

    This paper presents a control model design capable of inhibiting the sharp rise in the DC-link voltage during grid-fault conditions in a variable speed wind turbine. In contrast to power circuit protection strategies, which have inherent limitations in fault ride-through capability, a control circuit algorithm is proposed that limits the DC-link voltage rise, whose dynamics in turn have a direct influence on the characteristics of the rotor voltage, especially during grid faults. The model results obtained compare favorably with simulation results from a MATLAB/SIMULINK environment. The generated model may therefore be used to predict, with reasonable accuracy, the nature of DC-link voltage variations during a fault, given factors that include the speed and speed mode of operation and the value of the damping resistor relative to half the product of the inner-loop current control bandwidth and the filter inductance.
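
    The mechanism behind the voltage rise can be illustrated with the crude sketch below, which integrates C*dV/dt = (P_in - P_out)/V while grid export collapses during a fault window and a damping (chopper) resistor switches in above a threshold. All numbers and the on/off logic are invented for the example and are not the paper's control model.

      # Toy DC-link voltage trajectory during a grid fault with a crude chopper.
      C = 0.02          # DC-link capacitance, F (assumed)
      V = 1_150.0       # initial DC-link voltage, V (assumed)
      P_rotor = 2.0e5   # rotor-side power injected into the link, W (assumed)
      dt = 1e-4         # integration step, s

      for step in range(int(0.2 / dt)):
          t = step * dt
          fault = 0.05 <= t < 0.15                   # grid fault window
          P_grid = 0.2e5 if fault else P_rotor       # export collapses during fault
          P_chopper = 2.5e5 if V > 1_300.0 else 0.0  # damping-resistor action
          V += dt * (P_rotor - P_grid - P_chopper) / (C * V)
          if step % 500 == 0:
              print(f"t = {t:.3f} s   V_dc = {V:6.0f} V")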

  14. Fault zone architecture of a major oblique-slip fault in the Rawil depression, Western Helvetic nappes, Switzerland

    NASA Astrophysics Data System (ADS)

    Gasser, D.; Mancktelow, N. S.

    2009-04-01

    The Helvetic nappes in the Swiss Alps form a classic fold-and-thrust belt related to overall NNW-directed transport. In western Switzerland, the plunge of nappe fold axes and the regional distribution of units define a broad depression, the Rawil depression, between the culminations of Aiguilles Rouge massif to the SW and Aar massif to the NE. A compilation of data from the literature establishes that, in addition to thrusts related to nappe stacking, the Rawil depression is cross-cut by four sets of brittle faults: (1) SW-NE striking normal faults that strike parallel to the regional fold axis trend, (2) NW-SE striking normal faults and joints that strike perpendicular to the regional fold axis trend, and (3) WNW-ESE striking normal plus dextral oblique-slip faults as well as (4) WSW-ENE striking normal plus dextral oblique-slip faults that both strike oblique to the regional fold axis trend. We studied in detail a beautifully exposed fault from set 3, the Rezli fault zone (RFZ) in the central Wildhorn nappe. The RFZ is a shallow to moderately-dipping (ca. 30-60˚) fault zone with an oblique-slip displacement vector, combining both dextral and normal components. It must have formed in approximately this orientation, because the local orientation of fold axes corresponds to the regional one, as does the generally vertical orientation of extensional joints and veins associated with the regional fault set 2. The fault zone crosscuts four different lithologies: limestone, intercalated marl and limestone, marl and sandstone, and it has a maximum horizontal dextral offset component of ~300 m and a maximum vertical normal offset component of ~200 m. Its internal architecture strongly depends on the lithology in which it developed. In the limestone, it consists of veins, stylolites, cataclasites and cemented gouge, in the intercalated marls and limestones of anastomosing shear zones, brittle fractures, veins and folds, in the marls of anastomosing shear zones, pressure

  15. Fault detection and isolation

    NASA Technical Reports Server (NTRS)

    Bernath, Greg

    1994-01-01

    In order for a current satellite-based navigation system (such as the Global Positioning System, GPS) to meet integrity requirements, there must be a way of detecting erroneous measurements, without help from outside the system. This process is called Fault Detection and Isolation (FDI). Fault detection requires at least one redundant measurement, and can be done with a parity space algorithm. The best way around the fault isolation problem is not necessarily isolating the bad measurement, but finding a new combination of measurements which excludes it.
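
    A minimal numerical sketch of the parity-space idea, and of excluding a measurement rather than merely naming it, is given below; the geometry matrix, noise level, injected fault and threshold are all synthetic stand-ins rather than real GPS quantities.

      # Parity-space style detection: the least-squares residual of a redundant
      # measurement set serves as the test statistic; isolation searches for a
      # measurement subset whose statistic is consistent with noise again.
      import numpy as np

      rng = np.random.default_rng(0)
      H = rng.normal(size=(6, 4))          # 6 measurements, 4 unknowns -> 2 redundant
      x_true = np.array([1.0, -2.0, 0.5, 3.0])
      y = H @ x_true + 0.01 * rng.normal(size=6)
      y[3] += 5.0                          # inject a fault on measurement 3

      def parity_norm(H, y):
          """Norm of the post-fit residual (parity test statistic)."""
          x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
          return float(np.linalg.norm(y - H @ x_hat))

      threshold = 0.1                      # assumed detection threshold
      stat = parity_norm(H, y)
      print("all measurements: statistic =", round(stat, 3),
            "->", "fault detected" if stat > threshold else "ok")

      for k in range(len(y)):              # exclude one measurement at a time
          keep = [i for i in range(len(y)) if i != k]
          if parity_norm(H[keep], y[keep]) < threshold:
              print(f"excluding measurement {k} restores consistency")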

  16. Distribution and nature of fault architecture in a layered sandstone and shale sequence: An example from the Moab fault, Utah

    USGS Publications Warehouse

    Davatzes, N.C.; Aydin, A.

    2005-01-01

    We examined the distribution of fault rock and damage zone structures in sandstone and shale along the Moab fault, a basin-scale normal fault with nearly 1 km (0.62 mi) of throw, in southeast Utah. We find that fault rock and damage zone structures vary along strike and dip. Variations are related to changes in fault geometry, fault slip, lithology, and the mechanism of faulting. In sandstone, we differentiated two structural assemblages: (1) deformation bands, zones of deformation bands, and polished slip surfaces and (2) joints, sheared joints, and breccia. These structural assemblages result from the deformation band-based mechanism and the joint-based mechanism, respectively. Along the Moab fault, where both types of structures are present, joint-based deformation is always younger. Where shale is juxtaposed against the fault, a third faulting mechanism, smearing of shale by ductile deformation and associated shale fault rocks, occurs. Based on the knowledge of these three mechanisms, we projected the distribution of their structural products in three dimensions along idealized fault surfaces and evaluated the potential effect on fluid and hydrocarbon flow. We contend that these mechanisms could be used to facilitate predictions of fault and damage zone structures and their permeability from limited data sets. Copyright © 2005 by The American Association of Petroleum Geologists.

  17. Advanced cloud fault tolerance system

    NASA Astrophysics Data System (ADS)

    Sumangali, K.; Benny, Niketa

    2017-11-01

    Cloud computing has become a prevalent on-demand service on the internet for storing, managing and processing data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, a fault tolerance mechanism is required to abstract faults from users. We have proposed a fault-tolerant architecture that combines proactive and reactive fault tolerance. This architecture essentially increases the reliability and availability of the cloud. In future work, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.

  18. Subsidence and Fault Displacement Along the Long Point Fault Derived from Continuous GPS Observations (2012-2017)

    NASA Astrophysics Data System (ADS)

    Tsibanos, V.; Wang, G.

    2017-12-01

    The Long Point Fault, located in Houston, Texas, is a complex system of normal faults which causes significant damage to urban infrastructure on both private and public property. This case study focuses on the 20-km long fault, using high-accuracy continuously operating global positioning satellite (GPS) stations to delineate fault movement over five years (2012 - 2017). The Long Point Fault is the longest active fault in the greater Houston area; it damages roads, buried pipes, concrete structures and buildings and creates a financial burden for the city of Houston and the residents who live in close vicinity to the fault trace. In order to monitor fault displacement along the surface, 11 permanent and continuously operating GPS stations were installed: 6 on the hanging wall and 5 on the footwall. This study is an overview of the GPS observations from 2013 to 2017. GPS positions were processed with both relative (double differencing) and absolute Precise Point Positioning (PPP) techniques. The PPP solutions, which are referred to the IGS08 reference frame, were transformed to the Stable Houston Reference Frame (SHRF16). Our results show no considerable horizontal displacements across the fault, but do show uneven vertical displacement attributed to regional subsidence in the range of 5 - 10 mm/yr. This subsidence can be associated with compaction of silty clays in the Chicot and Evangeline aquifers, whose water depths are approximately 50 m and 80 m below the land surface (bls). These levels are below the regional pre-consolidation head, which is about 30 to 40 m bls. Recent research indicates subsidence will continue to occur until the aquifer levels reach the pre-consolidation head. With further GPS observations, both the Long Point Fault and regional land subsidence can be monitored, providing important geological data to the Houston community.
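
    The subsidence rates quoted above come from position time series; a toy version of that estimate, fitting a straight line to a synthetic daily height series with an assumed -8 mm/yr trend, is sketched below.

      # Estimate a vertical (subsidence) rate from a daily GPS height series.
      import numpy as np

      rng = np.random.default_rng(1)
      days = np.arange(0, 5 * 365)                    # five years of daily solutions
      height_mm = -8.0 * days / 365.25 + 3.0 * rng.normal(size=days.size)

      slope_mm_per_day, intercept = np.polyfit(days, height_mm, 1)
      print(f"estimated vertical rate = {slope_mm_per_day * 365.25:.1f} mm/yr")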

  19. Two Trees: Migrating Fault Trees to Decision Trees for Real Time Fault Detection on International Space Station

    NASA Technical Reports Server (NTRS)

    Lee, Charles; Alena, Richard L.; Robinson, Peter

    2004-01-01

    Starting from example ISS fault trees, we present a method for migrating fault trees to decision trees. The method shows that visualizing the root cause of a fault becomes easier and that tree manipulation becomes more programmatic via available decision tree programs. The visualization of decision trees for diagnostics is straightforward and easy to understand. For ISS real-time fault diagnostics, the status of the systems can be shown by running the signals through the trees and seeing where evaluation stops. A further advantage of decision trees is that they can learn fault patterns and predict future faults from historic data. The learning is not limited to static data sets but can also be performed online: by accumulating real-time data sets, the decision trees can gain and store fault patterns and recognize them when they recur.
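
    A toy sketch of the diagnostic walk described above is given below: a tiny fault tree with AND/OR gates is evaluated against live boolean symptoms, and the branch at which evaluation "stops" is reported. The tree, gate structure and signal names are invented for the example and are unrelated to any actual ISS fault tree.

      # Walk a small AND/OR fault tree with live symptoms to find the firing branch.
      fault_tree = {
          "loss of cooling": ("OR", ["pump failure", "valve stuck closed"]),
          "pump failure":    ("AND", ["pump current low", "flow rate low"]),
      }

      signals = {                         # hypothetical boolean symptoms from telemetry
          "pump current low": True,
          "flow rate low":    True,
          "valve stuck closed": False,
      }

      def evaluate(node, tree, signals, path=()):
          """Return (state, path), where path records the branch that fired."""
          if node in signals:                                  # leaf symptom
              return signals[node], path + (node,)
          gate, children = tree[node]
          results = [evaluate(c, tree, signals, path + (node,)) for c in children]
          states = [state for state, _ in results]
          fired = any(states) if gate == "OR" else all(states)
          for state, child_path in results:                    # first consistent branch
              if state == fired:
                  return fired, child_path
          return fired, path + (node,)

      state, path = evaluate("loss of cooling", fault_tree, signals)
      print("top event active:", state, "| diagnostic path:", " -> ".join(path))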

  20. Subsurface structure identification of active fault based on magnetic anomaly data (Case study: Toru fault in Sumatera fault system)

    NASA Astrophysics Data System (ADS)

    Simanjuntak, Andrean V. H.; Husni, Muhammad; Syirojudin, Muhammad

    2017-07-01

    The Toru segment, one of the active faults of northern Sumatra, broke in 1984 during the magnitude 6.4 Pahae Jahe earthquake along the northern part of the fault, which has a length of 23 km, and broke again at the same place in 2008. The recurrence interval is very short: only 25 years passed before the fault ruptured again. In terms of elastic rebound theory, this is plausible for a slip of about 50 cm and an average shear velocity of 20 mm/year. The earthquake foci lie at an average depth of 10 km along the 23 km fracture zone, can generate shaking of up to 7 MMI, and could damage buildings and trigger landslides on cliffs. Because of this seismic activity, this study was made to characterize the activity of this fault with geophysical methods. Geophysical methods such as gravity, geomagnetics and seismology are powerful tools for detecting subsurface structures at local, regional and global scales. This study uses geophysical methods to analyse the total intensity of the geomagnetic anomaly data, resulting in a distribution of susceptibility values corresponding to the fault movement. The geomagnetic anomaly data, as total intensity measured by satellite, were obtained from Geomag. The data have been corrected for diurnal variations and reduced by the IGRF. The study of earthquake records can be used for differentiating the active and non-active fault elements. Modeling has been done using several methods, such as pseudo-gravity, reduction to the pole, and upward or downward continuation, which are used to filter the geomagnetic anomaly data because the raw data are not fully representative of the fault structure. The results indicate that rock layers at 0 - 100 km depth record a process of intrusion and are dominated by sedimentary rocks that are paramagnetic, and that those at 100 - 150 km depth record the activity of a subducting slab consisting of basalt and granite which are

  1. No-fault compensation and performance review.

    PubMed

    Knight, B

    1993-01-01

    Two major issues in relation to medical malpractice are discussed. The first is "no-fault compensation", an alternative to the present tort system long established in most countries, including Singapore, where negligence must be proved before a claim can succeed. The second is "performance review", a new concept for monitoring and correcting under-performing medical practitioners against whom a complaint has been laid. Both these issues are currently under active discussion and are arousing political notice and professional controversy. Though the article describes the British situation, there is much of contemporary relevance for Singapore, which has such a similar system of medical practice.

  2. Fault-zone guided waves from explosions in the San Andreas fault at Parkfield and Cienega Valley, California

    USGS Publications Warehouse

    Li, Y.-G.; Ellsworth, W.L.; Thurber, C.H.; Malin, P.E.; Aki, K.

    1997-01-01

    Fault-zone guided waves were successfully excited by near-surface explosions in the San Andreas fault zone both at Parkfield and Cienega Valley, central California. The guided waves were observed on linear, three-component seismic arrays deployed across the fault trace. These waves were not excited by explosions located outside the fault zone. The amplitude spectra of guided waves show a maximum peak at 2 Hz at Parkfield and 3 Hz at Cienega Valley. The guided wave amplitude decays sharply with observation distance from the fault trace. The explosion-excited fault-zone guided waves are similar to those generated by earthquakes at Parkfield but have lower frequencies and travel more slowly. These observations suggest that the fault-zone wave guide has lower seismic velocities as it approaches the surface at Parkfield. We have modeled the waveforms as S waves trapped in a low-velocity wave guide sandwiched between high-velocity wall rocks, resulting in Love-type fault-zone guided waves. While the results are nonunique, the Parkfield data are adequately fit by a shallow wave guide 170 m wide with an S velocity of 0.85 km/sec and an apparent Q ≈ 30 to 40. At Cienega Valley, the fault-zone wave guide appears to be about 120 m wide with an S velocity of 0.7 km/sec and a Q ≈ 30.

  3. 46 CFR 111.97-9 - Overcurrent protection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Electric Power-Operated Watertight Door Systems § 111.97-9 Overcurrent protection. Overcurrent devices must be arranged to isolate a fault with as little disruption of the system as possible...

  4. Fault rocks as indicators of slip behavior

    NASA Astrophysics Data System (ADS)

    Hayman, N. W.

    2017-12-01

    Forty years ago, Sibson ("Fault rocks and fault mechanisms", J. Geol. Soc. Lon., 1977) explored plastic flow mechanisms in the upper and lower crust which he attributed to deformation rates faster than tectonic ones, but slower than earthquakes. We can now combine observations of natural fault rocks with insights from experiments to interpret a broad range of length and time scales of fault slip in more detail. Fault rocks are generally weak, with predominantly frictionally stable materials in some fault segments, and more unstable materials in others. Both upper and lower crustal faults contain veins and mineralogical signatures of transiently elevated fluid pressure, and some contain relicts of pseudotachylite and bear other thermal-mechanical signatures of seismic slip. Varying strain rates and episodic-tremor-and-slip (ETS) have been attributed to fault zones with varying widths filled with irregular foliations, veins, and dismembered blocks of varying sizes. Particle-size distributions and orientations in gouge appear to differ between locked and creeping faults. These and other geologic observations can be framed in terms of constitutive behaviors derived from experiments and modeling. The experimental correlation of velocity-dependence with microstructure and the behavior of natural fault-rocks under shear suggest that friction laws may be applied liberally to fault-zone interpretation. Force-chains imaged in stress-sensitive granular aggregates or in numerical simulations show that stick-slip behavior with stress drops far below that of earthquakes can occur during quasi-periodic creep, yet localize shear in larger, aperiodic events; perhaps the systematic relationship between sub-mm shear bands and surrounding gouge and/or cataclasites causes such slip partitioning in nature. Fracture, frictional sliding, and viscous creep can experimentally produce a range of slip behavior, including ETS-like events. Perhaps a similar mechanism occurs to cause ETS at the

  5. Dynamic characteristics of a 20 kHz resonant power system - Fault identification and fault recovery

    NASA Technical Reports Server (NTRS)

    Wasynczuk, O.

    1988-01-01

    A detailed simulation of a dc inductor resonant driver and receiver is used to demonstrate the transient characteristics of a 20 kHz resonant power system during fault and overload conditions. The simulated system consists of a dc inductor resonant inverter (driver), a 50-meter transmission cable, and a dc inductor resonant receiver load. Of particular interest are the driver and receiver performance during fault and overload conditions and the recovery characteristics following removal of the fault. The information gained from these studies sets the stage for further work in fault identification and autonomous power system control.

  6. Is there a "blind" strike-slip fault at the southern end of the San Jacinto Fault system?

    NASA Astrophysics Data System (ADS)

    Tymofyeyeva, E.; Fialko, Y. A.

    2015-12-01

    We have studied the interseismic deformation at the southern end of the San Jacinto fault system using Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) data. To complement the continuous GPS measurements from the PBO network, we have conducted campaign-style GPS surveys of 19 benchmarks along Highway 78 in the years 2012, 2013, and 2014. We processed the campaign GPS data using GAMIT to obtain horizontal velocities. The data show high velocity gradients East of the surface trace of the Coyote Creek Fault. We also processed InSAR data from the ascending and descending tracks of the ENVISAT mission between the years 2003 and 2010. The InSAR data were corrected for atmospheric artifacts using an iterative common point stacking method. We combined average velocities from different look angles to isolate the fault-parallel velocity field, and used fault-parallel velocities to compute strain rate. We filtered the data over a range of wavelengths prior to numerical differentiation, to reduce the effects of noise and to investigate both shallow and deep sources of deformation. At spatial wavelengths less than 2km the strain rate data show prominent anomalies along the San Andreas and Superstition Hills faults, where shallow creep has been documented by previous studies. Similar anomalies are also observed along parts of the Coyote Creek Fault, San Felipe Fault, and an unmapped southern continuation of the Clark strand of the San Jacinto Fault. At wavelengths on the order of 20km, we observe elevated strain rates concentrated east of the Coyote Creek Fault. The long-wavelength strain anomaly east of the Coyote Creek Fault, and the localized shallow creep observed in the short-wavelength strain rate data over the same area suggest that there may be a "blind" segment of the Clark Fault that accommodates a significant portion of the deformation on the southern end of the San Jacinto Fault.
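
    A stripped-down sketch of combining line-of-sight velocities from two look angles into a fault-parallel velocity is given below; it assumes purely horizontal motion along a known strike and uses invented look vectors, strike and LOS rates rather than the ENVISAT geometry or values from the study.

      # Fault-parallel velocity from ascending/descending LOS rates, assuming
      # purely horizontal motion along a known strike (all numbers invented).
      import numpy as np

      strike_deg = 310.0                              # assumed fault strike, deg from N
      s = np.array([np.sin(np.radians(strike_deg)),   # east component
                    np.cos(np.radians(strike_deg)),   # north component
                    0.0])                             # fault-parallel unit vector

      # Unit look vectors (east, north, up), ground-to-satellite, for the two tracks.
      look_asc = np.array([-0.38, -0.09, 0.92])
      look_desc = np.array([ 0.38, -0.09, 0.92])

      v_los = np.array([3.1, -2.6])                   # observed LOS rates, mm/yr

      # Each LOS rate = v_parallel * (s . look); solve by least squares.
      G = np.array([[s @ look_asc], [s @ look_desc]])
      v_parallel, *_ = np.linalg.lstsq(G, v_los, rcond=None)
      print(f"fault-parallel velocity = {v_parallel[0]:.1f} mm/yr")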

  7. Medical social work practice in child protection in China: A multiple case study in Shanghai hospitals.

    PubMed

    Zhao, Fang; Hämäläinen, Juha; Chen, Yu-Ting

    2017-01-01

    With the rapid development of the child welfare system in China over recent years, medical social work has been increasingly involved in providing child protection services in several hospitals in Shanghai. Focusing on five cases in this paper, the exploratory study aims to present a critical overview of current practices and effects of medical social work for child protection, based on a critical analysis of the multidimensional role of social work practitioners engaged in the provision of child protection services as well as potential challenges. Implications and suggestions for future improvements of China's child protection system are also discussed.

  8. Job satisfaction and its relationship to Radiation Protection Knowledge, Attitude and Practice (RPKAP) of Iranian radiation workers.

    PubMed

    Alavi, S S; Dabbagh, S T; Abbasi, M; Mehrdad, R

    2017-01-23

    This study aimed to find the association between job satisfaction and radiation protection knowledge, attitude and practice of medical radiation workers occupationally exposed to ionizing radiation. In this cross-sectional study, 530 radiation workers affiliated to Tehran University of Medical Sciences completed a knowledge, attitude and practice questionnaire on protecting themselves against radiation and the Job Descriptive Index as a job satisfaction measure during May to November 2014. Opportunities for promotion (84.2%) and payment (91.5%) were the most important factors for dissatisfaction. Radiation workers who were married, had more positive attitudes toward protecting themselves against radiation, and had a higher level of education accounted for 15.8% of the total variance in predicting job satisfaction. In conclusion, medical radiation workers with a more positive attitude toward self-protection against radiation were more satisfied with their jobs. In radiation environments, improving staff attitudes toward their safety may be considered as a key strategy to increase job satisfaction.

  9. Sun protection policies and practices in New Zealand primary schools.

    PubMed

    Reeder, Anthony I; Jopson, Janet A; Gray, Andrew

    2012-02-10

    The aim was to report, for schools with primary-age students, the percentages meeting specific requirements of the New Zealand SunSmart Schools Accreditation Programme (SSAP). Schools were randomly selected, within geographic regions, from the Ministry of Education schools database. A questionnaire, mailed to school principals, assessed schools against 12 criteria for accreditation: policy, information, hats, 'play in the shade', sunscreen, clothing, role modelling, curriculum, planning, rescheduling, shade provision and review. Post-stratification weights (for achieving each criterion) were used to compensate for oversampling within some regions and differential response rates between regions, using the number of schools per region. 388 schools (representative in socioeconomic decile, size and type) participated. Fewer than 4% fully met the accreditation criteria. Clothing (42%) and curriculum delivery and shade (each 54%) were the requirements met by the fewest schools. Staff role modelling (92%) was the most commonly met. Schools with uniforms tended to have more protective clothing expectations. Ongoing promotion is needed to consolidate gains and encourage comprehensive sun protection through policies, practices, environment and curriculum. Staff role modelling requirements may be strengthened by implementing existing occupational guidelines for mitigating UVR hazards. There is a need to further assist schools, particularly regarding sun protective clothing, curriculum delivery and environmental shade.

  10. Fault zone structure and inferences on past activities of the active Shanchiao Fault in the Taipei metropolis, northern Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, C.; Lee, J.; Chan, Y.; Lu, C.

    2010-12-01

    The Taipei Metropolis, home to around 10 million people, is subject to seismic hazard originating not only from distant faults or sources scattered throughout the Taiwan region, but also from active faults lying directly underneath. Northern Taiwan, including the Taipei region, is currently affected by post-orogenic (Penglai arc-continent collision) processes related to backarc extension of the Ryukyu subduction system. The Shanchiao Fault, an active normal fault cropping out along the western boundary of the Taipei Basin and dipping to the east, is investigated here for its subsurface structure and activity. Borehole records in the central portion of the fault were analyzed to document the stacking of post-Last Glacial Maximum growth sediments, and a tulip flower structure is illuminated, with an average vertical slip rate of about 3 mm/yr. A similar fault zone architecture and post-LGM tectonic subsidence rate are also found in the northern portion of the fault. A correlation between geomorphology and structural geology in the Shanchiao Fault zone demonstrates that an array of subtle geomorphic scarps corresponds to the branch fault, while the surface trace of the main fault appears to have been completely erased by erosion and sedimentation. Such constraints and knowledge are crucial for earthquake hazard evaluation and mitigation in the Taipei Metropolis, and for understanding the kinematics of transtensional tectonics in northern Taiwan. A schematic 3D diagram of the fault zone in the central portion of the Shanchiao Fault displays the regional subsurface geology and its relation to topographic features.

  11. Structural styles of Paleozoic intracratonic fault reactivation: A case study of the Grays Point fault zone in southeastern Missouri, USA

    USGS Publications Warehouse

    Clendenin, C.W.; Diehl, S.F.

    1999-01-01

    A pronounced, subparallel set of northeast-striking faults occurs in southeastern Missouri, but little is known about these faults because of poor exposure. The Commerce fault system is the southernmost exposed fault system in this set and has an ancestry related to Reelfoot rift extension. Recently published work indicates that this fault system has a long history of reactivation. The northeast-striking Grays Point fault zone is a segment of the Commerce fault system and is well exposed along the southeast rim of an inactive quarry. Our mapping shows that the Grays Point fault zone also has a complex history of polyphase reactivation, involving three periods of Paleozoic reactivation that occurred in the Late Ordovician, the Devonian, and post-Mississippian time. Each period is characterized by divergent, right-lateral oblique-slip faulting. Petrographic examination of sidewall rip-out clasts in calcite-filled faults associated with the Grays Point fault zone supports a minimum of three periods of right-lateral oblique slip. The reported observations imply that a genetic link exists between intracratonic fault reactivation and strain produced by the Paleozoic orogenies affecting the eastern margin of Laurentia (North America). Interpretation of this link indicates that right-lateral oblique slip has occurred on all of the northeast-striking faults in southeastern Missouri as a result of strain influenced by the convergence directions of the different Paleozoic orogenies.

  12. The role of rock anisotropy in developing non-Andersonian faults: staircase trajectories for strike-slip faults

    NASA Astrophysics Data System (ADS)

    Barchi, M. R.; Collettini, C.; Lena, G.

    2012-04-01

    Thrust and normal faults affecting mechanically heterogeneous multilayers often show staircase trajectories, where flat segments follow less competent units. Within flat segments the initiation/reactivation angle, θ, which is the angle that the fault makes with the σ1 direction, differs from that predicted by Andersonian theory. This suggests that the fault trajectory is mainly controlled by rock anisotropy rather than by the frictional properties of the material. Our study areas are located in the Umbria-Marche fold-thrust belt, within the Northern Apennines of Italy. The area is characterized by a lithologically complex multilayer, about 2000 m thick, consisting of alternating competent (mainly calcareous) and less competent (marls or evaporites) units. At the outcrop scale, some units show significant mechanical layering, consisting of alternating limestones and shales. Owing to the complex tectonic evolution of the Apennines, well developed sets of conjugate normal, thrust and strike-slip faults are exposed in the region. The study outcrop, Candigliano Gorge, is characterized by steep (dip > 60°) NE-dipping beds, affected by conjugate sets of strike-slip faults, exposed in the eastern limb of a NE-verging anticline. The faults develop within the Marne a Fucoidi Fm., a Cretaceous sedimentary unit, about 70 m thick, made of competent calcareous beds (about 20 cm thick) separated by marly beds (1-20 cm thick). The conjugate strike-slip faults formed after the major folding phase: in fact, they cut both minor folds and striated bedding surfaces related to syn-folding flexural slip. Faults show marked staircase trajectories, with straight segments almost parallel to the marly horizons and ramps cutting through the calcareous layers. Slip along these faults induces local block rotation of the competent strata, dilational jogs (pull-aparts), extensional duplexes and boudinage of the competent layers, while the marly levels are strongly laminated. In

  13. Passive fault current limiting device

    DOEpatents

    Evans, Daniel J.; Cha, Yung S.

    1999-01-01

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit during a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel-connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils, which results in an increase in the impedance of the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. In a preferred embodiment, the major voltage during a fault condition appears across the coils wound on the common core.
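
    A minimal sketch, assuming ideal linear coupled inductors, of why two magnetically opposed, parallel-connected coils on a common core present a very low impedance in normal operation and a much larger impedance once the fault element removes one branch; the inductance and coupling values below are assumptions, not taken from the patent.

      import numpy as np

      def parallel_opposed_inductance(L1, L2, k):
          """Equivalent inductance of two parallel coils wound in magnetic opposition."""
          M = k * np.sqrt(L1 * L2)
          return (L1 * L2 - M**2) / (L1 + L2 + 2.0 * M)

      L = 10e-3          # self-inductance of each coil, H (assumed)
      k = 0.98           # coupling coefficient on the shared core (assumed)
      f = 60.0           # line frequency, Hz

      L_normal = parallel_opposed_inductance(L, L, k)   # fluxes nearly cancel
      L_fault = L                                       # fault element opens one branch;
                                                        # the remaining coil presents its
                                                        # full self-inductance
      Z_normal = 2 * np.pi * f * L_normal
      Z_fault = 2 * np.pi * f * L_fault
      print(f"impedance ratio fault/normal = {Z_fault / Z_normal:.1f}")

    With equal coils, the normal-state inductance reduces to L(1 - k)/2, so the fault-to-normal impedance ratio is roughly 2/(1 - k); tight coupling on the common core is what makes the impedance step large.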

  14. Normal fault earthquakes or graviquakes

    PubMed Central

    Doglioni, C.; Carminati, E.; Petricca, P.; Riguzzi, F.

    2015-01-01

    Earthquakes dissipate energy through elastic waves. Canonically, this is the elastic energy accumulated during the interseismic period. In crustal extensional settings, however, gravity is the main energy source for the collapse of the hanging wall along the fault. The gravitational potential energy available is about 100 times larger than the energy implied by the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger the magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults preferentially activate at about 60°, but they can be shallower in low-friction rocks. In rocks with low static friction, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved in graviquakes is smaller than in the other tectonic settings, since the activated fault length is at most about three times the hypocentral depth, which explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors. PMID:26169163
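
    An illustrative back-of-the-envelope comparison, with all numbers assumed, of the gravitational potential energy released by a downward-moving hanging wall against the radiated seismic energy implied by a moment magnitude (Gutenberg-Richter energy relation); it is meant only to show why the energy ratio mentioned in the abstract can reach order 100.

      import numpy as np

      rho = 2700.0        # crustal density, kg/m^3 (assumed)
      g = 9.81            # m/s^2
      # Assumed hanging-wall block: 20 km x 10 km in plan, 10 km thick
      volume = 20e3 * 10e3 * 10e3          # m^3
      mean_drop = 0.5                      # average downward displacement, m (assumed)

      E_grav = rho * g * volume * mean_drop            # J, potential energy released

      Mw = 6.5                                         # assumed magnitude
      E_seis = 10 ** (1.5 * Mw + 4.8)                  # J, Gutenberg-Richter radiated energy

      print(f"E_grav = {E_grav:.2e} J")
      print(f"E_seis = {E_seis:.2e} J")
      print(f"ratio  = {E_grav / E_seis:.0f}")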

  15. Passive fault current limiting device

    DOEpatents

    Evans, D.J.; Cha, Y.S.

    1999-04-06

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit during a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel-connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils, which results in an increase in the impedance of the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. In a preferred embodiment, the major voltage during a fault condition appears across the coils wound on the common core. 6 figs.

  16. Double-layer rotor magnetic shield performance analysis in high temperature superconducting synchronous generators under short circuit fault conditions

    NASA Astrophysics Data System (ADS)

    Hekmati, Arsalan; Aliahmadi, Mehdi

    2016-12-01

    High temperature superconducting (HTS) synchronous machines benefit from a rotor magnetic shield that protects the superconducting coils against asynchronous magnetic fields. This magnetic shield, however, suffers from Lorentz forces generated by the eddy currents induced during transient conditions, e.g. a stator winding short-circuit fault. In addition to the exerted electromagnetic forces, eddy current losses and their effects on the cryogenic system are further consequences of shielding the HTS coils. This study investigates the rotor magnetic shield (RMS) performance in HTS synchronous generators under stator winding short-circuit fault conditions. The eddy currents induced at different circumferential positions of the rotor magnetic shield, along with the associated Joule heating losses, are studied using 2-D time-stepping finite element analysis (FEA). The Lorentz forces exerted on the magnetic shield during transient conditions are also investigated in this paper. The obtained results show that the double line-to-ground fault is the most important among the different types of short-circuit faults. It was revealed that when it comes to the design of rotor magnetic shields, in addition to the eddy current distribution and the associated ohmic losses, the two-phase-to-ground fault should be taken into account, since the electromagnetic forces produced under fault conditions are most severe during a double line-to-ground fault.

  17. Absenteeism and Truancy: Risk, Protection, and Best Practice Implications for School Social Workers

    ERIC Educational Resources Information Center

    Teasley, Martell L.

    2004-01-01

    The author discusses the context in which absenteeism and truancy occur through an analysis of risk and protective mechanisms and suggests best practice methods based on a review of literature and research on several successful absenteeism and truancy prevention and reduction programs. The author suggests ways that school social workers can…

  18. A low-angle detachment fault revealed: Three-dimensional images of the S-reflector fault zone along the Galicia passive margin

    NASA Astrophysics Data System (ADS)

    Schuba, C. Nur; Gray, Gary G.; Morgan, Julia K.; Sawyer, Dale S.; Shillington, Donna J.; Reston, Tim J.; Bull, Jonathan M.; Jordan, Brian E.

    2018-06-01

    A new 3-D seismic reflection volume over the Galicia margin continent-ocean transition zone provides an unprecedented view of the prominent S-reflector detachment fault that underlies the outer part of the margin. This volume images the fault's structure from breakaway to termination. The filtered time-structure map of the S-reflector shows coherent corrugations parallel to the expected paleo-extension directions, with an average azimuth of 107°. These corrugations maintain their orientations, wavelengths and amplitudes where overlying faults sole into the S-reflector, suggesting that the parts of the detachment fault containing multiple crustal blocks may have slipped as discrete units during its late stages. Another interface above the S-reflector, here named S′, is identified and interpreted as the upper boundary of the fault zone associated with the detachment fault. The layer between the two interfaces, named the S-interval, thickens by tens of meters from SE to NW in the direction of transport. Localized thick accumulations also occur near overlying fault intersections, suggesting either non-uniform fault rock production or redistribution of fault rock during slip. These observations have important implications for understanding how detachment faults form and evolve over time. 3-D seismic reflection imaging has enabled unique insights into fault slip history, fault rock production and redistribution.

  19. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.; Patterson-Hine, Ann; Iverson, David

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal down time. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., the Hubble telescope, the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.
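
    The abstract does not spell out the block and sequential strategies, so the following is only a generic sketch of one sequential idea: greedily choosing the next test by expected information gain over a hypothesis space that explicitly includes multi-fault states. The test model (a test passes iff none of its covered components is faulty) and the component/test sets are assumptions for illustration.

      import itertools
      import math

      # Hypotheses: subsets of components that may have failed (multi-fault allowed).
      components = ["A", "B", "C"]
      hypotheses = [frozenset(s) for r in range(len(components) + 1)
                    for s in itertools.combinations(components, r)]
      prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

      # Assumed test model: each test passes iff none of its covered components is faulty.
      tests = {"t1": {"A"}, "t2": {"B", "C"}, "t3": {"A", "C"}}

      def entropy(dist):
          return -sum(p * math.log2(p) for p in dist.values() if p > 0)

      def posterior(dist, test_cover, outcome_pass):
          new = {h: p for h, p in dist.items()
                 if (len(h & test_cover) == 0) == outcome_pass}
          z = sum(new.values())
          return {h: p / z for h, p in new.items()} if z > 0 else {}

      def expected_information_gain(dist, test_cover):
          p_pass = sum(p for h, p in dist.items() if len(h & test_cover) == 0)
          gain = entropy(dist)
          for outcome, p_o in ((True, p_pass), (False, 1 - p_pass)):
              if p_o > 0:
                  gain -= p_o * entropy(posterior(dist, test_cover, outcome))
          return gain

      # Greedy sequential strategy: pick the most informative test next.
      best = max(tests, key=lambda t: expected_information_gain(prior, tests[t]))
      print("next test:", best)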

  20. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal down time. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., the Hubble telescope, the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.

  1. Can diligent and extensive mapping of faults provide reliable estimates of the expected maximum earthquakes at these faults? No. (Invited)

    NASA Astrophysics Data System (ADS)

    Bird, P.

    2010-12-01

    The hope expressed in the title question above can be contradicted in 5 ways, listed below. To summarize, an earthquake rupture can be larger than anticipated either because the fault system has not been fully mapped, or because the rupture is not limited to the pre-existing fault network. 1. Geologic mapping of faults is always incomplete due to four limitations: (a) Map-scale limitation: Faults below a certain (scale-dependent) apparent offset are omitted; (b) Field-time limitation: The most obvious fault(s) get(s) the most attention; (c) Outcrop limitation: You can't map what you can't see; and (d) Lithologic-contrast limitation: Intra-formation faults can be tough to map, so they are often assumed to be minor and omitted. If mapping is incomplete, fault traces may be longer and/or better-connected than we realize. 2. Fault trace “lengths” are unreliable guides to maximum magnitude. Fault networks have multiply-branching, quasi-fractal shapes, so fault “length” may be meaningless. Naming conventions for main strands are unclear, and rarely reviewed. Gaps due to Quaternary alluvial cover may not reflect deeper seismogenic structure. Mapped kinks and other “segment boundary asperities” may be only shallow structures. Also, some recent earthquakes have jumped and linked “separate” faults (Landers, California 1992; Denali, Alaska, 2002) [Wesnousky, 2006; Black, 2008]. 3. Distributed faulting (“eventually occurring everywhere”) is predicted by several simple theories: (a) Viscoelastic stress redistribution in plate/microplate interiors concentrates deviatoric stress upward until the shallow crust fails by faulting; (b) Unstable triple-junctions (e.g., between 3 strike-slip faults) in 2-D plate theory require new faults to form; and (c) Faults which appear to end (on a geologic map) imply distributed permanent deformation. This means that all fault networks evolve and that even a perfect fault map would be incomplete for future ruptures. 4. A recent attempt

  2. Boundary integral solutions for faults in flowing rock

    NASA Astrophysics Data System (ADS)

    Wei, Wei

    We develop new boundary-integral solutions for faulting in viscous rock and implement the solutions numerically with a boundary-element computer program called Faux_Pas. In the solutions, large permanent rock deformations near faults are treated with velocity discontinuities within linear, incompressible, creeping, viscous flows. The faults may have zero strength or a finite strength that can be constant or vary with deformation. Large deformations are achieved by integrating step by step with the fourth-order Runge-Kutta method. With this method, the boundaries and passive markers are updated dynamically. Faux_Pas has been applied to straight and curved elementary faults, and to compound faults composed of two or more elementary faults, such as listric and dish faults, all subjected to simple shear, shortening and lengthening. It reproduces the essential geometric elements seen in seismic profiles of fault-related folds associated with listric thrust faults in the Bighorn Basin of Wyoming, with dish faults in the Appalachians of Pennsylvania, the Parry Islands of Canada and the San Fernando Valley, California, and with listric normal faults in the Gulf of Mexico. Faux_Pas also predicts that some of these fault-related structures will include fascinating minor folds, especially in the footwall of the fault, that have been recognized earlier but have not been known to be related to the faulting. Some of these minor folds are potential structural traps. Faux_Pas is superior in several respects to current geometric techniques of balancing profiles, such as the "fault-bend fold" construction. With Faux_Pas, both the hanging wall and footwall are deformable, the faults are mechanical features, the cross sections are automatically balanced and, most important, the solutions are based on the first principles of mechanics. With the geometric techniques, folds are drawn only in the hanging wall, the faults are simply lines, the cross sections are
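
    A hedged illustration, not the Faux_Pas implementation, of the fourth-order Runge-Kutta step mentioned above for advecting boundary and passive-marker points through a velocity field; the simple-shear velocity field used here is an arbitrary stand-in for the boundary-integral solution.

      import numpy as np

      def velocity(points, t):
          """Stand-in for the boundary-integral velocity solution (simple shear here)."""
          x, y = points[:, 0], points[:, 1]
          return np.column_stack([0.5 * y, np.zeros_like(x)])   # du/dy = 0.5, assumed

      def rk4_step(points, t, dt):
          """Advance marker/boundary points one step with classical fourth-order Runge-Kutta."""
          k1 = velocity(points, t)
          k2 = velocity(points + 0.5 * dt * k1, t + 0.5 * dt)
          k3 = velocity(points + 0.5 * dt * k2, t + 0.5 * dt)
          k4 = velocity(points + dt * k3, t + dt)
          return points + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

      # Passive markers on an initially vertical line, deformed step by step
      markers = np.column_stack([np.zeros(11), np.linspace(0.0, 1.0, 11)])
      t, dt = 0.0, 0.1
      for _ in range(20):
          markers = rk4_step(markers, t, dt)
          t += dt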

  3. Fault Diagnosis for the Heat Exchanger of the Aircraft Environmental Control System Based on the Strong Tracking Filter

    PubMed Central

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system’s efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger. PMID:25823010
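
    As a rough, simplified illustration of the fading-factor idea behind strong tracking filters (not the authors' STF or their Modified Bayes classifier), the scalar sketch below inflates the predicted covariance whenever the innovations grow beyond what the nominal noise model predicts, so the filter gain keeps tracking an abruptly changing fault-related parameter; all model and noise values are assumptions.

      import numpy as np

      def stf_scalar(z, f=1.0, h=1.0, q=1e-4, r=1e-2, rho=0.95):
          """Scalar strong-tracking-style filter with a suboptimal fading factor (illustrative)."""
          x, p = 0.0, 1.0          # state estimate and covariance
          v_bar = None             # smoothed innovation variance
          xs = []
          for zk in z:
              x_pred = f * x
              innov = zk - h * x_pred
              # Recursive estimate of the innovation covariance
              v_bar = innov**2 if v_bar is None else (rho * v_bar + innov**2) / (1 + rho)
              # Fading factor: inflate P when observed innovations exceed the model's prediction
              n = v_bar - h * q * h - r
              m = h * f * p * f * h
              lam = max(1.0, n / m) if m > 0 else 1.0
              p_pred = lam * f * p * f + q
              k = p_pred * h / (h * p_pred * h + r)
              x = x_pred + k * innov
              p = (1 - k * h) * p_pred
              xs.append(x)
          return np.array(xs)

      # Track a parameter that jumps abruptly (a crude stand-in for a fault-related drift)
      truth = np.concatenate([np.zeros(100), 0.5 * np.ones(100)])
      z = truth + 0.1 * np.random.randn(200)
      est = stf_scalar(z)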

  4. Fault diagnosis for the heat exchanger of the aircraft environmental control system based on the strong tracking filter.

    PubMed

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system's efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger.

  5. Fault tectonics and earthquake hazards in parts of southern California. [peninsular ranges, Garlock fault, Salton Trough area, and western Mojave Desert]

    NASA Technical Reports Server (NTRS)

    Merifield, P. M. (Principal Investigator); Lamar, D. L.; Gazley, C., Jr.; Lamar, J. V.; Stratton, R. H.

    1976-01-01

    The author has identified the following significant results. Four previously unknown faults were discovered in basement terrane of the Peninsular Ranges. These have been named the San Ysidro Creek fault, Thing Valley fault, Canyon City fault, and Warren Canyon fault. In addition, fault gouge and breccia were recognized along the San Diego River fault. Study of features on Skylab imagery and review of geologic and seismic data suggest that the risk of a damaging earthquake is greater along the northwestern portion of the Elsinore fault than along the southeastern portion. Physiographic indicators of active faulting along the Garlock fault identifiable in Skylab imagery include scarps, linear ridges, shutter ridges, faceted ridges, linear valleys, undrained depressions and offset drainage. The following previously unrecognized fault segments are postulated for the Salton Trough Area: (1) An extension of a previously known fault in the San Andreas fault set located southeast of the Salton Sea; (2) An extension of the active San Jacinto fault zone along a tonal change in cultivated fields across Mexicali Valley (the tonal change may represent different soil conditions along opposite sides of a fault). For the Skylab and LANDSAT images studied, pseudocolor transformations offer no advantages over the original images in the recognition of faults. Alluvial deposits of different ages, a marble unit and iron oxide gossans of the Mojave Mining District are more readily differentiated on images prepared from ratios of individual bands of the S-192 multispectral scanner data. The San Andreas fault was also made more distinct in the 8/2 and 9/2 band ratios by enhancement of vegetation differences on opposite sides of the fault. Preliminary analysis indicates a significant earth resources potential for the discrimination of soil and rock types, including mineral alteration zones. This application should be actively pursued.

  6. The evolution of fabric with displacement in natural brittle faults

    NASA Astrophysics Data System (ADS)

    Mittempergher, S.; Di Toro, G.; Gratier, J.; Aretusini, S.; Boullier-Bertrand, A.

    2011-12-01

    In experiments performed at room temperature on gouges, a characteristic clast size distribution (CSD) is produced with increasing strain, and shear localization is documented to begin after a few millimetres of sliding. But in natural faults active at depth in the crust, mechanical processes are associated with fluid-rock interactions, which might control the deformation and strength recovery. We aim to investigate the microstructural, geochemical and mineralogical evolution of low-displacement faults with increasing shear strain. The faults (cataclasite- and pseudotachylyte-bearing) are hosted in tonalite and were active at 9-11 km depth and 250-300°C. The samples were collected on a large glacier-polished outcrop, where major faults (accommodating up to 4300 mm of displacement) exploit pre-existing magmatic joints and are connected by a network of secondary fractures and faults (accommodating up to 500 mm of displacement) breaking intact tonalite. We performed optical and cathodoluminescence (CL) microscopy, scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDS), Rietveld X-ray powder diffraction and microprobe chemical analyses on deformation zones of secondary faults with various offsets, in order to evaluate the transfer of chemical species between dissolution zones and protected zones. Image analysis techniques were applied to SEM-BSE and optical microscope images to compute the CSD in samples that experienced increasing amounts of strain. The secondary fractures are up to 5 mm thick. Within the first 20 mm of displacement, shear localizes along Y and R1 surfaces and a cataclastic foliation develops. The CSD evolves from a fractal dimension D of 1.3 in fractures without visible displacement to values above 2 after the first 500 mm of displacement. Chemical maps and CL images indicate that the foliation in the cataclasite results from the rotation and fragmentation of clasts, with dissolution of quartz and passive concentration of Ti oxides

  7. A distributed fault-tolerant signal processor /FTSP/

    NASA Astrophysics Data System (ADS)

    Bonneau, R. J.; Evett, R. C.; Young, M. J.

    1980-01-01

    A digital fault-tolerant signal processor (FTSP), an example of a self-repairing programmable system, is analyzed. The design configuration is discussed in terms of fault tolerance, system-level fault detection, isolation and common memory. Special attention is given to the FDIR (fault detection, isolation and reconfiguration) logic, noting that the reconfiguration decisions are based on configuration, summary status, end-around tests, and north marker/synchro data. Several mechanisms of fault detection are described which initiate reconfiguration at different levels. It is concluded that the reliability of a signal processor can be significantly enhanced by the use of fault-tolerant techniques.

  8. Active faulting in apparently stable peninsular India: Rift inversion and a Holocene-age great earthquake on the Tapti Fault

    NASA Astrophysics Data System (ADS)

    Copley, Alex; Mitra, Supriyo; Sloan, R. Alastair; Gaonkar, Sharad; Reynolds, Kirsty

    2014-08-01

    We present observations of active faulting within peninsular India, far from the surrounding plate boundaries. Offset alluvial fan surfaces indicate one or more magnitude 7.6-8.4 thrust-faulting earthquakes on the Tapti Fault (Maharashtra, western India) during the Holocene. The high ratio of fault displacement to length on the alluvial fan offsets implies high stress-drop faulting, as has been observed elsewhere in the peninsula. The along-strike extent of the fan offsets is similar to the thickness of the seismogenic layer, suggesting a roughly equidimensional fault rupture. The subsiding footwall of the fault is likely to have been responsible for altering the continental-scale drainage pattern in central India and creating the large west flowing catchment of the Tapti river. A preexisting sedimentary basin in the uplifting hanging wall implies that the Tapti Fault was active as a normal fault during the Mesozoic and has been reactivated as a thrust, highlighting the role of preexisting structures in determining the rheology and deformation of the lithosphere. The slip sense of faults and earthquakes in India suggests that deformation south of the Ganges foreland basin is driven by the compressive force transmitted between India and the Tibetan Plateau. The along-strike continuation of faulting to the east of the Holocene ruptures we have studied represents a significant seismic hazard in central India.

  9. Fault zone structure and fluid-rock interaction of a high angle normal fault in Carrara marble (NW Tuscany, Italy)

    NASA Astrophysics Data System (ADS)

    Molli, G.; Cortecci, G.; Vaselli, L.; Ottria, G.; Cortopassi, A.; Dinelli, E.; Mussi, M.; Barbieri, M.

    2010-09-01

    We studied the geometry, intensity of deformation and fluid-rock interaction of a high-angle normal fault within the Carrara marble in the Alpi Apuane, NW Tuscany, Italy. The fault comprises a core bounded by two major, non-parallel slip surfaces. The fault core, marked by crush breccia and cataclasites, grades asymmetrically into the host protolith through a damage zone, which is well developed only in the footwall block. In contrast, the transition from the fault core to the hangingwall protolith is sharply defined by the upper main slip surface. Faulting was associated with fluid-rock interaction, as evidenced by kinematically related veins observable in the damage zone and by fluid channelling within the fault core, where an orange-brownish cataclasite matrix can be observed. A chemical and isotopic study of the veins and of the different structural elements of the fault zone (protolith, damage zone and fault core), including a mathematical model, was performed to document the type, role, and activity of fluid-rock interactions during deformation. The results of our studies suggest that the deformation pattern was mainly controlled by processes associated with a linking-damage zone at a fault tip, development of a fault core, and localization and channelling of fluids within the fault zone. Syn-kinematic microstructural modification of the calcite microfabric possibly played a role in confining fluid percolation.

  10. Structural setting and kinematics of Nubian fault system, SE Western Desert, Egypt: An example of multi-reactivated intraplate strike-slip faults

    NASA Astrophysics Data System (ADS)

    Sakran, Shawky; Said, Said Mohamed

    2018-02-01

    Detailed surface geological mapping and subsurface seismic interpretation have been integrated to unravel the structural style and kinematic history of the Nubian Fault System (NFS). The NFS consists of several E-W Principal Deformation Zones (PDZs) (e.g. the Kalabsha fault). Each PDZ is defined by spectacular E-W, WNW and ENE dextral strike-slip faults, NNE sinistral strike-slip faults, NE to ENE folds, and NNW normal faults. Each fault zone has a typical self-similar strike-slip architecture comprising multi-scale fault segments. Several multi-scale uplifts and basins were developed at the step-over zones between parallel strike-slip fault segments as a result of local extension or contraction. The NNE faults consist of right-stepping sinistral strike-slip fault segments (e.g. the Sin El Kiddab fault). The NNE sinistral faults extend for long distances, ranging from 30 to 100 km, and cut one or two E-W PDZs. Two nearly perpendicular strike-slip tectonic regimes are recognized in the NFS: an inactive E-W Late Cretaceous - Early Cenozoic dextral transpression and an active NNE sinistral shear.

  11. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.
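
    A minimal structural sketch, with randomly initialized placeholder weights, of the three-stage decomposition described above (sensor-data compression, hypothesis generation, sensor fusion); in the actual approach each mapping is trained separately on simulator data, and the layer sizes and novelty threshold used here are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def layer(n_in, n_out):
          """Random weights as placeholders; each mapping would be trained separately."""
          return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

      def forward(x, params):
          w, b = params
          return np.tanh(x @ w + b)

      n_sensors, n_compressed, n_hypotheses, n_faults = 32, 8, 12, 6
      compress = layer(n_sensors, n_compressed)        # sensor-data compression
      hypothesize = layer(n_compressed, n_hypotheses)  # hypothesis generation
      fuse = layer(n_hypotheses, n_faults)             # sensor fusion / fault scores

      x = rng.normal(size=(1, n_sensors))              # one frame of simulator sensor data
      scores = forward(forward(forward(x, compress), hypothesize), fuse)

      # Flag a possible unknown fault when no trained fault class scores strongly (assumed threshold)
      novel_fault = bool(np.max(scores) < 0.2)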

  12. Active tectonics of the Imperial Valley, southern California: fault damage zones, complex basins and buried faults

    NASA Astrophysics Data System (ADS)

    Persaud, P.; Ma, Y.; Stock, J. M.; Hole, J. A.; Fuis, G. S.; Han, L.

    2016-12-01

    Ongoing oblique slip at the Pacific-North America plate boundary in the Salton Trough produced the Imperial Valley. Deformation in this seismically active area is distributed across a complex network of exposed and buried faults resulting in a largely unmapped seismic hazard beneath the growing population centers of El Centro, Calexico and Mexicali. To better understand the shallow crustal structure in this region and the connectivity of faults and seismicity lineaments, we used data primarily from the Salton Seismic Imaging Project (SSIP) to construct a P-wave velocity profile to 15 km depth, and a 3-D velocity model down to 8 km depth including the Brawley Geothermal area. We obtained detailed images of a complex wedge-shaped basin at the southern end of the San Andreas Fault system. Two deep subbasins (VP <5.65 km/s) are located in the western part of the larger Imperial Valley basin, where seismicity trends and active faults play a significant role in shaping the basin edge. Our 3-D VP model reveals previously unrecognized NE-striking cross faults that are interacting with the dominant NW-striking faults to control deformation. New findings in our profile include localized regions of low VP (thickening of a 5.65-5.85 km/s layer) near faults or seismicity lineaments interpreted as possibly faulting-related. Our 3-D model and basement map reveal velocity highs associated with the geothermal areas in the eastern valley. The improved seismic velocity model from this study, and the identification of important unmapped faults or buried interfaces will help refine the seismic hazard for parts of Imperial County, California.

  13. An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.

    PubMed

    Yang, Yifei; Tan, Minjia; Dai, Yuewei

    2017-01-01

    In practice, the fault monitoring signals of ship power equipment usually provide few samples, and the data features are nonlinear. This paper adopts the least squares support vector machine (LSSVM) to deal with the problem of fault pattern identification in the case of small-sample data. Meanwhile, in order to avoid the local extrema and poor convergence precision induced by optimizing the kernel function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm improves the recognition probability and the searching step length, which can effectively solve the problems of slow searching speed and low calculation accuracy of the CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.
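
    The paper's dynamic adaptive improvements are not detailed in the abstract, so the following is only a generic sketch of a basic Cuckoo Search with Lévy flights tuning two hyperparameters; the toy quadratic objective stands in for the LSSVM cross-validation error over the kernel parameter and penalty factor, and all search settings are assumed.

      import numpy as np

      rng = np.random.default_rng(1)

      def levy_step(size, beta=1.5):
          """Mantegna's algorithm for Levy-flight step lengths."""
          from math import gamma, sin, pi
          sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0, sigma, size)
          v = rng.normal(0, 1, size)
          return u / np.abs(v) ** (1 / beta)

      def objective(params):
          """Stand-in for LSSVM cross-validation error over (log10 gamma, log10 C)."""
          g, c = params
          return (g - 0.5) ** 2 + (c - 2.0) ** 2      # assumed toy surface

      def cuckoo_search(obj, bounds, n_nests=15, n_iter=100, pa=0.25, alpha=0.1):
          lo, hi = np.array(bounds).T
          nests = rng.uniform(lo, hi, size=(n_nests, len(lo)))
          fitness = np.array([obj(n) for n in nests])
          for _ in range(n_iter):
              best = nests[np.argmin(fitness)]
              # Levy-flight moves biased toward the current best nest
              new = np.clip(nests + alpha * levy_step(nests.shape) * (nests - best), lo, hi)
              new_fit = np.array([obj(n) for n in new])
              improved = new_fit < fitness
              nests[improved], fitness[improved] = new[improved], new_fit[improved]
              # Abandon a fraction pa of nests (discovery of the cuckoo egg by the host bird)
              abandon = rng.random(n_nests) < pa
              nests[abandon] = rng.uniform(lo, hi, size=(abandon.sum(), len(lo)))
              fitness[abandon] = np.array([obj(n) for n in nests[abandon]])
          return nests[np.argmin(fitness)], fitness.min()

      best_params, best_err = cuckoo_search(objective, bounds=[(-3, 3), (-2, 4)])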

  14. Paleoseismology of the Mt. Narryer Fault Zone, West Central Western Australia: a Multi-Segment Intraplate Fault System

    NASA Astrophysics Data System (ADS)

    Whitney, B. B.; Clark, D.; Hengesh, J.

    2014-12-01

    The Western Australia shear zone (WASZ) is a 2000 km long fault system within the intraplate region of Australia. A paleoseismological study of the faults and fault-related folds comprising the Mount Narryer fault zone (MNfz) in the southern WASZ reveals a late Quaternary history of repeated morphogenic earthquakes that has profoundly influenced the planform and course of the Murchison, Roderick, and Sanford Rivers. Folding of the near-surface sediments is the predominant style of surface expression of the reactivated basement faults, which is consistent with other neotectonic structures throughout the Western Australia shear zone. CRN and OSL estimates of exposure and burial ages of fault-related folds and fold-derived colluvium constrain late Quaternary slip rates on the underlying faults to ~0.05-0.1 mm/a. In the case of the Roderick River fault scarp, 2-3 m high tectonic risers separating inset terraces where the Murchison River crosses the scarp are consistent with multiple late Quaternary seismic events on the order of magnitude Mw 7.1-7.3. Mid-Pleistocene ages of tectonically deformed strata in the MNfz are consistent with the timing of collision between the Australian extended margin and the Savu-Rote ridge at 0.2-1.8 Ma.

  15. Dynamic modeling of gearbox faults: A review

    NASA Astrophysics Data System (ADS)

    Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng

    2018-01-01

    Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions or inevitable fatigue, faults may develop in gears. If gear faults cannot be detected early, the gear health will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure, and consequently results in safer operation and greater cost reduction. Recently, many studies have been conducted to develop gearbox dynamic models with faults, aiming to understand gear fault generation mechanisms and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and remaining challenges are reviewed and discussed. This detailed literature review covers the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.

  16. Stability of fault submitted to fluid injections

    NASA Astrophysics Data System (ADS)

    Brantut, N.; Passelegue, F. X.; Mitchell, T. M.

    2017-12-01

    Elevated pore pressure can lead to slip reactivation on pre-existing fractures and faults when the Coulomb failure point is reached. From a static point of view, the reactivation of a fault subjected to a background shear stress (τ0) is a function of the peak strength of the fault, i.e. the quasi-static effective friction coefficient (µeff). However, this theory is valid only when the entire fault is affected by fluid pressure, which is not the case in nature or during human-induced seismicity. In this study, we present new results on the influence of the injection rate on the stability of faults. Experiments were conducted on a saw-cut sample of Westerly granite. The experimental fault was 8 cm in length. Injections were conducted through a 2 mm diameter hole reaching the fault surface. Experiments were conducted at fluid pressure injection rates spanning four orders of magnitude (from 1 MPa/minute to 1 GPa/minute), in a fault system subjected to 50 and 100 MPa confining pressure. Our results show that the peak fluid pressure leading to slip depends on the injection rate: the faster the injection rate, the larger the peak fluid pressure leading to instability. Wave velocity surveys across the fault show that decreasing the injection rate leads to an increase in the size of the fluid pressure perturbation. Our results demonstrate that the stability of the fault is not only a function of the fluid pressure required to reach the failure criterion, but is mainly a function of the ratio between the length of the fault affected by fluid pressure and the total fault length. In addition, we show that the slip rate increases with the background effective stress and with the intensity of the fluid pressure perturbation, i.e. with the excess shear stress acting on the part of the fault perturbed by the fluid injection. Our results suggest that crustal faults can be reactivated by local high fluid overpressures. These results could explain the "large" magnitude human-induced earthquakes
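
    A minimal sketch, with assumed stress values, of the static view referred to in the abstract: slip is reactivated when the Coulomb criterion τ0 ≥ µeff(σn - p) is met, giving a nominal critical pore pressure for a fault that is pressurized over its whole surface.

      # Static Coulomb reactivation on a pre-existing fault (all values assumed)
      sigma_n = 100.0   # normal stress on the fault, MPa
      tau_0 = 45.0      # resolved background shear stress, MPa
      mu_eff = 0.6      # quasi-static effective friction coefficient

      # Critical pore pressure if the whole fault felt the fluid pressure
      p_crit = sigma_n - tau_0 / mu_eff
      print(f"critical pore pressure = {p_crit:.1f} MPa")
      # The experiments show the real trigger also depends on how much of the
      # fault is pressurized, which this single-patch estimate ignores.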

  17. The stress shadow effect: a mechanical analysis of the evenly-spaced parallel strike-slip faults in the San Andreas fault system

    NASA Astrophysics Data System (ADS)

    Zuza, A. V.; Yin, A.; Lin, J. C.

    2015-12-01

    Parallel evenly-spaced strike-slip faults are prominent in the southern San Andreas fault system, as well as other settings along plate boundaries (e.g., the Alpine fault) and within continental interiors (e.g., the North Anatolian, central Asian, and northern Tibetan faults). In southern California, the parallel San Jacinto, Elsinore, Rose Canyon, and San Clemente faults to the west of the San Andreas are regularly spaced at ~40 km. In the Eastern California Shear Zone, east of the San Andreas, faults are spaced at ~15 km. These characteristic spacings provide unique mechanical constraints on how the faults interact. Despite the common occurrence of parallel strike-slip faults, the fundamental questions of how and why these fault systems form remain unanswered. We address this issue by using the stress shadow concept of Lachenbruch (1961)—developed to explain extensional joints by using the stress-free condition on the crack surface—to present a mechanical analysis of the formation of parallel strike-slip faults that relates fault spacing and brittle-crust thickness to fault strength, crustal strength, and the crustal stress state. We discuss three independent models: (1) a fracture mechanics model, (2) an empirical stress-rise function model embedded in a plastic medium, and (3) an elastic-plate model. The assumptions and predictions of these models are quantitatively tested using scaled analogue sandbox experiments that show that strike-slip fault spacing is linearly related to the brittle-crust thickness. We derive constraints on the mechanical properties of the southern San Andreas strike-slip faults and fault-bounded crust (e.g., local fault strength and crustal/regional stress) given the observed fault spacing and brittle-crust thickness, which is obtained by defining the base of the seismogenic zone with high-resolution earthquake data. Our models allow direct comparison of the parallel faults in the southern San Andreas system with other similar strike

  18. Comparison of the quench and fault current limiting characteristics of the flux-coupling type SFCL with single and three-phase transformer

    NASA Astrophysics Data System (ADS)

    Jung, Byung Ik; Cho, Yong Sun; Park, Hyoung Min; Chung, Dong Chul; Choi, Hyo Sang

    2013-01-01

    The South Korean power grid has a network structure for the flexible operation of the system. Continuously increasing power demand has necessitated an increase in power facilities, which decreased the impedance of the power system. As a result, the size of the fault current in the event of a system fault increased. Because this increased fault current threatens the breaking capacity of the circuit breaker, the main protective device, a solution to this problem is needed. The superconducting fault current limiter (SFCL) has been designed to address this problem. The SFCL supports the stable operation of the circuit breaker through its excellent fault-current-limiting operation [1-5]. In this paper, the quench and fault current limiting characteristics of the flux-coupling-type SFCL with one three-phase transformer are compared with those of the same SFCL type with three single-phase transformers. In the case of the three-phase transformer, the superconducting elements of both the faulted and sound phases were quenched, whereas in the case of the single-phase transformers, only the element of the faulted phase was quenched. For the fault current limiting rate, both cases showed similar rates for the single line-to-ground fault, but for the three-wire earth fault, the fault current limiting rate with the single-phase transformers was over 90%, whereas that with the three-phase transformer was about 60%. It appears that when the three-phase transformer was used, the limiting rate decreased because the fluxes driven by the fault currents of each phase were linked in one core. When the power loads of the superconducting elements were compared by fault type, the initial (half-cycle) load was greater when the single-phase transformers were applied, whereas for the three-phase transformer the power load was slightly lower at the initial stage but became greater after the first half cycle of the fault.

  19. Interseismic Strain Accumulation of the Gazikoy-Saros segment (Ganos fault) of the North Anatolian Fault Zone

    NASA Astrophysics Data System (ADS)

    Havazli, E.; Wdowinski, S.; Amelung, F.

    2017-12-01

    The North Anatolian Fault Zone (NAFZ) is one of the most active continental transform faults in the world. A westward-migrating earthquake sequence started in 1939 in Erzincan, and the last two events of this sequence occurred in 1999 in Izmit and Duzce, underscoring the importance of the NAFZ for the seismic hazard potential of the region. The NAFZ is a right-lateral strike-slip fault with slip rates of 14-30 mm/yr along its 1500 km length. East of the Marmara Sea, the NAFZ splits into two branches. The Gazikoy-Saros segment (Ganos Fault) is the westernmost onshore segment of the northern branch. The ENE-WSW oriented Ganos Fault is seismically active. It produced a Ms 7.2 earthquake in 1912, which was followed by several large aftershocks, including Ms 6.3 and Ms 6.9 events. Since 1912, the Ganos Fault has not produced any significant earthquakes (> M 5), in contrast to its adjacent segments, which produced 20 M > 5 earthquakes, including a M 6.7 event offshore in the Gulf of Saros. Interseismic strain accumulation along the Ganos Fault has previously been assessed from sparse GPS measurements along a single transect perpendicular to the fault zone, suggesting a strain accumulation rate of 20-25 mm/yr. So far, InSAR studies based on C-band data have not produced conclusive results because of low coherence over the fault zone area, which is highly vegetated. In this study, we present a detailed interseismic velocity map of the Ganos Fault zone derived from L-band InSAR observations. We use 21 ALOS PALSAR scenes acquired over a 5-year period, from 2007 to 2011. We processed the ALOS data using the PySAR software, which is the University of Miami version of the Small Baseline (SB) method. The L-band observations enabled us to overcome the coherence issue in the study area. Our initial results indicate a maximum velocity of 15 mm/yr across the fault zone. The high spatial resolution of the InSAR-based interseismic velocity map will enable us to better

  20. Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mori, J. J.

    2009-12-01

    Current earthquake early warning systems assume point source models for the rupture. However, for large earthquakes, the fault rupture length can be of the order of tens to hundreds of kilometers, and predicting the ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a site if the station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong motion records (Yamada et al., 2007). Here, we defined the near-source region as an area with a fault rupture distance less than 10 km. If we have ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The fault rupture location information is thus a set of points at the station locations. For practical purposes, however, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the NS/FS classification methodology to characterize 2-dimensional fault geometries and apply it to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, and convert the point fault locations to 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1. Furthermore, we illustrate our method with strong motion data of the
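
    The near-source discriminant quoted above maps peak vertical acceleration and peak horizontal velocity to a probability; the sketch below evaluates that mapping directly (the units of Za and Hv follow Yamada et al., 2007 and are assumed here to be cm/s² and cm/s).

      import math

      def near_source_probability(za, hv):
          """Probability that a station lies within 10 km of the fault rupture.

          za : peak vertical acceleration
          hv : peak horizontal velocity
          Units follow Yamada et al. (2007); cm/s^2 and cm/s are assumed here.
          """
          f = 6.046 * math.log10(za) + 7.885 * math.log10(hv) - 27.091
          return 1.0 / (1.0 + math.exp(-f))

      # Example: a strong record close to the rupture vs. a weaker distant one
      print(near_source_probability(za=500.0, hv=50.0))   # likely near-source
      print(near_source_probability(za=20.0, hv=2.0))     # likely far-source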