Science.gov

Sample records for fault protection practices

  1. Protecting Against Faults in JPL Spacecraft

    NASA Technical Reports Server (NTRS)

    Morgan, Paula

    2007-01-01

    A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA's Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission-specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft into a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.

  2. Methodology for Designing Fault-Protection Software

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin

    2006-01-01

    A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery, and has been successfully implemented in the Deep Impact spacecraft, a NASA Discovery mission. Based on the established concepts of Fault Monitors and Responses, this FP methodology extends the notions of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion categorized as no-opinion, acceptable, or unacceptable. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of a Symptom and its mapping to an Alarm (aka Fault). A Local Response is distinguished from an FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized, step-by-step fashion, relegating more system-level responses to later tiers. Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, a MaxRetry gate, hardware availability, hazardous-versus-ordinary fault classification, and many other priority gates. The methodology is systematic and logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is established via a "top-down" fault-tree analysis and a "bottom-up" functional failure-modes-and-effects analysis. Through this process, the mitigation and recovery strategies for each Fault Containment Region scope the FP architecture in both width and depth.
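
    The Monitor-Opinion-Symptom-Response chain described above can be illustrated with a minimal sketch. This is not the Deep Impact flight code: the thresholds, persistence counts, and response names are invented for illustration, but the graduation of a RawOpinion into an Opinion, the raising/clearing of a Symptom, and the MaxRetry-gated tiering of Responses follow the abstract's description.

```python
from dataclasses import dataclass
from enum import Enum

class Opinion(Enum):
    NO_OPINION = 0      # not enough evidence yet (RawOpinion not graduated)
    ACCEPTABLE = 1
    UNACCEPTABLE = 2

@dataclass
class Monitor:
    """Turns raw measurements into a graduated Opinion via a persistence filter."""
    threshold: float
    persistence: int            # consecutive bad samples needed to graduate
    _bad_count: int = 0

    def evaluate(self, raw: float) -> Opinion:
        if raw > self.threshold:
            self._bad_count += 1
        else:
            self._bad_count = 0
        if self._bad_count == 0:
            return Opinion.ACCEPTABLE
        if self._bad_count < self.persistence:
            return Opinion.NO_OPINION
        return Opinion.UNACCEPTABLE

@dataclass
class Symptom:
    """Raised/cleared by monitor opinions; would map onto an Alarm (aka Fault)."""
    name: str
    raised: bool = False

    def update(self, opinion: Opinion) -> None:
        if opinion is Opinion.UNACCEPTABLE:
            self.raised = True          # RaiseSymptom
        elif opinion is Opinion.ACCEPTABLE:
            self.raised = False         # ClearSymptom

def select_response(symptom: Symptom, retry_count: int, max_retry: int) -> str:
    """Tiered response selection: local tiers first, system-level tier after
    the MaxRetry gate is exhausted."""
    if not symptom.raised:
        return "none"
    if retry_count < max_retry:         # MaxRetry gate
        return "local-response"
    return "system-response"
```

    Here the persistence filter plays the role of "graduating" a RawOpinion: a single bad sample yields NO_OPINION, and only a sustained violation becomes UNACCEPTABLE.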

  3. The Curiosity Mars Rover's Fault Protection Engine

    NASA Technical Reports Server (NTRS)

    Benowitz, Ed

    2014-01-01

    The Curiosity Rover, currently operating on Mars, contains flight software onboard to autonomously handle aspects of system fault protection. Over 1000 monitors and 39 responses are present in the flight software. Orchestrating these behaviors is the flight software's fault protection engine. In this paper, we discuss the engine's design and responsibilities, and present some lessons learned for future missions.
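
    The arbitration job of such an engine, deciding which of many competing responses to run when monitors trip, can be sketched as follows. This is a hypothetical illustration, not the Curiosity flight software: the monitor names, response names, and the priority-preemption rule are all assumptions.

```python
def fp_engine_step(monitor_flags, response_table, active):
    """One tick of a simple fault-protection engine.

    monitor_flags:  dict monitor_name -> bool (tripped this cycle)
    response_table: dict monitor_name -> (priority, response_name);
                    `active` must be a response_name from this table or None.
    active:         name of the currently running response, or None

    Returns the response to run. A lower priority number preempts a
    higher one; an already-active response is not restarted.
    """
    candidates = [response_table[m] for m, tripped in monitor_flags.items()
                  if tripped and m in response_table]
    if not candidates:
        return active
    priority, response = min(candidates)
    if active is None:
        return response
    active_priority = min(p for p, r in response_table.values() if r == active)
    return response if priority < active_priority else active
```

    Lower-numbered priorities preempt, so a tripped attitude monitor can take over from an in-progress thermal response while a lower-urgency trip is ignored until the active response completes.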

  4. Fault Locating, Prediction and Protection (FLPPS)

    SciTech Connect

    Yinger, Robert, J.; Venkata, S., S.; Centeno, Virgilio

    2010-09-30

    One of the main objectives of this DOE-sponsored project was to reduce customer outage time. Fault location, prediction, and protection are the most important aspects of fault management for the reduction of outage time. In the past, most of the research and development on power system faults in these areas focused on transmission systems; it is only recently, with deregulation and competition, that such research has begun to address the unique aspects of distribution systems. The project was planned in three phases of approximately one year each. The first phase involved an assessment of the state of the art in fault location, prediction, and detection, as well as the design, lab testing, and field installation of the advanced protection system on the SCE Circuit of the Future located north of San Bernardino, CA. The new feeder automation scheme, with vacuum fault interrupters, limits the number of customers affected by a fault; depending on the fault location, the substation breaker might not even trip. Through the use of fast (fiber) communications, fault locations can be determined and the proper fault-interrupting switches opened automatically. With knowledge of circuit loadings at the time of the fault, ties to other circuits can be closed automatically to restore all customers except those on the faulted section. This new automation scheme limits outage time and increases reliability for customers. The second phase of the project involved the selection, modeling, testing, and installation of a fault current limiter on the Circuit of the Future. While this project did not pay for the installation and testing of the fault current limiter, it did perform the evaluation of the fault current limiter and its impacts on the protection system of the Circuit of the Future. After investigation of several fault current limiters, the Zenergy superconducting, saturable-core fault current limiter was selected for

  5. Fault Management Practice: A Roadmap for Improvement

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Oberhettinger, David

    2010-01-01

    Autonomous fault management (FM) is critical for deep space and planetary missions where the limited communication opportunities may prevent timely intervention by ground control. Evidence of pervasive architecture, design, and verification/validation problems with NASA FM engineering has been revealed both during technical reviews of spaceflight missions and in flight. These problems include FM design changes required late in the life-cycle, insufficient project insight into the extent of FM testing required, unexpected test results that require resolution, spacecraft operational limitations because certain functions were not tested, and in-flight anomalies and mission failures attributable to fault management. A recent NASA initiative has characterized the FM state-of-practice throughout the spacecraft development community and identified common NASA, DoD, and commercial concerns that can be addressed in the near term through the development of a FM Practitioner's Handbook and the formation of a FM Working Group. Initial efforts will focus on standardizing FM terminology, establishing engineering processes and tools, and training.

  6. Staged-Fault Testing of Distance Protection Relay Settings

    NASA Astrophysics Data System (ADS)

    Havelka, J.; Malarić, R.; Frlan, K.

    2012-01-01

    In order to analyze the operation of the protection system during induced fault testing in the Croatian power system, a simulation using the CAPE software has been performed. The CAPE software (Computer-Aided Protection Engineering) is expert software intended primarily for relay protection engineers, which calculates current and voltage values during faults in the power system, so that relay protection devices can be properly set up. Once the accuracy of the simulation model had been confirmed, a series of simulations were performed in order to obtain the optimal fault location to test the protection system. The simulation results were used to specify the test sequence definitions for the end-to-end relay testing using advanced testing equipment with GPS synchronization for secondary injection in protection schemes based on communication. The objective of the end-to-end testing was to perform field validation of the protection settings, including verification of the circuit breaker operation, telecommunication channel time and the effectiveness of the relay algorithms. Once the end-to-end secondary injection testing had been completed, the induced fault testing was performed with three-end lines loaded and in service. This paper describes and analyses the test procedure, consisting of CAPE simulations, end-to-end test with advanced secondary equipment and staged-fault test of a three-end power line in the Croatian transmission system.

  7. Protection - Principles and practice.

    NASA Technical Reports Server (NTRS)

    Graham, G. S.; Denning, P. J.

    1972-01-01

    The protection mechanisms of computer systems control the access to objects, especially information objects. The principles of protection system design are formalized as a model (theory) of protection. Each process has a unique identification number which is attached by the system to each access attempted by the process. Details of system implementation are discussed, taking into account the storing of the access matrix, aspects of efficiency, and the selection of subjects and objects. Two systems which have protection features incorporating all the elements of the model are described.

  8. Practical application of fault tree analysis

    SciTech Connect

    Prugh, R.W.

    1980-01-01

    A detailed survey of standard and novel approaches to Fault Tree construction, based on recent developments at Du Pont, covers the effect-to-cause procedure for control systems as in process plants; the effect-to-cause procedure for processes; source-of-hazard analysis, as in pressure vessel rupture; use of the "fire triangle" in a Fault Tree; critical combinations of safeguard failures; action points for automatic or operator control of a process; situations involving hazardous reactant ratios; failure-initiating and failure-enabling events and intervention by the operator; "daisy-chain" hazards, e.g., in batch processes and ship accidents; combining batch and continuous operations in a Fault Tree; possible future structure-development procedures for fault-tree construction; and the use of quantitative results (calculated frequencies of Top-Event occurrence) to restructure the Fault Tree after improving the process to an acceptable risk level.
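
    The quantitative results mentioned at the end, frequencies or probabilities of Top-Event occurrence, come from evaluating the tree's gates over the basic-event values. A minimal sketch under the usual independence assumption (the tree encoding and the fire-triangle example values are illustrative, not from the article):

```python
def tree_probability(node):
    """Top-event probability of a fault tree under independent basic events.

    A node is either a number (basic-event probability) or a tuple
    ('AND' | 'OR', [children]).  OR uses 1 - prod(1 - p), the exact
    union probability for independent events.
    """
    if isinstance(node, (int, float)):
        return float(node)
    gate, children = node
    probs = [tree_probability(c) for c in children]
    if gate == 'AND':
        out = 1.0
        for p in probs:
            out *= p
        return out
    if gate == 'OR':
        out = 1.0
        for p in probs:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError(f"unknown gate {gate!r}")
```

    For example, modeling the "fire triangle" as an AND gate over hypothetical fuel, oxidizer, and ignition probabilities of 0.1, 1.0, and 0.01 gives a top-event probability of 0.001; gates nest, so subtrees of safeguard failures combine the same way.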

  9. The impact of no-fault compensation on the regulation of medical practice in New Zealand.

    PubMed

    Collins, D B

    1993-01-01

    On 1 April 1974 New Zealand's no-fault accident compensation scheme commenced. Since then doctors in New Zealand have been protected from civil claims for damages which arise directly or indirectly from medical misadventure. During this period disciplinary complaints against doctors have risen substantially but there is no empirical evidence that New Zealand's no-fault accident compensation scheme has adversely affected the regulation of medical practice in that country. PMID:8377622

  10. Fault-tolerant reactor protection system

    DOEpatents

    Gaubatz, Donald C.

    1997-01-01

    A reactor protection system having four divisions, with quad redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Each division performs independently of the others (asynchronous operation). All communications between the divisions are asynchronous. Each chassis substitutes its own spare sensor reading in the 2/3 vote if a sensor reading from one of the other chassis is faulty or missing. Therefore the presence of at least two valid sensor readings in excess of a set point is required before terminating the output to the hardware logic of a scram inhibition signal even when one of the four sensors is faulty or when one of the divisions is out of service.
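
    The two-level vote described above can be sketched directly. This is an illustrative reading of the scheme, not the patented implementation; in the real system each division runs asynchronously on its own snapshot of the four sensor readings, which is why the sketch passes a per-division reading list.

```python
def division_vote(readings, setpoint):
    """2/3-with-spare divisional vote over four sensor readings.

    readings: list of four values; None marks a faulty/missing reading.
    The spare (fourth valid reading) substitutes for a bad one, so a
    trip requires at least two valid readings exceeding the setpoint.
    """
    valid = [r for r in readings if r is not None]
    return sum(1 for r in valid if r > setpoint) >= 2

def scram(per_division_readings, setpoint):
    """2/4 hardware-logic vote over the four divisional decisions."""
    decisions = [division_vote(r, setpoint) for r in per_division_readings]
    return sum(decisions) >= 2
```

    With this structure a single failed sensor cannot cause a divisional trip on its own, and a single out-of-service division cannot block (or cause) a reactor scram, matching the availability and false-scram goals stated in the abstract.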

  11. Fault-tolerant reactor protection system

    DOEpatents

    Gaubatz, D.C.

    1997-04-15

    A reactor protection system is disclosed having four divisions, with quad redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Each division performs independently of the others (asynchronous operation). All communications between the divisions are asynchronous. Each chassis substitutes its own spare sensor reading in the 2/3 vote if a sensor reading from one of the other chassis is faulty or missing. Therefore the presence of at least two valid sensor readings in excess of a set point is required before terminating the output to the hardware logic of a scram inhibition signal even when one of the four sensors is faulty or when one of the divisions is out of service. 16 figs.

  12. MER Surface Phase; Blurring the Line Between Fault Protection and What is Supposed to Happen

    NASA Technical Reports Server (NTRS)

    Reeves, Glenn E.

    2008-01-01

    An assessment of the limitations of communication with the MER rovers and how those constraints drove the system design, flight software, and fault protection architecture, blurring the line between traditional fault protection and expected nominal behavior and requiring the most novel autonomous and semi-autonomous elements of the vehicle software, including communication, surface mobility, attitude-knowledge acquisition, fault protection, and the activity arbitration service.

  13. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established during the prototype development of a Functional Fault Model (FFM) for a cryogenic system to be used for real-time fault isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown, using a suite of complementary software tools that alert operators to anomalies and failures in real time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a cryogenic system. Through their development and review, a set of modeling conventions and best practices was established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for building robust FFMs that can easily be transitioned to a real-time operating environment.

  14. Advanced power system protection and incipient fault detection and protection of spaceborne power systems

    NASA Technical Reports Server (NTRS)

    Russell, B. Don

    1989-01-01

    This research concentrated on the application of advanced signal processing, expert system, and digital technologies for the detection and control of low grade, incipient faults on spaceborne power systems. The researchers have considerable experience in the application of advanced digital technologies and the protection of terrestrial power systems. This experience was used in the current contracts to develop new approaches for protecting the electrical distribution system in spaceborne applications. The project was divided into three distinct areas: (1) investigate the applicability of fault detection algorithms developed for terrestrial power systems to the detection of faults in spaceborne systems; (2) investigate the digital hardware and architectures required to monitor and control spaceborne power systems with full capability to implement new detection and diagnostic algorithms; and (3) develop a real-time expert operating system for implementing diagnostic and protection algorithms. Significant progress has been made in each of the above areas. Several terrestrial fault detection algorithms were modified to better adapt to spaceborne power system environments. Several digital architectures were developed and evaluated in light of the fault detection algorithms.

  15. A Generalised Fault Protection Structure Proposed for Uni-grounded Low-Voltage AC Microgrids

    NASA Astrophysics Data System (ADS)

    Bui, Duong Minh; Chen, Shi-Lin; Lien, Keng-Yu; Jiang, Jheng-Lun

    2016-04-01

    This paper presents three main configurations of uni-grounded low-voltage AC microgrids. Transient situations of a uni-grounded low-voltage (LV) AC microgrid (MG) are simulated through various fault tests and operation-transition tests between grid-connected and islanded modes. Based on the transient simulation results, available fault protection methods are proposed for main and back-up protection of a uni-grounded AC microgrid, and the concept of a generalised fault protection structure for uni-grounded LVAC MGs is presented. The main contributions of the paper are: (i) defining different uni-grounded LVAC MG configurations; (ii) analysing transient responses of a uni-grounded LVAC microgrid through line-to-line faults, line-to-ground faults, three-phase faults, and a microgrid operation-transition test; (iii) proposing available fault protection methods for uni-grounded microgrids, such as non-directional or directional overcurrent protection, under/over-voltage protection, differential current protection, voltage-restrained overcurrent protection, and other fault protection principles not based on phase currents and voltages (e.g. total harmonic distortion detection of currents and voltages, or the use of sequence components of current and voltage such as the 3I0 or 3V0 components); and (iv) developing a generalised fault protection structure with six individual protection zones suitable for different uni-grounded AC MG configurations.
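
    Among the candidate methods listed, non-directional overcurrent protection is the most conventional. As a point of reference (not taken from the paper), the IEC 60255 standard-inverse time-overcurrent characteristic often used for such relays is t = TMS * 0.14 / ((I/Is)^0.02 - 1):

```python
def idmt_trip_time(current, pickup, tms=0.1, k=0.14, alpha=0.02):
    """IEC 60255 standard-inverse overcurrent trip time in seconds.

    current: measured phase current (A); pickup: relay setting Is (A);
    tms: time-multiplier setting.  Returns None below pickup (no trip).
    """
    ratio = current / pickup
    if ratio <= 1.0:
        return None                  # relay does not pick up
    return tms * k / (ratio ** alpha - 1.0)
```

    The inverse shape means heavier fault currents trip faster, which is one reason the paper must also consider principles not based on phase current: inverter-fed microgrid faults may not produce enough current to pick such a relay up.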

  16. Grounding and fault protection of the SMUD PVI array

    NASA Technical Reports Server (NTRS)

    Rosen, D.

    1983-01-01

    If large terrestrial photovoltaic (PV) power plants are to provide an economic source of generation, low-cost, reliable, and easily maintainable systems must be developed. In addition, if the system is to be a central station powerplant owned and operated by an electric utility, the design must also be consistent with utility specifications and design standards. The particular solutions developed to address these issues, with regard to grounding and fault protection, for the first phase of the Sacramento Municipal Utility District's (SMUD) 100 MW(ac) PV powerplant are presented. This plant, known as PV1, is nominally rated at 1 MW(ac) and is scheduled to be in operation by the spring of 1984.

  17. Cathodic protection: Theory and practice

    SciTech Connect

    Ashworth, V.; Booker, C.J.L.

    1986-01-01

    This book presents an account of cathodic protection. It covers the advances made over the past decade, both in terms of understanding the complexity of the systems to which cathodic protection has been applied and assuring the reliability of the designs which have evolved. It shows how computer-validated design is superseding empirical design. The use of field gradient measurements for current output, acoustic transmission of potential data, and monitoring and surveying of cathodic protection systems are included.

  18. Practical Methods for Estimating Software Systems Fault Content and Location

    NASA Technical Reports Server (NTRS)

    Nikora, A.; Schneidewind, N.; Munson, J.

    1999-01-01

    Over the past several years, we have developed techniques to discriminate between fault-prone software modules and those that are not, to estimate a software system's residual fault content, to identify those portions of a software system having the highest estimated number of faults, and to estimate the effects of requirements changes on software quality.

  19. A new neural networks approach to on-line fault section estimation using information of protective relays and circuit breakers

    SciTech Connect

    Yang, H.T.; Chang, W.Y.; Huang, C.L. (Dept. of Electrical Engineering)

    1994-01-01

    This paper proposes a new neural-network diagnostic system for on-line fault section estimation using information from protective relays and circuit breakers. The system has a profile similar to that of an expert system, but can be constructed much more easily from elemental samples. These samples associate a fault section with its primary, local, and/or remote protective relays and breakers. The diagnostic system is applicable in the power system control center for single or multiple fault-section estimation, even in cases of relay or breaker misoperation or erroneous data transmission. The proposed approach has been verified in practice by testing on a model power system. The test results, although preliminary, suggest the system can be implemented by various electric utilities with relatively low customization effort.
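
    The abstract does not specify the network architecture, so the following is a sketch of the underlying idea only: the "elemental samples" can be matched against an observed relay/breaker status vector by nearest Hamming distance. A trained neural network generalizes the same pattern-completion behavior, tolerating a failed relay or a corrupted bit, and the section names and status patterns here are invented for illustration.

```python
def estimate_fault_section(observed, samples):
    """Nearest-pattern fault-section estimate from relay/breaker status bits.

    observed: tuple of 0/1 status bits from protective relays and breakers.
    samples:  dict section_name -> expected status tuple (elemental samples).
    Tolerates a flipped bit (relay/breaker misoperation or bad telemetry)
    by picking the section whose pattern has minimum Hamming distance.
    """
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(samples, key=lambda s: hamming(observed, samples[s]))
```

    A pattern with one flipped bit, e.g. a breaker that failed to report, still maps to the correct section, which is the robustness property the abstract highlights.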

  20. Considerations for ground fault protection in medium-voltage industrial and cogeneration systems

    SciTech Connect

    Love, D.J.; Hashemi, N.

    1988-07-01

    Industrial plants utilize medium-voltage systems for in-plant distribution of purchased and cogenerated electrical energy. During the planning stage, system protection is generally specified, including the type of source neutral grounding and ground fault protection. Where medium-voltage systems have expanded, circuit-breaker interrupting ratings have also been increased; accordingly, grounding considerations should be reviewed, particularly because charging and/or ground fault current values have also increased. The typical methods for grounding medium-voltage system neutrals - high resistance, low resistance, and ungrounded - as well as methods used to detect the presence of a ground fault, are reviewed. Also analyzed are the effects of charging current and how the ground fault protection method can affect conductor ratings.

  21. Investigation of Ground-Fault Protection Devices for Photovoltaic Power Systems Applications

    SciTech Connect

    BOWER,WARD I.; WILES,JOHN

    2000-10-03

    Photovoltaic (PV) power systems, like other electrical systems, may be subject to unexpected ground faults. Installed PV systems always have invisible elements other than those indicated by their electrical schematics. Stray inductance, capacitance, and resistance are distributed throughout the system. Leakage currents associated with the PV modules, the interconnected array, wires, surge protection devices, and conduit add up and can become large enough to look like a ground fault. PV systems are frequently connected to other sources of power or energy storage such as batteries, standby generators, and the utility grid. This complex arrangement of distributed power and energy sources, distributed impedance, and proximity to other sources of power requires sensing of ground faults and proper reaction by the ground-fault protection devices. The differing dc grounding requirements from country to country often add more confusion to the situation. This paper discusses the ground-fault issues associated with both the dc and ac sides of PV systems and presents test results and operational impacts of backfeeding commercially available ac ground-fault protection devices under various modes of operation. Further, the measured effects of backfeeding the tripped ground-fault devices for periods of time comparable to anti-islanding allowances for utility interconnection of PV inverters in the United States are reported.

  22. Soil bentonite wall protects foundation from thrust faulting: analyses and experiment

    NASA Astrophysics Data System (ADS)

    Fadaee, Meysam; Anastasopoulos, I.; Gazetas, G.; Jafari, M. K.; Kamalian, M.

    2013-09-01

    When seismic thrust faults emerge on the ground surface, they are particularly damaging to buildings, bridges and lifelines that lie on the rupture path. To protect a structure founded on a rigid raft, a thick diaphragm-type soil bentonite wall (SBW) is installed in front of and near the foundation, at sufficient depth to intercept the propagating fault rupture. Extensive numerical analyses, verified against reduced-scale (1 g) split box physical model tests, reveal that such a wall, thanks to its high deformability and low shear resistance, "absorbs" the compressive thrust of the fault and forces the rupture to deviate upwards along its length. As a consequence, the foundation is left essentially intact. The effectiveness of SBW is demonstrated to depend on the exact location of the emerging fault and the magnitude of the fault offset. When the latter is large, the unprotected foundation experiences intolerable rigid-body rotation even if the foundation structural distress is not substantial.

  23. Failure modes of protective coatings: Who's at fault?

    SciTech Connect

    Vincent, L.D.

    1999-04-01

    This article abstracts information from 42 articles in industry journals and magazines, and 41 case studies of failure analyses conducted by consultants. The studies demonstrate that 75% of all coating failures are not solely the fault of the contractor, as is commonly believed. Owners, engineers, specifiers, and contractors/applicators share most of the responsibility. A joint responsibility among all parties involved in a coating project is required. The most effective answer to preventing coating failures is training.

  24. Prediction of sub-seismic faults and fractures to ensure carbon traps - joint project PROTECT

    NASA Astrophysics Data System (ADS)

    Ziesch, Jennifer; Tanner, David C.; Beilecke, Thies; Krawczyk, Charlotte M.

    2015-04-01

    Deformation in the form of fractures and faults affects many reservoirs and their overburden. In a 3-D seismic data set we can identify faults on the large scale, while in well data we observe small-scale fractures. A large number of faults at the intermediate scale (the sub-seismic space) also play a very important role, but these are not detectable with conventional geophysical methods. Therefore, we use the retro-deformation approach within the context of long-term CO2 storage integrity to determine the characteristics of potential fluid migration pathways between reservoir and surface. This allows us to produce strain maps in order to analyse fault behaviour in the sub-seismic space. As part of the PROTECT (prediction of deformation to ensure carbon traps) project we focus on the sub-seismic faults of the CO2CRC Otway Project site in Australia. We interpreted a geological 3-D model of 8 km x 7 km x 4.5 km that comprises 8 stratigraphic horizons and 24 large-scale faults. This confirmed that the site contains a complex system of south-dipping normal faults and north-dipping antithetic normal faults. The most important aspect is that two different types of fault kinematics were simultaneously active: dip-slip, and a combination of dip-slip with dextral strike-slip movement. After retro-deformation of the 3-D model we calculated strain tensor maps to locate highly deformed or fractured zones and their orientation within the stratigraphic volume. The e1-strain magnitude shows a heterogeneous distribution: the south of the study area is at least twice as fractured at the sub-seismic scale. Four major faults act as "controlling faults" with smaller faults in between. The overburden is tilted northwards after retro-deformation; thus, we believe that the area was affected by an even larger normal fault outside of the study area. In summary, this study reveals that good knowledge of the kinematics of the large-scale faults is essential to predict sub-seismic structures.

  25. Fiber-fault protection WDM-PON using new apparatus in optical networking unit

    NASA Astrophysics Data System (ADS)

    Yeh, C. H.; Chow, C. W.; Shih, F. Y.; Wu, Y. F.; Sung, J. Y.

    2012-04-01

    In this study, we propose and experimentally demonstrate a simple self-protection architecture for a WDM passive optical network (PON), achieved by adding a novel 2 × 2 optical switch design in each optical networking unit (ONU). Two adjacent ONUs are interconnected into a group. By using the proposed protection architecture, the affected traffic can be restored immediately after a fiber fault in the feeder or distribution fibers. Moreover, the performance of the proposed self-protection WDM-PON is also discussed and analyzed.

  26. Development of New Type Gap Arrester for Earth Fault Protection in AC Feeding System

    NASA Astrophysics Data System (ADS)

    Ajiki, Kohji; Morimoto, Hiroaki; Hisamizu, Yasuzi; Kinoshita, Nobuo; Takai, Wataru; Sato, Ryogo

    A gap arrester is used for ground fault protection in AC feeding systems. However, there have been failures in which a conventional gap arrester burns down while the circuit is in a normal state. We investigated the cause of these failures and found that it was the discharge of AC current triggered by surge voltage. We therefore developed a new type of gap arrester that does not burn down. The new gap arrester is composed of a discharge tube and a zinc oxide element connected in series; this structure prevents unnecessary AC current discharge. The new gap arrester is now in actual use on the railroad track.

  27. Fault tolerance in onboard processors - Protecting efficient FDM demultiplexers

    NASA Technical Reports Server (NTRS)

    Redinbo, Robert

    1992-01-01

    The application of convolutional codes to protect demultiplexer filter banks is demonstrated analytically for efficient implementations. An overview is given of the parameters of efficient filter-bank implementations, and real convolutional codes are discussed in terms of DSP operations. Methods for composite filtering and parity generation are outlined, and attention is given to the protection of polyphase filter demultiplexing systems. Real convolutional codes can be applied to protect demultiplexer filter banks by employing two forms of low-rate parity calculation for each filter bank. The parity values are computed either from the output, using an FIR parity filter, or in parallel with the normal processing, using a composite filter. Hardware similarities between the filter bank and the main demultiplexer bank permit efficient redeployment of the processing resources to the main processing function in any configuration.

  8. Prony's method: An efficient tool for the analysis of earth fault currents in Petersen-coil-protected networks

    SciTech Connect

    Chaari, O.; Bastard, P.; Meunier, M.

    1995-07-01

    Prony's method is a technique for estimating the modal components present in a signal; each modal component is defined by four parameters: frequency, magnitude, phase, and damping. The method is used here to analyze earth fault currents in Petersen-coil-protected 20 kV networks. The variations of Prony's parameters in terms of some of the power system characteristics (distance between the busbar and the fault, fault resistance, and capacitive current of the whole network) are presented. It is shown that some of Prony's parameters relating to the fault current transient may be useful for determining what kind of fault occurred, and where.
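    Prony's method can be sketched in a few lines of linear algebra: fit a linear-prediction polynomial to the samples, take its roots as the signal poles (frequency and damping), then solve a Vandermonde least-squares problem for the complex amplitudes (magnitude and phase). A minimal illustration on a synthetic damped sinusoid, not the 20 kV network data of the paper:

```python
import numpy as np

def prony(x, p):
    """Fit x[n] ~ sum_k h_k * z_k**n with p complex exponential modes.

    Each pole z_k carries a mode's frequency and damping; each complex
    amplitude h_k carries its magnitude and phase.
    """
    N = len(x)
    # Linear prediction: x[n] = -(a_1 x[n-1] + ... + a_p x[n-p])
    A = np.column_stack([x[p - i - 1:N - i - 1] for i in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
    z = np.roots(np.concatenate(([1.0], a)))            # signal poles
    # Amplitudes from the Vandermonde system V h = x.
    V = z[np.newaxis, :] ** np.arange(N)[:, np.newaxis]
    h, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return z, h

# Synthetic damped sinusoid (normalized sampling, noise-free):
n = np.arange(200)
x = 2.0 * np.exp(-0.05 * n) * np.cos(2 * np.pi * 0.1 * n + 0.3)

z, h = prony(x, 2)
k = np.argmax(z.imag)                  # positive-frequency mode of the pair
freq = np.angle(z[k]) / (2 * np.pi)    # cycles/sample
damping = np.log(np.abs(z[k]))         # per sample (negative = decaying)
magnitude = 2 * np.abs(h[k])           # factor 2: real signal, conjugate pair
phase = np.angle(h[k])
```

    On this noise-free signal the four recovered parameters match the generating values (0.1 cycles/sample, -0.05 damping, magnitude 2, phase 0.3) to machine precision; on measured fault transients the fit is a least-squares approximation.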

  9. Fault kinematics and retro-deformation analysis for prediction of potential leakage pathways - joint project PROTECT

    NASA Astrophysics Data System (ADS)

    Ziesch, Jennifer; Tanner, David C.; Dance, Tess; Beilecke, Thies; Krawczyk, Charlotte M.

    2014-05-01

    Within the context of long-term CO2 storage integrity, we determine the seismic and sub-seismic characteristics of potential fluid migration pathways between reservoir and surface. As a part of the PROTECT project we focus on the sub-seismic faults of the CO2CRC Otway Project pilot site in Australia. We carried out a detailed interpretation of 3D seismic data and have built a geological 3D model of 8 km x 7 km x 4.5 km (depth). The model comprises triangulated surfaces of 8 stratigraphic horizons and 24 large-scale faults with 75 m grid size. We have confirmed the site to comprise a complex system of south-dipping normal faults and north-dipping antithetic normal faults. Good knowledge of the kinematics of the large-scale faults is essential to predict sub-seismic structures. For this reason preconditioning analyses, such as thickness maps, fault curvature, cylindricity and connectivity studies, as well as Allan mapping were carried out. The most important aspect is that two different types of fault kinematics were simultaneously active: Dip-slip and a combination of dip-slip with dextral strike slip movement. Using these input parameters stratigraphic volumes are kinematically restored along the large-scale faults, taking fault topography into account (retro-deformation). The stratigraphic volumes are analyzed at the same time with respect to sub-seismic strain variation. Thereby we produce strain tensor maps to locate highly deformed or fractured zones and their orientation within the stratigraphic volumes. We will discuss the results in the framework of possible fluid/gas migration pathways and communication between storage reservoir and overburden. This will provide a tool to predict CO2 leakage and thus to adapt time-dependent monitoring strategies for subsurface storage in general. Acknowledgement: This work was sponsored in part by the Australian Commonwealth Government through the Cooperative Research Centre for Greenhouse Gas Technologies (CO2CRC). 

  10. A Novel Wide-Area Backup Protection Based on Fault Component Current Distribution and Improved Evidence Theory

    PubMed Central

    Zhang, Zhe; Kong, Xiangping; Yin, Xianggen; Yang, Zengli; Wang, Lijun

    2014-01-01

    In order to solve the problems of existing wide-area backup protection (WABP) algorithms, the paper proposes a novel WABP algorithm based on the distribution characteristics of fault component current and improved Dempster/Shafer (D-S) evidence theory. When a fault occurs, slave substations transmit to the master station the amplitudes of the fault component currents of the transmission lines closest to the faulty element. The master station then identifies suspicious faulty lines according to the distribution characteristics of fault component current, and finally identifies the actual faulty line with improved D-S evidence theory, based on the action states of traditional protections and the direction components of the suspicious faulty lines. Simulation examples based on the IEEE 10-generator, 39-bus system show that the proposed WABP algorithm performs excellently: it has a low sampling-synchronization requirement, small wide-area communication flow, and high fault tolerance. PMID:25050399
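    Dempster's rule of combination, the core of D-S evidence theory, can be shown with a toy example: fuse mass functions from two independent evidence sources over candidate faulty lines. The masses below are hypothetical, and this is the classical rule, not the paper's improved variant (which changes how conflicting evidence is weighted):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: fuse two mass functions keyed by frozenset focal elements."""
    fused = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to the empty set
    # Normalize by the non-conflicting mass 1 - K.
    return {s: w / (1.0 - conflict) for s, w in fused.items()}

L1, L2 = frozenset({"line1"}), frozenset({"line2"})
BOTH = L1 | L2                           # ignorance: "either line"

# Hypothetical evidence from two protection sources:
m_current = {L1: 0.6, L2: 0.1, BOTH: 0.3}   # fault-component current distribution
m_relays  = {L1: 0.5, L2: 0.2, BOTH: 0.3}   # traditional protection action states

m = combine(m_current, m_relays)
```

    Here the fused mass on line1 (about 0.76) exceeds what either source assigned alone, which is how combining independent wide-area evidence sharpens the faulty-line decision.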

  11. Protecting the GEO Environment: Policies and Practices

    NASA Technical Reports Server (NTRS)

    Johnson, Nicholas L.; McKay, Gordon A. (Technical Monitor)

    1999-01-01

    The geosynchronous orbital regime has long been recognized as a unique space resource, dictating special measures to ensure its continuing use for future generations. During the past 20 years a variety of national and international policies have been developed to preserve this environment. A review of current practices involving the deployment and disposal of geosynchronous spacecraft, associated upper stages and apogee kick motors, and geosynchronous orbit transfer objects indicates both positive and negative trends. Most spacecraft operators are indeed performing end-of-mission maneuvers, but the boost altitudes normally fall short of policy guidelines. Russia, a major operator in geosynchronous orbit, maneuvers only 1 in 3 spacecraft out of the region, while China has never retired a spacecraft above GEO. The viability of voluntary protection measures for this regime depends upon the responsible actions of the aerospace community as a whole.

  12. Biometrics IRB best practices and data protection

    NASA Astrophysics Data System (ADS)

    Boehnen, Christopher; Bolme, David; Flynn, Patrick

    2015-05-01

    The collection of data from human subjects for biometrics research in the United States requires the development of a data collection protocol that is reviewed by a Human Subjects Institutional Review Board (IRB). The IRB reviews the protocol for risks and approves it if it meets the criteria for approval specified in the relevant Federal regulations (45 CFR 46). Many other countries operate similar mechanisms for the protection of human subjects. IRBs review protocols for safety, confidentiality, and minimization of risk associated with identity disclosure. Since biometric measurements are potentially identifying, IRB scrutiny of biometrics data collection protocols can be expected to be thorough. This paper discusses the intricacies of IRB best practices within the worldwide biometrics community. This is important because research decisions involving human subjects are made at a local level and do not set a precedent for decisions made by another IRB. In many cases, what one board approves is not approved by another, resulting in significant inconsistencies that prove detrimental to both researchers and human subjects. Furthermore, the level of biometrics expertise on IRBs may be low, which can contribute to the unevenness of reviews. This publication suggests possible best practices for designing and seeking IRB approval for human subjects research involving biometrics measurements. The views expressed are the opinions of the authors.

  13. Toward a Model-Based Approach to Flight System Fault Protection

    NASA Technical Reports Server (NTRS)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct systems engineering sub-discipline concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the materials generated for these analyses effectively constitute separate models that are only loosely related to the system being designed. Approaches that enable modeling of FP concerns in the same model as the system hardware and software design allow the establishment of formal relationships, with great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, then presents a SysML/UML model of the FP domain and the particular analyses it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in FSW engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.

  14. DC wiring system grounding and ground fault protection issues for central station photovoltaic power plants

    NASA Technical Reports Server (NTRS)

    Simburger, E. J.

    1983-01-01

    The DC wiring system for a photovoltaic power plant presents a number of unique challenges to be overcome by the plant designers. There are a number of different configurations that the grounding of the DC wiring system can take, and the choice will affect the number and type of protective devices required to ensure safety of personnel and protection of equipment. The major grounding and fault protection considerations that must be taken into account when selecting the basic overall circuit configuration are summarized. The inherent advantages and disadvantages of each type of circuit grounding (resistance or solid) along with the personnel safety and equipment protection issues for each of these grounding methods are presented.

  15. A practical morality for ecosystem protection

    SciTech Connect

    Randall, A.

    1995-12-31

    The moral foundations of a conservation policy framework based on benefits and costs, but subject to a safe minimum standard (SMS) of conservation, are surprisingly robust. The benefit-cost rule provides an account of the net contribution of policy to the satisfaction of human preferences. The difficulties confronting benefit-cost analysis at the ecosystem level should not be understated, but any reasonably comprehensive evaluation system that takes citizen preferences seriously would encounter similar difficulties. The SMS constraint provides direct protection of ecosystems for their own sakes, or because humans seek prudently to avoid the risks entailed in ecosystem destruction. However, citizen commitment to an SMS policy is unlikely to be ironclad. Moral reasoning might find circumstances in which the sacrifice that would ensure conservation is too much to ask of particular groups of people. Practical reasoning suggests that people asked to bear an enormous cost in order to keep a conservation commitment may well defect. These considerations suggest some general principles for designing a workable conservation policy. First, the objective should focus on the sustainability of ecosystems rather than the preservation of particular species. Second, in order to maintain the commitment to conservation, the costs imposed on any particular group of people must be kept tolerably low. Costs tend to be high for last-ditch preservation efforts made in a crisis atmosphere, so early warning and early implementation of conservation strategies make sense. Conservation efforts sometimes impose high costs locally in order to provide benefits for society at large; in such cases, compensation mechanisms are both equitable and effective in facilitating local cooperation.

  16. Cassini Attitude Control Fault Protection Design: Launch to End of Prime Mission Performance

    NASA Technical Reports Server (NTRS)

    Meakin, Peter C.

    2008-01-01

    The Cassini Attitude and Articulation Control Subsystem (AACS) Fault Protection (FP) has been successfully supporting operations for over 10 years from launch through the end of the prime mission. Cassini's AACS FP is complex, containing hundreds of error monitors and thousands of tunable parameters. Since launch there have been environmental, hardware, personnel and mission event driven changes which have required AACS FP to adapt and be robust to a variety of scenarios. This paper will discuss the process of monitoring, maintaining and updating the AACS FP during Cassini's lengthy prime mission as well as provide some insight into lessons learned during tour operations.

  17. Managing Space System Faults: Coalescing NASA's Views

    NASA Technical Reports Server (NTRS)

    Muirhead, Brian; Fesq, Lorraine

    2012-01-01

    Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet recent studies have shown that the engineering "discipline" required to manage faults is neither widely recognized nor evenly practiced within the NASA community. Attempts simply to name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults are typically unique to each organization, with little commonality in architectures, processes, and practices across the industry.

  18. Cautious to a Fault: Self-Protection and the Trajectory of Marital Satisfaction.

    PubMed

    Murray, Sandra L; Holmes, John G; Derrick, Jaye L; Harris, Brianna; Griffin, Dale W; Pinkus, Rebecca T

    2013-05-01

    A contextual model of self-protection is proposed to explain when adhering to cautious "if-then" rules in daily interaction erodes marital satisfaction. People can self-protect against partner non-responsiveness by distancing when a partner seems rejecting, promoting a partner's dependence when feeling unworthy, or by devaluing a partner in the face of costs. The model implies that being less trusting elicits self-protection, and that mismatches between self-protective practices and encountered risk accelerate declines in satisfaction. A longitudinal study of newlyweds revealed that the fit between self-protection practices and risk predicted declines in satisfaction over three years. When people self-protected more initially, satisfaction declined more in low-risk (i.e., low conflict, resilient partner) than high-risk relationships (i.e., high conflict, vulnerable partner). However, when people self-protected less initially, satisfaction declined more in high-risk than low-risk relationships. Process evidence was consistent with moderated mediation: In low-risk relationships only, being less trusting predicted higher levels of self-protective caution that forecast later declines in satisfaction. PMID:25013236

  19. Cautious to a Fault: Self-Protection and the Trajectory of Marital Satisfaction

    PubMed Central

    Murray, Sandra L.; Holmes, John G.; Derrick, Jaye L.; Harris, Brianna; Griffin, Dale W.; Pinkus, Rebecca T.

    2012-01-01

    A contextual model of self-protection is proposed to explain when adhering to cautious “if-then” rules in daily interaction erodes marital satisfaction. People can self-protect against partner non-responsiveness by distancing when a partner seems rejecting, promoting a partner’s dependence when feeling unworthy, or by devaluing a partner in the face of costs. The model implies that being less trusting elicits self-protection, and that mismatches between self-protective practices and encountered risk accelerate declines in satisfaction. A longitudinal study of newlyweds revealed that the fit between self-protection practices and risk predicted declines in satisfaction over three years. When people self-protected more initially, satisfaction declined more in low-risk (i.e., low conflict, resilient partner) than high-risk relationships (i.e., high conflict, vulnerable partner). However, when people self-protected less initially, satisfaction declined more in high-risk than low-risk relationships. Process evidence was consistent with moderated mediation: In low-risk relationships only, being less trusting predicted higher levels of self-protective caution that forecast later declines in satisfaction. PMID:25013236

  20. Cassini Attitude and Articulation Control Subsystem Fault Protection Challenges During Saturn Proximal Orbits

    NASA Technical Reports Server (NTRS)

    Bates, David M.

    2015-01-01

    NASA's Cassini spacecraft, launched on October 15, 1997, and in orbit at Saturn since June 30, 2004, is the largest and most ambitious interplanetary spacecraft in history. As the first spacecraft to achieve orbit at Saturn, Cassini collected science data throughout its four-year prime mission (2004-08) and has since been approved for first and second extended missions through 2017. As part of the final extended mission, Cassini will begin an aggressive and exciting campaign of high-inclination, low-altitude flybys inside the innermost rings of Saturn, skimming Saturn's outer atmosphere, until the spacecraft is finally disposed of via a planned impact with the planet. This final campaign, known as the proximal orbits, presents unique fault-protection challenges, the details of which are discussed in this paper.

  1. Protecting ground water: pesticides and agricultural practices. Technical report (Final)

    SciTech Connect

    Not Available

    1988-02-01

    The booklet presents the results of a project conducted by EPA's Office of Ground-Water Protection to evaluate the potential impacts of various agronomic, irrigation, and pesticide application practices on ground water. The report provides State and local water quality and agricultural officials with technical information to help in the development of programs to protect ground water from pesticide contamination. The report explains the principles involved in reducing the risk of pesticide contamination and describes what is known about the impact of various agricultural practices on pesticide leaching. It is hoped that the information will be helpful to water-quality officials in developing and implementing ground-water protection programs.

  2. Assessment of radiation protection practices among radiographers in Lagos, Nigeria

    PubMed Central

    Eze, Cletus Uche; Abonyi, Livinus Chibuzo; Njoku, Jerome; Irurhe, Nicholas Kayode; Olowu, Oluwabola

    2013-01-01

    Background: Use of ionising radiation in diagnostic radiography could lead to hazards such as somatic and genetic damage. Compliance with safe work and radiation protection practices could mitigate such risks. The aim of the study was to assess the knowledge and radiation protection practices among radiographers in Lagos, Nigeria. Materials and Methods: The study was a prospective cross-sectional survey. Convenience sampling was used to select four x-ray diagnostic centres in four tertiary hospitals in the Lagos metropolis. Data were analysed with Epi-Info software, version 3.5.1. Results: The average score on assessment of knowledge was 73%. Most modern radiation protection instruments were lacking in all the centres studied. Application of shielding devices such as gonad shields was neglected mostly in government hospitals. Most x-ray machines were quite old, and evidence of quality assurance tests performed on such machines was lacking. Conclusion: Radiographers within the Lagos metropolis showed an excellent knowledge of radiation protection within the study period. Adherence to radiation protection practices among radiographers in the Lagos metropolis during the period studied was, however, poor. Radiographers in Lagos, Nigeria should embrace current trends in radiation protection and make more concerted efforts to apply their knowledge in protecting themselves and patients from the harmful effects of ionising radiation. PMID:24665152

  3. Fault Protection Design and Testing for the Cassini Spacecraft in a "Mixed" Thruster Configuration

    NASA Technical Reports Server (NTRS)

    Bates, David; Lee, Allan; Meakin, Peter; Weitl, Raquel

    2013-01-01

    NASA's Cassini spacecraft, launched on October 15, 1997, which arrived at Saturn on June 30, 2004, is the largest and most ambitious interplanetary spacecraft in history. In order to meet the challenging attitude control and navigation requirements of the orbit profile at Saturn, Cassini is equipped with a monopropellant thruster based Reaction Control System (RCS), a bipropellant Main Engine Assembly (MEA), and a Reaction Wheel Assembly (RWA). In 2008, after 11 years of reliable service, several RCS thrusters began to show signs of end-of-life degradation, which led the operations team to successfully swap from the A-branch to the B-branch RCS system. If similar degradation begins to occur on any of the B-branch thrusters, Cassini might have to assume a "mixed" thruster configuration, in which a subset of both A- and B-branch thrusters is designated as prime. The Cassini fault protection flight software was recently updated to handle this scenario. The design, implementation, and testing of this update are described in this paper.

  4. ECOSYSTEM RESTORATION: MANAGEMENT PRACTICES FOR PROTECTING AND ENHANCING AQUATIC RESOURCES

    EPA Science Inventory

    This poster describes research that addresses the question: Which management practices are most successful for protection and restoration of ecological resources? The Ecosystem Restoration Research Program of EPA/ORD is designed to conduct basic and applied field research to eva...

  5. Targeting of Watershed Management Practices for Water Quality Protection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Ensuring a clean and adequate water supply implies conservative use of water and protecting water resources from pollution. Sediment, nutrient, and pesticide losses in runoff are major pollutants of surface waters in the Midwest. This publication addresses the targeting of best management practices ...

  6. Safeguarding and Protecting Children in Maternity Services: Implications for Practice

    ERIC Educational Resources Information Center

    Lazenbatt, Anne; Greer, Jean

    2009-01-01

    This article debates the issues involved in safeguarding and protecting children in maternity services and offers implications for professional practice. Midwives and other staff who work as members of the maternity team have a safeguarding role to play in the identification of babies and children who have been abused, or are at risk of abuse, and…

  7. Protection goals in environmental risk assessment: a practical approach.

    PubMed

    Garcia-Alonso, Monica; Raybould, Alan

    2014-12-01

    Policy protection goals are set up in most countries to minimise harm to the environment, humans and animals caused by human activities. Decisions on whether to approve new agricultural products, like pesticides or genetically modified (GM) crops, take into account these policy protection goals. To support decision-making, applications for approval of commercial uses of GM crops usually comprise an environmental risk assessment (ERA). These risk assessments are analytical tools, based on science, that follow a conceptual model that includes a problem formulation step where policy protection goals are considered. However, in most countries, risk assessors face major problems in that policy protection goals set in the legislation are stated in very broad terms and are too ambiguous to be directly applicable in ERAs. This means that risk assessors often have to interpret policy protection goals without clear guidance on what effects would be considered harmful. In this paper we propose a practical approach that may help risk assessors to translate policy protection goals into unambiguous (i.e., operational) protection goals and to establish relevant assessment endpoints and risk hypotheses that can be used in ERAs. Examples are provided to show how this approach can be applied to two areas of environmental concern relevant to the ERAs of GM crops. PMID:24154954

  8. Reliability Improvement of Ground Fault Protection System Using an S-Type Horn Attachment Gap in AC Feeding System

    NASA Astrophysics Data System (ADS)

    Ajiki, Kohji; Morimoto, Hiroaki; Shimokawa, Fumiyuki; Sakai, Shinya; Sasaki, Kazuomi; Sato, Ryogo

    Contact wires in electric railroad feeding systems are insulated by insulators. However, insulator insulation sometimes breaks down due to surface contamination or contact with birds, and the breakdown causes a ground fault in the feeding system. A ground fault can cause electric shock to people and destroy low-voltage electrical equipment. To limit such damage, S-type horns have been installed on the insulators of the negative feeder and the protective wire. However, a concrete pole can break down during a ground fault because the spark-over voltage of the S-type horn is higher than the breakdown voltage of the concrete pole. Further, an S-type horn installed on a steel tube pole may fail to discharge, because the earth resistance of a steel tube pole is very small. We assumed that these problems could be solved by lowering the power-frequency spark-over voltage of the S-type horn from 12 kV to 3 kV, and accordingly developed an attachment gap for this purpose. The attachment gap consists of a gas gap arrester and a zinc oxide element. Through dynamic current tests and artificial ground fault tests, we confirmed that an S-type horn fitted with the attachment gap prevents these troubles at the time of a ground fault.

  9. Do workplace and home protective practices protect farm workers? Findings from the For Healthy Kids Study

    PubMed Central

    Coronado, Gloria D.; Holte, Sarah E.; Vigoren, Eric M.; Griffith, William C; Barr, Dana B.; Faustman, Elaine M.; Thompson, Beti

    2013-01-01

    Objective To assess associations of protective workplace and home practices to pesticide exposure levels. Methods Using data from orchard workers in the Yakima Valley, Washington, we examined associations of workplace and home protective practices to (1) urinary metabolite concentrations of dimethylthiophosphate (DMTP) in adults and children aged 2–6; and (2) azinphos-methyl levels in house and vehicle dust. Results Data were from 95 orchard workers and 94 children. Contrary to expectation, adult farm workers who wore boots or washed hands using hand sanitizer had higher concentrations of DMTP than those who did not. Children who attended daycare had higher DMTP concentrations than children who did not. Conclusions Few workplace or home practices were associated with pesticide exposure levels; workers who used hand sanitizer had higher concentrations of DMTP, as did children who attended daycare. PMID:22772953

  10. Protection from ground faults in the stator winding of generators at power plants in the Siberian networks

    SciTech Connect

    Vainshtein, R. A.; Lapin, V. I.; Naumov, A. M.; Doronin, A. V.; Yudin, S. M.

    2010-05-15

    Many years of experience in the development and use of ground fault protection for generator stator windings at power plants in the Siberian networks is summarized. The main method of protection is to apply a direct current, or an alternating current with a frequency of 25 Hz, to the primary circuits of the stator. A direct current is applied to turbogenerators operating in a unit with a transformer, without a resistive coupling to the external grid or to other generators. Applying a 25 Hz control current is appropriate for power generation systems with compensation of the capacitive ground fault current. This method forms the basis for protection of generators operating on busbars, and of hydroelectric generators with a neutral grounded through an arc-suppression reactor, including in consolidated units with generators operating in parallel on a single low-voltage transformer winding.
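    The injected-signal scheme can be illustrated with a toy lock-in measurement: correlate the neutral-point current with 25 Hz reference waveforms to extract the control-current amplitude, which rises as the stator insulation to ground degrades. All waveforms and the pickup threshold below are invented for illustration:

```python
import numpy as np

fs, f0 = 1000.0, 25.0                  # sample rate and injected frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)        # one-second measurement window

# Hypothetical neutral current: 50 Hz leakage plus a 25 Hz control-current
# component whose amplitude grows when insulation resistance drops.
i_neutral = (0.5 * np.sin(2 * np.pi * 50 * t)
             + 0.2 * np.sin(2 * np.pi * f0 * t + 0.4))

# Lock-in (synchronous) detection rejects everything except the 25 Hz component.
amp = 2.0 * np.hypot(np.mean(i_neutral * np.cos(2 * np.pi * f0 * t)),
                     np.mean(i_neutral * np.sin(2 * np.pi * f0 * t)))

PICKUP = 0.1                           # assumed relay pickup threshold
trip = bool(amp > PICKUP)
```

    Because 25 Hz is not harmonically related to the 50 Hz power frequency, the correlation over an integer number of periods suppresses the leakage term entirely, leaving only the injected component's amplitude for comparison against the pickup setting.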

  11. Performance factors as a basis of practical fault detection and diagnostic methods for air-handling units

    SciTech Connect

    Kaerki, S.H.; Karjalainen, S.J.

    1999-07-01

    The technical term performance is defined as how well a system fulfills its intended purpose in different operational circumstances. This paper describes the process of establishing the performance factors of air-handling units (AHUs), defining the performance requirements, and connecting them to fault detection and diagnosis methods. The most important performance requirements of AHUs are related to heating and cooling energy, the supply airflow rate and purity, energy efficiency, and control quality. Many solutions made during different life-cycle phases affect the final system performance. These solutions are discussed in this paper. Diagnostic tools and methods can be developed for monitoring the defined performance criteria. Practical FDD methods have been developed for the system considered here. The methods are simple and easy to apply in practice. Methods for monitoring the heat recovery unit and the AHU energy use are presented. Examples of utilizing characteristic curves and fault-symptom trees are also described.
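    As one concrete illustration of comparing a measured performance factor against its requirement, a temperature-effectiveness check for the heat recovery unit might look like the following sketch (the design value, tolerance, and temperatures are all hypothetical):

```python
def hr_effectiveness(t_fresh_in, t_supply_out, t_exhaust_in):
    """Supply-side temperature effectiveness of a heat recovery unit."""
    return (t_supply_out - t_fresh_in) / (t_exhaust_in - t_fresh_in)

DESIGN_EFF = 0.60   # assumed value from commissioning measurements
TOLERANCE = 0.10    # allowed degradation before a fault symptom is raised

def check_heat_recovery(t_fresh_in, t_supply_out, t_exhaust_in):
    """Return (status, effectiveness); flag a symptom when performance degrades."""
    eff = hr_effectiveness(t_fresh_in, t_supply_out, t_exhaust_in)
    status = "OK" if eff >= DESIGN_EFF - TOLERANCE else "FAULT: low effectiveness"
    return status, eff

# Healthy unit: -5 C fresh air warmed to 10.6 C against 21 C extract air.
status_ok, eff_ok = check_heat_recovery(-5.0, 10.6, 21.0)
# Fouled or bypassed unit: supply air only reaches 6 C under the same conditions.
status_bad, eff_bad = check_heat_recovery(-5.0, 6.0, 21.0)
```

    In a fault-symptom tree, the "low effectiveness" symptom would then branch into candidate diagnoses (fouled exchanger, stuck bypass damper, sensor fault) checked by further rules.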

  12. Hazardous materials protections in ED practice: laws and logistics.

    PubMed

    Gough, A R; Markus, K

    1989-01-01

    A great many laws, at federal, state, and local levels, deal with hazardous materials and protective measures against them. They are not yet specific to emergency departments but many surely include the emergency workplace. For a variety of reasons (cost, concentration on AIDS, too few nurses), too many departments may not be fulfilling their legal duties to protect their personnel. Every emergency department, regardless of size, should have immediately at hand appropriate sets of toxin-protective personal equipment in sizes to suit its staff and in numbers suitable to the population it serves. The unexpected is to be expected in emergency practice. To be sure, no garments, respiratory devices, or means of disposal exist that will be fully effective against everything, but that does not excuse the legal duty to take reasonable measures, such as providing appropriate training, equipment reasonably calculated to offer protection, clear and accessible procedures, and clear avenues to further resources, including posted phone numbers for the regional poison control center and for CHEM-TREC, the 24-hour response center of the chemical industry, which can provide indispensable assistance and information (800-424-9300). PMID:2687539

  13. Good Signal Detection Practices: Evidence from IMI PROTECT.

    PubMed

    Wisniewski, Antoni F Z; Bate, Andrew; Bousquet, Cedric; Brueckner, Andreas; Candore, Gianmario; Juhlin, Kristina; Macia-Martinez, Miguel A; Manlik, Katrin; Quarcoo, Naashika; Seabroke, Suzie; Slattery, Jim; Southworth, Harry; Thakrar, Bharat; Tregunno, Phil; Van Holle, Lionel; Kayser, Michael; Norén, G Niklas

    2016-06-01

    Over a period of 5 years, the Innovative Medicines Initiative PROTECT (Pharmacoepidemiological Research on Outcomes of Therapeutics by a European ConsorTium) project has addressed key research questions relevant to the science of safety signal detection. The results of studies conducted into quantitative signal detection in spontaneous reporting, clinical trial and electronic health records databases are summarised and 39 recommendations have been formulated, many based on comparative analyses across a range of databases (e.g. regulatory, pharmaceutical company). The recommendations point to pragmatic steps that those working in the pharmacovigilance community can take to improve signal detection practices, whether in a national or international agency or in a pharmaceutical company setting. PROTECT has also pointed to areas of potentially fruitful future research and some areas where further effort is likely to yield less. PMID:26951233

  14. Toward a Model-Based Approach for Flight System Fault Protection

    NASA Technical Reports Server (NTRS)

    Day, John; Meakin, Peter; Murray, Alex

    2012-01-01

    The approach uses SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS systems engineering, planning and execution, V&V, operations, etc.) in an integrated model-based engineering environment. The UML Profile mechanism, with Stereotypes defined to precisely express the concepts of the fault protection (FP) domain, extends the UML/SysML languages to contain the FP concepts. UML/SysML, together with this profile, captures FP concepts and relationships in the model, from which typical FP engineering products (the FMECA, Fault Tree, MRD, and V&V Matrices) are generated.

  15. Low Cost Arc Fault Detection and Protection for PV Systems: January 30, 2012 - September 30, 2013

    SciTech Connect

    McCalmont, S.

    2013-10-01

    Final report for Tigo Energy Incubator project. The specific objective of this 18-month research effort was to develop an off-the-shelf arc-fault detector. The starting point of the project was a prototype detector that was constructed using discrete components and laboratory equipment. An intermediate objective was to build a technically viable detector using programmable components in the detector circuitry. The final objective was to build a commercially viable detector by reducing the cost of the circuitry through the use of more sophisticated programmable components and higher levels of integration.

  16. ANNs pinpoint underground distribution faults

    SciTech Connect

    Glinkowski, M.T.; Wang, N.C.

    1995-10-01

    Many offline fault location techniques in power distribution circuits involve patrolling along the lines or cables. In overhead distribution lines, most failures can be located quickly by visual inspection without the aid of special equipment. Locating a fault in an underground cable system is more difficult: it requires additional equipment (e.g., thumpers, radars) to convert the hidden cable fault into other forms of signal, such as acoustic sound and electromagnetic pulses. Trained operators must carry the equipment above ground, follow the path of the signal, and draw lines on their maps to locate the fault; sometimes even the smell of a burnt cable fault is used to detect the problem. These techniques are time consuming, not always reliable, and, as in the case of high-voltage dc thumpers, can cause additional damage to the healthy parts of the cable circuit. Online fault location in power networks with interconnected lines (cables) and multiterminal sources continues to receive great attention, with limited success in techniques that would provide simple and practical solutions. This article features a new online fault location technique that uses the pattern recognition capability of artificial neural networks (ANNs) and utilizes new capabilities of modern protective relaying hardware. The output of the neural network can be graphically displayed as a simple three-dimensional (3-D) chart that gives an operator an instantaneous indication of the location of the fault.
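    The abstract gives no implementation details of the ANN, so as a toy illustration of the underlying pattern-recognition idea, the sketch below substitutes a nearest-neighbour match against stored fault signatures for the trained network. All section names and feature values are invented for illustration.

```python
# Toy stand-in for the ANN-based fault locator described above.
# A real implementation trains a neural network on relay waveform
# features; here, nearest-neighbour matching against simulated
# fault signatures illustrates the same classification idea.

def locate_fault(measured, signatures):
    """Return the cable section whose stored signature is closest
    (Euclidean distance) to the measured feature vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(signatures, key=lambda section: dist(measured, signatures[section]))

# Hypothetical per-section signatures: (fault-current magnitude in pu, phase angle in deg)
signatures = {
    "section-A": (3.2, 45.0),
    "section-B": (2.1, 60.0),
    "section-C": (1.4, 75.0),
}

print(locate_fault((2.0, 62.0), signatures))  # section-B
```

A trained ANN generalizes between such signatures instead of snapping to the nearest stored one, which is what allows the smooth 3-D location chart the article describes.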

  17. A new low-cost, fully fault-protected PWM-VSI inverter with true phase-current information

    SciTech Connect

    Blaabjerg, F.; Pedersen, J.K.

    1997-01-01

    This paper proposes a new topology for a low-cost three-phase pulse width modulation (PWM) VSI inverter in which true phase-current information is obtained with only one current sensor, placed in the dc link. The topology also provides full protection of the inverter against short circuits in the dc link, output phase faults, and output earth faults. Previously suggested methods have problems detecting the phase current at low speed, but with space vector modulation in a double-sided implementation, the PWM pulses are adjusted so that true phase-current information exists even at very low speed. The dc-link current is sampled twice during one switching period, and it is proposed to sample at the center of the PWM pulses to obtain a true average phase current. The effects of blanking time and overmodulation are also discussed. The strategy is implemented in a 16-bit microcontroller, and test results show the technique is useful over the whole speed range. It is concluded that, with the proposed topology, only one current sensor will be necessary in future PWM-VSI inverters.
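    The single-sensor reconstruction described above rests on the fact that, during each active space-vector state, the dc-link current equals one phase current. A minimal sketch follows; the vector-to-phase mapping shown is the standard one for space-vector sector 1, and the sample values are invented.

```python
# Sketch of phase-current reconstruction from a single dc-link
# current sensor. During active space-vector states, the dc-link
# current equals one phase current; two samples per switching
# period therefore yield two phases, and the third follows from
# Kirchhoff's current law (ia + ib + ic = 0).

def reconstruct_sector1(i_dc_v1, i_dc_v2):
    """In sector 1, active vector V1 (100) carries +ia on the dc link
    and V2 (110) carries -ic. Return (ia, ib, ic)."""
    ia = i_dc_v1
    ic = -i_dc_v2
    ib = -(ia + ic)  # phase currents sum to zero
    return ia, ib, ic

print(reconstruct_sector1(10.0, 4.0))  # (10.0, -6.0, -4.0)
```

Sampling at the center of the PWM pulses, as the paper proposes, makes each sample approximate the average phase current over the switching period rather than an instantaneous ripple value.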

  18. Protection and advocacy: an ethics practice in mental health.

    PubMed

    Olsen, D P

    2001-04-01

    This paper reports the findings of investigations into allegations of patient abuse and the implications for policy and practice. These investigations were carried out by a nurse with a background in ethics for the office of Protection and Advocacy for Individuals with Mental Illness (PAIMI), a state agency operating under a United States federal law entitling it to investigate complaints by psychiatric patients. PAIMI uses investigations both to help individuals and to provide an avenue for broader change. There are four steps in the investigation process: (1) definition of the problem; (2) gathering information; (3) synthesis; and (4) addressing the problem. Cases are presented to illustrate the investigation process and identify ethical issues arising in mental health treatment. Among the issues raised are autonomy and forced treatment, deinstitutionalization, bias against the mentally ill, privacy, and surrogate treatment decisions. Resolutions range from providing individual advice to clients or clinicians, to changes in institutional policy and the publication of guidelines for specific situations. The following lessons were learnt from the investigations: (1) tell patients what to expect; (2) pay attention to the process of giving care; (3) allow patients to feel ambivalent about treatment; and (4) work to develop good relationships; underlying every investigation has been a poor relationship. PMID:11882117

  19. The Fitbit Fault Line: Two Proposals to Protect Health and Fitness Data at Work.

    PubMed

    Brown, Elizabeth A

    2016-01-01

    Employers are collecting and using their employees' health data, mined from wearable fitness devices and health apps, in new, profitable, and barely regulated ways. The importance of protecting employee health and fitness data will grow exponentially in the future. This is the moment for a robust discussion of how law can better protect employees from the potential misuse of their health data. While scholars have just begun to examine the problem of health data privacy, this Article contributes to the academic literature in three important ways. First, it analyzes the convergence of three trends resulting in an unprecedented growth of health-related data: the Internet of Things, the Quantified Self movement, and the Rise of Health Platforms. Second, it describes the insufficiencies of specific data privacy laws and federal agency actions in the context of protecting employee health data from employer misuse. Finally, it provides two detailed and workable solutions for remedying the current lack of protection of employee health data that will realign employer use with reasonable expectations of health and fitness privacy. The Article proceeds in four Parts. Part I describes the growth of self-monitoring apps, devices, and other sensor-enabled technology that can monitor a wide range of data related to an employee's health and fitness and the relationship of this growth to both the Quantified Self movement and the Internet of Things. Part II explains the increasing use of employee monitoring through a wide range of sensors, including wearable devices, and the potential uses of that health and fitness data. Part III explores the various regulations and agency actions that might protect employees from the potential misuse of their health and fitness data and the shortcomings of each. Part IV proposes two specific measures that would help ameliorate the ineffective legal protections that currently exist in this context. In order to improve employee notice of and control

  20. Current issues and future directions in evidence-based child protection practices: a view from Romania.

    PubMed

    Iovu, Mihai-Bogdan

    2013-10-01

    The constantly changing nature of social care practice necessitates continuous development of educational programs to prepare current practitioners. Basing social work practice on methodologically sound research evidence is an appealing prospect for the child protection services. However, Romania, once considered an Eastern European model for child welfare policy and practice, is somehow lagging behind. In this article the author gives a short overview of where the Romanian Child Protection System currently stands in terms of evidence-based practice. PMID:24066629

  1. Environmental Research In Practice: Restoration And Protection Of Water Resources

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) is tasked to protect human health and the environment. To carry out this task, the EPA makes use of technical expertise within its Office of Research and Development. Restoration and protection of water resources is one area of tec...

  2. Broadcast encryption: paving the road to practical content protection systems

    NASA Astrophysics Data System (ADS)

    Deen, G.; Ponceleon, D.; Leake, Donald, Jr.

    2009-02-01

    Broadcast encryption is a well-established alternative to public key encryption for use in content protection systems. It offers significant performance benefits, as well as useful features such as one-to-many delivery and dynamic membership in the authorized receiver group, and it provides anonymous access to content, permitting content protection systems to preserve consumer privacy. Broadcast encryption has been successfully deployed for the protection of commercial content on digital media such as flash memory devices and optical media, for both standard-definition and high-definition content. In this paper, we present the Advanced Secure Content Cluster Technology, which applies broadcast encryption to content protection for home media networks.

  3. Policy and practice in Myanmar's protected area system.

    PubMed

    Myint Aung, U

    2007-07-01

    Myanmar's protected area (PA) system began nearly 150 years ago under royal patronage. Park policies and practices are embodied in 19 pieces of legislation developed sporadically during and after the colonial period. As a result of the FAO-UNDP Nature Conservation and National Parks Project (1981-1985), the government established the Nature & Wildlife Conservation Division within the Forest Department as the agency responsible for PA management, and the number of parks subsequently increased from 14 to 33. Myanmar's median park size is less than 50 km(2), and only five parks (15%) are larger than 1000 km(2). Most parks conserve terrestrial habitats; parks encompassing inland wetland, mangrove, and marine habitats are limited in number and size. Existing PAs represent Myanmar's ecosystems unequally; the Malay Transition, Burmese Coast, Burmese Transition, and Cardamom Mountains bio-units are under-represented within the system. The effective total PA coverage (i.e., the area of all parks less the area of 13 paper parks) is currently about 2.1%. Budgetary support for parks has increased 11% since 1996 but is insufficient to address park needs, particularly in remote parks that are understaffed. Limited education and training of PA staff is a major factor limiting effective park management. Fifty-eight percent of park wardens are educated as foresters, and 42% have university degrees. The average posting in a park is 4 years, which is less than ideal for management continuity. Recommended actions to secure Myanmar's PAs include evaluation and reformulation of policies, increasing the representation of Myanmar's habitats within the PA system, management planning, and standardized protocols for anti-poaching patrols and other forms of law enforcement. Improved leadership training for wardens and range forest officers can also improve park management. Funding for community relations and more integrated management of parks and people can reduce conflicts, while

  4. It’s our Fault: Immersing Young Learners in Authentic Practices of Seismology

    NASA Astrophysics Data System (ADS)

    Kilb, D. L.; Moher, T.; Wiley, J.

    2009-12-01

    The scalable RoomQuake seismology project uses a learning technology framework, embedded phenomena (Moher, 2006), that simulates seismic phenomena mapped directly onto the physical space of classrooms. The project, aimed at the upper elementary level, situates students as the scientists in an extended investigation designed to discover the spatial, temporal, and intensity distributions of a series of earthquakes. It emulates earthquake occurrence over a condensed span of time and space, with students mapping an earthquake fault imagined to run through their classroom. Students learn basic seismology terms, how to identify seismic P- and S-waves, skills associated with trilateration, nomogram/graph reading skills, and how to recognize the emergence of a fault from RoomQuake geometries. From the students' perspective, and as with real-world earthquakes, RoomQuakes occur at unknown times over the course of several weeks. Multiple computers distributed around the perimeter of the classroom serve as simulated seismographs depicting continuous strip-chart seismic recordings. Most of the time the seismograms show only background noise, but at apparently unpredictable times a crescendoing rumble (emanating from a subwoofer) signals a RoomQuake. Hearing this signal, students move to the seismic stations to read the strip charts. The students then trilaterate the RoomQuake epicenter by arcing calibrated strings, each of length proportional to the S-P latency at its seismic station, until a common point is identified. Each RoomQuake epicenter is marked by hanging a Styrofoam ball (color-coded by magnitude) from the ceiling. The developing 'fault' within the classroom provides an immersive historical record of the RoomQuakes' spatial distribution. Students also maintain a temporal record of events on a large timeline on the wall (recognizing time-related phenomena such as aftershocks) and a record of magnitude frequencies on
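    The trilateration the students perform physically, by arcing strings from each station, can be expressed algebraically: each S-P latency gives a distance (circle radius) from a station, and subtracting the circle equations yields a linear system for the epicenter. A minimal sketch, with invented station coordinates:

```python
import math

# Sketch of the trilateration step in RoomQuake: three stations,
# three distances (from S-P lags), one common intersection point.

def trilaterate(stations, distances):
    """Solve for (x, y) from three (xi, yi) stations and distances ri.
    Subtracting circle equations gives a 2x2 linear system A [x, y] = b."""
    (x1, y1), (x2, y2), (x3, y3) = stations
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # Cramer's rule for the 2x2 system
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Stations at three classroom corners (meters); distances derived
# here from a known epicenter for demonstration.
stations = [(0.0, 0.0), (8.0, 0.0), (0.0, 6.0)]
epicenter = (3.0, 2.0)
distances = [math.dist(epicenter, s) for s in stations]
print(trilaterate(stations, distances))  # ~ (3.0, 2.0)
```

In the classroom, the distances come from S-P latencies multiplied by a calibration factor rather than from a known epicenter, but the geometry is the same.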

  5. A decentralized fault detection and isolation scheme for spacecraft: bridging the gap between model-based fault detection and isolation research and practice

    NASA Astrophysics Data System (ADS)

    Indra, S.; Travé-Massuyès, L.; Chanthery, E.

    2013-12-01

    This paper introduces a decentralized fault detection and isolation (FDI) architecture for spacecraft and applies it to the attitude determination and control system (ADCS) of a satellite. The system is decomposed into functional subsystems, and the architecture comprises local diagnosers that work with local models of those subsystems. Fault ambiguities due to interactions between subsystems are resolved at a higher level by a supervisor, which combines the partial views of the local diagnosers and performs isolation on request. The architecture is hierarchically scalable. The structure of the ADCS is modeled as constraints and variables and used to demonstrate the decentralized architecture.

  6. Fault tolerance in space-based digital signal processing and switching systems: Protecting up-link processing resources, demultiplexer, demodulator, and decoder

    NASA Technical Reports Server (NTRS)

    Redinbo, Robert

    1994-01-01

    Fault tolerance features in the first three major subsystems appearing in the next generation of communications satellites are described. These satellites will contain extensive but efficient high-speed processing and switching capabilities to support the low signal strengths associated with very small aperture terminals. The terminals' numerous data channels are combined through frequency division multiplexing (FDM) on the up-links and are protected individually by forward error-correcting (FEC) binary convolutional codes. The front-end processing resources, demultiplexer, demodulators, and FEC decoders extract all data channels which are then switched individually, multiplexed, and remodulated before retransmission to earth terminals through narrow beam spot antennas. Algorithm based fault tolerance (ABFT) techniques, which relate real number parity values with data flows and operations, are used to protect the data processing operations. The additional checking features utilize resources that can be substituted for normal processing elements when resource reconfiguration is required to replace a failed unit.
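    Algorithm-based fault tolerance of the kind described, where real-number parity values ride along with the data flows, is classically illustrated by a checksummed matrix multiply in the style of Huang and Abraham: checksum rows and columns are appended before the operation and re-verified afterward, so an arithmetic fault shows up as a checksum mismatch. A minimal sketch with illustrative matrices:

```python
# Sketch of algorithm-based fault tolerance (ABFT): append a column
# checksum to A and a row checksum to B, multiply, then verify that
# the checksums of the product still hold.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def with_column_checksum(A):
    """Append a row holding each column's sum."""
    return A + [[sum(col) for col in zip(*A)]]

def with_row_checksum(B):
    """Append to each row that row's sum."""
    return [row + [sum(row)] for row in B]

def check(C):
    """Verify the checksum column and checksum row of the product."""
    ok_rows = all(abs(sum(row[:-1]) - row[-1]) < 1e-9 for row in C[:-1])
    cols = list(zip(*C))
    ok_cols = all(abs(sum(col[:-1]) - col[-1]) < 1e-9 for col in cols[:-1])
    return ok_rows and ok_cols

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = matmul(with_column_checksum(A), with_row_checksum(B))
print(check(C))   # True: no fault
C[0][0] += 0.5    # inject an arithmetic fault into one product element
print(check(C))   # False: checksum mismatch detected
```

In the satellite context, detection of a mismatch would trigger the resource reconfiguration the abstract mentions, substituting a spare processing element for the failed unit.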

  7. Whose fault is it anyway? A practical illustration of human factors in process safety.

    PubMed

    Hughes, Gareth; Kornowa-Weichel, Megan

    2004-11-11

    Major process accidents have typically occurred not through a single failure, but through a combination of events, some of which had contributors from past actions and failures (latent or unrevealed failures). People are integral and key features of business systems; therefore systems, tools, and equipment should be designed with the potential capabilities and limitations of people in mind. This paper demonstrates the benefits of using human factors approaches to improve system safety and reliability. Practical examples from past experience are quoted and a framework for human error prediction is described. Guidance is given on the practicalities of deriving recommendations from these types of studies. PMID:15518974

  8. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, microzonation guidelines have been drawn up that take the problem of surface faulting in Italy into consideration, laying the basis for future regulations on the related hazard, as in other countries (e.g., the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e., faults able to produce surface faulting) are to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land-use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria for assessing fault capability across the Italian territory. As for the chronology, a review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals, encompassing the Quaternary, depending on the ongoing tectonic regime (compressive or extensional). As for the operational criteria, detailed analysis of the large body of work dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g., fault plane exposure) or on indirect investigations (geophysical data) are insufficient or even unreliable for establishing the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. The central Apennines can serve as a test area in which active and capable faults are first mapped with this classical but still effective methodological approach. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time

  9. Groundwater protection policies and practices in the United Kingdom.

    PubMed

    Chave, P A

    2000-01-01

    This paper summarises the main objectives for groundwater protection and describes the system adopted in the UK. Groundwater is essential as a source of drinking-water, even in countries with high rainfall; it is used for irrigation and industrial purposes; and it often comprises the base flow of rivers under dry conditions. Water quality is threatened by industry, landfill leachate, agricultural contamination and mine drainage. Quantity is threatened by over-abstraction. The UK uses a risk-based concept of vulnerability to pollution and over-abstraction. A groundwater protection policy has been devised providing guidance on those activities which require control in groundwater protection zones, reflecting the vulnerability of the aquifer. The zones make use of the travel time of contaminants to the water abstraction point. All major activities such as water abstraction, waste disposal, and spreading of agricultural materials may thus be assessed in terms of their risk to the groundwater, and suitable precautions may be taken. Groundwater, once polluted, is an asset which is difficult and expensive to replace. Vulnerability assessment is a useful tool to assist in its protection. PMID:10842809

  10. Protecting Confidentiality Rights: The Need for an Ethical Practice Model

    ERIC Educational Resources Information Center

    Fisher, Mary Alice

    2008-01-01

    All psychologists must uphold the same ethical standards about confidentiality even though each state imposes different legal limits on their ability to protect clients' confidences. The resulting ethical-legal confusion is exacerbated by legally based confidentiality training that treats legal exceptions as if they were the rule and fosters the…

  11. 78 FR 59725 - Construction Fall Protection Systems Criteria and Practices, and Training Requirements; Extension...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-27

    ...OSHA solicits public comments concerning its proposal to extend OMB approval of the information collection requirements contained in the Construction Standards on Fall Protection Systems Criteria and Practices (29 CFR 1926.502), and Training Requirements (29 CFR...

  12. 49 CFR 214.105 - Fall protection systems standards and practices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Fall protection systems standards and practices. 214.105 Section 214.105 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD WORKPLACE SAFETY Bridge Worker Safety Standards § 214.105 Fall protection...

  13. Hepatitis C Avoidance in Injection Drug Users: A Typology of Possible Protective Practices

    PubMed Central

    McGowan, Catherine; Harris, Magdalena; Rhodes, Tim

    2013-01-01

    Introduction Hepatitis C virus (HCV) represents a serious public health concern. People who inject drugs (PWID) are at particular risk and nearly half (45%) of PWID in England may be infected. HCV prevention interventions have only had moderate impact on the prevalence of HCV in this population. Using qualitative methods, we sought to detail the protective practices potentially linked to HCV avoidance among PWID, and explore the motivations for these. Methods The study used a life history approach allowing participants to detail their lived experience both before and during the course of their injecting careers. Thirty-seven participants were recruited from drug services in London, and from referrals within local injecting networks. A baseline and follow-up in-depth qualitative interview was carried out with each participant, and for half, a third interview was also undertaken. All underwent testing for HCV antibody. Analyses focused on developing a descriptive typology of protective practices potentially linked to HCV avoidance. Results Practices were deemed to be protective against HCV if they could be expected a priori to reduce the number of overall injections and/or the number of injections using shared injecting equipment. Participants reported engaging in various protective practices which fell into three categories identified through thematic analysis: principles about injecting, preparedness, and flexibility. Conclusions All participants engaged in protective practices irrespective of serostatus. It is important to consider the relative importance of different motivations framing protective practices in order to formulate harm reduction interventions which appeal to the situated concerns of PWID, especially given that these protective practices may also help protect against HIV and other blood borne infections. PMID:24194855

  14. Child Protection and Anti-Oppressive Practice: The Dynamics of Partnership with Parents Explored.

    ERIC Educational Resources Information Center

    Jones, Jocelyn

    1994-01-01

    Outlines the impact of legislative changes in England and Wales following implementation of the Children Act 1989. Notes the central role of partnership to the act. Focuses on preventive practice and partnership with parents for child protection, exploring the balance between collusive and oppressive child welfare practice. Concludes with a case…

  15. Study on Practical Application of Turboprop Engine Condition Monitoring and Fault Diagnostic System Using Fuzzy-Neuro Algorithms

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Lim, Semyeong; Kim, Keunwoo

    2013-03-01

    Neural networks are widely used in engine fault diagnostic systems because of their good learning performance, but they suffer from low accuracy and the long learning time needed to build the training database. This work inversely builds a base performance model of a turboprop engine for a high-altitude UAV from measured performance data, and proposes a fault diagnostic system that combines this base performance model with artificial intelligence methods such as fuzzy logic and neural networks. For each real engine, a base performance model that can simulate a new engine's performance is built inversely from its performance test data, so that the condition of each engine can be monitored more precisely through comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using fuzzy logic, and then quantifies the faults of the identified components using neural networks trained on a fault learning database derived from the base performance model. The feed-forward back-propagation (FFBP) algorithm is used to learn the measured performance data of the faulted components. For ease of use, the proposed diagnostic program is implemented as a MATLAB GUI.

  16. Women empowerment and practices regarding use of dual protection among family planning clients in urban Zimbabwe

    PubMed Central

    Mutowo, Jesca; Kasu, Christine Mary; Mufunda, Esther

    2014-01-01

    Introduction Gender-related vulnerability may increase women's susceptibility to HIV infection and unintended pregnancy. The purpose of the study was to examine the relationship between women's empowerment and practices regarding use of dual protection. Methods A non-experimental descriptive correlational study was conducted, using a systematic sampling method to recruit eighty women aged 18-49 years at an urban clinic in Zimbabwe. Data were collected using a structured interview schedule and analysed and presented using descriptive and inferential statistics. Results A weak positive significant correlation existed between women's empowerment and use of dual protection (r = 0.242, p = 0.03). Findings demonstrated that as women's empowerment levels increase, practices regarding use of dual protection also increase. The coefficient of determination (R2 = 0.058, b = 0.293) indicated that the level of women's empowerment explained 5.8% of the total variation in utilization of dual protection. The major finding was that use of dual protection was very low (3.8%) and that 67.5% of participants had low levels of practices regarding use of dual protection. Additionally, 85.0% were not confident using the female condom. Conclusion Gender inequality within sexual relations was associated with low levels of practices regarding use of dual protection. The study provided evidence of the need for a proactive, integrated approach to empowering women so that they can negotiate safer sex practices. To increase female condom utilization, manufacturers need to redesign the female condom so that it becomes user friendly. Health personnel need to involve men for any reproductive health program to succeed. PMID:25328596

  17. Knowledge and practice of sun protection in schools in South Africa where no national sun protection programme exists.

    PubMed

    Wright, Caradee Y; Reeder, Anthony I; Albers, Patricia N

    2016-04-01

    Interventions in primary schools that increase sun-protective behaviours and decrease ultraviolet radiation exposure, sunburn incidence and skin cancer risk can be effective. SunSmart School Accreditation Programmes (SSAP) are recommended. Prior to SSAP implementation in South Africa, we explored the feasibility of obtaining national baseline information and investigated possible associations between strategies regarding sun protection in schools and students' responses to a questionnaire. Principals from randomly selected urban government schools in all nine South African provinces completed a questionnaire and 679 students were surveyed. The mean sun-related knowledge and behaviour scores of students were 4 (range: 1-7) and 3 (range: 0-8) out of 9, respectively. The mean school sun protection effort score was 4 out of 14. There were no statistically significant correlations between students' knowledge or behaviour scores and their school score. The World Health Organization recommends an SSAP to address policy, practice and curriculum changes to support sun protection of students. This cross-sectional study demonstrates the feasibility of, and need for, a larger baseline study with longitudinal, multi-variable follow-up which includes other influential factors, such as parent support. Such research could quantify the impact of the SSAP and identify which key factors influence the sun-related knowledge and behaviours of students. PMID:26936482

  18. 75 FR 22844 - Construction Fall Protection Systems Criteria and Practices and Training Requirements; Extension...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-30

    ... 1995 (44 U.S.C. 3506 et seq.) and Secretary of Labor's Order No. 5-2007 (72 FR 31160). Signed at... Occupational Safety and Health Administration Construction Fall Protection Systems Criteria and Practices and Training Requirements; Extension of the Office of Management and Budget's (OMB) Approval of...

  19. 45 CFR 164.520 - Notice of privacy practices for protected health information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Notice of privacy practices for protected health information. 164.520 Section 164.520 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS SECURITY AND PRIVACY Privacy of Individually Identifiable Health Information § 164.520 Notice of...

  20. Farm operator perceptions of water quality protective pest management practices: Selected survey findings

    SciTech Connect

    Zimmerman, R.; Blair, J.; Webb, B.

    1995-12-01

    The use of pesticides in agriculture often poses a tension between water quality and environmental protection goals on the one hand and the viability of food supplies on the other. Pesticides used for field crops (e.g., corn, soybeans, and wheat) have been detected in waterbodies and, according to some studies, are apparently finding their way into water supplies. A considerable amount of discretion is allowed in farm operators' choice of pest management practices, and voluntary behavior becomes an important factor in promoting environmentally protective practices. Thus, it is important to know the attitudes of farmers who make pest management decisions, including pesticide choices, toward the use of various water quality protective pest management practices. A number of studies show that more general environmental attitudes reflect a general world view that shapes attitudes toward particular environmental issues. This paper addresses the relationship between the more general environmental attitudes of farmers and their attitudes toward water quality issues and pest management practices which are protective of water quality. Some of the personal tradeoffs farmers are willing to make to enhance environmental controls on pesticides are also explored. Results are based on preliminary findings from a survey of farm operators who grow corn, soybeans, and other field crops in three eastern states. The survey was conducted via a mail questionnaire sent to 2,700 farmers, with telephone follow-up, during the fall of 1994. Implications of the findings for pest management in general are discussed.

  1. Sexual Harassment Preventive/Protective Practices at U.S. Colleges and Universities.

    ERIC Educational Resources Information Center

    Hobson, Charles J.; Guziewicz, Jennifer

    2002-01-01

    Reports on a survey concerning thirteen recommended sexual harassment preventive/protective practices at U.S. colleges and universities. A majority of responding institutions had formal sexual harassment policies, offered counseling to student victims, and investigated all complaints. Relatively fewer schools provided student access to faculty…

  2. Packaged Fault Model for Geometric Segmentation of Active Faults Into Earthquake Source Faults

    NASA Astrophysics Data System (ADS)

    Nakata, T.; Kumamoto, T.

    2004-12-01

In Japan, the empirical formula proposed by Matsuda (1975), based mainly on the length of historical surface fault ruptures and magnitude, is generally applied to estimate the size of future earthquakes from the extent of existing active faults for seismic hazard assessment. The validity of the active fault length, and the definition of the individual segment boundaries where propagating ruptures terminate, are therefore crucial to the reliability of the assessments. It is, however, difficult to clearly identify behavioral earthquake segments from observation of surface faulting during the historical period, because most active faults in Japan have recurrence intervals longer than 1000 years. Moreover, the uncertainties of the datasets obtained mainly from fault trenching studies are quite large for fault grouping/segmentation. This is why new methods or criteria should be applied for active fault grouping/segmentation, and one of the candidates is the geometric criterion of active faults. Matsuda (1990) used "five kilometers" as a critical distance for grouping and separating neighboring active faults. On the other hand, Nakata and Goto (1998) proposed geometric criteria such as (1) branching features of active fault traces and (2) characteristic patterns of vertical-slip distribution along the fault traces as tools to predict the rupture length of future earthquakes. Branching during rupture propagation is regarded as an effective energy dissipation process and could result in final rupture termination. With respect to the characteristic pattern of vertical-slip distribution, especially with strike-slip components, the up-thrown sides along the faults are, in general, located on the fault blocks in the direction of relative strike-slip. Applying these new geometric criteria to high-resolution active fault distribution maps, fault grouping/segmentation could be conducted more practically. We tested this model

  3. Practices Changes in the Child Protection System to Address the Needs of Parents With Cognitive Disabilities

    PubMed Central

    Azar, Sandra T.; Maggi, Mirella C.; Proctor, Stephon Nathanial

    2016-01-01

    Parents with cognitive disabilities (PCD) are over-represented in the child protection system. However, the current state of the child protection system is not well prepared for working with them. Biases that exist against their parenting, the need for accommodations in assessment and intervention practices, and specific training in staff and cross systems barriers need to be addressed. This paper argues for changes that will ensure such parents are more effectively served and that child protection staff and contract providers are better equipped to work with them. Specific changes are discussed in assessment and intervention practices. These changes will require human capacity building and organizational restructuring. Although empirically based behavioral approaches with PCD will be emphasized, recent empirical work suggests that social information processing and neurocognitive problems occur in PCD. Approaches to working with such problems are emerging and must also be considered and integrated into a blueprint for change.

  4. [Data protection, radiation protection and copyright: Problems of transferring results in assessment practice].

    PubMed

    Klemm, H-T

    2015-06-01

In Germany, the medical assessor is subject to the law on contracts for work and services ("Werksvertragsrecht"). When a medical expert assesses a subject on behalf of a third party, there is no contractual relationship between them. In the field of private insurance law and in social insurance law, the medical expert is faced with various procedural requirements. Failing to meet these legal requirements often makes the assessment difficult or even impossible. The transfer of radiographs to the medical assessor is dealt with in the German X-ray regulations ("Röntgenverordnung"). The assessor, who is without doubt an examining doctor, has the right to have the radiographs temporarily made available (§ 28 et al.). Passing on the radiographs is all the more appropriate if by doing so additional X-ray examinations can be avoided. The right of access to medical data in social security law, apart from the X-ray regulations, is regulated by German Civil Code (BGB) § 810 and German Basic Law section 1 paragraph 1 in connection with section 2 paragraph 1 ("§ 810 BGB; Art. 1 Abs. 1, Art. 2 Abs. 1 GG"). In the absence of a third-party interest worthy of protection, the right of access to assessment records has to be granted to the subject, who will then authorize the examining medical expert to exercise this right. In private insurance law, only private health insurance has its own regulation concerning the obtaining of information about treatment or access to medical assessments. In other types of insurance, the medical assessor's right of access to medical examination data and/or the basis for medical findings can only be derived from secondary obligations as part of the insurance contract or directly from general constitutional personal rights. PMID:25971951

  5. Common Practice Lightning Strike Protection Characterization Technique to Quantify Damage Mechanisms on Composite Substrates

    NASA Technical Reports Server (NTRS)

    Szatkowski, George N.; Dudley, Kenneth L.; Koppen, Sandra V.; Ely, Jay J.; Nguyen, Truong X.; Ticatch, Larry A.; Mielnik, John J.; Mcneill, Patrick A.

    2013-01-01

To support FAA certification airworthiness standards, composite substrates are subjected to lightning direct-effect electrical waveforms to determine performance characteristics of the lightning strike protection (LSP) conductive layers used to protect composite substrates. Test results collected from independent LSP studies are often incomparable due to variability in test procedures and applied practices at different organizations, which impairs performance correlations between different LSP data sets. Under a NASA supported contract, The Boeing Company developed technical procedures and documentation as guidance to facilitate a universal common practice lightning strike protection test method. The procedures bring conformity to future lightning strike protection evaluations, allowing meaningful performance correlations across data sets. This universal common practice guidance provides the manufacturing specifications to fabricate carbon fiber reinforced plastic (CFRP) test panels, including finish, grounding configuration, and acceptable methods for pretest nondestructive inspection (NDI) and posttest destructive inspection. The test operations guidance elaborates on the provisions contained in SAE ARP5416 to address inconsistencies in the generation of damage protection performance data, so as to provide for maximum achievable correlation across capable lab facilities. In addition, the guidance details a direct effects test bed design to aid in quantification of the multi-physical phenomena surrounding a lightning direct attachment, supporting validation data requirements for the development of predictive computational modeling. The lightning test bed is designed to accommodate a repeatable installation procedure to secure the test panel and eliminate test installation uncertainty. It also facilitates a means to capture the electrical waveform parameters in 2 dimensions, along with the mechanical displacement and thermal

  6. Managing the Risk of Triggered Seismicity: Can We Identify (and Avoid) Potentially Active Faults? - A Practical Case Study in Oklahoma

    NASA Astrophysics Data System (ADS)

    Zoback, M. D.; Alt, R. C., II; Walsh, F. R.; Walters, R. J.

    2014-12-01

It is well known that throughout the central and eastern U.S. there has been a marked increase in seismicity since 2009, at least some of which appears to be related to increased wastewater injection. No area has seen a greater increase in seismicity than Oklahoma. In this paper, we utilize newly available information on in situ stress orientation and relative magnitudes, the distribution of high volume injection wells, and knowledge of the intervals used for wastewater disposal to identify the factors potentially contributing to the occurrence of triggered seismicity. While there are a number of sites where in situ stress data have been successfully used to identify potentially active faults, we are investigating whether this methodology can be implemented throughout a state utilizing the types of information frequently available in areas of oil and gas development. As an initial test of this concept, we have been compiling stress orientation data from wells throughout Oklahoma provided by private industry. Over fifty new high quality data points, principally drilling-induced tensile fractures observed in image logs, result in a greatly improved understanding of the stress field in much of the state. A relatively uniform ENE direction of maximum compressive stress is observed, although stress orientations (and possibly relative stress magnitudes) differ in the southern and southwestern parts of the state. The proposed methodology can be tested in the area of the NE-trending fault that produced the M 5+ earthquakes in the Prague, OK sequence in 2011, and the Meers fault in southwestern OK, which produced a M~7 reverse faulting earthquake about 1100 years ago. This methodology can also be used to essentially rule out slip on other major faults in the area, such as the ~N-S trending Nemaha fault system. Additional factors leading to the occurrence of relatively large triggered earthquakes in Oklahoma are 1) the overall increase in injection volumes throughout the state in recent

  7. The Relationship between Sun Protection Policy and Associated Practices in a National Sample of Early Childhood Services in Australia

    ERIC Educational Resources Information Center

    Ettridge, Kerry A.; Bowden, Jacqueline A.; Rayner, Joanne M.; Wilson, Carlene J.

    2011-01-01

    Limiting exposure to sunlight during childhood can significantly reduce the risk of skin cancer. This was the first national study to assess the sun protection policies and practices of early childhood services across Australia. It also examined the key predictors of services' sun protection practices. In 2007, 1017 respondents completed a…

  8. Fault Analysis in Solar Photovoltaic Arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Ye

Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown, at times, to prevent the fault current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. The other is a fault evolution in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
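The current-limiting behavior described above can be illustrated with a minimal single-diode PV model (all parameter values are illustrative, not from the thesis): even into a bolted fault, a PV string delivers little more than its short-circuit current, which can sit below the series fuse rating, so the fault is never cleared.

```python
import math

def pv_current(v, i_ph=8.0, i_0=1e-9, n_vt=1.8):
    """Single-diode PV model (series/shunt resistance neglected):
    terminal current at voltage v [V]. i_ph is the photocurrent."""
    return i_ph - i_0 * (math.exp(v / n_vt) - 1.0)

# Worst-case fault current into a short is essentially the short-circuit
# current I_sc -- there is no "stiff-source" surge as in a utility grid.
i_sc = pv_current(0.0)          # equals i_ph = 8.0 A in this model
fuse_rating = 15.0              # illustrative series fuse rating [A]

# The fuse never sees enough current to blow, so the fault persists.
fault_cleared = i_sc > fuse_rating
print(f"I_sc = {i_sc:.2f} A, fuse = {fuse_rating} A, cleared: {fault_cleared}")
```

The same comparison explains the low-irradiance case: with a smaller photocurrent `i_ph`, the available fault current is even further below the fuse rating.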

  9. Enacting Firm, Fair and Friendly Practice: A Model for Strengths-Based Child Protection Relationships?

    PubMed Central

    Oliver, Carolyn; Charles, Grant

    2016-01-01

    Strengths-based solution-focused approaches are gaining ground in statutory child protection work, but few studies have asked front line practitioners how they navigate the complex worker–client relationships such approaches require. This paper describes one component of a mixed-methods study in a large Canadian statutory child protection agency in which 225 workers described how they applied the ideas of strengths-based practice in their daily work. Interviews with twenty-four practitioners were analysed using an interpretive description approach. Only four interviewees appeared to successfully enact a version of strengths-based practice that closely mirrored those described by key strengths-based child protection theorists and was fully congruent with their mandated role. They described navigating a shifting balance of collaboration and authority in worker–client relationships based on transparency, impartial judgement, attentiveness to the worker–client interaction and the value that clients were fellow human beings. Their accounts extend current conceptualisations of the worker–client relationship in strengths-based child protection work and are congruent with current understandings of effective mandated relationships. They provide what may be a useful model to help workers understand and navigate relationships in which they must reconcile their own authority and expertise with genuine support for the authority and expertise of their clients. PMID:27559211

  10. Zipper Faults

    NASA Astrophysics Data System (ADS)

    Platt, J. P.; Passchier, C. W.

    2015-12-01

    Intersecting simultaneously active pairs of faults with different orientations and opposing slip sense ("conjugate faults") present geometrical and kinematic problems. Such faults rarely offset each other, even when they have displacements of many km. A simple solution to the problem is that the two faults merge, either zippering up or unzippering, depending on the relationship between the angle of intersection and the slip senses. A widely recognized example of this is the so-called blind front developed in some thrust belts, where a backthrust branches off a decollement surface at depth. The decollement progressively unzippers, so that its hanging wall becomes the hanging wall of the backthrust, and its footwall becomes the footwall of the active decollement. The opposite situation commonly arises in core complexes, where conjugate low-angle normal faults merge to form a single detachment; in this case the two faults zipper up. Analogous situations may arise for conjugate pairs of strike-slip faults. We present kinematic and geometrical analyses of the Garlock and San Andreas faults in California, the Najd fault system in Saudi Arabia, the North and East Anatolian faults, the Karakoram and Altyn Tagh faults in Tibet, and the Tonale and Guidicarie faults in the southern Alps, all of which appear to have undergone zippering over distances of several tens to hundreds of km. The zippering process may produce complex and significant patterns of strain and rotation in the surrounding rocks, particularly if the angle between the zippered faults is large. A zippering fault may be inactive during active movement on the intersecting faults, or it may have a slip rate that differs from either fault. Intersecting conjugate ductile shear zones behave in the same way on outcrop and micro-scales.

  11. Guide of good practices for occupational radiological protection in plutonium facilities

    SciTech Connect

    1998-06-01

This Technical Standard (TS) does not contain any new requirements. Its purpose is to provide guides to good practice, update existing reference material, and discuss practical lessons learned relevant to the safe handling of plutonium. The technical rationale is given to allow US Department of Energy (DOE) health physicists to adapt the recommendations to similar situations throughout the DOE complex. Generally, DOE contractor health physicists will be responsible for implementing radiation protection activities at DOE facilities, and DOE health physicists will be responsible for oversight of those activities. This guidance is meant to be useful for both efforts. This TS replaces PNL-6534, Health Physics Manual of Good Practices for Plutonium Facilities, by providing more complete and current information and by emphasizing the situations that are typical of DOE's current plutonium operations: safe storage; decontamination and decommissioning (environmental restoration); and weapons disassembly.

  12. System for Collecting, Analyzing, and Using Information on Institutional Consumer Protection Practices: Accreditation User Guide. Improving The Consumer Protection Function in Postsecondary Education.

    ERIC Educational Resources Information Center

    Jung, Steven M.; And Others

    This accreditation user guide describes: (1) a set of institutional conditions, policies, and practices that are potentially abusive to students, and (2) a system for collecting, analyzing, and using quantitative data on these conditions, policies, and practices. Possible uses include: (1) setting consumer protection standards and goals that…

  13. Performance-based semi-active control algorithm for protecting base isolated buildings from near-fault earthquakes

    NASA Astrophysics Data System (ADS)

    Mehrparvar, Behnam; Khoshnoudian, Taramarz

    2012-03-01

    Base isolated structures have been found to be at risk in near-fault regions as a result of long period pulses that may exist in near-source ground motions. Various control strategies, including passive, active and semi-active control systems, have been investigated to overcome this problem. This study focuses on the development of a semi-active control algorithm based on several performance levels anticipated from an isolated building during different levels of ground shaking corresponding to various earthquake hazard levels. The proposed performance-based algorithm is based on a modified version of the well-known semi-active skyhook control algorithm. The proposed control algorithm changes the control gain depending on the level of shaking imposed on the structure. The proposed control system has been evaluated using a series of analyses performed on a base isolated benchmark building subjected to seven pairs of scaled ground motion records. Simulation results show that the newly proposed algorithm is effective in improving the structural and nonstructural performance of the building for selected earthquakes.
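The modified skyhook idea can be sketched as follows. The classic semi-active rule applies damping only when it would oppose the absolute motion; the performance-based extension switches the gain with shaking intensity. The gain schedule and thresholds below are hypothetical illustrations, not the paper's actual algorithm.

```python
def skyhook_force(v_abs, v_rel, c_sky):
    """Classic semi-active skyhook: command damping force only when the
    structure's absolute velocity and the relative (across-damper)
    velocity have the same sign; otherwise command zero force."""
    if v_abs * v_rel > 0.0:
        return c_sky * v_abs
    return 0.0

def performance_based_gain(peak_ground_accel_g):
    """Hypothetical gain schedule keyed to shaking level: larger control
    gain for stronger ground motion (thresholds are illustrative)."""
    if peak_ground_accel_g < 0.1:    # service-level event
        return 500.0
    elif peak_ground_accel_g < 0.4:  # design-level event
        return 2000.0
    return 5000.0                    # maximum considered event

# Strong shaking: superstructure moving up while the isolator extends.
f = skyhook_force(v_abs=0.2, v_rel=0.05,
                  c_sky=performance_based_gain(0.5))
print(f)  # 1000.0 N of commanded damping force
```

The sign test is what makes the controller "semi-active": the damper only dissipates energy, so the commanded force is realizable without an external power supply.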

  14. Radiological protection in ion beam radiotherapy: practical guidance for clinical use of new technology.

    PubMed

    Yonekura, Y; Tsujii, H; Hopewell, J W; Ortiz López, P; Cosset, J-M; Paganetti, H; Montelius, A; Schardt, D; Jones, B; Nakamura, T

    2016-06-01

Recently introduced technologies in radiotherapy have significantly improved the clinical outcome for patients. Ion beam radiotherapy, involving proton and carbon ion beams, provides excellent dose distributions in targeted tumours, with reduced doses to the surrounding normal tissues. However, careful treatment planning is required in order to maximise the treatment efficiency and minimise the dose to normal tissues. Radiation exposure from secondary neutrons and photons, particle fragments, and photons from activated materials should also be considered for radiological protection of the patient and medical staff. Appropriate maintenance is needed for the equipment and air in the treatment room, which may be activated by the particle beam and its secondary radiation. This new treatment requires complex procedures and careful adjustment of parameters for each patient. Therefore, education and training for the personnel involved in the procedure are essential for both effective treatment and patient protection. The International Commission on Radiological Protection (ICRP) has provided recommendations for radiological protection in ion beam radiotherapy in Publication 127. Medical staff should be aware of the possible risks resulting from inappropriate use and control of the equipment. They should also consider the necessary procedures for patient protection when new technologies are introduced into clinical practice. PMID:26980799

  15. Fault finder

    DOEpatents

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
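The synchronized two-terminal scheme implies a simple distance calculation: the surge launched at the fault reaches each end of the line at a time proportional to its distance, so the clock-synchronized difference in arrival times pins down the fault position. A sketch (the propagation speed and line data are illustrative, not from the patent):

```python
def fault_distance(t_master_us, t_remote_us, line_length_km,
                   v_km_per_us=0.29):
    """Two-terminal fault location with synchronized clocks.
    The surge arrives at the master at d/v and at the remote unit at
    (L - d)/v, so d = (L + v * (t_master - t_remote)) / 2.
    v defaults to ~0.29 km/us (~97% of c), a typical surge speed
    on overhead transmission lines."""
    dt = t_master_us - t_remote_us
    return (line_length_km + v_km_per_us * dt) / 2.0

# Fault 30 km from the master end of a 100 km line:
t_m = 30.0 / 0.29   # arrival time at master  [us]
t_r = 70.0 / 0.29   # arrival time at remote  [us]
print(fault_distance(t_m, t_r, 100.0))  # ~30.0 km
```

The arithmetic shows why clock synchronization is the hard part: at 0.29 km/µs, a 1 µs clock error shifts the computed location by roughly 145 m.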

  16. Associations of contextual risk and protective factors with fathers' parenting practices in the postdeployment environment.

    PubMed

    Davis, Laurel; Hanson, Sheila K; Zamir, Osnat; Gewirtz, Abigail H; DeGarmo, David S

    2015-08-01

    Deployment separation and reunifications are salient contexts that directly impact effective family functioning and parenting for military fathers. Yet, we know very little about determinants of postdeployed father involvement and effective parenting. The present study examined hypothesized risk and protective factors of observed parenting for 282 postdeployed fathers who served in the National Guard/Reserves. Preintervention data were employed from fathers participating in the After Deployment, Adaptive Parenting Tools randomized control trial. Parenting practices were obtained from direct observation of father-child interaction and included measures of problem solving, harsh discipline, positive involvement, encouragement, and monitoring. Risk factors included combat exposure, negative life events, months deployed, and posttraumatic stress disorder symptoms. Protective factors included education, income, dyadic adjustment, and social support. Results of a structural equation model assessing risk and protective factors for an effective parenting construct indicated that months deployed, income, and father age were most related to observed parenting, explaining 16% of the variance. We are aware of no other study using direct parent-child observations of fathers' parenting skills following overseas deployment. Implications for practice and preventive intervention are discussed. PMID:26213794

  17. Recommended Radiation Protection Practices for Low-Level Waste Disposal Sites

    SciTech Connect

    Hadlock, D. E.; Hooker, C. D.; Herrington, W. N.; Gilchrist, R. L.

    1983-12-01

The United States Nuclear Regulatory Commission contracted with Pacific Northwest Laboratory (PNL) to provide technical assistance in establishing operational guidelines, with respect to radiation control programs and methods of minimizing occupational radiation exposure, at Low-Level Waste (LLW) disposal sites. PNL, through site visits, evaluated operations at LLW disposal sites to determine the adequacy of current practices in maintaining occupational exposures as low as is reasonably achievable (ALARA). The data sought included the specifics of: ALARA programs, training programs, external exposure control, internal exposure control, respiratory protection, surveillance, radioactive waste management, facilities and equipment, and external dose analysis. The results of the study indicated the following: the Radiation Protection and ALARA programs at the three commercial LLW disposal sites were observed to be adequate in scope and content compared to similar programs at other types of nuclear facilities. However, there were many areas that could be improved upon to help ensure the health and safety of occupationally exposed individuals. As a result, radiation protection practices were recommended, with related rationales, in order to reduce occupational exposures as far below specified radiation limits as is reasonably achievable. In addition, recommendations were developed for achieving occupational exposures ALARA under the Regulatory Requirements issued in 10 CFR Part 61.

  18. Search and seizure law; practical advice and interpretation for nuclear protective force persons

    SciTech Connect

    Cadwell, J.J.

    1983-07-06

Recent Supreme Court decisions, which interpret the 200-year-old Fourth Amendment of the US Constitution, are used to provide a brief overview of some search and seizure subjects important to management and officers responsible for physical protection of nuclear facilities. The overview is framed in practical terms in order to make the comments applicable to the everyday activity of nuclear-protective-force persons. The Supreme Court has described several exceptions where searches and seizures (arrests) are permitted without a warrant, despite the Fourth Amendment's statement that warrants are always required. The seven exceptions briefly discussed are searches incident to a lawful arrest, the automobile-search exception, the suitcase or container exception, the hot-pursuit or emergency exception, the stop-and-frisk exception, the plain-view exception, and consent to be searched.

  19. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Fault areas. 258.13 Section 258.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and...

  20. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Fault areas. 258.13 Section 258.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet...

  1. [General practitioner's role in immunization practice: prevention, counseling and protection of patients at risk].

    PubMed

    Marino, Maria Giulia; Corongiu, Maria; Franco, Elisabetta

    2014-01-01

In Italy, the General Practitioner (GP) plays a key role in directing patients in immunization practice, especially those at risk, who might benefit most from vaccine protection. The GP's numerous specific activities in this field include vaccine administration, reporting of adverse reactions, checking of vaccination status, counseling, identification of at-risk patients, recommendation of post-exposure prophylaxis, and self and ambulatory staff immunization. The GP is one of the main health professionals in charge of patient care and has the task of ensuring both disease prevention and the restraint of health care costs. PMID:25194124

  2. EFFECTIVENESS OF FORESTRY BEST MANAGEMENT PRACTICES IN PROTECTING FISH RESOURCES OF TWO NATIONAL FORESTS IN WESTERN WASHINGTON

    EPA Science Inventory

    This study is part of the National Network for Environmental Management Studies conducted under the auspices of the Office of Cooperative Environmental Management--U.S. Environmental Protection Agency. This study investigates whether Forestry Best Management Practices (BMP's) fun...

  3. Fault diagnosis of analog circuits

    SciTech Connect

    Bandler, J.W.; Salama, A.E.

    1985-08-01

    In this paper, various fault location techniques in analog networks are described and compared. The emphasis is on the more recent developments in the subject. Four main approaches for fault location are addressed, examined, and illustrated using simple network examples. In particular, we consider the fault dictionary approach, the parameter identification approach, the fault verification approach, and the approximation approach. Theory and algorithms that are associated with these approaches are reviewed and problems of their practical application are identified. Associated with the fault dictionary approach we consider fault dictionary construction techniques, methods of optimum measurement selection, different fault isolation criteria, and efficient fault simulation techniques. Parameter identification techniques that either utilize linear or nonlinear systems of equations to identify all network elements are examined very thoroughly. Under fault verification techniques we discuss node-fault diagnosis, branch-fault diagnosis, subnetwork testability conditions as well as combinatorial techniques, the failure bound technique, and the network decomposition technique. For the approximation approach we consider probabilistic methods and optimization-based methods. The artificial intelligence technique and the different measures of testability are also considered. The main features of the techniques considered are summarized in a comparative table. An extensive, but not exhaustive, bibliography is provided.
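The fault dictionary approach described above can be sketched as a nearest-neighbor lookup: signatures (e.g., node voltages at selected test points) are simulated offline for each catalogued fault, then the measured signature is matched to the closest entry. The circuit, fault labels, and voltage values below are invented for illustration.

```python
import math

# Hypothetical fault dictionary: fault label -> simulated node-voltage
# signature at three test points (values are made up for illustration).
FAULT_DICTIONARY = {
    "nominal":  [5.00, 2.50, 1.20],
    "R1 open":  [5.00, 4.80, 2.30],
    "R2 short": [5.00, 0.10, 0.05],
    "C1 leaky": [4.10, 2.10, 0.90],
}

def diagnose(measured):
    """Nearest-neighbor fault isolation: return the dictionary entry
    whose simulated signature has the smallest Euclidean distance to
    the measured signature."""
    def dist(signature):
        return math.sqrt(sum((m - s) ** 2
                             for m, s in zip(measured, signature)))
    return min(FAULT_DICTIONARY, key=lambda k: dist(FAULT_DICTIONARY[k]))

print(diagnose([4.95, 4.70, 2.25]))  # -> R1 open
```

Optimum measurement selection, mentioned in the abstract, amounts to choosing test points so that the dictionary signatures stay well separated under component tolerances and measurement noise.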

  4. On the design of fault-tolerant robotic manipulator systems

    NASA Astrophysics Data System (ADS)

    Tesar, Delbert

    1993-02-01

    Robotic systems are finding increasing use in space applications. Many of these devices are going to be operational on board the Space Station Freedom. Fault tolerance has been deemed necessary because of the criticality of the tasks and the inaccessibility of the systems to maintenance and repair. Design for fault tolerance in manipulator systems is an area within robotics that is without precedence in the literature. In this paper, we will attempt to lay down the foundations for such a technology. Design for fault tolerance demands new and special approaches to design, often at considerable variance from established design practices. These design aspects, together with reliability evaluation and modeling tools, are presented. Mechanical architectures that employ protective redundancies at many levels and have a modular architecture are then studied in detail. Once a mechanical architecture for fault tolerance has been derived, the chronological stages of operational fault tolerance are investigated. Failure detection, isolation, and estimation methods are surveyed, and such methods for robot sensors and actuators are derived. Failure recovery methods are also presented for each of the protective layers of redundancy. Failure recovery tactics often span all of the layers of a control hierarchy. Thus, a unified framework for decision-making and control, which orchestrates both the nominal redundancy management tasks and the failure management tasks, has been derived. The well-developed field of fault-tolerant computers is studied next, and some design principles relevant to the design of fault-tolerant robot controllers are abstracted. Conclusions are drawn, and a road map for the design of fault-tolerant manipulator systems is laid out with recommendations for a 10 DOF arm with dual actuators at each joint.
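As one concrete instance of the failure detection and isolation methods surveyed for redundant sensors, a triple-redundant channel can be resolved by median voting with a disagreement threshold. This is a generic sketch of the technique, not the paper's specific design; the tolerance value is illustrative.

```python
def tmr_vote(readings, tolerance):
    """Triple-modular-redundancy sensor voting: return the median of the
    three channel readings as the trusted value, plus the indices of any
    channels that disagree with the median by more than `tolerance`
    (those channels are flagged as failed for isolation/recovery)."""
    assert len(readings) == 3, "TMR requires exactly three channels"
    median = sorted(readings)[1]
    failed = [i for i, r in enumerate(readings)
              if abs(r - median) > tolerance]
    return median, failed

# Joint-angle channels in radians; channel 2 has drifted badly.
value, failed = tmr_vote([1.02, 1.00, 3.70], tolerance=0.1)
print(value, failed)  # 1.02 [2]
```

The vote masks a single channel failure transparently, which is exactly the kind of protective redundancy layer the paper's mechanical and control architectures rely on.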

  5. FAULT & COORDINATION STUDY FOR T PLANT COMPLEX

    SciTech Connect

    MCDONALD, G.P.; BOYD-BODIAU, E.A.

    2004-09-01

    A short circuit study is performed to determine the maximum fault current that the system protective devices, transformers, and interconnections would be subjected to in the event of a three-phase, phase-to-phase, or phase-to-ground fault. Generally, the short circuit study provides the worst-case fault current levels at each bus or connection point of the system.
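
    The core of such a study is straightforward: the worst-case bolted three-phase fault current at a bus follows from the system voltage and the Thevenin-equivalent impedance seen from the fault point. The sketch below shows that calculation; the 13.8 kV / 0.5 ohm figures are hypothetical, not T Plant data.

```python
import math

def three_phase_fault_current(v_ll_kv, z_thevenin_ohms):
    """Bolted three-phase fault current in amperes:
    I_f = V_LL / (sqrt(3) * Z_th)."""
    return (v_ll_kv * 1e3) / (math.sqrt(3) * z_thevenin_ohms)

# Hypothetical 13.8 kV bus with a 0.5 ohm Thevenin impedance:
i_fault = three_phase_fault_current(13.8, 0.5)
print(f"{i_fault:.0f} A")  # → 15935 A
```

    Protective devices and transformers at that bus would then be checked against this interrupting duty.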

  6. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  7. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  8. 40 CFR 258.13 - Fault areas.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Fault areas. 258.13 Section 258.13... SOLID WASTE LANDFILLS Location Restrictions § 258.13 Fault areas. (a) New MSWLF units and lateral expansions shall not be located within 200 feet (60 meters) of a fault that has had displacement in...

  9. Ground Fault--A Health Hazard

    ERIC Educational Resources Information Center

    Jacobs, Clinton O.

    1977-01-01

    A ground fault is especially hazardous because the resistance through which the current is flowing to ground may be sufficient to cause electrocution. The Ground Fault Circuit Interrupter (G.F.C.I.) protects 15 and 25 ampere 120 volt circuits from ground fault conditions. The design and examples of G.F.C.I. functions are described in this article.…
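
    The G.F.C.I. principle lends itself to a short sketch: the device compares current on the hot and neutral conductors, and any imbalance means current is leaking to ground, so the circuit is interrupted. The 5 mA threshold below matches typical Class A devices but should be treated as illustrative.

```python
TRIP_THRESHOLD_A = 0.005  # 5 mA differential, typical of Class A devices

def gfci_should_trip(hot_current_a, neutral_current_a):
    """Trip when hot and neutral currents no longer balance,
    i.e., some current is returning through ground (or a person)."""
    leakage = abs(hot_current_a - neutral_current_a)
    return leakage > TRIP_THRESHOLD_A

print(gfci_should_trip(10.000, 10.000))  # → False (balanced, no leakage)
print(gfci_should_trip(10.020, 10.000))  # → True  (20 mA flowing to ground)
```

    Note that the device trips on milliampere-level imbalance long before a branch-circuit breaker, sized in tens of amperes, would react.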

  10. Youth solar ultraviolet radiation exposure, concurrent activities and sun-protective practices: a review.

    PubMed

    Wright, C Y; Reeder, A I

    2005-01-01

    To assist standardization of procedures, facilitate comparisons, and help guide research efforts to optimally inform development of appropriately targeted interventions, there is a need to review methods used to quantify child and adolescent solar ultraviolet radiation (UV) exposure, related outdoor activities and sun-protective practices. This holistic approach is essential for comprehensive research that will provide all-inclusive, informative and meaningful messages for preventive measures of harmful UV exposure. Two databases were searched and 29 studies were retrieved, and these studies report measurement or assessment techniques documenting UV exposure patterns and related outdoor activities. Polysulfone film badges were the main measurement instrument used in 10 studies, with questionnaire, survey data, observation, a model, electronic dosimeters, biological dosimeters, colorimeter and UV colouring labels used in the remaining studies. Methods used to record activities included self-report, parental report, a logbook and observation. Measurement duration and unit of UV exposure varied in most studies, but a method common to 15 studies was measured UV exposure as a percentage of ambient UV. The studies reviewed do not provide sufficient information for the development and evaluation of targeted youth sun protection programs. Studies are required which document precise UV exposure, concurrent activities and sun protection usage for children and adolescents. PMID:16354111

  11. Exploring the support mechanisms health visitors use in safeguarding and child protection practice.

    PubMed

    Rooke, Justine

    2015-10-01

    Health visitors lead the Healthy Child Programme (HCP), a universal public health service designed to give children the best start in life. Running through the HCP are responsibilities to safeguard and protect children. Supporting the role of the health visitor is essential to ensure quality interventions and improved outcomes for children. This article describes an empirical study. It explores the experiences and views of health visitors on the mechanisms of support they use for working in child protection and safeguarding. A qualitative approach was used to collect data from two focus groups. The data produced was transcribed and a thematic analysis used to produce the results. The results demonstrate that health visitors gain the majority of their support from their colleagues and from supervision processes. Also identified from the data analysis were three factors which health visitors felt supported their role. These were support for managing the emotions associated with child protection work, feeling safe and effective in practice and having time to reflect and evaluate casework. PMID:26596133

  12. Fault-tolerant software for the FTMP

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Hecht, M.

    1984-01-01

    The work reported here provides protection against software failures in the task dispatcher of the FTMP, a particularly critical portion of the system software. Faults in other system modules and application programs can be handled by similar techniques but are not covered in this effort. Goals of the work reported here are: (1) to develop provisions in the software design that will detect and mitigate software failures in the dispatcher portion of the FTMP Executive and, (2) to propose the implementation of specific software reliability measures in other parts of the system. Beyond the specific support to the FTMP project, the work reported here represents a considerable advance in the practical application of the recovery block methodology for fault tolerant software design.
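
    The recovery block methodology the abstract refers to has a simple shape: run a primary routine, check its result with an acceptance test, and fall back to an alternate routine if the test fails. The sketch below uses toy stand-in routines, not the FTMP dispatcher code.

```python
def recovery_block(primary, alternates, acceptance_test, *args):
    """Classic recovery-block scheme: try each routine in order until
    one produces a result that passes the acceptance test."""
    for routine in (primary, *alternates):
        result = routine(*args)
        if acceptance_test(result):
            return result
    raise RuntimeError("all routines failed the acceptance test")

# Toy example: a buggy fast routine with a reliable fallback.
buggy_sqrt = lambda x: -1.0      # simulated software failure
safe_sqrt = lambda x: x ** 0.5   # alternate implementation
acceptable = lambda r: r >= 0    # acceptance test

print(recovery_block(buggy_sqrt, [safe_sqrt], acceptable, 9.0))  # → 3.0
```

    The strength of the scheme rests on the acceptance test: it must catch the failures of the primary without being as complex as the routine it checks.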

  13. On-line fault diagnosis of power substation using connectionist expert system

    SciTech Connect

    Yang, H.T.; Chang, W.Y.; Huang, C.L.

    1995-02-01

    This paper proposes a new connectionist (or neural network) expert system for on-line fault diagnosis of a power substation. The Connectionist Expert Diagnosis System has a profile similar to that of a conventional expert system, but can be constructed much more easily from elemental samples. These samples associate the faults with their protective relays and breakers as well as the bus voltages and feeder currents. Through an elaborately designed structure, alarm signals are processed by different connectionist models. The output of the connectionist models is then integrated to provide the final conclusion with a confidence level. The proposed approach has been practically verified by testing on a typical Taiwan Power (Taipower) secondary substation. The test results show that rapid and correct diagnoses are obtained even for fault conditions involving multiple faults or the failed operation of a protective relay or circuit breaker. Moreover, the system can be transplanted into various substations with little additional implementation effort.
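
    As a hedged sketch of the connectionist idea (not the paper's actual network), the code below maps an alarm pattern from protective relays and breakers to fault hypotheses with a confidence level, using logistic scoring. The weights are hand-picked for illustration; a real system would learn them from elemental fault samples.

```python
import math

# Alarm pattern order: [relay_A, relay_B, breaker_1, breaker_2]
WEIGHTS = {
    "bus_fault":    [2.0, 2.0, 1.5, 1.5],
    "feeder_fault": [2.5, -1.0, 2.0, -1.0],
}
BIAS = -3.0

def diagnose(alarm_pattern):
    """Return (best fault hypothesis, confidence in [0, 1])."""
    scores = {}
    for fault, weights in WEIGHTS.items():
        z = sum(w * a for w, a in zip(weights, alarm_pattern)) + BIAS
        scores[fault] = 1.0 / (1.0 + math.exp(-z))  # logistic unit
    best = max(scores, key=scores.get)
    return best, scores[best]

fault, confidence = diagnose([1, 1, 1, 1])  # all four devices operated
print(fault, round(confidence, 3))          # → bus_fault 0.982
```

    The same scoring degrades gracefully: a partial alarm pattern still yields the most plausible hypothesis, with a lower confidence.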

  14. Solar UVR exposure, concurrent activities and sun-protective practices among primary schoolchildren.

    PubMed

    Wright, Caradee Y; Reeder, Anthony I; Bodeker, Gregory E; Gray, Andrew; Cox, Brian

    2007-01-01

    Comprehensive measures of ultraviolet radiation (UVR) exposure, concurrent activities and sun-protective practices are needed to develop and evaluate skin cancer prevention and sun protection interventions. The UVR exposures of 345 primary schoolchildren at 23 schools around New Zealand were measured using electronic UVR monitors for 1-week periods over 12 weeks in 2004 and 2005. In addition, ambient UVR levels on a horizontal surface were measured on-site at each school. Children completed activity diaries during the period UVR measurements were made and provided information on their indoor and outdoor status and clothing and sun protection worn. Mean total daily UVR exposure (7:00-20:00 h NZST + 1) at the body location where the UVR monitors were worn was 0.9 SED (standard erythemal dose, 1 SED = 100 J m(-2)). This was 4.9% of the ambient UVR on a horizontal surface. Mean time spent outdoors was 2.3 h day(-1). Differences in children's UVR exposure could be explained in part by activity, where outdoor passive pursuits were associated with higher UVR exposure rates than outdoor active and outdoor travel pursuits. Compared with older children, the activities of younger children, although labeled the same, resulted in different UVR exposures, either as a result of reporting differences or a real difference in UVR exposure patterns. UVR exposure rates were generally higher on weekdays compared with the weekend, confirming the important role of school sun protection and skin cancer prevention programs. High UVR exposure activities included physical education, athletics and lunch break. PMID:17576384

  15. Source Water Protection and Sustainability: a Practical Approach to Assessing Risk

    NASA Astrophysics Data System (ADS)

    Nelson, D. O.; Chinitz, A. E.

    2009-12-01

    The recognition of climate change and the realization of the finite nature of sources of drinking water have spurred communities to develop resource sustainable practices. Sustainability requires overall management of the resource, and protecting existing and future water supplies from contamination plays a key role. Continued population growth will place increasing demands on water supplies and climate change will likely reduce groundwater recharge as well as increase the desire to replace dwindling surface water sources with groundwater. Aware of the need to protect its resource, the City of Springfield has delineated the source water protection areas (SWPAs) surrounding its wells and wellfields and has an ordinance in place that allows the City to establish specific guidelines associated with chemical usage. The City has established a range of best management strategies that vary in degree as a function of the time-of-travel zone and the presumed risk that a specific chemical poses to groundwater. When the guidelines were initially established, the City used more of a blanket approach to imposing specific practices on facilities within the SWPAs. The various approaches used were not site-specific and were based on limited information, reflecting limited resources, that were applied throughout the SWPAs as a function of proximity to the well(s). This practice led to the City’s receiving many questions from developers, etc., regarding “Why do you consider this product a risk?” The City needed an objective, consistent risk assessment tool that would reflect not only the specific chemical but also the geologic characteristics of the site where the chemical would be used. We used existing well reports to develop GIS coverages of both weighted hydraulic conductivity and depth to the aquifer that were then overlain on to the SWPA coverages. This exercise provided an assessment of the ease of water movement to the aquifer. Using that information, a spreadsheet was

  16. Fault Management Guiding Principles

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of the mission type: deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of the supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the maturity of FM as an engineering discipline, which lags behind that of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, through the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  17. The Patient Protection and Affordable Care Act of 2010: impact on otolaryngology practice and research.

    PubMed

    Sun, Gordon H; Davis, Matthew M

    2012-05-01

    The Patient Protection and Affordable Care Act (PPACA) was signed into law by President Barack Obama on March 23, 2010. Since its passage, the PPACA has led to increased health insurance coverage for millions more Americans, and it includes provisions leading to new avenues for clinical and health services research funding. The legislation also favors development of the primary care specialties and general surgery, increased training of midlevel health care providers, and medical training and service in underserved areas of the United States. However, the PPACA does not effectively engage otolaryngologists in quality improvement, despite modifications to the Physician Quality Reporting System. The legislation also levies a tax on cosmetic procedures, affecting both clinicians and patients. This article reviews the sections of the PPACA that are most pertinent to otolaryngologists and explains how these components of the bill will affect otolaryngologic practice and research over the coming decade. PMID:22282865

  18. Development and implementation of a power system fault diagnosis expert system

    SciTech Connect

    Minakawa, T.; Ichikawa, Y.; Kunugi, M.; Wada, N.; Shimada, K.; Utsunomiya, M.

    1995-05-01

    This paper describes a fault diagnosis expert system installed at the Tohoku Electric Power Company. The main features of this system are careful selection of the inferencing input data, rapid inferencing, integration of the expert system with other systems in a practical structure, and the adoption of a domain shell. This system aims for improved practicability by using time-tagged data from circuit breakers, protective relays, and automatic reclosing relays in addition to the input data used in earlier systems. Furthermore, this system also uses data from fault detection systems that locate fault points within electric stations. This system uses an AI-specific back-end processor to perform inferencing rapidly. Additionally, this fault diagnosis expert system is interfaced and integrated with a restorative operations expert system, an intelligent alarm processing system, and a protective relay setting and management system. The authors developed and adopted a power system fault diagnosis domain shell to ease system development, and used the protective relay operation simulation function of a protective relay setting and management system for system verification.

  19. Fault diagnosis

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to
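
    The propagation step the abstract describes, identifying all components affected as a fault spreads through the system, can be illustrated as a traversal of a dependency graph. This is a sketch of the idea only (Draphys itself uses richer functional and physical models); the subsystem names are hypothetical.

```python
from collections import deque

# Hypothetical aircraft-subsystem dependencies: component -> downstream
dependencies = {
    "engine_1":   ["hyd_pump_1", "gen_1"],
    "hyd_pump_1": ["hyd_sys_A"],
    "gen_1":      ["bus_1"],
    "hyd_sys_A":  ["flight_controls"],
    "bus_1":      [],
    "flight_controls": [],
}

def affected_components(fault_source):
    """Breadth-first propagation from the fault source, collecting every
    downstream component the fault can reach."""
    affected, queue = set(), deque([fault_source])
    while queue:
        component = queue.popleft()
        for downstream in dependencies.get(component, []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

print(sorted(affected_components("engine_1")))
```

    Re-running the traversal as new abnormal sensor readings arrive mirrors how Draphys updates system status over time.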

  20. Reputation management on facebook: awareness is key to protecting yourself, your practice, and the veterinary profession.

    PubMed

    Weijs, Cynthia A; Coe, Jason B; Muise, Amy; Christofides, Emily; Desmarais, Serge

    2014-01-01

    Social media use by health professionals occurs in a digital environment where etiquette has yet to be solidly defined. The objectives of this study were to explore veterinarians' personal use of Facebook, knowledge of privacy settings, and factors related to sharing personal information online. All American Animal Hospital Association member veterinarians with a valid e-mail address (9469) were invited to complete an online survey about Facebook (e.g., time spent on Facebook, awareness of consequences, types of information posted). Questions assessing personality dimensions including trust, popularity, self-esteem and professional identity were included. The response rate was 17% (1594 of 9469); 72% of respondents (1148 of 1594) had a personal Facebook profile. Veterinarians were more likely to share information on Facebook than they would in general. Trust, need for popularity, and more time spent on Facebook predicted more disclosure of personal information on Facebook. Awareness of consequences and increased veterinary experience predicted lesser disclosure. As veterinary practices use Facebook to improve client services, they also need to manage risks associated with online disclosure by staff. Raising awareness of reputation management and consequences of posting certain types of information to Facebook is integral to protecting the individual, the practice, and the veterinary profession. PMID:24855091

  1. Fault mechanics

    SciTech Connect

    Segall, P. )

    1991-01-01

    Recent observational, experimental, and theoretical modeling studies of fault mechanics are discussed in a critical review of U.S. research from the period 1987-1990. Topics examined include interseismic strain accumulation, coseismic deformation, postseismic deformation, and the earthquake cycle; long-term deformation; fault friction and the instability mechanism; pore pressure and normal stress effects; instability models; strain measurements prior to earthquakes; stochastic modeling of earthquakes; and deep-focus earthquakes. Maps, graphs, and a comprehensive bibliography are provided. 220 refs.

  2. Effect of organic and conventional crop rotation, fertilization, and crop protection practices on metal contents in wheat (Triticum aestivum).

    PubMed

    Cooper, Julia; Sanderson, Roy; Cakmak, Ismail; Ozturk, Levent; Shotton, Peter; Carmichael, Andrew; Haghighi, Reza Sadrabadi; Tetard-Jones, Catherine; Volakakis, Nikos; Eyre, Mick; Leifert, Carlo

    2011-05-11

    The effects of organic versus conventional crop management practices (crop rotation, crop protection, and fertility management strategies) on wheat yields and grain metal (Al, Cd, Cu, Ni, Pb, and Zn) concentrations were investigated in a long-term field trial. The interactions between crop management practices and the season that the crop was grown were investigated using univariate and redundancy analysis approaches. Grain yields were highest where conventional fertility management and crop protection practices were used, but growing wheat after a previous crop of grass/clover was shown to partially compensate for yield reductions due to the use of organic fertility management. All metals except for Pb were significantly affected by crop management practices and the year that the wheat was grown. Grain Cd and Cu levels were higher on average when conventional fertility management practices were used. Al and Cu were higher on average when conventional crop protection practices were used. The results demonstrate that there is potential to manage metal concentrations in the diet by adopting specific crop management practices shown to affect crop uptake of metals. PMID:21495704

  3. On Second-Order Fault Analysis Resistance for CRT-RSA Implementations

    NASA Astrophysics Data System (ADS)

    Dottax, Emmanuelle; Giraud, Christophe; Rivain, Matthieu; Sierra, Yannick

    Since their publication in 1996, Fault Attacks have been widely studied from both theoretical and practical points of view, and most cryptographic systems have been shown to be vulnerable to this kind of attack. Until recently, most of the theoretical fault attacks and countermeasures used a fault model which assumes that the attacker is able to disturb the execution of a cryptographic algorithm only once. However, this approach has seemed too restrictive since the publication in 2007 of a successful experimental attack based on the injection of two faults, namely a second-order fault attack. Amongst the few papers dealing with second-order fault analysis, three countermeasures were published at WISTP’07 and FDTC’07 to protect the RSA cryptosystem using the CRT mode. In this paper, we analyse the security of these countermeasures with respect to the second-order fault model considered by their authors. We show that these countermeasures are not intrinsically resistant, and we propose a new method allowing us to implement a CRT-RSA that resists this kind of second-order fault attack.
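
    For context, the sketch below shows CRT-RSA signing with the classic first-order verification countermeasure: check the signature with the public exponent before releasing it, so a single injected fault produces an error instead of a faulty signature exploitable via the Bellcore attack. It is exactly this style of check that second-order attacks try to defeat by injecting a second fault into the verification itself. The key values are tiny and purely illustrative.

```python
# Toy RSA parameters (illustrative only; real keys are 2048+ bits).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent
dp, dq = d % (p - 1), d % (q - 1)   # CRT exponents
q_inv = pow(q, -1, p)               # q^-1 mod p for Garner recombination

def sign_crt(m):
    sp = pow(m, dp, p)   # half-size exponentiation mod p
    sq = pow(m, dq, q)   # half-size exponentiation mod q
    s = (sq + q * ((q_inv * (sp - sq)) % p)) % n   # Garner recombination
    if pow(s, e, n) != m % n:        # verification countermeasure
        raise RuntimeError("fault detected: signature withheld")
    return s

m = 65
s = sign_crt(m)
print("signature verified:", pow(s, e, n) == m)  # → signature verified: True
```

    A faulty half-exponentiation (say, a corrupted sp) would fail the final check, which is why the attacker in the second-order model must fault both the computation and the check.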

  4. Trends in reliability modeling technology for fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1979-01-01

    Reliability modeling for fault tolerant avionic computing systems was developed. The modeling of large systems involving issues of state size and complexity, fault coverage, and practical computation was discussed. A novel technique which provides the tool for studying the reliability of systems with nonconstant failure rates is presented. Fault latency, which may provide a method of obtaining vital latent fault data, is measured.
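
    A common way to model the nonconstant failure rates mentioned above is the Weibull distribution, whose shape parameter makes the hazard rate rise or fall over time. This is a generic illustration of that modeling idea, not the paper's specific technique; the parameters are hypothetical.

```python
import math

def weibull_reliability(t, beta, eta):
    """Reliability R(t) = exp(-(t/eta)^beta).
    beta = 1 gives a constant failure rate (exponential model);
    beta > 1 models wear-out; beta < 1 models infant mortality."""
    return math.exp(-((t / eta) ** beta))

# Hypothetical wear-out component: beta = 2, characteristic life 1000 h.
r = weibull_reliability(t=500.0, beta=2.0, eta=1000.0)
print(f"R(500 h) = {r:.3f}")  # → R(500 h) = 0.779
```

    Markov-style reliability models for fault-tolerant systems replace the constant transition rates with such time-dependent hazards.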

  5. Baseline survey of sun-protection knowledge, practices and policy in early childhood settings in Queensland, Australia.

    PubMed

    Harrison, Simone L; Saunders, V; Nowak, M

    2007-04-01

    Excessive exposure to sunlight during early childhood increases the risk of developing skin cancer. Self-administered questionnaires exploring sun-protection knowledge, practices and policy were mailed to the directors/coordinators/senior teachers of all known early childhood services in Queensland, Australia, in 2002 (n = 1383; 56.5% response). Most (73.7%) services had a written sun-protection policy (SPP). However, 40.6% of pre-schools and kindergartens had not developed a written SPP. Most directors had moderate knowledge about sun-protection (median score: 7/12 [IQR 6, 8]), but few understood the UV index, the sun-protection factor rating for sunscreens or the association between childhood sun-exposure, mole development and melanoma. Pre-school teachers had lower knowledge scores than directors of long day care centers and other services (P = 0.0005). Staff members reportedly wore sun-protective hats, clothing and sunglasses more often than children. However, sunscreen use was higher among children than staff. Directors' knowledge scores predicted reported hat, clothing, sunscreen and shade utilization among children. Remoteness negatively impacted directors' knowledge (P = 0.043) and written SPP development (P = 0.0005). Higher composite sun-protection scores were reported for children and staff from services with written sun-protection policies. SPP development and increased sun-protection knowledge of directors may improve reported sun-protective behaviors of children and staff of early childhood services. PMID:16880216

  6. Knowledge and Practice of Sun Protection in Schools in South Africa Where No National Sun Protection Programme Exists

    ERIC Educational Resources Information Center

    Wright, Caradee Y.; Reeder, Anthony I.; Albers, Patricia N.

    2016-01-01

    Interventions in primary schools that increase sun-protective behaviours and decrease ultraviolet radiation exposure, sunburn incidence and skin cancer risk can be effective. SunSmart School Accreditation Programmes (SSAP) are recommended. Prior to SSAP implementation in South Africa, we explored the feasibility of obtaining national baseline…

  7. Lower bound of optimization in radiological protection system taking account of practical implementation of clearance

    SciTech Connect

    Hattori, Takatoshi

    2007-07-01

    The dose criterion used to derive clearance and exemption levels is of the order of 0.01 mSv/y, based on the Basic Safety Standard (BSS) of the International Atomic Energy Agency (IAEA), the use of which has been agreed upon by many countries. Because global resources for risk reduction are limited, it is important to carefully consider the practical implementation of radiological protection systems, particularly in low-radiation-dose regions. For example, in direct gamma ray monitoring to achieve clearance level compliance, difficult issues must be resolved concerning how the uncertainty (error) of gamma measurement should be handled and how the uncertainty (scattering) of the estimation of non-gamma emitters should be treated in clearance. To resolve these issues, a new probabilistic approach has been proposed to establish an appropriate safety factor for compliance with the clearance level in Japan. This approach is based on the fundamental concept that the 97.5th percentile of the probability distribution for the uncertainties of both the measurement and the estimation of non-gamma emitters should comply with 0.1 mSv/y. The International Commission on Radiological Protection (ICRP) published a new concept of the representative person in Publication 101, Part I. The representative person is a hypothetical person exposed to a dose that is representative of those of highly exposed persons in a population. In a probabilistic dose assessment, the ICRP recommends that the representative person should be defined such that the probability of exposure occurrence is lower than about 5% that of a person randomly selected from the population receiving a high dose. From the new concept of the ICRP, it is reasonable to consider that the 95th percentile of the dose distribution for the representative person is theoretically always lower than the dose constraint. Using this established relationship, it can be concluded that the minimum dose
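
    The percentile-based compliance idea can be illustrated with a small Monte Carlo sketch: sample the combined uncertainty of the gamma measurement and the non-gamma-emitter estimate, then check that the 97.5th percentile of the resulting dose stays below the criterion. The distributions and parameters below are invented for illustration only.

```python
import random

random.seed(1)
CRITERION_MSV_Y = 0.1

def simulated_dose():
    """One draw of total dose: gamma measurement with Gaussian error,
    plus a scattered (lognormal) estimate for non-gamma emitters."""
    measured = random.gauss(0.04, 0.008)          # mSv/y, hypothetical
    nongamma = random.lognormvariate(-4.5, 0.5)   # mSv/y, hypothetical
    return measured + nongamma

samples = sorted(simulated_dose() for _ in range(10_000))
p975 = samples[int(0.975 * len(samples))]
print(f"97.5th percentile: {p975:.4f} mSv/y, "
      f"compliant: {p975 < CRITERION_MSV_Y}")
```

    A safety factor would then be sized so that this upper percentile, not the mean, satisfies the 0.1 mSv/y condition.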

  8. Differential Fault Analysis of Rabbit

    NASA Astrophysics Data System (ADS)

    Kircanski, Aleksandar; Youssef, Amr M.

    Rabbit is a high-speed scalable stream cipher with a 128-bit key and a 64-bit initialization vector. It has passed all three stages of the ECRYPT stream cipher project and is a member of the eSTREAM software portfolio. In this paper, we present a practical fault analysis attack on Rabbit. The fault model in which we analyze the cipher is one in which the attacker is assumed to be able to fault a random bit of the internal state of the cipher but cannot control the exact location of injected faults. Our attack requires around 128-256 faults and a precomputed table of size 2^41.6 bytes, and recovers the complete internal state of Rabbit in about 2^38 steps.

  9. Primary School Sun Protection Policies and Practices 4 Years after Baseline--A Follow-Up Study

    ERIC Educational Resources Information Center

    Reeder, Anthony I.; Jopson, Janet A.; Gray, Andrew

    2012-01-01

    Before the 2005 launch of the New Zealand SunSmart Schools Accreditation Programme (SSAP), 242 randomly sampled primary schools completed a mail survey about sun protection policies, practices, curriculum and environment. A 2009 follow-up included 189 (78%) and their mean Total Accreditation Score (TAS = total SSAP requirements met, range 0-12),…

  10. Results from an Exploratory Study of Sun Protection Practice: Implications for the Design of Health Promotion Messages

    ERIC Educational Resources Information Center

    Eadie, Douglas; MacAskill, Susan

    2007-01-01

    Purpose: The primary aim of the research reported here is to provide strategic guidance for the development of a national communication strategy to improve sun protection practice amongst young people. Design/methodology/approach: The research adopted an exploratory approach, employing qualitative focus groups to represent three population groups,…

  11. Baseline Survey of Sun-Protection Knowledge, Practices and Policy in Early Childhood Settings in Queensland, Australia

    ERIC Educational Resources Information Center

    Harrison, Simone L.; Saunders, V.; Nowak, M.

    2007-01-01

    Excessive exposure to sunlight during early childhood increases the risk of developing skin cancer. Self-administered questionnaires exploring sun-protection knowledge, practices and policy were mailed to the directors/co-ordinators/senior teachers of all known early childhood services in Queensland, Australia, in 2002 (n = 1383; 56.5% response).…

  12. Architecture of small-scale fault zones in the context of the Leinetalgraben Fault System

    NASA Astrophysics Data System (ADS)

    Reyer, Dorothea; Philipp, Sonja L.

    2010-05-01

    local stress field so that it stops many joints. Well developed fracture networks are therefore in most cases limited to single layers. From the data we finally determined the structural indices of the fault zones, that is, the ratios of damage zone and fault zone widths. By their nature structural indices can obtain values from 0 to 1; the values having implications for fault zone permeability. An ideal value of 0 would mean that a fault damage zone is absent. Such fault zones generally have low permeabilities as long as the faults are not active (slipping). A structural index of 1, however, would imply that there is practically no fault core and the fault zone permeability is entirely controlled by the fractures within the damage zone. Our measurements show that the damage zones of normal faults in the Muschelkalk limestone are relatively thick so that their structural indices are relatively high. In contrast to normal faults, reverse and strike-slip faults have smaller indices because of well developed brecciated fault cores. In addition we found that small-scale fault zones with parallel orientations to the major Leinetalgraben fault zones are more likely to have well developed damage zones than those with conjugate or perpendicular orientation. Our field data lead to the hypothesis that fault systems in the North German Basin may generally be surrounded by small-scale fault zones which have high permeabilities if orientated parallel to the major fault and lower permeabilities if conjugate or perpendicularly orientated. However, further studies of fault systems in different geological settings are needed to support or reject this hypothesis. Such studies help to improve the general understanding of fault zones and fault systems and thereby minimise the risk in matters of the exploitation of fault-related geothermal reservoirs.
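
    The structural index used in this study is simply the ratio of damage-zone width to total fault-zone width (damage zone plus fault core): 0 means no damage zone (low permeability when inactive), 1 means practically no fault core (permeability controlled by damage-zone fractures). The field widths below are hypothetical.

```python
def structural_index(damage_zone_m, fault_core_m):
    """Ratio of damage-zone width to total fault-zone width, in [0, 1].
    0 -> no damage zone; 1 -> practically no fault core."""
    return damage_zone_m / (damage_zone_m + fault_core_m)

# Hypothetical normal fault: 4 m damage zone around a 1 m brecciated core.
print(structural_index(damage_zone_m=4.0, fault_core_m=1.0))  # → 0.8
```

    Under this measure, the Muschelkalk normal faults described above plot toward 1, while reverse and strike-slip faults with well-developed brecciated cores plot lower.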

  13. Effective Partnership is an underpinning of ANSI/ASSE Z88.2–2015 Practices for Respiratory Protection

    PubMed Central

    Metzler, Richard W.; Spelce, David; Johnson, James S.; Rehak, Timothy R

    2016-01-01

    In the U.S., respiratory protection is broadly supported by a system of coordinated efforts among governmental organizations, professional associations, researchers, industrial hygienists, manufacturers, and others who produce knowledge, best practice guidance, standards, regulations, technologies, and products to assure workers can be effectively protected. Ultimately, the work of these partners is applied by employers in establishing and implementing an effective ANSI/ASSE Z88.2–2015 conforming respirator program. This article describes key partners and their activities and/or responsibilities to assure an effective respirator program.

  14. The Impact of Solution-Focused Training on Professionals' Beliefs, Practices and Burnout of Child Protection Workers in Tenerife Island

    ERIC Educational Resources Information Center

    Medina, Antonio; Beyebach, Mark

    2014-01-01

    This paper presents the first results of a large-scale research project on the child protection services in Tenerife, Spain. In Study 1, the professional beliefs and practices of 152 child protection workers, as measured by a Professional Beliefs and Practices Questionnaire, were correlated with their scores on the Maslach Burnout Inventory.…

  15. Position and enforcement practice of the People's Republic of China's pharmaceutical data exclusivity protection.

    PubMed

    Li, Na; Yu, Xiang; Pecht, Michael

    2016-01-01

The concept of pharmaceutical data exclusivity protection comes from the West. The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) establishes the basic rules for pharmaceutical data exclusivity protection, and the People's Republic of China's domestic law is consistent with the TRIPS agreement. In the drug registration approval process of the People's Republic of China's Drug Supervision Department, pharmaceutical data exclusivity protection has encountered some problems, including data authentication, exclusive rights to data, the number of drugs requiring data to be submitted, and drug costs. In view of the long-term interests of the People's Republic of China's pharmaceutical industry and of intellectual property protection trends, many difficulties in the enforcement of pharmaceutical data exclusivity protection law need to be overcome. Some measures can be taken, such as establishing a shorter data exclusivity protection period, protecting only the data submitted and relied on in the People's Republic of China, protecting only drugs that use new chemical components, and allowing generic drug applications and the necessary research before the pharmaceutical data exclusivity protection period expires. PMID:27382254

  16. Neural networks for fault location in substations

    SciTech Connect

    Alves da Silva, A.P.; Silveira, P.M. da; Lambert-Torres, G.; Insfran, A.H.F.

    1996-01-01

Faults producing load disconnections or emergency situations have to be located as soon as possible so that electric network reconfiguration can begin and normal energy supply be restored. This paper proposes the use of artificial neural networks (ANNs), of the associative memory type, to solve the fault location problem. The main idea is to store measurement sets representing the normal behavior of the protection system, considering the basic substation topology only, in associative memories. Afterwards, these memories are employed on-line for fault location using the protection system equipment status. The associative memories work correctly even in case of malfunction of the protection system and different pre-fault configurations. Although the ANNs are trained with single contingencies only, their generalization capability allows good performance for multiple contingencies. The resultant fault location system is in operation at the 500 kV gas-insulated substation of the Itaipu system.
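The abstract does not spell out the associative memory in code; a toy Hopfield-style recall with invented six-bit status patterns illustrates how a stored protection signature can be recovered even when one status bit is corrupted (e.g., a malfunctioning protection device):

```python
import numpy as np

# Toy Hopfield-style associative memory: store bipolar (+1/-1) patterns of
# relay/breaker statuses, then recall the nearest stored pattern even when
# some status bits are wrong. The two "fault signatures" are invented.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1],   # hypothetical "fault at section A" signature
    [ 1,  1, -1, -1,  1,  1],   # hypothetical "fault at section B" signature
])
n = patterns.shape[1]
W = patterns.T @ patterns / n   # Hebbian weight matrix
np.fill_diagonal(W, 0)          # no self-coupling

def recall(state, steps=10):
    s = state.copy()
    for _ in range(steps):                           # synchronous updates
        h = W @ s
        s_new = np.where(h == 0, s, np.sign(h)).astype(int)  # keep old bit on ties
        if np.array_equal(s_new, s):                 # converged to a stored pattern
            return s
        s = s_new
    return s

noisy = np.array([1, -1, 1, -1, -1, -1])             # section-A signature, one bit wrong
print(recall(noisy))                                 # -> [ 1 -1  1 -1  1 -1]
```

The recall converges to the stored section-A signature despite the corrupted bit, which is the property the paper exploits for robustness against protection-system malfunction.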

  17. Defining best practice in care and protection of children in crisis-affected settings: a Delphi study.

    PubMed

    Ager, Alastair; Stark, Lindsay; Akesson, Bree; Boothby, Neil

    2010-01-01

    Thirty specialists in humanitarian work supporting the care and protection of children in crisis settings completed a 3-phase Delphi consultation. Proposals of best practice were elicited, reviewed, and rated by participants. A high level of consensus support was reached for 55 statements. These statements emphasized utilization of existing resources, participation, and inclusivity. The influences of resilience theory, social ecology, and cultural sensitivity were clearly evident. The utilization of developmental theory could be strengthened in relation to more differentiated understanding of the operation of protective influences and conceptualization of such influences in terms of "adaptive systems." Wider research engagement by development scientists in diverse cultural settings and clear formulation of findings for practitioners and policy makers would further support evidence-based humanitarian practice. PMID:20636695

  18. "I Made Her Realise that I Could Be There for Her, that I Could Support Her": Child Protection Practices with Women in Domestic Violence Cases

    ERIC Educational Resources Information Center

    Lapierre, Simon; Cote, Isabelle

    2011-01-01

    This article presents findings from a study that investigated child protection policies and practices, and focuses on data gathered in a child protection agency located in Quebec, Canada. This research project draws upon a qualitative case-study methodology, involving a documentary analysis of both national and local child protection policies, as…

  19. Quantifying Anderson's fault types

    USGS Publications Warehouse

    Simpson, R.W.

    1997-01-01

Anderson [1905] explained three basic types of faulting (normal, strike-slip, and reverse) in terms of the shape of the causative stress tensor and its orientation relative to the Earth's surface. Quantitative parameters can be defined which contain information about both shape and orientation [Célérier, 1995], thereby offering a way to distinguish fault-type domains on plots of regional stress fields and to quantify, for example, the degree of normal-faulting tendencies within strike-slip domains. This paper offers a geometrically motivated generalization of Angelier's [1979, 1984, 1990] shape parameters Φ and Ψ to new quantities named AΦ and AΨ. In their simple forms, AΦ varies from 0 to 1 for normal, 1 to 2 for strike-slip, and 2 to 3 for reverse faulting, and AΨ ranges from 0° to 60°, 60° to 120°, and 120° to 180°, respectively. After scaling, AΦ and AΨ agree to within 2% (or 1°), a difference of little practical significance, although AΨ has smoother analytical properties. A formulation distinguishing horizontal axes as well as the vertical axis is also possible, yielding an AΦ ranging from -3 to +3 and AΨ from -180° to +180°. The geometrically motivated derivation in three-dimensional stress space presented here may aid intuition and offers a natural link with traditional ways of plotting yield and failure criteria. Examples are given, based on models of Bird [1996] and Bird and Kong [1994], of the use of Anderson fault parameters AΦ and AΨ for visualizing tectonic regimes defined by regional stress fields. Copyright 1997 by the American Geophysical Union.
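The commonly cited closed form for AΦ is AΦ = (n + 1/2) + (-1)^n (Φ - 1/2), where Φ = (σ2 - σ3)/(σ1 - σ3) is Angelier's shape ratio and n = 0, 1, 2 for normal, strike-slip, and reverse regimes. A sketch, treating this expression as an assumption to check against Simpson's paper:

```python
def a_phi(sigma1, sigma2, sigma3, regime):
    """Anderson fault parameter A_Phi.

    sigma1 >= sigma2 >= sigma3 are principal stress magnitudes;
    regime n = 0 (normal), 1 (strike-slip), 2 (reverse).
    The result falls in [0,1], [1,2], [2,3] respectively.
    """
    if not (sigma1 >= sigma2 >= sigma3):
        raise ValueError("require sigma1 >= sigma2 >= sigma3")
    if regime not in (0, 1, 2):
        raise ValueError("regime must be 0, 1, or 2")
    phi = (sigma2 - sigma3) / (sigma1 - sigma3)   # Angelier's shape ratio, 0..1
    return (regime + 0.5) + (-1) ** regime * (phi - 0.5)

# Hypothetical stresses with sigma2 midway between sigma1 and sigma3 (phi = 0.5)
print(a_phi(100, 75, 50, 1))   # -> 1.5 (middle of the strike-slip range)
```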

  20. Coordinated and Evidence-Based Policy and Practice for Protecting Children outside of Family Care

    ERIC Educational Resources Information Center

    Boothby, Neil; Balster, Robert L.; Goldman, Philip; Wessells, Michael G.; Zeanah, Charles H.; Huebner, Gillian; Garbarino, James

    2012-01-01

    The 2011 U.S. Government Evidence Summit on Protecting Children Outside of Family Care brought together leading researchers and technical experts to assess the available evidence to inform policies, strategies, and programs relevant to protecting children outside of family care in lower and middle income countries. While child vulnerabilities are…

  1. Knowledge, attitude, and practice of Indonesian farmers regarding the use of personal protective equipment against pesticide exposure.

    PubMed

    Yuantari, Maria G C; Van Gestel, Cornelis A M; Van Straalen, Nico M; Widianarko, Budi; Sunoko, Henna R; Shobib, Muhammad N

    2015-03-01

The use of synthetic pesticides in tropical countries has increased over the years, following the intensification of agriculture. However, awareness among farmers of the importance of protecting themselves from hazards associated with pesticide application is still lacking, especially in Indonesia. This paper reports the results of an inventory of knowledge and attitudes regarding pesticide use by melon farmers of a village in Central Java, Indonesia. The importance of using personal protective equipment such as hats, masks, goggles, boots, and gloves on agricultural land is known and well understood by the farmers. However, in practice, only 3.8% were wearing glasses and 1.9% were using boots. In fact, the masks used consisted only of a part of the wearer's shirt tied around the mouth. The farmers were not wearing long pants and shirts with long sleeves, and used the same clothes for more than 1 day without washing. Almost no farmers used personal protective equipment that was standard, in good condition, and complete. Based on the results of statistical analysis, no significant relationship was found between knowledge of and attitude toward the required practices on the one hand and the use of personal protective equipment in practice on the other. This shows that improved knowledge and attitudes are not enough to change the behavior of farmers to work in a healthy and safe way. The gap between knowledge and practice needs to be bridged by a more interactive and participatory training model. It is therefore of paramount importance to develop, in a participatory manner, a special toolkit for pesticide risk reduction, involving the farmers as the main actors through a series of focus group discussions and field simulations. PMID:25716528

  2. EYE LENS DOSIMETRY FOR FLUOROSCOPICALLY GUIDED CLINICAL PROCEDURES: PRACTICAL APPROACHES TO PROTECTION AND DOSE MONITORING.

    PubMed

    Martin, Colin J

    2016-06-01

Doses to the eye lenses of clinicians undertaking fluoroscopically guided procedures can exceed the annual dose limit of 20 mSv, so optimisation of radiation protection is essential. Ceiling-suspended shields and disposable radiation-absorbing pads can reduce eye dose by factors of 2-7. Lead glasses that shield against exposures from the side can lower doses by 2.5-4.5 times. Training in effective use of protective devices is an essential element in achieving good protection and acceptable eye doses. Effective methods of dose monitoring are required to identify protection issues. Dosemeters worn adjacent to the eye provide the better option for interventional clinicians, but an unprotected dosemeter worn at the neck will give an indication of eye dose that is adequate for most interventional staff. Potential requirements for protective devices and dose monitoring can be determined from risk assessments using generic values for dose linked to examination workload. PMID:26454269

  3. A travel clinic in your office: grow your practice and protect international travelers.

    PubMed

    Kirsch, Michael

    2009-01-01

    Medical practices today face economic challenges from declining reimbursements and rising overhead costs. Physicians need to develop new income sources to invigorate their practices and remain viable. Travel medicine-advising and immunizing international travelers-is a rapidly growing specialty in the United States that generates substantial cash reimbursements and professional satisfaction. Travel Clinics of America, a physician-operated company, specializes in helping physicians to incorporate travel medicine into their existing practices. PMID:20073166

  4. The new European Competence Centre for Moor and Climate - A European initiative for practical peat bog and climate protection

    NASA Astrophysics Data System (ADS)

    Smidt, Geerd; Tänzer, Detlef

    2013-04-01

The new European Competence Centre for Moor and Climate (EFMK) is an initiative by different local communities, environmental protection NGOs, agricultural services, and partners from the peat and other industries in Lower Saxony (Germany). The Centre aims to integrate practical peat bog conservation with a focus on greenhouse gas emissions after drainage and after waterlogging activities. Together with our partners we want to break new ground to protect the remaining bogs in the region. Sphagnum mosses will be produced in paludiculture on-site in cooperation with the local peat industry to provide economic and ecological alternatives to peat products used in the horticulture business. Land-use changes are needed in the region and will be stimulated in cooperation with agricultural services via compensation payments from environmental protection funds. On a global scale, the ideas of a carbon credit system have to be discussed to protect the peat bogs for climate protection purposes. Environmental education is an important pillar of the EFMK. The local society is invited to explore the unique ecosystem and to participate in peat bog protection activities. Future generations will be taught to understand that the health of our peat bogs is interrelated with the health of the local and global climate. Besides extracurricular classes for schools, the Centre will provide infrastructure for Master and PhD students, as well as for senior researchers, for applied research in the surrounding moor. International partners in the scientific and practical fields of peat bog ecology, renaturation, greenhouse gas emissions from peat bogs, and environmental policy are invited to participate in the European Competence Centre for Moor and Climate.

  5. A feasibility study of full-bridge type superconducting fault current controller on electric machine power stability

    NASA Astrophysics Data System (ADS)

    Jang, J. Y.; Hwang, Y. J.; Lee, J.; Ko, T. K.

    2016-02-01

Recently, because of the advent of the Smart Grid and the integration of distributed generation, electrical power grids are facing numerous challenges. The increase of fault current is one such serious challenge, and fault current limiters (FCLs) exist that can limit the fault current. Existing grid-protection FCLs, however, simply limit the fault current passively and can allow existing protection coordination schemes to fail. This phenomenon leads to catastrophic failure in a complex system and may cause unpredictable power grid operation. Unlike an FCL, a superconducting fault current controller (SFCC) employs a full-bridge thyristor rectifier, a high temperature superconducting (HTS) DC reactor, and an embedded control unit to maintain the fault current at a proper level by adjusting the phase angle of the thyristors. This paper presents experimental and numerical analyses for the design and fabrication of an SFCC system for protection and stability improvement in power grids. First, the fundamental characteristics of an SFCC system are introduced, along with the system circuit diagram and operational principles. Second, the developed small-scale SFCC system is introduced and verified: a 40 Vrms/30 Arms class prototype SFCC employing an HTS DC reactor was fabricated, and short-circuit tests simulating various fault conditions were carried out to verify its control of the fault current. Finally, the practical feasibility of applying the SFCC system to the power system is studied. The problems caused by three-phase faults in the power grid were surveyed, and transient stability analysis of the power system was conducted by simulation. From the experimental and simulation results, we can verify the feasibility of the SFCC in power systems.
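The controller's aim of holding the fault current at a setpoint (rather than merely limiting it) can be caricatured with a scalar phasor-magnitude model. All impedance and setpoint values below are invented, and thyristor phase-angle control is reduced to directly choosing an inserted impedance:

```python
# Toy model: a controller inserts series impedance to hold the fault current
# at a setpoint. Impedances are treated as scalar magnitudes for simplicity.
V = 40.0          # source voltage, Vrms (the paper's prototype is 40 Vrms class)
Z_grid = 0.5      # source + line impedance, ohms (invented)
Z_fault = 0.3     # fault impedance, ohms (invented)

def fault_current(z_inserted):
    """Fault current magnitude with an inserted series impedance."""
    return V / (Z_grid + Z_fault + z_inserted)

setpoint = 30.0   # desired controlled current, Arms (invented)
# Solve V / I = Z_total for the impedance the controller must insert
z_needed = V / setpoint - (Z_grid + Z_fault)

print(round(fault_current(0.0), 1))       # uncontrolled fault current -> 50.0 A
print(round(fault_current(z_needed), 1))  # controlled to the setpoint -> 30.0 A
```

A real SFCC reaches `z_needed` indirectly, by adjusting the thyristor firing angle around the HTS DC reactor; the point of the sketch is only that the controlled quantity is the current level itself.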

  6. Genetic Information, Non-Discrimination, and Privacy Protections in Genetic Counseling Practice

    PubMed Central

    Prince, Anya E.R.; Roche, Myra I.

    2014-01-01

The passage of the Genetic Information Nondiscrimination Act (GINA) was hailed as a pivotal achievement that was expected to calm the fears of both patients and research participants about the potential misuse of genetic information. However, six years later, patient and provider awareness of legal protections at both the federal and state levels remains discouragingly low, thereby limiting their potential effectiveness. The increasing demand for genetic testing will expand the number of individuals and families who could benefit from obtaining accurate information about the privacy and anti-discrimination protections that GINA and other laws extend. In this paper we describe legal protections that are applicable to individuals seeking genetic counseling, review the literature on patient and provider fears of genetic discrimination, examine their awareness and understanding of existing laws, and summarize how genetic counselors currently discuss genetic discrimination. We then present three genetic counseling cases to illustrate issues of genetic discrimination and provide relevant information on applicable legal protections. Genetic counselors have an unprecedented opportunity, as well as the professional responsibility, to disseminate accurate knowledge about existing legal protections to their patients. They can strengthen their effectiveness in this role by achieving a greater knowledge of current protections, including being able to identify specific steps that can help protect genetic information. PMID:25063358

  7. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.
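A toy Python analogue of the abstractions named above (error monitor, fault protection engine, mode manager) shows the kind of behavior the SysML state machines capture; the class names, threshold, and injected values are invented for illustration:

```python
# Invented, minimal analogue of a fault-protection loop: a monitor flags an
# out-of-range sensor reading, and the FP engine responds by driving the
# mode manager into a safe mode (the "fault injection" is the bad reading).
class ErrorMonitor:
    def __init__(self, limit):
        self.limit = limit
    def check(self, sensor_value):
        return sensor_value > self.limit      # True -> symptom raised

class ModeManager:
    def __init__(self):
        self.mode = "NOMINAL"
    def enter_safe(self):
        self.mode = "SAFE"

class FaultProtectionEngine:
    def __init__(self, monitor, modes):
        self.monitor, self.modes = monitor, modes
    def step(self, sensor_value):
        if self.monitor.check(sensor_value):  # fault-injection point
            self.modes.enter_safe()

fp = FaultProtectionEngine(ErrorMonitor(limit=100.0), ModeManager())
fp.step(42.0)
print(fp.modes.mode)   # -> NOMINAL
fp.step(250.0)         # inject an out-of-range reading
print(fp.modes.mode)   # -> SAFE
```

The paper's contribution is verifying this kind of logic formally (via jpf-Statechart model checking) rather than by running it, which matters once many monitors, tiers, and timing gates interact.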

  8. Certificates of Confidentiality: Protecting Human Subject Research Data in Law and Practice

    PubMed Central

    Patel, Mayank J.; Williams, Brett A.; Austin, Jeffrey L.; Dame, Lauren A.

    2014-01-01

    Researchers often require and collect sensitive information about individuals to answer important scientific questions that impact individual health and well-being and the public health. Researchers recognize they have a duty to maintain the confidentiality of the data they collect and typically make promises, which are documented in the consent form. The legal interests of others, however, can threaten researchers’ promises of confidentiality, if they seek access to the data through subpoena. Certificates of Confidentiality (Certificates), authorized by federal statute, are an important tool for protecting individually identifiable sensitive research data from compelled disclosure. However, questions persist in the research community about the strength of Certificate protections, and the evidence on which to judge the strength is scant. In this article, we address those questions through a careful examination of the legislation and regulations concerning Certificates and the reported and unreported cases we have identified through our legal research and interviews with legal counsel about their experiences with Certificates. We also analyze other statutes that protect research data to compare them to the Certificate’s protections, and we review other legal strategies available for protecting research data. Based on our analysis, we conclude with recommendations for how to strengthen protection of sensitive research data. PMID:25392705

  9. Fault diagnosis of power systems

    SciTech Connect

    Sekine, Y. ); Akimoto, Y. ); Kunugi, M. )

    1992-05-01

Fault diagnosis of power systems plays a crucial role in power system monitoring and control, which ensures a stable supply of electrical power to consumers. In the case of multiple faults or incorrect operation of protective devices, fault diagnosis requires judgment of complex conditions at various levels. For this reason, research into the application of knowledge-based systems got an early start, and such systems have been reported in many papers. In this paper, these systems are classified by the method of inference utilized in the knowledge-based systems for fault diagnosis of power systems. The characteristics of each class and corresponding issues, as well as the state-of-the-art techniques for improving their performance, are presented. Additional topics covered are user interfaces, interfaces with energy management systems (EMSs), and expert system development tools for fault diagnosis. Results and evaluation of actual operation in the field are also discussed. Knowledge-based fault diagnosis of power systems is expected to continue to spread.

  10. Fault-tolerant adaptive FIR filters using variable detection threshold

    NASA Astrophysics Data System (ADS)

    Lin, L. K.; Redinbo, G. R.

    1994-10-01

Adaptive filters are widely used in many digital signal processing applications, where the tap weights of the filters are adjusted by stochastic gradient search methods. Block adaptive filtering techniques, such as the block least mean square and block conjugate gradient algorithms, were developed to speed up convergence and improve tracking capability, two important factors in designing real-time adaptive filter systems. Even though algorithm-based fault tolerance can be used as a low-cost, high-level fault-tolerant technique to protect the aforementioned systems from hardware failures with minimal hardware overhead, choosing a good detection threshold remains a challenging problem. First, such systems usually have only limited computational resources, i.e., concurrent error detection and correction is not feasible. Second, prior knowledge of the input data is very difficult to obtain in practical settings. We propose a checksum-based fault detection scheme using two-level variable detection thresholds that depend dynamically on past syndromes. Simulations show that the proposed scheme reduces the possibility of false alarms and has a high degree of fault coverage in adaptive filter systems.

  11. Examples of Holistic Good Practices in Promoting and Protecting Mental Health in the Workplace: Current and Future Challenges

    PubMed Central

    Sivris, Kelly C.; Leka, Stavroula

    2015-01-01

Background While attention has been paid to physical risks in the work environment and the promotion of individual employee health, mental health protection and promotion have received much less focus. Psychosocial risk management has not yet been fully incorporated in such efforts. This paper presents good practices in promoting mental health in the workplace in line with World Health Organization (WHO) guidance by identifying barriers, opportunities, and the way forward in this area. Methods Semistructured interviews were conducted with 17 experts who were selected on the basis of their knowledge and expertise in relation to the identified good practice tools. Interviewees were asked to evaluate the approaches on the basis of the WHO model for healthy workplaces. Results The examples of good practice for Workplace Mental Health Promotion (WMHP) are in line with the principles and the five keys of the WHO model. They support the third objective of the WHO comprehensive mental health action plan 2013–2020 for multisectoral implementation of WMHP strategies. Examples of good practice include the engagement of all stakeholders and representatives, science-driven practice, dissemination of good practice, continual improvement, and evaluation. Actions to inform policies/legislation, promote education on psychosocial risks, and provide better evidence were suggested for greater WMHP success. Conclusion The study identified commonalities in good practice approaches in different countries and stressed the importance of a strong policy and enforcement framework, as well as organizational responsibility, for WMHP. For progress to be achieved in this area, a holistic and multidisciplinary approach was unanimously suggested as the way to successful implementation. PMID:26929841

  12. Association Analysis of System Failure in Wide Area Backup Protection System

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Sun, Yi

    2015-12-01

The wide-area backup protection algorithm based on fault component identification is the heart of the whole wide-area backup protection system; its validity and reliability must be considered first in practical engineering applications of such systems. Wide-area backup protection algorithms mainly use two kinds of wide-area information to realize their protection criteria: one is electrical quantity information, such as voltage and current; the other is protection action and circuit breaker information. Algorithms based on electrical quantity information mainly utilize the significant change of electrical quantities to search for the fault component; the primary means include the wide-area multi-measuring-point current differential method, the comparison method of calculation and measurement, and the multiple statistics method. In this paper, a novel and effective association analysis of system failure in wide-area backup protection systems is discussed carefully, and the analytical results are shown to be successful and reliable.
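The current differential method mentioned above rests on Kirchhoff's current law: for a healthy component the phasor sum of currents entering it is near zero, while an internal fault leaves a large residual. A sketch with invented component names and phasor readings:

```python
import cmath

def differential_current(terminal_currents):
    """Magnitude of the phasor sum of currents entering a protected component."""
    return abs(sum(terminal_currents))

readings = {
    # healthy two-terminal line: current in ~= current out (opposite phasors)
    "line_AB": [cmath.rect(200, 0.1), cmath.rect(200, 0.1 + cmath.pi)],
    # internal fault: in-feed from both ends no longer cancels
    "line_BC": [cmath.rect(900, 0.2), cmath.rect(700, 0.35)],
}

threshold = 50.0   # amperes, illustrative pickup setting
for name, currents in readings.items():
    if differential_current(currents) > threshold:
        print(f"fault component candidate: {name}")
# -> fault component candidate: line_BC
```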

  13. Photovoltaic ground fault and blind spot electrical simulations.

    SciTech Connect

    Flicker, Jack David; Johnson, Jay

    2013-06-01

Ground faults in photovoltaic (PV) systems pose a fire and shock hazard. To mitigate these risks, AC-isolated, DC-grounded PV systems in the United States use Ground Fault Protection Devices (GFPDs), e.g., fuses, to de-energize the PV system when there is a ground fault. Recently the effectiveness of these protection devices has come under question because multiple fires have started when ground faults went undetected. In order to understand the limitations of fuse-based ground fault protection in PV systems, analytical and numerical simulations of different ground faults were performed. The numerical simulations were conducted with the Simulation Program with Integrated Circuit Emphasis (SPICE) using a circuit model of the PV system which included the modules, wiring, switchgear, grounded or ungrounded components, and the inverter. The derivation of the SPICE model and the results of parametric fault current studies are provided with varying array topologies, fuse sizes, and fault impedances. Closed-form analytical approximations for GFPD currents from faults to the grounded current-carrying conductor, known as "blind spot" ground faults, are derived to provide greater understanding of the influence of array impedances on fault currents. The behavior of the array during various ground faults is studied for a range of ground fault fuse sizes to determine whether reducing the size of the fuse improves ground fault detection sensitivity. The results of the simulations show that reducing the amperage rating of the protective fuse does, to a degree, increase fault current detection sensitivity without increasing the likelihood of nuisance trips. This benefit reaches a limit, however, as fuses become smaller and their internal resistance increases to the point of becoming a major element in the fault current circuit.
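Why smaller fuses help only to a degree can be seen in a crude current-divider model, where the fuse's growing internal resistance diverts fault current away from its own branch; all numbers below are invented, not taken from the SPICE study:

```python
# Crude current divider: a blind-spot fault current splits between the GFPD
# fuse branch and a parallel return path, so the fuse's internal resistance
# reduces the share of the fault current it actually sees.
def fuse_current(i_fault, r_fuse, r_loop):
    """Share of the ground fault current flowing through the GFPD fuse."""
    return i_fault * r_loop / (r_loop + r_fuse)

i_fault = 0.5          # total blind-spot fault current, A (invented)
r_loop = 1.0           # resistance of the parallel fault return path, ohms (invented)

# Smaller fuse ratings come with larger internal resistance (invented pairs):
for rating, r_fuse in [(5.0, 0.02), (1.0, 0.1), (0.1, 5.0)]:
    i_fuse = fuse_current(i_fault, r_fuse, r_loop)
    print(f"{rating} A fuse sees {i_fuse:.2f} A -> {i_fuse / rating:.0%} of rating")
```

In this toy model, dropping the rating from 1 A to 0.1 A raises the fuse current only from ~45% to ~83% of rating rather than tenfold, echoing the paper's conclusion that the benefit saturates as fuse resistance dominates the fault circuit.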

  14. 97 Savvy Secrets for Protecting Self and School: A Practical Guide for Today's Teachers and Administrators.

    ERIC Educational Resources Information Center

    Sesno, Alice Healy

    A teacher's professional integrity faces numerous challenges in the classroom. To help educators safeguard against potentially career-ending incidents, numerous "survival rules" are provided in this text. It argues that teachers must safeguard themselves with self-protecting knowledge and, in some instances, must reprogram themselves with new…

  15. Best management practices: Managing cropping systems for soil protection and bioenergy production

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Interest in renewable alternatives to fossil fuels has increased. Crop residue such as corn stover or wheat straw can be used for bioenergy including a substitution for natural gas or coal. Harvesting crop residue needs to be managed to protect the soil and future soil productivity. The amount of bi...

  16. Developing Evidence-Based Child Protection Practice: A View from Germany

    ERIC Educational Resources Information Center

    Kindler, Heinz

    2008-01-01

    The child welfare system in Germany has been described as family service-oriented because families in need are entitled to request family support services. If there is any form of child maltreatment, there may be some kind of mandatory state intervention to protect the child. Using trends in the number of children affected by maltreatment, the…

  17. A new intelligent hierarchical fault diagnosis system

    SciTech Connect

    Huang, Y.C.; Huang, C.L.; Yang, H.T.

    1997-02-01

As part of a substation-level decision support system, a new intelligent Hierarchical Fault Diagnosis System for on-line fault diagnosis is presented in this paper. The proposed diagnosis system divides the fault diagnosis process into two phases. Using time-stamped information from relays and breakers, phase 1 identifies the possible fault sections through Group Method of Data Handling (GMDH) networks, and phase 2 recognizes the types and detailed situations of the faults identified in phase 1 by using a fast bit-operation logical inference mechanism. The diagnosis system has been practically verified by testing on a typical Taiwan power secondary transmission system. Test results show that rapid and accurate diagnosis can be obtained, with flexibility and portability for the fault diagnosis of diverse substations.
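Phase 2's bit-operation logical inference can be sketched with bitmasks; the bit layout and the rule below are invented for illustration only:

```python
# Encode relay and breaker statuses as bits and test a fault hypothesis with
# masks, so each rule evaluation is a handful of AND/OR operations.
RELAY_A, RELAY_B, BREAKER_1, BREAKER_2 = 1 << 0, 1 << 1, 1 << 2, 1 << 3

# Hypothetical rule: a bus fault is confirmed when both relays operated
# AND at least one breaker tripped.
def bus_fault(status):
    both_relays = RELAY_A | RELAY_B
    relays_ok = (status & both_relays) == both_relays
    breaker_tripped = (status & (BREAKER_1 | BREAKER_2)) != 0
    return relays_ok and breaker_tripped

print(bus_fault(RELAY_A | RELAY_B | BREAKER_1))  # -> True
print(bus_fault(RELAY_A | BREAKER_1))            # -> False (only one relay operated)
```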

  18. Human Subjects Protection: A Source for Ethical Service-Learning Practice

    ERIC Educational Resources Information Center

    Wendler, Rachael

    2012-01-01

    Human subjects research ethics were developed to ensure responsible conduct when university researchers learn by interacting with community members. As service-learning students also learn by interacting with community members, a similar set of principles may strengthen the ethical practice of service-learning. This article identifies ethical…

  19. Protecting National Security Information: An Overview of Federal Policy and Practice: A Symposium.

    ERIC Educational Resources Information Center

    Relyea, Harold C.; And Others

    1984-01-01

    A series of eight articles explores the impact of national security powers on government information policy and practice. Topics discussed include: the Freedom of Information Act; recommended standards for security classification; federal legislation dealing with national security; and national controls on scientific communication. (CLB)

  20. From Game Protection to Wildlife Management: Frame Shifts, Organizational Development, and Field Practices

    ERIC Educational Resources Information Center

    Brulle, Robert J.; Benford, Robert D.

    2012-01-01

    One enduring question in social movements research is the relationship between cultural representations and organizational structure. In this article, we examine the development of different discursive frames over time, and how such frame shifts affect movement structure and practices. This approach seeks to illuminate the dialectical interplay…

  1. What practical factors guide the choice of stent and protection device during carotid angioplasty?

    PubMed

    Bosiers, M; Deloose, K; Verbist, J; Peeters, P

    2008-06-01

    The importance of angioplasty and stenting in the treatment of carotid artery disease cannot be overstated. Successful carotid stenting depends not only on the operator's skills and experience; adequate selection of cerebral protection devices and carotid stents can also help avoid neurological complications. A broad spectrum of carotid devices is currently on the market, and since all have their assets and downsides, it is virtually impossible to single out one specific device as the best. The individual characteristics of each specific protection system or stent may make it an attractive choice in one circumstance but render it a less desirable option in other situations. The applicability depends primarily on the arterial anatomy and the specific details of the lesion being treated. Certainly, though, personal preferences and familiarity with a specific device may legitimately influence the decision to choose one over another. PMID:18375153

  2. Protecting civil aircraft from the MANPAD threat: is this a practical scenario?

    NASA Astrophysics Data System (ADS)

    Taylor, William

    2005-11-01

    This paper was written as a keynote address and backdrop to the 2005 SPIE Technologies for Optical Countermeasures II Conference. It takes as its topic the problem of protecting civil aircraft from Man Portable Air Defence missile Systems (MANPADS). The paper examines the economic background of the airline industry and the effects a successful attack could have. It then addresses the various motives, means, and opportunities that terrorists have to mount MANPADS attacks against civil aircraft. In reviewing the various mitigation options available to defeat or deny MANPAD engagements, the paper identifies key technology areas available for exploitation. It then focuses on the optical countermeasure technologies used in providing aircraft platform self-protection. Finally, the paper summarises and concludes that whilst much has been done, and can be done, to mitigate the MANPAD threat, there is not yet an exportable, affordable and robust countermeasures technology for large-scale commercial systems and operations.

  3. Exploring the Links between Ethnobotany, Local Therapeutic Practices, and Protected Areas in Santa Catarina Coastline, Brazil

    PubMed Central

    Zank, Sofia; Hanazaki, Natalia

    2012-01-01

    We investigated the knowledge of medicinal plants in two areas proposed for the creation of protected areas for sustainable use in the city of Imbituba (SC). In this study, we analyzed the influence of gender, form of learning, and modern medicine on medicinal plant knowledge while also reflecting on the relationship of this knowledge to in situ conservation. Data collection was conducted through structured interviews, free listings, guided tours, and collection of botanical material. 197 species of medicinal plants belonging to 70 botanical families were recorded. Gender and the form of learning were factors that significantly influenced the similarity of the knowledge of medicinal plants among the informants. We also observed the existence of a therapeutic pluralism among key informants. Local medicinal plant knowledge emphasizes the importance of strategies to create protected areas of sustainable use as a way to ensure the maintenance of traditional lifestyles and associated local knowledge. PMID:22203874

  4. Fault slip distribution and fault roughness

    NASA Astrophysics Data System (ADS)

    Candela, Thibault; Renard, François; Schmittbuhl, Jean; Bouchon, Michel; Brodsky, Emily E.

    2011-11-01

    We present analysis of the spatial correlations of seismological slip maps and fault topography roughness, illuminating their identical self-affine exponent. Though the complexity of the coseismic spatial slip distribution can be intuitively associated with geometrical or stress heterogeneities along the fault surface, this has never been demonstrated. Based on new measurements of fault surface topography and on statistical analyses of kinematic inversions of slip maps, we propose a model which quantitatively characterizes the link between slip distribution and fault surface roughness. Our approach can be divided into two complementary steps: (i) Using a numerical computation, we estimate the influence of fault roughness on the frictional strength (pre-stress). We model a fault as a rough interface where elastic asperities are squeezed. The Hurst exponent characterizing the self-affinity of the frictional strength field approaches the roughness exponent of the fault surface in the direction of slip. (ii) Using a quasi-static model of fault propagation, which includes the effect of long-range elastic interactions and spatial correlations in the frictional strength, the spatial slip distribution is itself observed to be self-affine, with a Hurst exponent set by that of the frictional strength field. Under the assumption that the origin of the spatial fluctuations in frictional strength along faults is the elastic squeeze of fault asperities, we show that the self-affine geometrical properties of fault surface roughness control slip correlations, tying the slip exponent to the roughness exponent. Given that the roughness exponent is similar for a wide range of faults (various accumulated displacements, host rocks and slip movements), we predict a correspondingly similar slip exponent. Even if our quasi-static fault model is more relevant for creeping faults, the spatial slip correlations observed are consistent with those of seismological slip maps. A consequence is that the self-affinity property of slip roughness may be explained by fault geometry without considering

  5. Plastic Surgery Practice Models and Research Aims Under the Patient Protection and Affordable Care Act

    PubMed Central

    Giladi, Aviram M.; Yuan, Frank; Chung, Kevin C.

    2014-01-01

    As the healthcare landscape in the United States changes under the Affordable Care Act (ACA), providers are set to face numerous new challenges. Although concerns about practice sustainability with declining reimbursement have dominated the dialogue, there are more pressing changes to the healthcare funding mechanism as a whole that must be addressed. Plastic surgeons, involved in various practice models each with different relationships to hospitals, referring physicians, and payers, must understand these reimbursement changes in order to dictate adequate compensation in the future. Here we discuss bundle payments and Accountable Care Organizations (ACOs), and how plastic surgeons might best engage in these new system designs. In addition, we review the value of a focused and driven health-services research agenda in plastic surgery, and the importance of this research in supporting long-term financial stability for the specialty. PMID:25626805

  6. Associations of contextual risk and protective factors with fathers’ parenting practices in the post-deployment environment

    PubMed Central

    Davis, Laurel; Hanson, Sheila K.; Zamir, Osnat; Gewirtz, Abigail H.; DeGarmo, David S.

    2015-01-01

    Deployment separation and reunifications are salient contexts that directly impact effective family functioning and parenting for military fathers. Yet, we know very little about determinants of post-deployed father involvement and effective parenting. The present study examined hypothesized risk and protective factors of observed parenting for 282 post-deployed fathers who served in the Army National Guard/Reserves. Pre-intervention data were employed from fathers participating in the After Deployment, Adaptive Parenting Tools (ADAPT) randomized control trial. Parenting practices were obtained from direct observation of father-child interaction and included measures of problem solving, harsh discipline, positive involvement, encouragement, and monitoring. Risk factors included combat exposure, negative life events, months deployed, and PTSD symptoms. Protective factors included education, income, dyadic adjustment, and social support. Results of a structural equation model predicting an effective parenting construct indicated that months deployed, income, and father age were most related to observed parenting, explaining 16% of the variance. We are aware of no other study utilizing direct parent-child observations of father’s parenting skills following overseas deployment. Implications for practice and preventive intervention are discussed. PMID:26213794

  7. Partial fault dictionary: A new approach for computer-aided fault localization

    SciTech Connect

    Hunger, A.; Papathanasiou, A.

    1995-12-31

    The approach described in this paper has been developed to address the computation time and problem size of localization methodologies in VLSI circuits in order to speed up the overall time consumption for fault localization. The reduction of the problem to solve is combined with the idea of the fault dictionary. In a pre-processing phase, a possibly faulty area is derived using the netlist and the actual test results as input data. The result is a set of cones originating from each faulty primary output. In the next step, the best cone is extracted for the fault dictionary methodology according to a heuristic formula. The circuit nodes, which are included in the intersection of the cones, are combined to a fault list. This fault list together with the best cone can be used by the fault simulator to generate a small and manageable fault dictionary related to one faulty output. In connection with additional algorithms for the reduction of stimuli and netlist a partial fault dictionary can be set up. This dictionary is valid only for the given faulty device together with the given and reduced stimuli, but offers important benefits: Practical results show a reduction of simulation time and size of the fault dictionary by factors around 100 or even more, depending on the actual circuit and assumed fault. The list of fault candidates is significantly reduced, and the required number of steps during the process of localization is reduced, too.
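
The cone-based reduction at the heart of the partial-dictionary approach can be illustrated with a toy netlist (all node names hypothetical): each failing primary output defines a fan-in cone, and the fault list is the intersection of those cones:

```python
# Hypothetical netlist: gate -> list of fan-in nodes.
NETLIST = {
    "out1": ["g1", "g2"],
    "out2": ["g2", "g3"],
    "g1": ["in1", "in2"],
    "g2": ["in2", "in3"],
    "g3": ["in3", "in4"],
}

def cone(node):
    """All nodes in the fan-in cone originating from a primary output."""
    seen = set()
    stack = [node]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(NETLIST.get(n, []))
    return seen

# Only nodes feeding *every* failing output can explain a single fault,
# so the fault list is the intersection of the failing outputs' cones.
faulty_outputs = ["out1", "out2"]
fault_list = set.intersection(*(cone(o) for o in faulty_outputs))
print(sorted(fault_list))  # ['g2', 'in2', 'in3']
```

Fault simulation then only needs to target this reduced list within one cone, which is the source of the reported ~100x reductions in dictionary size.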

  8. Flight elements: Fault detection and fault management

    NASA Technical Reports Server (NTRS)

    Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

    1990-01-01

    Fault management for an intelligent computational system must be developed using a top down integrated engineering approach. An approach proposed includes integrating the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real time intelligent fault detection and management system will be accomplished via the implementation of several objectives, which are: Development of fault tolerant/FDIR requirement and specification from a systems level which will carry through from conceptual design through implementation and mission operations; Implementation of monitoring, diagnosis, and reconfiguration at all system levels providing fault isolation and system integration; Optimize system operations to manage degraded system performance through system integration; and Lower development and operations costs through the implementation of an intelligent real time fault detection and fault management system and an information management system.

  9. Climate change and natural disasters – integrating science and practice to protect health

    PubMed Central

    Sauerborn, Rainer; Ebi, Kristie

    2012-01-01

    Background: Hydro-meteorological disasters are the focus of this paper; the authors examine to what extent climate change increases their frequency and intensity. Methods: Review of IPCC projections of climate-change-related extreme weather events and of related literature on health effects. Results: Projections show that climate change is likely to increase the frequency, intensity, duration, and spatial distribution of a range of extreme weather events over coming decades. Conclusions: There is a need for strengthened collaboration among climate scientists, health researchers, policy-makers, and the disaster community to jointly develop adaptation strategies to protect human health. PMID:23273248

  10. Fault damage zones

    NASA Astrophysics Data System (ADS)

    Kim, Young-Seog; Peacock, David C. P.; Sanderson, David J.

    2004-03-01

    Damage zones show very similar geometries across a wide range of scales and fault types, including strike-slip, normal and thrust faults. We use a geometric classification of damage zones into tip-, wall-, and linking-damage zones, based on their location around faults. These classes can be sub-divided in terms of fault and fracture patterns within the damage zone. A variety of damage zone structures can occur at mode II tips of strike-slip faults, including wing cracks, horsetail fractures, antithetic faults, and synthetic branch faults. Wall damage zones result from the propagation of mode II and mode III fault tips through a rock, or from damage associated with the increase in slip on a fault. Wall damage zone structures include extension fractures, antithetic faults, synthetic faults, and rotated blocks with associated triangular openings. The damage formed at the mode III tips of strike-slip faults (e.g. observed in cliff sections) are classified as wall damage zones, because the damage zone structures are distributed along a fault trace in map view. Mixed-mode tips are likely to show characteristics of both mode II and mode III tips. Linking damage zones are developed at steps between two sub-parallel faults, and the structures developed depend on whether the step is extensional or contractional. Extension fractures and pull-aparts typically develop in extensional steps, whilst solution seams, antithetic faults and synthetic faults commonly develop in contractional steps. Rotated blocks, isolated lenses or strike-slip duplexes may occur in both extensional and contractional steps. Damage zone geometries and structures are strongly controlled by the location around a fault, the slip mode at a fault tip, and by the evolutionary stage of the fault. Although other factors control the nature of damage zones (e.g. lithology, rheology and stress system), the three-dimensional fault geometry and slip mode at each tip must be considered to gain an understanding of

  11. Technical Research Paper "Microcontroller based Fault Detector"

    NASA Astrophysics Data System (ADS)

    Joshi, Nishith

    2012-10-01

    Purpose: The aim of this research is to develop a device that detects faults in a line and isolates the system or instrument connected to it. Scope: The device uses a microcontroller, with appropriate programming, for detection and isolation of the connected instrument. The instrument devised is economical and effective compared to other protective devices available in the market. After the product is patented, large-scale production for consumer use is also possible. Design Approach: The design methodology uses a microcontroller in conjunction with relay circuitry and an LCD display. It is a new design on the market and is intended as a substitute for ELCBs, MCBs, and relays in the near future. Practical Implications: The circuit devised can be used with medical instruments, industrial instruments, and even household appliances. It finds many applications in factories and industries where costly instruments must be protected from faults. Conclusion: The device thus developed is very economical and effective for protecting household as well as industrial equipment. An added advantage of this circuit is that it shows the type of fault on the LCD and keeps the circuit isolated until the fault is cleared.
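
The detect-isolate-display-hold loop the abstract describes can be sketched as a small state machine; the thresholds, fault labels, and auto-reconnect behavior below are illustrative assumptions, not the paper's firmware:

```python
# Assumed fault classes and trip thresholds (illustrative values).
def classify_fault(volts, amps, v_lo=200.0, v_hi=250.0, i_max=10.0):
    if amps > i_max:
        return "overcurrent"
    if volts > v_hi:
        return "overvoltage"
    if volts < v_lo:
        return "undervoltage"
    return None

class FaultDetector:
    """Sketch of the microcontroller loop: trip the relay on a fault,
    show the fault type on the display, hold isolation until cleared."""
    def __init__(self):
        self.relay_closed = True
        self.display = "normal"

    def step(self, volts, amps):
        fault = classify_fault(volts, amps)
        if fault is not None:        # trip and latch the fault type
            self.relay_closed = False
            self.display = fault
        elif not self.relay_closed:  # fault cleared: reconnect
            self.relay_closed = True
            self.display = "normal"

d = FaultDetector()
d.step(230.0, 15.0)  # overcurrent -> isolate
print(d.relay_closed, d.display)
d.step(230.0, 5.0)   # cleared -> reconnect
print(d.relay_closed, d.display)
```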

  12. Evaluating sustainable water quality management in the U.S.: Urban, Agricultural, and Environmental Protection Practices

    NASA Astrophysics Data System (ADS)

    van Oel, P. R.; Alfredo, K. A.; Russo, T. A.

    2015-12-01

    Sustainable water management typically emphasizes water resource quantity, with focus directed at availability and use practices. When attention is placed on sustainable water quality management, the holistic, cross-sector perspective inherent to sustainability is often lost. Proper water quality management is a critical component of sustainable development practices. However, sustainable development definitions and metrics related to water quality resilience and management are often not well defined; water quality is often buried in large indicator sets used for analysis, and the policies regulating management practices create sector-specific burdens for ensuring adequate water quality. In this research, we investigated the methods by which water quality is evaluated through internationally applied indicators and incorporated into the larger idea of "sustainability." We also dissect policy's role in the distribution of responsibility with regard to water quality management in the United States through evaluation of three broad sectors: urban, agricultural, and environmental water quality. Our research concludes that despite a growing intention to use a single-system approach for urban, agricultural, and environmental water quality management, one does not yet exist and is even hindered by our current policies and regulations. As policy continues to lead in determining water quality and defining contamination limits, new regulation must reconcile the disparity in requirements for the contaminators and those performing end-of-pipe treatment. Just as the sustainable development indicators we researched tried to integrate environmental, economic, and social aspects without skewing focus to one of these three categories, policy cannot continue to regulate a single sector of society without considering impacts to the entire watershed and/or region.
Unequal distribution of the water pollution burden creates disjointed economic growth, infrastructure development, and policy

  13. Automated fault location and diagnosis on electric power distribution feeders

    SciTech Connect

    Zhu, J.; Lubkeman, D.L.; Girgis, A.A.

    1997-04-01

    This paper presents new techniques for locating and diagnosing faults on electric power distribution feeders. The proposed fault location and diagnosis scheme is capable of accurately identifying the location of a fault upon its occurrence, based on the integration of information available from disturbance recording devices with knowledge contained in a distribution feeder database. The developed fault location and diagnosis system can also be applied to the investigation of temporary faults that may not result in a blown fuse. The proposed fault location algorithm is based on the steady-state analysis of the faulted distribution network. To deal with the uncertainties inherent in the system modeling and the phasor estimation, the fault location algorithm has been adapted to estimate fault regions based on probabilistic modeling and analysis. Since the distribution feeder is a radial network, multiple possibilities of fault locations could be computed with measurements available only at the substation. To identify the actual fault location, a fault diagnosis algorithm has been developed to prune down and rank the possible fault locations by integrating the available pieces of evidence. Testing of the developed fault location and diagnosis system using field data has demonstrated its potential for practical use.
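
The two stages the abstract describes, a steady-state distance estimate followed by ranking of the multiple candidate points a radial feeder admits, can be sketched as follows; the per-km impedance, measurements, and evidence weights are illustrative stand-ins, not the paper's algorithm:

```python
# Assumed uniform line impedance (illustrative value).
Z_PER_KM = 0.35  # ohm/km

def fault_distance(v_fault, i_fault):
    """Apparent electrical distance to the fault from substation
    voltage/current phasor magnitudes (simplified single-phase view)."""
    return abs(v_fault / i_fault) / Z_PER_KM

# On a branched feeder the same electrical distance matches several
# physical points; rank them by integrating available evidence (e.g.
# which protective device operated, customer outage calls).
candidates = [
    {"branch": "main", "km": 4.0, "evidence": 0.7},
    {"branch": "lateral_2", "km": 4.0, "evidence": 0.3},
]

d = fault_distance(v_fault=2400.0, i_fault=1700.0)
ranked = sorted(candidates, key=lambda c: -c["evidence"])
print(round(d, 2), ranked[0]["branch"])
```

The paper additionally propagates measurement and model uncertainty so the output is a fault *region* with a probability, rather than the single point this sketch returns.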

  14. Fault Management Design Strategies

    NASA Technical Reports Server (NTRS)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems, and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM), and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis to develop framework to design and implement FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

  15. Practice.

    PubMed

    Chambers, David W

    2008-01-01

    Practice refers to a characteristic way professionals use common standards to customize solutions to a range of problems. Practice includes (a) standards for outcomes and processes that are shared with one's colleagues, (b) a rich repertoire of skills grounded in diagnostic acumen, (c) an ability to see the actual and the ideal and work back and forth between them, (d) functional artistry, and (e) learning by doing that transcends scientific rationality. Communities of practice, such as dental offices, are small groups that work together in interlocking roles to achieve these ends. PMID:19413050

  16. Physiological and practical evaluation of a biological/chemical protective device for infants.

    PubMed

    Amirav, I; Epstien, Y; Luder, A S

    2000-09-01

    The Chemical Infant Protective System (CHIPS) is a special hood-like system into which a small battery-operated blower delivers filtered air. Because it is a semiclosed system, there is a risk of dangerous CO2 accumulation within the device, which particularly affects infants with acute or chronic respiratory disorders. Eleven infants hospitalized with various respiratory illnesses wore the device for 15 minutes. Inspired O2, inspired CO2, heart rate, respiratory rate, oxygen saturation, and inside temperature and humidity were measured before and during this test period. Inspired O2 and heart rate during the test period were significantly lower than baseline levels (O2, 19.1 vs. 20.1%; heart rate, 133 vs. 142 beats/min). Inspired CO2 and inside temperature during the test period were significantly higher than baseline levels (CO2, 0.23 vs. 0.06%; temperature, 25.0 vs. 23.1 degrees C). Oxygen saturation, respiratory rate, and humidity were not different from baseline levels. A short-term stay within the CHIPS in well-ventilated surroundings did not result in significant clinical and physiological impact for sick infants. Nevertheless, trends were identified that may be worrisome during longer periods and in sealed rooms. PMID:11011536

  17. Genetic and epigenetic features in radiation sensitivity. Part II: implications for clinical practice and radiation protection.

    PubMed

    Bourguignon, Michel H; Gisone, Pablo A; Perez, Maria R; Michelin, Severino; Dubner, Diana; Giorgio, Marina Di; Carosella, Edgardo D

    2005-03-01

    Recent progress especially in the field of gene identification and expression has attracted greater attention to the genetic and epigenetic susceptibility to cancer, possibly enhanced by ionising radiation. This issue is especially important for radiation therapists since hypersensitive patients may suffer from adverse effects in normal tissues following standard radiation therapy, while normally sensitive patients could receive higher doses of radiation, offering a better likelihood of cure for malignant tumours. Although only a small percentage of individuals are "hypersensitive" to radiation effects, all medical specialists using ionising radiation should be aware of the aforementioned progress in medical knowledge. The present paper, the second of two parts, reviews human disorders known or strongly suspected to be associated with hypersensitivity to ionising radiation. The main tests capable of detecting such pathologies in advance are analysed, and ethical issues regarding genetic testing are considered. The implications for radiation protection of possible hypersensitivity to radiation in a part of the population are discussed, and some guidelines for nuclear medicine professionals are proposed. PMID:15692806

  18. Predicting future protection of respirator users: Statistical approaches and practical implications.

    PubMed

    Hu, Chengcheng; Harber, Philip; Su, Jing

    2016-05-01

    The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing. PMID:26771896

  19. Fault tree handbook

    SciTech Connect

    Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation.
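
The quantitative side of fault tree evaluation can be sketched for independent basic events; the tree shape and probabilities below are illustrative, not from the handbook:

```python
def prob(node, basic):
    """Top-event probability of a fault tree with independent basic
    events. A node is either a basic-event name or (gate, [children])."""
    if isinstance(node, str):
        return basic[node]
    gate, children = node
    ps = [prob(c, basic) for c in children]
    if gate == "AND":                # all children must fail
        out = 1.0
        for p in ps:
            out *= p
        return out
    if gate == "OR":                 # complement of "no child fails"
        out = 1.0
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError(gate)

# TOP = (pump fails) OR (both redundant valves fail)
tree = ("OR", ["pump", ("AND", ["valve1", "valve2"])])
basic = {"pump": 0.01, "valve1": 0.05, "valve2": 0.05}
print(round(prob(tree, basic), 6))  # 0.012475
```

This corresponds to the handbook's quantitative evaluation step; the qualitative step (finding minimal cut sets via Boolean reduction) is a separate analysis not shown here.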

  20. Discrete Wavelet Transform for Fault Locations in Underground Distribution System

    NASA Astrophysics Data System (ADS)

    Apisit, C.; Ngaopitakkul, A.

    2010-10-01

    In this paper, a technique for detecting faults in underground distribution system is presented. Discrete Wavelet Transform (DWT) based on traveling wave is employed in order to detect the high frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is employed for calculating the distance of fault from sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations and faulty phases. The result is found that the proposed technique provides satisfactory result and will be very useful in the development of power systems protection scheme.
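
The traveling-wave principle behind the method reduces to a round-trip timing calculation: the surge reflected at the fault arrives one round trip after the incident wavefront, so distance = v * (t2 - t1) / 2. The sketch below shows only this final step; the propagation speed and peak times are illustrative, and the paper's DWT stage, which extracts these peak times from the high-frequency components, is omitted:

```python
# Assumed surge propagation speed in underground cable (illustrative;
# below the free-space speed of light).
V_WAVE = 1.8e8  # m/s

def fault_distance_m(t_first_peak, t_second_peak):
    """Distance from sending end: half the round-trip travel time of the
    wave reflected back from the fault, times propagation speed."""
    return V_WAVE * (t_second_peak - t_first_peak) / 2.0

# Peaks 22 microseconds apart -> fault roughly 2 km out.
print(fault_distance_m(10e-6, 32e-6))
```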

  1. Identifying biogeochemical processes beneath stormwater infiltration ponds in support of a new best management practice for groundwater protection

    USGS Publications Warehouse

    O'Reilly, Andrew M.; Chang, Ni-Bin; Wanielista, Martin P.; Xuan, Zhemin

    2011-01-01

     When applying a stormwater infiltration pond best management practice (BMP) for protecting the quality of underlying groundwater, a common constituent of concern is nitrate. Two stormwater infiltration ponds, the SO and HT ponds, in central Florida, USA, were monitored. A temporal succession of biogeochemical processes was identified beneath the SO pond, including oxygen reduction, denitrification, manganese and iron reduction, and methanogenesis. In contrast, aerobic conditions persisted beneath the HT pond, resulting in nitrate leaching into groundwater. Biogeochemical differences likely are related to soil textural and hydraulic properties that control surface/subsurface oxygen exchange. A new infiltration BMP was developed and a full-scale application was implemented for the HT pond. Preliminary results indicate reductions in nitrate concentration exceeding 50% in soil water and shallow groundwater beneath the HT pond.

  2. Malaria Knowledge, Concern, Land Management, and Protection Practices among Land Owners and/or Managers in Lowland versus Highland Ecuador

    PubMed Central

    Pinault, Lauren L.; Hunter, Fiona F.

    2011-01-01

    To control malaria effectively, it is essential to understand the current knowledge, beliefs, concerns, land management practices, and mosquito bite protection methods in use by citizens. This study presents a comparative, quantitative, interview-based study of land owners and/or managers (n = 262) in the Ecuadorian lowlands (presently considered malarious) (n = 131) and highlands (potentially malarious in the future) (n = 131). Although respondents had a strong understanding of where the disease occurs in their own country and of the basic relationship among standing water, mosquitoes, and malaria, about half of respondents in potential risk areas denied the current possibility of malaria infection on their own property. As well, about half of respondents with potential anopheline larval habitat did not report its presence, likely due to a highly specific definition of suitable mosquito habitat. Most respondents who are considered at risk of malaria currently use at least one type of mosquito bite prevention, most commonly bed nets. PMID:22363897

  3. Diagnosis without repair for hybrid fault situations. [in computer systems

    NASA Technical Reports Server (NTRS)

    Mallela, S.; Masson, G. M.

    1980-01-01

    In the present paper, the concept of a hybrid fault situation is introduced, which specifies bounded combinations of permanently faulty and intermittently faulty units in a system. The general class of hybrid fault situations includes, as special cases, the all-permanent-fault case and the unrestricted intermittent-fault case, which have previously been considered with PMC models. An approach compatible with the diagnosis of permanent fault situations is then applied to the diagnosis of hybrid fault situations. The motivation for doing so is the common practice of testing for the presence of intermittent faults in systems by means of repeated applications of tests designed for the detection of permanent faults. The testing assignment of PMC models of systems is characterized, and the interrelationship between the numbers of intermittently and permanently faulty units that can be diagnosed is established.

  4. Fault zone hydrogeology

    NASA Astrophysics Data System (ADS)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults on fluid flow patterns remains a challenge and requires a multidisciplinary research effort from structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multidisciplinary understanding of fault zone hydrogeology. We discuss surface and subsurface observations from diverse rock types, from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeological observations, and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered, not from outcrop observations alone.
To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and

  5. InSAR measurements around active faults: creeping Philippine Fault and un-creeping Alpine Fault

    NASA Astrophysics Data System (ADS)

    Fukushima, Y.

    2013-12-01

    Recently, interferometric synthetic aperture radar (InSAR) time-series analyses have been frequently applied to measure the time-series of small and quasi-steady displacements in wide areas. Large efforts in the methodological developments have been made to pursue higher temporal and spatial resolutions by using frequently acquired SAR images and detecting more pixels that exhibit phase stability. While such a high resolution is indispensable for tracking displacements of man-made and other small-scale structures, it is not necessarily needed and can be unnecessarily computer-intensive for measuring the crustal deformation associated with active faults and volcanic activities. I apply a simple and efficient method to measure the deformation around the Alpine Fault in the South Island of New Zealand, and the Philippine Fault in the Leyte Island. I use a small-baseline subset (SBAS) analysis approach (Berardino, et al., 2002). Generally, the more we average the pixel values, the more coherent the signals are. Considering that, for the deformation around active faults, the spatial resolution can be as coarse as a few hundred meters, we can severely 'multi-look' the interferograms. The two applied cases in this study benefited from this approach; I could obtain the mean velocity maps on practically the entire area without discarding decorrelated areas. The signals could have been only partially obtained by standard persistent scatterer or single-look small-baseline approaches that are much more computer-intensive. In order to further increase the signal detection capability, it is sometimes effective to introduce a processing algorithm adapted to the signal of interest. In an InSAR time-series processing, one usually needs to set the reference point because interferograms are all relative measurements. It is difficult, however, to fix the reference point when one aims to measure long-wavelength deformation signals that span the whole analysis area. 
This problem can be
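    The severe 'multi-looking' described above amounts to boxcar-averaging complex interferogram pixels down to a coarse grid. A minimal sketch, assuming a synthetic random-phase interferogram and illustrative look factors; this is not the paper's actual SBAS processing chain.

```python
import numpy as np

def multilook(ifg, looks_az, looks_rg):
    """Boxcar-average a complex interferogram over looks_az x looks_rg windows,
    trimming edge rows/columns that do not fill a complete window."""
    rows = (ifg.shape[0] // looks_az) * looks_az
    cols = (ifg.shape[1] // looks_rg) * looks_rg
    blocks = ifg[:rows, :cols].reshape(
        rows // looks_az, looks_az, cols // looks_rg, looks_rg)
    return blocks.mean(axis=(1, 3))  # complex average preserves phase coherence

rng = np.random.default_rng(0)
phase = rng.uniform(-np.pi, np.pi, (1200, 1200))  # synthetic interferometric phase
ifg = np.exp(1j * phase)

ml = multilook(ifg, 40, 40)  # 40x40 looks: pixels become a few hundred metres
print(ml.shape)  # (30, 30)
```

    Averaging complex values rather than phases is what boosts coherence: decorrelated noise cancels while a common deformation signal survives, at the cost of spatial resolution, which is acceptable for fault-scale signals.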

  6. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault-tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine, or lying, faults. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line-replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.

  7. A new fault analysis on RSA based on the fault of multiplier

    NASA Astrophysics Data System (ADS)

    Chen, Caisen; Wang, Tao; Kou, Yingzhan; Zhao, Xinjie

    2011-12-01

    Earlier fault analyses of RSA with the "Left-to-Right" implementation were based on modifying the public modulus N, which is difficult to carry out in practice. To find a more practical attack, this work exploits the fact that a microprocessor's multiplier is easily disturbed by voltage: a fault can be injected into the multiplier during RSA signing by adjusting the supply voltage. This paper proposes a new fault analysis of RSA signatures based on multiplier errors, improving the feasibility of the attack, and extends it to RSA with the fixed-window algorithm. Finally, the complexity of the algorithm is analyzed, and its extensibility and feasibility are demonstrated in theory and in simulation experiments. The experimental results show that the new fault analysis is more practical to carry out.
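    For context, the "Left-to-Right" exponentiation the attack targets routes every intermediate value through the hardware multiplier, so a single glitched multiplication corrupts one identifiable step. A minimal sketch of that algorithm only (the attack itself is not reproduced here); the numbers are illustrative.

```python
def modexp_left_to_right(m, d, n):
    """Left-to-right square-and-multiply: scan exponent bits MSB -> LSB.
    Every squaring and conditional multiply passes through the multiplier,
    which is where a voltage glitch would inject a fault."""
    result = 1
    for bit in bin(d)[2:]:
        result = (result * result) % n      # square
        if bit == '1':
            result = (result * m) % n       # conditional multiply
    return result

# Toy parameters (not a real RSA key): signature s = m^d mod n.
print(modexp_left_to_right(7, 13, 33))  # matches pow(7, 13, 33)
```

    Because a fault at bit position i perturbs the chain deterministically from that step onward, comparing correct and faulty signatures leaks information about the secret exponent bits, which is the lever the paper's analysis uses.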

  8. [The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].

    PubMed

    Liu, Hongbin

    2015-11-01

    In this paper, the traditional fault tree analysis method is presented, and its application to medical equipment maintenance is described in detail. Significant changes are made when the traditional method is introduced into medical equipment maintenance: the logic symbols, logic analysis, and calculation are abandoned, along with their complicated procedures, and only the intuitive, practical fault tree diagram is retained. The diagram itself also differs: the fault tree is no longer a logic tree but a thinking tree for troubleshooting, the definition of the tree's nodes is different, and the composition of its branches is also different. PMID:27066693

  9. Fault model development for fault tolerant VLSI design

    NASA Astrophysics Data System (ADS)

    Hartmann, C. R.; Lala, P. K.; Ali, A. M.; Visweswaran, G. S.; Ganguly, S.

    1988-05-01

    Fault models provide systematic and precise representations of physical defects in microcircuits in a form suitable for simulation and test generation. The current difficulty in testing VLSI circuits can be attributed to the tremendous increase in design complexity and the inappropriateness of traditional stuck-at fault models. This report develops fault models for three different types of common defects that are not accurately represented by the stuck-at fault model. The faults examined in this report are: bridging faults, transistor stuck-open faults, and transient faults caused by alpha particle radiation. A generalized fault model could not be developed for the three fault types. However, microcircuit behavior and fault detection strategies are described for the bridging, transistor stuck-open, and transient (alpha particle strike) faults. The results of this study can be applied to the simulation and analysis of faults in fault tolerant VLSI circuits.
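    The stuck-at model the report contrasts against can be illustrated with a toy gate-level fault simulator: a test vector detects a fault exactly when the good and faulty circuits produce different outputs. The three-gate netlist and net names below are invented for illustration and are not from the report.

```python
def simulate(inputs, fault=None):
    """Evaluate out = (a AND b) OR (NOT a), optionally forcing one net
    stuck at a fixed value, as the single stuck-at fault model assumes."""
    net = dict(inputs)
    def val(name):
        if fault and fault[0] == name:
            return fault[1]        # stuck-at value overrides the driven value
        return net[name]
    net['w1'] = val('a') & val('b')    # AND gate
    net['w2'] = 1 - val('a')           # NOT gate
    net['out'] = val('w1') | val('w2') # OR gate
    return val('out')

# Vector a=1, b=0: good circuit outputs 0, but with w1 stuck-at-1 it outputs 1,
# so this vector detects the fault w1/1.
vector = {'a': 1, 'b': 0}
print(simulate(vector), simulate(vector, fault=('w1', 1)))  # 0 1
```

    Bridging faults (two nets shorted), stuck-open transistors (sequential behavior in a combinational gate), and alpha-particle transients all violate the "one net, one fixed value" assumption above, which is why the report develops separate models for them.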

  10. Reset tree-based optical fault detection.

    PubMed

    Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon

    2013-01-01

    In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267

  11. Reset Tree-Based Optical Fault Detection

    PubMed Central

    Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon

    2013-01-01

    In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267

  12. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.
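    The two measures the abstract names, the errors/fault ratio and performance degradation, can be sketched from an injection log. The fault names and figures below are invented for illustration; this is not FTAPE's actual output format.

```python
# Hypothetical fault-injection log: each entry records one injected fault,
# how many errors it produced, and the workload runtime under that fault.
injections = [
    {"fault": "cpu0_bitflip",  "errors_detected": 3, "runtime_s": 12.4},
    {"fault": "mem_stuck",     "errors_detected": 0, "runtime_s": 11.9},
    {"fault": "bus_transient", "errors_detected": 5, "runtime_s": 14.1},
]
baseline_runtime_s = 11.5   # fault-free run of the same stress workload

# Errors/fault ratio: how strongly injected faults propagate into errors.
errors_per_fault = sum(i["errors_detected"] for i in injections) / len(injections)

# Performance degradation: runtime overhead of tolerating each fault.
degradation = [i["runtime_s"] / baseline_runtime_s - 1.0 for i in injections]

print(f"errors/fault = {errors_per_fault:.2f}")
print(f"mean perf. degradation = {sum(degradation) / len(degradation):.1%}")
```

    Driving injection from workload activity, as FTAPE does, keeps the errors/fault ratio high: faults injected into idle components would never propagate and would tell us little about the fault-tolerance mechanisms.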

  13. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1994-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  14. FTAPE: A fault injection tool to measure fault tolerance

    NASA Astrophysics Data System (ADS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1994-07-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  15. An Assessment of Personal Health Protective Practices of Out-of-School Adolescents in South-South Senatorial District of Nigeria

    ERIC Educational Resources Information Center

    Izevbigie, T. I.; Owie, Ikponmwosa

    2006-01-01

    The Personal Health Protective Practices of the out-of-school adolescent was investigated using a survey design. Three research questions were formulated and tested. The population consists of out-of-school adolescents in South-South Senatorial District of Nigeria. The sample consists of 320 adolescents randomly selected using purposive sampling…

  16. School Social Workers' Perceived Understanding of Inner City and Urban Community and Neighborhood Risk and Protective Factors and Effectiveness in Practice Tasks

    ERIC Educational Resources Information Center

    Teasley, Martell; Randolph, Karen; Cho, Hyunkag

    2008-01-01

    This is an exploratory study that makes use of survey research methods and a convenience sample (N = 222) to examine how urban school social workers' perceived knowledge of neighborhood and community risk and protective factors affect their levels of self-reported effectiveness in practice tasks. Using hierarchical multiple regression, the…

  17. The Impact of Poverty, Fear of Crime, and Crime Victimization on Keeping Firearms for Protection and Unsafe Gun-Storage Practices: A Review and Analysis with Policy Recommendations.

    ERIC Educational Resources Information Center

    Vacha, Edward F.; McLaughlin, T. F.

    2000-01-01

    Reviews the literature on firearms accidents among young, poor, urban children. Evidence suggests that deciding to keep a gun for protection is a practical response to perceived living conditions. Reducing the consequences of risky gun behavior among poor families requires policies and programs that eliminate experiences associated with such…

  18. Analysis of the effects of asymmetric faults in three-phase superconducting inductive fault current limiters

    NASA Astrophysics Data System (ADS)

    Ferreira, R.; Pina, J. M.; Vilhena, N.; Arsénio, P.; Pronto, A. G.; Martins, J.

    2014-05-01

    Inductive fault current limiters of the magnetic-shielding type can be described in terms of the excursion in the plane defined by the flux linked with the primary and the line current, and this methodology has previously been applied to single-phase devices. Practical applications, however, require three-phase limiters, which, for the sake of compactness, may be built on three-legged cores instead of three single-phase units. This has the advantage of using well-established methods from the power transformer industry, but the performance of the devices depends on the type of fault, e.g. phase-to-ground or phase-to-phase. For instance, in a three-legged core, a phase-to-ground fault affects the healthy phases, and these are the most frequent faults in the distribution grids where such systems are envisaged. The effects of asymmetric faults are analysed in this paper by means of measured excursions in the linked flux-current plane.

  19. Isolability of faults in sensor fault diagnosis

    NASA Astrophysics Data System (ADS)

    Sharifi, Reza; Langari, Reza

    2011-10-01

    A major concern with fault detection and isolation (FDI) methods is their robustness with respect to noise and modeling uncertainties. With this in mind, several approaches have been proposed to minimize the vulnerability of FDI methods to these uncertainties. But, apart from the algorithm used, there is a theoretical limit on the minimum effect of noise on detectability and isolability. This limit has been quantified in this paper for the problem of sensor fault diagnosis based on direct redundancies. In this study, first a geometric approach to sensor fault detection is proposed. The sensor fault is isolated based on the direction of residuals found from a residual generator. This residual generator can be constructed from an input-output or a Principal Component Analysis (PCA) based model. The simplicity of this technique, compared to the existing methods of sensor fault diagnosis, allows for more rational formulation of the isolability concepts in linear systems. Using this residual generator and the assumption of Gaussian noise, the effect of noise on isolability is studied, and the minimum magnitude of isolable fault in each sensor is found based on the distribution of noise in the measurement system. Finally, some numerical examples are presented to clarify this approach.
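    The residual-direction idea above can be sketched with a PCA-based residual generator: project measurements onto the minor principal components and match the residual against each sensor's fault signature. The four-sensor, rank-two system, mixing matrix, fault size, and faulty-sensor index below are invented for illustration under the abstract's Gaussian-noise assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))                       # 2 underlying variables
mix = np.array([[1.0, 0.2], [0.4, 1.0], [0.7, 0.7], [0.1, 0.9]])
X = latent @ mix.T + 0.01 * rng.normal(size=(500, 4))    # 4 correlated sensors

# Residual space = span of the minor principal components of the data.
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
P_res = Vt[2:].T @ Vt[2:]          # projector onto the 2-D residual space

x = X[0].copy()
x[2] += 5.0                        # inject a bias fault on sensor 2
r = P_res @ (x - X.mean(0))        # residual: near zero for fault-free data

# Isolation: score the residual against each sensor's fault direction P_res[:, j].
scores = [abs(r @ P_res[:, j]) / np.linalg.norm(P_res[:, j]) for j in range(4)]
print(int(np.argmax(scores)))      # 2: the faulty sensor is isolated
```

    The minimum isolable fault magnitude the paper derives corresponds here to the smallest bias that lifts the faulty sensor's score reliably above the noise floor of the other scores.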

  20. Series and parallel arc-fault circuit interrupter tests.

    SciTech Connect

    Johnson, Jay; Fresquez, Armando J.; Gudgel, Bob; Meares, Andrew

    2013-07-01

    While the 2011 National Electrical Code® (NEC) only requires series arc-fault protection, some arc-fault circuit interrupter (AFCI) manufacturers are designing products to detect and mitigate both series and parallel arc-faults. Sandia National Laboratories (SNL) has extensively investigated the electrical differences of series and parallel arc-faults and has offered possible classification and mitigation solutions. As part of this effort, Sandia National Laboratories has collaborated with MidNite Solar to create and test a 24-string combiner box with an AFCI which detects, differentiates, and de-energizes series and parallel arc-faults. In the case of the MidNite AFCI prototype, series arc-faults are mitigated by opening the PV strings, whereas parallel arc-faults are mitigated by shorting the array. A range of different experimental series and parallel arc-fault tests with the MidNite combiner box were performed at the Distributed Energy Technologies Laboratory (DETL) at SNL in Albuquerque, NM. In all the tests, the prototype de-energized the arc-faults in the time period required by the arc-fault circuit interrupt testing standard, UL 1699B. The experimental tests confirm series and parallel arc-faults can be successfully mitigated with a combiner box-integrated solution.

  1. Geoheritage, Geodiversity and natural landscape enhanced and protected through anthropogenic activity: a case study using the Chaîne des Puys and Limagne Fault, Afar and Mexico City

    NASA Astrophysics Data System (ADS)

    van Wyk de Vries, Benjamin; Hagos, Miruts; Guilbaud, Marie-Noelle

    2015-04-01

    The UNESCO World Heritage (WH) committee called in 2014 for all thematic geological and volcanological studies to be revised in light of a widening gap between current dogma and progressive geoheritage science views. We discuss the question of natural sites and anthropogenic activity. The Chaîne des Puys and Limagne fault UNESCO WH project is the basis of this presentation, but we also consider the Afar Region of Ethiopia and the UNAM campus, Mexico City. It is now difficult to find any totally 'natural' (devoid of human influence) landscape. This very definition of natural ignores that humankind is a geological force, and that humans are part of the natural process. The UNESCO WH guidelines recognise this in paragraph 90: 'it is recognized that no area is totally pristine and that all natural areas are in a dynamic state, and to some extent involve contact with people'. A geological landscape may be large enough to accommodate human occupation without significant change to its landforms: this is the case of the Chaîne des Puys and Limagne fault. Human activity works in some ways to protect the geological landscape, for example by regulating vegetation and erosion. The human sense of aesthetics may enhance the landscape's visibility through the organisation of land use, and through ceremonial use based on the sense of place. Humans also carry out economic activity such as quarrying and mining, which, if uncontrolled, can seriously modify a landscape. However, isolated works may not have an impact, or may even enhance the value of a site by uncovering geological features that would not naturally be seen. In the Chaîne des Puys only 0.3% of the land surface has been worked, by artisanal methods, and certain sites, like the Lemptégy volcano, have been extracted with the view of enhancing the landscape's scientific value without detracting from the aesthetic. The site preserves its natural, scientific and aesthetic qualities because of the human presence. The local population have always been and continue to be

  2. A review of recently active faults in Taiwan

    USGS Publications Warehouse

    Bonilla, Manuel G.

    1975-01-01

    high near Hualien, where an uplift of at least 0.6 m and probably more than 1 m occurred in the 1951 earthquake, and near and south of the 1946 faulting. Sudden uplifts can have serious consequences for installations near the shore. Investigation of this process, study of recently active faults, and continuing study of seismicity are necessary parts of a practical earthquake-hazard reduction program.

  3. Technology transfer by means of fault tree synthesis

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.

    2012-12-01

    Since Fault Tree Analysis (FTA) attempts to model and analyze engineering failure processes, it forms a common technique in good industrial practice. Fault tree synthesis (FTS), by contrast, refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a knowledge base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the later stages of an industrial process with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage, where the fault has been detected; (ii) the formation of an interface made of input faults that might occur upstream; (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance; and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus proving the functionality of the developed methodology.
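    The splice of a local tree with a first-ranked upstream fault (parts i and iv above) can be sketched with a minimal tree structure. The gate types are standard, but the event names ('seal_temp_low', 'upstream_rinse_residue', etc.) and the splice point are invented for illustration and are not the paper's actual trees.

```python
class Gate:
    """Minimal fault-tree gate: children are sub-gates or basic-event names."""
    def __init__(self, kind, children):   # kind: 'AND' or 'OR'
        self.kind, self.children = kind, children

    def occurs(self, basic):
        """Evaluate the top event given a dict of basic-event truth values."""
        states = [c.occurs(basic) if isinstance(c, Gate) else basic[c]
                  for c in self.children]
        return all(states) if self.kind == 'AND' else any(states)

# Part (i): local fault tree built within the stage where the fault appeared.
local = Gate('OR', ['seal_temp_low',
                    Gate('AND', ['film_thin', 'bath_contaminated'])])

# Part (iv): extended tree, splicing in the first-ranked upstream input fault.
extended = Gate('OR', [local, 'upstream_rinse_residue'])

print(extended.occurs({'seal_temp_low': False, 'film_thin': True,
                       'bath_contaminated': False,
                       'upstream_rinse_residue': True}))  # True
```

    In the paper's procedure the upstream event would itself be the root of the first-ranked fault's local tree, so synthesis is recursive across the stages in series.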

  4. Saturating time-delay transformer for overcurrent protection

    DOEpatents

    Praeg, Walter F.

    1977-01-01

    Electrical loads connected to d-c supplies are protected from damage by overcurrent in the case of a load fault by connecting in series with the load a saturating transformer that detects a load fault and limits the fault current to a safe level for a period long enough to correct the fault or else disconnect the power supply.

  5. Saturating time-delay transformer for overcurrent protection. [Patent application

    DOEpatents

    Praeg, W.F.

    1975-12-18

    Electrical loads connected to dc supplies are protected from damage by overcurrent in the case of a load fault by connecting in series with the load a saturating transformer that detects a load fault and limits the fault current to a safe level for a period long enough to correct the fault or else disconnect the power supply.

  6. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

    The extraction of fault features and diagnostic techniques for reciprocating compressors are among the hot research topics in the field of reciprocating machinery fault diagnosis at present. A large number of feature extraction and classification methods have been widely applied in the related research, but practical fault alarming and diagnostic accuracy have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarming and automatic diagnosis in practical engineering is an urgent task. The typical mechanical faults of reciprocating compressors are presented in this paper, and data from an existing online monitoring system are used to extract 15 types of fault feature parameters in total. The inner sensitive connection between faults and the feature parameters is made clear by using the distance evaluation technique, and the sensitive characteristic parameters of different faults are obtained. On this basis, a method based on fault feature parameters and a support vector machine (SVM) is developed and applied to practical fault diagnosis. A better capability for early fault warning has been proved by experiment and by practical fault cases, and automatic classification of fault alarm data by the SVM has achieved better diagnostic accuracy.
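    The distance evaluation step can be sketched as a between-class to within-class ratio computed per feature: features that separate fault classes widely relative to their scatter are the "sensitive" parameters. The two fault classes and three candidate features below are invented for illustration (the paper uses 15 parameters and follows the ranking with an SVM; only the ranking step is shown here).

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical fault classes, 40 samples each, 3 candidate feature parameters.
# Feature 1 is constructed to separate the classes; features 0 and 2 barely do.
A = rng.normal([0.0, 5.0, 1.0], [1.0, 0.3, 1.0], size=(40, 3))
B = rng.normal([0.2, 8.0, 1.1], [1.0, 0.3, 1.0], size=(40, 3))

# Distance evaluation: between-class mean distance over within-class scatter.
within = (A.std(axis=0) + B.std(axis=0)) / 2
between = np.abs(A.mean(axis=0) - B.mean(axis=0))
scores = between / within

print(int(np.argmax(scores)))  # 1: the sensitive feature parameter
```

    Only the top-ranked features would then be fed to the SVM, which keeps the classifier small and the alarm thresholds tied to physically meaningful parameters.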

  7. Protecting health care workers from tuberculosis in China: a review of policy and practice in China and the United States.

    PubMed

    Chai, Shua J; Mattingly, Daniel C; Varma, Jay K

    2013-01-01

    Tuberculosis causes >1.7 million deaths worldwide each year and is frequently transmitted in hospitals. Outbreaks of multidrug-resistant tuberculosis have led to illness and death among health care workers (HCWs) in many countries. Some countries, such as the United States, implemented occupational health policies that substantially reduced tuberculosis rates among HCWs. Inadequate tuberculosis infection control in China may contribute to its high burden of tuberculosis and multidrug-resistant tuberculosis, which are both the second highest worldwide. Occupational health policies in China for tuberculosis control can be strengthened. We reviewed the development and content of tuberculosis infection control policies in the United States and China. Sources included published academic literature, Chinese Ministry of Health policies, US government agency reports, legal databases, personal observations of hospitals, review of internet discussion sites, and discussions with HCWs and health care and law experts. In the United States, slow acceptance of the tuberculosis problem in HCWs resulted in decades of inaction. Tuberculosis infection control policies, based mostly on expert opinion, were implemented only after tuberculosis resurged in the 1980s. Effective evidence-based policies were developed only after multiple cycles of policy implementation, evaluation and revision. These policies have now substantially reduced occupational tuberculosis. In China, tuberculosis has not been formally recognized as an occupational disease, and data regarding the burden in HCWs are sparse. Vagueness of current labour laws and suboptimal alignment of infection control authority and expertise result in varied and sometimes absent protection of HCWs against tuberculosis. Formal evaluations of occupational tuberculosis policies have not been reported. 
By collecting data on its current HCW tuberculosis burden and infection control practices, refining policies, continually evaluating its

  8. The impact of stormwater treatment areas and agricultural best management practices on water quality in the Everglades Protection Area.

    PubMed

    Entry, James A; Gottlieb, Andrew

    2014-02-01

    Half of the original Everglades system has been lost to drainage and development. What remains is included within the boundaries of the Everglades Protection Area (EPA), comprised of three Water Conservation Areas (WCAs) and Everglades National Park (Park). Inflows to the EPA contain elevated nutrient concentrations. Best management practices (BMPs) were implemented and six large wetlands called stormwater treatment areas (STAs) were constructed to improve water quality. We analyzed water quality in the WCAs and Park and performed an economic analysis of the STAs to remove nutrients from EPA inflows. In general, nutrient concentrations in all WCAs were higher during the pre-STA period than after the STAs became operational. In WCA2 and the Park, total phosphorus (TP) trends showed more negative slopes prior, as compared to after, the STAs became operational. These results suggest that BMPs lead to large initial decreases in nutrient export resulting in improved downstream water quality. A preliminary economic analysis shows that operation and management of the STAs are complicated and cost intensive. Comparing the cost of phosphorus (P) removal from water entering the EPA using BMPs and STAs may not currently be viable. BMPs prevent P from being applied to, or leaving from agricultural fields while STAs remove P from stormwater. We expect nutrient concentrations in water flowing into and out of the STAs to decline as both BMPs and STAs become more effective. We suggest an economic analysis of BMPs, STAs, and other potential approaches to determine the most cost-effective methods to reduce nutrient concentrations and related stressors affecting the Everglades. PMID:24081816

  9. Three-dimensional fault drawing

    SciTech Connect

    Dongan, L.

    1992-01-01

    In this paper, the author presents a structural interpretation based on three-dimensional fault drawing. Fault closure must be based on geological theory, spatial plotting principles, and the constraints of seismic exploration. Geological structure can be well ascertained by analysing the shapes and interrelations of faults that have been drawn through reasonable fault point closure and fault point correlation. In this method, the interrelation of fault points is determined by first closing corresponding fault points in intersecting sections and then reasonably correlating the relevant fault points. Fault point correlation is not performed on the base map, so its correctness can be greatly improved. Three-dimensional fault closure is achieved by iterative revision, with the closure grid densified gradually. The distribution of the major fault system is determined prior to that of secondary faults. Fault interpretation on a workstation also follows this procedure.

  10. How Faults Shape the Earth.

    ERIC Educational Resources Information Center

    Bykerk-Kauffman, Ann

    1992-01-01

    Presents fault activity with an emphasis on earthquakes and changes in continent shapes. Identifies three types of fault movement: normal, reverse, and strike faults. Discusses the seismic gap theory, plate tectonics, and the principle of superposition. Vignettes portray fault movement, and the locations of the San Andreas fault and epicenters of…

  11. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  12. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1990-01-01

    The use of back-to-back, or comparison, testing for regression test or porting is examined. The efficiency and the cost of the strategy is compared with manual and table-driven single version testing. Some of the key parameters that influence the efficiency and the cost of the approach are the failure identification effort during single version program testing, the extent of implemented changes, the nature of the regression test data (e.g., random), and the nature of the inter-version failure correlation and fault-masking. The advantages and disadvantages of the technique are discussed, together with some suggestions concerning its practical use.
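    Back-to-back testing as described above can be sketched in a few lines: run a trusted prior version and the changed version on the same randomly generated inputs and flag disagreements. The routines and input distribution below are illustrative assumptions, with a deliberately seeded bug so the comparison has something to find.

    ```python
    # Sketch of back-to-back (comparison) regression testing.
    import random

    def sort_v1(xs):            # trusted prior version
        return sorted(xs)

    def sort_v2(xs):            # changed version under test
        return sorted(set(xs))  # seeded bug: drops duplicate elements

    def back_to_back(f_old, f_new, trials=1000, seed=0):
        """Run both versions on random inputs; collect inputs where they differ."""
        rng = random.Random(seed)
        disagreements = []
        for _ in range(trials):
            xs = [rng.randint(0, 5) for _ in range(rng.randint(0, 8))]
            if f_old(xs) != f_new(xs):
                disagreements.append(xs)
        return disagreements

    diffs = back_to_back(sort_v1, sort_v2)
    print(f"{len(diffs)} disagreeing inputs; first: {diffs[0] if diffs else None}")
    ```

    Each disagreement is a candidate failure to triage by hand, which is where the failure identification effort mentioned in the abstract enters the cost comparison.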

  13. Project to Study Fair-Hearings Practices in Child Protective Services. Final Report: Innovations in Protective Services, September 1, 1982 through August 31, 1984.

    ERIC Educational Resources Information Center

    Texas State Dept. of Human Resources, Austin. Office of Programs.

    This project was implemented in 1982 as a result of legal action taken against the Texas Department of Human Resources (DHR) by clients of child protective services (CPS) who alleged that they were not given notice of all services available or of their right to a fair hearing. The goal of the project was to determine whether special fair-hearing…

  14. Reconfigurable fault tolerant avionics system

    NASA Astrophysics Data System (ADS)

    Ibrahim, M. M.; Asami, K.; Cho, Mengu

    This paper presents the design of a reconfigurable avionics system based on a modern Static Random Access Memory (SRAM)-based Field Programmable Gate Array (FPGA) to be used in future generations of nano satellites. A major concern in satellite systems, and especially nano satellites, is to build robust systems with low-power consumption profiles. The system is designed to be flexible by providing the capability of reconfiguring itself based on its orbital position. Since Single Event Upsets (SEUs) do not have the same severity and intensity at all orbital locations, peaking at the South Atlantic Anomaly (SAA) and the polar cusps, the system does not have to be fully protected throughout its orbit. An acceptable level of protection against high-energy cosmic rays and charged particles is provided over the majority of the orbit through software fault tolerance: checkpointing and rollback, together with control-flow assertions, are used at that level of protection. In the minority of the orbit where severe SEUs are expected, a reconfiguration of the system FPGA is initiated in which the processor systems are triplicated and protection through Triple Modular Redundancy (TMR) with feedback is provided. Reconfiguring the system according to the expected level of threat from SEU-induced faults reduces the average dynamic power consumption of the system to one-third of its maximum, and can be viewed as smart protection through system reconfiguration. The system is built on the commercial version of the Xilinx Virtex-5 (XC5VLX50) FPGA on bulk silicon with 324 I/O. Simulations of orbital SEU rates were carried out using the SPENVIS web-based software package.
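    The TMR-with-feedback scheme described above can be illustrated as a software toy model. The voter and the "feedback" rescrub of module state below are a sketch of the general technique, not the paper's FPGA implementation.

    ```python
    # Toy model of Triple Modular Redundancy (TMR) voting with feedback.

    def tmr_vote(a, b, c):
        """Return the majority value of three redundant module outputs.
        If all three disagree (no majority), fall back to b."""
        if a == b or a == c:
            return a
        return b

    def tmr_with_feedback(modules, state):
        """Run three redundant copies, vote, and write the voted result back
        into each copy's state so a corrupted copy is rescrubbed (the feedback)."""
        outputs = [m(state[i]) for i, m in enumerate(modules)]
        voted = tmr_vote(*outputs)
        for i in range(3):
            state[i] = voted  # feedback: resynchronize all copies
        return voted

    inc = lambda x: x + 1
    state = [41, 41, 99]                        # one copy upset by an SEU
    result = tmr_with_feedback([inc, inc, inc], state)
    print(result, state)                        # majority masks the upset, state rescrubbed
    ```

    The feedback step is what prevents a single upset copy from accumulating divergence until a second upset defeats the voter.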

  15. An examination of the Environmental Protection Agency risk assessment principles and practices: a brief commentary on section 4.1.3 of the EPA March 2004 Staff Paper.

    PubMed

    Mundt, K A

    2006-01-01

    The US Environmental Protection Agency (EPA) recently issued a Staff Paper that articulates current risk assessment practices. In section 4.1.3, EPA states, "... effects that appear to be adaptive, non-adverse, or beneficial may not be mentioned." This statement may be perceived as precluding risk assessments based on non-default risk models, including the hormetic--or biphasic--dose-response model. This commentary examines several potential interpretations of this statement and the anticipated impact of ignoring hormesis, if present, in light of necessary conservatism for protecting human and environmental health, and the potential for employing alternative risk assessment approaches. PMID:16459710

  16. Simulating the effect of SFCL on limiting the internal fault of synchronous machine

    NASA Astrophysics Data System (ADS)

    Kheirizad, I.; Varahram, M. H.; Jahed-Motlagh, M. R.; Rahnema, M.; Mohammadi, A.

    2008-02-01

    In this paper, we have modelled a synchronous generator with an internal single-phase-to-ground fault and analyzed the machine's performance under this fault. The results show that faults occurring in the vicinity of the machine's terminals cause serious damage. To protect the machine from this kind of fault, we suggest integrating an SFCL (superconducting fault current limiter) into the machine's model. The results show that the fault currents are then reduced considerably without affecting the normal operation of the machine.

  17. Position and enforcement practice of the People’s Republic of China’s pharmaceutical data exclusivity protection

    PubMed Central

    Li, Na; Yu, Xiang; Pecht, Michael

    2016-01-01

    The concept of pharmaceutical data exclusivity protection comes from the West. The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) establishes the basic rules for pharmaceutical data exclusivity protection. People’s Republic of China’s domestic law is consistent with the TRIPS agreement. In the drug registration approval process of the People’s Republic of China’s Drug Supervision Department, pharmaceutical data exclusivity protection has encountered some problems, including data authentication, exclusive rights to data, number of drugs requiring data to be submitted, and drug costs. In view of the long-term interests of the People’s Republic of China’s pharmaceutical industry and intellectual property protection trends, there are a lot of difficulties in the enforcement of pharmaceutical data exclusivity protection law that need to be overcome. Some measures can be taken, such as establishing a shorter data exclusivity protection period, only protecting the data submitted and relied on in the People’s Republic of China, only protecting the drugs that use new chemical components, allowing application and necessary research before the expiry of pharmaceutical data exclusivity protection period of generic drugs. PMID:27382254

  18. Normal faults, normal friction?

    NASA Astrophysics Data System (ADS)

    Collettini, Cristiano; Sibson, Richard H.

    2001-10-01

    Debate continues as to whether normal faults may be seismically active at very low dips (δ < 30°) in the upper continental crust. An updated compilation of dip estimates (n = 25) has been prepared from focal mechanisms of shallow, intracontinental, normal-slip earthquakes (M > 5.5; slip vector raking 90° ± 30° in the fault plane) where the rupture plane is unambiguously discriminated. The dip distribution for these moderate-to-large normal fault ruptures extends from 65° > δ > 30°, corresponding to a range, 25° < θr < 60°, for the reactivation angle between the fault and inferred vertical σ1. In a comparable data set previously obtained for reverse fault ruptures (n = 33), the active dip distribution is 10° < δ = θr < 60°. For vertical and horizontal σ1 trajectories within extensional and compressional tectonic regimes, respectively, dip-slip reactivation is thus restricted to faults oriented at θr ≤ 60° to inferred σ1. Apparent lockup at θr ≈ 60° in each dip distribution and a dominant 30° ± 5° peak in the reverse fault dip distribution, are both consistent with a friction coefficient μs ≈ 0.6, toward the bottom of Byerlee's experimental range, though localized fluid overpressuring may be needed for reactivation of less favorably oriented faults.
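    The quoted lockup near θr ≈ 60° and the reverse-fault dip peak near 30° follow from standard frictional-reactivation relations. A quick check, assuming the usual two-dimensional form R(θ) = (1 + μ cot θ)/(1 − μ tan θ), whose optimum is at θ* = ½ arctan(1/μ) and which diverges (lockup) at arctan(1/μ):

    ```python
    # Frictional reactivation angles for mu = 0.6, assuming the standard 2-D
    # relation R(theta) = (1 + mu/tan(theta)) / (1 - mu*tan(theta)).
    import math

    mu = 0.6  # static friction near the bottom of Byerlee's experimental range

    theta_opt = 0.5 * math.degrees(math.atan(1.0 / mu))  # easiest-slip angle to sigma1
    theta_lock = math.degrees(math.atan(1.0 / mu))       # R diverges here: lockup

    print(f"optimal angle: {theta_opt:.1f} deg, lockup angle: {theta_lock:.1f} deg")
    ```

    The resulting values (about 29.5° and 59°) are consistent with the observed reverse-fault dip peak of 30° ± 5° and the apparent lockup at θr ≈ 60° noted in the abstract.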

  19. Evaluation of Characteristics of Lightning Faults on 275kV Transmission Lines

    NASA Astrophysics Data System (ADS)

    Kawamura, Hironao; Itamoto, Naoki; Shinjo, Kazuo; Ishii, Masaru

    Lightning protection of transmission lines can be designed effectively by employing simulations that reproduce the conditions of experienced lightning faults. Analysis of lightning faults on 275kV transmission lines in the Hokuriku area shows that single ground faults on the upper or middle phase lines are the most frequent, and single ground faults on the middle lines are most frequent in summer. Many of these summer single ground faults are thought to be caused by direct lightning strokes to the phase conductors. Moreover, multiple ground faults caused by lightning strokes to tower tops or overhead ground wires also include many ground faults on the middle lines. In this paper, the experienced lightning faults on the 275kV transmission lines are reproduced by EMTP calculations and the characteristics of the faulted lines are examined.

  20. Solar system fault detection

    DOEpatents

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.
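    The patent's idea of combining sensor outputs "in a pre-established manner" can be sketched with simple boolean rules. The sensor names and fault conditions below are illustrative assumptions, not the patent's claimed logic.

    ```python
    # Illustrative combination of binary sensor conditions into fault indicators
    # for an active solar thermal system (hypothetical rules).

    def detect_faults(pump_on, flow_ok, collector_hot, tank_warming):
        faults = {}
        # Pump commanded on but no flow: likely pump or blockage fault.
        faults["no_flow"] = pump_on and not flow_ok
        # Collector hot and flow present, but tank not warming: heat-transfer loss.
        faults["no_heat_transfer"] = collector_hot and flow_ok and not tank_warming
        # Pump running while the collector is cold: control fault wasting energy.
        faults["pump_wasting"] = pump_on and not collector_hot
        return faults

    print(detect_faults(pump_on=True, flow_ok=False,
                        collector_hot=True, tank_warming=False))
    ```

    Each dictionary entry plays the role of one of the patent's indicators: a fixed combination of sensor conditions that the user can act on directly.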

  1. Solar system fault detection

    DOEpatents

    Farrington, Robert B.; Pruett, Jr., James C.

    1986-01-01

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  2. An Analysis of IT Governance Practices in the Federal Government: Protecting U.S. Critical Infrastructure from Cyber Terrorist Attacks

    ERIC Educational Resources Information Center

    Johnson, R. LeWayne

    2012-01-01

    Much of the governing process in the United States (U.S.) today depends on a reliable and well protected public information technology (IT) infrastructure. The Department of Homeland Security (DHS) is tasked with the responsibility of protecting the country's IT infrastructure. Critics contend that the DHS has failed to address planning and…

  3. The Impact of Parental Knowledge and Tanning Attitudes on Sun Protection Practice for Young Children in Germany

    PubMed Central

    Gefeller, Olaf; Li, Jiang; Uter, Wolfgang; Pfahlberg, Annette B.

    2014-01-01

    Public health campaigns have improved knowledge on UVR-associated skin cancer risk and increased sun protection awareness. However, tanned skin is still a common beauty ideal. The relationship between knowledge, attitudes and protective behavior is not fully understood yet. A population-based survey was thus performed in the district of Erlangen involving 2,619 parents of 3- to 6-year old children. By means of a self-administered standardized questionnaire parental knowledge about risk factors for skin cancer, their attitudes towards tanning and details of protective measures taken for their children were assessed. The study analyzed specifically the impact of parental tanning attitudes on sun-protective measures for their children while controlling for parental knowledge about skin cancer risk factors. While parental knowledge was significantly (inversely) associated with agreement to the statement “Tanned skin is healthy skin”, this was not the case for “Tanning makes me look better”. Overall, tanning affirmative attitudes were inversely associated with protective measures taken for the children, whereas parental knowledge had a positive impact on sun protection at the beach only. Multivariable analyses provided evidence for an effect of parental attitude on protective behavior independent of parental knowledge. Tanning attitudes and tanned skin as the misguided ideal of beauty need to be addressed in future public health campaigns to enhance the effectiveness of preventive activities in changing sun protective behavior. PMID:24802677

  4. The impact of parental knowledge and tanning attitudes on sun protection practice for young children in Germany.

    PubMed

    Gefeller, Olaf; Li, Jiang; Uter, Wolfgang; Pfahlberg, Annette B

    2014-05-01

    Public health campaigns have improved knowledge on UVR-associated skin cancer risk and increased sun protection awareness. However, tanned skin is still a common beauty ideal. The relationship between knowledge, attitudes and protective behavior is not fully understood yet. A population-based survey was thus performed in the district of Erlangen involving 2,619 parents of 3- to 6-year old children. By means of a self-administered standardized questionnaire parental knowledge about risk factors for skin cancer, their attitudes towards tanning and details of protective measures taken for their children were assessed. The study analyzed specifically the impact of parental tanning attitudes on sun-protective measures for their children while controlling for parental knowledge about skin cancer risk factors. While parental knowledge was significantly (inversely) associated with agreement to the statement "Tanned skin is healthy skin", this was not the case for "Tanning makes me look better". Overall, tanning affirmative attitudes were inversely associated with protective measures taken for the children, whereas parental knowledge had a positive impact on sun protection at the beach only. Multivariable analyses provided evidence for an effect of parental attitude on protective behavior independent of parental knowledge. Tanning attitudes and tanned skin as the misguided ideal of beauty need to be addressed in future public health campaigns to enhance the effectiveness of preventive activities in changing sun protective behavior. PMID:24802677

  5. Sliding mode fault detection and fault-tolerant control of smart dampers in semi-active control of building structures

    NASA Astrophysics Data System (ADS)

    Yeganeh Fallah, Arash; Taghikhany, Touraj

    2015-12-01

    Recent decades have witnessed much interest in the application of active and semi-active control strategies for seismic protection of civil infrastructures. However, the reliability of these systems is still in doubt as there remains the possibility of malfunctioning of their critical components (i.e. actuators and sensors) during an earthquake. This paper focuses on the application of the sliding mode method due to the inherent robustness of its fault detection observer and fault-tolerant control. The robust sliding mode observer estimates the state of the system and reconstructs the actuators’ faults which are used for calculating a fault distribution matrix. Then the fault-tolerant sliding mode controller reconfigures itself by the fault distribution matrix and accommodates the fault effect on the system. Numerical simulation of a three-story structure with magneto-rheological dampers demonstrates the effectiveness of the proposed fault-tolerant control system. It was shown that the fault-tolerant control system maintains the performance of the structure at an acceptable level in the post-fault case.
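    A drastically simplified analogue of the sliding-mode fault reconstruction described above can be shown on a scalar plant: a sign-based injection keeps the observer on the measurement, and its low-pass-filtered "equivalent injection" recovers an additive actuator fault. The plant, gains, and filter below are assumptions for illustration, not the authors' MR-damper design.

    ```python
    # Toy sliding-mode fault reconstruction on a scalar integrator plant.

    def simulate(f_true=0.5, L=1.0, steps=5000, alpha=0.01):
        x, xh, f_hat = 0.0, 0.0, 0.0
        for _ in range(steps):
            u = 0.1                                        # known control input
            e = x - xh                                     # estimation error (y = x)
            nu = L * (1 if e > 0 else -1 if e < 0 else 0)  # discontinuous injection
            # Low-pass filtering nu approximates the "equivalent injection",
            # which converges to the fault magnitude once sliding is reached.
            f_hat = (1 - alpha) * f_hat + alpha * nu
            x = x + u + f_true                             # plant with actuator fault
            xh = xh + u + nu                               # sliding-mode observer
        return f_hat

    f_hat = simulate()
    print(f"reconstructed fault: {f_hat:.2f} (true value 0.5)")
    ```

    In the paper the reconstructed faults populate a fault distribution matrix that the controller uses to reconfigure itself; here the single reconstructed value stands in for that step.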

  6. Validation of Helicopter Gear Condition Indicators Using Seeded Fault Tests

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula; Brandon, E. Bruce

    2013-01-01

    A "seeded fault test" in support of a rotorcraft condition based maintenance program (CBM), is an experiment in which a component is tested with a known fault while health monitoring data is collected. These tests are performed at operating conditions comparable to operating conditions the component would be exposed to while installed on the aircraft. Performance of seeded fault tests is one method used to provide evidence that a Health Usage Monitoring System (HUMS) can replace current maintenance practices required for aircraft airworthiness. Actual in-service experience of the HUMS detecting a component fault is another validation method. This paper will discuss a hybrid validation approach that combines in service-data with seeded fault tests. For this approach, existing in-service HUMS flight data from a naturally occurring component fault will be used to define a component seeded fault test. An example, using spiral bevel gears as the targeted component, will be presented. Since the U.S. Army has begun to develop standards for using seeded fault tests for HUMS validation, the hybrid approach will be mapped to the steps defined within their Aeronautical Design Standard Handbook for CBM. This paper will step through their defined processes, and identify additional steps that may be required when using component test rig fault tests to demonstrate helicopter CI performance. The discussion within this paper will provide the reader with a better appreciation for the challenges faced when defining a seeded fault test for HUMS validation.

  7. New fault locating system for air-insulated substations using optical current detector

    SciTech Connect

    Yoshida, Y.; Kawazoe, S. ); Ibuki, K.; Yamada, K.; Ochi, N. . Itami Works)

    1992-10-01

    This paper deals with a newly developed fault-locating system that helps shorten the time required to restore service after a busbar fault in an air-insulated distribution substation. Recent optical and electronic technologies allow a highly accurate and compact fault-locating system, consisting of optical current detectors using the Faraday effect and a fault-locating processor employing digital data processing. Faults are located by discriminating the direction of zero-sequence currents. Various tests and field operations have confirmed that the system performs sufficiently well for practical application.
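    Zero-sequence current, the quantity the system discriminates on, is simply the mean of the three phase-current phasors; a minimal sketch with illustrative values:

    ```python
    # Zero-sequence current from three phase-current phasors (illustrative values).
    import cmath, math

    def zero_sequence(ia, ib, ic):
        """I0 = (Ia + Ib + Ic) / 3."""
        return (ia + ib + ic) / 3

    a = cmath.exp(2j * math.pi / 3)               # 120-degree rotation operator
    ia, ib, ic = 100 + 0j, 100 * a**2, 100 * a    # balanced three-phase set

    print(abs(zero_sequence(ia, ib, ic)))         # ~0: balanced, no ground fault
    print(abs(zero_sequence(ia + 30, ib, ic)))    # ~10: residual current, fault present
    ```

    A ground fault makes I0 nonzero; the system in the paper then compares the phase (direction) of I0 at several detectors to locate the faulted section.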

  8. Structural analysis and risk assessment of the All American pipeline at the San Andreas fault crossing

    SciTech Connect

    Hart, J.D.; Row, D.G.; Drugovich, D.

    1995-12-31

    The All American oil transmission pipeline crosses the San Andreas fault and a series of smaller, associated faults in a fault zone southwest (SW) of Bakersfield, California. The possibility of fault rupture on any of these faults during a major earthquake is a source of concern since such an event could result in pipeline damage or failure. This paper describes the development of a fault displacement risk model, the evaluation of the risk at the All American Pipeline (AAPL) site provided by an initial crossing design, and the evaluation of various alternative designs. A practical alternative design scheme, which significantly reduces the risk of pipeline damage or failure is then recommended.

  9. A Pilot Study of the Validity of Self-reported Ultraviolet Radiation Exposure and Sun Protection Practices Among Lifeguards, Parents and Children

    PubMed Central

    O’Riordan, David L.; Glanz, Karen; Gies, Peter; Elliott, Tom

    2013-01-01

    Outdoor recreation settings, such as swimming pools, provide a promising venue to assess UVR exposure and sun protection practices among individuals who are minimally clothed and exposed to potentially high levels of UVR. Most studies assessing sun exposure/protection practices rely on self-reported data, which are subject to bias. The aim of this study was to establish the feasibility of conducting a multimethod study to examine the validity of self-reported measures within a swimming pool setting. Data were collected from 27 lifeguards, children and parents in Hawaii. Each participant filled out a survey and a 4 day sun habits diary. On two occasions, researchers assessed observable sun protection behaviors (wearing hats, shirts, sunglasses), swabbed the skin to detect the presence of sunscreen, and subjects wore polysulphone dosimeters to measure UVR exposure. Overall, observed sun protection behaviors were more highly correlated with diary reports than with survey reports. While lifeguards and children reported spending comparable amounts of time in the sun, dosimeter measures showed that lifeguards received twice as much UVR exposure. This study demonstrated the feasibility of implementing a multimethod validity study within a broader population of swimming pools. PMID:18179624

  10. How clays weaken faults.

    NASA Astrophysics Data System (ADS)

    van der Pluijm, Ben A.; Schleicher, Anja M.; Warr, Laurence N.

    2010-05-01

    The weakness of upper crustal faults has been variably attributed to (i) low values of normal stress, (ii) elevated pore-fluid pressure, and (iii) low frictional strength. Direct observations on natural fault rocks provide new evidence for the role of frictional properties in fault strength, as illustrated by our recent work on samples from the San Andreas Fault Observatory at Depth (SAFOD) drillhole at Parkfield, California. Mudrock samples from fault zones at ~3066 m and ~3296 m measured depth show variably spaced and interconnected networks of displacement surfaces that consist of host rock particles abundantly coated by polished films with occasional striations. Transmission electron microscopy and X-ray diffraction study of the surfaces reveal neocrystallized thin-film clay coatings containing illite-smectite (I-S) and chlorite-smectite (C-S) phases. X-ray texture goniometry shows that the crystallographic fabric of these fault rocks is characteristically low, in spite of an abundance of clay phases. 40Ar/39Ar dating of the illitic mixed-layered coatings demonstrates recent crystallization and reveals the initiation of an "older" fault strand (~8 Ma) at 3066 m measured depth, and a "younger" fault strand (~4 Ma) at 3296 m measured depth. Today, the younger strand is the site of active creep behavior, reflecting continued activation of these clay-weakened zones. We propose that the majority of slow fault creep is controlled by the high density of thin (<100 nm thick) nano-coatings on fracture surfaces, which become sufficiently smectite-rich and interconnected at low angles to allow slip with minimal breakage of stronger matrix clasts. Displacements are accommodated by localized frictional slip along coated particle surfaces and hydrated smectitic phases, in combination with intracrystalline deformation of the clay lattice, associated with extensive mineral dissolution, mass transfer and continued growth of expandable layers…

  11. Multiversion software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1990-01-01

    In this project we have proposed to investigate a number of experimental and theoretical issues associated with the practical use of multi-version software in providing dependable software through fault-avoidance and fault-elimination, as well as run-time tolerance of software faults. In the period reported here we have been working on the following. We have continued collecting data on the relationships between software faults and reliability, and on the coverage provided by the testing process as measured by different metrics (including data flow metrics). We have continued work on software reliability estimation methods based on non-random sampling, and on the relationship between software reliability and the code coverage provided through testing. We have continued studying back-to-back testing as an efficient mechanism for the removal of uncorrelated faults and of common-cause faults of variable span. We have also been studying back-to-back testing as a tool for improving the software change process, including regression testing. We have continued investigating existing fault-tolerance models and worked on the formulation of new ones. In particular, we have partly finished an evaluation of Consensus Voting in the presence of correlated failures, and are in the process of finishing an evaluation of the Consensus Recovery Block (CRB) under failure correlation. We find both approaches far superior to the commonly employed fixed-agreement-number voting (usually majority voting). We have also finished a cost analysis of the CRB approach.
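    Consensus Voting, mentioned above, generalizes majority voting by accepting the largest agreement group even when it is not an absolute majority. A minimal sketch (the report's actual Consensus Voting and CRB algorithms are more elaborate, handling ties and correlated failures explicitly):

    ```python
    # Sketch of consensus voting across N program versions.
    from collections import Counter

    def consensus_vote(outputs):
        """Return the most common output among the version outputs, plus the
        size of its agreement group; on a tie, the first-seen answer wins."""
        counts = Counter(outputs)
        best, n = counts.most_common(1)[0]
        return best, n

    # Three of five versions agree, so consensus masks the two faulty ones,
    # even though majority voting over, say, nine versions would reject a
    # group of only three.
    answer, agreeing = consensus_vote([42, 42, 7, 42, 13])
    print(answer, agreeing)
    ```

    Fixed-agreement-number voting would return no answer here unless the agreement threshold were at most three; consensus voting always selects the plurality, which is why it degrades more gracefully under correlated failures.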

  12. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring the testing coverage of a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing was continued as an efficient mechanism for the removal of uncorrelated faults and of common-cause faults of variable span. Work on software reliability estimation methods based on non-random sampling also continued, as did work on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.

  13. NASA Spacecraft Fault Management Workshop Results

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn; McDougal, John; Barley, Bryan; Fesq, Lorraine; Stephens, Karen

    2010-01-01

    Fault Management is a critical aspect of deep-space missions. For the purposes of this paper, fault management is defined as the ability of a system to detect, isolate, and mitigate events that impact, or have the potential to impact, nominal mission operations. Fault management capabilities are commonly distributed across flight and ground subsystems, affecting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes of the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 of the 5 missions studied had significant overruns due to underestimating the complexity and support requirements of fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop bringing together invited participants from government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing them. The workshop was held in New Orleans in April 2008. The workshop concluded that fault management is not limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and…

  14. Parental practices and political violence: the protective role of parental warmth and authority-control in Jewish and Arab Israeli children.

    PubMed

    Lavi, Iris; Slone, Michelle

    2012-10-01

    Parental warmth and parental authority-control patterns have been documented as practices with highest significance for children's well-being and development in a variety of life areas. Various forms of these practices have been shown to have a direct positive effect on children and also to protect children from adverse effects of numerous stressors. However, surprisingly, few studies have examined the role of these practices as possible protective factors for children exposed to intractable conflict and political violence. Participants in this study were Jewish (n = 88) and Arab (n = 105) Israeli families, with children aged 7-12.5 (M = 10.73, SD = 0.99). Children completed questionnaires assessing political violence exposure, behavioral, psychological, and social difficulties, and perceived paternal and maternal warmth. Mothers and fathers completed questionnaires assessing parental warmth, parental authority-control, and the child's difficulties. Results showed parental warmth to be a significant moderator of political violence, related to low levels of behavioral and social difficulties of children. Parental authority-control patterns were not protectors from adverse effects of political violence exposure. Maternal authoritarian authority-control showed an effect resembling a risk factor. Differential roles of parental warmth and authority-control, fathers' versus mothers' roles, and ethnic differences are discussed, and practical clinical implications are proposed. PMID:23039353

  15. The Kunlun Fault

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Kunlun fault is one of the gigantic strike-slip faults that bound the north side of Tibet. Left-lateral motion along the 1,500-kilometer (932-mile) length of the Kunlun has occurred uniformly for the last 40,000 years at a rate of 1.1 centimeter per year, creating a cumulative offset of more than 400 meters. In this image, two splays of the fault are clearly seen crossing from east to west. The northern fault juxtaposes sedimentary rocks of the mountains against alluvial fans. Its trace is also marked by lines of vegetation, which appear red in the image. The southern, younger fault cuts through the alluvium. A dark linear area in the center of the image is wet ground where groundwater has ponded against the fault. Measurements from the image of displacements of young streams that cross the fault show 15 to 75 meters (16 to 82 yards) of left-lateral offset. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) acquired the visible light and near infrared scene on July 20, 2000. Image courtesy NASA/GSFC/MITI/ERSDAC/JAROS, and the U.S./Japan ASTER Science Team

  16. The Chaîne des Puys and Limagne Fault World Heritage project: a view from a scientist

    NASA Astrophysics Data System (ADS)

    van Wyk de Vries, B.

    2013-12-01

    The development of the Chaîne des Puys and Limagne Fault World Heritage UNESCO project has taken about five years, since the Laboratoire Magmas et Volcans was approached by the local Auvergne government. Before this we had been working locally with organisations such as the Lemptégy volcano and Vulcania to help disseminate geoscience ideas to the general public; however, the UNESCO project has led us to do much more outreach. It has also stimulated our research and has taught us to explain it better to the lay person. In visiting other heritage projects, where we have exchanged ideas and best practice, we have been able to help other sites and improve what we do. These links are particularly important, as they can be used to help broaden the outlook of the general public and local actors, and further increase earth science literacy. I have noticed a strong increase in awareness of the volcanoes, and of volcanism, as a result of the Chaîne des Puys and Limagne Fault project. I think that, before, many locals considered the volcanoes only as their special back garden, for easy walks, views and leisure, or for that matter farming, mining and hunting. Now, however, there is a greater sense of pride rooted in the increased awareness of their geological significance in a historical and global context. While this effect is clear for the volcanoes, it is not yet apparent for the fault. The lay person has no clear concept of a rift and a fault. Thus, one of our major present challenges is to open the public's eyes to the fault. This is vital for the UNESCO project not only for educational reasons, but also because the fault scarp is a natural barrier that protects the rest of the property from urban development. Only if there is awareness of its nature and significance will it be an effective defence. To arrive at this goal, there is now a local government sponsored project to create and disseminate fault walks, fault viewpoints and fault information points. This is

  17. 14 CFR 29.1357 - Circuit protective devices.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... faults or serious malfunction of the system or connected equipment. (b) The protective and control... be designed so that, when an overload or circuit fault exists, it will open the circuit regardless...

  18. 14 CFR 29.1357 - Circuit protective devices.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... faults or serious malfunction of the system or connected equipment. (b) The protective and control... be designed so that, when an overload or circuit fault exists, it will open the circuit regardless...

  19. 14 CFR 29.1357 - Circuit protective devices.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... faults or serious malfunction of the system or connected equipment. (b) The protective and control... be designed so that, when an overload or circuit fault exists, it will open the circuit regardless...

  20. Fault-tolerant electrical power system

    NASA Astrophysics Data System (ADS)

    Mehdi, Ishaque S.; Weimer, Joseph A.

    1987-10-01

    An electrical system that will meet the requirements of a 1990s two-engine fighter is being developed in the Fault-Tolerant Electrical Power System (FTEPS) program, sponsored by the AFWAL Aero Propulsion Laboratory. FTEPS will demonstrate the generation and distribution of the fault-tolerant, reliable electrical power required for future aircraft. The system incorporates MIL-STD-1750A digital processors and MIL-STD-1553B data buses for control and communications. Electrical power is distributed through electrical load management centers by means of solid-state power controllers for fault protection and individual load control. The system will provide uninterruptible power to flight-critical loads, such as the flight control and mission computers, with sealed lead-acid batteries. Primary power is provided by four 60 kVA variable speed constant frequency generators. Buildup and testing of the FTEPS demonstrator is expected to be complete by May 1988.

  1. Fault detection and isolation

    NASA Technical Reports Server (NTRS)

    Bernath, Greg

    1994-01-01

    In order for a current satellite-based navigation system (such as the Global Positioning System, GPS) to meet integrity requirements, there must be a way of detecting erroneous measurements, without help from outside the system. This process is called Fault Detection and Isolation (FDI). Fault detection requires at least one redundant measurement, and can be done with a parity space algorithm. The best way around the fault isolation problem is not necessarily isolating the bad measurement, but finding a new combination of measurements which excludes it.
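    The parity-space detection and exclusion-based isolation described above can be illustrated with a minimal sketch. The measurement model, names, and threshold here are illustrative assumptions, not from the paper: several redundant sensors observe the same quantity, and isolation drops one measurement at a time to find the combination that restores consistency.

```python
import numpy as np

def parity_residual(H, y):
    # Project the measurements onto the parity space (left null space
    # of H); with no fault and no noise the projection is zero.
    P = np.eye(len(y)) - H @ np.linalg.pinv(H)
    return np.linalg.norm(P @ y)

def detect_and_isolate(H, y, threshold):
    """Return None if no fault is detected, else the index of the
    measurement whose exclusion best restores consistency."""
    if parity_residual(H, y) <= threshold:
        return None
    # Isolation by exclusion: drop one measurement at a time and keep
    # the combination with the smallest remaining residual.
    residuals = [
        parity_residual(np.delete(H, i, axis=0), np.delete(y, i))
        for i in range(len(y))
    ]
    return int(np.argmin(residuals))

# Four redundant measurements of one quantity; sensor 3 is biased.
H = np.ones((4, 1))
healthy = np.array([1.0, 1.0, 1.0, 1.0])
faulty = np.array([1.0, 1.0, 1.0, 5.0])
```

Note that, as the abstract states, detection needs at least one redundant measurement; exclusion-based isolation needs one more, so that the reduced set is still redundant enough to be checked.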

  2. Measuring fault tolerance with the FTAPE fault injection tool

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    This paper describes FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The major parts of the tool include a system-wide fault-injector, a workload generator, and a workload activity measurement tool. The workload creates high stress conditions on the machine. Using stress-based injection, the fault injector is able to utilize knowledge of the workload activity to ensure a high level of fault propagation. The errors/fault ratio, performance degradation, and number of system crashes are presented as measures of fault tolerance.
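    The fault-tolerance measures named above (errors/fault ratio, performance degradation, number of crashes) can be tabulated from injection-campaign logs; a hypothetical sketch, with the record fields invented for illustration:

```python
def summarize_campaign(runs):
    # Each run record: errors observed after one injected fault, whether
    # the system crashed, and runtime under injection vs. a fault-free
    # baseline run.
    faults = len(runs)
    errors = sum(r["errors"] for r in runs)
    crashes = sum(1 for r in runs if r["crashed"])
    degradation = sum(r["runtime"] / r["baseline"] - 1.0 for r in runs) / faults
    return {
        "errors_per_fault": errors / faults,
        "crashes": crashes,
        "avg_degradation": degradation,
    }

runs = [
    {"errors": 3, "crashed": False, "runtime": 110.0, "baseline": 100.0},
    {"errors": 1, "crashed": True, "runtime": 130.0, "baseline": 100.0},
]
```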

  3. Measuring fault tolerance with the FTAPE fault injection tool

    NASA Astrophysics Data System (ADS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-05-01

    This paper describes FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The major parts of the tool include a system-wide fault-injector, a workload generator, and a workload activity measurement tool. The workload creates high stress conditions on the machine. Using stress-based injection, the fault injector is able to utilize knowledge of the workload activity to ensure a high level of fault propagation. The errors/fault ratio, performance degradation, and number of system crashes are presented as measures of fault tolerance.

  4. Protecting the Kidney in Liver Transplant Candidates: Practice-Based Recommendations From the American Society of Transplantation Liver and Intestine Community of Practice.

    PubMed

    O'Leary, J G; Levitsky, J; Wong, F; Nadim, M K; Charlton, M; Kim, W R

    2016-09-01

    Acute kidney injury (AKI) and chronic kidney disease (CKD) are common in patients awaiting liver transplantation, and both have a marked impact on the perioperative and long-term morbidity and mortality of liver transplant recipients. Consequently, we reviewed the epidemiology of AKI and CKD in patients with end-stage liver disease, highlighted strategies to prevent and manage AKI, evaluated the changing liver transplant waiting list's impact on kidney function, delineated important considerations in simultaneous liver-kidney transplant selection, and projected possible future transplant policy changes and outcomes. This review was assembled by experts in the field and endorsed by the American Society of Transplantation Liver and Intestinal Community of Practice and Board of Directors and provides practice-based recommendations for preservation of kidney function in patients with end-stage liver disease. PMID:26990924

  5. OpenStudio - Fault Modeling

    Energy Science and Technology Software Center (ESTSC)

    2014-09-19

    This software record documents the OpenStudio fault model development portion of the Fault Detection and Diagnostics LDRD project. The software provides a suite of OpenStudio measures (scripts) for modeling typical HVAC system faults in commercial buildings and also includes supporting materials: example projects and OpenStudio measures for reporting fault costs and energy impacts.

  6. SUSTAIN – A Framework for Placement of Best Management Practices in Urban Watersheds to Protect Water Quality

    EPA Science Inventory

    SUSTAIN (System for Urban Stormwater Treatment and Analysis INtegration) is a decision support system to facilitate selection and placement of best management practices (BMPs) and low impact development (LID) techniques at strategic locations in urban watersheds. It was develope...

  7. Hayward Fault, California Interferogram

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image of California's Hayward fault is an interferogram created using a pair of images taken by Synthetic Aperture Radar (SAR), combined to measure changes in the surface that may have occurred between the times the two images were taken.

    The images were collected by the European Space Agency's Remote Sensing satellites ERS-1 and ERS-2 in June 1992 and September 1997 over the central San Francisco Bay in California.

    The radar image data are shown as a gray-scale image, with the interferometric measurements that show the changes rendered in color. Only the urbanized area could be mapped with these data. The color changes from orange tones to blue tones across the Hayward fault (marked by a thin red line) show about 2-3 centimeters (0.8-1.1 inches) of gradual displacement or movement of the southwest side of the fault. The block west of the fault moved horizontally toward the northwest during the 63 months between the acquisition of the two SAR images. This fault movement is called aseismic creep because the fault moved slowly without generating an earthquake.

    Scientists are using SAR interferometry, along with other data collected on the ground, to monitor this fault motion in an attempt to estimate the probability of an earthquake on the Hayward fault, which last had a major earthquake of magnitude 7 in 1868. This analysis indicates that the northern part of the Hayward fault is creeping all the way from the surface to a depth of 12 kilometers (7.5 miles). This suggests that the potential for a large earthquake on the northern Hayward fault might be less than previously thought. The blue area to the west (lower left) of the fault near the center of the image seemed to move upward relative to the yellow and orange areas nearby by about 2 centimeters (0.8 inches). The cause of this apparent motion is not yet confirmed, but the rise of groundwater levels during the time between the images may have caused the reversal of a small portion of the subsidence that

  8. Cable-fault locator

    NASA Technical Reports Server (NTRS)

    Cason, R. L.; Mcstay, J. J.; Heymann, A. P., Sr.

    1979-01-01

    Inexpensive system automatically indicates location of short-circuited section of power cable. Monitor does not require that cable be disconnected from its power source or that test signals be applied. Instead, ground-current sensors are installed in manholes or at other selected locations along cable run. When fault occurs, sensors transmit information about fault location to control center. Repair crew can be sent to location and cable can be returned to service with minimum of downtime.
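    The bracketing principle behind such sensor-based locators can be sketched simply: the short lies between the last sensor that sees fault current (upstream of the fault) and the first that does not. A hypothetical illustration, with the sensor layout and data format invented:

```python
def faulted_segment(positions_m, sees_fault_current):
    # Walk sensor pairs outward from the power source; the fault lies
    # in the first segment where fault current stops being observed.
    pairs = zip(positions_m, sees_fault_current)
    prev_pos, prev_seen = next(pairs)
    for pos, seen in pairs:
        if prev_seen and not seen:
            return (prev_pos, pos)
        prev_pos, prev_seen = pos, seen
    return None  # no transition: fault beyond the last sensor, or none

# Sensors in manholes at 0, 100, 200, 300 m; the first two see fault
# current, so the short is somewhere between 100 m and 200 m.
segment = faulted_segment([0, 100, 200, 300], [True, True, False, False])
```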

  9. Fault rupture segmentation

    NASA Astrophysics Data System (ADS)

    Cleveland, Kenneth Michael

    A critical foundation to earthquake study and hazard assessment is the understanding of controls on fault rupture, including segmentation. Key challenges to understanding fault rupture segmentation include, but are not limited to: What determines if a fault segment will rupture in a single great event or multiple moderate events? How is slip along a fault partitioned between seismic and aseismic components? How does the seismicity of a fault segment evolve over time? How representative are past events for assessing future seismic hazards? In order to address the difficult questions regarding fault rupture segmentation, new methods must be developed that utilize the information available. Much of the research presented in this study focuses on the development of new methods for attacking the challenges of understanding fault rupture segmentation. Not only do these methods exploit a broader band of information within the waveform than has traditionally been used, but they also lend themselves to the inclusion of even more seismic phases, providing deeper understanding. Additionally, these methods are designed to be fast and efficient with large datasets, allowing them to utilize the enormous volume of data available. Key findings from this body of work include demonstration that focus on fundamental earthquake properties at regional scales can provide general understanding of fault rupture segmentation. We present a modern, waveform-based method that locates events using cross-correlation of their Rayleigh waves. Additionally, cross-correlation values can be used to calculate precise earthquake magnitudes. Finally, insight regarding earthquake rupture directivity can be easily and quickly gained using cross-correlation of surface waves.
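    At its core, waveform cross-correlation reduces to measuring the lag that maximizes the correlation between two signals; differential delays measured this way across a station network can then be inverted for relative event locations. A minimal sketch using synthetic pulses rather than real Rayleigh-wave data:

```python
import numpy as np

def cc_delay(a, b, dt):
    # Full cross-correlation; the lag of the peak gives the delay of
    # waveform a relative to waveform b, in seconds (sample interval dt).
    c = np.correlate(a, b, mode="full")
    lag_samples = int(np.argmax(c)) - (len(b) - 1)
    return lag_samples * dt

# Two identical pulses; the one in `a` arrives 2 samples after the
# one in `b`, so with dt = 0.5 s the measured delay is +1.0 s.
a = np.zeros(10); a[6] = 1.0
b = np.zeros(10); b[4] = 1.0
```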

  10. A Compensation Method of Conductor Parameter for Transient Fault Location

    NASA Astrophysics Data System (ADS)

    Ugbome, Chukwunweike Lucky

    Faults in underground distribution systems are predominantly caused by the deterioration of cable insulation. The inherent nature of underground distribution is such that cables are laid underground and exposed to harmful substances which can cause deterioration of cable insulation. The penetration of water into the cable splice is a common cause of cable deterioration and a common source of transitory sub-cycle cable faults in underground distribution systems. The presence of a sub-cycle fault in a distribution line is not necessarily noticeable and may not cause any protective device to operate, due to its short life span, but can be destructive if it is sustained and unattended. Locating transitory sub-cycle faults in underground cable is fundamentally important in preventing and containing a permanent fault, which can potentially result in an unplanned outage. However, locating this type of fault is not easy due to the many unknowns. A number of approaches have been developed for determining the location of short-lived sub-cycle (SLSC) faults, but they approximate the conductor parameter, which reduces the accuracy of the location determination. This thesis develops an algorithm for transitory sub-cycle fault location that compensates for the ignored conductor parameter by employing the X/R ratio of the distribution line. First, a model for transient faults at different locations in underground cable is presented and used to generate the voltage and current waveforms at the source side. Also presented is the performance of the fault location by the uncompensated and compensated algorithms under two configurations of the distribution line: a homogeneous distribution circuit and a heterogeneous distribution line.
The results obtained from the performance studies show that the proposed compensation method would help the non-compensated fault location approaches to achieve relatively high accuracy in locating transitory sub-cycle faults in numerous
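    As background, the simple reactance method that such compensation schemes refine estimates fault distance from the imaginary part of the apparent impedance. The sketch below is a hypothetical, idealized illustration (bolted fault, a single phasor pair, invented line constants), not the thesis algorithm:

```python
def reactance_distance_km(V, I, x_per_km):
    # Apparent impedance seen from the measuring end; for a bolted
    # fault its reactive part grows linearly with distance to the fault.
    Z = V / I
    return Z.imag / x_per_km

# Synthetic bolted fault 3 km out on a line with z = 0.1 + 0.4j ohm/km.
z_per_km = 0.1 + 0.4j
I = 10 + 0j
V = I * 3 * z_per_km
```

In practice, fault resistance and the neglected conductor resistance bias this estimate; that resistive error is exactly what an X/R-ratio-based compensation targets.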

  11. Fault lubrication during earthquakes.

    PubMed

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m s(-1)) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, constraints are instead derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s(-1). The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved. PMID:21430777

  12. What Protects Youths from Externalising and Internalising Problems? A Critical Review of Research Findings and Implications for Practice

    ERIC Educational Resources Information Center

    Eriksson, Ida; Cater, Asa; Andershed, Anna-Karin; Andershed, Henrik

    2011-01-01

    Problems in childhood and adolescence, such as pronounced externalising and internalising problems, are associated with a relatively high risk for long-lasting psychosocial difficulties. Therefore, it is essential to understand what protects youth from externalising and internalising problems. The present article has three purposes: (1) To review…

  13. Do Australian Adolescent Female Fake Tan (Sunless Tan) Users Practice Better Sun-Protection Behaviours than Non-Users?

    ERIC Educational Resources Information Center

    Williams, Melinda; Jones, Sandra C.; Caputi, Peter; Iverson, Don

    2012-01-01

    Objective: To determine differences in sun-protection behaviours, and incidence of sunburn, between Australian adolescent female fake tan users and non-users. Design: Cross sectional survey. Method: 398 adolescent females aged 12 to 18 years participated in a survey at public venues, schools, and online. The main outcome measures were…

  14. No More "Personal Notes"? Data Protection Policy and Practice in Higher Education Counselling Services in the UK

    ERIC Educational Resources Information Center

    Jenkins, Peter; Potter, Steve

    2007-01-01

    The introduction of the Data Protection Act 1998 has presented significant challenges to counselling recording, through its detailed requirements for the processing of personal data. The impact of these changes on recording in Higher Education counselling services in the UK is explored, using responses from an electronic survey of universities and…

  15. Personal Protective Equipment Use and Safety Behaviors among Farm Adolescents: Gender Differences and Predictors of Work Practices

    ERIC Educational Resources Information Center

    Reed, Deborah B.; Browning, Steven R.; Westneat, Susan C.; Kidd, Pamela S.

    2006-01-01

    Context: Children on farms perform work that places them at risk for acute and chronic negative health outcomes. Despite strategies for preventing and reducing the risk of disease and injury, children's use of personal protective equipment and safety equipped farm machinery has generally remained unreported. Purpose: This paper reports the use of…

  16. Quantum fault-tolerant thresholds for universal concatenated schemes

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Jochym-O'Connor, Tomas; Laflamme, Raymond

    Fault-tolerant quantum computation uses ancillary qubits in order to protect logical data qubits while allowing for the manipulation of the quantum information without severe losses in coherence. While different models for fault-tolerant quantum computation exist, determining the ancillary qubit overhead for competing schemes remains a challenging theoretical problem. In this work, we study the fault-tolerance threshold rates of different models for universal fault-tolerant quantum computation. Namely, we provide different threshold rates for the 105-qubit concatenated coding scheme for universal computation without the need for state distillation. We study two error models: adversarial noise and depolarizing noise and provide lower bounds for the threshold in each of these error regimes. Establishing the threshold rates for the concatenated coding scheme will allow for a physical quantum resource comparison between our fault-tolerant universal quantum computation model and the traditional model using magic state distillation.

  17. Fault Roughness Records Strength

    NASA Astrophysics Data System (ADS)

    Brodsky, E. E.; Candela, T.; Kirkpatrick, J. D.

    2014-12-01

    Fault roughness is commonly ~0.1-1% at the outcrop exposure scale. More mature faults are smoother than less mature ones, but the overall range of roughness is surprisingly limited which suggests dynamic control. In addition, the power spectra of many exposed fault surfaces follow a single power law over scales from millimeters to 10's of meters. This is another surprising observation as distinct structures such as slickenlines and mullions are clearly visible on the same surfaces at well-defined scales. We can reconcile both observations by suggesting that the roughness of fault surfaces is controlled by the maximum strain that can be supported elastically in the wallrock. If the fault surface topography requires more than 0.1-1% strain, it fails. Invoking wallrock strength explains two additional observations on the Corona Heights fault for which we have extensive roughness data. Firstly, the surface is isotropic below a scale of 30 microns and has grooves at larger scales. Samples from at least three other faults (Dixie Valley, Mount St. Helens and San Andreas) also are isotropic at scales below 10's of microns. If grooves can only persist when the walls of the grooves have a sufficiently low slope to maintain the shape, this scale of isotropy can be predicted based on the measured slip perpendicular roughness data. The observed 30 micron scale at Corona Heights is consistent with an elastic strain of 0.01 estimated from the observed slip perpendicular roughness with a Hurst exponent of 0.8. The second observation at Corona Heights is that slickenlines are not deflected around meter-scale mullions. Yielding of these mullions at centimeter to meter scale is predicted from the slip parallel roughness as measured here. The success of the strain criterion for Corona Heights supports it as the appropriate control on fault roughness. Micromechanically, the criterion implies that failure of the fault surface is a continual process during slip. Macroscopically, the

  18. Protecting the Kidney in Liver Transplant Recipients: Practice-Based Recommendations From the American Society of Transplantation Liver and Intestine Community of Practice.

    PubMed

    Levitsky, J; O'Leary, J G; Asrani, S; Sharma, P; Fung, J; Wiseman, A; Niemann, C U

    2016-09-01

    Both acute and chronic kidney disease are common after liver transplantation and result in significant morbidity and mortality. The introduction of the Model for End-stage Liver Disease score has directly correlated with an increased prevalence of perioperative renal dysfunction and the number of simultaneous liver-kidney transplantations performed. Kidney dysfunction in this population is typically multifactorial and related to preexisting conditions, pretransplantation renal injury, perioperative events, and posttransplantation nephrotoxic immunosuppressive therapies. The management of kidney disease after liver transplantation is challenging, as by the time the serum creatinine level is significantly elevated, few interventions affect the course of progression. Also, immunological factors such as antibody-mediated kidney rejection have become of greater interest given the rising liver-kidney transplant population. Therefore, this review, assembled by experts in the field and endorsed by the American Society of Transplantation Liver and Intestine Community of Practice, provides a critical assessment of measures of renal function and interventions aimed at preserving renal function early and late after liver and simultaneous liver-kidney transplantation. Key points and practice-based recommendations for the prevention and management of kidney injury in this population are provided to offer guidance for clinicians and identify gaps in knowledge for future investigations. PMID:26932352

  19. Fault reactivation control on normal fault growth: an experimental study

    NASA Astrophysics Data System (ADS)

    Bellahsen, Nicolas; Daniel, Jean Marc

    2005-04-01

    Field studies frequently emphasize how fault reactivation is involved in the deformation of the upper crust. However, this phenomenon is generally neglected (except in inversion models) in analogue and numerical models performed to study fault network growth. Using sand/silicon analogue models, we show how pre-existing discontinuities can control the geometry and evolution of a younger fault network. The models show that the reactivation of pre-existing discontinuities and their orientation control: (i) the evolution of the main fault orientation distribution through time, (ii) the geometry of relay fault zones, (iii) the geometry of small scale faulting, and (iv) the geometry and location of fault-controlled basins and depocenters. These results are in good agreement with natural fault networks observed in both the Gulf of Suez and Lake Tanganyika. They demonstrate that heterogeneities such as pre-existing faults should be included in models designed to understand the behavior and the tectonic evolution of sedimentary basins.

  20. New approach to fault-tolerant routing in all-optical networks

    NASA Astrophysics Data System (ADS)

    Sengupta, Abhijit; Alluri, Shailesh; Bandyopadhyay, Subir; Jaekel, Arunita

    1999-08-01

    Fault management in WDM routed all-optical networks has mostly been addressed either by automatic protection switching or through loop-back recovery. These schemes are designed to manage single fault occurrences, and generalizations to handle multiple faults are not known. Conventional routing schemes are static in nature (the routers are programmed to realize the lightpaths between the end-nodes), and hence a fault management scheme needs to find a fault-free path between end-nodes using the settings of the routers. This paper considers the principle of survival route graphs to construct fault-free paths between end-nodes. As a result, the fault-avoiding route between two end-nodes might be a multihop route, in which the number of hops is limited to reduce the communication delay. The performance degradation of the network due to fault occurrence is studied through simulations and measured in terms of blocking probability and communication delay.
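    Searching a survival graph (the topology minus failed links) for a hop-limited fault-free route can be sketched as a breadth-first search. This is a minimal illustration under assumed names and graph representation; constructing real survival route graphs for lightpaths is considerably more involved:

```python
from collections import deque

def fault_free_route(adj, faulty_links, src, dst, max_hops):
    # Breadth-first search over the survival graph: all links except
    # the failed ones. Returns a shortest hop-limited path, or None.
    queue = deque([(src, [src])])
    seen = {src}
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        if len(path) - 1 >= max_hops:
            continue  # hop budget exhausted on this branch
        for nxt in adj[node]:
            if nxt not in seen and frozenset((node, nxt)) not in faulty_links:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# Four nodes in a ring; the direct link 0-1 has failed, so traffic
# from 0 to 1 must take the 3-hop route the long way around.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
faulty = {frozenset((0, 1))}
```

Limiting `max_hops` mirrors the paper's point that multihop fault-avoiding routes must be bounded to keep communication delay acceptable.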

  1. Use of Fuzzy Logic Systems for Assessment of Primary Faults

    NASA Astrophysics Data System (ADS)

    Petrović, Ivica; Jozsa, Lajos; Baus, Zoran

    2015-09-01

    In electric power systems, grid elements are often subjected to very complex and demanding disturbances or dangerous operating conditions, and determining the initial fault or the cause of those states is a difficult task. When a fault occurs, it is often imperative to disconnect the affected grid element from the grid. This paper contains an overview of possibilities for using fuzzy logic in the assessment of primary faults in the transmission grid. The tool for this task is the SCADA system, which is based on information on currents, voltages, protection-device events, and circuit-breaker status in the grid. The functional model, described with membership functions and fuzzy logic systems, is presented in the paper. As input data, the diagnostic system uses information on protection-device tripping, circuit-breaker states, and measurements of currents and voltages before and after faults.
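    Such fuzzy assessment rests on membership functions that grade crisp measurements into degrees of truth, combined with fuzzy operators (min for AND). A minimal sketch; the variable ranges and the single rule are invented for illustration, not taken from the paper:

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b, zero outside (a, c).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fault_likelihood(current_pu, voltage_pu):
    # Rule: IF current is high AND voltage is depressed THEN fault.
    # Per-unit membership ranges below are illustrative assumptions.
    mu_overcurrent = tri(current_pu, 1.2, 3.0, 10.0)
    mu_undervoltage = tri(voltage_pu, -0.5, 0.0, 0.9)
    return min(mu_overcurrent, mu_undervoltage)  # fuzzy AND
```

A real diagnostic system would aggregate many such rules over breaker states and protection events before defuzzifying to a verdict.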

  2. The Impact of Evidence-Based Practice Implementation and Fidelity Monitoring on Staff Turnover: Evidence for a Protective Effect

    PubMed Central

    Aarons, Gregory A.; Sommerfeld, David H.; Hecht, Debra B.; Silovsky, Jane F.; Chaffin, Mark J.

    2009-01-01

    Staff retention is an ongoing challenge in mental health and community-based service organizations. Little is known about the impact of evidence-based practice implementation on the mental health and social service workforce. The present study examined the effect of evidence-based practice implementation and ongoing fidelity monitoring on staff retention in a children’s services system. The study took place in the context of a statewide regionally randomized effectiveness trial of an evidence-based intervention designed to reduce child neglect. Twenty-one teams consisting of 153 home-based service providers were followed over a 29 month period. Survival analyses revealed greater staff retention in the condition where the evidence-based practice was implemented along with ongoing fidelity monitoring presented to staff as supportive consultation. These results should help to allay concerns about staff retention when implementing evidence-based practices where there is good values-innovation fit and when fidelity monitoring is designed as an aid and support to service providers in providing a high standard of care for children and families. PMID:19309186

  3. Water Efficiency Improvements at Various Environmental Protection Agency Sites: Best Management Practice Case Study #12 - Laboratory/Medical Equipment (Brochure)

    SciTech Connect

    Blakley, H.

    2011-03-01

    The U.S. Environmental Protection Agency (EPA) built a successful water conservation program and reduced potable water use through a series of initiatives at EPA laboratories. The projects highlighted in this case study demonstrate EPA's ability to reduce water use in laboratory and medical equipment by implementing vacuum pump and steam sterilizer replacements and retrofits. Due to the success of the initial vacuum pump and steam sterilizer projects described here, EPA is implementing similar projects at several laboratories throughout the nation.

  4. Rupture history of 2014 Mw 6.0 South Napa earthquake inferred from near-fault strong motion data and its impact to the practice of ground strong motion prediction

    NASA Astrophysics Data System (ADS)

    Ji, Chen; Archuleta, Ralph J.; Twardzik, Cedric

    2015-04-01

    The spatiotemporal rupture history of the 2014 South Napa earthquake is constrained using near-fault strong motion records. An aggressive source parameterization with 1372 subfaults is adopted to match the signals in the transverse components up to 4 Hz. The result reveals that the rupture of the Napa earthquake initiated at a depth of 9.84 km and propagated mainly to north-northwest (NNW) and updip on a 13 km long fault patch. A gradual increase in average rise time when the rupture propagates to shallower depth is observed. However, it is the rupture of a small (Mw 4.9), isolated, and high stress drop fault patch that excited the largest ground acceleration at stations south of the epicenter. Such fine-scale rupture heterogeneity shall be considered during seismic hazard analysis.

  5. Validated Fault Tolerant Architectures for Space Station

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.

    1990-01-01

    Viewgraphs on validated fault tolerant architectures for space station are presented. Topics covered include: fault tolerance approach; advanced information processing system (AIPS); and fault tolerant parallel processor (FTPP).

  6. Cable fault locator research

    NASA Astrophysics Data System (ADS)

    Cole, C. A.; Honey, S. K.; Petro, J. P.; Phillips, A. C.

    1982-07-01

    Cable fault location and the construction of four field test units are discussed. Swept frequency sounding of mine cables with RF signals was the technique most thoroughly investigated. The swept frequency technique is supplemented with a form of moving target indication to provide a method for locating the position of a technician along a cable and relative to a suspected fault. Separate, more limited investigations involved high voltage time domain reflectometry and acoustical probing of mine cables. Particular areas of research included microprocessor-based control of the swept frequency system, a microprocessor based fast Fourier transform for spectral analysis, and RF synthesizers.
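    The swept-frequency sounding technique described above can be sketched numerically: a reflection from a fault at distance d returns with round-trip delay tau = 2d/v, so the measured response oscillates across the swept band, and an FFT over frequency (the "spectral analysis" the abstract mentions) turns that oscillation into a peak at delay tau. All parameter values below are illustrative assumptions, not the field units' actual design values.

```python
import numpy as np

v = 2.0e8                    # assumed propagation velocity in cable, m/s (~0.66c)
d_true = 150.0               # simulated fault distance, m
tau = 2.0 * d_true / v       # round-trip delay of the reflection, s

f = np.linspace(1e6, 101e6, 4096)          # swept RF band, 1-101 MHz
response = np.cos(2.0 * np.pi * f * tau)   # reflection interference term

# FFT over the frequency axis: the conjugate variable is delay
spectrum = np.abs(np.fft.rfft(response * np.hanning(f.size)))
df = f[1] - f[0]
delays = np.fft.rfftfreq(f.size, d=df)     # delay axis, s
d_est = delays[1 + np.argmax(spectrum[1:])] * v / 2.0   # skip the DC bin
print(f"estimated fault distance: {d_est:.1f} m")
```

    With these numbers the peak lands within a delay bin of the true value, recovering the 150 m fault distance to within the sweep's delay resolution.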

  7. Fault tolerant linear actuator

    DOEpatents

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  8. Computer hardware fault administration

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
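    The patent's core idea, routing affected traffic over the second, independent network when a link in the first is found defective, can be illustrated with a toy graph search. Node numbering, topologies, and the failed link are invented for illustration.

```python
from collections import deque

def bfs_path(links, src, dst):
    # breadth-first search over an undirected adjacency dict
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in links.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None

net_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # first network
net_b = {0: [2], 2: [0, 3], 3: [2]}              # independent second network

# Suppose link (1, 2) in network A is identified as defective: remove it,
# then fall back to network B if A can no longer deliver the message.
net_a[1].remove(2); net_a[2].remove(1)
route = bfs_path(net_a, 0, 3) or bfs_path(net_b, 0, 3)
print(route)
```

    Here network A becomes disconnected, so the message from node 0 to node 3 is routed around the defective link through network B.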

  9. Ius Chasma Fault

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-415, 8 July 2003

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a 'text-book example' of an offset in layered rock caused by a fault. The offset is most easily seen near the upper right of the image. The martian crust is faulted, and the planet has probably experienced 'earthquakes' (or, marsquakes) in the past. This scene is located on the floor of Ius Chasma near 7.8°S, 80.6°W. Sunlight illuminates the scene from the upper left.

  10. DIFFERENTIAL FAULT SENSING CIRCUIT

    DOEpatents

    Roberts, J.H.

    1961-09-01

    A differential fault sensing circuit is designed for detecting arcing in high-voltage vacuum tubes arranged in parallel. A circuit is provided which senses differences in voltages appearing between corresponding elements likely to fault. Sensitivity of the circuit is adjusted to some level above which arcing will cause detectable differences in voltage. For particular corresponding elements, a group of pulse transformers are connected in parallel with diodes connected across the secondaries thereof so that only voltage excursions are transmitted to a thyratron which is biased to the sensitivity level mentioned.

  11. Fault terminations, Seminoe Mountains, Wyoming

    SciTech Connect

    Dominic, J.B.; McConnell, D.A. (Dept. of Geology)

    1992-01-01

    Two basement-involved faults terminate in folds in the Seminoe Mountains. Mesoscopic and macroscopic structures in sedimentary rocks provide clues to the interrelationship of faults and folds in this region, and to the linkage between faulting and folding in general. The Hurt Creek fault trends 320° and has a maximum separation of 1.5 km measured at the basement/cover contact. Separation on the fault decreases upsection to zero within the Jurassic Sundance Formation. Unfaulted rock units form an anticline around the fault tip. The complementary syncline is angular with planar limbs and a narrow hinge zone. The syncline axial trace intersects the fault in the footwall at the basement/cover cut-off. Map patterns are interpreted to show thickening of Mesozoic units adjacent to the syncline hinge. In contrast, extensional structures are common in the faulted anticline within the Permian Goose Egg and Triassic Chugwater Formations. A hanging-wall splay fault loses separation into the Goose Egg Formation, which is thinned by 50% at the fault tip. Mesoscopic normal faults are oriented 320-340° and have an average inclination of 75° SW. Megaboudins of Chugwater are present in the footwall of the Hurt Creek fault, immediately adjacent to the fault trace. The Black Canyon fault transported Precambrian-Pennsylvanian rocks over Pennsylvanian Tensleep sandstone. This fault is layer-parallel at the top of the Tensleep and loses separation along strike into an unfaulted syncline in the Goose Egg Formation. Shortening in the pre-Permian units is accommodated by slip on the basement-involved Black Canyon fault. Equivalent shortening in Permian-Cretaceous units occurs on a system of "thin-skinned" thrust faults.

  12. Fault displacement hazard for strike-slip faults

    USGS Publications Warehouse

    Petersen, M.D.; Dawson, T.E.; Chen, R.; Cao, T.; Wills, C.J.; Schwartz, D.P.; Frankel, A.D.

    2011-01-01

    In this paper we present a methodology, data, and regression equations for calculating the fault rupture hazard at sites near steeply dipping, strike-slip faults. We collected and digitized on-fault and off-fault displacement data for 9 global strike-slip earthquakes ranging from moment magnitude M 6.5 to M 7.6 and supplemented these with displacements from 13 global earthquakes compiled by Wesnousky (2008), who considers events up to M 7.9. Displacements on the primary fault fall off at the rupture ends and are often measured in meters, while displacements on secondary (off-fault) or distributed faults may measure a few centimeters up to more than a meter and decay with distance from the rupture. Probability of earthquake rupture is less than 15% for 200 m × 200 m cells and is less than 2% for 25 m × 25 m cells at distances greater than 200 m from the primary-fault rupture. Therefore, the hazard for off-fault ruptures is much lower than the hazard near the fault. Our data indicate that rupture displacements up to 35 cm can be triggered on adjacent faults at distances out to 10 km or more from the primary-fault rupture. An example calculation shows that, for an active fault which has repeated large earthquakes every few hundred years, fault rupture hazard analysis should be an important consideration in the design of structures or lifelines located near the principal fault, within about 150 m of well-mapped active faults with a simple trace and within 300 m of faults with poorly defined or complex traces.
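    A back-of-envelope combination of the kind of numbers quoted above shows how a fault-displacement hazard figure is assembled. Only the 15% per-cell rupture bound comes from the abstract; the conditional exceedance probability and earthquake rate are invented assumptions, not values from the study.

```python
# Toy annualized fault-displacement hazard for a 200 m x 200 m off-fault cell:
# hazard = (event rate) * P(rupture hits cell) * P(displacement > limit | rupture)
p_rupture = 0.15        # < 15% per event for a 200 m x 200 m cell (from the text)
p_exceed = 0.30         # conditional exceedance of a design limit -- assumed
rate_eq = 1.0 / 300.0   # large-earthquake rate, per year -- assumed

annual_hazard = rate_eq * p_rupture * p_exceed
print(f"annual exceedance probability: {annual_hazard:.2e}")
```

    The product makes the abstract's point quantitatively: even a modest per-event cell probability, diluted by the earthquake rate, leaves off-fault hazard far below the near-fault hazard.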

  13. Fault Tree Analysis Application for Safety and Reliability

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in these tools identifies root causes among system components, but when software is identified as a root cause, the tools do not build trees into the software component. No commercial software tools have been built specifically for the development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can identify serious potential problems long before integration and system testing.
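    The core computation behind classical FTA tools is minimal cut set expansion: reducing a tree of AND/OR gates to the minimal combinations of basic events that cause the top event. A toy sketch (gate structure and event names invented for illustration):

```python
from itertools import product

def cut_sets(node):
    # node is a basic-event name, or a ("OR"|"AND", [subtrees]) gate
    if isinstance(node, str):
        return [frozenset([node])]
    kind, children = node
    child_sets = [cut_sets(c) for c in children]
    if kind == "OR":                  # any child's cut set triggers the gate
        return [s for sets in child_sets for s in sets]
    combined = []                     # AND: pick one cut set from each child
    for combo in product(*child_sets):
        combined.append(frozenset().union(*combo))
    return combined

def minimal(sets):
    # drop any cut set that strictly contains another
    uniq = set(sets)
    return {s for s in uniq if not any(t < s for t in uniq)}

tree = ("OR", ["power_supply_fail",
               ("AND", ["primary_sensor_fail", "backup_sensor_fail"])])
mcs = minimal(cut_sets(tree))
print(mcs)
```

    For this tree the minimal cut sets are the single event power_supply_fail and the pair of sensor failures, i.e. the redundant sensors only contribute to the top event jointly.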

  14. Report from the Country of Georgia: Protecting and Promoting Breastfeeding through Regulation of Artificial-Feeding Marketing Practices.

    PubMed

    Nemsadze, Ketevan

    2004-01-01

    This paper highlights the southwest Asian country of Georgia's experience in creating efforts to protect and promote breastfeeding and to implement the International Code of Marketing of Breast-milk Substitutes. Since 1994, the country of Georgia (of the former Soviet Union) has successfully implemented the Baby-Friendly Hospital Initiative. In 1997-1998, Georgia conducted a study throughout the country's various regions to evaluate compliance with the International Code of Marketing of Breast-milk Substitutes. The research demonstrated numerous violations of the code by various companies and confirmed the necessity of ongoing activities to promote implementation of the code. Due to the great effort of Georgia's Ministry of Health and the International Baby-Food Action Network [IBFAN] Georgian group called "Claritas," the law titled "On Protection and Promotion of Breastfeeding and Regulation of Artificial Feeding" was adopted in 1999 by the country's parliament. As a result, Georgia has witnessed a sharp increase in breastfeeding percentages, the designation of baby-friendly status at 14 maternity houses, and a decrease in the advertisement of artificial-feeding products. PMID:17273372

  15. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  16. San Andreas fault zone head waves near Parkfield, California

    SciTech Connect

    Ben-Zion, Y.; Malin, P. (Univ. of California, Santa Barbara, CA)

    1991-03-29

    Microearthquake seismograms from the borehole seismic network on the San Andreas Fault near Parkfield, California, provide three lines of evidence that first P arrivals are head waves refracted along the cross-fault material contrast. First, the travel time difference between these arrivals and secondary phases identified as direct P waves scales linearly with the source-receiver distance. Second, these arrivals have the emergent wave character associated in theory and practice with refracted head waves instead of the sharp first breaks associated with direct P arrivals. Third, the first motion polarities of the emergent arrivals are reversed from those of the direct P waves as predicted by the theory of fault zone head waves for slip on the San Andreas fault. The presence of fault zone head waves in local seismic network data may help account for scatter in earthquake locations and source mechanisms. The fault zone head waves indicate that the velocity contrast across the San Andreas fault near Parkfield is approximately 4 percent. Further studies of these waves may provide a way of assessing changes in the physical state of the fault system.

  17. San Andreas fault zone head waves near Parkfield, California.

    PubMed

    Ben-Zion, Y; Malin, P

    1991-03-29

    Microearthquake seismograms from the borehole seismic network on the San Andreas fault near Parkfield, California, provide three lines of evidence that first P arrivals are "head" waves refracted along the cross-fault material contrast. First, the travel time difference between these arrivals and secondary phases identified as direct P waves scales linearly with the source-receiver distance. Second, these arrivals have the emergent wave character associated in theory and practice with refracted head waves instead of the sharp first breaks associated with direct P arrivals. Third, the first motion polarities of the emergent arrivals are reversed from those of the direct P waves as predicted by the theory of fault zone head waves for slip on the San Andreas fault. The presence of fault zone head waves in local seismic network data may help account for scatter in earthquake locations and source mechanisms. The fault zone head waves indicate that the velocity contrast across the San Andreas fault near Parkfield is approximately 4 percent. Further studies of these waves may provide a way of assessing changes in the physical state of the fault system. PMID:17793143
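    The first line of evidence, a head-wave/direct-P time difference that grows linearly with source-receiver distance, follows from a simple two-velocity model: the head wave travels the along-fault path at the faster velocity while the direct P travels at the slower one. The velocities below are illustrative values chosen to give roughly the ~4% contrast quoted, not values from the study.

```python
v_slow, v_fast = 5.8, 6.03   # km/s on the two sides of the fault (assumed)

def moveout(r):
    # small-contrast approximation of the head-wave/direct-P time difference
    # for along-fault propagation distance r (km); returns seconds
    return r * (1.0 / v_slow - 1.0 / v_fast)

for r in (10.0, 20.0, 30.0):
    print(f"{r:4.0f} km: dt = {moveout(r) * 1000:.0f} ms")
```

    Doubling the distance doubles the differential time, which is exactly the linear scaling the borehole records show.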

  18. Characteristics of On-fault and Off-fault displacement of various fault types based on numerical simulation

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Takemura, K.

    2015-12-01

    There are two types of fault displacement related to an earthquake fault: on-fault displacement and off-fault displacement. Off-fault displacement should be evaluated for important facilities, such as nuclear installations. Probabilistic Fault Displacement Hazard Analysis (PFDHA), developed on the basis of probabilistic seismic hazard analysis (PSHA), estimates both on-fault and off-fault displacement using distance-displacement functions constructed from field measurement data. However, observed displacement data are still sparse, especially for off-fault displacement. For nuclear installations, estimation of off-fault displacement is more important than that of on-fault displacement. We carried out numerical fault displacement simulations to assist in understanding the distance-displacement relations of on-fault and off-fault displacement according to fault type: normal, reverse, and strike-slip. We used Okada's dislocation method. The displacements were calculated based on a single fault model with several rakes of slip. On-fault displacements (along the fault profile) of each fault type show a similar trend. Off-fault displacements (profile across the fault) of dip-slip (reverse and normal) fault types show rapidly decreasing displacement on the footwall side. In the presentation, we will show the displacement profiles as well as stress, strain, and so on. The dislocation model cannot express discontinuous displacements. In the future, we will apply various numerical simulations (finite element method, distinct element method) in order to evaluate off-fault displacements. We will also compare the numerical simulation results with observed data.
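    The on-fault/off-fault contrast the abstract describes can be illustrated with a much simpler model than Okada's full 3-D solution: a 2-D antiplane (screw-dislocation) approximation of a vertical strike-slip fault slipping an amount s from the surface down to depth D. This is a deliberate simplification, not the abstract's method, but it reproduces the same qualitative behaviour: a discontinuity across the fault trace and decay away from it. Parameters are illustrative.

```python
import math

def surface_disp(x, s=2.0, D=10.0):
    # Fault-parallel surface displacement (m) at horizontal distance x (km)
    # from the fault trace, for uniform slip s from surface to depth D.
    # Antisymmetric about the fault: u(-x) = -u(x).
    if x == 0.0:
        return 0.0
    return math.copysign(s / math.pi * math.atan(D / abs(x)), x)

for x in (0.1, 1.0, 5.0, 20.0):
    print(f"x = {x:5.1f} km: u = {surface_disp(x):+.2f} m")
```

    Near the trace the displacement approaches ±s/2 (the on-fault discontinuity); by a fault-depth or two away it has decayed substantially, which is the off-fault regime PFDHA has to characterize.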

  19. The property of fault zone and fault activity of Shionohira Fault, Fukushima, Japan

    NASA Astrophysics Data System (ADS)

    Seshimo, K.; Aoki, K.; Tanaka, Y.; Niwa, M.; Kametaka, M.; Sakai, T.; Tanaka, Y.

    2015-12-01

    The April 11, 2011 Fukushima-ken Hamadori earthquake (hereafter the 4.11 earthquake) formed co-seismic surface ruptures trending NNW-SSE in Iwaki City, Fukushima Prefecture, which were newly named the Shionohira Fault by Ishiyama et al. (2011). The earthquake was characterized by westward-dipping normal-slip faulting, with a maximum displacement of about 2 m (e.g., Kurosawa et al., 2012). To the south of the area, lineaments with the same trend are recognized even though no surface ruptures occurred in the earthquake. In an attempt to elucidate the differences between active and inactive segments of the fault, this report discusses the results of observations of fault outcrops along the Shionohira Fault as well as Coulomb stress calculations. Only a few outcrops expose basement rocks on both the hanging wall and footwall of the fault plane. Three of these outcrops (Kyodo-gawa, Shionohira and Betto) were selected for investigation. In addition, a fault outcrop (Nameishi-minami) located about 300 m south of the southern tip of the surface ruptures was investigated. The authors carried out observations of outcrops, polished slabs and thin sections, and performed X-ray diffraction (XRD) analysis of the fault materials. As a result, fault zones originating from schists were investigated at Kyodo-gawa and Betto, where a thick fault gouge was cut by the fault plane of the 4.11 earthquake in each outcrop. At Shionohira, fault materials originating from schists were fault-bounded against (possibly Neogene) weakly deformed sandstone, with a thin fault gouge along the fault plane of the 4.11 earthquake. A small-scale fault zone with thin fault gouge was observed at Nameishi-minami. According to the XRD analysis, smectite was detected in the gouges from Kyodo-gawa, Shionohira and Betto, but not in the gouge from Nameishi-minami.

  20. Towards Fault Resilient Global Arrays

    SciTech Connect

    Tipparaju, Vinod; Krishnan, Manoj Kumar; Palmer, Bruce J.; Petrini, Fabrizio; Nieplocha, Jaroslaw

    2007-09-03

    The focus of the current paper is adding fault resiliency to Global Arrays (GA). We extended the GA toolkit to provide a minimal level of capabilities that enables programmers to implement fault resiliency at the user level. Our fault-recovery approach is programmer assisted and based on frequent incremental checkpoints and rollback recovery. In addition, it relies on a pool of spare nodes that are used to replace failed nodes. We demonstrate the usefulness of fault resilient Global Arrays in an application context.
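    The programmer-assisted checkpoint/rollback control flow described above can be shown in miniature: the application decides when its data is consistent, snapshots it, and rolls back after a detected failure. This sketch only illustrates the control flow; the real GA extension uses incremental checkpoints across distributed memory and spare nodes, none of which is modeled here.

```python
import copy

class CheckpointedArray:
    """Toy stand-in for a checkpointable distributed array."""
    def __init__(self, data):
        self.data = list(data)
        self._ckpt = None

    def checkpoint(self):
        # programmer calls this at a known-consistent point
        self._ckpt = copy.deepcopy(self.data)

    def rollback(self):
        # programmer calls this after a failure is detected
        if self._ckpt is None:
            raise RuntimeError("no checkpoint taken")
        self.data = copy.deepcopy(self._ckpt)

ga = CheckpointedArray([0.0] * 4)
ga.checkpoint()
ga.data[2] = float("nan")   # simulated corruption from a failed node
ga.rollback()               # recover the last consistent state
print(ga.data)
```

    The key design point, which the sketch preserves, is that the application rather than the runtime decides what "consistent" means and when to capture it.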

  1. The influence of historical forestry practices and climate forcing on the sediment retention function of wetlands and their ability to protect downstream aquatic ecosystems

    NASA Astrophysics Data System (ADS)

    Caley, K. A.; Owens, P. N.

    2012-04-01

    Information is currently lacking regarding the variability of a wetland's sediment storage function over time, and the impacts of increased sediment delivery on this function. To address this issue, two wetlands in the Quesnel River Basin in central British Columbia, whose surrounding catchments were logged, were studied. Sediment cores were collected in 2009 from both wetlands, as well as their adjacent lakes, to determine the relative proportion of sediment retained by each feature prior to, during and after forestry practices. Analysis of radionuclides (Pb-210 and Cs-137) was undertaken to determine core chronology and sedimentation rates. Other proxy indicators (magnetic susceptibility, loss-on-ignition, particle size distribution, C:N ratio and geochemical indicators) were used to assess wetland filtration over time, and to trace the movement of allochthonous sediment through the catchment. Climate data were also used to examine whether fluctuations in sedimentation rates are better explained by climatic factors (e.g. precipitation, temperature). Results suggest that there were periods of increased sedimentation associated with both forestry practices and climate forcing (e.g. Pacific Decadal Oscillation, PDO). Despite increases in sediment delivery due to forestry practices and climate changes, the wetlands performed important buffering functions, thereby protecting downstream aquatic ecosystems.
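    Core chronologies of the kind used here are commonly built from unsupported Pb-210 activity. A minimal sketch of the simplest such model, the constant-initial-concentration (CIC) age equation, with invented activities (the study's own model choice and data are not reproduced):

```python
import math

HALF_LIFE = 22.3                       # Pb-210 half-life, years
LAM = math.log(2.0) / HALF_LIFE        # decay constant, 1/yr

def cic_age(a_surface, a_depth):
    # Years since deposition at a given depth, assuming the unsupported
    # Pb-210 activity at deposition equals today's surface activity.
    return math.log(a_surface / a_depth) / LAM

# A sample whose activity has halved relative to the surface is one
# half-life old.
print(f"{cic_age(100.0, 50.0):.1f} yr")
```

    Dated horizons from such a model, cross-checked against the 1963 Cs-137 fallout peak, are what allow sedimentation-rate changes to be matched to logging dates and PDO phases.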

  2. Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty

    NASA Astrophysics Data System (ADS)

    Woo, G.

    2005-12-01

    high deductible is in force, this requires estimation of the epistemic uncertainty on fault geometry and activity. Transport infrastructure insurance is of practical interest in seismic countries. On the North Anatolian Fault in Turkey, there is uncertainty over an unbroken segment between the eastern end of the Düzce Fault and Bolu. This may have ruptured during the 1944 earthquake. Existing hazard maps may simply use a question mark to flag uncertainty. However, a far more informative type of hazard map might express spatial variations in the confidence level associated with a fault map. Through such visual guidance, an insurance risk analyst would be better placed to price earthquake cover, allowing for epistemic uncertainty.

  3. Row fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2012-02-07

    An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
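    The row check described in the patent abstract can be mimicked in a few lines: every node pings its adjacent neighbors, and a node is flagged when every communication involving it fails. The failure model (a node that never answers) and the node count are invented for illustration.

```python
def find_faulty(n_nodes, dead):
    # Collect (sender, neighbor, succeeded) reports for a row of nodes,
    # each communicating with its adjacent neighbors.
    reports = []
    for node in range(n_nodes):
        for nbr in (node - 1, node + 1):
            if 0 <= nbr < n_nodes:
                ok = node not in dead and nbr not in dead
                reports.append((node, nbr, ok))
    # A node is suspect if every communication involving it failed.
    return [n for n in range(n_nodes)
            if all(not ok for a, b, ok in reports if n in (a, b))]

print(find_faulty(6, dead={3}))
```

    With six nodes and node 3 silent, both of node 3's neighbors report failures while their other links succeed, so the analysis isolates node 3. (An end node whose only neighbor is dead would be ambiguous under this simple rule, which is one reason the patented check runs the whole row concurrently.)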

  4. Row fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2010-02-23

    An apparatus and program product check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  5. Dynamic Fault Detection Chassis

    SciTech Connect

    Mize, Jeffery J

    2007-01-01

    The high frequency switching megawatt-class High Voltage Converter Modulator (HVCM) developed by Los Alamos National Laboratory for the Oak Ridge National Laboratory's Spallation Neutron Source (SNS) is now in operation. One of the major problems with the modulator systems is shoot-thru conditions that can occur in an IGBT H-bridge topology, resulting in large fault currents and device failure within a few microseconds. The Dynamic Fault Detection Chassis (DFDC) is a fault monitoring system; it monitors transformer flux saturation using a window comparator and dV/dt events on the cathode voltage caused by any abnormality such as capacitor breakdown, transformer primary turns shorts, or dielectric breakdown between the transformer primary and secondary. If faults are detected, the DFDC will inhibit the IGBT gate drives and shut the system down, significantly reducing the possibility of a shoot-thru condition or other equipment-damaging events. In this paper, we will present system integration considerations and performance characteristics of the DFDC, and discuss its ability to significantly reduce costly down time for the entire facility.
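    The two checks the DFDC applies, a window comparator (the signal must stay inside a voltage window) and a dV/dt limit, are easy to express on a sampled waveform. The real chassis implements these in analog hardware; the thresholds and waveform below are invented for illustration.

```python
def first_fault(samples, dt, v_lo, v_hi, dvdt_max):
    # Return (index, kind) of the first fault detected, or None.
    for i, v in enumerate(samples):
        if not (v_lo <= v <= v_hi):
            return i, "window"          # window-comparator trip
        if i and abs(v - samples[i - 1]) / dt > dvdt_max:
            return i, "dV/dt"           # slew-rate trip
    return None

wave = [0.0, 1.0, 1.1, 1.2, 9.0, 1.2]   # sudden jump at index 4
print(first_fault(wave, dt=1e-6, v_lo=-2.0, v_hi=10.0, dvdt_max=2e6))
```

    The jump at index 4 stays inside the voltage window but exceeds the slew-rate limit, so it trips the dV/dt check; this is the class of event (e.g. a dielectric breakdown) that a pure level comparator would miss.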

  6. Row fault detection system

    SciTech Connect

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2008-10-14

    An apparatus, program product and method checks for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

  7. Verification of an IGBT Fusing Switch for Over-current Protection of the SNS HVCM

    SciTech Connect

    Benwell, Andrew; Kemp, Mark; Burkhart, Craig; Nguyen, Minh; /SLAC

    2010-06-11

    An IGBT based over-current protection system has been developed to detect faults and limit the damage caused by faults in high voltage converter modulators. During normal operation, an IGBT enables energy to be transferred from storage capacitors to a H-bridge. When a fault occurs, the over-current protection system detects the fault, limits the fault current and opens the IGBT to isolate the remaining stored energy from the fault. This paper presents an experimental verification of the over-current protection system under applicable conditions.

  8. Fault-Related Sanctuaries

    NASA Astrophysics Data System (ADS)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary researches reveal that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (like ground shaking and coseismic surface ruptures, gas and flames emissions, strong underground rumours). In many of these sanctuaries the sacred area is laid directly above the active fault. In a few cases, faulting has affected also the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult site and content of relative myths suggest that specific points along the trace of active faults have been noticed in the past and worshiped as special `sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was largely widespread since at least 25000 BC. The cult itself was later reconverted into various different divinities, while the `sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. 
Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

  9. Earthquakes and fault creep on the northern San Andreas fault

    USGS Publications Warehouse

    Nason, R.

    1979-01-01

    At present there is an absence of both fault creep and small earthquakes on the northern San Andreas fault, which had a magnitude 8 earthquake with 5 m of slip in 1906. The fault has apparently been dormant after the 1906 earthquake. One possibility is that the fault is 'locked' in some way and only produces great earthquakes. An alternative possibility, presented here, is that the lack of current activity on the northern San Andreas fault is because of a lack of sufficient elastic strain after the 1906 earthquake. This is indicated by geodetic measurements at Fort Ross in 1874, 1906 (post-earthquake), and 1969, which show that the strain accumulation in 1969 (69 × 10⁻⁶ engineering strain) was only about one-third of the strain release (rebound) in the 1906 earthquake (200 × 10⁻⁶ engineering strain). The large difference in seismicity before and after 1906, with many strong local earthquakes from 1836 to 1906, but only a few strong earthquakes from 1906 to 1976, also indicates a difference of elastic strain. The geologic characteristics (serpentine, fault straightness) of most of the northern San Andreas fault are very similar to the characteristics of the fault south of Hollister, where fault creep is occurring. Thus, the current absence of fault creep on the northern fault segment is probably due to a lack of sufficient elastic strain at the present time. © 1979.
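    A back-of-envelope reading of the Fort Ross numbers quoted above: if strain re-accumulates at the 1906-1969 average rate, how long until the 1906 release level is reached again? This extrapolation is purely illustrative; the paper itself draws no such date, and strain accumulation need not be linear.

```python
strain_1969 = 69e-6     # accumulated engineering strain, 1906-1969 (from text)
strain_1906 = 200e-6    # strain released in the 1906 earthquake (from text)

rate = strain_1969 / (1969 - 1906)       # average accumulation, per year
years_total = strain_1906 / rate         # years to re-accumulate the release
print(f"~{years_total:.0f} yr after 1906, i.e. around {1906 + years_total:.0f}")
```

    The arithmetic reproduces the abstract's "about one-third" observation: at the observed rate, roughly three 63-year intervals are needed to restore the strain released in 1906.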

  10. An empirical comparison of software fault tolerance and fault elimination

    NASA Technical Reports Server (NTRS)

    Shimeall, Timothy J.; Leveson, Nancy G.

    1991-01-01

    Reliability is an important concern in the development of software for modern systems. Some researchers have hypothesized that particular fault-handling approaches or techniques are so effective that other approaches or techniques are superfluous. The authors have performed a study that compares two major approaches to the improvement of software, software fault elimination and software fault tolerance, by examination of the fault detection obtained by five techniques: run-time assertions, multi-version voting, functional testing augmented by structural testing, code reading by stepwise abstraction, and static data-flow analysis. This study has focused on characterizing the sets of faults detected by the techniques and on characterizing the relationships between these sets of faults. The results of the study show that none of the techniques studied is necessarily redundant to any combination of the others. Further results reveal strengths and weaknesses in the fault detection by the techniques studied and suggest directions for future research.
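    One of the five techniques compared, multi-version voting, can be shown in miniature: run independently developed implementations and accept the majority answer, flagging a fault when no majority exists. The three "versions" below are trivial stand-ins, with a fault deliberately seeded in one of them.

```python
from collections import Counter

def vote(*versions):
    # Wrap N independently developed versions into a single voted function.
    def run(x):
        results = [v(x) for v in versions]
        answer, count = Counter(results).most_common(1)[0]
        if 2 * count <= len(results):
            raise RuntimeError("no majority -- fault detected")
        return answer
    return run

v1 = lambda x: x * x
v2 = lambda x: x ** 2
v3 = lambda x: x * x + 1 if x == 3 else x * x   # seeded fault at x == 3
safe_square = vote(v1, v2, v3)
print(safe_square(3))
```

    The faulty version is outvoted at x == 3, illustrating the fault-tolerance side of the comparison; run-time assertions, by contrast, would belong to the single-version detection side.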

  11. Self-Lubrication Effect of Fault Caused by Structural Change of Carbon Mineral by Faulting

    NASA Astrophysics Data System (ADS)

    Oohashi, K.; Shimamoto, T.

    2009-12-01

    Concentration of carbonaceous material (low-crystallinity carbon or graphite) within fault-related rock has been reported in several fault zones (Ushikubi fault and Tanakura Tectonic Line, Japan; KTB borehole, Germany; Err nappe detachment fault, Switzerland). The most convincing mechanism of carbon concentration is mechanical gathering from the surrounding rock, and in practice the protolith consists of mudstone or pelitic gneiss/schist in some carbon-bearing faults. Among carbonaceous minerals, graphite is well known as a solid lubricant, so it can affect fault strength. We therefore performed high-velocity (m/s scale) to low-velocity friction experiments using a rotary-shear deformation apparatus to elucidate the frictional properties and mineralogical changes associated with seismic slip. We used commercially synthesized amorphous carbon powder as the starting material, and the series of experiments was conducted under normal stresses of 1-3 MPa, a slip rate of 1.3 m/s, and N2-purged conditions. Observation of the recovered samples by naked eye and in thin section revealed a thin, cohesive slip-concentration zone within the simulated fault zone, and the d002 peak of graphite was detected by X-ray diffraction analysis of the slip zone. Moreover, the steady-state friction (μss) of the graphitized slip-zone material at intermediate velocity (0.009 m/s) was 0.4-0.5, compared with a μss of 0.6-0.7 for the original carbon powder. These results suggest that amorphous or low-crystallinity carbon contained in a fault zone transforms to graphite by seismic faulting, which may in turn decrease the fault's strength. Generally, most natural fault zones contain sheet silicate minerals that work as lubricants within their cores, but these minerals are unstable at seismogenic depths. On the other hand, graphite may induce stable sliding even in the deep crust because of its tolerance of high temperatures, provided conditions are non-oxidizing. The fluxion structure often developed within the

  12. Revisiting Stacking Fault Energy of Steels

    NASA Astrophysics Data System (ADS)

    Das, Arpan

    2016-02-01

    The stacking fault energy plays an important role in the transition of deformation microstructure. This energy depends strongly on the concentration of alloying elements and the temperature to which the alloy is exposed. An extensive literature review reveals inconsistencies in findings on the influence of alloying elements on stacking fault energy, which may be attributed to differences in chemical composition, inaccuracies in measurement, and the methodology applied for evaluating the stacking fault energy. In the present research, a Bayesian neural network model is created to correlate the complex relationship between the stacking fault energy and its influencing parameters in different austenitic steel grades. The model has been applied to confirm that the predictions are reasonable in the context of metallurgical principles and other data published in the open literature. In addition, it has been possible to estimate the isolated influence of particular variables, such as nickel concentration, which in practice cannot be varied independently. This demonstrates the ability of the method to investigate a new phenomenon in cases where the information cannot be accessed experimentally.

  13. Testing simple models of brittle normal faulting: slip rate, spacing, and segmentation

    NASA Astrophysics Data System (ADS)

    Connolly, J.; Dawers, N. H.

    2005-05-01

    Fault growth and evolution are complex processes; however, any predictable pattern will yield important information for assessing seismic hazard and clues to what controls fault behavior. Models of slip rate variation along strike, spacing of active faults, and scaling of segment length are investigated using data from faults located within the parabola of seismicity around the Yellowstone hotspot. Based on displacement-length relations and segment size, Cowie and Roberts used fault geometry to estimate along-strike slip rate variation in their 2001 paper (JSG, 23, 1901-1915). Following their model, along-strike slip rate profiles were calculated for three active normal faults: the Beaverhead, Lemhi, and Lost River faults. Though the method yields only estimated slip rates, the results roughly mirror along-strike variation in total displacement, because the three faults are similar in size and age. The profiles indicate that the Beaverhead is underdisplaced, i.e. it has a low slip rate relative to its length. This suggests that segment linkage occurred later in the development of the Beaverhead than in the others. Cowie and Roberts also proposed a model for fault spacing based on initial fault length and spacing and on the maximum length and spacing of fully developed fault systems. Fault spacing is important in determining the incidence and magnitude of fault movement. If the distance between faults is too small, strain becomes localized along one fault while the other exhibits a decrease in seismicity until no activity occurs. In practice it is impossible to know whether the distance between the largest faults represents maximum fault spacing, because the fault population is still active and evolving; thus, it is difficult to test or implement the method. A relationship was found among faults within the study area, whereby the spacing of adjacent active faults is proportional to the sums of their lengths. It was also observed that average segment length increases with increasing total fault length.
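    The slip-rate estimation and the spacing-length proportionality described above can be sketched numerically. This is a toy illustration, not the authors' method in detail: the displacement profile, fault age, and proportionality constant below are all invented.

    ```python
    import numpy as np

    def slip_rate_profile(displacement_m, fault_age_yr):
        """Along-strike slip-rate profile (mm/yr), assuming displacement
        accumulated at a uniform rate over the fault's lifetime."""
        return 1000.0 * np.asarray(displacement_m) / fault_age_yr

    # Invented along-strike total displacements (meters) for a single fault
    disp = [200.0, 900.0, 1400.0, 1000.0, 300.0]
    rates = slip_rate_profile(disp, fault_age_yr=5e6)
    print(rates)  # the profile mirrors displacement, as the abstract notes

    # Spacing proportional to the sum of adjacent fault lengths,
    # with a hypothetical proportionality constant k:
    def expected_spacing(len_a_km, len_b_km, k=0.25):
        return k * (len_a_km + len_b_km)

    spacing = expected_spacing(120.0, 150.0)
    print(spacing)
    ```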

  14. A Role of Medial Olivocochlear Reflex as a Protection Mechanism from Noise-Induced Hearing Loss Revealed in Short-Practicing Violinists.

    PubMed

    Otsuka, Sho; Tsuzaki, Minoru; Sonoda, Junko; Tanaka, Satomi; Furukawa, Shigeto

    2016-01-01

    Previous studies have indicated that extended exposure to high sound levels might increase the risk of hearing loss among professional symphony orchestra musicians. One of the major problems associated with musicians' hearing loss is the difficulty of estimating its risk simply on the basis of the physical amount of exposure, i.e. the exposure level and duration. The aim of this study was to examine whether measurement of the medial olivocochlear reflex (MOCR), which is assumed to protect the cochlea from acoustic damage, could enable us to assess the risk of hearing loss among musicians. To test this, we compared the MOCR strength and the hearing deterioration caused by one hour of instrument practice. The participants in the study were music university students majoring in the violin, whose left ears are exposed to intense violin sounds (broadband sounds containing a significant number of high-frequency components) during their regular instrument practice. Audiograms and click-evoked otoacoustic emissions (CEOAEs) were measured before and after a one-hour violin practice. There was a larger exposure to the left ear than to the right ear, and we observed a left-ear-specific temporary threshold shift (TTS) after the violin practice. Left-ear CEOAEs decreased proportionally to the TTS. The exposure level, however, could not entirely explain the inter-individual variation in the TTS and the decrease in CEOAEs. On the other hand, the MOCR strength could predict the size of the TTS and the CEOAE decrease. Our findings imply that, among other factors, the MOCR is a promising measure for assessing the risk of hearing loss among musicians. PMID:26745634

  15. A Role of Medial Olivocochlear Reflex as a Protection Mechanism from Noise-Induced Hearing Loss Revealed in Short-Practicing Violinists

    PubMed Central

    Otsuka, Sho; Tsuzaki, Minoru; Sonoda, Junko; Tanaka, Satomi; Furukawa, Shigeto

    2016-01-01

    Previous studies have indicated that extended exposure to high sound levels might increase the risk of hearing loss among professional symphony orchestra musicians. One of the major problems associated with musicians’ hearing loss is the difficulty of estimating its risk simply on the basis of the physical amount of exposure, i.e. the exposure level and duration. The aim of this study was to examine whether measurement of the medial olivocochlear reflex (MOCR), which is assumed to protect the cochlea from acoustic damage, could enable us to assess the risk of hearing loss among musicians. To test this, we compared the MOCR strength and the hearing deterioration caused by one hour of instrument practice. The participants in the study were music university students majoring in the violin, whose left ears are exposed to intense violin sounds (broadband sounds containing a significant number of high-frequency components) during their regular instrument practice. Audiograms and click-evoked otoacoustic emissions (CEOAEs) were measured before and after a one-hour violin practice. There was a larger exposure to the left ear than to the right ear, and we observed a left-ear-specific temporary threshold shift (TTS) after the violin practice. Left-ear CEOAEs decreased proportionally to the TTS. The exposure level, however, could not entirely explain the inter-individual variation in the TTS and the decrease in CEOAEs. On the other hand, the MOCR strength could predict the size of the TTS and the CEOAE decrease. Our findings imply that, among other factors, the MOCR is a promising measure for assessing the risk of hearing loss among musicians. PMID:26745634

  16. Identifying best practices for "Safe Harbor" legislation to protect child sex trafficking victims: Decriminalization alone is not sufficient.

    PubMed

    Barnert, Elizabeth S; Abrams, Susan; Azzi, Veronica F; Ryan, Gery; Brook, Robert; Chung, Paul J

    2016-01-01

    Several states have recently enacted "Safe Harbor" laws to redirect child victims of commercial sexual exploitation and child sex trafficking from the criminal justice system into the child welfare system. No comprehensive studies of Safe Harbor law implementation exist. The nine state Safe Harbor laws enacted by 2012 were analyzed to guide state legislators, health professionals, law enforcement agents, child welfare providers, and other responders to the commercial sexual exploitation of children on the development and implementation of state Safe Harbor laws. The authors conducted 32 semi-structured interviews with Safe Harbor experts in these states. Participants conveyed that Safe Harbor legislation signified a critical paradigm shift, treating commercially sexually exploited youth not as criminals but as vulnerable children in need of services. However, Safe Harbor legislation varied widely, and significant gaps in the laws exist. Such laws alone were considered insufficient without adequate funding for the necessary services. As a result, many well-meaning providers were circumventing the Safe Harbor laws by continuing to incarcerate commercially sexually exploited youth in the juvenile justice system, acting, in their view, in the best interest of the victimized children. With imperfect laws and implementation, these findings suggest an important role for local and state responders in acting together to protect victims from unnecessary criminalization and potential further traumatization. PMID:26520827

  17. The education and practice program for medical students with quantitative and qualitative fit test for respiratory protective equipment.

    PubMed

    Myong, Jun-Pyo; Byun, JunSu; Cho, YounMo; Seo, Hye-Kyung; Baek, Jung-Eun; Koo, Jung-Wan; Kim, Hyunwook

    2016-01-01

    Tuberculosis infection is prevalent in Korea, and health care workers are vulnerable to tuberculosis infection in the hospital. The aims of this study were to develop and validate an education program that teaches senior medical students how to choose the proper size and type of respiratory protective equipment (RPE) and how to wear it, which may help reduce the risk of contracting Mycobacterium tuberculosis (MTB) from patients. Overall, 50 senior medical students participated in the education program. Methods of choosing the proper type of RPE, performing a fit check of the RPE, and choosing a suitable mask size were taught by certified instructors using the real-time quantitative fit test (QNFT). The validity of the education program was evaluated with the qualitative fit test (QLFT), scored as pass or fail, before and after the education. The education program was effective, as shown by the significantly increased pass rate (from 30% to 74%) in the QLFT after the education program (p<0.05). Among the study participants, changing mask size from medium to small significantly increased the pass rate (p<0.001). Incorporation of this program into the medical school curriculum may help reduce the risk of MTB infection in medical students working in the hospital. PMID:26538001

  18. Chemical exposure reduction: Factors impacting on South African herbicide sprayers' personal protective equipment compliance and high risk work practices.

    PubMed

    Andrade-Rivas, Federico; Rother, Hanna-Andrea

    2015-10-01

    The high exposure risk of workers to herbicides in low- and middle-income countries is an important public health concern because of the potential negative impacts on workers' health. This study investigated workers' personal protective equipment (PPE) compliance as a risk mitigation measure, particularly among workers who apply herbicides for Working for Water (WfW) - a South African invasive alien vegetation control programme. The study aim was to understand workers' low PPE compliance by analysing their risk perceptions of herbicide use, their working conditions, and their socio-cultural context. Research methods included ethnographic observations, informal interviews, visual media, questionnaires and a focus group. Study results indicated that low PPE compliance persists despite workers' awareness of herbicide exposure risks, as a result of the influence of workers' socio-cultural context (i.e. gender dynamics and social status), herbicide risk perceptions and working conditions (i.e. environmental and logistical). Interestingly, teams composed mostly of women had the highest compliance rate. These findings highlight that, given the complexity of PPE compliance, especially in countries with several economic and social constraints, exposure reduction interventions should not rely solely on promoting PPE use. Instead, other control strategies requiring less worker input for effectiveness should be implemented, such as elimination and substitution of highly hazardous pesticides and altering application methods. PMID:26093240

  19. The education and practice program for medical students with quantitative and qualitative fit test for respiratory protective equipment

    PubMed Central

    MYONG, Jun-Pyo; BYUN, JunSu; CHO, YounMo; SEO, Hye-Kyung; BAEK, Jung-Eun; KOO, Jung-Wan; KIM, Hyunwook

    2015-01-01

    Tuberculosis infection is prevalent in Korea, and health care workers are vulnerable to tuberculosis infection in the hospital. The aims of this study were to develop and validate an education program that teaches senior medical students how to choose the proper size and type of respiratory protective equipment (RPE) and how to wear it, which may help reduce the risk of contracting Mycobacterium tuberculosis (MTB) from patients. Overall, 50 senior medical students participated in the education program. Methods of choosing the proper type of RPE, performing a fit check of the RPE, and choosing a suitable mask size were taught by certified instructors using the real-time quantitative fit test (QNFT). The validity of the education program was evaluated with the qualitative fit test (QLFT), scored as pass or fail, before and after the education. The education program was effective, as shown by the significantly increased pass rate (from 30% to 74%) in the QLFT after the education program (p<0.05). Among the study participants, changing mask size from medium to small significantly increased the pass rate (p<0.001). Incorporation of this program into the medical school curriculum may help reduce the risk of MTB infection in medical students working in the hospital. PMID:26538001

  20. Assessment of Technologies for the Space Shuttle External Tank Thermal Protection System and Recommendations for Technology Improvement. Part 2; Structural Analysis Technologies and Modeling Practices

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Nemeth, Michael P.; Hilburger, Mark W.

    2004-01-01

    A technology review and assessment of modeling and analysis efforts underway in support of a safe return to flight of the thermal protection system (TPS) for the Space Shuttle external tank (ET) are summarized. This review and assessment effort focuses on the structural modeling and analysis practices employed for ET TPS foam design and analysis and on identifying analysis capabilities needed in the short term and long term. The current understanding of the relationship between complex flight environments and ET TPS foam failure modes is reviewed as it relates to modeling and analysis. A literature review on modeling and analysis of TPS foam material systems is also presented. Finally, a review of modeling and analysis tools employed in the Space Shuttle Program is presented for the ET TPS acreage and close-out foam regions. This review includes existing simplified engineering analysis tools as well as finite element analysis procedures.

  1. Commentary: A social environment approach to promotive and protective practice in childhood resilience--reflections on Ungar (2014).

    PubMed

    Wessells, Michael G

    2015-01-01

    The field of mental health and psychosocial support for children in humanitarian emergencies has been evolving rapidly. A decade ago, researchers and practitioners frequently took a deficits approach that emphasized problems such as trauma, particularly post-traumatic stress disorder (PTSD), in zones of armed conflict. Assessments focused on PTSD and typically led to the provision of curative responses such as Western psychotherapies as the first response for the affected population. Practitioners expressed diverse concerns about this approach, including its narrow, medicalized definition of the problem (mental disorder), the unsustainability of the programs it generated, the relative inattention to context, the privileging of individual over systemic approaches, and the lack of cultural sensitivity. For these and other reasons, humanitarian practitioners have increasingly favored a resilience approach that features the agency of children, families, and communities and seeks to build upon existing assets or strengths. Already there is evidence of the effectiveness of numerous interventions that embody a resilience approach. Yet resilience approaches have been limited by a lack of conceptual clarity and by ongoing questions about how resilience should be assessed and measured. In this context, Michael Ungar's Practitioner Review is an important contribution to practice. PMID:25409758

  2. Fault Scarp Offsets and Fault Population Analysis on Dione

    NASA Astrophysics Data System (ADS)

    Tarlow, S.; Collins, G. C.

    2010-12-01

    Cassini images of Dione show several fault zones cutting through the moon’s icy surface. We have measured the displacement and length of 271 faults and estimated the strain occurring in 6 different fault zones. These measurements allow us to quantify the total amount of surface strain on Dione as well as constrain what processes might have caused these faults to form. Though we do not have detailed topography across fault scarps on Dione, we can use their projected size on the camera plane to estimate their heights, assuming a reasonable surface slope. Starting with high-resolution images of Dione obtained by the Cassini ISS, we marked points at the top and the bottom of each fault scarp to measure the fault’s projected displacement and its orientation along strike. Line and sample information for the measurements was then processed through ISIS to derive latitude/longitude information and pixel dimensions. We then calculated the three-dimensional orientation of a vector running from the bottom to the top of the fault scarp, assuming a 45 degree angle with respect to the surface, and projected this vector onto the spacecraft camera plane. This projected vector gives us a correction factor to estimate the actual vertical displacement of the fault scarp. This process was repeated many times for each fault to show variations of displacement along the length of the fault. To compare each fault to its neighbors and see how strain was accommodated across a population of faults, we divided the faults into fault zones and created new coordinate systems oriented along the central axis of each fault zone. We could then quantify the amount of fault overlap and add the displacement of overlapping faults to estimate the amount of strain accommodated in each zone. Faults in the southern portion of Padua have a strain of 0.031 ± 0.0097, central Padua exhibits a strain of 0.032 ± 0.012, and faults in northern Padua have a strain of 0.025 ± 0.0080. The western faults of
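    The projection correction described in this abstract amounts to simple vector geometry: project the assumed 45° scarp vector onto the camera plane, measure the foreshortening, and scale the measured length accordingly. A minimal sketch of that idea follows; the viewing geometry and measured length are invented, not taken from the paper.

    ```python
    import numpy as np

    def vertical_displacement(projected_length_m, scarp_vec, camera_los):
        """Recover scarp height from its length as projected onto the
        camera plane. scarp_vec: vector from scarp bottom to top (here
        45 deg from the surface); camera_los: camera line-of-sight vector."""
        s = np.asarray(scarp_vec, float) / np.linalg.norm(scarp_vec)
        n = np.asarray(camera_los, float) / np.linalg.norm(camera_los)
        proj = s - np.dot(s, n) * n        # component visible in the image
        shortening = np.linalg.norm(proj)  # fraction of |s| seen in image
        true_length = projected_length_m / shortening
        return true_length * s[2]          # vertical component = height

    # 45-degree scarp vector in (east, north, up), viewed side-on
    scarp = [np.cos(np.radians(45.0)), 0.0, np.sin(np.radians(45.0))]
    height = vertical_displacement(300.0, scarp, camera_los=[0.0, 1.0, 0.0])
    print(round(float(height), 1))  # -> 212.1
    ```

    With this side-on geometry there is no foreshortening, so the height is simply the measured 300 m times sin 45°; oblique lines of sight shrink `shortening` below 1 and inflate the correction accordingly.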

  3. Fault intersections along the Hosgri Fault Zone, Central California

    NASA Astrophysics Data System (ADS)

    Watt, J. T.; Johnson, S. Y.; Langenheim, V. E.

    2011-12-01

    It is well-established that stresses concentrate at fault intersections or bends when subjected to tectonic loading, making focused studies of these areas particularly important for seismic hazard analysis. In addition, detailed fault models can be used to investigate how slip on one fault might transfer to another during an earthquake. We combine potential-field, high-resolution seismic-reflection, and multibeam bathymetry data with existing geologic and seismicity data to investigate the fault geometry and connectivity of the Hosgri, Los Osos, and Shoreline faults offshore of San Luis Obispo, California. The intersection of the Hosgri and Los Osos faults in Estero Bay is complex. The offshore extension of the Los Osos fault, as imaged with multibeam and high-resolution seismic data, is characterized by a west-northwest-trending zone (1-3 km wide) of near-vertical faulting. Three distinct strands (northern, central, and southern) are visible on shallow seismic reflection profiles. The steep dip combined with dramatic changes in reflection character across mapped faults within this zone suggests horizontal offset of rock units and argues for predominantly strike-slip motion; however, the present orientation of the fault zone suggests oblique slip. As the Los Osos fault zone approaches the Hosgri fault, the northern and central strands become progressively more northwest-trending, in line with the Hosgri fault. The northern strand runs subparallel to the Hosgri fault along the edge of a long-wavelength magnetic anomaly, intersecting the Hosgri fault southwest of Point Estero. Geophysical modeling suggests the northern strand dips 70° to the northeast, which is in agreement with earthquake focal mechanisms that parallel this strand. The central strand bends northward and intersects the Hosgri fault directly west of Morro Rock, corresponding to an area of compressional deformation visible in shallow seismic-reflection profiles. The southern strand of the Los Osos

  4. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Astrophysics Data System (ADS)

    Padilla, Peter A.

    1991-03-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  5. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  6. Holocene faulting on the Mission fault, northwest Montana

    SciTech Connect

    Ostenaa, D.A.; Klinger, R.E.; Levish, D.R. )

    1993-04-01

    South of Flathead Lake, fault scarps on late Quaternary surfaces are nearly continuous for 45 km along the western flank of the Mission Range. On late Pleistocene alpine lateral moraines, scarp heights reach a maximum of 17 m. Scarp heights on post-glacial Lake Missoula surfaces range from 2.6--7.2 m, and maximum scarp angles range from 10°--24°. The stratigraphy exposed in seven trenches across the fault demonstrates that the post-glacial Lake Missoula scarps resulted from at least two surface-faulting events. Larger scarp heights on late Pleistocene moraines suggest a possible third event. This yields an estimated recurrence of 4--8 kyr. Analyses of scarp profiles show that the age of the most recent surface faulting is middle Holocene, consistent with stratigraphic evidence found in the trenches. Rupture length and displacement imply earthquake magnitudes of 7 to 7.5. Previous studies have not identified geologic evidence of late Quaternary surface faulting in the Rocky Mountain Trench or on faults north of the Lewis and Clark line, despite abundant historic seismicity in the Flathead Lake area. In addition to the Mission fault, reconnaissance studies have located late Quaternary fault scarps along portions of faults bordering the Jocko and Thompson Valleys. These are the first documented late Pleistocene/Holocene faults north of the Lewis and Clark line in Montana and should greatly revise estimates of earthquake hazards in this region.

  7. Managing Fault Management Development

    NASA Technical Reports Server (NTRS)

    McDougal, John M.

    2010-01-01

    As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.

  8. Fault current limiter with solid-state circuit breakers

    NASA Astrophysics Data System (ADS)

    Bălan, H.; Neamț, L.; Buzdugan, M. I.; Varodi, T.; Pop, E.

    2016-08-01

    Switching of power circuit breakers is an important technical issue, especially under short-circuit conditions, since the fault current causes thermal and dynamic stresses and degrades power quality. Recently, the growth of distributed renewable generation has made short-circuit protection problematic, because distributed production of electric energy causes transport networks to lose their radial character and disturbs the coordination of protective relays. Modern power-switching technologies use static fault current limiters, which offer a viable solution to the problems caused by large fault currents in the system. An appropriate design of the current-limiting device reduces the thermal and dynamic stresses and limits the fault current to a low value. The static switches are based on high-power semiconductor devices that offer advantages over mechanical switches. Using a fault current limiter that minimizes the effect of distributed generation on the coordination of protective relays in a radial network addresses this problem in terms of both switching speed and the lifespan of power switches.
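    The benefit of a series limiting impedance follows from the basic short-circuit relation |I| = |V| / |Z_source + Z_limiter|. A minimal numeric illustration (the voltage and impedance values below are invented, not from the paper):

    ```python
    # Single-phase short-circuit current with and without a series limiter.
    def fault_current(v_phase, z_source, z_limiter=0j):
        """|I| = |V| / |Z_source + Z_limiter| (phasor impedances in ohms)."""
        return abs(v_phase) / abs(z_source + z_limiter)

    v = 230.0                      # phase voltage, volts
    z_src = 0.02 + 0.15j           # source impedance, ohms
    unlimited = fault_current(v, z_src)
    limited = fault_current(v, z_src, z_limiter=0.05 + 1.0j)
    print(round(unlimited), round(limited))  # -> 1520 200
    ```

    Inserting even a modest (mostly inductive) limiting impedance cuts the prospective fault current by roughly an order of magnitude here, which is what relieves the thermal and dynamic stress on downstream breakers.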

  9. Position of the American Dietetic Association: dietetics professionals can implement practices to conserve natural resources and protect the environment. (Previously titled "natural resource conservation and waste management").

    PubMed

    2001-10-01

    It is the position of the American Dietetic Association to encourage environmentally responsible practices that conserve natural resources, minimize the quantity of waste that is generated, and have the least adverse effect on the health of all living organisms and the environment. All components of the food system, from farmer to consumer, are affected by the availability and cost of energy and the availability and quality of water. Outdoor and indoor air quality significantly impacts the health of all living organisms. Decisions that dietetics professionals make as practitioners and consumers can affect the quantity and type of solid waste generated. The demand for natural resources should be evaluated when selecting the most cost-effective, environmentally sensitive approach to the management of solid waste. Special precautions are needed when using and disposing of hazardous and medical waste to protect the safety of our clients and employees. This position paper provides information and resources for dietetics professionals for addressing the complexity of the environmental issues presented. Conservation strategies are identified that dietetics professionals can use in their worksites and at home. These conservation practices may reduce cost and decrease the environmental impact we have on our communities and the world. PMID:11678498

  10. Dynamic faulting on a conjugate fault system detected by near-fault tilt measurements

    NASA Astrophysics Data System (ADS)

    Fukuyama, Eiichi

    2015-03-01

    There have been reports of conjugate faults that have ruptured during earthquakes. However, it is still unclear whether or not these conjugate faults ruptured coseismically during earthquakes. In this paper, we investigated near-fault ground tilt motions observed at the IWTH25 station during the 2008 Iwate-Miyagi Nairiku earthquake ( M w 6.9). Since near-fault tilt motion is very sensitive to the fault geometry on which the slip occurs during an earthquake, these data make it possible to distinguish between the main fault rupture and a rupture on the conjugate fault. We examined several fault models that have already been proposed and confirmed that only the models with a conjugated fault could explain the tilt data observed at IWTH25. The results support the existence of simultaneous conjugate faulting during the main rupture. This will contribute to the understanding of earthquake rupture dynamics because the conjugate rupture releases the same shear strain as that released on the main fault, and thus it has been considered quite difficult for both ruptures to accelerate simultaneously.

  11. Development, Implementation, and Testing of Fault Detection Strategies on the National Wind Technology Center's Controls Advanced Research Turbines

    SciTech Connect

    Johnson, K. E.; Fleming, P. A.

    2011-06-01

    The National Renewable Energy Laboratory's National Wind Technology Center dedicates two 600 kW turbines for advanced control systems research. A fault detection system for both turbines has been developed, analyzed, and improved across years of experiments to protect the turbines as each new controller is tested. Analysis of field data and ongoing fault detection strategy improvements have resulted in a system of sensors, fault definitions, and detection strategies that have thus far been effective at protecting the turbines. In this paper, we document this fault detection system and provide field data illustrating its operation while detecting a range of failures. In some cases, we discuss the refinement process over time as fault detection strategies were improved. The purpose of this article is to share field experience obtained during the development and field testing of the existing fault detection system, and to offer a possible baseline for comparison with more advanced turbine fault detection controllers.
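    Fault-detection systems of the kind described typically combine a sensor limit with a persistence requirement, so that a single transient spike does not trip the turbine. A generic sketch of that pattern follows; the signal, threshold, and persistence count are invented for illustration and are not from the NREL system.

    ```python
    def persistent_fault(samples, limit, n_consecutive):
        """Flag a fault only after the signal exceeds `limit` for
        n_consecutive samples, filtering out transient spikes."""
        run = 0
        for i, x in enumerate(samples):
            run = run + 1 if x > limit else 0
            if run >= n_consecutive:
                return i          # index at which the fault is declared
        return None               # no fault detected

    # Generator speed (rpm): one transient spike, then a sustained overspeed
    speed = [1795, 1802, 1850, 1799, 1815, 1825, 1830, 1840]
    idx = persistent_fault(speed, limit=1810, n_consecutive=3)
    print(idx)  # -> 6
    ```

    Tuning `limit` and `n_consecutive` per sensor is essentially the refinement process the paper describes: tight enough to protect the turbine, loose enough not to abort valid controller tests.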

  12. NASA Fire Protection

    NASA Technical Reports Server (NTRS)

    Clark, Theodore

    2001-01-01

    This viewgraph presentation provides information on fire protection operations and administration at Stennis Space Center (SSC). The presentation also lists innovative practices and recent improvements.

  13. Fault slip controlled by stress path and fluid pressurization rate

    NASA Astrophysics Data System (ADS)

    French, Melodie E.; Zhu, Wenlu; Banker, Jeremy

    2016-05-01

    The practice of injecting fluids into the crust is linked to regional increases in seismicity. Increasing fluid pressure along preexisting faults is believed to enhance seismicity rates by reducing the shear stress required for slip, but the processes that cause faults to slip under conditions of fluid pressurization are poorly constrained. We use experimental rock deformation to investigate the controls of fluid pressurization and pressurization rates on fault slip style. We show that pore fluid pressurization is less effective than mechanical changes in fault normal stress at initiating accelerated slip events. Fluid pressurization enhances the total slip, slip velocity, and shear stress drop of events initiated by mechanical changes in normal stress, and these parameters are correlated with pressurization rate, but not with the magnitude of fluid pressure. This result is consistent with field-scale observations and indicates that processes active at the pore network scale affect induced seismicity.

  14. Fluid involvement in normal faulting

    NASA Astrophysics Data System (ADS)

    Sibson, Richard H.

    2000-04-01

    Evidence of fluid interaction with normal faults comes from their varied role as flow barriers or conduits in hydrocarbon basins and as hosting structures for hydrothermal mineralisation, and from fault-rock assemblages in exhumed footwalls of steep active normal faults and metamorphic core complexes. These last suggest involvement of predominantly aqueous fluids over a broad depth range, with implications for fault shear resistance and the mechanics of normal fault reactivation. A general downwards progression in fault rock assemblages (high-level breccia-gouge (often clay-rich) → cataclasites → phyllonites → mylonite → mylonitic gneiss with the onset of greenschist phyllonites occurring near the base of the seismogenic crust) is inferred for normal fault zones developed in quartzo-feldspathic continental crust. Fluid inclusion studies in hydrothermal veining from some footwall assemblages suggest a transition from hydrostatic to suprahydrostatic fluid pressures over the depth range 3-5 km, with some evidence for near-lithostatic to hydrostatic pressure cycling towards the base of the seismogenic zone in the phyllonitic assemblages. Development of fault-fracture meshes through mixed-mode brittle failure in rock-masses with strong competence layering is promoted by low effective stress in the absence of thoroughgoing cohesionless faults that are favourably oriented for reactivation. Meshes may develop around normal faults in the near-surface under hydrostatic fluid pressures to depths determined by rock tensile strength, and at greater depths in overpressured portions of normal fault zones and at stress heterogeneities, especially dilational jogs. Overpressures localised within developing normal fault zones also determine the extent to which they may reutilise existing discontinuities (for example, low-angle thrust faults). Brittle failure mode plots demonstrate that reactivation of existing low-angle faults under vertical σ1 trajectories is only likely if

  15. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
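    The graph-based diagnosis idea can be sketched as a reachability query over a directed dependency graph: the candidate causes of an observed failure are the components from which a fault can propagate to the failing node. The component names below are hypothetical, not taken from the telescope application.

```python
# Components point to the components they depend on; a fault anywhere in the
# dependency closure of a failing node could explain the observed symptom.
DEPENDS_ON = {
    "telescope_pointing": ["mount_controller", "encoder"],
    "mount_controller": ["power_bus"],
    "encoder": ["power_bus"],
    "power_bus": [],
}

def candidate_causes(failed: str, graph: dict[str, list[str]]) -> set[str]:
    """All components whose failure could explain the observed fault."""
    seen, stack = set(), [failed]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return seen

causes = candidate_causes("telescope_pointing", DEPENDS_ON)
```

    A fault management system would then rank these candidates using additional evidence (other symptoms present or absent) before issuing a response.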

  16. Electronic Power Switch for Fault-Tolerant Networks

    NASA Technical Reports Server (NTRS)

    Volp, J.

    1987-01-01

    Power field-effect transistors reduce energy waste and simplify interconnections. A current switch containing a power field-effect transistor (PFET) is placed in series with each load in a fault-tolerant power-distribution system. If the system includes several loads and supplies, switches are placed in series with adjacent loads and supplies. The system of switches protects against overloads and losses of individual power sources.

  17. Experimental Fault Reactivation on Favourably and Unfavourably Oriented Faults

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Sibson, R. H.; Renner, J.; Toy, V. G.; di Toro, G.; Smith, S. A.

    2010-12-01

    In this study, we introduce work that aims to assess the loading of faults to failure under different stress regimes in a triaxial deformation apparatus. We explore experimentally the reshear of an existing fault in various orientations for particular values of (σ1 - σ3) and σ3' for contrasting loading systems - load-strengthening (equivalent to a thrust fault) with σ1' increasing at constant σ3', versus load-weakening (equivalent to a normal fault) with reducing σ3' under constant σ1'. Experiments are conducted on sawcut granite samples with fault angles at a variety of orientations relative to σ1, ranging from an optimal orientation for reactivation to lockup angles where new faults are formed in preference to reactivating the existing sawcut orientation. Prefailure and postfailure behaviour is compared in terms of damage zone development via monitoring variations in ultrasonic velocity and acoustic emission behaviour. For example, damage surrounding unfavourably oriented faults is significantly higher than that seen around favourably orientated faults due to greater maximum stresses attained prior to unstable slip, which is reflected by the increased acoustic emission activity leading up to failure. In addition, we experimentally explore the reshear of natural pseudotachylytes (PSTs) from two different fault zones: the Gole Larghe Fault, Adamello, Italy, in which the PSTs are in relatively isotropic tonalite (at lab sample scale), and the Alpine Fault, New Zealand, in which the PSTs are in highly anisotropic foliated schist. We test whether PSTs will reshear in both rock types under the right conditions, or whether new fractures in the wall rock will form in preference to reactivating the PST (PST shear strength is higher than that of the host rock). Are PSTs representative of one slip event?

  18. Fault welding by pseudotachylyte generation

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Toy, V. G.; Di Toro, G.; Renner, J.

    2014-12-01

    During earthquakes, frictional melts can localize on slip surfaces and dramatically weaken faults by melt lubrication. Once seismic slip is arrested, the melt cools and solidifies to form pseudotachylyte (PST), the presence of which is commonly used to infer earthquake slip on ancient exhumed faults. Little is known about the effect of solidified melt on the strength of faults directly preceding a subsequent earthquake. We performed triaxial deformation experiments on cores of tonalite (Gole Larghe fault zone, N. Italy) and mylonite (Alpine fault, New Zealand) in order to assess the strength of PST bearing faults in the lab. Three types of sample were prepared for each rock type; intact, sawcut and PST bearing, and were cored so that the sawcut, PST and foliation planes were orientated at 35° to the length of the core and direction of σ1, i.e., a favorable orientation for reactivation. This choice of samples allowed us to compare the strength of 'pre-earthquake' fault (sawcut) to a 'post-earthquake' fault with solidified frictional melt, and assess their strength relative to intact samples. Our results show that PST veins effectively weld fault surfaces together, allowing previously faulted rocks to regain cohesive strengths comparable to that of an intact rock. Shearing of the PST is not favored, but subsequent failure and slip is accommodated on new faults nucleating at other zones of weakness. Thus, the mechanism of coseismic weakening by melt lubrication does not necessarily facilitate long-term interseismic deformation localization, at least at the scale of these experiments. In natural fault zones, PSTs are often found distributed over multiple adjacent fault planes or other zones of weakness such as foliation planes. We also modeled the temperature distribution in and around a PST using an approximation for cooling of a thin, infinite sheet by conduction perpendicular to its margins at ambient temperatures commensurate with the depth of PST formation
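    The temperature model mentioned above (cooling of a thin, infinite sheet by conduction perpendicular to its margins) has a classical closed-form solution (Carslaw and Jaeger). A minimal sketch, with purely illustrative parameter values rather than those of the study:

```python
from math import erf, sqrt

def sheet_temperature(x, t, half_width, kappa, T_host, T_melt):
    """Temperature at distance x from the sheet centre, time t after emplacement,
    for a sheet of half-width `half_width` initially at T_melt in host rock at
    T_host, cooling by conduction (thermal diffusivity kappa)."""
    s = 2.0 * sqrt(kappa * t)
    return T_host + 0.5 * (T_melt - T_host) * (
        erf((half_width - x) / s) + erf((half_width + x) / s)
    )

# e.g. a 5 mm half-width melt vein at 1200 C in 250 C host rock,
# thermal diffusivity ~1e-6 m^2/s, 10 s after slip arrest:
T_centre = sheet_temperature(0.0, t=10.0, half_width=0.005,
                             kappa=1e-6, T_host=250.0, T_melt=1200.0)
```

    The centre temperature decays monotonically toward the host-rock temperature, which is why PST veins of millimetre thickness solidify within seconds to minutes.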

  19. Fault-tolerant processing system

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L. (Inventor)

    1996-01-01

    A fault-tolerant, fiber optic interconnect, or backplane, which serves as a via for data transfer between modules. Fault tolerance algorithms are embedded in the backplane by dividing the backplane into a read bus and a write bus and placing a redundancy management unit (RMU) between the read bus and the write bus so that all data transmitted by the write bus is subjected to the fault tolerance algorithms before the data is passed for distribution to the read bus. The RMU provides both backplane control and fault tolerance.

  20. Fault interaction near Hollister, California

    SciTech Connect

    Mavko, G.M.

    1982-09-10

    A numerical model is used to study fault stress and slip near Hollister, California. The geometrically complex system of interacting faults, including the San Andreas, Calaveras, Sargent, and Busch faults, is approximated with a two-dimensional distribution of short planar fault segments in an elastic medium. The steady stress and slip rate are simulated by specifying frictional strength and stepping the remote stress ahead in time. The resulting computed fault stress is roughly proportional to the observed spatial density of small earthquakes, suggesting that the distinction between segments characterized by earthquakes and those with aseismic creep results, in part, from geometry. A nonsteady simulation is made by introducing, in addition, stress drops for individual moderate earthquakes. A close fit of observed creep with calculated slip on the Calaveras and San Andreas faults suggests that many changes in creep rate (averaged over several months) are caused by local moderate earthquakes. In particular, a 3-year creep lag preceding the August 6, 1979, Coyote Lake earthquake on the Calaveras fault seems to have been a direct result of the November 28, 1974, Thanksgiving Day earthquake on the Busch fault. Computed lags in slip rate preceding some other moderate earthquakes in the area are also due to earlier earthquakes. Although the response of the upper 1 km of the fault zone may cause some individual creep events and introduce delays in others, the long-term rate appears to reflect deep slip.

  1. Fault interaction near Hollister, California

    NASA Astrophysics Data System (ADS)

    Mavko, Gerald M.

    1982-09-01

    A numerical model is used to study fault stress and slip near Hollister, California. The geometrically complex system of interacting faults, including the San Andreas, Calaveras, Sargent, and Busch faults, is approximated with a two-dimensional distribution of short planar fault segments in an elastic medium. The steady stress and slip rate are simulated by specifying frictional strength and stepping the remote stress ahead in time. The resulting computed fault stress is roughly proportional to the observed spatial density of small earthquakes, suggesting that the distinction between segments characterized by earthquakes and those with aseismic creep results, in part, from geometry. A nonsteady simulation is made by introducing, in addition, stress drops for individual moderate earthquakes. A close fit of observed creep with calculated slip on the Calaveras and San Andreas faults suggests that many changes in creep rate (averaged over several months) are caused by local moderate earthquakes. In particular, a 3-year creep lag preceding the August 6, 1979, Coyote Lake earthquake on the Calaveras fault seems to have been a direct result of the November 28, 1974, Thanksgiving Day earthquake on the Busch fault. Computed lags in slip rate preceding some other moderate earthquakes in the area are also due to earlier earthquakes. Although the response of the upper 1 km of the fault zone may cause some individual creep events and introduce delays in others, the long-term rate appears to reflect deep slip.

  2. Perspective View, Garlock Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission (SRTM), flown in February 2000. The data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. These mountains are the southern end of the Sierra Nevada and the prominent canyon emerging at the lower right is Lone Tree canyon. In the distance, the San Gabriel Mountains cut across from the left side of the image. At their base lies the San Andreas Fault which meets the Garlock Fault near the left edge at Tejon Pass. The dark linear feature running from lower right to upper left is State Highway 14 leading from the town of Mojave in the distance to Inyokern and the Owens Valley in the north. The lighter parallel lines are dirt roads related to power lines and the Los Angeles Aqueduct which run along the base of the mountains.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast

  3. Fault current limiter

    DOEpatents

    Darmann, Francis Anthony

    2013-10-08

    A fault current limiter (FCL) includes a series of high-permeability posts that collectively define a core for the FCL. A DC coil, for the purpose of saturating a portion of the high-permeability posts, surrounds the complete structure outside of an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, for transporting AC current, are wound on insulating formers and electrically interconnected to each other in a manner such that the senses of the magnetic fields produced by each AC coil in the corresponding high-permeability core are opposing. There are insulation barriers between phases to improve the dielectric withstand properties of the dielectric medium.

  4. Final Technical Report: PV Fault Detection Tool.

    SciTech Connect

    King, Bruce Hardison; Jones, Christian Birk

    2015-12-01

    The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.

  5. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment, that... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Fault. 404.507 Section 404.507...

  6. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment, that... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Fault. 404.507 Section 404.507...

  7. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment, that... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Fault. 404.507 Section 404.507...

  8. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to the individual. Although the Administration may have been at fault in making the overpayment, that... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Fault. 404.507 Section 404.507...

  9. Undulator Hall Air Temperature Fault Scenarios

    SciTech Connect

    Sevilla, J.; Welch, J.; /SLAC

    2010-11-17

    Recent experience indicates that the LCLS undulator segments must not, at any time following tuning, be allowed to change temperature by more than about ±2.5 °C or the magnetic center will irreversibly shift outside of acceptable tolerances. This vulnerability raises a concern that under fault conditions the ambient temperature in the Undulator Hall might go outside of the safe range and potentially could require removal and retuning of all the segments. In this note we estimate changes that can be expected in the Undulator Hall air temperature for three fault scenarios: (1) system-wide power failure; (2) Heating, Ventilation and Air Conditioning (HVAC) system shutdown; and (3) HVAC system temperature regulation fault. We find that for either a system-wide power failure or an HVAC system shutdown (with the technical equipment left on), the short-term temperature changes of the air would be modest due to the ability of the walls and floor to act as a heat ballast. No action would be needed to protect the undulator system in the event of a system-wide power failure. Some action to adjust the heat balance, in the case of the HVAC power failure with the equipment left on, might be desirable but is not required. On the other hand, a temperature regulation failure of the HVAC system can quickly cause large excursions in air temperature and prompt action would be required to avoid damage to the undulator system.

  10. The importance of upgrading power stations with numerical protection relays

    NASA Astrophysics Data System (ADS)

    Vilceanu, Radu-Constantin; Surianu, Flavius-Dan

    2016-06-01

    In this paper we present the importance of upgrading power stations with numerical protection relays. We study the severity and the economic impact of a power system fault that occurs and is not cleared by old protection devices. We also compare how an old protection scheme clears a power fault with how modern numerical protection relays do so. The trip decisions and the actual circuit-breaker opening times are compared and analyzed.

  11. A fault-tolerant intelligent robotic control system

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Tso, Kam Sing

    1993-01-01

    This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.
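    The recovery-block idea underlying the system-level fault tolerance can be sketched, in simplified single-process form, as: run a primary routine, check its result with an acceptance test, and fall back to an alternate on failure. The routines below are hypothetical stand-ins, not the paper's software.

```python
def recovery_block(alternates, acceptance_test, arg):
    """Run alternates in order until one yields an acceptable result."""
    for routine in alternates:
        try:
            result = routine(arg)
        except Exception:
            continue  # a crash counts as failing the acceptance test
        if acceptance_test(result):
            return result
    raise RuntimeError("all alternates failed the acceptance test")

def primary(x):      # e.g. a fast but faulty solver (illustrative)
    raise ValueError("numerical failure")

def alternate(x):    # slower, conservative fallback
    return x * 0.5

result = recovery_block([primary, alternate], lambda r: 0.0 <= r <= 1.0, 0.8)
```

    The distributed variant described in the paper executes the primary and alternate on separate nodes so that hardware and network failures are covered as well.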

  12. Central Asia Active Fault Database

    NASA Astrophysics Data System (ADS)

    Mohadjer, Solmaz; Ehlers, Todd A.; Kakar, Najibullah

    2014-05-01

    The ongoing collision of the Indian subcontinent with Asia controls active tectonics and seismicity in Central Asia. This motion is accommodated by faults that have historically caused devastating earthquakes and continue to pose serious threats to the population at risk. Despite international and regional efforts to assess seismic hazards in Central Asia, little attention has been given to development of a comprehensive database for active faults in the region. To address this issue and to better understand the distribution and level of seismic hazard in Central Asia, we are developing a publicly available database for active faults of Central Asia (including but not limited to Afghanistan, Tajikistan, Kyrgyzstan, northern Pakistan and western China) using ArcGIS. The database is designed to allow users to store, map and query important fault parameters such as fault location, displacement history, rate of movement, and other data relevant to seismic hazard studies including fault trench locations, geochronology constraints, and seismic studies. Data sources integrated into the database include previously published maps and scientific investigations as well as strain rate measurements and historic and recent seismicity. In addition, high resolution Quickbird, Spot, and Aster imagery are used for selected features to locate and measure offset of landforms associated with Quaternary faulting. These features are individually digitized and linked to attribute tables that provide a description for each feature. Preliminary observations include inconsistent and sometimes inaccurate information for faults documented in different studies. For example, the Darvaz-Karakul fault, which roughly defines the western margin of the Pamir, has been mapped with differences in location of up to 12 kilometers. The sense of motion for this fault ranges from unknown to thrust and strike-slip in three different studies despite documented left-lateral displacements of Holocene and late

  13. SFT: Scalable Fault Tolerance

    SciTech Connect

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism, that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5μs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.
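    The incremental-checkpointing idea behind TICK can be illustrated at a toy level: after a full checkpoint, later checkpoints save only the state that changed. The sketch below tracks a dict of "pages" by content hash; the real TICK operates on memory pages inside the Linux kernel, not on Python objects.

```python
import hashlib

def snapshot(pages: dict[int, bytes], last_hashes: dict[int, str]):
    """Return (delta, new_hashes): only pages whose content changed
    since the previous snapshot are included in the delta."""
    new_hashes = {k: hashlib.sha256(v).hexdigest() for k, v in pages.items()}
    delta = {k: pages[k] for k, h in new_hashes.items()
             if last_hashes.get(k) != h}
    return delta, new_hashes

pages = {0: b"code", 1: b"heap-v1", 2: b"stack"}
full, hashes = snapshot(pages, {})       # full checkpoint: all 3 pages saved
pages[1] = b"heap-v2"                    # only page 1 is modified
delta, hashes = snapshot(pages, hashes)  # incremental checkpoint: 1 page saved
```

    Saving only the delta is what keeps the overhead low when checkpoints are taken frequently, as in the once-per-minute figure quoted above.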

  14. Colorado Regional Faults

    DOE Data Explorer

    Hussein, Khalid

    2012-02-01

    Citation Information: Originator: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Originator: Colorado Geological Survey (CGS) Publication Date: 2012 Title: Regional Faults Edition: First Publication Information: Publication Place: Earth Science & Observation Center, Cooperative Institute for Research in Environmental Science, University of Colorado, Boulder Publisher: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Description: This layer contains the regional faults of Colorado Spatial Domain: Extent: Top: 4543192.100000 m Left: 144385.020000 m Right: 754585.020000 m Bottom: 4094592.100000 m Contact Information: Contact Organization: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Contact Person: Khalid Hussein Address: CIRES, Ekeley Building Earth Science & Observation Center (ESOC) 216 UCB City: Boulder State: CO Postal Code: 80309-0216 Country: USA Contact Telephone: 303-492-6782 Spatial Reference Information: Coordinate System: Universal Transverse Mercator (UTM) WGS 1984 Zone 13N False Easting: 500000.00000000 False Northing: 0.00000000 Central Meridian: -105.00000000 Scale Factor: 0.99960000 Latitude of Origin: 0.00000000 Linear Unit: Meter Datum: World Geodetic System 1984 (WGS 1984) Prime Meridian: Greenwich Angular Unit: Degree Digital Form: Format Name: Shape file

  15. Dynamics of fault interaction - Parallel strike-slip faults

    NASA Astrophysics Data System (ADS)

    Harris, Ruth A.; Day, Steven M.

    1993-03-01

    We use a 2D finite difference computer program to study the effect of fault steps on dynamic ruptures. Our results indicate that a strike-slip earthquake is unlikely to jump a fault step wider than 5 km, in correlation with field observations of moderate to great-sized earthquakes. We also find that dynamically propagating ruptures can jump both compressional and dilational fault steps, although wider dilational fault steps can be jumped. Dilational steps tend to delay the rupture for a longer time than compressional steps do. This delay leads to a slower apparent rupture velocity in the vicinity of dilational steps. These 'dry' cases assumed hydrostatic or greater pore-pressures but did not include the effects of changing pore pressures. In an additional study, we simulated the dynamic effects of a fault rupture on 'undrained' pore fluids to test Sibson's (1985, 1986) suggestion that 'wet' dilational steps are a barrier to rupture propagation. Our numerical results validate Sibson's hypothesis.

  16. Fault deformation mechanisms and fault rocks in micritic limestones: Examples from Corinth rift normal faults

    NASA Astrophysics Data System (ADS)

    Bussolotto, M.; Benedicto, A.; Moen-Maurel, L.; Invernizzi, C.

    2015-08-01

    A multidisciplinary study investigates the influence of different parameters on fault rock architecture development along normal faults affecting non-porous carbonates of the Corinth rift southern margin. Here, some fault systems cut the same carbonate unit (Pindus), and the gradual and fast uplift since the initiation of the rift led to the exhumation of deep parts of the older faults. This exceptional context allows superficial active fault zones and old exhumed fault zones to be compared. Our approach includes field studies, micro-structural analyses (optical microscope and cathodoluminescence), geochemical analyses (δ13C, δ18O, trace elements) and fluid inclusion microthermometry of syn-kinematic calcite cements. Our main results, in a depth-window ranging from 0 m to about 2500 m, are: i) all cements precipitated from meteoric fluids in a closed or open circulation system depending on depth; ii) depth (in terms of P/T conditions) determines the development of some structures and their sealing; iii) lithology (marly levels) influences the type of structures and their cohesive/non-cohesive nature; iv) early distributed rather than final total displacement along the main fault plane is responsible for the fault zone architecture; v) petrophysical properties of each fault zone depend on the variable combination of these factors.

  17. On the application of a machine learning technique to fault diagnosis of power distribution lines

    SciTech Connect

    Togami, Masato; Abe, Norihiro; Kitahashi, T.; Ogawa, Harunao

    1995-10-01

    This paper presents one method for fault diagnosis of power distribution lines by using a decision tree. The conventional method, using a decision tree, applies only to discrete attribute values. To apply it to fault diagnosis of power distribution lines, in practice it must be revised in order to treat attributes whose values range over certain widths. This is because the sensor value or attribute value varies owing to the resistance of the fault point or is influenced by noise. The proposed method is useful when the attribute value has such a property, and it takes into consideration the cost of acquiring the information and the probability of the occurrence of a fault.
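    The extension from discrete attributes to continuous sensor values amounts to choosing threshold splits: a reading blurred by fault-point resistance or noise is compared against a learned cut point rather than matched exactly. A minimal sketch of one such split (the fault classes and current values are invented for illustration):

```python
def best_threshold(values, labels):
    """Pick the split value that best separates the classes: try the midpoint
    between each pair of consecutive sorted values and keep the one with the
    fewest misclassifications under majority labeling on each side."""
    pairs = sorted(zip(values, labels))
    best = (None, len(labels) + 1)
    for i in range(1, len(pairs)):
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2.0
        left = [lab for v, lab in pairs if v <= thr]
        right = [lab for v, lab in pairs if v > thr]
        errors = (len(left) - left.count(max(set(left), key=left.count)) +
                  len(right) - right.count(max(set(right), key=right.count)))
        if errors < best[1]:
            best = (thr, errors)
    return best[0]

# noisy fault-current readings (kA) for two hypothetical fault classes
currents = [1.1, 1.3, 1.2, 4.8, 5.2, 5.0]
faults = ["high-impedance"] * 3 + ["bolted"] * 3
thr = best_threshold(currents, faults)
```

    A full decision-tree learner applies this split search recursively, and, as the paper notes, can additionally weight splits by information-acquisition cost and fault probability.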

  18. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1987-01-01

    Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.
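    The voting scheme described above can be sketched in a few lines: run the independently developed versions on the same input and accept the majority output. The three "versions" here are trivial stand-ins for independently written programs.

```python
from collections import Counter

def vote(outputs):
    """Return the majority output, or raise if no strict majority exists."""
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority among versions")
    return value

version_a = lambda x: x * x       # version 1
version_b = lambda x: x ** 2      # version 2, independently written
version_c = lambda x: x * x + 1   # version 3, containing a fault

result = vote([v(3) for v in (version_a, version_b, version_c)])
```

    Two of the three versions agree, so the faulty version is outvoted; the experiment's finding of correlated faults across versions shows why this agreement cannot be taken for granted in practice.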

  19. To protect and to support: How citizenship and self-determination are legally constructed and managed in practice for people living with dementia in Sweden.

    PubMed

    Nedlund, Ann-Charlotte; Taghizadeh Larsson, Annika

    2016-05-01

    Since living with dementia implies increasing difficulties in taking charge of rights due to cognitive as well as communicative impairments, many people with dementia are vulnerable and in need of support in order to realize full citizenship. In Sweden, all adults' right to self-determination is strongly emphasized in law, regulations, and policies. Further, and in contrast to the situation in many other countries, people living with dementia cannot be declared incompetent to make decisions concerning social care, and their right to self-determination cannot legally be taken away. The article shows that in the Swedish welfare system, the focus is more on protecting the self-determination of citizens than on supporting people in making decisions and exercising citizenship. Subsequently, this causes legally constructed zones of inclusion and exclusion. This article examines and problematizes how different institutional contexts, legal constructions, norms, and practices in Sweden affect the management of issues concerning guardianship, supported decision-making and self-determination, and outlines the implications for people living with dementia. PMID:27170586

  20. Soft-Fault Detection Technologies Developed for Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2004-01-01

    The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large, electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern, digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed includes the detection of "soft faults," those not typically covered by current systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems (see the preceding photograph). Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.
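    The trend-analysis idea mentioned above (detecting low-level leakage faults and cable degradation by analyzing power system parameters over time) can be sketched as a least-squares slope check on a leakage-current history. The threshold and data below are illustrative, not values from the Glenn research.

```python
def leakage_trend(samples):
    """Least-squares slope of leakage current versus sample index."""
    n = len(samples)
    mean_x, mean_y = (n - 1) / 2.0, sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# daily leakage-current readings (mA); a slow upward drift suggests
# insulation degradation long before a hard fault occurs
history_mA = [0.10, 0.11, 0.10, 0.13, 0.15, 0.18, 0.22]
degrading = leakage_trend(history_mA) > 0.01  # flag drift above 10 uA/day
```

    Flagging the drift early lets the system schedule maintenance before the leakage grows into an arcing fault, which matters most for long-lived systems such as reusable launch vehicles.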

  1. Challenges and perspectives in the geological study of active faults.

    NASA Astrophysics Data System (ADS)

    Rizza, M.

    2011-12-01

    better model fault behavior. This is vital if we want to improve seismic hazard estimates and construction codes for a better protection of buildings and people living in earthquake zones.

  2. 20 CFR 404.507 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Officer § 404.507 Fault. Fault as used in without fault (see § 404.506 and 42 CFR 405.355) applies only to..., educational, or linguistic limitations (including any lack of facility with the English language)...

  3. Chip level simulation of fault tolerant computers

    NASA Technical Reports Server (NTRS)

    Armstrong, J. R.

    1982-01-01

Chip-level modeling techniques for the evaluation of fault-tolerant systems were researched. A fault-tolerant computer was modeled. An efficient approach to functional fault simulation was developed. Simulation software was also developed.

  4. What Can We Do to Bring the Sparkle Back into This Child's Eyes? Child Rights/Community Development Principles: Key Elements for a Strengths-Based Child Protection Practice

    ERIC Educational Resources Information Center

    Young, Susan; McKenzie, Margaret; Schjelderup, Liv; Omre, Cecilie; Walker, Shayne

    2014-01-01

Working from practice experiences, Social Work educators from Aotearoa/New Zealand, Norway and Western Australia have developed a framework for child welfare work. The framework brings together the Rights of the Child, Community Development and Child Protection. This article describes the principles and theoretical underpinnings of this…

  5. Accelerometer having integral fault null

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1995-01-01

An improved accelerometer is introduced. It comprises a transducer responsive to vibration in machinery, which produces an electrical signal related to the magnitude and frequency of the vibration, and a decoding circuit responsive to the transducer signal, which produces a first fault signal and from it a second fault signal in which ground shift effects are nullified.

  6. Experimental Fault Reactivation on Favourably and Unfavourably Oriented Faults

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Renner, J.; Sibson, R. H.

    2011-12-01

In this study, we assess the loading of faults to failure under different stress regimes in a triaxial deformation apparatus, both in dry and saturated conditions. We explore experimentally the reshear of an existing fault in various orientations for particular values of (σ_1 - σ_3) and σ_3' for contrasting loading systems - load-strengthening (equivalent to a thrust fault) with σ_1' increasing at constant σ_3', versus load-weakening (equivalent to a normal fault) with reducing σ_3' under constant σ_1'. Experiments are conducted on sawcut granite samples with fault angles at a variety of orientations relative to σ_1, ranging from an optimal orientation for reactivation to lockup angles where new faults are formed in preference to reactivating the existing sawcut orientation. Prefailure and postfailure behaviour is compared in terms of damage zone development via monitoring variations in ultrasonic velocity and acoustic emission behaviour. For example, damage surrounding unfavourably oriented faults is significantly higher than that seen around favourably oriented faults due to greater maximum stresses attained prior to unstable slip, which is reflected by the increased acoustic emission activity leading up to failure. In addition, we explore reshear conditions under an initial condition of (σ_1' = σ_3'), then inducing reshear on the existing fault first by increasing σ_1' (load-strengthening), then by decreasing σ_3' (load-weakening), again comparing relative damage zone development and acoustic emission levels. In saturated experiments, we explore the values of pore fluid pressure (P_f) needed for re-shear to occur in preference to the formation of a new fault. Typically a limiting factor in conventional triaxial experiments performed in compression is that P_f cannot exceed the confining pressure (σ_2 and σ_3). By employing a sample assembly that allows deformation while the loading piston is in extension, it enables us to achieve pore pressures in

  7. How do normal faults grow?

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette

    2016-04-01

Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, thus assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate

  8. Current Fault Management Trends in NASA's Planetary Spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.

    2009-01-01

    The key product of this three-day workshop is a NASA White Paper that documents lessons learned from previous missions, recommended best practices, and future opportunities for investments in the fault management domain. This paper summarizes the findings and recommendations that are captured in the White Paper.

  9. [Super-low-frequency spectrum analysis for buried faults in coalfield].

    PubMed

    Chen, Li; Qin, Qi-Ming; Zhen, Guang-Wei; Wang, Nan; Bai, Yan-Bing; Chen, Chao

    2013-08-01

Advance detection of buried faults in coalfields based on super-low-frequency (SLF) electromagnetic detection technology is still at the exploratory stage, although the technology has strong practical significance for coal mine production and design. Firstly, in this paper, SLF electromagnetic detection signals were collected in the study area. Spectrum analysis of the SLF signal by wavelet transform removed high-frequency noise. Secondly, the profile of the measuring line across the fault was analyzed and interpreted geologically, so that the SLF spectrum characteristics of the buried fault could be studied. Finally, combined with geological and seismic data, the characteristics and distribution of fault structures were verified in the mining area. The results show that the buried fault could be detected quickly and effectively by SLF electromagnetic detection. Hence, SLF electromagnetic detection technology is an effective method for buried fault detection. PMID:24159862

  10. Seismicity and recent faulting in eastern California and western and central Nevada: A preliminary report

    NASA Technical Reports Server (NTRS)

    Abdel-Gawad, M. (Principal Investigator); Silverstein, J.; Tubbesing, L.

    1973-01-01

The author has identified the following significant results. ERTS-1 imagery covering the eastern California-Nevada seismic belt was utilized to study the fault pattern in relation to the distribution of earthquake epicenters and Quaternary volcanic rocks. Many suspected faults not previously mapped were identified. These include several suspected shear zones in Nevada, faults showing evidence of recent breakage, and major lineaments. Highly seismic areas are generally characterized by Holocene faulting and Quaternary volcanic activity. However, several major fault segments showing evidence of recent breakage are associated with little or no seismicity. The tectonic pattern strongly suggests that the eastern California-Nevada seismic belt coincides with a major crustal rift associated with zones of lateral shear. New data on potentially active fault zones have direct practical applications in national and local earthquake hazard reduction programs. Positive contacts have been made with Kern and Ventura Counties to make results of this investigation available for application to their earthquake hazards definition projects.

  11. Enhanced Fault-Tolerant Quantum Computing in d-Level Systems

    NASA Astrophysics Data System (ADS)

    Campbell, Earl T.

    2014-12-01

    Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n = d - 1 qudits and can detect up to ~d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.

  12. The Lawanopo Fault, central Sulawesi, East Indonesia

    NASA Astrophysics Data System (ADS)

    Natawidjaja, Danny Hilman; Daryono, Mudrik R.

    2015-04-01

    The dominant tectonic-force factor in Sulawesi Island is the westward Bangga-Sula microplate tectonic intrusion, driven by the 12 mm/year westward motion of the Pacific Plate relative to Eurasia. This tectonic intrusion is accommodated by a series of major left-lateral strike-slip fault zones, including the Sorong, Sula-Sorong, Matano, Palukoro, and Lawanopo fault zones. The Lawanopo Fault has been considered an active left-lateral strike-slip fault. The natural exposures of the Lawanopo Fault are clear, marked by breaks and lineaments of topography along the fault line, and it also serves as a tectonic boundary between different rock assemblages. Inspections of IFSAR 5 m-grid DEM and field checks show that the fault traces are visible as lineaments of topographical slope breaks, linear ridges and stream valleys, and ridge neckings, and they are also associated with hydrothermal deposits and hot springs. These are characteristics of a young fault, so its morphological expressions can still be seen. However, fault scarps and other morpho-tectonic features appear to have been diffused by erosion and young sediment deposition. No fresh fault scarps, stream deflections or offsets, or any influence of fault movements on recent landscapes is observed along the fault traces. Hence, the fault does not show any evidence of recent activity. This is consistent with the lack of seismicity on the fault.

  13. Blind Source Separation and Dynamic Fuzzy Neural Network for Fault Diagnosis in Machines

    NASA Astrophysics Data System (ADS)

    Huang, Haifeng; Ouyang, Huajiang; Gao, Hongli

    2015-07-01

    Many assessment and detection methods are used to diagnose faults in machines. High accuracy in fault detection and diagnosis can be achieved by using numerical methods with noise-resistant properties. However, to some extent, noise always exists in measured data on real machines, which affects the identification results, especially in the diagnosis of early-stage faults. In view of this situation, a damage assessment method based on blind source separation and dynamic fuzzy neural network (DFNN) is presented in this paper to diagnose early-stage machinery faults. In the processing of measurement signals, blind source separation is adopted to reduce noise. Then sensitive features of these faults are obtained by extracting low-dimensional manifold characteristics from the signals. The model for fault diagnosis is established based on DFNN. Furthermore, on-line computation is accelerated by means of compressed sensing. Numerical vibration signals of ball screw fault modes are processed on the model for mechanical fault diagnosis, and the results are in good agreement with the actual condition even at the early stage of fault development. This detection method is very useful in practice and feasible for early-stage fault diagnosis.

  14. Faulted Sedimentary Rocks

    NASA Technical Reports Server (NTRS)

    2004-01-01

    27 June 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows some of the layered, sedimentary rock outcrops that occur in a crater located at 8°N, 7°W, in western Arabia Terra. Dark layers and dark sand have enhanced the contrast of this scene. In the upper half of the image, one can see numerous lines that offset the layers. These lines are faults along which the rocks have broken and moved. The regularity of layer thickness and erosional expression are taken as evidence that the crater in which these rocks occur might once have been a lake. The image covers an area about 1.9 km (1.2 mi) wide. Sunlight illuminates the scene from the lower left.

  15. Fault Tolerant State Machines

    NASA Technical Reports Server (NTRS)

    Burke, Gary R.; Taft, Stephanie

    2004-01-01

    State machines are commonly used to control sequential logic in FPGAs and ASICs. An errant state machine can cause considerable damage to the device it is controlling. For example, in space applications the FPGA might be controlling pyros, which when fired at the wrong time will cause a mission failure. Even a well-designed state machine can be subject to random errors as a result of SEUs from the radiation environment in space. There are various ways to encode the states of a state machine, and the type of encoding makes a large difference in the susceptibility of the state machine to radiation. In this paper we compare four methods of state machine encoding and find which method gives the best fault tolerance, as well as determining the resources needed for each method.
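
The encoding trade-off described above can be illustrated with a small, hypothetical sketch (not the paper's analysis): with a dense binary encoding every single bit flip lands on another valid state and passes unnoticed, while with one-hot encoding any single flip produces an invalid word that checking logic can trap.

```python
# Hypothetical sketch: why state encoding affects SEU tolerance.
# A flip that lands on another valid state is undetectable; a flip
# that produces an invalid word can be trapped by checking logic.

def single_bit_flips(word, width):
    """All words reachable from `word` by one bit flip."""
    return [word ^ (1 << i) for i in range(width)]

def detectable_fraction(valid_states, width):
    """Fraction of single-bit upsets that leave the valid-state set."""
    valid = set(valid_states)
    flips = [f for s in valid for f in single_bit_flips(s, width)]
    return sum(f not in valid for f in flips) / len(flips)

binary4 = list(range(4))               # 4 states densely packed in 2 bits
one_hot4 = [1 << i for i in range(4)]  # 4 states, one flip-flop per state

print(detectable_fraction(binary4, 2))   # 0.0 -> no flip is detectable
print(detectable_fraction(one_hot4, 4))  # 1.0 -> every flip is detectable
```

One-hot buys this detectability at the cost of more flip-flops, which is part of the resource trade-off the paper examines.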

  16. Arc fault detection system

    DOEpatents

    Jha, K.N.

    1999-05-18

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard. 1 fig.
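
As a rough illustration of the relay logic (not the patented circuit; the pickup threshold and values are assumptions): the relay compares supply-side current with the total load-side current and trips the upstream breaker when the difference exceeds a pickup value, since current escaping through an arc or ground path never returns through the load-side windings.

```python
# Illustrative sketch of a differential current check of the kind the
# relay performs. The pickup threshold is an assumed value.

PICKUP_A = 0.5  # assumed relay pickup threshold, amperes

def differential_trip(supply_current, load_currents, pickup=PICKUP_A):
    """Trip when supply and total load current disagree by more than
    the pickup value, indicating current leaking into an arc fault."""
    differential = abs(supply_current - sum(load_currents))
    return differential > pickup

print(differential_trip(100.0, [60.0, 40.0]))  # False: currents balance
print(differential_trip(100.0, [60.0, 38.0]))  # True: 2 A lost to a fault
```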

  17. Arc fault detection system

    DOEpatents

    Jha, Kamal N.

    1999-01-01

    An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard.

  18. Combining dynamical decoupling with fault-tolerant quantum computation

    SciTech Connect

    Ng, Hui Khoon; Preskill, John; Lidar, Daniel A.

    2011-07-15

    We study how dynamical decoupling (DD) pulse sequences can improve the reliability of quantum computers. We prove upper bounds on the accuracy of DD-protected quantum gates and derive sufficient conditions for DD-protected gates to outperform unprotected gates. Under suitable conditions, fault-tolerant quantum circuits constructed from DD-protected gates can tolerate stronger noise and have a lower overhead cost than fault-tolerant circuits constructed from unprotected gates. Our accuracy estimates depend on the dynamics of the bath that couples to the quantum computer and can be expressed either in terms of the operator norm of the bath's Hamiltonian or in terms of the power spectrum of bath correlations; we explain in particular how the performance of recursively generated concatenated pulse sequences can be analyzed from either viewpoint. Our results apply to Hamiltonian noise models with limited spatial correlations.

  19. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
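
The diagnosability notion summarized above can be illustrated with a toy sketch (an assumption-laden illustration, not the authors' framework): represent each fault by the set of residuals it can affect, and call two faults distinguishable when those sets differ. A single global model gives every fault the same signature, while decoupled Possible-Conflict residuals separate them.

```python
# Hedged sketch: diagnosability as distinguishability of residual
# signatures. Fault and residual names are hypothetical.
from itertools import combinations

def diagnosable_pairs(signatures):
    """Return (distinguishable pairs, total pairs) of faults."""
    pairs = list(combinations(signatures, 2))
    return sum(signatures[a] != signatures[b] for a, b in pairs), len(pairs)

# Hypothetical three-tank example: one global residual vs. three PCs.
global_model = {"f1": {"r"}, "f2": {"r"}, "f3": {"r"}}
possible_conflicts = {"f1": {"pc1"}, "f2": {"pc1", "pc2"}, "f3": {"pc3"}}

print(diagnosable_pairs(global_model))        # (0, 3): nothing isolable
print(diagnosable_pairs(possible_conflicts))  # (3, 3): all pairs isolable
```

The paper's event-based framework also uses the relative order of residual deviations, which this set-based toy omits.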

  20. Comparison of Cenozoic Faulting at the Savannah River Site to Fault Characteristics of the Atlantic Coast Fault Province: Implications for Fault Capability

    SciTech Connect

    Cumbest, R.J.

    2000-11-14

    This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coastal Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association, it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coastal Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential of the Atlantic Coastal Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of "not capable" reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated in order to assess their inclusion in the Atlantic Coast Fault Province and the related association of the "not capable" conclusion.

  1. Subaru FATS (fault tracking system)

    NASA Astrophysics Data System (ADS)

    Winegar, Tom W.; Noumaru, Junichi

    2000-07-01

    The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.
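
The routing workflow described above can be sketched minimally as follows; the fault types, provider names, and default route are hypothetical, not the actual FATS configuration:

```python
# Minimal sketch of the fault-routing behaviour: each fault type has a
# pre-selected solution provider, and resolved faults are appended to a
# knowledge base for future reference.

ROUTES = {"telescope": "mitsubishi_support", "computer": "it_group"}
knowledge_base = []

def report_fault(fault_type, description):
    """Record a fault and route it to its pre-selected provider."""
    provider = ROUTES.get(fault_type, "duty_manager")  # assumed default
    return {"type": fault_type, "desc": description, "assigned": provider}

def resolve_fault(fault, solution):
    """Attach the provider's solution and archive it for searching."""
    fault["solution"] = solution
    knowledge_base.append(fault)
    return fault

f = report_fault("telescope", "drive stalls at high elevation")
resolve_fault(f, "re-grease azimuth bearing")
print(f["assigned"], len(knowledge_base))  # mitsubishi_support 1
```

A production system would add the dual-language fields, operator scheduling, and status notifications the record describes.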

  2. The Dynamics of Fault Zones

    NASA Astrophysics Data System (ADS)

    Mooney, W. D.; Beroza, G.; Kind, R.

    2006-05-01

    Geophysical studies of the Earth's crust, including fault zones, have developed over the past 80 years. Among the first methods to be employed, seismic refraction and reflection profiles were recorded in the North American Gulf Coast to detect salt domes which were known to trap hydrocarbons. Seismic methods continue to be the most important geophysical technique in use today due to the methods' relatively high accuracy, high resolution, and great depth of penetration. However, in the past decade, a much expanded repertoire of seismic and non-seismic techniques has been brought to bear on studies of the Earth's crust and uppermost mantle. Important insights have also been obtained using seismic tomography, measurements of seismic anisotropy, fault zone guided waves, borehole surveys, and geo-electrical, magnetic, and gravity methods. In this presentation, we briefly review recent geophysical progress in the study of the structure and internal properties of fault zones, from their surface exposures to their lower limit. We focus on the structure of faults within continental crystalline and competent sedimentary rock rather than within the overlying, poorly consolidated sedimentary rocks. A significant body of literature exists for oceanic fracture zones; however, due to space limitations we restrict this review to faults within and at the margins of the continents. We also address some unanswered questions, including: 1) Does fault-zone complexity, as observed at the surface, extend to great depth, or do active faults become thin simple planes at depth? and 2) How is crustal deformation accommodated within the lithospheric mantle?

  3. Spotted in the News: Using Media Reports to Examine Leopard Distribution, Depredation, and Management Practices outside Protected Areas in Southern India.

    PubMed

    Athreya, Vidya; Srivathsa, Arjun; Puri, Mahi; Karanth, Krithi K; Kumar, N Samba; Karanth, K Ullas

    2015-01-01

    There is increasing evidence of large carnivore presence outside protected areas, globally. Although this spells conservation success through population recoveries, it makes carnivore persistence in human-use landscapes tenuous. The widespread distribution of leopards in certain regions of India typifies this problem. We obtained information on leopard-human interactions at a regional scale in Karnataka State, India, based on systematic surveys of local media reports. We applied an innovative occupancy modelling approach to map their distribution patterns and identify hotspots of livestock/human depredation. We also evaluated management responses like removals of 'problem' leopards through capture and translocations. Leopards occupied around 84,000 km2 or 47% of the State's geographic area, outside designated national parks and wildlife sanctuaries. Their presence was facilitated by the extent of vegetative cover (including irrigated croplands), rocky escarpments, and prey base in the form of feral and free-ranging dogs. Higher probabilities of livestock/human attacks by leopards were associated with similar ecological features as well as with capture/removals of leopards. Of the 56 cases of leopard removals reported, 91% did not involve human attacks, but followed livestock predation or only leopard sightings. The lack of knowledge on leopard ecology in human-use areas has resulted in unscientific interventions, which could aggravate the problem rather than mitigate it. Our results establish the presence of resident, breeding leopards in human-use areas. We therefore propose a shift in management focus, from current reactive practices like removal and translocation of leopards, to proactive measures that ensure safety of human lives and livelihoods. PMID:26556229

  4. Spotted in the News: Using Media Reports to Examine Leopard Distribution, Depredation, and Management Practices outside Protected Areas in Southern India

    PubMed Central

    Athreya, Vidya; Srivathsa, Arjun; Puri, Mahi; Karanth, Krithi K.; Kumar, N. Samba; Karanth, K. Ullas

    2015-01-01

    There is increasing evidence of large carnivore presence outside protected areas, globally. Although this spells conservation success through population recoveries, it makes carnivore persistence in human-use landscapes tenuous. The widespread distribution of leopards in certain regions of India typifies this problem. We obtained information on leopard-human interactions at a regional scale in Karnataka State, India, based on systematic surveys of local media reports. We applied an innovative occupancy modelling approach to map their distribution patterns and identify hotspots of livestock/human depredation. We also evaluated management responses like removals of ‘problem’ leopards through capture and translocations. Leopards occupied around 84,000 km2 or 47% of the State’s geographic area, outside designated national parks and wildlife sanctuaries. Their presence was facilitated by the extent of vegetative cover (including irrigated croplands), rocky escarpments, and prey base in the form of feral and free-ranging dogs. Higher probabilities of livestock/human attacks by leopards were associated with similar ecological features as well as with capture/removals of leopards. Of the 56 cases of leopard removals reported, 91% did not involve human attacks, but followed livestock predation or only leopard sightings. The lack of knowledge on leopard ecology in human-use areas has resulted in unscientific interventions, which could aggravate the problem rather than mitigate it. Our results establish the presence of resident, breeding leopards in human-use areas. We therefore propose a shift in management focus, from current reactive practices like removal and translocation of leopards, to proactive measures that ensure safety of human lives and livelihoods. PMID:26556229

  5. Fault Injection Campaign for a Fault Tolerant Duplex Framework

    NASA Technical Reports Server (NTRS)

    Sacco, Gian Franco; Ferraro, Robert D.; von llmen, Paul; Rennels, Dave A.

    2007-01-01

    Fault tolerance is an efficient approach adopted to avoid or reduce the damage of a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is software developed by the UCLA group [1, 2] that uses a fault-tolerant approach and allows two replicas of the same process to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas and restarts them if an inconsistency in their computation is detected. This approach is very cost efficient and can be adopted to control processes on spacecraft, where the fault rate produced by cosmic rays is not very high.
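
The replica-comparison behaviour described above might be sketched as follows (illustrative only; the function names are not the UCLA DF API): two replicas compute the same result, and a monitor returns the agreed value or triggers a restart of both replicas when they disagree.

```python
# Hedged sketch of a duplex monitor: accept matching replica results,
# otherwise restart both replicas from a clean state.

def monitor(result_a, result_b, restart):
    """Return the agreed result, or call `restart` on a mismatch."""
    if result_a == result_b:
        return result_a
    restart()  # recover both replicas; no answer is trusted this round
    return None

restarts = []
def restart():
    restarts.append("restart of both replicas")

print(monitor(42, 42, restart))  # 42: replicas agree
print(monitor(42, 43, restart))  # None, and a restart is logged
print(len(restarts))             # 1
```

Unlike triple modular redundancy, a duplex scheme can detect a divergence but cannot vote on which replica is correct, hence the restart.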

  6. Baby doe redux? The Department of Health and Human Services and the Born-Alive Infants Protection Act of 2002: a cautionary note on normative neonatal practice.

    PubMed

    Sayeed, Sadath A

    2005-10-01

    The Born-Alive Infants Protection Act (BAIPA), passed by Congress in 2002, has attracted little publicity. Its purposes were, in part, "to repudiate the flawed notion that a child's entitlement to the protections of the law is dependent on whether that child's mother or others want him or her." Understood as antiabortion rhetoric, the bill raised little concern among physicians at the time of legislative hearings and passed in both Houses by overwhelming majorities, hardly suggesting contentious legislation. After its signing into law, the Neonatal Resuscitation Program (NRP) Steering Committee issued an opinion stating that "[BAIPA] should not in any way affect the approach that physicians currently follow with respect to the extremely premature infant." This interpretation of the law, however, may have been short sighted. In April 2005, the US Department of Health and Human Services (DHHS) brought life to the BAIPA, announcing: "As a matter of law and policy, [DHHS] will investigate all circumstances where individuals and entities are reported to be withholding medical care from an infant born alive in potential violation of federal statutes." The agency issued instructions to state officials on how the definitional provision within the BAIPA interacts with the Emergency Medical Treatment and Labor Act (EMTALA) and the Child Abuse Prevention and Treatment Act (CAPTA). These interagency memoranda potentially resurrect dormant governmental oversight of newborn-treatment decisions and thus may have influence over normative neonatal practice. Under the BAIPA, the DHHS interprets EMTALA to protect all "born-alive" infants; hospitals and physicians violating regulatory requirements face agency-sanctioned monetary penalties or a "private right of action by any individual harmed as a direct result." According to its memorandum, the DHHS will investigate allegations of EMTALA violations whenever it finds evidence that a newborn was not provided with at least a medical

  7. Detection of Rooftop Cooling Unit Faults Based on Electrical Measurements

    SciTech Connect

    Armstrong, Peter R.; Laughman, C R.; Leeb, S B.; Norford, L K.

    2006-01-31

    Non-intrusive load monitoring (NILM) is accomplished by sampling voltage and current at high rates and reducing the resulting start transients or harmonic contents to concise "signatures". Changes in these signatures can be used to detect, and in many cases directly diagnose, equipment and component faults associated with roof-top cooling units. Use of the NILM for fault detection and diagnosis (FDD) is important because (1) it complements other FDD schemes that are based on thermo-fluid sensors and analyses and (2) it is minimally intrusive (one measuring point in the relatively protected confines of the control panel) and therefore inherently reliable. This paper describes changes in the power signatures of fans and compressors that were found, experimentally and theoretically, to be useful for fault detection.
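
As a rough illustration of the signature-comparison step (not the authors' actual feature set; the harmonic values and tolerance are invented): a measured harmonic-content signature is compared against a healthy baseline, and a fault is flagged when the deviation exceeds a tolerance.

```python
# Illustrative sketch: flag a fault when a harmonic-content signature
# drifts too far from its healthy baseline. All values are invented.
import math

def signature_deviation(baseline, measured):
    """Euclidean distance between two harmonic-content signatures."""
    return math.sqrt(sum((b - m) ** 2 for b, m in zip(baseline, measured)))

def fault_detected(baseline, measured, tol=0.1):
    return signature_deviation(baseline, measured) > tol

healthy = [1.00, 0.12, 0.05]    # fundamental, 3rd, 5th harmonic (assumed)
worn_belt = [1.00, 0.25, 0.08]  # hypothetical degraded-fan signature

print(fault_detected(healthy, [1.01, 0.12, 0.05]))  # False: within tolerance
print(fault_detected(healthy, worn_belt))           # True
```

In practice the diagnosis step maps *which* components of the signature moved to specific mechanical or electrical faults.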

  8. Finding faults with the data

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    Rudolph Giuliani and Hillary Rodham Clinton are crisscrossing upstate New York looking for votes in the U.S. Senate race. Also cutting back and forth across upstate New York are hundreds of faults of a kind characterized by very sporadic seismic activity, according to Robert Jacobi, professor of geology at the University at Buffalo (UB), who conducted research with fellow UB geology professor John Fountain. "We have proof that upstate New York is crisscrossed by faults," Jacobi said. "In the past, the Appalachian Plateau—which stretches from Albany to Buffalo—was considered a pretty boring place structurally without many faults or folds of any significance."

  9. Method of locating ground faults

    NASA Astrophysics Data System (ADS)

    Patterson, Richard L.; Rose, Allen H.; Cull, Ronald C.

    1994-11-01

    The present invention discloses a method of detecting and locating current imbalances, such as ground faults, in multiwire systems using the Faraday effect. As an example, for 2-wire or 3-wire (1 ground wire) electrical systems, light is transmitted along an optical path which is exposed to the magnetic fields produced by currents flowing in the hot and neutral wires. The rotations produced by these two magnetic fields cancel each other; therefore, light on the optical path registers no net effect from either. However, when a ground fault occurs, the optical path is exposed to a net Faraday-effect rotation due to the current imbalance, thereby revealing the ground fault.
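
    The cancellation logic reduces to a differential-current test. The sketch below is a toy model only: the gain standing in for the Verdet-constant-dependent sensitivity of a real fiber sensor, the threshold, and the function names are assumptions, not part of the patent.

```python
def net_rotation(i_hot, i_neutral, gain=1.0e-3):
    """Net Faraday rotation (radians) on a path threading both conductors;
    equal and opposite currents produce rotations that cancel exactly."""
    return gain * (i_hot - i_neutral)

def ground_fault(i_hot, i_neutral, threshold_rad=1.0e-4):
    """A net rotation above the noise floor means some current is
    returning outside the monitored pair, i.e. through a ground fault."""
    return abs(net_rotation(i_hot, i_neutral)) > threshold_rad
```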

  10. Method of locating ground faults

    NASA Technical Reports Server (NTRS)

    Patterson, Richard L. (Inventor); Rose, Allen H. (Inventor); Cull, Ronald C. (Inventor)

    1994-01-01

    The present invention discloses a method of detecting and locating current imbalances such as ground faults in multiwire systems using the Faraday effect. As an example, for 2-wire or 3-wire (1 ground wire) electrical systems, light is transmitted along an optical path which is exposed to magnetic fields produced by currents flowing in the hot and neutral wires. The rotations produced by these two magnetic fields cancel each other, therefore light on the optical path does not read the effect of either. However, when a ground fault occurs, the optical path is exposed to a net Faraday effect rotation due to the current imbalance thereby exposing the ground fault.

  11. Granular packings and fault zones

    PubMed

    Astrom; Herrmann; Timonen

    2000-01-24

    The failure of a two-dimensional packing of elastic grains is analyzed using a numerical model. The packing fails through formation of shear bands or faults. During failure there is a separation of the system into two grain-packing states. In a shear band, local "rotating bearings" are spontaneously formed. The bearing state is favored in a shear band because it has a low stiffness against shearing. The "seismic activity" distribution in the packing has the same characteristics as that of the earthquake distribution in tectonic faults. The directions of the principal stresses in a bearing are reminiscent of those found at the San Andreas Fault. PMID:11017335

  12. Fault-free performance validation of fault-tolerant multiprocessors

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Feather, Frank E.; Grizzaffi, Ann Marie; Segall, Zary Z.; Siewiorek, Daniel P.

    1987-01-01

    A validation methodology for testing the performance of fault-tolerant computer systems was developed and applied to the Fault-Tolerant Multiprocessor (FTMP) at NASA-Langley's AIRLAB facility. This methodology was claimed to be general enough to apply to any ultrareliable computer system. The goal of this research was to extend the validation methodology and to demonstrate the robustness of the validation methodology by its more extensive application to NASA's Fault-Tolerant Multiprocessor System (FTMP) and to the Software Implemented Fault-Tolerance (SIFT) Computer System. Furthermore, the performance of these two multiprocessors was compared by conducting similar experiments. An analysis of the results shows high level language instruction execution times for both SIFT and FTMP were consistent and predictable, with SIFT having greater throughput. At the operating system level, FTMP consumes 60% of the throughput for its real-time dispatcher and 5% on fault-handling tasks. In contrast, SIFT consumes 16% of its throughput for the dispatcher, but consumes 66% in fault-handling software overhead.

  13. Orion GN&C Fault Management System Verification: Scope And Methodology

    NASA Technical Reports Server (NTRS)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.

  14. Normal faults geometry and morphometry on Mars

    NASA Astrophysics Data System (ADS)

    Vaz, D. A.; Spagnuolo, M. G.; Silvestro, S.

    2014-04-01

    In this report, we show how the geometry and degradation history of normal fault scarps can be assessed using high-resolution imagery and topography. We show how the initial geometry of the faults can be inferred from faulted craters, and we demonstrate how a comparative morphometric analysis of fault scarps can be used to study erosion rates through time on Mars.

  15. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see §...

  16. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  17. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  18. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  19. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  20. 22 CFR 17.3 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the...

  1. 20 CFR 410.561b - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Fault. 410.561b Section 410.561b Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Payment of Benefits § 410.561b Fault. Fault as used in without fault (see §...

  2. A fault-tolerant clock

    NASA Technical Reports Server (NTRS)

    Daley, W. P.; Mckenna, J. F., Jr.

    1973-01-01

    Computers must operate correctly even though one or more of their components have failed. An electronic clock has been designed to be insensitive to the occurrence of faults; it is a substantial advance over any known clock.

  3. Spontaneous rupture on irregular faults

    NASA Astrophysics Data System (ADS)

    Liu, C.

    2014-12-01

    It is now known (e.g., Robinson et al., 2006) that when ruptures propagate around bends, the rupture velocity decreases. In the extreme case, a large bend in the fault can stop the rupture. We develop a 2-D finite difference method to simulate spontaneous dynamic rupture on irregular faults. This method is based on a second-order leap-frog finite difference scheme on a uniform mesh of triangles. A relaxation method is used to generate an irregular, fault-geometry-conforming mesh from the uniform mesh. Through this numerical coordinate mapping, the elastic wave equations are transformed and solved in a curvilinear coordinate system. Extensive numerical experiments using the linear slip-weakening law will be shown to demonstrate the effect of fault geometry on rupture properties. A long-term goal is to simulate the strong ground motion in the vicinity of bends, jogs, etc.

  4. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
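
    The five gate types can be illustrated with a small evaluator. This is not the compiler's FORTRAN solution technique; it is a brute-force enumeration that is exact only for small trees with independent basic events, and the tree encoding is an assumption of the sketch.

```python
from itertools import product

# Gate evaluators over Boolean child values (True = event occurs).
GATES = {
    "AND": all,
    "OR": any,
    "XOR": lambda vs: sum(vs) == 1,   # exclusive OR: exactly one input true
    "NOT": lambda vs: not vs[0],      # INVERT gate, single child
}

def evaluate(node, world):
    """node is a basic-event name, ('GATE', [children]), or ('MOFN', m, [children])."""
    if isinstance(node, str):
        return world[node]
    if node[0] == "MOFN":
        _, m, children = node
        return sum(evaluate(c, world) for c in children) >= m
    return GATES[node[0]]([evaluate(c, world) for c in node[1]])

def top_event_probability(tree, probs):
    """Exact top-event probability for independent basic events,
    by enumerating all 2^n combinations of event states."""
    names = sorted(probs)
    total = 0.0
    for states in product([False, True], repeat=len(names)):
        world = dict(zip(names, states))
        p = 1.0
        for name, occurred in world.items():
            p *= probs[name] if occurred else 1.0 - probs[name]
        if evaluate(tree, world):
            total += p
    return total
```

    For example, the tree OR(A, AND(B, C)) with P(A)=0.1, P(B)=0.2, P(C)=0.5 has top-event probability 0.1 + 0.1 − 0.01 = 0.19.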

  5. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2009-05-05

    A method determines a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.
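
    A toy rendition of the face-diagnosis idea follows. The report format and the decision rule (a node no neighbor can reach is faulty; an isolated one-way failure implicates the link) are assumptions for illustration, not the patented method.

```python
def diagnose_face(nodes, reports):
    """reports[(a, b)] is True if node a successfully heard from
    neighbor b across the cell boundary.  A node that no neighbor can
    reach is flagged faulty; a failure on a single link, while the
    target node remains reachable by others, is blamed on the link."""
    faulty_nodes, faulty_links = set(), set()
    for n in nodes:
        incoming = [ok for (a, b), ok in reports.items() if b == n]
        if incoming and not any(incoming):
            faulty_nodes.add(n)
    for (a, b), ok in reports.items():
        if not ok and b not in faulty_nodes:
            faulty_links.add((a, b))
    return faulty_nodes, faulty_links
```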

  6. Accident Fault Trees for Defense Waste Processing Facility

    SciTech Connect

    Sarrack, A.G.

    1999-06-22

    The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support systems failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the ''Facility Commitments'' section. The purpose of the ''Assumptions'' section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).

  7. Weakening inside incipient thrust fault

    NASA Astrophysics Data System (ADS)

    Lacroix, B.; Tesei, T.; Collettini, C.; Oliot, E.

    2013-12-01

    In fold-and-thrust belts, shortening is mainly accommodated by thrust faults that nucleate along décollement levels. Geological and geophysical evidence suggests that these faults might be weak because of a combination of processes such as pressure solution, phyllosilicate reorientation and delamination, and fluid pressurization. In this study we aim to decipher the processes and the kinetics responsible for the weakening of tectonic décollements. We studied the Millaris thrust (Southern Pyrenees): a fault representative of a décollement in its incipient stage. This fault accommodated a total shortening of about 30 meters and consists of a 10-m-thick, intensively foliated phyllonite developed inside a homogeneous marly unit. Detailed chemical and mineralogical analyses have been carried out to characterize the mineralogical changes, the chemical transfers and the volume change in the fault zone compared to the non-deformed parent sediments. We also carried out microstructural analysis on natural and experimentally deformed rocks. Illite and chlorite are the main hydrous minerals. Inside the fault zone, illite minerals are oriented along the schistosity whereas chlorite coats the shear surfaces. Mass-balance calculations demonstrated a volume loss of up to 50% for calcite inside the fault zone (and therefore a relative increase in phyllosilicate content) because of calcite pressure-solution mechanisms. We performed friction experiments in a biaxial deformation apparatus using intact rocks from the Millaris fault and its host sediments, sheared in the in-situ geometry. We imposed a range of normal stresses (10 to 50 MPa), sliding velocity steps (3-100 μm/s) and slide-hold-slide sequences (3 to 1000 s hold) under saturated conditions. Mechanical results demonstrate that both fault rocks and parent sediments are weaker than average geological materials (friction μ<<0.6) and have velocity-strengthening behavior because of the presence of phyllosilicate horizons. Fault rocks are

  8. Seismic fault zone trapped noise

    NASA Astrophysics Data System (ADS)

    Hillers, G.; Campillo, M.; Ben-Zion, Y.; Roux, P.

    2014-07-01

    Systematic velocity contrasts across and within fault zones can lead to head and trapped waves that provide direct information on structural units that are important for many aspects of earthquake and fault mechanics. Here we construct trapped waves from the scattered seismic wavefield recorded by a fault zone array. The frequency-dependent interaction between the ambient wavefield and the fault zone environment is studied using properties of the noise correlation field. A critical frequency fc ≈ 0.5 Hz defines a threshold above which the in-fault scattered wavefield has increased isotropy and coherency compared to the ambient noise. The increased randomization of in-fault propagation directions produces a wavefield that is trapped in a waveguide/cavity-like structure associated with the low-velocity damage zone. Dense spatial sampling allows the resolution of a near-field focal spot, which emerges from the superposition of a collapsing, time reversed wavefront. The shape of the focal spot depends on local medium properties, and a focal spot-based fault normal distribution of wave speeds indicates a ~50% velocity reduction consistent with estimates from a far-field travel time inversion. The arrival time pattern of a synthetic correlation field can be tuned to match properties of an observed pattern, providing a noise-based imaging tool that can complement analyses of trapped ballistic waves. The results can have wide applicability for investigating the internal properties of fault damage zones, because mechanisms controlling the emergence of trapped noise have less limitations compared to trapped ballistic waves.

  9. Fault Tree Analysis: A Bibliography

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Fault tree analysis is a top-down approach to the identification of process hazards. It is one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis and risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.

  10. Hardware Fault Simulator for Microprocessors

    NASA Technical Reports Server (NTRS)

    Hess, L. M.; Timoc, C. C.

    1983-01-01

    A breadboarded circuit is faster and more thorough than a software simulator. An elementary fault simulator for an AND gate uses three gates and a shift register to simulate stuck-at-one or stuck-at-zero conditions at its inputs and output. Experimental results showed the hardware fault simulator for a microprocessor gave faster results than a software simulator, by two orders of magnitude, with one test being applied every 4 microseconds.
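
    The stuck-at model being simulated can be shown in software (the point of the paper is that hardware does the same job two orders of magnitude faster). A sketch with assumed helper names, for a single AND gate:

```python
def and_gate(a, b):
    return a & b

# Fault injectors: None = healthy, 0 = stuck-at-zero, 1 = stuck-at-one.
STUCK_AT = {None: lambda v: v, 0: lambda v: 0, 1: lambda v: 1}

def simulate(a, b, fault_a=None, fault_b=None, fault_out=None):
    """AND gate with optional stuck-at faults on either input or the output."""
    a = STUCK_AT[fault_a](a)
    b = STUCK_AT[fault_b](b)
    return STUCK_AT[fault_out](and_gate(a, b))

def detects(test_vector, **fault):
    """A test vector detects a fault iff the faulty output differs
    from the fault-free output."""
    a, b = test_vector
    return simulate(a, b) != simulate(a, b, **fault)
```

    For an AND gate, only the vector (1, 1) exposes a stuck-at-zero input, which is why exhaustive or carefully chosen test vectors matter.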

  11. Fault-tolerant rotary actuator

    DOEpatents

    Tesar, Delbert

    2006-10-17

    A fault-tolerant actuator module, in a single containment shell, containing two actuator subsystems that are either asymmetrically or symmetrically laid out is provided. Fault tolerance in the actuators of the present invention is achieved by the employment of dual sets of equal resources. Dual resources are integrated into single modules, with each having the external appearance and functionality of a single set of resources.

  12. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    NASA Astrophysics Data System (ADS)

    Yim, Keun Soo

    program states that included dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.

  13. Dynamic test input generation for multiple-fault isolation

    NASA Technical Reports Server (NTRS)

    Schaefer, Phil

    1990-01-01

    Recent work in Causal Reasoning has provided practical techniques for multiple-fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle: using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications, such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test-input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.
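
    One common way to make "choose the test in response to current fault probabilities" concrete is an expected-entropy criterion: pick the test input whose predicted outcomes best split the hypothesis distribution. This is a generic information-theoretic sketch under that assumption, not MPC's actual mathematics; all names and the outcome-model format are hypothetical.

```python
import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_entropy(hypotheses, outcome_model, test):
    """outcome_model[(test, h)] = outcome predicted under hypothesis h.
    Returns the expected posterior entropy after applying `test`."""
    by_outcome = {}
    for h, p in hypotheses.items():
        by_outcome.setdefault(outcome_model[(test, h)], {})[h] = p
    exp_h = 0.0
    for group in by_outcome.values():
        mass = sum(group.values())
        posterior = {h: p / mass for h, p in group.items()}
        exp_h += mass * entropy(posterior)
    return exp_h

def best_test(hypotheses, outcome_model, tests):
    """Dynamically pick the most informative test input."""
    return min(tests, key=lambda t: expected_entropy(hypotheses, outcome_model, t))
```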

  14. Multiple Embedded Processors for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.
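
    The two-processor-plus-comparator prototype implements duplex lockstep: identical inputs to both copies, outputs compared each step, with mismatch detection but no correction (that requires the planned four-processor version). A minimal behavioral sketch, with assumed names:

```python
def lockstep(step_a, step_b, inputs):
    """Feed identical inputs to two redundant processor models and
    compare outputs each cycle.  A mismatch signals an upset; with only
    two copies we can detect the error but not tell which copy is wrong."""
    for x in inputs:
        out_a, out_b = step_a(x), step_b(x)
        if out_a != out_b:
            return ("mismatch", x)
    return ("ok", None)
```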

  15. Model-Based Fault Diagnosis for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Green, Michael D.; Duyar, Ahmet; Litt, Jonathan S.

    1998-01-01

    Tests are described which, when used to augment the existing periodic maintenance and pre-flight checks of T700 engines, can greatly improve the chances of uncovering a problem compared to current practice. These test signals can be used to expose and differentiate between faults in various components by comparing the responses of particular engine variables to those expected. The responses can be processed on-line in a variety of ways which have been shown to reveal and identify faults. The combination of specific test signals and on-line processing methods provides an ad hoc approach to the isolation of faults which might not otherwise be detected during pre-flight checkout.

  16. Distribution Line Protection Relaying Scheme Based on Complex Number Operation

    NASA Astrophysics Data System (ADS)

    Tsuji, Kouichi

    In Japan, operational principles from the mechanical-relay era, such as the directional ground relay (DG) and the overcurrent relay (OC), are still used without modification for the protection of distribution lines. We now face various problems, such as ground faults with high fault resistance in systems isolated from the ground. A new operational principle to solve these problems is summarized below.
    · A fault is simulated by replacing a branched distribution line with a one-section single distribution line in which loads are concentrated at its terminals.
    · Lines with a one-line-to-ground fault are detected on the basis of the relationship between the fault-phase voltage E before the fault and the fault current, using Ho and Thevenin's theorem.
    · Faults are assumed to occur at both terminals of the section; if the signs of the obtained fault reactance Xf at the two terminals are opposite, it is judged that a fault point where Xf = 0 holds exists within the section.
    In a simulation using the program we developed, it was confirmed that internal and external ground faults with a high fault resistance of 0-30 kΩ were detected without error.
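
    The section test in the last bullet is a sign-bracketing argument: if the apparent fault reactance Xf computed at the two terminals has opposite signs, the Xf = 0 point (the fault) must lie between them. A sketch of just that decision step; how Xf is estimated from measured phasors is omitted here and the function name is an assumption:

```python
def fault_in_section(xf_sending, xf_receiving):
    """Internal-fault decision: apparent fault reactances computed at the
    two section terminals bracket Xf = 0 iff their signs are opposite."""
    return xf_sending * xf_receiving < 0
```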

  17. Software Fault Tolerance: A Tutorial

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2000-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
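
    Of the multiversion techniques reviewed, recovery blocks are the easiest to sketch: run the primary version, check its result with an acceptance test, and fall back to alternate versions on failure. A minimal skeleton under those assumptions (the helper name is made up):

```python
def recovery_block(acceptance_test, primary, *alternates):
    """Run the primary; if its result fails the acceptance test (or the
    variant raises), discard the result and try each alternate in turn."""
    for variant in (primary, *alternates):
        try:
            result = variant()
        except Exception:
            continue                      # a crash counts as a failed variant
        if acceptance_test(result):
            return result
    raise RuntimeError("all variants failed the acceptance test")
```

    The scheme rests on the same assumption as N-version programming: independently built versions should fail differently, so at least one is expected to pass the acceptance test.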

  18. Passive fault current limiting device

    DOEpatents

    Evans, D.J.; Cha, Y.S.

    1999-04-06

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel-connected coils of copper, a high-temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils, which results in an increase in the impedance of the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment. 6 figs.

  19. Passive fault current limiting device

    DOEpatents

    Evans, Daniel J.; Cha, Yung S.

    1999-01-01

    A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel-connected coils of copper, a high-temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux densities produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils, which results in an increase in the impedance of the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment.

  20. Aeromagnetic anomalies over faulted strata

    USGS Publications Warehouse

    Grauch, V.J.S.; Hudson, Mark R.

    2011-01-01

    High-resolution aeromagnetic surveys are now an industry standard and they commonly detect anomalies that are attributed to faults within sedimentary basins. However, detailed studies identifying geologic sources of magnetic anomalies in sedimentary environments are rare in the literature. Opportunities to study these sources have come from well-exposed sedimentary basins of the Rio Grande rift in New Mexico and Colorado. High-resolution aeromagnetic data from these areas reveal numerous, curvilinear, low-amplitude (2–15 nT at 100-m terrain clearance) anomalies that consistently correspond to intrasedimentary normal faults (Figure 1). Detailed geophysical and rock-property studies provide evidence for the magnetic sources at several exposures of these faults in the central Rio Grande rift (summarized in Grauch and Hudson, 2007, and Hudson et al., 2008). A key result is that the aeromagnetic anomalies arise from the juxtaposition of magnetically differing strata at the faults as opposed to chemical processes acting at the fault zone. The studies also provide (1) guidelines for understanding and estimating the geophysical parameters controlling aeromagnetic anomalies at faulted strata (Grauch and Hudson), and (2) observations on key geologic factors that are favorable for developing similar sedimentary sources of aeromagnetic anomalies elsewhere (Hudson et al.).

  1. Normal fault earthquakes or graviquakes

    PubMed Central

    Doglioni, C.; Carminati, E.; Petricca, P.; Riguzzi, F.

    2015-01-01

    Earthquakes dissipate energy through elastic waves. Canonically, this is the elastic energy accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source for hangingwall fault collapse. The gravitational potential is about 100 times larger than the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger their magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at about 60° but can be shallower in low-friction rocks. In low static-friction rocks, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved by graviquakes is smaller than in the other tectonic settings, the activated fault being at most about three times the hypocentre depth, which explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors. PMID:26169163

  2. Normal fault earthquakes or graviquakes.

    PubMed

    Doglioni, C; Carminati, E; Petricca, P; Riguzzi, F

    2015-01-01

    Earthquakes dissipate energy through elastic waves. Canonically, this is the elastic energy accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source for the collapse of the hangingwall along the fault. The available gravitational potential energy is about 100 times larger than the energy corresponding to the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger the magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at about 60° but can be shallower in low-friction rocks. In rocks with low static friction, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved in graviquakes is smaller than in the other tectonic settings, the activated fault extending at most to about three times the hypocentre depth, which explains their higher b-value and the lower magnitude of the largest recorded events. Having a different phenomenology, graviquakes show peculiar precursors. PMID:26169163

  3. Tutorial: Advanced fault tree applications using HARP

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

    1993-01-01

    Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
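HARP's dynamic-gate solver cannot be reproduced from this abstract; as a baseline, though, a *static* fault tree over independent basic events (the kind of model such tools extend with dynamic gates) reduces to simple gate algebra. A minimal sketch, with a hypothetical system as the example:

```python
# Static fault-tree gate algebra for independent basic events:
# an OR gate fails if any input fails; an AND gate fails only if all do.
def p_or(*ps):
    """Failure probability of an OR gate over independent inputs."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    """Failure probability of an AND gate over independent inputs."""
    q = 1.0
    for p in ps:
        q *= p
    return q

# Hypothetical system (not from the paper): fails if the power supply fails
# OR both redundant processors fail.
p_power, p_cpu = 1e-4, 1e-3
p_system = p_or(p_power, p_and(p_cpu, p_cpu))
print(p_system)   # ≈ 1.01e-4: dominated by the single-point power failure
```

Dynamic gates (functional dependency, spares, sequence enforcement) are exactly what breaks this simple independence algebra, which is why HARP-style hybrid solvers are needed.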

  4. Nonlinear Network Dynamics on Earthquake Fault Systems

    SciTech Connect

    Rundle, Paul B.; Rundle, John B.; Tiampo, Kristy F.; Sa Martins, Jorge S.; McGinnis, Seth; Klein, W.

    2001-10-01

    Earthquake faults occur in interacting networks having emergent space-time modes of behavior not displayed by isolated faults. Using simulations of the major faults in southern California, we find that the physics depends on the elastic interactions among the faults defined by network topology, as well as on the nonlinear physics of stress dissipation arising from friction on the faults. Our results have broad applications to other leaky threshold systems such as integrate-and-fire neural networks.

  5. Data-driven and adaptive statistical residual evaluation for fault detection with an automotive application

    NASA Astrophysics Data System (ADS)

    Svärd, Carl; Nyberg, Mattias; Frisk, Erik; Krysander, Mattias

    2014-03-01

    An important step in model-based fault detection is residual evaluation, where residuals are evaluated with the aim to detect changes in their behavior caused by faults. To handle residuals subject to time-varying uncertainties and disturbances, which indeed are present in practice, a novel statistical residual evaluation approach is presented. The main contribution is to base the residual evaluation on an explicit comparison of the probability distribution of the residual, estimated online using current data, with a no-fault residual distribution. The no-fault distribution is based on a set of a priori known no-fault residual distributions, and is continuously adapted to the current situation. As a second contribution, a method is proposed for estimating the required set of no-fault residual distributions off-line from no-fault training data. The proposed residual evaluation approach is evaluated with measurement data on a residual for fault detection in the gas-flow system of a Scania truck diesel engine. Results show that small faults can be reliably detected with the proposed approach in cases where regular methods fail.
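The paper's adaptive scheme is more elaborate than the abstract allows reproducing; as a minimal sketch of the core idea (estimate the residual distribution online and alarm when it diverges from a no-fault reference), here is a two-sample Kolmogorov-Smirnov comparison. The residual model and threshold are illustrative, not from the paper:

```python
# Sketch: compare the online residual distribution against a no-fault
# reference distribution; raise an alarm when the KS distance is large.
import numpy as np

def ks_statistic(sample, reference):
    """Two-sample Kolmogorov-Smirnov statistic (max distance between
    the two empirical CDFs, evaluated on the pooled sample points)."""
    grid = np.sort(np.concatenate([sample, reference]))
    cdf_s = np.searchsorted(np.sort(sample), grid, side="right") / len(sample)
    cdf_r = np.searchsorted(np.sort(reference), grid, side="right") / len(reference)
    return float(np.max(np.abs(cdf_s - cdf_r)))

rng = np.random.default_rng(0)
no_fault = rng.normal(0.0, 1.0, 2000)   # a priori no-fault residual sample
healthy  = rng.normal(0.0, 1.0, 200)    # current residual window, no fault
faulty   = rng.normal(1.0, 1.0, 200)    # current window biased by a fault

threshold = 0.2                          # in practice tuned on training data
print(ks_statistic(healthy, no_fault) > threshold)   # False: no alarm
print(ks_statistic(faulty, no_fault) > threshold)    # True: fault flagged
```

The paper's contribution goes further by adapting the no-fault reference to the current operating point, which this fixed-reference sketch omits.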

  6. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    PubMed

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it enhances system reliability and availability. Moreover, knowledge of fault-mode behavior is extremely important for improving system protection and fault-tolerant control. Fault detection and diagnosis in squirrel-cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high-resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA to the detection of abnormal mechanical conditions that may lead to induction machine failure; specifically, it is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method is used to estimate the fundamental frequency and the fault-related frequencies. An amplitude estimator of the fault characteristic frequencies is then proposed, and a fault indicator is derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach in which air-gap eccentricity emulates bearing faults. Experimental data are then used for validation purposes. PMID:27038887
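The bearing fault characteristic frequencies such an estimator searches for follow the standard kinematic formulas (textbook relations, not specific to the paper's MD MUSIC method). A sketch with hypothetical bearing geometry:

```python
# Standard bearing fault characteristic frequencies from shaft speed and
# bearing geometry (the frequencies a detector would look for in the
# current or vibration spectrum).
import math

def bearing_fault_frequencies(fr, n_balls, d_ball, d_pitch, contact_deg=0.0):
    """fr: shaft rotation frequency [Hz]; d_ball and d_pitch in the same
    units; contact_deg: contact angle in degrees."""
    ratio = (d_ball / d_pitch) * math.cos(math.radians(contact_deg))
    return {
        "BPFO": (n_balls / 2) * fr * (1 - ratio),   # outer-race defect
        "BPFI": (n_balls / 2) * fr * (1 + ratio),   # inner-race defect
        "BSF": (d_pitch / (2 * d_ball)) * fr * (1 - ratio**2),  # ball spin
        "FTF": (fr / 2) * (1 - ratio),              # cage (train) frequency
    }

# Hypothetical bearing: 8 balls, 7.9 mm ball diameter, 34.5 mm pitch
# diameter, zero contact angle, 25 Hz (1500 rpm) shaft speed.
freqs = bearing_fault_frequencies(fr=25.0, n_balls=8, d_ball=7.9, d_pitch=34.5)
print({k: round(v, 1) for k, v in freqs.items()})
# {'BPFO': 77.1, 'BPFI': 122.9, 'BSF': 51.7, 'FTF': 9.6}
```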

  7. Quantifying Fault Slip Rate Variations and Earthquake Clustering

    NASA Astrophysics Data System (ADS)

    Cowie, Patience; Roberts, Gerald; Phillips, Richard

    2010-05-01

    An outstanding challenge to our understanding of fault behaviour remains the appropriate characterisation and mechanistic understanding of episodic fault activity and temporal variations in slip rate. This gap in understanding inhibits our ability to reconcile geodetic and geologic strain rates and hence predict future earthquakes. Existing models for earthquake recurrence and seismic hazard are based on the key principle that a mean recurrence interval, Tmean, can be defined. For areas of active crustal deformation where there are several active faults, and/or the regional strain rate is relatively low, historical earthquake catalogues are unable to provide adequate constraints on Tmean because the earthquake cycle of some faults is longer than the catalogue itself. Paleoseismological trench studies have extended the window of observation back a few thousand years and suggest that large temporal variability in recurrence interval occurs on individual faults. Current practice in seismic hazard analysis is to characterise variability in recurrence interval by defining the Coefficient of Variation (CV) for a sequence of earthquakes, where CV = σ/Tmean and σ is the standard deviation of the inter-event times. Several studies acknowledge that CV values for earthquake recurrence intervals are poorly constrained, yet small differences in CV can lead to order-of-magnitude differences in conditional probability calculations. We use a numerical fault growth model, which includes earthquake rupture, healing and elastic interaction, to investigate the controls on CV, both spatially across the fault array and through time as the fault pattern evolves. We find that CV varies inversely with fault slip rate, which itself varies as a function of fault zone complexity, i.e., when the strain is partitioned on more than one structure CV increases. However, we also find that CV is not the most informative parameter to measure in our model output. 
Firstly, it only takes into account
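The CV statistic defined in this record is straightforward to compute from an event catalogue; a minimal sketch with illustrative event times:

```python
# Coefficient of variation of earthquake recurrence: CV = sigma / Tmean,
# computed over the inter-event times of a dated event sequence.
import numpy as np

def recurrence_cv(event_times):
    """event_times: dates of successive earthquakes on one fault."""
    dt = np.diff(np.sort(event_times))   # inter-event times
    return float(np.std(dt) / np.mean(dt))

periodic = [0, 100, 200, 300, 400]    # clock-like recurrence: CV = 0
clustered = [0, 20, 200, 230, 400]    # temporally clustered sequence
print(recurrence_cv(periodic))        # 0.0
print(round(recurrence_cv(clustered), 3))   # 0.752
```

A Poisson (memoryless) process gives CV = 1, so values well below 1 suggest quasi-periodic behaviour and values near or above 1 suggest clustering.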

  8. Development of Hydrologic Characterization Methodology of Faults: Outline of the Project in Berkeley, California

    NASA Astrophysics Data System (ADS)

    Goto, J.; Miwa, T.; Tsuchi, H.; Karasaki, K.

    2009-12-01

    The Nuclear Waste Management Organization of Japan (NUMO), after volunteer municipalities come forward, will start a three-stage program for selecting an HLW and TRU waste repository site. Experience from site characterization programs around the world shows that the hydrologic properties of faults are among the most important parameters in the early stage of such a program. Numerous faults of interest can be expected in an investigation area of several tens of square kilometers. It is, however, impossible to characterize all of these faults within a limited time and budget. This raises the problem, for repository design and safety assessment, that unrealistic or over-conservative results may have to be accepted if a single model or single set of parameters is used for all faults in the area. We therefore seek to develop an efficient and practical methodology for characterizing the hydrologic properties of faults. This project is a five-year program started in 2007, and it comprises the development of the basic methodology through a literature study and its verification through field investigations. The literature study aims to classify faults by correlating their geological features with hydraulic properties, to identify the most efficient technologies for fault characterization, and to develop a work flow diagram. The field investigation starts from selection of a site and fault(s), followed by analyses of existing site data, surface geophysics, geological mapping, trenching, water sampling, a series of borehole investigations, and modeling/analyses. Based on the results of the field investigations, we plan to develop a systematic hydrologic characterization methodology for faults. 
A classification method that correlates combinations of geological features (rock type, fault displacement, fault type, position in a fault zone, fracture zone width, damage zone width) with widths of high permeability zones around a fault zone was proposed through a survey on available documents of the site

  9. Pest Management Practices for the Military: Novel Field Studies to Develop Methods to Protect Deployed Troops from Mosquito, Filth/Biting Flies, and Sand Fly Vectors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    New techniques that we developed to protect deployed military troops from the threat of vector-borne diseases, and that are also applicable for use by civilian mosquito control programs, are described. Techniques illustrated include (1) novel military personal protection methods, (2) barrier treatments...

  10. A Framework For Evaluating Comprehensive Fault Resilience Mechanisms In Numerical Programs

    SciTech Connect

    Chen, S.; Peng, L.; Bronevetsky, G.

    2015-01-09

    As HPC systems approach exascale, their circuit features will shrink while their overall size grows, all at a fixed power limit. These trends imply that soft faults in electronic circuits will become an increasingly significant problem for applications that run on these systems, causing them to occasionally crash or, worse, silently return incorrect results. This is motivating extensive work on application resilience to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and resilience techniques. Effective use of such techniques requires a detailed understanding of (1) which vulnerable parts of the application are most worth protecting and (2) the performance and resilience impact of fault resilience mechanisms on the application. This paper presents FaultTelescope, a tool that combines these two analyses and generates actionable insights by presenting, in an intuitive way, application vulnerabilities and the impact of fault resilience mechanisms on applications.
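FaultTelescope itself is not described in enough detail here to reproduce; as a sketch of the kind of algorithm-specific detection mechanism whose cost and coverage such a tool would evaluate, consider checksum-based (ABFT-style) detection of a silent error in a matrix multiply:

```python
# ABFT-style silent-error detection: the column sums of C = A @ B must equal
# (1^T A) B, so a single corrupted entry in C violates the checksum.
import numpy as np

def checked_matmul(A, B, tol=1e-8):
    C = A @ B
    # Checksum invariant: sum over rows of C equals (column sums of A) @ B.
    expected = A.sum(axis=0) @ B
    if not np.allclose(C.sum(axis=0), expected, atol=tol):
        raise RuntimeError("silent fault detected in matmul result")
    return C

rng = np.random.default_rng(1)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
C = checked_matmul(A, B)            # clean run passes the checksum

C_bad = A @ B
C_bad[2, 2] += 1.0                  # inject a single silent error
print(np.allclose(C_bad.sum(axis=0), A.sum(axis=0) @ B))  # False: detectable
```

Tools like the one described weigh the runtime cost of such checks against the vulnerability of each code region to decide where protection is worthwhile.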

  11. Study of Stand-Alone Microgrid under Condition of Faults on Distribution Line

    NASA Astrophysics Data System (ADS)

    Malla, S. G.; Bhende, C. N.

    2014-10-01

    The behavior of a stand-alone microgrid is analyzed under fault conditions on the distribution feeders. Since the battery cannot maintain the dc-link voltage within limits during a fault, a resistive dump-load control is presented to do so. An inverter control is proposed to maintain balanced voltages at the point of common coupling (PCC) under unbalanced load conditions and to reduce the voltage unbalance factor (VUF) at load points. The proposed inverter control also protects the inverter from high fault currents. The existing maximum power point tracking (MPPT) algorithm is modified to limit the generator speed during a fault. Extensive simulation results using MATLAB/SIMULINK establish that the performance of the controllers is quite satisfactory under different fault conditions as well as unbalanced load conditions.
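The VUF the inverter control tries to reduce is conventionally defined via symmetrical components as VUF = |V−|/|V+| × 100%. A sketch of that computation (the phasor values are illustrative, not from the paper):

```python
# Voltage unbalance factor via the symmetrical-component (Fortescue)
# transform of the three phase-voltage phasors.
import cmath, math

def vuf_percent(va, vb, vc):
    """va, vb, vc: complex phasors of the three phase voltages."""
    a = cmath.exp(2j * math.pi / 3)           # 120-degree rotation operator
    v_pos = (va + a * vb + a**2 * vc) / 3     # positive-sequence component
    v_neg = (va + a**2 * vb + a * vc) / 3     # negative-sequence component
    return 100.0 * abs(v_neg) / abs(v_pos)

a = cmath.exp(2j * math.pi / 3)
balanced = (230 + 0j, 230 * a**2, 230 * a)      # ideal three-phase set
unbalanced = (230 + 0j, 200 * a**2, 250 * a)    # sagged and swelled phases
print(round(vuf_percent(*balanced), 2))     # 0.0
print(round(vuf_percent(*unbalanced), 2))   # 6.41
```

Standards typically require VUF on the order of a few percent at most, which is why an unbalanced-load-compensating inverter control targets this quantity.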

  12. Fault evolution in volcanic tuffs and quartz-rich eolian sandstone as mechanical analogs for faulting in Martian pyroclastic deposits

    NASA Astrophysics Data System (ADS)

    Okubo, C. H.

    2014-12-01

    In order to establish a foundation for studies of faulting in Martian rocks and soils in volcanic terrain, the distribution of brittle strain around faults within the North Menan Butte Tuff in the eastern Snake River Plain, Idaho and the Joe Lott Tuff Member of the Mount Belknap Volcanics, Utah, has been recently described. These studies employed a combination of macroscopic and microscopic observations, including measurements of in situ permeability as a proxy for non-localized brittle deformation of the host rock. In areas where the tuff retained its primary granular nature at the time of deformation, initial plastic yielding in both tuffs occurred along deformation bands. Both compactional and dilational types of deformation bands were observed, and faulting occurred along clusters of deformation bands. Where secondary alteration processes imparted a massive texture to the tuff, brittle deformation was accommodated along fractures. Host-rock permeability exhibits little variation from non-deformed values in the North Menan Butte Tuff, whereas host rock permeability is reduced by roughly an order of magnitude through compaction alone (no alteration) in the Joe Lott Tuff. To create a bridge between these observations in tuff and the more substantial body of work centered on deformation band formation and faulting in quartz-rich sandstones, the same techniques employed in the North Menan Butte Tuff and the Joe Lott Tuff have also been applied to a kilometer-scale fault in the Jurassic Navajo Sandstone in the Waterpocket Fold, Utah. These observations demonstrate that the manifestation of strain and evolution of faulting in the Mars-analog tuffs are comparable to that in quartz-rich sandstones. Therefore, current understanding of brittle deformation in quartz-rich sandstones can be used to inform investigations into fault growth within porous tuffs on Mars. 
A discussion of these observations, practical limitations, and directions for future work is presented here.

  13. Boolean modeling and fault diagnosis in oxidative stress response

    PubMed Central

    2012-01-01

    Background: Oxidative stress is a consequence of normal and abnormal cellular metabolism and is linked to the development of human diseases. The effective functioning of the pathway responding to oxidative stress protects the cellular DNA against oxidative damage; conversely the failure of the oxidative stress response mechanism can induce aberrant cellular behavior leading to diseases such as neurodegenerative disorders and cancer. Thus, understanding the normal signaling present in oxidative stress response pathways and determining possible signaling alterations leading to disease could provide us with useful pointers for therapeutic purposes. Using knowledge of oxidative stress response pathways from the literature, we developed a Boolean network model whose simulated behavior is consistent with earlier experimental observations from the literature. Concatenating the oxidative stress response pathways with the PI3-Kinase-Akt pathway, the oxidative stress is linked to the phenotype of apoptosis, once again through a Boolean network model. Furthermore, we present an approach for pinpointing possible fault locations by using temporal variations in the oxidative stress input and observing the resulting deviations in the apoptotic signature from the normally predicted pathway. Such an approach could potentially form the basis for designing more effective combination therapies against complex diseases such as cancer. Results: In this paper, we have developed a Boolean network model for the oxidative stress response. This model was developed based on pathway information from the current literature pertaining to oxidative stress. Where applicable, the behaviour predicted by the model is in agreement with experimental observations from the published literature. We have also linked the oxidative stress response to the phenomenon of apoptosis via the PI3k/Akt pathway. Conclusions: It is our hope that some of the additional predictions here, such as those pertaining to the
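The paper's pathway model is far larger than can be reproduced here; as an illustration of the fault-diagnosis idea (a fault location modeled as a node stuck at a value, observed through the apoptosis output), here is a toy synchronous Boolean network with invented node names and update rules:

```python
# Toy Boolean network (illustrative only, NOT the paper's pathway model):
# stress activates a sensor, the sensor activates repair, and apoptosis
# fires only while stress is present but repair is not.
def step(s, stuck=None):
    """One synchronous update. stuck: optional (node, value) pair modeling
    a fault location as a stuck-at node."""
    nxt = {
        "stress": s["stress"],                        # input node, held fixed
        "sensor": s["stress"],                        # activated by stress
        "repair": s["sensor"],                        # downstream of sensor
        "apoptosis": s["stress"] and not s["repair"],  # unrepaired stress
    }
    if stuck is not None:
        node, value = stuck
        nxt[node] = value
    return nxt

def run(stress, stuck=None, steps=5):
    s = {"stress": stress, "sensor": False, "repair": False, "apoptosis": False}
    for _ in range(steps):
        s = step(s, stuck)
    return s["apoptosis"]

print(run(stress=True))                           # False: repair rescues the cell
print(run(stress=True, stuck=("repair", False)))  # True: fault drives apoptosis
```

Comparing the observed output signature against the fault-free prediction, as above, is the essence of the fault-pinpointing approach the abstract describes.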

  14. Fault-Tolerant Coding for State Machines

    NASA Technical Reports Server (NTRS)

    Naegle, Stephanie Taft; Burke, Gary; Newell, Michael

    2008-01-01

    Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, by which is meant that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Fault-tolerant coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state machine encoding and satisfies criteria 1 through 3 if all possible states are defined. Binary encoding is a binary count of the state machine number in sequence; the table represents an eight-state example. In one-hot encoding, N bits are used to represent N states: All except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.

  15. Faulting processes at high fluid pressures: An example of fault valve behavior from the Wattle Gully Fault, Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Cox, Stephen F.

    1995-07-01

    The internal structures of the Wattle Gully Fault provide insights about the mechanics and dynamics of fault systems exhibiting fault valve behavior in high fluid pressure regimes. This small, high-angle reverse fault zone developed at temperatures near 300°C in the upper crust, late during mid-Devonian regional crustal shortening in central Victoria, Australia. The Wattle Gully Fault forms part of a network of faults that focused upward migration of fluids generated by metamorphism and devolatilisation at deeper crustal levels. The fault has a length of around 800 m and a maximum displacement of 50 m and was oriented at 60° to 80° to the maximum principal stress during faulting. The structure was therefore severely misoriented for frictional reactivation. This factor, together with the widespread development of steeply dipping fault fill quartz veins and associated subhorizontal extension veins within the fault zone, indicates that faulting occurred at low shear stresses and in a near-lithostatic fluid pressure regime. The internal structures of these veins, and overprinting relationships between veins and faults, indicate that vein development was intimately associated with faulting and involved numerous episodes of fault dilatation and hydrothermal sealing and slip, together with repeated hydraulic extension fracturing adjacent to slip surfaces. The geometries, distribution and internal structures of veins in the Wattle Gully Fault Zone are related to variations in shear stress, fluid pressure, and near-field principal stress orientations during faulting. Vein opening is interpreted to have been controlled by repeated fluid pressure fluctuations associated with cyclic, deformation-induced changes in fault permeability during fault valve behavior. Rates of recovery of shear stress and fluid pressure after rupture events are interpreted to be important factors controlling time dependence of fault shear strength and slip recurrence. Fluctuations in shear stress

  16. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1991-01-01

    Twenty independently developed but functionally equivalent software versions were used to investigate and compare empirically some properties of N-version programming, Recovery Block, and Consensus Recovery Block, using the majority and consensus voting algorithms. This was also compared with another hybrid fault-tolerant scheme called Acceptance Voting, using dynamic versions of consensus and majority voting. Consensus voting provides adaptation of the voting strategy to varying component reliability, failure correlation, and output space characteristics. Since failure correlation among versions effectively reduces the cardinality of the space in which the voter makes decisions, consensus voting is usually preferable to simple majority voting in any fault-tolerant system. When versions have considerably different reliabilities, the version with the best reliability will perform better than any of the fault-tolerant techniques.
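A minimal sketch of the two basic voters compared in the study, under the simplifying assumption that version outputs can be compared for exact equality:

```python
# Majority vs. consensus voting over the outputs of N independent versions.
from collections import Counter

def majority_vote(outputs):
    """Accept a value only if a strict majority of versions agree;
    otherwise signal no decision (None)."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

def consensus_vote(outputs):
    """Accept the output of the largest agreeing group, even when that
    group is below a majority."""
    return Counter(outputs).most_common(1)[0][0]

outputs = [42, 42, 7, 13, 9]     # 5 versions: 2 agree, 3 fail differently
print(majority_vote(outputs))    # None: 2 of 5 is not a strict majority
print(consensus_vote(outputs))   # 42: the largest agreement group wins
```

This illustrates the abstract's point: when failures are weakly correlated (failed versions disagree with each other), consensus voting still recovers the correct answer where majority voting cannot decide.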

  17. Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda; Whitman, Gerek

    2016-01-01

    Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IV&V) Program, with Software Assurance Research Program support, extracted FM architectures across the IV&V portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IV&V projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. Additionally, the role FM has with regard to strengthened security requirements, with potential to advance overall asset protection of flight software systems, is being addressed with the development of an adverse conditions database encompassing flight software vulnerabilities. Capitalizing on the established framework, this TR suite provides assurance capability for a variety of FM architectures and varied development approaches. Research results are being disseminated across NASA, other agencies, and the

  18. Fault zone connectivity: slip rates on faults in the San Francisco Bay Area, California.

    PubMed

    Bilham, R; Bodin, P

    1992-10-01

    The slip rate of a fault segment is related to the length of the fault zone of which it is part. In turn, the slip rate of a fault zone is related to its connectivity with adjoining or contiguous fault zones. The observed variation in slip rate on fault segments in the San Francisco Bay area in California is consistent with connectivity between the Hayward, Calaveras, and San Andreas fault zones. Slip rates on the southern Hayward fault taper northward from a maximum of more than 10 millimeters per year and are sensitive to the active length of the Maacama fault. PMID:17835127

  19. Reconsidering Fault Slip Scaling

    NASA Astrophysics Data System (ADS)

    Gomberg, J. S.; Wech, A.; Creager, K. C.; Obara, K.; Agnew, D. C.

    2015-12-01

    The scaling of fault slip events, given by the relationship between the scalar moment M0 and duration T, potentially provides key constraints on the underlying physics controlling slip. Many studies have suggested that measurements of M0 and T are related as M0 = Kf·T³ for 'fast' slip events (earthquakes) and M0 = Ks·T for 'slow' slip events, in which Kf and Ks are proportionality constants, although some studies have inferred intermediate relations. Here 'slow' and 'fast' refer to slip front propagation velocities: either so slow that seismic radiation is too small or too long period to be measurable, or fast enough that dynamic processes may be important for the slip process and measurable seismic waves radiate. Numerous models have been proposed to explain the differing M0-T scaling relations. We show that a single, simple dislocation model of slip events within a bounded slip zone may explain nearly all M0-T observations. Rather than different scaling for fast and slow populations, we suggest that within each population the scaling changes from M0 ∝ T³ to M0 ∝ T when the slipping area reaches the slip zone boundaries and transitions from unbounded, 2-dimensional to bounded, 1-dimensional growth. This transition has not been apparent previously for slow events because data have sampled only the bounded regime, and it may be obscured for earthquakes when observations from multiple tectonic regions are combined. We have attempted to sample the expected transition between bounded and unbounded regimes for the slow slip population, measuring tremor cluster parameters from catalogs for Japan and Cascadia and using them as proxies for small slow slip event characteristics. For fast events we employed published earthquake slip models. Observations corroborate our hypothesis, but highlight observational difficulties. 
We find that M0-T observations for both slow and fast slip events, spanning 12 orders of magnitude in M0, are consistent with a single model based on dislocation
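A piecewise form of the suggested single scaling model can be written down directly; the constants and crossover duration below are illustrative placeholders, not fitted values from the study:

```python
# Moment-duration scaling with a crossover: M0 grows as T^3 while the
# slipping patch is unbounded (2-D growth) and as T after the patch fills
# the slip-zone width (1-D growth), continuous at the crossover T_c.
def moment(T, T_c, K):
    """Scalar moment as a function of duration T.
    T_c: crossover duration; K: proportionality constant (illustrative)."""
    if T <= T_c:
        return K * T**3                  # unbounded, 2-D growth regime
    return K * T_c**2 * T                # bounded, 1-D regime; matches at T_c

K, T_c = 1.0e15, 10.0                    # hypothetical constants
below = moment(5.0, T_c, K)              # cubic branch
above = moment(20.0, T_c, K)             # linear branch
```

Because each population (fast or slow) only crosses T_c within its own observable range, data sampled entirely on one side of the crossover show pure T³ or pure T scaling, which is the paper's explanation for the apparently distinct laws.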

  20. Rupture interaction with fault jogs

    NASA Astrophysics Data System (ADS)

    Sibson, Richard H.

    Propagation of moderate to large earthquake ruptures within major transcurrent fault systems is affected by their large-scale brittle infrastructure, comprising echelon segmentation and curvature of principal slip surfaces (PSS) within typically ~1 km wide main fault zones. These PSS irregularities are classified into dilational and antidilational fault jogs depending on the tendency for areal increase or reduction, respectively, across the jog structures. High precision microearthquake studies show that the jogs often extend throughout the seismogenic regime to depths of around 10 km. On geomorphic evidence, the larger jogs may persist for periods >10⁵ years. While antidilational jogs form obstacles to both short- and long-term displacements, dilational jogs appear to act as kinetic barriers capable of perturbing or arresting earthquake ruptures, but allowing time-dependent slip transfer. In the case of antidilational jogs slip transfer is accommodated by widespread subsidiary faulting, but for dilational jogs it additionally involves extensional fracture opening localized in the echelon stepover. In fluid-saturated crust, the rapid opening of linking extensional fracture systems to allow passage of earthquake ruptures is opposed by induced suctions which scale with the width of the jog. Rupture arrest at dilational jogs may then be followed by delayed slip transfer as fluid pressures reequilibrate by diffusion. Aftershock distributions associated with the different fault jogs reflect these contrasts in their internal structure and mechanical response.

  1. Stress-time context of fault permeability at the Krasnokamensk Area SE Transbaikalia

    NASA Astrophysics Data System (ADS)

    Petrov, V. A.; Lespinasse, M.; Poluektov, V. V.; Cuney, M.; Nasimov, R. M.; Hammer, J.; Schukin, S. I.

    2013-03-01

    The main aim of this contribution is to combine data on the successive stages of deformation, the inflow and migration of palaeofluids, and the accumulation of uranium-bearing mineral fill within the faulted and fractured environment of the Krasnokamensk Area, SE Transbaikalia, Russia. The object of examination is a network of fault zones transecting a Proterozoic-Paleozoic granitic unit at the northwestern margin of the Mesozoic uranium-bearing Streltsovskaya caldera. Stress- and permeability-time relationships in the faulted and fractured zones were considered taking into account the stress and strain dependencies within fluid-saturated rock masses at the crustal seismogenic level. The stress-time sequence of fault-zone permeability was reconstructed using a set of fieldwork and laboratory methods, including structural-geological survey, fault-slip data analysis, mineral-chemical diagnostics, microstructural observations, and radiographic studies. Practical applications of the obtained data to uranium mining and environmental issues are indicated in the conclusion.

  2. Fault Detection of Reciprocating Compressors using a Model from Principal Component Analysis of Vibrations

    NASA Astrophysics Data System (ADS)

    Ahmed, M.; Gu, F.; Ball, A. D.

    2012-05-01

    Traditional vibration monitoring techniques have found it difficult to determine a set of effective diagnostic features due to the high complexity of the vibration signals originating from the many different impact sources and the wide range of practical operating conditions. In this paper Principal Component Analysis (PCA) is used for selecting vibration features and detecting different faults in a reciprocating compressor. Vibration datasets were collected from the compressor under the baseline condition and five common faults: valve leakage, inter-cooler leakage, suction valve leakage, a loose drive belt combined with inter-cooler leakage, and a loose drive belt combined with suction valve leakage. A model using five PCs was developed from the baseline datasets, and the presence of faults can be detected by comparing the T² and Q values computed from the features of fault vibration signals with corresponding thresholds developed from the baseline data. However, the Q-statistic procedure produces better detection, as it can separate the five faults completely.
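A sketch of the monitoring scheme described above, with simulated features and an empirical threshold standing in for the paper's compressor data and analytic control limits:

```python
# PCA-based fault detection: fit a principal-component model on baseline
# data, then flag a fault when the Q statistic (squared prediction error)
# of a new sample exceeds a threshold derived from the baseline.
import numpy as np

rng = np.random.default_rng(2)

# Simulated baseline: 8 vibration features driven by 5 latent factors + noise.
W = rng.standard_normal((5, 8))
baseline = rng.standard_normal((500, 5)) @ W + 0.05 * rng.standard_normal((500, 8))

mu = baseline.mean(axis=0)
_, S, Vt = np.linalg.svd(baseline - mu, full_matrices=False)
P = Vt[:5].T                                  # loadings of the 5 retained PCs
pc_var = S[:5] ** 2 / (len(baseline) - 1)     # variance captured per PC

def t2_and_q(x):
    xc = x - mu
    t = P.T @ xc                              # scores in the PC subspace
    t2 = float(np.sum(t**2 / pc_var))         # Hotelling T^2
    q = float(np.sum((xc - P @ t) ** 2))      # squared residual (Q / SPE)
    return t2, q

# Empirical 99th-percentile Q limit from the baseline itself (a real monitor
# would use an analytic control limit).
q_limit = np.quantile([t2_and_q(x)[1] for x in baseline], 0.99)

healthy = rng.standard_normal(5) @ W + 0.05 * rng.standard_normal(8)
faulty = healthy + 2.0 * rng.standard_normal(8)   # fault breaks correlations
print(t2_and_q(faulty)[1] > q_limit)              # True: Q flags the fault
```

Q is sensitive here because a fault breaks the correlation structure the PC model captured, pushing energy into the residual subspace, consistent with the abstract's finding that the Q-statistic separates the faults best.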

  3. Method and system for controlling a permanent magnet machine during fault conditions

    DOEpatents

    Krefta, Ronald John; Walters, James E.; Gunawan, Fani S.

    2004-05-25

    A method and system for controlling a permanent magnet machine driven by an inverter are provided. The method allows for monitoring a signal indicative of a fault condition, and for generating during the fault condition a signal configured to maintain a field-weakening current even though electrical power from an energy source is absent during the fault condition. The level of the maintained field-weakening current enables the machine to operate in a safe mode, so that the inverter is protected from excess voltage.

  4. The Development of NASA's Fault Management Handbook

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine

    2011-01-01

    Disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns: (1) Often faults aren't addressed until nominal spacecraft design is fairly stable. (2) Design relegated to after-the-fact patchwork, Band-Aid approach. Progress is being made on a number of fronts outside of Handbook effort: (1) Processes, Practices and Tools being developed at some Centers and Institutions (2) Management recognition. Constellation FM roles, Discovery/New Frontiers mission reviews (3) Potential Technology solutions. New approaches could avoid many current pitfalls (3a) New FM architectures, including model-based approach integrated with NASA's MBSE (Model-Based System Engineering) efforts (3b) NASA's Office of the Chief Technologist: FM identified in seven of NASA's 14 Space Technology Roadmaps. Opportunity to coalesce and establish thrust area to progressively develop new FM techniques. FM Handbook will help ensure that future missions do not encounter same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start: (1) Still need Version 2 Agency-wide FM Handbook to expand Handbook to other areas, especially crewed missions. (2) Still need to reach out to other organizations to develop common understanding and vocabulary. Handbook doesn't/can't address all Workshop recommendations. Still need to identify how to address programmatic and infrastructure issues.

  5. Fiber Bragg Grating sensor for fault detection in radial and network transmission lines.

    PubMed

    Moghadas, Amin A; Shadaram, Mehdi

    2010-01-01

    In this paper, a fiber-optic-based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in a magnetostrictive material, which is then detected by a Fiber Bragg Grating (FBG). The interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signal. Any surge in the magnetic field corresponds to an increased fault current at a certain location. Fault location can also be precisely determined with an artificial neural network (ANN) algorithm, which can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG, which can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems. PMID:22163416
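    The sensing principle rests on the standard strain response of an FBG. The sketch below uses the textbook relation for the strain-induced Bragg shift; the wavelength, photo-elastic coefficient and detection threshold are illustrative values, not figures from the paper.

```python
LAMBDA_B = 1550e-9   # nominal Bragg wavelength in metres (illustrative)
P_E = 0.22           # effective photo-elastic coefficient of silica fibre

def bragg_shift(strain):
    """Wavelength shift for an axial strain: d_lambda = lambda_B * (1 - p_e) * eps."""
    return LAMBDA_B * (1 - P_E) * strain

def exceeds_fault_threshold(strain, threshold_m=0.1e-9):
    """A surge in magnetostrictive strain large enough to shift the reflected
    wavelength past the interrogator's resolution (assumed 0.1 nm here) is
    read as a fault-current event."""
    return abs(bragg_shift(strain)) > threshold_m
```

For example, 100 microstrain produces a shift of roughly 0.12 nm at 1550 nm, comfortably above a 0.1 nm interrogator threshold.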

  6. Fiber Bragg Grating Sensor for Fault Detection in Radial and Network Transmission Lines

    PubMed Central

    Moghadas, Amin A.; Shadaram, Mehdi

    2010-01-01

    In this paper, a fiber-optic-based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in a magnetostrictive material, which is then detected by a Fiber Bragg Grating (FBG). The interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signal. Any surge in the magnetic field corresponds to an increased fault current at a certain location. Fault location can also be precisely determined with an artificial neural network (ANN) algorithm, which can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG, which can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems. PMID:22163416

  7. Fiber Bragg grating sensor for fault detection in high voltage overhead transmission lines

    NASA Astrophysics Data System (ADS)

    Moghadas, Amin

    2011-12-01

    A fiber-optic-based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in a magnetostrictive material, which is then detected by fiber Bragg grating (FBG) sensors. The interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signals. Any surge in the magnetic field corresponds to an increased fault current at a certain location. Fault location can also be precisely determined with an artificial neural network (ANN) algorithm, which can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG sensors, which can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems.

  8. Faulting in porous carbonate grainstones

    NASA Astrophysics Data System (ADS)

    Tondi, Emanuele; Agosta, Fabrizio

    2010-05-01

    In the recent past, a new faulting mechanism has been documented within porous carbonate grainstones. This mechanism is due to strain localization into narrow tabular bands characterized by both volumetric and shear strain; for this reason, these features are named compactive shear bands. In the field, compactive shear bands are easily recognizable because they are light coloured with respect to the parent rock and/or show a positive relief owing to their increased resistance to weathering. Both characteristics, light colour and positive relief, are a consequence of the compaction processes that characterize these bands, which are the simplest structural elements that form within porous carbonate grainstones. With ongoing deformation, single compactive shear bands, which accommodate only a few mm of displacement, may evolve into zones of compactive shear bands and, finally, into well-developed faults characterized by slip surfaces and fault rocks. Field analysis conducted in key areas of Italy allowed us to document different modes of interaction and linkage among the compactive shear bands: (i) simple divergence of two compactive shear bands from an original one, (ii) extensional and contractional jogs formed by two continuous, interacting compactive shear bands, and (iii) eye structures formed by collinear interacting compactive shear bands, as already described for deformation bands in sandstones. The last two types of interaction may localize the formation of compaction bands, which are characterized by a pronounced component of compaction and a negligible component of shearing, and/or pressure solution seams. All the aforementioned types of interaction and linkage can occur at any deformation stage: single bands, zones of bands or well-developed faults. The transition from one deformation process to another, which is likely controlled by changes in the material properties, is recorded by different ratios and

  9. Intelligent fault-tolerant controllers

    NASA Technical Reports Server (NTRS)

    Huang, Chien Y.

    1987-01-01

    A system with fault tolerant controls is one that can detect, isolate, and estimate failures and perform necessary control reconfiguration based on this new information. Artificial intelligence (AI) is concerned with semantic processing, and it has evolved to include the topics of expert systems and machine learning. This research represents an attempt to apply AI to fault tolerant controls, hence, the name intelligent fault tolerant control (IFTC). A generic solution to the problem is sought, providing a system based on logic in addition to analytical tools, and offering machine learning capabilities. The advantages are that redundant system specific algorithms are no longer needed, that reasonableness is used to quickly choose the correct control strategy, and that the system can adapt to new situations by learning about its effects on system dynamics.

  10. Transient Faults in Computer Systems

    NASA Technical Reports Server (NTRS)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
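    The certification-trail idea can be illustrated with a toy sorting example (the report does not prescribe this particular program): the first execution emits a trail that lets a second, cheap execution verify the output, so any transient fault that corrupts the visible result is caught regardless of the error model.

```python
def sort_with_trail(xs):
    """First execution: compute the answer and emit a certification trail
    (here, the sorting permutation)."""
    trail = sorted(range(len(xs)), key=lambda i: xs[i])
    return [xs[i] for i in trail], trail

def certify(xs, ys, trail):
    """Second execution: re-check the answer in O(n) using the trail.
    Any transient fault that corrupted the visible output fails a check."""
    n = len(xs)
    seen = [False] * n
    for i in trail:                      # trail must be a permutation of 0..n-1
        if not (0 <= i < n) or seen[i]:
            return False
        seen[i] = True
    if [xs[i] for i in trail] != ys:     # output is exactly the permuted input
        return False
    return all(ys[i] <= ys[i + 1] for i in range(n - 1))  # and it is ordered
```

The checker never trusts the first execution: if either the output or the trail was corrupted in flight, one of the three checks fails, which is the "provably complete coverage" property in miniature.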

  11. Approximate Entropy Based Fault Localization and Fault Type Recognition for Non-solidly Earthed Network

    NASA Astrophysics Data System (ADS)

    Pang, Qingle; Liu, Xinyun; Sun, Bo; Ling, Qunli

    2012-12-01

    For a non-solidly earthed network, localization of single-phase grounding faults has long been a problem. A novel fault localization and fault type recognition method for single-phase grounding faults, based on approximate entropy, is presented. The approximate entropies of the transient zero-sequence current at the two ends of a healthy section are approximately equal, so their ratio is close to 1. On the contrary, the approximate entropies at the two ends of the faulted section differ, and the ratio is far from 1; in this way the fault section is located. For a given fault section, the smaller the fault resistance, the larger the approximate entropy of the transient zero-sequence current. From the function relating approximate entropy to fault resistance, the fault type is determined. The method has the advantages of transferring less data and not requiring accurately synchronized sampling. Simulation results show that the proposed method is feasible and accurate.
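    A minimal NumPy sketch of the idea, using Pincus' definition of approximate entropy; the embedding dimension and tolerance are common defaults, not values taken from the paper.

```python
import numpy as np

def approx_entropy(x, m=2, r_frac=0.2):
    """Approximate entropy ApEn(m, r) with tolerance r = r_frac * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of length-m templates
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        return np.log((d <= r).mean(axis=1)).mean()
    return phi(m) - phi(m + 1)

def section_ratio(i_head, i_tail):
    """Ratio of the ApEn values at the two ends of a section:
    near 1 for a healthy section, far from 1 for the faulted one."""
    return approx_entropy(i_head) / approx_entropy(i_tail)
```

A regular waveform yields a low ApEn and an irregular one a high ApEn, so a faulted section whose two end currents differ in regularity produces a ratio far from 1.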

  12. Design of a fault-tolerant decision-making system for biomedical applications.

    PubMed

    Faust, Oliver; Acharya, U Rajendra; Sputh, Bernhard H C; Tamura, Toshiyo

    2013-01-01

    This paper describes the design of a fault-tolerant classification system for medical applications. The design process follows the systems engineering methodology: in the agreement phase, we make the case for fault tolerance in diagnosis systems for biomedical applications. The argument extends the idea that machine diagnosis systems mimic the functionality of human decision-making, but in many cases they do not achieve the fault tolerance of the human brain. After making the case for fault tolerance, both requirements and specification for the fault-tolerant system are introduced before the implementation is discussed. The system is tested with fault and use cases to build up trust in the implemented system. This structured approach aided in the realisation of the fault-tolerant classification system. During the specification phase, we produced a formal model that enabled us to discuss what fault tolerance, reliability and safety mean for this particular classification system. Furthermore, such a formal basis for discussion is extremely useful during the initial stages of the design, because it helps to avoid big mistakes caused by a lack of overview later on in the project. During the implementation, we practiced component reuse by incorporating a reliable classification block, which was developed during a previous project, into the current design. Using a well-structured approach and practicing component reuse we follow best practice for both research and industry projects, which enabled us to realise the fault-tolerant classification system on time and within budget. This system can serve in a wide range of future health care systems. PMID:22288838
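    The abstract does not spell out the implementation, but the classic building block for this kind of fault tolerance is redundancy plus majority voting; the sketch below is that generic pattern, not the authors' design.

```python
from collections import Counter

def vote(diagnoses):
    """Majority vote over redundant classifier outputs.
    Returns (label, trusted): trusted is False when no strict majority
    exists, so the system can fall back to a safe state instead of
    emitting a possibly faulty diagnosis."""
    label, count = Counter(diagnoses).most_common(1)[0]
    return label, count > len(diagnoses) // 2
```

With three redundant classifiers, one arbitrary faulty output is masked; total disagreement is surfaced as an untrusted result rather than silently passed on.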

  13. Update: San Andreas Fault experiment

    NASA Technical Reports Server (NTRS)

    Christodoulidis, D. C.; Smith, D. E.

    1984-01-01

    Satellite laser ranging techniques are used to monitor the broad motion of the tectonic plates comprising the San Andreas Fault System. The San Andreas Fault Experiment (SAFE) has progressed through upgrades made to laser system hardware and an improvement in the modeling capabilities of the spaceborne laser targets. Of special note is the 1976 launch of the Laser Geodynamics Satellite (LAGEOS), NASA's only completely dedicated laser satellite. The results of plate motion projected onto this 896 km measured line over the past eleven years are summarized and intercompared.

  14. The Development of NASA's Fault Management Handbook

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine

    2011-01-01

    Disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns. Progress is being made on a number of fronts outside of Handbook effort: (1) Processes, Practices and Tools being developed at some Centers and Institutions (2) Management recognition. Constellation FM roles, Discovery/New Frontiers mission reviews (3) Potential Technology solutions. New approaches could avoid many current pitfalls (3a) New FM architectures, including model-based approach integrated with NASA's MBSE efforts (3b) NASA's Office of the Chief Technologist: FM identified in seven of NASA's 14 Space Technology Roadmaps. Opportunity to coalesce and establish thrust area to progressively develop new FM techniques. FM Handbook will help ensure that future missions do not encounter same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start.

  15. Award ER25750: Coordinated Infrastructure for Fault Tolerance Systems Indiana University Final Report

    SciTech Connect

    Lumsdaine, Andrew

    2013-03-08

    The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults has typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as jobs schedulers, resource managers, and monitoring tools.

  16. Uncertainties in models for hydrologic characterization of faults - implications from a study on the Wildcat Fault, Berkeley, California -

    NASA Astrophysics Data System (ADS)

    Goto, J.; Yoshimura, K.; Karasaki, K.; Kiho, K.

    2012-12-01

    The Nuclear Waste Management Organization of Japan (NUMO) will select a site for an HLW and TRU waste repository through a three-staged program, namely the Literature Surveys, the Preliminary Investigations and the Detailed Investigations. Areas that are susceptible to natural hazards such as volcanism, faulting and significant uplift/erosion will be eliminated in the early stages. Then, sites that have a more favorable geological environment will be selected with respect to the repository design and long-term safety after closure. It is internationally acknowledged that the hydrologic features of faults are of special concern in these respects. It is highly likely, from the experience of site characterization worldwide, that one could encounter numerous faults in the area of one hundred square kilometers assumed for the Preliminary Investigations. Efficient and practical investigation programs, and reliable models/parameters for the repository design and safety analysis, are important aspects for implementers. Therefore, we studied a methodology for hydrologic characterization of faults through multi-phased field investigations and modeling of the Wildcat Fault at the LBNL site. The field investigations consisted of surface geophysics, geological mapping, trench surveys, and hydraulic tests and monitoring in both vertical and inclined boreholes. In general, the degree of understanding of the site increased with the progress of the investigation phases. At the end of the project, relatively large uncertainties remained in the hydrogeological model. The major reasons were that two different interpretations of the fault geometry were possible, and that hydrologic data on the fault and host rock were not sufficient because of the difficulties of hydraulic testing in the fracture zones and the borehole configuration under various constraints. Implications for reducing uncertainties in hydrologic characterization of faults based on our experience are summarized below. - More

  17. Faulting at Mormon Point, Death Valley, California: A low-angle normal fault cut by high-angle faults

    NASA Astrophysics Data System (ADS)

    Keener, Charles; Serpa, Laura; Pavlis, Terry L.

    1993-04-01

    New geophysical and fault kinematic studies indicate that late Cenozoic basin development in the Mormon Point area of Death Valley, California, was accommodated by fault rotations. Three of six fault segments recognized at Mormon Point are now inactive and have been rotated to low dips during extension. The remaining three segments are now active and moderately to steeply dipping. From the geophysical data, one active segment appears to offset the low-angle faults in the subsurface of Death Valley.

  18. Maximum Magnitude in Relation to Mapped Fault Length and Fault Rupture

    NASA Astrophysics Data System (ADS)

    Black, N.; Jackson, D.; Rockwell, T.

    2004-12-01

    Earthquake hazard zones are delineated using known fault locations and an estimate of each fault's maximum magnitude earthquake. Magnitude limits are commonly determined from fault geometry, which depends on fault length. Over the past 30 years it has become apparent that fault length is often poorly constrained and that a single event can rupture across several individual fault segments. In this study, fault geometries are analyzed before and after several moderate to large magnitude earthquakes to determine how well fault length can be used to assess seismic hazard. Estimates of future earthquake magnitudes are often inferred from prior determinations of fault length, but use magnitude regressions based on rupture length. However, rupture length is not always limited to the previously estimated fault length or contained on a single fault. Therefore, the maximum magnitude for a fault may be underestimated unless the geometry and segmentation of faulting are completely understood. This study examines whether rupture/fault length can be used to accurately predict the maximum magnitude for a given fault. We examine earthquakes of magnitude greater than 6.0 that occurred after 1970 in Southern California. Geologic maps, fault evaluation reports, and aerial photos that existed prior to these earthquakes are used to obtain the pre-earthquake fault lengths. Pre-earthquake fault lengths are compared with rupture lengths to determine: 1) whether fault lengths are the same before and after the ruptures, and 2) the geology and geometry of ruptures that propagated beyond the originally recognized endpoints of a mapped fault. The ruptures examined in this study typically follow one of the following models. The ruptures are either: 1) contained within the dimensions of the original fault trace, 2) break through one or both endpoints of the originally mapped fault trace, or 3) break through multiple faults, connecting segments into one large fault line.
No rupture simply broke a
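    The "magnitude regressions based on rupture length" referred to above are typified by the Wells and Coppersmith (1994) all-slip-type relation, quoted here from the general literature rather than from this abstract:

```python
import math

def magnitude_from_rupture_length(srl_km):
    """Moment magnitude from surface rupture length (km), using the
    Wells & Coppersmith (1994) all-slip-type regression:
        M = 5.08 + 1.16 * log10(SRL)."""
    return 5.08 + 1.16 * math.log10(srl_km)
```

The logarithmic form makes the hazard consequence of the study's point concrete: if linked segments double the rupture length, the estimated maximum magnitude rises by about 0.35 units.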

  19. Time-frequency atoms-driven support vector machine method for bearings incipient fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Ruonan; Yang, Boyuan; Zhang, Xiaoli; Wang, Shibin; Chen, Xuefeng

    2016-06-01

    Bearings play an essential role in the performance of mechanical systems, and fault diagnosis of a mechanical system is inseparably related to the diagnosis of its bearings. However, it is a challenge to detect weak faults from complex and non-stationary vibration signals with a large amount of noise, especially at an early stage. To improve the anti-noise ability and detect incipient faults, a novel fault detection method based on a short-time matching method and Support Vector Machine (SVM) is proposed. In this paper, the mechanism of roller bearings is discussed and an impact time-frequency dictionary is constructed targeting the multi-component characteristics and fault features of roller bearing fault vibration signals. Then, a short-time matching method is described, and simulation results show excellent feature extraction at extremely low signal-to-noise ratio (SNR). After extracting the most relevant atoms as features, an SVM is trained for fault recognition. Finally, practical bearing experiments indicate that the proposed method is more effective and efficient than traditional methods in extracting the oscillatory characteristics of weak impact signals and diagnosing incipient faults.
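    A hedged sketch of the matching step: correlate a unit-norm impact atom (an exponentially damped sinusoid, one common dictionary element for bearing impacts) against the signal and take the strongest match as a feature. All parameters are illustrative, not the paper's dictionary.

```python
import numpy as np

def impact_atom(fs, f_res, decay, n):
    """Unit-norm exponentially damped sinusoid modelling one bearing impact."""
    t = np.arange(n) / fs
    a = np.exp(-decay * t) * np.sin(2 * np.pi * f_res * t)
    return a / np.linalg.norm(a)

def match_strength(signal, atom):
    """Largest |correlation| of the atom over all shifts: the feature is
    large when the signal contains impacts shaped like the atom."""
    return np.abs(np.correlate(signal, atom, mode="valid")).max()
```

Because the atom is matched in shape, impacts buried in broadband noise still produce a correlation peak well above the noise floor, which is the anti-noise property the paper exploits before SVM classification.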

  20. Intelligent method for diagnosing structural faults of rotating machinery using ant colony optimization.

    PubMed

    Li, Ke; Chen, Peng

    2011-01-01

    Structural faults, such as unbalance, misalignment and looseness, etc., often occur in the shafts of rotating machinery. These faults may cause serious machine accidents and lead to great production losses. This paper proposes an intelligent method for diagnosing structural faults of rotating machinery using ant colony optimization (ACO) and relative ratio symptom parameters (RRSPs) in order to detect faults and distinguish fault types at an early stage. New symptom parameters called "relative ratio symptom parameters" are defined for reflecting the features of vibration signals measured in each state. Synthetic detection index (SDI) using statistical theory has also been defined to evaluate the applicability of the RRSPs. The SDI can be used to indicate the fitness of a RRSP for ACO. Lastly, this paper also compares the proposed method with the conventional neural networks (NN) method. Practical examples of fault diagnosis for a centrifugal fan are provided to verify the effectiveness of the proposed method. The verification results show that the structural faults often occurring in the centrifugal fan, such as unbalance, misalignment and looseness states are effectively identified by the proposed method, while these faults are difficult to detect using conventional neural networks. PMID:22163833
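    One standard way to score how well a symptom parameter separates two machine states, assuming near-normal distributions, is a detection index fed through the normal CDF; the paper's exact SDI definition may differ, so treat this as an assumed form for illustration.

```python
import math

def detection_index(mu_a, sd_a, mu_b, sd_b):
    """Separation of one symptom parameter between two machine states:
    larger means less overlap between the two distributions."""
    return abs(mu_a - mu_b) / math.sqrt(sd_a ** 2 + sd_b ** 2)

def detection_rate(di):
    """Probability (under normality) that the parameter distinguishes the
    states: the standard normal CDF evaluated at the detection index."""
    return 0.5 * (1.0 + math.erf(di / math.sqrt(2.0)))
```

A parameter with detection rate near 1 is a good candidate feature for the ACO search; one near 0.5 carries no diagnostic information.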

  1. Integration of InSAR and GIS in the Study of Surface Faults Caused by Subsidence-Creep-Fault Processes in Celaya, Guanajuato, Mexico

    SciTech Connect

    Avila-Olivera, Jorge A.; Farina, Paolo; Garduno-Monroy, Victor H.

    2008-05-07

    In Celaya city, Subsidence-Creep-Fault Processes (SCFP) began to become visible at the beginning of the 1980s with the appearance of the cracks that gave rise to the surface faults 'Oriente' and 'Poniente'. At present, the city is affected by five surface faults that display a preferential NNW-SSE direction, parallel to the regional 'Taxco-San Miguel de Allende' faulting system. In order to study the SCFP in the city, the first step was to obtain a map of surface faults by integrating a field survey and an urban city plan in a GIS. The following step was to create a map of the current phreatic-level decline in the city from the information of deep wells, using the 'kriging' method to obtain a continuous surface. Finally, the interferogram maps resulting from an InSAR analysis of 9 SAR images covering the time interval between July 12, 2003 and May 27, 2006 were integrated into the GIS. All the maps generated show how the surface faults divide the city from North to South into two zones that behave differently. The difference in phreatic-level decline between these two zones is 60 m, and the InSAR study revealed that the western zone remains practically stable, while sinking occurs between the surface faults 'Oriente' and 'Universidad Pedagogica', as well as in the NE and SE portions of the city, at rates between 7 and 10 cm/year.

  2. Active fault traces along Bhuj Fault and Katrol Hill Fault, and trenching survey at Wandhay, Kachchh, Gujarat, India

    NASA Astrophysics Data System (ADS)

    Morino, Michio; Malik, Javed N.; Mishra, Prashant; Bhuiyan, Chandrashekhar; Kaneko, Fumio

    2008-06-01

    Several new active fault traces were identified along Katrol Hill Fault (KHF). A new fault (named as Bhuj Fault, BF) that extends into the Bhuj Plain was also identified. These fault traces were identified based on satellite photo interpretation and field survey. Trenches were excavated to identify the paleoseismic events, pattern of faulting and the nature of deformation. New active fault traces were recognized about 1km north of the topographic boundary between the Katrol Hill and the plain area. The fault exposure along the left bank of Khari River with 10m wide shear zone in the Mesozoic rocks and showing displacement of the overlying Quaternary deposits is indicative of continued tectonic activity along the ancient fault. The E-W trending active fault traces along the KHF in the western part changes to NE-SW or ENE-WSW near Wandhay village. Trenching survey across a low scarp near Wandhay village reveals three major fault strands F1, F2, and F3. These fault strands displaced the older terrace deposits comprising Sand, Silt and Gravel units along with overlying younger deposits from units 1 to 5 made of gravel, sand and silt. Stratigraphic relationship indicates at least three large magnitude earthquakes along KHF during Late Holocene or recent historic past.

  3. Parametric Modeling and Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Ju, Jianhong

    2000-01-01

    Fault tolerant control is considered for a nonlinear aircraft model expressed as a linear parameter-varying system. By proper parameterization of foreseeable faults, the linear parameter-varying system can include fault effects as additional varying parameters. A recently developed technique in fault effect parameter estimation allows us to assume that estimates of the fault effect parameters are available on-line. Reconfigurability is calculated for this model with respect to the loss of control effectiveness to assess the potentiality of the model to tolerate such losses prior to control design. The control design is carried out by applying a polytopic method to the aircraft model. An error bound on fault effect parameter estimation is provided, within which the Lyapunov stability of the closed-loop system is robust. Our simulation results show that as long as the fault parameter estimates are sufficiently accurate, the polytopic controller can provide satisfactory fault-tolerance.
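    The polytopic representation the abstract relies on can be written out explicitly (standard LPV form; the symbols are generic, not the paper's notation):

```latex
% Polytopic LPV model with fault-effect parameters folded into \theta:
% the plant is a convex combination of vertex systems (A_i, B_i),
% and one controller is designed to stabilize the whole polytope.
\dot{x}(t) = A(\theta(t))\,x(t) + B(\theta(t))\,u(t), \qquad
A(\theta) = \sum_{i=1}^{N} \alpha_i(\theta)\, A_i, \quad
B(\theta) = \sum_{i=1}^{N} \alpha_i(\theta)\, B_i, \quad
\alpha_i \ge 0, \ \sum_{i=1}^{N} \alpha_i = 1.
```

A loss of control effectiveness enters as a scaling of the columns of \(B(\theta)\), which is why online estimates of the fault-effect parameters let the same polytopic controller retain Lyapunov stability within the stated estimation error bound.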

  4. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    Specific topics briefly addressed include: the consistent comparison problem in N-version system; analytic models of comparison testing; fault tolerance through data diversity; and the relationship between failures caused by automatically seeded faults.

  5. Solar Dynamic Power System Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Dias, Lakshman G.

    1996-01-01

    The objective of this research is to conduct fault simulation studies for diagnosing the type and location of faults in the power distribution system. Different types of faults are simulated at different locations within the distribution system, and the faulted waveforms are monitored at measurable nodes such as the outputs of the DDCUs. These fault signatures are processed using feature extractors such as the FFT and wavelet transforms. The extracted features are fed to a clustering-based neural network for training and subsequent testing on previously unseen data. Load models consisting of constant impedance and constant power are used. Open-circuit and short-circuit faults are studied. It is concluded from the present studies that using features extracted from wavelet transforms gives better success rates during ANN testing. The trained ANNs are capable of diagnosing fault types and approximate fault locations in the solar dynamic power distribution system.
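    A small sketch of one such feature extractor: normalised band-wise FFT energies form a compact signature that a clustering network could consume. The band count is arbitrary, and this is only the spectral (not wavelet) branch of the paper's pipeline.

```python
import numpy as np

def band_energy_signature(waveform, n_bands=8):
    """Normalised band-wise spectral energies: a compact fault signature.
    Different fault types excite different bands, so the vector separates
    fault classes for a downstream clustering neural network."""
    spectrum = np.abs(np.fft.rfft(waveform)) ** 2   # one-sided power spectrum
    bands = np.array_split(spectrum, n_bands)       # contiguous frequency bands
    e = np.array([b.sum() for b in bands])
    return e / e.sum()                              # normalise to unit energy
```

A short-circuit transient rich in high frequencies and a slow open-circuit sag would light up different entries of the signature vector.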

  6. Fault seals in oil fields in Nevada

    SciTech Connect

    Foster, N.H.; Veal, H.K.; Bortz, L.C.

    1987-08-01

    Faults form seals for oil accumulations in the Eagle Springs, Trap Spring, and Blackburn fields, and probably in the Grant Canyon field, in Nevada. The main boundary fault on the east side of the Pine Valley graben forms a seal in the Blackburn field. A fault on the west side of the Trap Spring field forms a seal. In the Grant Canyon field, it is interpreted that the main boundary fault on the east side of the Railroad Valley graben forms a seal. Calcite deposited by hot spring activity plugs many fault zones and, in some cases, forms seals. Some fault zones have calcite mineralization up to several thousand feet wide. Within the Eagle Springs field on the east side of the Railroad Valley graben, a northeast-trending fault separates oil accumulations with different oil-water contacts. This separation indicates that the fault forms at least a partial seal within the accumulation.

  7. Seismology: Diary of a wimpy fault

    NASA Astrophysics Data System (ADS)

    Bürgmann, Roland

    2015-05-01

    Subduction zone faults can slip slowly, generating tremor. The varying correlation between tidal stresses and tremor occurring deep in the Cascadia subduction zone suggests that the fault is inherently weak, and gets weaker as it slips.

  8. Time-series methods for fault detection and identification in vibrating structures.

    PubMed

    Fassois, Spilios D; Sakellariou, John S

    2007-02-15

    An overview of the principles and techniques of time-series methods for fault detection, identification and estimation in vibrating structures is presented, and certain new methods are introduced. The methods are classified, and their features and operation are discussed. Their practicality and effectiveness are demonstrated through brief presentations of three case studies pertaining to fault detection, identification and estimation in an aircraft panel, a scale aircraft skeleton structure and a simple nonlinear simulated structure. PMID:17255046
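
    As a minimal illustration of the time-series idea, the sketch below fits an autoregressive (AR) model to a healthy vibration signal and flags a fault when the one-step prediction residual grows. The signals, fault, and threshold rule are invented for the example, not taken from the paper:

    ```python
    import numpy as np

    def fit_ar(x, order=4):
        """Least-squares fit of an AR(order) model to a healthy-state signal."""
        X = np.column_stack([x[order - i - 1 : len(x) - i - 1] for i in range(order)])
        coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
        return coeffs

    def residual_rms(x, coeffs):
        """RMS one-step prediction error of the AR model on signal x."""
        order = len(coeffs)
        X = np.column_stack([x[order - i - 1 : len(x) - i - 1] for i in range(order)])
        return np.sqrt(np.mean((x[order:] - X @ coeffs) ** 2))

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)
    healthy = np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal(t.size)
    coeffs = fit_ar(healthy)
    threshold = 3 * residual_rms(healthy, coeffs)

    # Hypothetical fault: broadband vibration inflates the AR residual.
    faulty = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)
    print(residual_rms(faulty, coeffs) > threshold)
    ```

    Healthy data stay under the threshold by construction; structural change shows up as unpredictable signal content the healthy-state model cannot explain.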

  9. Role of geometric complexities and off-fault damage in dynamic rupture propagation

    NASA Astrophysics Data System (ADS)

    Bhat, Harsha Suresh

    2007-12-01

    To analyze the effect of fault branches on dynamic rupture propagation we numerically simulated the observed dynamic slip transfer from the Denali to Totschunda faults during the Mw 7.9, November 3, 2002, Denali fault earthquake, Alaska, and show that the theory and methodology of Poliakov et al. [2002] and Kame et al. [2003] are valid for the 2002 Denali fault event. To understand the effect of fault branch length on dynamic rupture propagation we analyze earthquake ruptures propagating along a straight "main" fault and encountering a finite-length branch fault. We show that finite branches tend to stop or re-nucleate rupture on the main fault depending on their length, in addition to the parameters singled out by Kame et al. [2003]. We also illustrate branch-related complexities in rupture velocity and slip evolution. We illustrate the effect of backward branches (branches at an obtuse angle to the main fault with the same sense of slip as the main fault) and propose a mechanism of backward branching. As a field example we simulate numerically, using a two-dimensional elastodynamic boundary integral equation formulation incorporating slip-weakening rupture, the backward branching phenomenon observed during the 1992 Landers earthquake. To characterize the effect of supershear ruptures on off-fault materials we extend a model of a two-dimensional self-healing slip pulse, propagating dynamically in steady state with a slip-weakening failure criterion, to the supershear regime and show that there exists a non-attenuating stress field behind the Mach front which radiates high stresses arbitrarily far from the fault (practically this would be limited to distances comparable to the depth of the seismogenic zone). We apply this model to study damage features induced during the 2001 Kokoxili (Kunlun) event in Tibet.
We also study the 3D effects of supershear ruptures by simulating bilateral ruptures on a finite-width vertical strike-slip fault breaking the surface

  10. Fault Diagnosis for the Heat Exchanger of the Aircraft Environmental Control System Based on the Strong Tracking Filter

    PubMed Central

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system’s efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger. PMID:25823010
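
    The strong tracking filter extends a Kalman filter with a fading factor that inflates the predicted covariance so abrupt shifts in an unmeasurable fault-related parameter are tracked. The sketch below simplifies this to a constant fading factor on a scalar random-walk model (the actual STF computes the factor adaptively from innovation statistics); the "efficiency parameter" and its fault profile are hypothetical:

    ```python
    import numpy as np

    def fading_kalman(measurements, q=1e-4, r=0.04, lam=1.05):
        """Scalar random-walk Kalman filter with a constant fading factor.

        lam > 1 inflates the predicted covariance each step, keeping the
        gain from collapsing so the filter follows abrupt parameter jumps.
        """
        x, p = 0.0, 1.0
        estimates = []
        for z in measurements:
            p = lam * p + q          # inflated time update
            k = p / (p + r)          # Kalman gain
            x = x + k * (z - x)      # measurement update
            p = (1 - k) * p
            estimates.append(x)
        return np.array(estimates)

    rng = np.random.default_rng(1)
    # Hypothetical heat-exchanger efficiency: drops from 1.0 to 0.6 at step 100.
    truth = np.concatenate([np.ones(100), 0.6 * np.ones(100)])
    z = truth + 0.2 * rng.standard_normal(200)
    est = fading_kalman(z)
    print(est[-1] < 0.8)
    ```

    Once the estimate settles near the degraded value, a downstream classifier (a Bayes rule in the paper) can map the estimated parameter to a failure mode.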

  11. Fault diagnosis for the heat exchanger of the aircraft environmental control system based on the strong tracking filter.

    PubMed

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system's efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger. PMID:25823010

  12. Implementing fault-tolerant sensors

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith

    1989-01-01

    One aspect of fault tolerance in process control programs is the ability to tolerate sensor failure. A methodology is presented for transforming a process control program that cannot tolerate sensor failures to one that can. Additionally, a hierarchy of failure models is identified.
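
    One classical transformation of this kind treats each sensor as an interval guaranteed to contain the true value, and fuses n intervals while tolerating up to f faulty ones (Marzullo-style fusion). A sketch, with made-up sensor intervals:

    ```python
    def fuse_intervals(intervals, f):
        """Fuse abstract sensor readings, tolerating up to f faulty intervals.

        Returns the smallest interval containing every point that lies in
        at least len(intervals) - f of the inputs.
        """
        events = []
        for lo, hi in intervals:
            events.append((lo, +1))
            events.append((hi, -1))
        # Sort so interval starts are processed before ends at the same point.
        events.sort(key=lambda e: (e[0], -e[1]))
        need = len(intervals) - f
        depth, best_lo, best_hi = 0, None, None
        for x, d in events:
            prev = depth
            depth += d
            if prev < need <= depth and best_lo is None:
                best_lo = x          # first point reaching the required overlap
            if prev >= need > depth:
                best_hi = x          # last point leaving the required overlap
        return (best_lo, best_hi)

    # Three sensors, at most one faulty: the outlier (10.0, 11.0) is rejected.
    print(fuse_intervals([(1.0, 3.0), (2.0, 4.0), (10.0, 11.0)], f=1))
    ```

    A process control program can call such a fusion routine in place of reading a single physical sensor, which is the essence of the transformation the abstract describes.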

  13. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward

    2011-04-19

    An apparatus and program product determine a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  14. MOS integrated circuit fault modeling

    NASA Technical Reports Server (NTRS)

    Sievers, M.

    1985-01-01

    Three digital simulation techniques for MOS integrated circuit faults were examined. These techniques embody a hierarchy of complexity bracketing the range of simulation levels. The digital approaches are: transistor-level, connector-switch-attenuator level, and gate level. The advantages and disadvantages are discussed. Failure characteristics are also described.
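
    At the gate level, fault simulation typically means injecting stuck-at faults into a netlist and checking which input vectors expose them at an output. A toy sketch on a two-gate circuit (the netlist and fault names are invented for illustration):

    ```python
    def circuit(a, b, c, fault=None):
        """Two-gate netlist: n1 = a AND b, out = n1 OR c.

        An optional stuck-at fault pins internal node n1 to 0 or 1.
        """
        n1 = a & b
        if fault == "n1/0":
            n1 = 0
        elif fault == "n1/1":
            n1 = 1
        return n1 | c

    def detects(vector, fault):
        """A vector detects a fault if good and faulty outputs differ."""
        return circuit(*vector) != circuit(*vector, fault=fault)

    # (1, 1, 0) sensitizes n1 and, with c = 0, propagates it to the output.
    print(detects((1, 1, 0), "n1/0"))
    ```

    With c = 1 the OR gate masks the fault, illustrating why fault coverage depends on both activating and propagating each fault, a concern shared by all three simulation levels in the abstract.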

  15. Environment and protection of a gas-insulated substation

    SciTech Connect

    Koepfinger, J.L.; Lauth, W.C.

    1980-01-01

    Unlike a conventional open-air designed substation, the environment of a gas-insulated substation (GIS) is under the control of the designer and operator. This is a feature that can be utilized by the protection engineer to develop a protective scheme for the substation. A discussion is provided to develop a fault anticipation protection scheme.

  16. High temperature superconducting fault current limiter

    DOEpatents

    Hull, J.R.

    1997-02-04

    A fault current limiter for an electrical circuit is disclosed. The fault current limiter includes a high temperature superconductor in the electrical circuit. The high temperature superconductor is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter. 15 figs.

  17. High temperature superconducting fault current limiter

    DOEpatents

    Hull, John R.

    1997-01-01

    A fault current limiter (10) for an electrical circuit (14). The fault current limiter (10) includes a high temperature superconductor (12) in the electrical circuit (14). The high temperature superconductor (12) is cooled below its critical temperature to maintain the superconducting electrical properties during operation as the fault current limiter (10).

  18. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    NASA Astrophysics Data System (ADS)

    Solum, John G.; Davatzes, Nicholas C.; Lockner, David A.

    2010-12-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; therefore, understanding the origin of clays in fault rocks and their distributions is of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ˜1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, potentially influencing the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of this authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon.

  19. Fault tolerant software modules for SIFT

    NASA Technical Reports Server (NTRS)

    Hecht, M.; Hecht, H.

    1982-01-01

    The implementation of software fault tolerance is investigated for critical modules of the Software Implemented Fault Tolerance (SIFT) operating system to support the computational and reliability requirements of advanced fly-by-wire transport aircraft. Fault tolerant designs generated for the error reporter and global executive are examined. A description of the alternate routines, implementation requirements, and software validation is included.

  20. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  1. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  2. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  3. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  4. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  5. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  6. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  7. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  8. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  9. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  10. 5 CFR 845.302 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission...

  11. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  12. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  13. 5 CFR 831.1402 - Fault.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The...

  14. 20 CFR 255.11 - Fault.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than...

  15. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    USGS Publications Warehouse

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; therefore, understanding the origin of clays in fault rocks and their distributions is of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, potentially influencing the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of this authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  16. Reliability computation using fault tree analysis

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.

    1971-01-01

    A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
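
    For a tree of independent basic events, gate probabilities combine multiplicatively; it is repeated basic events appearing in multiple fault paths that force the conditional-probability treatment the abstract mentions. A sketch of the independent case, with hypothetical pump and valve failure probabilities:

    ```python
    def gate(kind, *probs):
        """Failure probability of an AND/OR gate over independent basic events."""
        if kind == "AND":
            p = 1.0
            for q in probs:
                p *= q               # all inputs must fail
            return p
        if kind == "OR":
            p = 1.0
            for q in probs:
                p *= (1.0 - q)       # survives only if every input survives
            return 1.0 - p
        raise ValueError(kind)

    # Hypothetical tree: top event fails if the pump fails OR both valves fail.
    p_pump, p_valve = 0.01, 0.05
    p_top = gate("OR", p_pump, gate("AND", p_valve, p_valve))
    print(round(p_top, 6))  # → 0.012475
    ```

    When the same basic failure feeds two paths, the OR combination above overcounts the joint case, which is why the paper's method conditions on the shared event instead.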

  17. Fault-Tolerant Heat Exchanger

    NASA Technical Reports Server (NTRS)

    Izenson, Michael G.; Crowley, Christopher J.

    2005-01-01

    A compact, lightweight heat exchanger has been designed to be fault-tolerant in the sense that a single-point leak would not cause mixing of heat-transfer fluids. This particular heat exchanger is intended to be part of the temperature-regulation system for habitable modules of the International Space Station and to function with water and ammonia as the heat-transfer fluids. The basic fault-tolerant design is adaptable to other heat-transfer fluids and heat exchangers for applications in which mixing of heat-transfer fluids would pose toxic, explosive, or other hazards: Examples could include fuel/air heat exchangers for thermal management on aircraft, process heat exchangers in the cryogenic industry, and heat exchangers used in chemical processing. The reason this heat exchanger can tolerate a single-point leak is that the heat-transfer fluids are everywhere separated by a vented volume and at least two seals. The combination of fault tolerance, compactness, and light weight is implemented in a unique heat-exchanger core configuration: Each fluid passage is entirely surrounded by a vented region bridged by solid structures through which heat is conducted between the fluids. Precise, proprietary fabrication techniques make it possible to manufacture the vented regions and heat-conducting structures with very small dimensions to obtain a very large coefficient of heat transfer between the two fluids. A large heat-transfer coefficient favors compact design by making it possible to use a relatively small core for a given heat-transfer rate. Calculations and experiments have shown that in most respects, the fault-tolerant heat exchanger can be expected to equal or exceed the performance of the non-fault-tolerant heat exchanger that it is intended to supplant (see table). The only significant disadvantages are a slight weight penalty and a small decrease in the mass-specific heat transfer.

  18. Fault tolerant control of spacecraft

    NASA Astrophysics Data System (ADS)

    Godard

    Autonomous multiple-spacecraft formation flying space missions demand the development of reliable control systems to ensure rapid, accurate, and effective response to various attitude and formation reconfiguration commands. Keeping in mind the complexities involved in the technology development to enable spacecraft formation flying, this thesis presents the development and validation of a fault tolerant control algorithm that augments the AOCS on board a spacecraft to ensure that these challenging formation flying missions will fly successfully. Taking inspiration from the existing theory of nonlinear control, a fault-tolerant control system for the RyePicoSat missions is designed to cope with actuator faults whilst maintaining the desirable degree of overall stability and performance. An autonomous fault-tolerant adaptive control scheme for spacecraft equipped with redundant actuators and robust control of spacecraft in an underactuated configuration represent the two central themes of this thesis. The developed algorithms are validated using hardware-in-the-loop simulation. A reaction wheel testbed is used to validate the proposed fault tolerant attitude control scheme. A spacecraft formation flying experimental testbed is used to verify the performance of the proposed robust control scheme for underactuated spacecraft configurations. The proposed underactuated formation flying concept leads to more than 60% savings in fuel consumption when compared to a fully actuated spacecraft formation configuration. We also developed a novel attitude control methodology that requires only a single thruster to stabilize the three-axis attitude and angular velocity components of a spacecraft. Numerical simulations and hardware-in-the-loop experimental results, along with rigorous analytical stability analysis, show that the proposed methodology will greatly enhance the reliability of the spacecraft, while allowing for potentially significant overall mission cost reduction.

  19. Fault Diagnosis in HVAC Chillers

    NASA Technical Reports Server (NTRS)

    Choi, Kihoon; Namuru, Setu M.; Azam, Mohammad S.; Luo, Jianhui; Pattipati, Krishna R.; Patterson-Hine, Ann

    2005-01-01

    Modern buildings are being equipped with increasingly sophisticated power and control systems with substantial capabilities for monitoring and controlling the amenities. Operational problems associated with heating, ventilation, and air-conditioning (HVAC) systems plague many commercial buildings, often as the result of degraded equipment, failed sensors, improper installation, poor maintenance, and improperly implemented controls. Most existing HVAC fault-diagnostic schemes are based on analytical models and knowledge bases. These schemes are adequate for generic systems. However, real-world systems differ significantly from the generic ones and necessitate modifications of the models and/or customization of the standard knowledge bases, which can be labor intensive. Data-driven techniques for fault detection and isolation (FDI) have a close relationship with pattern recognition, wherein one seeks to categorize the input-output data into normal or faulty classes. Owing to their simplicity and adaptability, customization of a data-driven FDI approach does not require in-depth knowledge of the HVAC system. It enables building system operators to improve energy efficiency and maintain the desired comfort level at a reduced cost. In this article, we consider a data-driven approach for FDI of chillers in HVAC systems. To diagnose the faults of interest in the chiller, we employ multiway dynamic principal component analysis (MPCA), multiway partial least squares (MPLS), and support vector machines (SVMs). The simulation of a chiller under various fault conditions is conducted using a standard chiller simulator from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). We validated our FDI scheme using experimental data obtained from different types of chiller faults.
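
    A common data-driven FDI baseline related to MPCA is plain PCA with a squared-prediction-error (SPE) residual: train on healthy data, then flag samples whose distance from the principal subspace exceeds a threshold. The sensor model and bias fault below are synthetic stand-ins for chiller data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical healthy data: 3 correlated sensors driven by one latent factor.
    latent = rng.standard_normal((200, 1))
    healthy = latent @ np.array([[1.0, 0.8, -0.5]]) + 0.05 * rng.standard_normal((200, 3))

    mean = healthy.mean(axis=0)
    X = healthy - mean
    # Retain one principal component; the rest spans the residual subspace.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    P = vt[:1].T                       # loading matrix, shape (3, 1)

    def spe(sample):
        """Squared prediction error: squared distance to the principal subspace."""
        x = sample - mean
        return float(x @ x - x @ P @ P.T @ x)

    # Threshold from the empirical healthy distribution (99th percentile).
    threshold = np.quantile([spe(s) for s in healthy], 0.99)

    # Hypothetical sensor bias fault breaks the learned correlation structure.
    faulty = np.array([1.0, 0.8, 1.5])
    print(spe(faulty) > threshold)
    ```

    The faulted sample violates the inter-sensor correlation learned from healthy operation even though each individual reading is in range, which is exactly the pattern-recognition view of FDI the article takes.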

  20. Fault-crossing P delays, epicentral biasing, and fault behavior in Central California

    USGS Publications Warehouse

    Marks, S.M.; Bufe, C.G.

    1979-01-01

    The P delays across the San Andreas fault zone in central California have been determined from travel-time differences at station pairs spanning the fault, using off-fault local earthquake or quarry blast sources. Systematic delays as large as 0.4 sec have been observed for paths crossing the fault at depths of 5-10 km. These delays can account for the apparent deviation of epicenters from the mapped fault trace. The largest delays occur along the San Andreas fault between San Juan Bautista and Bear Valley and between Bitterwater Valley and Parkfield. Spatial variations in fault behavior correlate with the magnitude of the fault-crossing P delay. The delay decreases to the northwest of San Juan Bautista across the "locked" section of the San Andreas fault and also decreases to the southeast approaching Parkfield. Where the delay is large, seismicity is relatively high and the fault is creeping. © 1979.

  1. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    SciTech Connect

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. 
This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of

  2. Fault-tolerant multichannel demultiplexer subsystems

    NASA Technical Reports Server (NTRS)

    Redinbo, Robert

    1991-01-01

    Fault tolerance in future processing and switching communication satellites is addressed by showing new methods for detecting hardware failures in the first major subsystem, the multichannel demultiplexer. An efficient method for demultiplexing frequency-slotted channels uses multirate filter banks which contain fast Fourier transform processing. All numerical processing is performed at a lower rate commensurate with the small bandwidth of each baseband channel. The integrity of the demultiplexing operations is protected by using real-number convolutional codes to compute comparable parity values which detect errors at the data-sample level. High-rate, systematic convolutional codes produce parity values at a much reduced rate, and protection is achieved by generating parity values in two ways and comparing them. Parity values corresponding to each output channel are generated in parallel by a subsystem, operating even more slowly and in parallel with the demultiplexer, that is virtually identical to the original structure. These parity calculations may be time-shared with the same processing resources because they are so similar.
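
    The compute-parity-two-ways idea can be illustrated on a toy FFT demultiplexer: the sum of all DFT bins equals N times the first input sample, so a parity value derived from the channel outputs can be checked against one derived directly from the input. The channelizer below is a bare FFT, not the paper's multirate filter bank, and the tolerance is arbitrary:

    ```python
    import numpy as np

    def demux(block):
        """Toy FFT demultiplexer: one block of samples -> per-channel values."""
        return np.fft.fft(block)

    def protected_demux(block, tol=1e-9):
        """Demultiplex, verifying a parity value computed two independent ways."""
        channels = demux(block)
        # Way 1: parity from the outputs. Way 2: directly from the input,
        # using the DFT identity sum_k X[k] = N * x[0].
        parity_from_outputs = channels.sum()
        parity_from_input = len(block) * block[0]
        if abs(parity_from_outputs - parity_from_input) > tol:
            raise RuntimeError("demultiplexer fault detected")
        return channels

    block = np.exp(2j * np.pi * 3 * np.arange(8) / 8)   # tone in channel 3
    out = protected_demux(block)
    print(int(np.argmax(np.abs(out))))  # → 3
    ```

    A hardware failure that corrupts any channel output perturbs the output-side parity, so the mismatch between the two independently computed values flags the fault, the same comparison principle as the real-number convolutional-code scheme.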

  3. Three-dimensional Geology of the Hayward Fault and its Correlation with Fault Behavior, Northern California

    NASA Astrophysics Data System (ADS)

    Ponce, D. A.; Graymer, R. C.; Jachens, R. C.; Simpson, R. W.; Phelps, G. A.; Wentworth, C. M.

    2004-12-01

    Relationships between fault behavior and geology along the Hayward Fault were investigated using a three-dimensional geologic model of the Hayward Fault and vicinity. The three-dimensional model, derived from geologic, geophysical, and seismicity data, allowed the construction of a `geologic map' of the east- and west-side surfaces, maps that show the distribution of geologic units on either side of the fault that truncate against the fault surface. These two resulting geologic maps were compared with seismicity and creep along the Hayward Fault using three-dimensional visualization software. The seismic behavior of the Hayward Fault correlates with rock-unit contacts along the fault, rather than with rock types across the fault. This suggests that fault activity is, in part, controlled by the physical properties of the rocks that abut the fault and not by properties of the fault zone itself. For example, far fewer earthquakes occur along the northern part of the fault, where an intensely sheared Franciscan mélange on the west side abuts the fault face, than in the region to the south, where more coherent rocks of other Franciscan terranes or the Coast Range Ophiolite are present. More locally, clusters of earthquakes correlate spatially with some of the contacts between Franciscan terranes as well as with mafic rocks of the Coast Range Ophiolite. Steady creep rates along the fault correlate with the lateral extent of the San Leandro gabbro, and changes in creep rate correlate with changes in geology. Although preliminary, the results of comparing fault behavior with the inferred three-dimensional geology adjacent to the Hayward Fault suggest that any attempt to understand the detailed distribution of earthquakes or creep along the fault should consider the rock types that abut the fault surface. Such consideration would benefit greatly from incorporating the physical properties of the rock types along the fault into the three-dimensional geologic model.

  4. Best Practices for Siting Solar Photovoltaics on Municipal Solid Waste Landfills. A Study Prepared in Partnership with the Environmental Protection Agency for the RE-Powering America's Land Initiative: Siting Renewable Energy on Potentially Contaminated Land and Mine Sites

    SciTech Connect

    Kiatreungwattana, K.; Mosey, G.; Jones-Johnson, S.; Dufficy, C.; Bourg, J.; Conroy, A.; Keenan, M.; Michaud, W.; Brown, K.

    2013-04-01

    The Environmental Protection Agency and the National Renewable Energy Laboratory developed this best practices document to address common technical challenges for siting solar photovoltaics (PV) on municipal solid waste (MSW) landfills. The purpose of this document is to promote the use of MSW landfills for solar energy systems. Closed landfills and portions of active landfills with closed cells represent thousands of acres of property that may be suitable for siting solar PV. These closed landfills may be suitable for near-term construction, making these sites strong candidates to take advantage of the 30% Federal Business Energy Investment Tax Credit. The document was prepared in response to increasing interest in siting renewable energy on landfills from solar developers; landfill owners; and federal, state, and local governments. It contains examples of solar PV projects on landfills, along with technical considerations and best practices gathered from examining the implementation of several of these projects.

  5. Recurrent late Quaternary surface faulting along the southern Mohawk Valley fault zone, NE California

    SciTech Connect

    Sawyer, T.L.; Hemphill-Haley, M.A.; Page, W.D.

    1993-04-01

    The Mohawk Valley fault zone comprises NW- to NNW-striking normal and strike-slip(?) faults that form the western edge of the Plumas province, a diffuse transitional zone between the Basin and Range and the northern Sierra Nevada. The authors' detailed evaluation of the southern part of the fault zone reveals evidence for recurrent late Pleistocene to possibly Holocene, moderate to large surface-faulting events. The southern Mohawk fault zone is a complex, 6-km-wide zone of faults and related features that extends from near the crest of the Sierra Nevada to the middle of southern Sierra Valley. The fault zone has two distinct and generally parallel subzones, 3 km apart, that are delineated by markedly different geomorphic characteristics and apparently different styles of faulting. Paleoseismic activity of the western subzone was evaluated in two trenches: one across a fault antithetic to the main range-bounding fault, and the other across a splay fault delineated by a 3.7-m-high scarp in alluvium. Stratigraphic relations, soil development, and radiocarbon dates indicate that at least four mid- to late-Pleistocene surface-faulting events, having single-event displacements in excess of 1.6 to 2.6 m, occurred along the splay fault prior to 12 ka. The antithetic fault shows evidence of three late Pleistocene events that may correspond to events documented on the splay fault, and of a Holocene event inferred from youthful scarplets and small closed depressions.

  6. Novel neural networks-based fault tolerant control scheme with fault alarm.

    PubMed

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator faults is investigated. The actuator fault is assumed to have no traditional affine appearance of the system state variables and control input. The useful property of the basis function of the radial basis function neural network (NN), which will be used in the design of the fault-tolerant controller, is explored. Based on the analysis of the design of normal and passive fault-tolerant controllers, and by using the implicit function theorem, a novel NN-based active fault-tolerant control (FTC) scheme with fault alarm is proposed. Compared with results in the literature, the proposed scheme minimizes the time delay between fault occurrence and accommodation (the delay due to fault diagnosis) and reduces the adverse effect on system performance. In addition, the FTC scheme combines the advantages of a passive fault-tolerant control scheme with the properties of a traditional active fault-tolerant control scheme. Furthermore, it requires no additional fault detection and isolation model, which is necessary in the traditional active fault-tolerant control scheme. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques. PMID:25014982
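    The role of the radial basis function expansion can be sketched with an offline least-squares fit in place of the paper's adaptive update laws; the tanh nonlinearity below is a hypothetical stand-in for the unknown actuator fault term:

```python
import numpy as np

# Gaussian radial basis functions with fixed centers; output weights
# fitted by least squares to approximate an unknown nonlinearity.
centers = np.linspace(-2.0, 2.0, 9)
width = 0.5

def phi(x):
    # Basis vector: one Gaussian bump per center, for each input sample.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

f = lambda x: np.tanh(2 * x)          # stand-in for the unknown fault term
x_train = np.linspace(-2, 2, 50)
w, *_ = np.linalg.lstsq(phi(x_train), f(x_train), rcond=None)

x_test = np.linspace(-1.5, 1.5, 20)
err = np.max(np.abs(phi(x_test) @ w - f(x_test)))
assert err < 0.05                     # the RBF net tracks the nonlinearity
```

    In the paper, the weights are instead adapted online, which is what lets the controller accommodate the fault without a separate fault detection and isolation model.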

  7. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault-injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within them is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted either by seeding faults into memory or by triggering error-detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. A common system interface eases use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are demonstrated by two example experiments, each using a different fault-tolerance strategy.
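    Seeding a fault into memory and observing its manifestation through an error-detection mechanism, in the spirit of FIAT, can be sketched as follows; the XOR checksum and memory layout are illustrative stand-ins, not part of FIAT itself:

```python
import random

def checksum(words):
    # Simple XOR checksum standing in for an error-detection mechanism.
    c = 0
    for w in words:
        c ^= w
    return c

# Baseline phase: record fault-free behavior.
memory = [0x1A2B, 0x3C4D, 0x5E6F, 0x0708]
baseline = checksum(memory)

# Fault-insertion phase: seed a single-bit flip at a random word/bit,
# analogous to seeding faults into memory.
random.seed(7)
word = random.randrange(len(memory))
bit = random.randrange(16)
memory[word] ^= (1 << bit)

# The detection mechanism observes the manifestation of the fault.
assert checksum(memory) != baseline
```

    An automated harness repeats this loop over many fault locations and models, comparing faulted behavior against the fault-free baseline.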

  8. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.; Patterson-Hine, Ann; Iverson, David

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users, owing to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal downtime. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., the Hubble telescope or the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.

  9. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.

  10. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Pattipati, Krishna R.

    1997-01-01

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users, owing to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal downtime. In addition, for fault-tolerant systems and systems with infrequent opportunity for maintenance (e.g., the Hubble telescope or the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.

  11. Evaluating Effectiveness of Green Infrastructure Application of Stormwater Best Management Practices in Protecting Stream Habitat and Biotic Condition in New England

    EPA Science Inventory

    The US EPA is developing assessment tools to evaluate the effectiveness of green infrastructure (GI) applied in stormwater best management practices (BMPs) at the small watershed (HUC12 or finer) scale. Based on analysis of historical monitoring data using boosted regression tre...

  12. Application of padmounted fault interrupters to single-phase URD systems

    SciTech Connect

    Israel, W.F.

    1995-12-31

    Underground distribution has proven itself to be a highly effective means of eliminating the temporary faults that plague overhead distribution; however, the permanent faults that do occur are often cable or splice failures, which are unpredictable, difficult to locate, and time-consuming and expensive to repair. One of the ways the designer seeks to minimize the impact of such events on service reliability is by incorporating an overcurrent protection scheme, which removes the faulted section from the rest of the system, and a contingency scheme, which allows isolation of the faulted section and restoration of service in the shortest possible time with a minimal number of switching operations. As a result, there has been continuing interest in the development of devices, equipment, and methods which help realize the ideals of rapid service restoration and isolation of faults to the smallest segment of the system. Overcurrent protection of the single-phase URD system has traditionally been the province of fused cutouts and fused padmounted sectionalizing terminals and switchgear. Recently, however, padmounted vacuum fault interrupters that utilize electronic controls for tripping have become available from several manufacturers and are beginning to make inroads into this portion of the distribution system. This paper discusses the characteristics and potential applications of a new product offered by Cooper Power Systems, the Shrubline VFI, which was designed specifically for use in single-phase residential underground distribution.

  13. Results from the NASA Spacecraft Fault Management Workshop: Cost Drivers for Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; McDougal, John; Barley, Bryan; Stephens, Karen; Fesq, Lorraine M.

    2010-01-01

    Fault Management, the detection of and response to in-flight anomalies, is a critical aspect of deep-space missions. Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for five missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that four out of the five missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. This paper summarizes the

  14. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Vouk, Mladen A.

    1989-01-01

    Twenty functionally equivalent programs were built and tested in a multiversion software experiment. Following unit testing, all programs were subjected to an extensive system test. In the process, sixty-one distinct faults were identified among the versions. Less than 12 percent of the faults exhibited varying degrees of positive correlation. The common-cause (or similar) faults spanned as many as 14 components. However, a majority of these faults were trivial and easily detected by proper unit and/or system testing. Only two of the seven similar faults were difficult faults, and both were caused by specification ambiguities. One of these faults exhibited a variable identical-and-wrong response span, i.e., a response span which varied with the testing conditions and input data. Techniques that could have been used to avoid the faults are discussed. For example, it was determined that back-to-back testing of 2-tuples could have been used to eliminate about 90 percent of the faults. In addition, four of the seven similar faults could have been detected by using back-to-back testing of 5-tuples. It is believed that most, if not all, similar faults could have been avoided had the specifications been written using more formal notation, had the unit testing phase been subject to more stringent standards and controls, and had better tools for measuring the quality and adequacy of the test data (e.g., coverage) been used.
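    Back-to-back testing of version tuples amounts to running the versions on the same inputs and flagging disagreements. A minimal sketch with three hypothetical versions of a square-root routine, one carrying a seeded fault:

```python
import itertools

# Three hypothetical versions of the same routine; version C has a
# seeded fault that only manifests for large inputs.
def version_a(x): return x ** 0.5
def version_b(x): return pow(x, 0.5)
def version_c(x): return x ** 0.5 if x < 100 else x ** 0.5 + 1e-3

versions = {"A": version_a, "B": version_b, "C": version_c}

def back_to_back(test_inputs, tol=1e-9):
    # Run every 2-tuple of versions on the same inputs; collect the
    # pairs that disagree beyond the comparison tolerance.
    disagreements = set()
    for x in test_inputs:
        for (na, fa), (nb, fb) in itertools.combinations(versions.items(), 2):
            if abs(fa(x) - fb(x)) > tol:
                disagreements.add((na, nb))
    return disagreements

# The faulty version disagrees with both healthy ones at x = 144.
assert back_to_back([1.0, 50.0, 144.0]) == {("A", "C"), ("B", "C")}
```

    A disagreement localizes the fault to at least one member of the tuple; larger tuples (e.g., 5-tuples) make majority voting on the discrepant output possible.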

  15. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model-Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. Identifying the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
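    The residual-threshold idea at the core of such detection algorithms reduces to a minimal sketch; a full implementation would compute residuals from an Extended Kalman Filter's predictions and use statistically derived thresholds, whereas the values below are purely illustrative:

```python
import numpy as np

def detect_sensor_fault(measured, predicted, threshold=3.0):
    # Residual-based detection: a measurement far from the model
    # prediction flags the corresponding sensor as faulty.
    residual = measured - predicted
    return np.abs(residual) > threshold

predicted = np.array([520.0, 980.0, 1450.0])   # model-predicted readings
measured  = np.array([521.2, 979.1, 1492.0])   # third sensor has drifted

flags = detect_sensor_fault(measured, predicted)
assert flags.tolist() == [False, False, True]
```

    Once a fault is flagged and its magnitude estimated, the accommodation step can, for example, substitute the model prediction for the faulty reading.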

  16. Tool for Viewing Faults Under Terrain

    NASA Technical Reports Server (NTRS)

    Siegel, Herbert L.; Li, P. Peggy

    2005-01-01

    Multi Surface Light Table (MSLT) is an interactive software tool that was developed in support of the QuakeSim project, which has created an earthquake-fault database and a set of earthquake-simulation software tools. MSLT visualizes the three-dimensional geometries of faults embedded below the terrain and animates time-varying simulations of stress and slip. The fault segments, represented as rectangular surfaces at dip angles, are organized into collections, each collection representing a fault. An interface built into MSLT queries and retrieves fault definitions from the QuakeSim fault database. MSLT also reads time-varying output from one of the QuakeSim simulation tools, called "Virtual California." Stress intensity is represented by variations in color. Slips are represented by directional indicators on the fault segments, and the magnitudes of the slips by the duration of the directional indicators in time. The interactive controls in MSLT provide a virtual trackball, pan and zoom, translucency adjustment, simulation playback, and simulation movie capture. In addition, geographical information on the fault segments and faults is displayed in text windows. Because of the extensive viewing controls, faults can be seen in relation to one another and to the terrain, and these relationships can also be observed in simulations: correlated slips in parallel faults are visible in the playback of Virtual California simulations.

  17. A Quaternary fault database for central Asia

    NASA Astrophysics Data System (ADS)

    Mohadjer, Solmaz; Ehlers, Todd Alan; Bendick, Rebecca; Stübner, Konstanze; Strube, Timo

    2016-02-01

    Earthquakes represent the highest risk in terms of potential loss of lives and economic damage for central Asian countries. Knowledge of fault location and behavior is essential in calculating and mapping seismic hazard. Previous efforts in compiling fault information for central Asia have generated a large amount of data that are published in limited-access journals with no digital maps publicly available, or are limited in their description of important fault parameters such as slip rates. This study builds on previous work by improving access to fault information through a web-based interactive map and an online database with search capabilities that allow users to organize data by different fields. The data presented in this compilation include fault location, its geographic, seismic, and structural characteristics, short descriptions, narrative comments, and references to peer-reviewed publications. The interactive map displays 1196 fault traces and 34 000 earthquake locations on a shaded-relief map. The online database contains attributes for 123 faults mentioned in the literature, with Quaternary and geodetic slip rates reported for 38 and 26 faults respectively, and earthquake history reported for 39 faults. All data are accessible for viewing and download via http://www.geo.uni-tuebingen.de/faults/. This work has implications for seismic hazard studies in central Asia as it summarizes important fault parameters, and can reduce earthquake risk by enhancing public access to information. It also allows scientists and hazard assessment teams to identify structures and regions where data gaps exist and future investigations are needed.

  18. Improved Fault Classification in Series Compensated Transmission Line: Comparative Evaluation of Chebyshev Neural Network Training Algorithms.

    PubMed

    Vyas, Bhargav Y; Das, Biswarup; Maheshwari, Rudra Prakash

    2016-08-01

    This paper presents the Chebyshev neural network (ChNN) as an improved artificial intelligence technique for power system protection studies and examines the performances of two ChNN learning algorithms for fault classification of series compensated transmission lines. The training algorithms are least-square Levenberg-Marquardt (LSLM) and recursive least-square with forgetting factor (RLSFF). The performances of these algorithms are assessed based on their generalization capability in relating the fault current parameters with an event of fault in the transmission line. The proposed algorithm is fast in response, as it utilizes postfault samples of the three phase currents measured at the relaying end corresponding to half-cycle duration only. After being trained with only a small part of the generated fault data, the algorithms have been tested over a large number of fault cases with wide variation of system and fault parameters. Based on the studies carried out in this paper, it has been found that although the RLSFF algorithm is faster for training the ChNN in the fault classification application for series compensated transmission lines, the LSLM algorithm has the best accuracy in testing. The results prove that the proposed ChNN-based method is accurate, fast, easy to design, and insensitive to the level of compensation. Thus, it is suitable for digital relaying applications. PMID:25314714
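    The defining feature of a ChNN is the Chebyshev functional expansion of the input, which replaces hidden layers with a fixed polynomial basis so that training reduces to fitting a single layer of output weights. A minimal sketch of that expansion (the LSLM and RLSFF training algorithms themselves are not shown):

```python
import numpy as np

def chebyshev_features(x, order=4):
    # Functional expansion: map inputs in [-1, 1] to Chebyshev
    # polynomials T_0..T_order via the recurrence T_n = 2x T_{n-1} - T_{n-2}.
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T, axis=-1)

# Normalized current samples expanded into the polynomial basis; a ChNN
# applies one trainable linear layer on top of these features.
x = np.array([-0.5, 0.0, 0.5])
F = chebyshev_features(x)
# Spot check: T_2(0.5) = 2*(0.5)^2 - 1 = -0.5
assert abs(F[2, 2] - (-0.5)) < 1e-12
```

    Because the basis is fixed and the trainable part is linear, least-square methods such as LSLM and RLSFF apply directly to the output weights.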

  19. Arc burst pattern analysis fault detection system

    NASA Technical Reports Server (NTRS)

    Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

    1997-01-01

    A method and apparatus are provided for detecting an arcing fault on a power line carrying a load current. Parameters indicative of power flow and possible fault events on the line, such as voltage and load current, are monitored and analyzed for the arc burst pattern exhibited by arcing faults in a power system. These arcing faults are detected by identifying bursts within each half-cycle of the fundamental current. Bursts occurring at or near a voltage peak indicate arcing on that phase. Once a faulted phase line is identified, a comparison of the current and voltage reveals whether the fault is located in the downstream direction of power flow, toward customers, or upstream, toward a generation station. If the fault is located downstream, the line is de-energized; if located upstream, the line may remain energized to prevent unnecessary power outages.
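    The burst-near-voltage-peak criterion can be sketched on synthetic waveforms; the simple thresholding here is illustrative only and far simpler than the patented detection logic:

```python
import numpy as np

f0 = 60.0                        # fundamental frequency, Hz
fs = 7680.0                      # sampling rate: 128 samples per cycle
t = np.arange(0, 1 / f0, 1 / fs)
voltage = np.sin(2 * np.pi * f0 * t)
current = 10 * np.sin(2 * np.pi * f0 * t)

# Superimpose a high-frequency burst near the positive voltage peak,
# as an arcing fault would produce.
peak = np.argmax(voltage)
current[peak - 3:peak + 3] += 4 * np.random.default_rng(0).standard_normal(6)

def burst_near_peak(v, i, window=5, k=3.0):
    # Flag a burst if the high-frequency residual of the current near
    # the voltage peak exceeds k times its off-peak level.
    resid = i - 10 * np.sin(2 * np.pi * f0 * t)   # remove the fundamental
    p = np.argmax(v)
    near = np.abs(resid[p - window:p + window]).max()
    far = np.abs(np.delete(resid, range(p - window, p + window))).max() + 1e-9
    return near > k * far

assert burst_near_peak(voltage, current)
```

    A practical detector would repeat this test on every half-cycle and require the pattern to persist before declaring an arcing fault.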

  20. Multiple Fault Isolation in Redundant Systems

    NASA Technical Reports Server (NTRS)

    Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Iverson, David L.

    1997-01-01

    We consider the problem of sequencing tests to isolate multiple faults in redundant (fault-tolerant) systems with minimum expected testing cost (time). It can be shown that single faults and minimal fault sets, i.e., the smallest sets of failures whose combined failure signature differs from the union of the individual failure signatures, together with their failure signatures, constitute the necessary information for fault diagnosis in redundant systems. In this paper, we develop an algorithm to find all the minimal fault sets and their failure signatures. We then extend the Sure diagnostic strategies [1] of our previous work to diagnose multiple faults in redundant systems. The proposed algorithms and strategies are illustrated using several examples.
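    Finding minimal fault sets from failure signatures can be sketched by brute-force enumeration; the paper's algorithm is more refined, and the faults, tests, and signatures below are hypothetical:

```python
from itertools import combinations

# Failure signature: the set of tests a given fault causes to fail.
signatures = {
    "f1": {"t1", "t2"},
    "f2": {"t3"},
    "f3": {"t1", "t2", "t3"},
}

def minimal_fault_sets(observed):
    # Enumerate candidate fault sets by increasing size; return the
    # smallest sets whose combined signature exactly matches the
    # observed set of failed tests.
    faults = sorted(signatures)
    for size in range(1, len(faults) + 1):
        hits = [set(c) for c in combinations(faults, size)
                if set().union(*(signatures[f] for f in c)) == observed]
        if hits:
            return hits
    return []

# {t1, t2, t3} is explained either by f3 alone or by {f1, f2};
# the single fault f3 is the minimal explanation.
assert minimal_fault_sets({"t1", "t2", "t3"}) == [{"f3"}]
```

    A sequential diagnostic strategy then chooses the next test to discriminate between the surviving minimal candidates at least expected cost.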