Sample records for linked fault tree

  1. Reconfigurable tree architectures using subtree oriented fault tolerance

    NASA Technical Reports Server (NTRS)

    Lowrie, Matthew B.

    1987-01-01

    An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures, for both single-chip and multichip tree implementations, while reducing redundancy in terms of both spare processors and links. The VLSI layout is O(n) for binary trees, and the approach is directly extensible to N-ary trees and to fault tolerance through performance degradation.

  2. Fault-Tree Compiler

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    Fault-Tree Compiler (FTC) program is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
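
    The five gate types listed above map onto elementary probability formulas when the basic events are statistically independent. The following minimal sketch illustrates that mapping in Python; it is not the FTC implementation, and the small example tree and its failure probabilities are hypothetical.

    ```python
    # Minimal sketch: top-event probability for a small fault tree using the five
    # gate types named above (AND, OR, EXCLUSIVE OR, INVERT, M OF N), assuming all
    # gate inputs are statistically independent. Not the FTC algorithm itself.
    from itertools import combinations
    from math import prod

    def p_and(ps):                 # all inputs fail
        return prod(ps)

    def p_or(ps):                  # at least one input fails
        return 1.0 - prod(1.0 - p for p in ps)

    def p_xor(p1, p2):             # exactly one of two inputs fails
        return p1 * (1.0 - p2) + p2 * (1.0 - p1)

    def p_invert(p):               # complement of the input event
        return 1.0 - p

    def p_m_of_n(ps, m):           # at least m of the n inputs fail
        n = len(ps)
        total = 0.0
        for k in range(m, n + 1):
            for idx in combinations(range(n), k):
                total += prod(ps[i] if i in idx else 1.0 - ps[i] for i in range(n))
        return total

    # Hypothetical example tree: TOP = OR( AND(A, B), 2-of-3(C, D, E) )
    A, B, C, D, E = 1e-3, 2e-3, 5e-4, 5e-4, 1e-3
    top = p_or([p_and([A, B]), p_m_of_n([C, D, E], 2)])
    print(f"P(top event) = {top:.3e}")
    ```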

  3. Two Trees: Migrating Fault Trees to Decision Trees for Real Time Fault Detection on International Space Station

    NASA Technical Reports Server (NTRS)

    Lee, Charles; Alena, Richard L.; Robinson, Peter

    2004-01-01

    We started from an ISS fault tree example and present a method for converting fault trees to decision trees. The method shows that visualizing the root cause of a fault becomes easier and that manipulating the tree becomes more programmatic via available decision tree programs. The decision tree visualization used for diagnostics is straightforward and easy to understand. For ISS real-time fault diagnostics, the status of the systems can be shown by running the monitored signals through the trees and observing where traversal stops. A further advantage of using decision trees is that they can learn fault patterns and predict future faults from historical data. The learning is not limited to static data sets but can also be performed online: by accumulating real-time data sets, the decision trees can gain and store fault patterns and recognize them when they recur.
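
    One brute-force way to realize such a migration is to expand the fault tree's Boolean top-event function over its basic events, so that each decision node tests one basic event and each leaf records whether the top event occurs. The sketch below illustrates only that generic idea; it is not the authors' ISS method, and the fault tree and event names are hypothetical.

    ```python
    # Minimal sketch of one way to migrate a fault tree to a decision tree: expand
    # the tree's Boolean top-event function over its basic events, so each decision
    # node tests a single basic event and each leaf says whether the top event
    # occurs. Purely illustrative; not the ISS procedure from the paper.
    def top_event(ev):
        # Hypothetical fault tree: TOP = (pump_fail AND valve_stuck) OR sensor_dead
        return (ev["pump_fail"] and ev["valve_stuck"]) or ev["sensor_dead"]

    def build_decision_tree(events, assignment=None):
        """Recursively branch on the next unassigned basic event."""
        assignment = assignment or {}
        if len(assignment) == len(events):
            return "TOP OCCURS" if top_event(assignment) else "no top event"
        var = events[len(assignment)]
        return {var: {True:  build_decision_tree(events, {**assignment, var: True}),
                      False: build_decision_tree(events, {**assignment, var: False})}}

    tree = build_decision_tree(["sensor_dead", "pump_fail", "valve_stuck"])

    def diagnose(node, observations):
        """Walk the decision tree with observed basic-event states (True/False)."""
        while isinstance(node, dict):
            var = next(iter(node))
            node = node[var][observations[var]]
        return node

    print(diagnose(tree, {"sensor_dead": False, "pump_fail": True, "valve_stuck": True}))
    ```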

  4. Fault Tree Analysis.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L

    The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.

  5. Fault-Tree Compiler Program

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1992-01-01

    FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.

  6. Fault diagnosis of power transformer based on fault-tree analysis (FTA)

    NASA Astrophysics Data System (ADS)

    Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu

    2017-05-01

    Power transformers are important equipment in power plants and substations and form an important hub in the power distribution and transmission links of a power system. Their performance directly affects the quality, reliability, and stability of the power system. This paper first classifies power transformer faults into five types, then divides power transformer faults into three stages along the time dimension, and uses routine dissolved gas analysis (DGA) and infrared diagnostic criteria to establish the operating state of the power transformer. Finally, according to the needs of power transformer fault diagnosis, the power transformer fault tree is constructed by stepwise refinement from the general to the specific.

  7. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer that is precise (within the limits of double precision floating point arithmetic) to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.

  8. Fault trees and sequence dependencies

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.

    1990-01-01

    One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in the fault-tree models presented here because the models are solved using a Markov chain. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
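
    Because these sequence-dependent gates are solved through a Markov model rather than combinatorially, even a single cold-spare gate needs a small state-space solution. The sketch below numerically integrates a three-state Markov chain for one active unit backed by a cold spare (the spare cannot fail while dormant and switchover is perfect) and contrasts it with a static AND gate; the rates and mission time are illustrative assumptions, not values from the paper.

    ```python
    # Minimal sketch: a cold-spare (sequence-dependent) gate solved as a Markov
    # chain, illustrating why such gates need a Markov solution rather than a
    # purely combinatorial one. Rates and mission time are illustrative only.
    import numpy as np

    lam_primary = 1e-4      # failure rate of the active (primary) unit, per hour
    lam_spare   = 1e-4      # failure rate of the spare once it becomes active
    mission_t   = 10_000.0  # mission time in hours

    # States: 0 = primary up (spare cold), 1 = spare active, 2 = both failed (gate fires)
    Q = np.array([[-lam_primary, lam_primary, 0.0],
                  [0.0,         -lam_spare,   lam_spare],
                  [0.0,          0.0,         0.0]])

    # Forward Euler integration of dP/dt = P @ Q (small step for accuracy)
    P = np.array([1.0, 0.0, 0.0])
    dt = 1.0
    for _ in range(int(mission_t / dt)):
        P = P + dt * (P @ Q)

    print(f"P(cold-spare gate fires by t={mission_t:.0f} h) = {P[2]:.4e}")

    # Contrast: an ordinary AND gate on two independent units with the same rates
    p_each = 1.0 - np.exp(-lam_primary * mission_t)
    print(f"Static AND of two independent units          = {p_each**2:.4e}")
    ```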

  9. Automatic translation of digraph to fault-tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    1992-01-01

    The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.

  10. Fault Tree in the Trenches, A Success Story

    NASA Technical Reports Server (NTRS)

    Long, R. Allen; Goodson, Amanda (Technical Monitor)

    2000-01-01

    Getting caught up in the explanation of Fault Tree Analysis (FTA) minutiae is easy. In fact, most FTA literature tends to address FTA concepts and methodology. Yet there seem to be few articles addressing actual design changes resulting from the successful application of fault tree analysis. This paper demonstrates how fault tree analysis was used to identify and solve a potentially catastrophic mechanical problem at a rocket motor manufacturer. While developing the fault tree given in this example, the analyst was told by several organizations that the piece of equipment in question had been evaluated by several committees and organizations, and that the analyst was wasting his time. The fault tree/cutset analysis resulted in a joint redesign of the control system by the tool engineering group and the fault tree analyst, as well as bragging rights for the analyst. (That the fault tree found problems where other engineering reviews had failed was not lost on the other engineering groups.) Even more interesting was that this was the analyst's first fault tree, which further demonstrates how effective fault tree analysis can be in guiding (i.e., forcing) the analyst to take a methodical approach in evaluating complex systems.

  11. Tutorial: Advanced fault tree applications using HARP

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

    1993-01-01

    Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.

  12. Reliability computation using fault tree analysis

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.

    1971-01-01

    A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
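
    The conditional-probability treatment of repeated basic events mentioned above amounts to conditioning on the shared failure, evaluating the now-independent branches, and recombining by the law of total probability. The sketch below applies that idea to a hypothetical two-path tree sharing one basic event and contrasts it with a naive evaluation that treats the paths as independent; it illustrates the general technique only, not the paper's derivation.

    ```python
    # Minimal sketch: exact top-event probability when one basic event appears in
    # two fault paths, obtained by conditioning on the shared event (law of total
    # probability). The tree and the numbers are hypothetical.
    def top_given(shared_failed, p_a, p_b):
        """TOP = (A AND S) OR (B AND S), evaluated with S fixed to failed or not."""
        s = 1.0 if shared_failed else 0.0
        path1 = p_a * s
        path2 = p_b * s
        return 1.0 - (1.0 - path1) * (1.0 - path2)  # OR of the now-independent paths

    p_shared, p_a, p_b = 0.1, 0.2, 0.3

    exact = (p_shared * top_given(True, p_a, p_b)
             + (1.0 - p_shared) * top_given(False, p_a, p_b))

    # Naive evaluation that treats the two paths as independent double-counts S:
    naive = 1.0 - (1.0 - p_a * p_shared) * (1.0 - p_b * p_shared)

    print(f"exact (conditioned on shared event) = {exact:.4f}")
    print(f"naive (paths treated independent)   = {naive:.4f}")
    ```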

  13. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  14. Technology transfer by means of fault tree synthesis

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.

    2012-12-01

    Since Fault Tree Analysis (FTA) attempts to model and analyze failure processes in engineering, it is a common technique of good industrial practice. By contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a Knowledge Base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the later stages of an industrial procedure with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage, where the fault has been detected, (ii) the formation of an interface made of input faults that might occur upstream, (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance, and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus proving the functionality of the developed methodology.

  15. Fault Tree Analysis: A Bibliography

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Fault tree analysis is a top-down approach to the identification of process hazards. It is regarded as one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis and risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.

  16. [The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].

    PubMed

    Liu, Hongbin

    2015-11-01

    In this paper, the traditional fault tree analysis method is presented and detailed instructions are given for the characteristics of its application in medical instrument maintenance. Significant changes are made when the traditional fault tree analysis method is introduced into medical instrument maintenance: the logic symbols, logic analysis, and calculation are given up, its complicated procedures are given up, and only the intuitive and practical fault tree diagram is kept. The fault tree diagram itself also differs: the fault tree is no longer a logic tree but a thinking tree for troubleshooting, the definition of the fault tree's nodes is different, and the composition of the fault tree's branches is also different.

  17. The Fault Tree Compiler (FTC): Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1989-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer that is precise (within the limits of double precision floating point arithmetic) to a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.

  18. Object-oriented fault tree evaluation program for quantitative analyses

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1988-01-01

    Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
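
    The object representation described here, in which each event carries its own structure, reliability data, and cached intermediate results, can be sketched in a few small classes. The following Python sketch is an illustrative reconstruction of that idea under an independence assumption, not the Flavors/LISP program on the Explorer; the class and event names are hypothetical.

    ```python
    # Minimal sketch of an object-oriented fault tree: each event is an object that
    # stores its children, its reliability data, and a cached intermediate result
    # from the reduction process. Illustrative only; not the Explorer/Flavors code.
    from math import prod

    class Event:
        def probability(self):
            raise NotImplementedError

    class BasicEvent(Event):
        def __init__(self, name, p_fail):
            self.name, self.p_fail = name, p_fail
        def probability(self):
            return self.p_fail

    class Gate(Event):
        def __init__(self, name, kind, children):
            self.name, self.kind, self.children = name, kind, children
            self._cached = None              # intermediate result kept with the object
        def probability(self):
            if self._cached is None:
                ps = [c.probability() for c in self.children]
                if self.kind == "AND":
                    self._cached = prod(ps)
                elif self.kind == "OR":
                    self._cached = 1.0 - prod(1.0 - p for p in ps)
                else:
                    raise ValueError(f"unsupported gate kind: {self.kind}")
            return self._cached

    # Hypothetical tree: TOP = OR( AND(pump, valve), controller )
    pump, valve, ctrl = BasicEvent("pump", 1e-3), BasicEvent("valve", 2e-3), BasicEvent("ctrl", 5e-4)
    top = Gate("TOP", "OR", [Gate("G1", "AND", [pump, valve]), ctrl])
    print(f"P(TOP) = {top.probability():.3e}")
    ```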

  19. Software For Fault-Tree Diagnosis Of A System

    NASA Technical Reports Server (NTRS)

    Iverson, Dave; Patterson-Hine, Ann; Liao, Jack

    1993-01-01

    Fault Tree Diagnosis System (FTDS) computer program is automated-diagnostic-system program identifying likely causes of specified failure on basis of information represented in system-reliability mathematical models known as fault trees. Is modified implementation of failure-cause-identification phase of Narayanan's and Viswanadham's methodology for acquisition of knowledge and reasoning in analyzing failures of systems. Knowledge base of if/then rules replaced with object-oriented fault-tree representation. Enhancement yields more-efficient identification of causes of failures and enables dynamic updating of knowledge base. Written in C language, C++, and Common LISP.

  20. DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarities between digraphs and fault trees permit the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimal cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimal cut set for a node is a cut set with the property that, if any one of the failures in the set is removed, the remaining failures in the set will not cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal

  1. Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Yang, Zhenwei; Kang, Mei

    2018-01-01

    This paper attempts to apply the fault tree analysis method to the corrective maintenance of grid communication systems. Through the establishment of a fault tree model of a typical system and the use of engineering experience, fault tree analysis theory is applied to the fault tree model, covering the structure function, probability importance, and so on. The results show that fault tree analysis enables fast fault localization and effective repair of the system. Meanwhile, it is found that the fault tree analysis method has guiding significance for researching and upgrading the reliability of the system.

  2. A dynamic fault tree model of a propulsion system

    NASA Technical Reports Server (NTRS)

    Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila

    2006-01-01

    We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.

  3. Fault Tree Analysis Application for Safety and Reliability

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies a root cause in system components, but when software is identified as a root cause, the tools do not build trees into the software component. No commercial software tools have been built specifically for development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.

  4. Probabilistic fault tree analysis of a radiation treatment system.

    PubMed

    Ekaette, Edidiong; Lee, Robert C; Cooke, David L; Iftody, Sandra; Craighead, Peter

    2007-12-01

    Inappropriate administration of radiation for cancer treatment can result in severe consequences such as premature death or appreciably impaired quality of life. There has been little study of vulnerable treatment process components and their contribution to the risk of radiation treatment (RT). In this article, we describe the application of probabilistic fault tree methods to assess the probability of radiation misadministration to patients at a large cancer treatment center. We conducted a systematic analysis of the RT process that identified four process domains: Assessment, Preparation, Treatment, and Follow-up. For the Preparation domain, we analyzed possible incident scenarios via fault trees. For each task, we also identified existing quality control measures. To populate the fault trees we used subjective probabilities from experts and compared results with incident report data. Both the fault tree and the incident report analysis revealed simulation tasks to be most prone to incidents, and the treatment prescription task to be least prone to incidents. The probability of a Preparation domain incident was estimated to be in the range of 0.1-0.7% based on incident reports, which is comparable to the mean value of 0.4% from the fault tree analysis using probabilities from the expert elicitation exercise. In conclusion, an analysis of part of the RT system using a fault tree populated with subjective probabilities from experts was useful in identifying vulnerable components of the system, and provided quantitative data for risk management.

  5. Evidential Networks for Fault Tree Analysis with Imprecise Knowledge

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Huang, Hong-Zhong; Liu, Yu; Li, Yan-Feng

    2012-06-01

    Fault tree analysis (FTA), as one of the powerful tools in reliability engineering, has been widely used to enhance system quality attributes. In most fault tree analyses, precise values are adopted to represent the probabilities of occurrence of those events. Due to the lack of sufficient data or the imprecision of existing data at the early stage of product design, it is often difficult to accurately estimate the failure rates of individual events or the probabilities of occurrence of the events. Therefore, such imprecision and uncertainty need to be taken into account in reliability analysis. In this paper, evidential networks (EN) are employed to quantify and propagate the aforementioned uncertainty and imprecision in fault tree analysis. The detailed processes for converting some fault tree (FT) logic gates to EN are described. The figures of the logic gates and the converted equivalent EN, together with the associated truth tables and the conditional belief mass tables, are also presented in this work. A new epistemic importance measure is proposed to describe the effect of the degree of ignorance about an event. The fault tree of an aircraft engine damaged by oil filter plugs is presented to demonstrate the proposed method.

  6. Object-oriented fault tree models applied to system diagnosis

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, F. A.

    1990-01-01

    When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.

  7. Reset Tree-Based Optical Fault Detection

    PubMed Central

    Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon

    2013-01-01

    In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267

  8. Planning effectiveness may grow on fault trees.

    PubMed

    Chow, C W; Haddad, K; Mannino, B

    1991-10-01

    The first step of a strategic planning process--identifying and analyzing threats and opportunities--requires subjective judgments. By using an analytical tool known as a fault tree, healthcare administrators can reduce the unreliability of subjective decision making by creating a logical structure for problem solving and decision making. A case study of 11 healthcare administrators showed that an analysis technique called prospective hindsight can add to a fault tree's ability to improve a strategic planning process.

  9. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    PubMed

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
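
    The kind of comparison the article describes can be reproduced in miniature: for an OR gate over independent basic events with lognormally distributed probabilities, a moment-matched (Fenton-Wilkinson style) lognormal can be checked against direct Monte Carlo sampling, using the rare-event approximation that the top-event probability is the sum of the basic-event probabilities. The Python sketch below is only that generic illustration, not the closed-form method of the article, and all parameter values are hypothetical.

    ```python
    # Minimal sketch: uncertainty in the top-event probability of an OR gate whose
    # basic-event probabilities are lognormally distributed, comparing Monte Carlo
    # sampling with a moment-matched (Fenton-Wilkinson style) lognormal fit under
    # the rare-event approximation P(top) ~ sum of basic-event probabilities.
    # Generic illustration only; not the closed-form method of the article.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical basic events: median probability and error factor (95th/50th pct)
    medians = np.array([1e-4, 3e-4, 5e-5])
    error_factors = np.array([3.0, 5.0, 10.0])
    mu = np.log(medians)
    sigma = np.log(error_factors) / 1.645        # EF = exp(1.645 * sigma)

    # Monte Carlo: sample each basic event, sum (rare-event OR), take percentiles
    samples = rng.lognormal(mean=mu, sigma=sigma, size=(200_000, len(mu))).sum(axis=1)
    mc_p50, mc_p95 = np.percentile(samples, [50, 95])

    # Moment matching: fit a single lognormal to the mean and variance of the sum
    means = np.exp(mu + sigma**2 / 2)
    variances = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
    m, v = means.sum(), variances.sum()
    sig_fit = np.sqrt(np.log(1 + v / m**2))
    mu_fit = np.log(m) - sig_fit**2 / 2
    fit_p50 = np.exp(mu_fit)
    fit_p95 = np.exp(mu_fit + 1.645 * sig_fit)

    print(f"Monte Carlo:    median={mc_p50:.3e}  95th={mc_p95:.3e}")
    print(f"Lognormal fit:  median={fit_p50:.3e}  95th={fit_p95:.3e}")
    ```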

  10. FTC - THE FAULT-TREE COMPILER (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS

  11. Fault trees for decision making in systems analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Howard E.

    1975-10-09

    The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the most optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.

  12. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    PubMed

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.

  13. Fault Tree Analysis: Its Implications for Use in Education.

    ERIC Educational Resources Information Center

    Barker, Bruce O.

    This study introduces the concept of Fault Tree Analysis as a systems tool and examines the implications of Fault Tree Analysis (FTA) as a technique for isolating failure modes in educational systems. A definition of FTA and discussion of its history, as it relates to education, are provided. The step by step process for implementation and use of…

  14. FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS

  15. Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.

    PubMed

    Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C

    2015-06-01

    An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and are aiming at more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, that is, ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The fault tree developed here is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers to develop more refined and adequate control activities against FMD outbreaks and to find optimum strategies for rapid control. Further application of the current tree will be one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.

  16. A fuzzy decision tree for fault classification.

    PubMed

    Zio, Enrico; Baraldi, Piero; Popescu, Irina C

    2008-02-01

    In plant accident management, the control room operators are required to identify the causes of the accident, based on the different patterns of evolution thereby developing in the monitored process variables. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.

  17. Reliability analysis of the solar array based on Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Jianing, Wu; Shaoze, Yan

    2011-07-01

    The solar array is an important device used in spacecraft, which influences the quality of in-orbit operation of the spacecraft and even the launches. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system, based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structure importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is significant to prevent faults. Furthermore, recommendations for improving reliability associated with damage limitation are discussed, which can be used for the redesigning of the solar array and reliability growth planning.
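
    Structure importance of the kind used here can be computed directly from the tree's Boolean structure function: a component's structural importance is the fraction of states of the other components in which that component is critical, i.e., in which flipping its state alone flips the top event. The sketch below computes this for a small hypothetical tree whose component names merely echo the abstract; it is a generic illustration, not the DFH-3 solar array model.

    ```python
    # Minimal sketch: structural importance computed by enumerating the states of
    # all other components and counting those in which the component is critical
    # (flipping it flips the top event). Hypothetical tree, not the DFH-3 model.
    from itertools import product

    COMPONENTS = ["hinge", "spring", "seal"]

    def top_event(state):
        # Hypothetical structure function: TOP = hinge OR (spring AND seal)
        return state["hinge"] or (state["spring"] and state["seal"])

    def structural_importance(comp):
        others = [c for c in COMPONENTS if c != comp]
        critical = 0
        for values in product([False, True], repeat=len(others)):
            state = dict(zip(others, values))
            if top_event({**state, comp: True}) != top_event({**state, comp: False}):
                critical += 1
        return critical / 2 ** len(others)

    for comp in COMPONENTS:
        print(f"{comp:6s}  structural importance = {structural_importance(comp):.2f}")
    ```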

  18. Interim reliability evaluation program, Browns Ferry fault trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, M.E.

    1981-01-01

    An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation programs, simplifying the recording and displaying of events, yet maintaining the system of identifying faults. The level of investigation is not changed. The analytical thought process inherent in the conventional method is not compromised. But the abbreviated method takes less time, and the fault modes are much more visible.

  19. Modular techniques for dynamic fault-tree analysis

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Dugan, Joanne B.

    1992-01-01

    It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.

  20. Fault tree analysis for urban flooding.

    PubMed

    ten Veldhuis, J A E; Clemens, F H L R; van Gelder, P H A J M

    2009-01-01

    Traditional methods to evaluate flood risk generally focus on heavy storm events as the principal cause of flooding. Conversely, fault tree analysis is a technique that aims at modelling all potential causes of flooding. It quantifies both the overall flood probability and the relative contributions of individual causes of flooding. This paper presents a fault tree model for urban flooding and an application to the case of Haarlem, a city of 147,000 inhabitants. Data from a complaint register, rainfall gauges and hydrodynamic model calculations are used to quantify probabilities of basic events in the fault tree. This results in a flood probability of 0.78/week for Haarlem. It is shown that gully pot blockages contribute to 79% of flood incidents, whereas storm events contribute only 5%. This implies that for this case more efficient gully pot cleaning is a more effective strategy to reduce flood probability than enlarging drainage system capacity. Whether this is also the most cost-effective strategy can only be decided after risk assessment has been complemented with a quantification of consequences of both types of events. To do this will be the next step in this study.

  1. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture.

    PubMed

    Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing

    2017-01-14

    In the Internet of Things (IoT) for aquaculture, equipment is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the staff generally lack professional knowledge and pay a low degree of attention in these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logic structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while one symptom-to-two faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.

  2. Application of Fault Tree Analysis and Fuzzy Neural Networks to Fault Diagnosis in the Internet of Things (IoT) for Aquaculture

    PubMed Central

    Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing

    2017-01-01

    In the Internet of Things (IoT) for aquaculture, equipment is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the staff generally lack professional knowledge and pay a low degree of attention in these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logic structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while one symptom-to-two faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT. PMID:28098822

  3. A diagnosis system using object-oriented fault tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, F. A.

    1990-01-01

    Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in common LISP using Flavors.

  4. Object-Oriented Algorithm For Evaluation Of Fault Trees

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1992-01-01

    Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).

  5. Graphical fault tree analysis for fatal falls in the construction industry.

    PubMed

    Chi, Chia-Fen; Lin, Syuan-Zih; Dewi, Ratna Sari

    2014-11-01

    The current study applied a fault tree analysis to represent the causal relationships among events and causes that contributed to fatal falls in the construction industry. Four hundred and eleven work-related fatalities in the Taiwanese construction industry were analyzed in terms of age, gender, experience, falling site, falling height, company size, and the causes for each fatality. Given that most fatal accidents involve multiple events, the current study coded up to a maximum of three causes for each fall fatality. After the Boolean algebra and minimal cut set analyses, accident causes associated with each falling site can be presented as a fault tree to provide an overview of the basic causes, which could trigger fall fatalities in the construction industry. Graphical icons were designed for each falling site along with the associated accident causes to illustrate the fault tree in a graphical manner. A graphical fault tree can improve inter-disciplinary discussion of risk management and the communication of accident causation to first line supervisors. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection between loops in the graphs. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same
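
    The recursive top-down combination described above can be illustrated compactly: OR gates union their children's cut sets, AND gates combine them pairwise, and any set that is a superset of another is pruned so that only minimal cut sets remain. The Python sketch below is a generic illustration of that idea on a hypothetical tree, not the CUTSETS program itself.

    ```python
    # Minimal sketch of top-down minimal cut set generation: OR gates union their
    # children's cut sets, AND gates combine them pairwise, and any cut set that is
    # a superset of another is pruned. Hypothetical tree; not the CUTSETS program.
    def minimize(cut_sets):
        """Drop any cut set that is a proper superset of another."""
        sets = sorted({frozenset(c) for c in cut_sets}, key=len)
        kept = []
        for c in sets:
            if not any(k < c for k in kept):
                kept.append(c)
        return kept

    def cut_sets(node):
        kind = node[0]
        if kind == "basic":
            return [frozenset([node[1]])]
        child_sets = [cut_sets(child) for child in node[2]]
        if kind == "OR":
            merged = [c for sets in child_sets for c in sets]
        elif kind == "AND":
            merged = child_sets[0]
            for sets in child_sets[1:]:
                merged = [a | b for a in merged for b in sets]
        else:
            raise ValueError(f"unknown gate: {kind}")
        return minimize(merged)

    # Hypothetical tree: TOP = OR( AND(A, B), AND(A, C), D )
    tree = ("OR", "TOP", [("AND", "G1", [("basic", "A"), ("basic", "B")]),
                          ("AND", "G2", [("basic", "A"), ("basic", "C")]),
                          ("basic", "D")])
    for cs in cut_sets(tree):
        print(sorted(cs))
    ```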

  7. Fault Tree Analysis: A Research Tool for Educational Planning. Technical Report No. 1.

    ERIC Educational Resources Information Center

    Alameda County School Dept., Hayward, CA. PACE Center.

    This ESEA Title III report describes fault tree analysis and assesses its applicability to education. Fault tree analysis is an operations research tool which is designed to increase the probability of success in any system by analyzing the most likely modes of failure that could occur. A graphic portrayal, which has the form of a tree, is…

  8. The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-07-01

    In this paper, a new approach of intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component that affects the system reliability. Here, weakest t-norm based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault intervals of system components by integrating experts' knowledge and experience, expressed as the possibility of failure of bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets, and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The results of the proposed method are compared with those of existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
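
    The key property exploited by weakest t-norm (T(ω)) based fuzzy arithmetic is that spreads do not accumulate the way they do under the usual min-norm arithmetic. The sketch below is only a hedged illustration of that idea for ordinary triangular fuzzy numbers, using a commonly cited closed form for T(ω)-based addition (an assumption here; the paper's intuitionistic version also carries a non-membership function, which is omitted):

        from dataclasses import dataclass

        def t_weakest(a, b):
            """Drastic (weakest) t-norm T_w: returns a if b == 1, b if a == 1, else 0."""
            if b == 1.0:
                return a
            if a == 1.0:
                return b
            return 0.0

        @dataclass
        class TFN:
            """Triangular fuzzy number as (centre, left spread, right spread)."""
            m: float
            alpha: float
            beta: float

        def add_tw(x: TFN, y: TFN) -> TFN:
            """Addition under the weakest t-norm: spreads are combined with max rather than
            summed (a commonly cited closed form; stated here as an assumption)."""
            return TFN(x.m + y.m, max(x.alpha, y.alpha), max(x.beta, y.beta))

        # Two hypothetical bottom-event failure possibilities
        p1 = TFN(0.02, 0.005, 0.005)
        p2 = TFN(0.03, 0.010, 0.010)
        print(add_tw(p1, p2))   # roughly TFN(m=0.05, alpha=0.01, beta=0.01)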

  9. Try Fault Tree Analysis, a Step-by-Step Way to Improve Organization Development.

    ERIC Educational Resources Information Center

    Spitzer, Dean

    1980-01-01

    Fault Tree Analysis, a systems safety engineering technology used to analyze organizational systems, is described. Explains the use of logic gates to represent the relationship between failure events, qualitative analysis, quantitative analysis, and effective use of Fault Tree Analysis. (CT)

  10. Fault Tree Analysis for an Inspection Robot in a Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Ferguson, Thomas A.; Lu, Lixuan

    2017-09-01

    The life extension of current nuclear reactors has led to an increasing demand for inspection and maintenance of critical reactor components that are too expensive to replace. To reduce the exposure dosage to workers, robotics has become an attractive alternative as a preventative safety tool in nuclear power plants. It is crucial to understand the reliability of these robots in order to increase the veracity of and confidence in their results. This study applies Fault Tree (FT) analysis to a coolant outlet pipe snake-arm inspection robot in a nuclear power plant. Fault trees were constructed for a qualitative analysis to determine the reliability of the robot. Insight into the applicability of fault tree methods for inspection robotics in the nuclear industry is gained through this investigation.

  11. An overview of the phase-modular fault tree approach to phased mission system analysis

    NASA Technical Reports Server (NTRS)

    Meshkat, L.; Xing, L.; Donohue, S. K.; Ou, Y.

    2003-01-01

    In this paper we look at how fault tree analysis (FTA), a primary means of performing reliability analysis of phased mission systems (PMS), can meet this challenge by presenting an overview of the phase-modular approach to solving fault trees that represent PMS.

  12. System Analysis by Mapping a Fault-tree into a Bayesian-network

    NASA Astrophysics Data System (ADS)

    Sheng, B.; Deng, C.; Wang, Y. H.; Tang, L. H.

    2018-05-01

    In view of the limitations of fault tree analysis in reliability assessment, the Bayesian Network (BN) has been studied as an alternative technology. After a brief introduction to the method for mapping a Fault Tree (FT) into an equivalent BN, equations used to calculate the structure importance degree, the probability importance degree and the critical importance degree are presented. Furthermore, the correctness of these equations is proved mathematically. Using an aircraft landing gear's FT as an example, an equivalent BN is developed and analysed. The results show that richer and more accurate information has been achieved through the BN method than through the FT, which demonstrates that the BN is a superior technique in both reliability assessment and fault diagnosis.
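
    As a hedged sketch of the FT-to-BN mapping idea (the gate structure, event names, and probabilities below are made up for illustration, not the landing-gear model), each gate becomes a node with a deterministic conditional probability table, and the resulting network supports diagnostic queries such as P(basic event | top event) that a plain fault tree does not provide:

        from itertools import product

        # Hypothetical two-level tree: TOP = G1 OR E3, G1 = E1 AND E2
        gates  = {'TOP': ('OR', ['G1', 'E3']), 'G1': ('AND', ['E1', 'E2'])}
        priors = {'E1': 0.01, 'E2': 0.05, 'E3': 0.002}    # basic-event failure probabilities

        def evaluate(node, state):
            """Deterministic CPT of a gate node: OR -> any child true, AND -> all children true."""
            if node in state:
                return state[node]
            op, kids = gates[node]
            vals = [evaluate(k, state) for k in kids]
            return any(vals) if op == 'OR' else all(vals)

        def joint(state):
            """Joint probability of one assignment of the (independent) basic events."""
            p = 1.0
            for e, prob in priors.items():
                p *= prob if state[e] else (1.0 - prob)
            return p

        # Exact inference by enumeration: P(TOP) and the diagnostic posterior P(E1 | TOP).
        p_top = p_top_and_e1 = 0.0
        for bits in product([False, True], repeat=len(priors)):
            state = dict(zip(priors, bits))
            if evaluate('TOP', state):
                p = joint(state)
                p_top += p
                if state['E1']:
                    p_top_and_e1 += p
        print('P(TOP) =', round(p_top, 6))
        print('P(E1 | TOP) =', round(p_top_and_e1 / p_top, 4))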

  13. Fire safety in transit systems fault tree analysis

    DOT National Transportation Integrated Search

    1981-09-01

    Fire safety countermeasures applicable to transit vehicles are identified and evaluated. This document contains fault trees which illustrate the sequences of events which may lead to a transit-fire related casualty. A description of the basis for the...

  14. Applying fault tree analysis to the prevention of wrong-site surgery.

    PubMed

    Abecassis, Zachary A; McElroy, Lisa M; Patel, Ronak M; Khorzad, Rebeca; Carroll, Charles; Mehrotra, Sanjay

    2015-01-01

    Wrong-site surgery (WSS) is a rare event that occurs to hundreds of patients each year. Despite national implementation of the Universal Protocol over the past decade, development of effective interventions remains a challenge. We performed a systematic review of the literature reporting root causes of WSS and used the results to perform a fault tree analysis to assess the reliability of the system in preventing WSS and to identify high-priority targets for interventions aimed at reducing WSS. Process components where a single error could result in WSS were labeled with OR gates; process aspects reinforced by verification were labeled with AND gates. The overall redundancy of the system was evaluated based on the prevalence of AND gates and OR gates. In total, 37 studies described risk factors for WSS. The fault tree contains 35 faults, most of which fall into five main categories. Despite the Universal Protocol mandating patient verification, surgical site signing, and a brief time-out, a large proportion of the process relies on human transcription and verification. Fault tree analysis provides a standardized perspective of errors or faults within the system of surgical scheduling and site confirmation. It can be adapted by institutions or specialties to lead to more targeted interventions to increase redundancy and reliability within the preoperative process. Copyright © 2015 Elsevier Inc. All rights reserved.
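
    The redundancy argument can be made concrete with a toy calculation: an OR gate means a single unchecked error propagates to the top event, while an AND gate (an independent verification step) multiplies failure probabilities. The numbers below are purely illustrative and assume independence; they are not taken from the study:

        # Hypothetical human-error probabilities (illustrative only, not from the study)
        p_schedule_error = 0.01   # wrong site entered at scheduling
        p_consent_error  = 0.01   # wrong site on the consent form
        p_timeout_misses = 0.10   # time-out fails to catch an existing error

        # OR gate: a single unverified transcription error is enough to cause WSS
        p_wss_no_checks = 1 - (1 - p_schedule_error) * (1 - p_consent_error)

        # AND gate: an error causes WSS only if the independent time-out also misses it
        p_wss_with_timeout = p_wss_no_checks * p_timeout_misses

        print(p_wss_no_checks, p_wss_with_timeout)   # ~0.0199 versus ~0.0020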

  15. Structural system reliability calculation using a probabilistic fault tree analysis method

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA method.
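
    A minimal sketch of the underlying idea is shown below: estimating a small top-event probability for a structural fault tree by sampling the random load variable from an importance-sampling density shifted toward the failure region. This is a generic illustration with made-up limit-state functions, not the adaptive PFTA procedure itself:

        import numpy as np

        rng = np.random.default_rng(0)

        # Two hypothetical limit-state functions of a standard normal load variable x;
        # a bottom event occurs when g(x) < 0, and the (OR-gate) system fails if either occurs.
        g1 = lambda x: 3.0 - x            # fails for x > 3
        g2 = lambda x: 3.5 - x            # fails for x > 3.5

        def system_fails(x):
            return (g1(x) < 0) | (g2(x) < 0)

        # Importance sampling: draw from a normal shifted toward the failure region (mean 3)
        # and reweight by the likelihood ratio of the true density to the sampling density.
        n = 100_000
        x = rng.normal(loc=3.0, scale=1.0, size=n)
        log_w = (-0.5 * x**2) - (-0.5 * (x - 3.0)**2)   # log N(0,1) minus log N(3,1); constants cancel
        w = np.exp(log_w)
        p_fail = np.mean(system_fails(x) * w)
        print('estimated P(system failure) =', p_fail)  # close to P(X > 3) ≈ 1.35e-3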

  16. Transforming incomplete fault tree to Ishikawa diagram as an alternative method for technology transfer

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.

    2012-12-01

    Fault Tree Analysis (FTA) can be used for technology transfer when the relevant problem (called the 'top event' in FTA) is solved in a technology centre and the results are diffused to interested parties (usually Small and Medium Enterprises - SMEs) that do not have the proper equipment and the required know-how to solve the problem on their own. Nevertheless, there is a significant drawback in this procedure: the information usually provided by the SMEs to the technology centre, about production conditions and corresponding quality characteristics of the product, and (sometimes) the relevant expertise in the Knowledge Base of this centre may be inadequate to form a complete fault tree. Since such cases are quite frequent in practice, we have developed a methodology for transforming an incomplete fault tree into an Ishikawa diagram, which is more flexible and less strict in establishing causal chains, because it uses a surface phenomenological level with a limited number of categories of faults. On the other hand, such an Ishikawa diagram can be extended to simulate a fault tree as relevant knowledge increases. An implementation of this transformation, referring to the anodization of aluminium, is presented.

  17. Fault tree applications within the safety program of Idaho Nuclear Corporation

    NASA Technical Reports Server (NTRS)

    Vesely, W. E.

    1971-01-01

    Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure or accident causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.

  18. Decision tree and PCA-based fault diagnosis of rotating machinery

    NASA Astrophysics Data System (ADS)

    Sun, Weixiang; Chen, Jin; Li, Jiaqing

    2007-04-01

    After analysing the flaws of conventional fault diagnosis methods, data mining technology is introduced to the fault diagnosis field, and a new method based on the C4.5 decision tree and principal component analysis (PCA) is proposed. In this method, PCA is used to reduce the features after data collection, preprocessing and feature extraction. Then, C4.5 is trained on the samples to generate a decision tree model with diagnosis knowledge. Finally, the tree model is used to perform the diagnosis analysis. To validate the proposed method, six kinds of running states (normal or without any defect, unbalance, rotor radial rub, oil whirl, shaft crack, and a simultaneous state of unbalance and radial rub) are simulated on a Bently Rotor Kit RK4 to test the C4.5 and PCA-based method and a back-propagation neural network (BPNN). The results show that the C4.5 and PCA-based diagnosis method has higher accuracy and needs less training time than the BPNN.
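
    A rough scikit-learn sketch of the same pipeline shape is given below. The data are random stand-ins (the real study extracts vibration features from the rotor kit, so the accuracy printed here is meaningless), and scikit-learn's entropy-criterion CART is used as a stand-in for C4.5:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical stand-in data: rows are feature vectors, labels are machine states.
        rng = np.random.default_rng(42)
        X = rng.normal(size=(300, 20))
        y = rng.choice(['normal', 'unbalance', 'radial rub'], size=300)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # PCA reduces the extracted features, then an entropy-based tree learns diagnosis rules.
        model = make_pipeline(PCA(n_components=5),
                              DecisionTreeClassifier(criterion='entropy', random_state=0))
        model.fit(X_tr, y_tr)
        print('held-out accuracy:', model.score(X_te, y_te))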

  19. Direct evaluation of fault trees using object-oriented programming techniques

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1989-01-01

    Object-oriented programming techniques are used in an algorithm for the direct evaluation of fault trees. The algorithm combines a simple bottom-up procedure for trees without repeated events with a top-down recursive procedure for trees with repeated events. The object-oriented approach results in a dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. An example is presented in which the results of the algorithm implemented with conventional techniques are compared to those of the object-oriented approach.
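
    A minimal object-oriented sketch of direct gate-by-gate evaluation is shown below. It is only valid for trees with independent, non-repeated basic events; handling repeated events requires the top-down recursive treatment the abstract describes, which is not reproduced here. All class and event names are hypothetical:

        class Event:
            """Basic event with a fixed failure probability."""
            def __init__(self, name, prob):
                self.name, self.prob = name, prob
            def probability(self):
                return self.prob

        class Gate:
            def __init__(self, name, children):
                self.name, self.children = name, children

        class AndGate(Gate):
            def probability(self):
                p = 1.0
                for c in self.children:          # all inputs must fail
                    p *= c.probability()
                return p

        class OrGate(Gate):
            def probability(self):
                q = 1.0
                for c in self.children:          # fails unless every input survives
                    q *= 1.0 - c.probability()
                return 1.0 - q

        # Hypothetical tree without repeated events, so the simple bottom-up pass is exact.
        e1, e2, e3 = Event('E1', 0.01), Event('E2', 0.05), Event('E3', 0.002)
        top = OrGate('TOP', [AndGate('G1', [e1, e2]), e3])
        print('P(top) =', top.probability())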

  20. Fault Tree Analysis: An Operations Research Tool for Identifying and Reducing Undesired Events in Training.

    ERIC Educational Resources Information Center

    Barker, Bruce O.; Petersen, Paul D.

    This paper explores the fault-tree analysis approach to isolating failure modes within a system. Fault tree analysis investigates potentially undesirable events and then looks for failures in sequence that would lead to their occurrence. Relationships among these events are symbolized by AND or OR logic gates, AND being used when single events must coexist to…

  1. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    PubMed

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve the coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance simultaneously. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed for multi-robot-arm cooperative control is developed for real-time verification. Experiments on the networked robot-arms are conducted and the results confirm the benefits and the effectiveness of the proposed distributed fault-tolerant control algorithms.

  2. Fault Tree Analysis: An Emerging Methodology for Instructional Science.

    ERIC Educational Resources Information Center

    Wood, R. Kent; And Others

    1979-01-01

    Describes Fault Tree Analysis, a tool for systems analysis which attempts to identify possible modes of failure in systems to increase the probability of success. The article defines the technique and presents the steps of FTA construction, focusing on its application to education. (RAO)

  3. TH-EF-BRC-03: Fault Tree Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomadsen, B.

    2016-06-15

    This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100’s risk analysis: Process mapping, Failure-Modes and Effects Analysis and fault-tree analysis will be introduced with a 5 minute refresher presentation and each presentation will be followed by a 30 minute small group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.

  4. Reliability database development for use with an object-oriented fault tree evaluation program

    NASA Technical Reports Server (NTRS)

    Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann

    1989-01-01

    A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.

  5. Determining preventability of pediatric readmissions using fault tree analysis.

    PubMed

    Jonas, Jennifer A; Devon, Erin Pete; Ronan, Jeanine C; Ng, Sonia C; Owusu-McKenzie, Jacqueline Y; Strausbaugh, Janet T; Fieldston, Evan S; Hart, Jessica K

    2016-05-01

    Previous studies attempting to distinguish preventable from nonpreventable readmissions reported challenges in completing reviews efficiently and consistently. (1) Examine the efficiency and reliability of a Web-based fault tree tool designed to guide physicians through chart reviews to a determination about preventability. (2) Investigate root causes of general pediatrics readmissions and identify the percent that are preventable. General pediatricians from The Children's Hospital of Philadelphia used a Web-based fault tree tool to classify root causes of all general pediatrics 15-day readmissions in 2014. The tool guided reviewers through a logical progression of questions, which resulted in 1 of 18 root causes of readmission, 8 of which were considered potentially preventable. Twenty percent of cases were cross-checked to measure inter-rater reliability. Of the 7252 discharges, 248 were readmitted, for an all-cause general pediatrics 15-day readmission rate of 3.4%. Of those readmissions, 15 (6.0%) were deemed potentially preventable, corresponding to 0.2% of total discharges. The most common cause of potentially preventable readmissions was premature discharge. For the 50 cross-checked cases, both reviews resulted in the same root cause for 44 (86%) of files (κ = 0.79; 95% confidence interval: 0.60-0.98). Completing 1 review using the tool took approximately 20 minutes. The Web-based fault tree tool helped physicians to identify root causes of hospital readmissions and classify them as either preventable or not preventable in an efficient and consistent way. It also confirmed that only a small percentage of general pediatrics 15-day readmissions are potentially preventable. Journal of Hospital Medicine 2016;11:329-335. © 2016 Society of Hospital Medicine.

  6. Reliability Analysis of Main-axis Control System of the Equatorial Antarctica Astronomical Telescope Based on Fault Tree

    NASA Astrophysics Data System (ADS)

    LI, Y.; Yang, S. H.

    2017-05-01

    The Antarctica astronomical telescopes operate for long periods at the unattended South Pole, and there is only one chance each year to maintain them. Due to the complexity of the optical, mechanical, and electrical systems, the telescopes are hard to maintain and require multi-skilled expedition teams, which means that particular attention to the reliability of the Antarctica telescopes is essential. Based on the fault mechanisms and fault modes of the main-axis control system of the equatorial Antarctica astronomical telescope AST3-3 (Antarctic Schmidt Telescopes 3-3), the method of fault tree analysis is introduced in this article, and the importance degree of the top event is obtained from the structural importance degrees of the bottom events. From these results, hidden problems and weak links can be effectively identified, which indicates the direction for improving the stability of the system and optimizing its design.

  7. A Fault Tree Approach to Needs Assessment -- An Overview.

    ERIC Educational Resources Information Center

    Stephens, Kent G.

    A "failsafe" technology is presented based on a new unified theory of needs assessment. Basically the paper discusses fault tree analysis as a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur and then suggesting high priority avoidance strategies for those…

  8. TU-AB-BRD-03: Fault Tree Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunscombe, P.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk based quality management program. Learning Objectives: Learn how to design a process map for a radiotherapy process Learn

  9. Fault tree analysis for system modeling in case of intentional EMI

    NASA Astrophysics Data System (ADS)

    Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.

    2011-08-01

    The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other hand increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically, since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is fault tree analysis (FTA). The FTA is used to determine the system failure probability and also the main contributors to its failure. In this paper the fault tree analysis is introduced and a possible application of the method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.

  10. A Fault Tree Approach to Analysis of Organizational Communication Systems.

    ERIC Educational Resources Information Center

    Witkin, Belle Ruth; Stephens, Kent G.

    Fault Tree Analysis (FTA) is a method of examining communication in an organization by focusing on: (1) the complex interrelationships in human systems, particularly in communication systems; (2) interactions across subsystems and system boundaries; and (3) the need to select and "prioritize" channels which will eliminate noise in the…

  11. The Role of Coseismic Coulomb Stress Changes in Shaping the Hard Link Between Normal Fault Segments

    NASA Astrophysics Data System (ADS)

    Hodge, M.; Fagereng, Å.; Biggs, J.

    2018-01-01

    The mechanism and evolution of fault linkage is important in the growth and development of large faults. Here we investigate the role of coseismic stress changes in shaping the hard links between parallel normal fault segments (or faults), by comparing numerical models of the Coulomb stress change from simulated earthquakes on two en echelon fault segments to natural observations of hard-linked fault geometry. We consider three simplified linking fault geometries: (1) fault bend, (2) breached relay ramp, and (3) strike-slip transform fault. We consider scenarios where either one or both segments rupture and vary the distance between segment tips. Fault bends and breached relay ramps are favored where segments underlap or when the strike-perpendicular distance between overlapping segments is less than 20% of their total length, matching all 14 documented examples. Transform fault linkage geometries are preferred when overlapping segments are laterally offset at larger distances. Few transform faults exist in continental extensional settings, and our model suggests that propagating faults or fault segments may first link through fault bends or breached ramps before reaching sufficient overlap for a transform fault to develop. Our results suggest that Coulomb stresses arising from multisegment ruptures or repeated earthquakes are consistent with natural observations of the geometry of hard links between parallel normal fault segments.

  12. Fault tree analysis: NiH2 aerospace cells for LEO mission

    NASA Technical Reports Server (NTRS)

    Klein, Glenn C.; Rash, Donald E., Jr.

    1992-01-01

    The Fault Tree Analysis (FTA) is one of several reliability analyses or assessments applied to battery cells to be utilized in typical Electric Power Subsystems for spacecraft in low Earth orbit missions. FTA is generally the process of reviewing and analytically examining a system or equipment in such a way as to emphasize the lower level fault occurrences which directly or indirectly contribute to the major fault or top level event. This qualitative FTA addresses the potential of occurrence for five specific top level events: hydrogen leakage through either discrete leakage paths or through pressure vessel rupture; and four distinct modes of performance degradation - high charge voltage, suppressed discharge voltage, loss of capacity, and high pressure.

  13. Integrating Insults: Using Fault Tree Analysis to Guide Schizophrenia Research across Levels of Analysis.

    PubMed

    MacDonald III, Angus W; Zick, Jennifer L; Chafee, Matthew V; Netoff, Theoden I

    2015-01-01

    The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect for psychiatry's syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between diagnosis co-morbidity.

  14. Integrating Insults: Using Fault Tree Analysis to Guide Schizophrenia Research across Levels of Analysis

    PubMed Central

    MacDonald III, Angus W.; Zick, Jennifer L.; Chafee, Matthew V.; Netoff, Theoden I.

    2016-01-01

    The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry’s standard neo-Kraepelinian (DSM) perspective. At the same time the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect for psychiatry’s syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between diagnosis co-morbidity. PMID:26779007

  15. Missing link between the Hayward and Rodgers Creek faults

    PubMed Central

    Watt, Janet; Ponce, David; Parsons, Tom; Hart, Patrick

    2016-01-01

    The next major earthquake to strike the ~7 million residents of the San Francisco Bay Area will most likely result from rupture of the Hayward or Rodgers Creek faults. Until now, the relationship between these two faults beneath San Pablo Bay has been a mystery. Detailed subsurface imaging provides definitive evidence of active faulting along the Hayward fault as it traverses San Pablo Bay and bends ~10° to the right toward the Rodgers Creek fault. Integrated geophysical interpretation and kinematic modeling show that the Hayward and Rodgers Creek faults are directly connected at the surface—a geometric relationship that has significant implications for earthquake dynamics and seismic hazard. A direct link enables simultaneous rupture of the Hayward and Rodgers Creek faults, a scenario that could result in a major earthquake (M = 7.4) that would cause extensive damage and loss of life with global economic impact. PMID:27774514

  16. Missing link between the Hayward and Rodgers Creek faults

    USGS Publications Warehouse

    Watt, Janet; Ponce, David A.; Parsons, Thomas E.; Hart, Patrick E.

    2016-01-01

    The next major earthquake to strike the ~7 million residents of the San Francisco Bay Area will most likely result from rupture of the Hayward or Rodgers Creek faults. Until now, the relationship between these two faults beneath San Pablo Bay has been a mystery. Detailed subsurface imaging provides definitive evidence of active faulting along the Hayward fault as it traverses San Pablo Bay and bends ~10° to the right toward the Rodgers Creek fault. Integrated geophysical interpretation and kinematic modeling show that the Hayward and Rodgers Creek faults are directly connected at the surface—a geometric relationship that has significant implications for earthquake dynamics and seismic hazard. A direct link enables simultaneous rupture of the Hayward and Rodgers Creek faults, a scenario that could result in a major earthquake (M = 7.4) that would cause extensive damage and loss of life with global economic impact.

  17. Fault tree analysis of most common rolling bearing tribological failures

    NASA Astrophysics Data System (ADS)

    Vencl, Aleksandar; Gašić, Vlada; Stojanović, Blaža

    2017-02-01

    Wear as a tribological process has a major influence on the reliability and life of rolling bearings. Field examinations of bearing failures due to wear indicate possible causes and point to the measures necessary for wear reduction or elimination. Wear itself is a very complex process initiated by the action of different mechanisms, and it can be manifested as different wear types which are often related. However, the dominant type of wear can be approximately determined. The paper presents a classification of the most common bearing damages according to the dominant wear type, i.e. abrasive wear, adhesive wear, surface fatigue wear, erosive wear, fretting wear and corrosive wear. The wear types are correlated with the terms used in the ISO 15243 standard. Each wear type is illustrated with an appropriate photograph, and for each wear type an appropriate description of causes and manifestations is presented. Possible causes of rolling bearing failure are used for the fault tree analysis (FTA), which was performed to determine the root causes of bearing failures. The constructed fault tree diagram for rolling bearing failure can be a useful tool for maintenance engineers.

  18. Fault Tree Analysis as a Planning and Management Tool: A Case Study

    ERIC Educational Resources Information Center

    Witkin, Belle Ruth

    1977-01-01

    Fault Tree Analysis is an operations research technique used to analyse the most probable modes of failure in a system, in order to redesign or monitor the system more closely and increase its likelihood of success. (Author)

  19. Program listing for fault tree analysis of JPL technical report 32-1542

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.

    1971-01-01

    The computer program listing for the MAIN program and those subroutines unique to the fault tree analysis are described. Some subroutines are used for analyzing the reliability block diagram. The program is written in FORTRAN 5 and runs on a UNIVAC 1108.

  20. A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.

    ERIC Educational Resources Information Center

    Stephens, Kent G.

    Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…

  1. Probabilistic Risk Assessment of Hydraulic Fracturing in Unconventional Reservoirs by Means of Fault Tree Analysis: An Initial Discussion

    NASA Astrophysics Data System (ADS)

    Rodak, C. M.; McHugh, R.; Wei, X.

    2016-12-01

    The development and combination of horizontal drilling and hydraulic fracturing has unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating potential risks posed by directional hydraulic fracturing activities. These studies, balanced with potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantification of the overall risk posed by hydraulic fracturing at the system level is rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is approached with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity and quality-based impacts and sub-divided based on the stage of the hydraulic fracturing process in which the data are relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The structure of the fault tree described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic. This project shows the utility of fault tree analysis for the visual communication of the potential risks of hydraulic fracturing activities on groundwater resources.

  2. Mines Systems Safety Improvement Using an Integrated Event Tree and Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Kumar, Ranjan; Ghosh, Achyuta Krishna

    2017-04-01

    Mine systems such as ventilation systems, strata support systems, and flameproof safety equipment are exposed to dynamic operational conditions such as stress, humidity, dust, temperature, etc., and safety improvement of such systems is best done during the planning and design stage. However, the existing safety analysis methods do not handle the accident initiation and progression of mine systems explicitly. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine systems design. This approach combines ET and FT modeling with a redundancy allocation technique. In this method, a concept of top hazard probability is introduced for identifying the system failure probability, and redundancy is allocated to the system at either the component or the system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal the accident scenarios and improve the safety of complex mine systems simultaneously.

  3. Locating hardware faults in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

    Hardware fault location in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running the same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
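
    A toy Python rendering of the claim's logic is sketched below (all node names, the healthy_links set, and the stand-in run_suite are hypothetical; in the patent the suite is actual traffic run on the hardware): if the suite fails on a parent's test tree but passes on every child's test tree, the defect must lie on a link from that parent to one of its children.

        children = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': [], 'D': [], 'E': []}
        healthy_links = {('A', 'B'), ('A', 'C'), ('B', 'E')}   # ('B', 'D') is the defective link

        def iter_subtree(root):
            yield root
            for c in children[root]:
                yield from iter_subtree(c)

        def run_suite(root):
            """Stand-in for the real test suite: passes iff every link inside the subtree is healthy."""
            return all((n, c) in healthy_links
                       for n in iter_subtree(root) for c in children[n])

        def locate_faults(parent):
            """Suite fails on the parent test tree but passes on every child test tree
            => the parent has a defective link to one of its children."""
            suspects = []
            kids = children[parent]
            if kids and not run_suite(parent) and all(run_suite(c) for c in kids):
                suspects.append(parent)
            for c in kids:
                suspects.extend(locate_faults(c))
            return suspects

        print(locate_faults('A'))   # ['B']: the bad link hangs off node B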

  4. SCAP: a new methodology for safety management based on feedback from credible accident-probabilistic fault tree analysis system.

    PubMed

    Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A

    2001-10-12

    As it is conventionally done, strategies for incorporating accident-prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps -- risk assessment and hazard reduction (or safety) measures -- are not linked interactively in the existing methodologies. This prevents a quantitative assessment of the impacts of safety measures on risk control. We have made an attempt to develop a methodology in which the risk assessment steps are interactively linked with the implementation of safety measures. The resultant system indicates the extent to which risk is reduced by each successive safety measure. It also indicates, based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA), whether a given unit can ever be made 'safe'. The application of the methodology has been illustrated with a case study.

  5. Analytical simulation and PROFAT II: a new methodology and a computer automated tool for fault tree analysis in chemical process industries.

    PubMed

    Khan, F I; Abbasi, S A

    2000-07-10

    Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has been a powerful technique used traditionally in identifying hazards in nuclear installations and power industries. As the systematic articulation of the fault tree is associated with assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named AS-II), which makes the application of FTA simpler, quicker, and cheaper, thus opening up the possibility of its wider use in risk assessment in chemical process industries. Based on the methodology we have developed a computer-automated tool. The details are presented in this paper.

  6. Faults Discovery By Using Mined Data

    NASA Technical Reports Server (NTRS)

    Lee, Charles

    2005-01-01

    Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models for the systems either from mathematical formulations or from experimental models. Fault Tree Analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. Those models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze the faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data fed back from the system, and decisions are made based on threshold values by using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and capture the contents of fault trees as the initial state of the trees.

  7. Probability and possibility-based representations of uncertainty in fault tree analysis.

    PubMed

    Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje

    2013-01-01

    Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.

  8. Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before they find the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system that performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as diagnostic knowledge, and then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be
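
    One classical single-fault sequencing rule (a simplification, not necessarily the exact ranking FTDOTS uses) is to check candidate failures in decreasing order of their probability-to-cost ratio, which minimizes the expected time to isolate the fault. A small hypothetical example:

        # Hypothetical failure hypotheses: (name, probability given the symptoms, isolation/repair time)
        hypotheses = [
            ('power supply',    0.20, 15.0),   # minutes
            ('sensor cable',    0.50, 40.0),
            ('amplifier board', 0.30, 10.0),
        ]

        # Check candidates in decreasing order of probability-to-cost ratio.
        ordered = sorted(hypotheses, key=lambda h: h[1] / h[2], reverse=True)

        expected, p_remaining = 0.0, 1.0
        for name, p, cost in ordered:
            expected += p_remaining * cost   # this check is paid only if earlier checks found nothing
            p_remaining -= p
        print([h[0] for h in ordered], 'expected time ≈', expected)   # ≈ 28.5 minutes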

  9. Causation mechanism analysis for haze pollution related to vehicle emission in Guangzhou, China by employing the fault tree approach.

    PubMed

    Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu

    2016-05-01

    Recently, China has frequently experienced large-scale, severe and persistent haze pollution due to surging urbanization and industrialization and a rapid growth in the number of motor vehicles and in energy consumption. Vehicle emissions due to the consumption of large amounts of fossil fuels are no doubt a critical factor in the haze pollution. This work focuses on the causation mechanism of haze pollution related to vehicle emissions in Guangzhou city, employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system of "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are discussed and identified by using this deductive FTA method. The qualitative and quantitative assessments of the fault tree system are carried out based on the structure, probability and critical importance degree analysis of the risk factors. The study may provide a new, simple and effective tool/strategy for the causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. A fast bottom-up algorithm for computing the cut sets of noncoherent fault trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corynen, G.C.

    1987-11-01

    An efficient procedure for finding the cut sets of large fault trees has been developed. Designed to address coherent or noncoherent systems, dependent events, shared or common-cause events, the method - called SHORTCUT - is based on a fast algorithm for transforming a noncoherent tree into a quasi-coherent tree (COHERE), and on a new algorithm for reducing cut sets (SUBSET). To assure sufficient clarity and precision, the procedure is discussed in the language of simple sets, which is also developed in this report. Although the new method has not yet been fully implemented on the computer, we report theoretical worst-case estimates of its computational complexity. 12 refs., 10 figs.

  11. Distributed intrusion monitoring system with fiber link backup and on-line fault diagnosis functions

    NASA Astrophysics Data System (ADS)

    Xu, Jiwei; Wu, Huijuan; Xiao, Shunkun

    2014-12-01

    A novel multi-channel distributed optical fiber intrusion monitoring system with smart fiber link backup and on-line fault diagnosis functions is proposed. A 1×N optical switch intelligently controlled by a peripheral interface controller (PIC) expands the fiber link from one channel to several, lowering the cost of long or ultra-long distance intrusion monitoring and strengthening the intelligent monitoring link backup function. At the same time, a sliding window auto-correlation method is presented to identify and locate the broken or faulty point of the cable. The experimental results showed that the proposed multi-channel system performed well, especially whenever a broken cable was detected. It could accurately locate the broken or faulty point by itself and switch to its backup sensing link immediately, ensuring that the security system operated stably without idling for even a minute. The system was successfully applied in a field test for security monitoring of a 220-km national borderline in China.
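
    As a hedged sketch of how a sliding-window correlation can flag a break location (the trace model, window size, and threshold below are invented for illustration; the paper's exact formulation is not given in the abstract): the current backscatter trace stays correlated with a reference trace up to the break point and decorrelates beyond it.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic traces sampled along the fibre (sample index stands in for distance).
        n = 2000
        reference = rng.normal(1.0, 0.05, size=n)
        current = reference + rng.normal(0.0, 0.02, size=n)
        break_at = 1350
        current[break_at:] = rng.normal(0.0, 0.02, size=n - break_at)   # signal lost past the break

        def first_decorrelated_window(ref, cur, win=50, threshold=0.5):
            """Slide a window along both traces; return the start of the first window whose
            normalised correlation with the reference drops below the threshold (coarse estimate)."""
            for start in range(0, len(ref) - win, win // 2):
                a = ref[start:start + win] - ref[start:start + win].mean()
                b = cur[start:start + win] - cur[start:start + win].mean()
                corr = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
                if corr < threshold:
                    return start
            return None

        print('estimated break index:', first_decorrelated_window(reference, current))
        # prints an index near the true break at 1350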

  12. Fault isolation through no-overhead link level CRC

    DOEpatents

    Chen, Dong; Coteus, Paul W.; Gara, Alan G.

    2007-04-24

    A fault isolation technique for checking the accuracy of data packets transmitted between nodes of a parallel processor. An independent CRC is kept of all data sent from one processor to another, and of all data received from one processor to another. At the end of each checkpoint, the CRCs are compared. If they do not match, there was an error. The CRCs may be cleared and restarted at each checkpoint. In the preferred embodiment, the basic functionality is to calculate a CRC of all packet data that has been successfully transmitted across a given link. This CRC is done on both ends of the link, thereby allowing an independent check on all data believed to have been correctly transmitted. Preferably, all links have this CRC coverage, and the CRC used in this link-level check is different from that used in the packet transfer protocol. This independent check, if successfully passed, virtually eliminates the possibility that any data errors were missed during the previous transfer period.
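
    A schematic software analogue of the checkpoint comparison is sketched below (the patent describes a hardware mechanism; the class and packet contents here are a hypothetical illustration using Python's zlib.crc32 as the running CRC):

        import zlib

        class LinkEnd:
            """Keeps a running CRC of every packet payload believed to be transferred on this link."""
            def __init__(self):
                self.crc = 0
            def record(self, payload: bytes):
                self.crc = zlib.crc32(payload, self.crc)   # incremental CRC over all traffic

        sender, receiver = LinkEnd(), LinkEnd()

        for packet in [b'alpha', b'bravo', b'charlie']:
            sender.record(packet)
            delivered = packet                  # imagine the wire silently flipping a bit here
            receiver.record(delivered)

        # At a checkpoint, both ends compare (and then clear) their independent CRCs.
        assert sender.crc == receiver.crc, 'link-level CRC mismatch: silent corruption on this link'
        sender.crc = receiver.crc = 0
        print('checkpoint passed')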

  13. Fuzzy fault tree assessment based on improved AHP for fire and explosion accidents for steel oil storage tanks.

    PubMed

    Shi, Lei; Shuai, Jian; Xu, Kui

    2014-08-15

    Fire and explosion accidents of steel oil storage tanks (FEASOST) occur occasionally during petroleum and chemical industry production and storage processes and often have a devastating impact on lives, the environment and property. To contribute towards the development of a quantitative approach for assessing the occurrence probability of FEASOST, a fault tree of FEASOST is constructed that identifies various potential causes. Traditional fault tree analysis (FTA) can achieve quantitative evaluation if the failure data of all of the basic events (BEs) are available, which is almost impossible due to the lack of detailed data, as well as other uncertainties. This paper makes an attempt to perform FTA of FEASOST through a hybrid application of an expert elicitation based improved analytic hierarchy process (AHP) and fuzzy set theory, and the occurrence possibility of FEASOST is estimated for an oil depot in China. A comparison between statistical data and data calculated using fuzzy fault tree analysis (FFTA) based on the traditional and improved AHP is also made. Sensitivity and importance analyses have been performed to identify the most crucial BEs leading to FEASOST, which will provide insights into where managers should focus effective mitigation. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. FAULT TREE ANALYSIS FOR EXPOSURE TO REFRIGERANTS USED FOR AUTOMOTIVE AIR CONDITIONING IN THE U.S.

    EPA Science Inventory

    A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servic...

  15. Using minimal spanning trees to compare the reliability of network topologies

    NASA Technical Reports Server (NTRS)

    Leister, Karen J.; White, Allan L.; Hayhurst, Kelly J.

    1990-01-01

    Graph theoretic methods are applied to compute the reliability for several types of networks of moderate size. The graph theory methods used are minimal spanning trees for networks with bi-directional links and the related concept of strongly connected directed graphs for networks with uni-directional links. A comparison is conducted of ring networks and braided networks. The case is covered where just the links fail and the case where both links and nodes fail. Two different failure modes for the links are considered. For one failure mode, the link no longer carries messages. For the other failure mode, the link delivers incorrect messages. There is a description and comparison of link-redundancy versus path-redundancy as methods to achieve reliability. All the computations are carried out by means of a fault tree program.
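
    The abstract's key observation is that a network with bidirectional links remains operational exactly when the surviving links still contain a spanning tree, that is, when the graph stays connected. The sketch below illustrates this with a small Monte Carlo comparison; the topologies, the failure probability, and the particular "braided" variant are assumptions for illustration, not the report's models, and only link failures are simulated:

        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(7)

        def ring(n):
            return nx.cycle_graph(n)

        def braided(n):
            """Ring plus 'skip' links to next-nearest neighbours (one simple braid variant; an assumption)."""
            g = nx.cycle_graph(n)
            g.add_edges_from((i, (i + 2) % n) for i in range(n))
            return g

        def survival_probability(g, p_link_fail=0.05, trials=20000):
            """Monte Carlo estimate that the surviving links still span all nodes,
            i.e. a spanning tree still exists and every node can reach every other."""
            ok = 0
            edges = list(g.edges())
            for _ in range(trials):
                survivors = [e for e in edges if rng.random() > p_link_fail]
                h = nx.Graph()
                h.add_nodes_from(g.nodes())
                h.add_edges_from(survivors)
                ok += nx.is_connected(h)
            return ok / trials

        print('ring   :', survival_probability(ring(8)))
        print('braided:', survival_probability(braided(8)))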

  16. Fault-zone waves observed at the southern Joshua Tree earthquake rupture zone

    USGS Publications Warehouse

    Hough, S.E.; Ben-Zion, Y.; Leary, P.

    1994-01-01

    Waveform and spectral characteristics of several aftershocks of the M 6.1 22 April 1992 Joshua Tree earthquake recorded at stations just north of the Indio Hills in the Coachella Valley can be interpreted in terms of waves propagating within narrow, low-velocity, high-attenuation, vertical zones. Evidence for our interpretation consists of: (1) emergent P arrivals prior to and opposite in polarity to the impulsive direct phase; these arrivals can be modeled as headwaves indicative of a transfault velocity contrast; (2) spectral peaks in the S wave train that can be interpreted as internally reflected, low-velocity fault-zone wave energy; and (3) spatial selectivity of event-station pairs at which these data are observed, suggesting a long, narrow geologic structure. The observed waveforms are modeled using the analytical solution of Ben-Zion and Aki (1990) for a plane-parallel layered fault-zone structure. Synthetic waveform fits to the observed data indicate the presence of NS-trending vertical fault-zone layers characterized by a thickness of 50 to 100 m, a velocity decrease of 10 to 15% relative to the surrounding rock, and a P-wave quality factor in the range 25 to 50.

  17. Control model design to limit DC-link voltage during grid fault in a dfig variable speed wind turbine

    NASA Astrophysics Data System (ADS)

    Nwosu, Cajethan M.; Ogbuka, Cosmas U.; Oti, Stephen E.

    2017-08-01

    This paper presents a control model design capable of inhibiting the sharp rise in the DC-link voltage during grid-fault conditions in a variable speed wind turbine. In contrast to power circuit protection strategies, which have inherent limitations in fault ride-through capability, a control circuit algorithm is proposed that limits the DC-link voltage rise, whose dynamics in turn directly influence the characteristics of the rotor voltage, especially during grid faults. The model results so obtained compare favorably with the simulation results obtained in a MATLAB/SIMULINK environment. The generated model may therefore be used to predict, with near accuracy, the nature of DC-link voltage variations during faults, given factors that include the speed and speed mode of operation and the value of the damping resistor relative to half the product of the inner-loop current control bandwidth and the filter inductance.

  18. Reliability analysis method of a solar array by using fault tree analysis and fuzzy reasoning Petri net

    NASA Astrophysics Data System (ADS)

    Wu, Jianing; Yan, Shaoze; Xie, Liyang

    2011-12-01

    To address the impact of solar array anomalies, it is important to analyze solar array reliability. This paper establishes fault tree analysis (FTA) and fuzzy reasoning Petri net (FRPN) models of a solar array mechanical system and analyzes their reliability to find the mechanisms of solar array faults. The indices final truth degree (FTD) and cosine matching function (CMF) are employed to resolve the issue of how to evaluate the importance and influence of different faults. An improved reliability analysis method is thus developed by sorting the FTD and CMF values. An example is analyzed using the proposed method. The analysis results show that the harsh thermal environment and impacts caused by particles in space are the most important causes of solar array faults. Furthermore, other fault modes and the corresponding improvement methods are discussed. The results reported in this paper could be useful for spacecraft designers, particularly in the process of redesigning the solar array and scheduling its reliability growth plan.

  19. Corridors of crestal and radial faults linking salt diapirs in the Espírito Santo Basin, SE Brazil

    NASA Astrophysics Data System (ADS)

    Mattos, Nathalia H.; Alves, Tiago M.

    2018-03-01

    This work uses high-quality 3D seismic data to assess the geometry of fault families around salt diapirs in SE Brazil (Espírito Santo Basin). It aims at evaluating the timings of fault growth, and suggests the generation of corridors for fluid migration linking discrete salt diapirs. Three salt diapirs, one salt ridge, and five fault families were identified based on their geometry and relative locations. Displacement-length (D-x) plots, Throw-depth (T-z) data and structural maps indicate that faults consist of multiple segments that were reactivated by dip-linkage following a preferential NE-SW direction. This style of reactivation and linkage is distinct from other sectors of the Espírito Santo Basin where the preferential mode of reactivation is by upwards vertical propagation. Reactivation of faults above a Mid-Eocene unconformity is also scarce in the study area. Conversely, two halokinetic episodes dated as Cretaceous and Paleogene are interpreted below a Mid-Eocene unconformity. This work is important as it recognises the juxtaposition of permeable strata across faults as marking the generation of fault corridors linking adjacent salt structures. In such a setting, fault modelling shows that fluid will migrate towards the shallower salt structures along the fault corridors first identified in this work.

  20. Interacting faults

    NASA Astrophysics Data System (ADS)

    Peacock, D. C. P.; Nixon, C. W.; Rotevatn, A.; Sanderson, D. J.; Zuluaga, L. F.

    2017-04-01

    The way that faults interact with each other controls fault geometries, displacements and strains. Faults rarely occur individually but as sets or networks, with the arrangement of these faults producing a variety of different fault interactions. Fault interactions are characterised in terms of the following: 1) Geometry - the spatial arrangement of the faults. Interacting faults may or may not be geometrically linked (i.e. physically connected), when fault planes share an intersection line. 2) Kinematics - the displacement distributions of the interacting faults and whether the displacement directions are parallel, perpendicular or oblique to the intersection line. Interacting faults may or may not be kinematically linked, where the displacements, stresses and strains of one fault influence those of the other. 3) Displacement and strain in the interaction zone - whether the faults have the same or opposite displacement directions, and if extension or contraction dominates in the acute bisector between the faults. 4) Chronology - the relative ages of the faults. This characterisation scheme is used to suggest a classification for interacting faults. Different types of interaction are illustrated using metre-scale faults from the Mesozoic rocks of Somerset and examples from the literature.

  1. Fault tree safety analysis of a large Li/SOCl(sub)2 spacecraft battery

    NASA Technical Reports Server (NTRS)

    Uy, O. Manuel; Maurer, R. H.

    1987-01-01

    The results of the safety fault tree analysis on the eight module, 576 F cell Li/SOCl2 battery on the spacecraft and in the integration and test environment prior to launch on the ground are presented. The analysis showed that with the right combination of blocking diodes, electrical fuses, thermal fuses, thermal switches, cell balance, cell vents, and battery module vents, the probability of a single cell or a 72 cell module exploding can be reduced to 0.000001, essentially the probability of explosion for unexplained reasons.
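
    The quoted 10^-6 figure comes from combining independent protection features and initiating events through fault tree gates. The sketch below shows that style of gate arithmetic only in outline; the event names and probabilities are invented for illustration and are not the values or tree from the study.

      # Hedged illustration: AND/OR gate arithmetic for a small fault tree.
      # All event names and probabilities below are invented, not from the study.
      def and_gate(probs):          # all protective barriers must fail
          p = 1.0
          for x in probs:
              p *= x
          return p

      def or_gate(probs):           # any one initiating event is enough
          q = 1.0
          for x in probs:
              q *= (1.0 - x)
          return 1.0 - q

      p_initiator = or_gate([1e-3, 5e-4])         # e.g. overcharge or external short
      p_barriers  = and_gate([1e-2, 1e-2, 1e-2])  # e.g. diode, fuse and vent all fail
      p_cell_explosion = p_initiator * p_barriers
      print(f"illustrative cell-explosion probability: {p_cell_explosion:.1e}")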

  2. Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, B.; Hulbert, C.; Ren, C. X.; Bolton, D. C.; Marone, C.; Johnson, P. A.

    2017-12-01

    Fault friction controls nearly all aspects of fault rupture, yet it can only be measured in the laboratory. Here we describe laboratory experiments in which acoustic emissions are recorded from the fault. We find that by applying a machine learning approach known as "extreme gradient boosting trees" to the continuous acoustic signal, the fault friction can be directly inferred, showing that instantaneous characteristics of the acoustic signal are a fingerprint of the frictional state. This machine-learning-based inference leads to a simple law linking the acoustic signal to the friction state, and it holds for every stress cycle the laboratory fault goes through. The approach uses no measured parameter other than instantaneous statistics of the acoustic signal. This finding may be important for inferring frictional characteristics from seismic waves in the Earth, where fault friction cannot be measured.
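
    The workflow described (instantaneous statistics of the acoustic signal regressed against friction with boosted trees) can be sketched as below. This is not the authors' pipeline: it substitutes scikit-learn's GradientBoostingRegressor for the gradient-boosting implementation, and the signal, features and "friction" target are synthetic placeholders.

      # Minimal sketch: regress a synthetic "friction" value from windowed
      # statistics of a synthetic "acoustic" signal. Purely illustrative data.
      import numpy as np
      from scipy.stats import kurtosis
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(0)
      n_windows, window = 500, 256
      friction = np.linspace(0.2, 0.7, n_windows)                      # fake frictional state
      signal = [rng.normal(0.0, 0.01 + f, window) for f in friction]   # amplitude tracks friction

      X = np.array([[np.var(w), kurtosis(w), np.abs(w).max()] for w in signal])
      y = friction

      model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
      model.fit(X[:400], y[:400])                                      # train on early windows
      print("R^2 on held-out windows:", model.score(X[400:], y[400:]))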

  3. Machine Learning of Fault Friction

    NASA Astrophysics Data System (ADS)

    Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.

    2017-12-01

    We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus composed of fault blocks surrounding fault gouge made of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick-slip and slow-slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774; Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025.

  4. Two-dimensional fuzzy fault tree analysis for chlorine release from a chlor-alkali industry using expert elicitation.

    PubMed

    Renjith, V R; Madhu, G; Nayagam, V Lakshmana Gomathi; Bhasi, A B

    2010-11-15

    The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of these hazards in chemical industries. Fault tree analysis (FTA) is an established technique in hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from the storage and filling facility of a chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and a proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation. Copyright © 2010 Elsevier B.V. All rights reserved.
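
    Fuzzy fault tree analysis of this kind replaces crisp basic-event probabilities with fuzzy numbers elicited from experts. The sketch below uses a common textbook convention (trapezoidal fuzzy numbers combined component-wise through AND/OR gates under independence), not the TDFFTA formulation of the paper, and all event names and numbers are invented.

      # Hedged sketch: trapezoidal fuzzy numbers (a, b, c, d) combined through
      # AND/OR gates using component-wise arithmetic (a common approximation).
      def prod(xs):
          p = 1.0
          for x in xs:
              p *= x
          return p

      def fuzzy_and(*nums):
          return tuple(prod(vals) for vals in zip(*nums))

      def fuzzy_or(*nums):
          return tuple(1 - prod(1 - v for v in vals) for vals in zip(*nums))

      valve_leak   = (0.01, 0.02, 0.03, 0.04)     # invented expert estimates
      hose_rupture = (0.002, 0.004, 0.006, 0.01)
      alarm_fails  = (0.05, 0.08, 0.10, 0.15)

      release    = fuzzy_or(valve_leak, hose_rupture)   # either leak path occurs
      undetected = fuzzy_and(release, alarm_fails)      # ...and the alarm fails
      print("fuzzy probability of an undetected release:", undetected)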

  5. Investigating Strain Transfer Along the Southern San Andreas Fault: A Geomorphic and Geodetic Study of Block Rotation in the Eastern Transverse Ranges, Joshua Tree National Park, CA

    NASA Astrophysics Data System (ADS)

    Guns, K. A.; Bennett, R. A.; Blisniuk, K.

    2017-12-01

    To better evaluate the distribution and transfer of strain and slip along the Southern San Andreas Fault (SSAF) zone in the northern Coachella valley in southern California, we integrate geological and geodetic observations to test whether strain is being transferred away from the SSAF system towards the Eastern California Shear Zone through microblock rotation of the Eastern Transverse Ranges (ETR). The faults of the ETR consist of five east-west trending left-lateral strike-slip faults that have measured cumulative offsets of up to 20 km and as low as 1 km. Existing kinematic and block models yield a variety of slip rate estimates, from as low as zero to as high as 7 mm/yr, suggesting a gap in our understanding of what role these faults play in the larger system. To determine whether present-day block rotation along these faults is contributing to strain transfer in the region, we are applying 10Be surface exposure dating methods to observed offset channel and alluvial fan deposits in order to estimate fault slip rates along two faults in the ETR. We present observations of offset geomorphic landforms using field mapping and LiDAR data at three sites along the Blue Cut Fault and one site along the Smoke Tree Wash Fault in Joshua Tree National Park which indicate recent Quaternary fault activity. Initial results of site mapping and clast count analyses reveal at least three stages of offset, including potential Holocene offsets, for one site along the Blue Cut Fault, while preliminary 10Be geochronology is in progress. This geologic slip rate data, combined with our new geodetic surface velocity field derived from updated campaign-based GPS measurements within Joshua Tree National Park, will allow us to construct a suite of elastic fault block models to elucidate rates of strain transfer away from the SSAF and how that strain transfer may be affecting the length of the interseismic period along the SSAF.

  6. Failure analysis of storage tank component in LNG regasification unit using fault tree analysis method (FTA)

    NASA Astrophysics Data System (ADS)

    Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo

    2017-03-01

    Storage tank components are the most critical components in an LNG regasification terminal. They carry a risk of failure and accidents that impact human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the LNG regasification unit, where failure is caused by a Boiling Liquid Expanding Vapor Explosion (BLEVE) or a jet fire in the LNG storage tank component. The failure probability is determined using Fault Tree Analysis (FTA), and the impact of the generated heat radiation is also calculated. Fault trees for BLEVE and jet fire on the storage tank component were constructed, giving a failure probability of 5.63 × 10^-19 for BLEVE and 9.57 × 10^-3 for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by modifying the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1; after this modification the failure probability is 4.22 × 10^-6.
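
    Probabilities like those quoted above are typically obtained by combining basic events through minimal cut sets. A generic sketch of that calculation, assuming independent basic events with invented names, numbers and cut sets (not the study's tree):

      # Hedged sketch: top-event probability from minimal cut sets (MCS),
      # assuming independent basic events. Numbers are invented.
      basic = {"relief_valve_fails": 1e-3,
               "flange_leak": 5e-4,
               "ignition_source": 1e-2,
               "esd_fails": 1e-2}

      mcs = [["flange_leak", "ignition_source"],                       # jet fire path
             ["relief_valve_fails", "ignition_source", "esd_fails"]]   # escalated path

      def p_cut_set(cs):                 # AND of basic events
          p = 1.0
          for e in cs:
              p *= basic[e]
          return p

      p_top = 1.0                        # OR over cut sets (sum is the rare-event approx.)
      for cs in mcs:
          p_top *= (1.0 - p_cut_set(cs))
      p_top = 1.0 - p_top
      print(f"illustrative top-event probability: {p_top:.2e}")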

  7. A novel method of fuzzy fault tree analysis combined with VB program to identify and assess the risk of coal dust explosions

    PubMed Central

    Li, Jia; Wang, Deming; Huang, Zonghou

    2017-01-01

    Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents. PMID:28793348

  8. A novel method of fuzzy fault tree analysis combined with VB program to identify and assess the risk of coal dust explosions.

    PubMed

    Wang, Hetang; Li, Jia; Wang, Deming; Huang, Zonghou

    2017-01-01

    Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents.

  9. Understanding tree growth in response to moisture variability: Linking 32 years of satellite based soil moisture observations with tree rings

    NASA Astrophysics Data System (ADS)

    Albrecht, Franziska; Dorigo, Wouter; Gruber, Alexander; Wagner, Wolfgang; Kainz, Wolfgang

    2014-05-01

    Climate change induced drought variability impacts global forest ecosystems and forest carbon cycle dynamics. Physiological drought stress might even become an issue in regions generally not considered water-limited. The water balance at the soil surface is essential for forest growth. Soil moisture is a key driver linking precipitation and tree development. Tree ring based analyses are a potential approach to study the driving role of hydrological parameters for tree growth. However, at present two major research gaps are apparent: (i) soil moisture records are rarely considered and (ii) only a few studies link tree ring chronologies and satellite observations. Here we used tree ring chronologies obtained from the International Tree ring Data Bank (ITRDB) and remotely sensed soil moisture observations (ECV_SM) to analyze the moisture-tree growth relationship. The ECV_SM dataset, which is being distributed through ESA's Climate Change Initiative for soil moisture, covers the period 1979 to 2010 at a spatial resolution of 0.25°. First analyses were performed for Mongolia, a country characterized by a continental arid climate. We extracted 13 tree ring chronologies suitable for our analysis from the ITRDB. Using monthly satellite based soil moisture observations, we confirmed previous studies on the seasonality of soil moisture in Mongolia. Further, we investigated the relationship between tree growth (as reflected by the tree ring width index) and remotely sensed soil moisture records by applying correlation analysis. In terms of the correlation coefficient, a strong response of tree growth to soil moisture conditions from April to August of the current year was observed, confirming a strong linkage between tree growth and soil water storage. The highest correlation was found for current April (R=0.44), indicating that sufficient water supply is vital for trees at the beginning of the growing season. To verify these results, we related the chronologies to reanalysis precipitation and
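
    The correlation analysis described (ring-width index against monthly soil moisture) can be reproduced in outline as below; the arrays are synthetic stand-ins, not ITRDB chronologies or ECV_SM retrievals.

      # Hedged sketch: correlate a tree-ring width index with April soil moisture.
      # Synthetic data; real inputs would come from ITRDB and ECV_SM.
      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(1)
      years = np.arange(1979, 2011)
      april_sm = rng.uniform(0.05, 0.30, years.size)                     # volumetric soil moisture
      ring_width_index = 0.8 + 2.0 * april_sm + rng.normal(0, 0.1, years.size)

      r, p = pearsonr(april_sm, ring_width_index)
      print(f"Pearson r = {r:.2f} (p = {p:.3f}) for April soil moisture")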

  10. [Impact of water pollution risk in water transfer project based on fault tree analysis].

    PubMed

    Liu, Jian-Chang; Zhang, Wei; Wang, Li-Min; Li, Dai-Qing; Fan, Xiu-Ying; Deng, Hong-Bing

    2009-09-15

    Methods to assess water pollution risk for medium water transfer projects are gradually being explored. The event-nature-proportion method was developed to evaluate the probability of a single event. Fault tree analysis, built on the single-event calculations, was employed to evaluate the overall water pollution risk for the channel water body. The result indicates that the risk posed to the channel water body by pollutants from towns and villages along the line of the water transfer project is at a high level, with a probability of 0.373, which would increase pollution in the channel water body by 64.53 mg/L COD, 4.57 mg/L NH4+-N and 0.066 mg/L volatile hydroxybenzene, respectively. The measurement of fault probability on the basis of the proportion method proves useful in assessing water pollution risk under large uncertainty.

  11. Methodology for Designing Fault-Protection Software

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin

    2006-01-01

    A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery; and has been successfully implemented in the Deep Impact Spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notion of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, Monitor generates a RawOpinion, which graduates into Opinion, categorized into no-opinion, acceptable, or unacceptable opinion. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment and then mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized step-by-step fashion, relegating more system-level response to later tier(s). Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic, logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a "top-down" fault-tree analysis approach and a "bottom-up" functional failure-modes-and-effects analysis approach. Via this process, the mitigation and recovery strategies for each Fault Containment Region scope (in width versus depth) the FP architecture.
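
    The record describes the methodology only at the level of notions (Monitor, Opinion, Symptom, tiered Response). As a very reduced illustration of how such notions might be wired together, the sketch below escalates a response tier each time a symptom recurs; the class names, thresholds and tiers are assumptions, not the Deep Impact implementation.

      # Hedged sketch of a monitor -> opinion -> tiered-response chain.
      # Names, thresholds and response tiers are invented for illustration.
      from dataclasses import dataclass, field

      @dataclass
      class Monitor:
          name: str
          limit: float
          def opinion(self, value: float) -> str:
              return "unacceptable" if value > self.limit else "acceptable"

      @dataclass
      class FaultProtection:
          responses: dict = field(default_factory=dict)   # symptom name -> ordered response tiers
          retries: dict = field(default_factory=dict)     # how often a symptom has already fired
          def handle(self, monitor: Monitor, value: float):
              if monitor.opinion(value) != "unacceptable":
                  return
              tier = self.retries.get(monitor.name, 0)    # escalate one tier per recurrence
              actions = self.responses.get(monitor.name, ["enter safe mode"])
              print(f"{monitor.name}: tier {tier + 1} -> {actions[min(tier, len(actions) - 1)]}")
              self.retries[monitor.name] = tier + 1

      fp = FaultProtection({"wheel_temp": ["power-cycle wheel", "switch to backup wheel",
                                           "enter safe mode"]})
      mon = Monitor("wheel_temp", limit=70.0)
      for reading in (82.5, 85.0, 90.1):                  # repeated violations escalate the response
          fp.handle(mon, reading)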

  12. Enterprise architecture availability analysis using fault trees and stakeholder interviews

    NASA Astrophysics Data System (ADS)

    Närman, Per; Franke, Ulrik; König, Johan; Buschle, Markus; Ekstedt, Mathias

    2014-01-01

    The availability of enterprise information systems is a key concern for many organisations. This article describes a method for availability analysis based on Fault Tree Analysis and constructs from the ArchiMate enterprise architecture (EA) language. To test the quality of the method, several case studies within the banking and electrical utility industries were performed. Input data were collected through stakeholder interviews. The results from the case studies were compared with availability data from logs to determine the accuracy of the method's predictions. In the five cases where accurate log data were available, the yearly downtime estimates were within eight hours of the actual downtimes. The cost of performing the analysis was low; no case study required more than 20 man-hours of work, making the method ideal for practitioners with an interest in obtaining rapid availability estimates of their enterprise information systems.
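
    Yearly downtime figures of the kind reported here follow from combining component availabilities through the fault tree. A generic sketch of that conversion for serial and redundant services (invented availabilities, not drawn from the ArchiMate models of the study):

      # Hedged sketch: yearly downtime from component availabilities.
      # Values are invented; a real model would follow the EA-derived fault tree.
      HOURS_PER_YEAR = 8760.0

      def series(avails):      # service fails if any component fails
          a = 1.0
          for x in avails:
              a *= x
          return a

      def parallel(avails):    # service fails only if all redundant components fail
          u = 1.0
          for x in avails:
              u *= (1.0 - x)
          return 1.0 - u

      app_chain = series([0.999, parallel([0.99, 0.99]), 0.9995])  # app, clustered DB, network
      print(f"availability: {app_chain:.5f}, "
            f"downtime: {(1 - app_chain) * HOURS_PER_YEAR:.1f} h/year")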

  13. Survey of critical failure events in on-chip interconnect by fault tree analysis

    NASA Astrophysics Data System (ADS)

    Yokogawa, Shinji; Kunii, Kyousuke

    2018-07-01

    In this paper, a framework based on reliability physics is proposed for applying fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the failure possibilities of basic events, critical issues of on-chip interconnect reliability are evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting the on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The moisture impact is evaluated as an external event.

  14. Viewpoint on ISA TR84.0.02--simplified methods and fault tree analysis.

    PubMed

    Summers, A E

    2000-01-01

    ANSI/ISA-S84.01-1996 and IEC 61508 require the establishment of a safety integrity level for any safety instrumented system or safety related system used to mitigate risk. Each stage of design, operation, maintenance, and testing is judged against this safety integrity level. Quantitative techniques can be used to verify whether the safety integrity level is met. ISA-dTR84.0.02 is a technical report under development by ISA, which discusses how to apply quantitative analysis techniques to safety instrumented systems. This paper discusses two of those techniques: (1) Simplified equations and (2) Fault tree analysis.
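
    The "simplified equations" referred to are closed-form average-PFD approximations for common voting architectures. The sketch below uses the widely quoted 1oo1 and 1oo2 forms (textbook approximations stated from memory, not quoted from the technical report, and ignoring common-cause and diagnostic terms) and maps the result onto SIL probability bands.

      # Hedged sketch: simplified average-PFD equations for 1oo1 and 1oo2 voting.
      def pfd_1oo1(lam_du, ti_hours):
          return lam_du * ti_hours / 2.0          # PFDavg ~ lambda_DU * TI / 2

      def pfd_1oo2(lam_du, ti_hours):
          return (lam_du * ti_hours) ** 2 / 3.0   # PFDavg ~ (lambda_DU * TI)^2 / 3

      def sil_band(pfd):
          for sil, lo in ((4, 1e-5), (3, 1e-4), (2, 1e-3), (1, 1e-2)):
              if lo <= pfd < lo * 10:
                  return f"SIL {sil}"
          return "outside SIL 1-4 bands"

      lam_du, ti = 2e-6, 8760.0    # invented dangerous-undetected rate, one-year proof-test interval
      for name, pfd in (("1oo1", pfd_1oo1(lam_du, ti)), ("1oo2", pfd_1oo2(lam_du, ti))):
          print(f"{name}: PFDavg = {pfd:.2e} -> {sil_band(pfd)}")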

  15. The engine fuel system fault analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei

    2017-05-01

    To improve the reliability of the engine fuel system, the typical fault factors of the engine fuel system were analyzed from the point of view of structure and function. The fault characteristics were obtained by building the fuel system fault tree. Using the failure mode and effects analysis (FMEA) method, several factors of the key component, the fuel regulator, were obtained, including the fault modes, the fault causes, and the fault influences. All of this lays the foundation for the subsequent development of a fault diagnosis system.

  16. Locating hardware faults in a parallel computer

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
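
    The claim partitions the tree network into non-overlapping test levels (groups of adjacent tiers) and test cells (subtrees rooted within a level). A toy sketch of that partitioning on a complete binary tree; the heap-style node numbering and the two-tier grouping are assumptions chosen for illustration, not the patented procedure.

      # Hedged sketch: split a complete binary tree into non-overlapping test
      # levels of two adjacent tiers each, and list the test-cell roots per level.
      DEPTH = 5                      # tiers 0..5 of a complete binary tree
      TIERS_PER_LEVEL = 2

      def tier_nodes(t):             # nodes of tier t, numbered heap-style from 1
          return list(range(2 ** t, 2 ** (t + 1)))

      levels = [list(range(t, min(t + TIERS_PER_LEVEL, DEPTH + 1)))
                for t in range(0, DEPTH + 1, TIERS_PER_LEVEL)]

      for i, tiers in enumerate(levels):
          roots = tier_nodes(tiers[0])        # each root spans one test cell within the level
          print(f"test level {i}: tiers {tiers}, {len(roots)} test cells rooted at {roots[:4]}")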

  17. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Breckenridge, Jonathan T.

    2013-01-01

    This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides a means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT and its ability to represent both nominal and off-nominal goals enable the GFT to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and definition of system failure scenarios.

  18. Uncertainty analysis in fault tree models with dependent basic events.

    PubMed

    Pedroni, Nicola; Zio, Enrico

    2013-06-01

    In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects on the TE probability of objective and epistemic dependences. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on an FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present). © 2012 Society for Risk Analysis.
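
    For two basic events whose marginal probabilities are known but whose dependence is not, the Fréchet bounds referred to above constrain the conjunction and disjunction as in the small sketch below; the probabilities used are invented.

      # Hedged sketch: Frechet bounds on AND/OR of two events with unknown dependence.
      def frechet_and(pa, pb):
          return max(0.0, pa + pb - 1.0), min(pa, pb)

      def frechet_or(pa, pb):
          return max(pa, pb), min(1.0, pa + pb)

      pa, pb = 0.02, 0.05                            # invented basic-event probabilities
      print("P(A and B) lies in", frechet_and(pa, pb))   # (0.0, 0.02)
      print("P(A or  B) lies in", frechet_or(pa, pb))    # (0.05, 0.07)
      print("independence would give P(A and B) =", pa * pb)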

  19. Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.

    PubMed

    Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof

    2009-04-01

    Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.

  20. Linking Incoming Plate Faulting and Intermediate Depth Seismicity

    NASA Astrophysics Data System (ADS)

    Kwong, K. B.; van Zelst, I.; Tong, X.; Eimer, M. O.; Naif, S.; Hu, Y.; Zhan, Z.; Boneh, Y.; Schottenfels, E.; Miller, M. S.; Moresi, L. N.; Warren, J. M.; Wiens, D. A.

    2017-12-01

    Intermediate depth earthquakes, occurring between 70-350 km depth, are often attributed to dehydration reactions within the subducting plate. It is proposed that incoming plate normal faulting associated with plate bending at the trench may control the amount of hydration in the plate by producing large damage zones that create pathways for the infiltration of seawater deep into the subducting mantle. However, a relationship between incoming plate seismicity, faulting, and intermediate depth seismicity has not been established. We compiled a global dataset consisting of incoming plate earthquake moment tensor (CMT) solutions, focal depths, bend fault spacing and offset measurements, along with plate age and convergence rates. In addition, a global intermediate depth seismicity dataset was compiled with parameters such as the maximum seismic moment and seismicity rate, as well as thicknesses of double seismic zones. The maximum fault offset in the bending region has a strong correlation with the intermediate depth seismicity rate, but a more modest correlation with other parameters such as convergence velocity and plate age. We estimated the expected rate of seismic moment release for the incoming plate faults using mapped fault scarps from bathymetry. We compare this with the cumulative moment from normal faulting earthquakes in the incoming plate from the global CMT catalog to determine whether outer rise fault movement has an aseismic component. Preliminary results from Tonga and the Middle America Trench suggest there may be an aseismic component to incoming plate bending faulting. The cumulative seismic moment calculated for the outer rise faults will also be compared to the cumulative moment from intermediate depth earthquakes to assess whether these parameters are related. To support the observational part of this study, we developed a geodynamic numerical modeling study to systematically explore the influence of parameters such as plate age and convergence

  1. Towards generating ECSS-compliant fault tree analysis results via ConcertoFLA

    NASA Astrophysics Data System (ADS)

    Gallina, B.; Haider, Z.; Carlsson, A.

    2018-05-01

    Attitude Control Systems (ACSs) maintain the orientation of the satellite in three-dimensional space. ACSs need to be engineered in compliance with ECSS standards and need to ensure a certain degree of dependability. Thus, dependability analysis is conducted at various levels and by using ECSS-compliant techniques. Fault Tree Analysis (FTA) is one of these techniques. FTA is being automated within various Model Driven Engineering (MDE)-based methodologies. The tool-supported CHESS-methodology is one of them. This methodology incorporates ConcertoFLA, a dependability analysis technique enabling failure behavior analysis and thus FTA-results generation. ConcertoFLA, however, similarly to other techniques, still belongs to the academic research niche. To promote this technique within the space industry, we apply it to an ACS and discuss its multi-faceted potential in the context of ECSS-compliant engineering.

  2. Mesoscale models for stacking faults, deformation twins and martensitic transformations: Linking atomistics to continuum

    NASA Astrophysics Data System (ADS)

    Kibey, Sandeep A.

    We present a hierarchical approach that spans multiple length scales to describe defect formation---in particular, formation of stacking faults (SFs) and deformation twins---in fcc crystals. We link the energy pathways (calculated here via ab initio density functional theory, DFT) associated with formation of stacking faults and twins to corresponding heterogeneous defect nucleation models (described through mesoscale dislocation mechanics). Through the generalized Peierls-Nabarro model, we first correlate the width of intrinsic SFs in fcc alloy systems to their nucleation pathways called generalized stacking fault energies (GSFE). We then establish a qualitative dependence of twinning tendency in fcc metals and alloys---specifically, in pure Cu and dilute Cu-xAl (x = 5.0 and 8.3 at.%)---on their twin-energy pathways called the generalized planar fault energies (GPFE). We also link the twinning behavior of Cu-Al alloys to their electronic structure by determining the effect of solute Al on the valence charge density redistribution at the SF through ab initio DFT. Further, while several efforts have been undertaken to incorporate twinning for predicting stress-strain response of fcc materials, a fundamental law for critical twinning stress has not yet emerged. We resolve this long-standing issue by linking quantitatively the twin-energy pathways (GPFE) obtained via ab initio DFT to heterogeneous, dislocation-based twin nucleation models. We establish an analytical expression that quantitatively predicts the critical twinning stress in fcc metals in agreement with experiments without requiring any empiricism at any length scale. Our theory connects twinning stress to twin-energy pathways and predicts a monotonic relation between stress and unstable twin stacking fault energy revealing the physics of twinning. We further demonstrate that the theory holds for fcc alloys as well. Our theory inherently accounts for directional nature of twinning which available

  3. Heat Transfer Processes Linking Fire Behavior and Tree Mortality

    NASA Astrophysics Data System (ADS)

    Michaletz, S. T.; Johnson, E. A.

    2004-12-01

    Traditional methods for predicting post-fire tree mortality employ statistical models which neglect the processes linking fire behavior to physiological mortality mechanisms. Here we present a physical process approach which predicts tree mortality by linking fireline intensity with lateral (vascular cambium) and apical (vegetative bud) meristem necrosis. We use a line-fire plume model with independently validated conduction and lumped capacitance heat transfer analyses to predict lethal meristem temperatures in tree stems, branches, and buds. These models show that meristem necrosis in large diameter (Bi ≥ 0.3) stems/branches is governed by meristem height, bark thickness, and bark water content, while meristem necrosis in small diameter (Bi < 0.3) branches/buds is governed by meristem height, branch/bud size, branch/bud water content, and foliage architecture. To investigate effects of interspecific variation in these properties, we compare model results for Picea glauca (Moench) Voss and Pinus contorta Loudon var. latifolia Engelm. at fireline intensities from 50 to 3000 kW m^-1. Parameters are obtained from allometric models which relate stem/branch diameter to bark thickness and height, as well as bark and bud water content data collected in the southern Canadian Rocky Mountains. Variation in foliage architecture is quantified using forced convection heat transfer coefficients measured in a laminar flow wind tunnel at Re from 100 to 2000, typical for branches/buds in a line-fire plume. Results indicate that in unfoliated stems/branches, P. glauca meristems are more protected due to thicker bark, whereas in foliated branches/buds, P. contorta meristems are more protected due to larger bud size and foliage architecture.
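
    The Bi ≥ 0.3 versus Bi < 0.3 split corresponds to choosing between a conduction solution and a lumped-capacitance one. The sketch below illustrates only the lumped-capacitance branch for a small bud-like sphere in a hot plume; every property value is an invented placeholder, not measured Picea or Pinus data.

      # Hedged sketch: Biot number check and lumped-capacitance heating of a small bud.
      # Property values are invented placeholders.
      import math

      h, k = 60.0, 0.3            # convection coefficient (W/m^2 K) and tissue conductivity (W/m K)
      d = 0.006                   # bud diameter (m)
      rho, c = 900.0, 2500.0      # density (kg/m^3) and specific heat (J/kg K)
      L_c = (d / 2.0) / 3.0       # characteristic length V/A of a sphere = r/3

      Bi = h * L_c / k
      print(f"Bi = {Bi:.2f} ({'lumped capacitance ok' if Bi < 0.3 else 'use conduction model'})")

      T0, T_plume, t = 20.0, 400.0, 30.0      # deg C and exposure time (s), all invented
      tau = rho * c * L_c / h                 # lumped-capacitance time constant
      T = T_plume + (T0 - T_plume) * math.exp(-t / tau)
      print(f"bud temperature after {t:.0f} s: {T:.0f} C (necrosis is often taken near 60 C)")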

  4. An approach for automated fault diagnosis based on a fuzzy decision tree and boundary analysis of a reconstructed phase space.

    PubMed

    Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan

    2014-03-01

    Although reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in fault diagnosis of an induction motor when the appropriate pre-processing is not performed. Therefore, a new feature extraction method based on boundary analysis in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bars and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset. © 2013 ISA. Published by ISA. All rights reserved.
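
    The pipeline starts from a reconstructed phase space of one phase-current signal. A minimal sketch of the standard time-delay embedding used for such reconstructions; the delay, embedding dimension and synthetic signal are arbitrary choices for illustration, not those of the paper.

      # Hedged sketch: 2-D time-delay embedding of a current-like signal.
      import numpy as np

      fs, f0 = 5000, 50.0                      # sample rate (Hz) and supply frequency (Hz)
      t = np.arange(0, 0.2, 1.0 / fs)
      current = np.sin(2 * np.pi * f0 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

      tau = int(fs / f0 / 4)                   # quarter-period delay (arbitrary choice)
      phase_space = np.column_stack([current[:-tau], current[tau:]])
      print("embedded points:", phase_space.shape)   # each row is (x(t), x(t + tau))

    A healthy machine traces a near-circular orbit in this plane; distortions of the orbit boundary are the kind of feature the boundary-analysis step would pick up.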

  5. Fault tree analysis of the causes of waterborne outbreaks.

    PubMed

    Risebro, Helen L; Doria, Miguel F; Andersson, Yvonne; Medema, Gertjan; Osborn, Keith; Schlosser, Olivier; Hunter, Paul R

    2007-01-01

    Prevention and containment of outbreaks requires examination of the contribution and interrelation of outbreak causative events. An outbreak fault tree was developed and applied to 61 enteric outbreaks related to public drinking water supplies in the EU. A mean of 3.25 causative events per outbreak were identified; each event was assigned a score based on percentage contribution per outbreak. Source and treatment system causative events often occurred concurrently (in 34 outbreaks). Distribution system causative events occurred less frequently (19 outbreaks) but were often solitary events contributing heavily towards the outbreak (a mean % score of 87.42). Livestock and rainfall in the catchment with no/inadequate filtration of water sources contributed concurrently to 11 of 31 Cryptosporidium outbreaks. Of the 23 protozoan outbreaks experiencing at least one treatment causative event, 90% of these events were filtration deficiencies; by contrast, for bacterial, viral, gastroenteritis and mixed pathogen outbreaks, 75% of treatment events were disinfection deficiencies. Roughly equal numbers of groundwater and surface water outbreaks experienced at least one treatment causative event (18 and 17 outbreaks, respectively). Retrospective analysis of multiple outbreaks of enteric disease can be used to inform outbreak investigations, facilitate corrective measures, and further develop multi-barrier approaches.

  6. Molecular Infectious Disease Epidemiology: Survival Analysis and Algorithms Linking Phylogenies to Transmission Trees

    PubMed Central

    Kenah, Eben; Britton, Tom; Halloran, M. Elizabeth; Longini, Ira M.

    2016-01-01

    Recent work has attempted to use whole-genome sequence data from pathogens to reconstruct the transmission trees linking infectors and infectees in outbreaks. However, transmission trees from one outbreak do not generalize to future outbreaks. Reconstruction of transmission trees is most useful to public health if it leads to generalizable scientific insights about disease transmission. In a survival analysis framework, estimation of transmission parameters is based on sums or averages over the possible transmission trees. A phylogeny can increase the precision of these estimates by providing partial information about who infected whom. The leaves of the phylogeny represent sampled pathogens, which have known hosts. The interior nodes represent common ancestors of sampled pathogens, which have unknown hosts. Starting from assumptions about disease biology and epidemiologic study design, we prove that there is a one-to-one correspondence between the possible assignments of interior node hosts and the transmission trees simultaneously consistent with the phylogeny and the epidemiologic data on person, place, and time. We develop algorithms to enumerate these transmission trees and show these can be used to calculate likelihoods that incorporate both epidemiologic data and a phylogeny. A simulation study confirms that this leads to more efficient estimates of hazard ratios for infectiousness and baseline hazards of infectious contact, and we use these methods to analyze data from a foot-and-mouth disease virus outbreak in the United Kingdom in 2001. These results demonstrate the importance of data on individuals who escape infection, which is often overlooked. The combination of survival analysis and algorithms linking phylogenies to transmission trees is a rigorous but flexible statistical foundation for molecular infectious disease epidemiology. PMID:27070316

  7. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  8. Fault tree analysis for data-loss in long-term monitoring networks.

    PubMed

    Dirksen, J; ten Veldhuis, J A E; Schilperoort, R P S

    2009-01-01

    Prevention of data-loss is an important aspect in the design as well as the operational phase of monitoring networks since data-loss can seriously limit intended information yield. In the literature limited attention has been paid to the origin of unreliable or doubtful data from monitoring networks. Better understanding of causes of data-loss points out effective solutions to increase data yield. This paper introduces FTA as a diagnostic tool to systematically deduce causes of data-loss in long-term monitoring networks in urban drainage systems. In order to illustrate the effectiveness of FTA, a fault tree is developed for a monitoring network and FTA is applied to analyze the data yield of a UV/VIS submersible spectrophotometer. Although some of the causes of data-loss cannot be recovered because the historical database of metadata has been updated infrequently, the example points out that FTA still is a powerful tool to analyze the causes of data-loss and provides useful information on effective data-loss prevention.

  9. Tectonic links between the Olympic-Wallowa lineament and the Hite fault, Cascadia backarc of Oregon and Washington, as interpreted from high-resolution aeromagnetic anomalies

    NASA Astrophysics Data System (ADS)

    Blakely, R. J.; Sherrod, B. L.; Glen, J. M. G.; Ritzinger, B. T.; Staisch, L.

    2017-12-01

    High-resolution aeromagnetic surveys of Washington and Oregon, acquired over the past two decades by the U.S. Geological Survey, serve as proxies for geologic mapping in a terrain modified by glacial and catastrophic flood processes and covered by vegetation and urban development. In concert with geologic mapping and ancillary geophysical measurements, these data show possible kinematic links between forearc and backarc regions and have improved understanding of Cascadia crustal framework. Here we investigate a possible link between the NW-striking Wallula fault zone (WFZ), a segment of the Olympic-Wallowa lineament (OWL), and the N-striking Hite fault in Cascadia's backarc. Strike-slip displacement on the WFZ is indicated by offset of NW-striking Ice Harbor dikes (8.5 Ma), as displayed in magnetic anomalies. An exposed dike immediately south of the Walla Walla River has been used by others to argue against strike-slip displacement; i.e., the exposure lies south of one strand of the WFZ but is not displaced with respect to its linear magnetic anomaly north of the fault. However, high-resolution magnetic anomalies and a recently discovered, 25-km-long LiDAR scarp show that the dike exposure actually lies north of the fault and thus is irrelevant in determining strike-slip displacement on the fault. Our most recent magnetic survey illuminates with unprecedented detail strands of the N-striking Hite fault system and structural links to the WFZ. The survey lies over an area underlain by strongly magnetic Miocene Columbia River flood basalts (CRB) and older intrusive and volcanic rocks. NW-striking magnetic anomalies associated with the WFZ do not extend eastward beyond the Hite fault, suggesting that this is the region at which strain is transferred from the OWL. Magnetic anomalies originating from CRB across the Hite fault serve as piercing points and indicate 1.5 to 2 km of sinistral slip since middle Miocene. Vertical offsets in depth to magnetic basement across the

  10. Interactions between Polygonal Normal Faults and Larger Normal Faults, Offshore Nova Scotia, Canada

    NASA Astrophysics Data System (ADS)

    Pham, T. Q. H.; Withjack, M. O.; Hanafi, B. R.

    2017-12-01

    Polygonal faults, small normal faults with polygonal arrangements that form in fine-grained sedimentary rocks, can influence ground-water flow and hydrocarbon migration. Using well and 3D seismic-reflection data, we have examined the interactions between polygonal faults and larger normal faults on the passive margin of offshore Nova Scotia, Canada. The larger normal faults strike approximately E-W to NE-SW. Growth strata indicate that the larger normal faults were active in the Late Cretaceous (i.e., during the deposition of the Wyandot Formation) and during the Cenozoic. The polygonal faults were also active during the Cenozoic because they affect the top of the Wyandot Formation, a fine-grained carbonate sedimentary rock, and the overlying Cenozoic strata. Thus, the larger normal faults and the polygonal faults were both active during the Cenozoic. The polygonal faults far from the larger normal faults have a wide range of orientations. Near the larger normal faults, however, most polygonal faults have preferred orientations, either striking parallel or perpendicular to the larger normal faults. Some polygonal faults nucleated at the tip of a larger normal fault, propagated outward, and linked with a second larger normal fault. The strike of these polygonal faults changed as they propagated outward, ranging from parallel to the strike of the original larger normal fault to orthogonal to the strike of the second larger normal fault. These polygonal faults hard-linked the larger normal faults at and above the level of the Wyandot Formation but not below it. We argue that the larger normal faults created stress-enhancement and stress-reorientation zones for the polygonal faults. Numerous small, polygonal faults formed in the stress-enhancement zones near the tips of larger normal faults. Stress-reorientation zones surrounded the larger normal faults far from their tips. Fewer polygonal faults are present in these zones, and, more importantly, most polygonal faults

  11. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    PubMed

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of the wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tiers. The literature shows that FTA has seldom been used in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
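
    The combination of uncertain basic-event probabilities and Monte Carlo simulation can be sketched generically as below; the tree structure, event names and beta distributions are invented for illustration and do not reproduce the study's model.

      # Hedged sketch: Monte Carlo propagation of uncertain basic-event probabilities
      # to a top event ("effluent BOD violation"). Structure and numbers are invented.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 20_000
      operator_error = rng.beta(2, 50, n)        # uncertain probabilities, one draw per trial
      blower_failure = rng.beta(1, 200, n)
      clarifier_overload = rng.beta(1, 100, n)

      # top = operator_error OR (blower_failure AND clarifier_overload)
      p_top = 1 - (1 - operator_error) * (1 - blower_failure * clarifier_overload)
      print(f"mean P(top) = {p_top.mean():.3f}, "
            f"90% interval = ({np.percentile(p_top, 5):.3f}, {np.percentile(p_top, 95):.3f})")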

  12. A fault tree model to assess probability of contaminant discharge from shipwrecks.

    PubMed

    Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I

    2014-11-15

    Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Fault tree analysis for exposure to refrigerants used for automotive air conditioning in the United States.

    PubMed

    Jetter, J J; Forte, R; Rubenstein, R

    2001-02-01

    A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servicing. The number of refrigerant exposures of service technicians was estimated to be 135,000 per year. Exposures of vehicle occupants can occur when refrigerant enters passenger compartments due to sudden leaks in air-conditioning systems, leaks following servicing, or leaks caused by collisions. The total number of exposures of vehicle occupants was estimated to be 3,600 per year. The largest number of exposures of vehicle occupants was estimated for leaks caused by collisions, and the second largest number of exposures was estimated for leaks following servicing. Estimates used in the fault tree analysis were based on a survey of automotive air-conditioning service shops, the best available data from the literature, and the engineering judgement of the authors and expert reviewers from the Society of Automotive Engineers Interior Climate Control Standards Committee. Exposure concentrations and durations were estimated and compared with toxicity data for refrigerants currently used in automotive air conditioners. Uncertainty was high for the estimated numbers of exposures, exposure concentrations, and exposure durations. Uncertainty could be reduced in the future by conducting more extensive surveys, measurements of refrigerant concentrations, and exposure monitoring. Nevertheless, the analysis indicated that the risk of exposure of service technicians and vehicle occupants is significant, and it is recommended that no refrigerant that is substantially more toxic than currently available substitutes be accepted for use in vehicle air-conditioning systems, absent a means of mitigating exposure.

  14. Computer hardware fault administration

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  15. Risk assessment for enterprise resource planning (ERP) system implementations: a fault tree analysis approach

    NASA Astrophysics Data System (ADS)

    Zeng, Yajun; Skibniewski, Miroslaw J.

    2013-08-01

    Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach intends to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.

  16. High-resolution gravity and seismic-refraction surveys of the Smoke Tree Wash area, Joshua Tree National Park, California

    USGS Publications Warehouse

    Langenheim, Victoria E.; Rymer, Michael J.; Catchings, Rufus D.; Goldman, Mark R.; Watt, Janet T.; Powell, Robert E.; Matti, Jonathan C.

    2016-03-02

    We describe high-resolution gravity and seismic refraction surveys acquired to determine the thickness of valley-fill deposits and to delineate geologic structures that might influence groundwater flow beneath the Smoke Tree Wash area in Joshua Tree National Park. These surveys identified a sedimentary basin that is fault-controlled. A profile across the Smoke Tree Wash fault zone reveals low gravity values and seismic velocities that coincide with a mapped strand of the Smoke Tree Wash fault. Modeling of the gravity data reveals a basin about 2–2.5 km long and 1 km wide that is roughly centered on this mapped strand, and bounded by inferred faults. According to the gravity model the deepest part of the basin is about 270 m, but this area coincides with low velocities that are not characteristic of typical basement complex rocks. Most likely, the density contrast assumed in the inversion is too high or the uncharacteristically low velocities represent highly fractured or weathered basement rocks, or both. A longer seismic profile extending onto basement outcrops would help differentiate which scenario is more accurate. The seismic velocities also determine the depth to water table along the profile to be about 40–60 m, consistent with water levels measured in water wells near the northern end of the profile.

  17. Failure mode effect analysis and fault tree analysis as a combined methodology in risk management

    NASA Astrophysics Data System (ADS)

    Wessiani, N. A.; Yoshio, F.

    2018-04-01

    Many studies have reported the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most studies choose only one of the two methods for their risk management methodology, whereas combining them reduces the drawbacks each method has when implemented separately. This paper combines FMEA and FTA into a single risk assessment methodology. A case study in a metal company illustrates how the methodology can be implemented; there, the combined methodology assesses the internal risks that occur in the production process. Those internal risks should then be mitigated according to their level of risk.

  18. Methods to enhance seismic faults and construct fault surfaces

    NASA Astrophysics Data System (ADS)

    Wu, Xinming; Zhu, Zhihui

    2017-10-01

    Faults are often apparent as reflector discontinuities in a seismic volume. Numerous types of fault attributes have been proposed to highlight fault positions in a seismic volume by measuring reflection discontinuities. These attribute volumes, however, can be sensitive to noise and to stratigraphic features that are also apparent as discontinuities in a seismic volume. We propose a matched filtering method to enhance a precomputed fault attribute volume and simultaneously estimate fault strikes and dips. In this method, a set of efficient 2D exponential filters, oriented by all possible combinations of strike and dip angles, is applied to the input attribute volume to find the maximum filtering response at each sample in the volume. These maximum filtering responses are recorded to obtain the enhanced fault attribute volume, while the corresponding strike and dip angles that yield the maximum responses are recorded to obtain volumes of fault strikes and dips. In doing this, we assume that a fault surface is locally planar and that a 2D smoothing filter yields a maximum response when the smoothing plane coincides with a local fault plane. With the enhanced fault attribute volume and the estimated fault strike and dip volumes, we then compute oriented fault samples on the ridges of the enhanced fault attribute volume, each sample being oriented by the estimated fault strike and dip. Fault surfaces can be constructed by directly linking the oriented fault samples with consistent fault strikes and dips. For complicated cases with missing or noisy fault samples, we further propose a perceptual grouping method to infer fault surfaces that reasonably fit the positions and orientations of the fault samples. We apply these methods to 3D synthetic and real examples and successfully extract multiple intersecting fault surfaces and complete fault surfaces without holes.
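
    The sketch below illustrates the core of this idea in two dimensions only: smooth a fault-attribute image with a set of oriented line filters and keep, at each sample, the maximum response and the orientation that produced it. The simple line kernels, the angle range, and the toy image are assumptions standing in for the 3D exponential filters scanned over strike and dip in the paper.

      import numpy as np
      from scipy.ndimage import convolve

      def line_kernel(length, angle_deg):
          """Unit-sum smoothing kernel concentrated along a line at the given angle."""
          k = np.zeros((length, length))
          c = (length - 1) / 2.0
          t = np.linspace(-c, c, 4 * length)
          rad = np.deg2rad(angle_deg)
          rows = np.clip(np.round(c + t * np.sin(rad)).astype(int), 0, length - 1)
          cols = np.clip(np.round(c + t * np.cos(rad)).astype(int), 0, length - 1)
          k[rows, cols] = 1.0
          return k / k.sum()

      def enhance_attribute(attr, angles=range(0, 180, 10), length=9):
          """Maximum response over oriented smoothing filters, plus the best angle."""
          best = np.full(attr.shape, -np.inf)
          best_angle = np.zeros(attr.shape)
          for a in angles:
              resp = convolve(attr, line_kernel(length, a), mode="nearest")
              mask = resp > best
              best[mask] = resp[mask]
              best_angle[mask] = a
          return best, best_angle

      # Toy attribute image: weak noise plus one dipping linear anomaly
      rng = np.random.default_rng(0)
      img = rng.random((64, 64)) * 0.2
      for i in range(64):
          img[i, min(63, i // 2 + 10)] = 1.0
      enhanced, orientation = enhance_attribute(img)
      print(float(enhanced.max()))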

  19. Impact of fault models on probabilistic seismic hazard assessment: the example of the West Corinth rift.

    NASA Astrophysics Data System (ADS)

    Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène

    2016-04-01

    Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of the fault data. This is especially the case in the low to moderate seismicity regions of Europe, where slow slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in the fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow us to identify the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and also provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool is illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. At the nodes of the logic tree, the different options that could be considered at each step of the fault-related hazard computation are explored. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry/slip rate of faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e. minimum distance between faults) and a second one that relies on physically

  20. A possible link between life and death of a xeric tree in desert.

    PubMed

    Xu, Gui-Qing; McDowell, Nate G; Li, Yan

    2016-05-01

    Understanding the interactions between drought and tree ontogeny or size remains an essential research priority because size-specific mortality patterns have large impacts on ecosystem structure and function, determine forest carbon storage capacity, and are sensitive to climatic change. Here we investigate a xerophytic tree species (Haloxylon ammodendron (C.A. Mey.)) for which changes in biomass allocation with tree size may play an important role in size-specific mortality patterns. Size-related changes in biomass allocation, root distribution, plant water status, gas exchange, hydraulic architecture and non-structural carbohydrate reserves of this xerophytic tree species were investigated to assess their potential role in the observed U-shaped mortality pattern. We found that excessively negative water potentials (< -4.7 MPa, beyond the P50leaf of -4.1 MPa) during prolonged drought in young trees lead to hydraulic failure, while the imbalance of photoassimilate allocation between leaf and root system in larger trees, accompanied by declining C reserves (< 2% dry matter across four tissues), might have led to carbon starvation. The drought-resistance strategy of this species is preferential biomass allocation to the roots to improve water capture. In young trees, the drought-resistance strategy is not well developed, and hydraulic failure appears to be the dominant driver of mortality during drought. In old trees, excess root growth at the expense of leaf area may lead to carbon starvation during prolonged drought. Our results suggest that the drought-resistance strategy of this xeric tree is closely linked to its life and death: a well-developed drought-resistance strategy means life, while an underdeveloped or overdeveloped strategy means death. Copyright © 2016 Elsevier GmbH. All rights reserved.

  1. Fault2SHA- A European Working group to link faults and Probabilistic Seismic Hazard Assessment communities in Europe

    NASA Astrophysics Data System (ADS)

    Scotti, Oona; Peruzza, Laura

    2016-04-01

    The key questions we ask are: What is the best strategy to fill in the gap in knowledge and know-how in Europe when considering faults in seismic hazard assessments? Are field geologists providing the relevant information for seismic hazard assessment? Are seismic hazard analysts interpreting field data appropriately? Is the full range of uncertainties associated with the characterization of faults correctly understood and propagated in the computations? How can fault-modellers contribute to a better representation of the long-term behaviour of fault-networks in seismic hazard studies? Providing answers to these questions is fundamental in order to reduce the consequences of future earthquakes and improve the reliability of seismic hazard assessments. An informal working group was thus created at a meeting in Paris in November 2014, partly financed by the Institute of Radioprotection and Nuclear Safety, with the aim of motivating exchanges between field geologists, fault modellers and seismic hazard practitioners. A variety of approaches were presented at the meeting, and a clear gap emerged between some field geologists, who are not necessarily familiar with the methods and needs of probabilistic seismic hazard assessment, and practitioners, who do not necessarily propagate the "full" uncertainty associated with the characterization of faults. The group thus decided to meet again a year later in Chieti (Italy), to share concepts and ideas through a specific exercise on a test case study. Some solutions emerged, but many problems of seismic source characterization remained, both for people working in the field and for people tackling models of interacting faults. Now, in Vienna, we want to open the group and launch a call for the European community at large to contribute to the discussion. The 2016 EGU session Fault2SHA is motivated by the urgency to increase the number of round tables on this topic and to debate the peculiarities of using faults in seismic hazard

  2. Risk management of PPP project in the preparation stage based on Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Xing, Yuanzhi; Guan, Qiuling

    2017-03-01

    Risk management of PPP (Public Private Partnership) projects can improve the level of risk control between government departments and private investors, supporting more beneficial decisions, reducing investment losses and achieving mutual benefit. This paper therefore takes risks in the PPP project preparation stage as the research object and identifies and confirms four types of risk. Fault tree analysis (FTA) is used to evaluate the risk factors belonging to different parts and to quantify the degree of risk impact on the basis of the identified risks. In addition, the importance order of the risk factors is determined by calculating the unit structure importance for the PPP project preparation stage. The results show that the accuracy of government decision-making, the rationality of private investors' fund allocation and the instability of market returns are the main factors generating shared risk on the project.
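
    A small sketch of how a unit (structural) importance can be computed by enumerating component states, assuming a coherent binary structure function; the three basic events and the gate logic are hypothetical placeholders rather than the preparation-stage tree analysed in the paper.

      from itertools import product

      def structural_importance(phi, events):
          """Structural importance of each basic event for a structure function
          phi(states) -> 0/1: the fraction of other-component states in which
          the event is critical."""
          n = len(events)
          imp = {}
          for e in events:
              others = [x for x in events if x != e]
              critical = 0
              for vals in product([0, 1], repeat=n - 1):
                  state = dict(zip(others, vals))
                  critical += phi({**state, e: 1}) - phi({**state, e: 0})
              imp[e] = critical / 2 ** (n - 1)
          return imp

      # Hypothetical top event: a decision error occurs, OR both funding and
      # market risks occur together.
      events = ["decision_error", "fund_allocation", "market_instability"]
      phi = lambda s: int(s["decision_error"] or
                          (s["fund_allocation"] and s["market_instability"]))
      print(structural_importance(phi, events))
      # -> {'decision_error': 0.75, 'fund_allocation': 0.25, 'market_instability': 0.25}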

  3. Examining Relay Ramp Evolution Through Paleo-shoreline Deformation Analysis, Warner Valley Fault, Oregon

    NASA Astrophysics Data System (ADS)

    Young, C. S.; Dawers, N. H.

    2017-12-01

    Fault growth is often accomplished by linking a series of en echelon faults through relay ramps. A relay ramp is the area between two overlapping fault segments that tilts and deforms as the faults accrue displacement. The structural evolution of breached normal fault relay ramps remains poorly understood because of the difficulty in defining how slip is partitioned between the most basinward fault (known as the outboard fault), the overlapping fault (inboard fault), and any ramp-breaching linking faults. Along the Warner Valley fault in south-central Oregon, two relay ramps displaying different fault linkage geometries are lined with a series of paleo-lacustrine shorelines that record a Pleistocene paleolake regression. The inner edges of these shorelines act as paleo-horizontal datums that have been deformed by fault activity, and are used to measure relative slip variations across the relay ramp bounding faults. By measuring elevation changes of shoreline inner edges using a 10 m digital elevation model (DEM), we estimate the amount of slip partitioned between the inboard, outboard and ramp-breaching linking faults. In order to attribute shoreline deformation to fault activity we identify shoreline elevation anomalies where deformation exceeds a ± 3.34 m window, a window that encompasses our conservative estimates of natural variability in the shoreline geomorphology and the error associated with the data collection. Fault activity along the main length of the fault for each ramp-breaching style is concentrated near the intersection of the linking fault and the outboard portion of the main fault segment. However, fault activity along the outboard fault tip varies according to breaching style. At a footwall breach the entire outboard fault tip appears relatively inactive. At a mid-ramp breach the outboard fault tip remains relatively active because of the proximity of the linking fault to this fault tip.

  4. Model-based development of a fault signature matrix to improve solid oxide fuel cell systems on-site diagnosis

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario

    2015-04-01

    The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components by exploiting the physical knowledge of the SOFC system as a whole. This phase consists of the Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at the system component level. The main outcome of the FTA is an inferential isolation tool (Fault Signature Matrix - FSM), which unambiguously links the faults to the symptoms detected during system monitoring. In this work the FTA is considered as a starting point to develop an improved FSM. Making use of a model-based investigation, a fault-to-symptoms dependency study is performed. For this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated, one for the stack and four occurring at balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
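
    A minimal sketch of how a fault signature matrix can be used for isolation: each fault is a row of expected symptoms, and an observed symptom vector is matched against the rows. The fault names, symptom names, and matrix entries are invented placeholders, not the SOFC variables from the study.

      import numpy as np

      faults   = ["stack_degradation", "air_blower_fault", "reformer_fault",
                  "heat_exchanger_fouling", "valve_leakage"]
      symptoms = ["stack_voltage_low", "air_flow_low", "anode_temp_high",
                  "cathode_temp_high", "fuel_pressure_drop"]

      # Hypothetical FSM: FSM[i, j] = 1 if fault i is expected to trigger symptom j
      FSM = np.array([
          [1, 0, 0, 0, 0],
          [0, 1, 0, 1, 0],
          [1, 0, 1, 0, 1],
          [0, 0, 1, 1, 0],
          [0, 0, 0, 0, 1],
      ])

      def isolate(observed):
          """Return the faults whose signature matches the observed symptom vector."""
          observed = np.asarray(observed)
          return [f for f, row in zip(faults, FSM) if np.array_equal(row, observed)]

      print(isolate([0, 1, 0, 1, 0]))   # -> ['air_blower_fault']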

  5. Expert systems for fault diagnosis in nuclear reactor control

    NASA Astrophysics Data System (ADS)

    Jalel, N. A.; Nicholson, H.

    1990-11-01

    An expert system for accident analysis and fault diagnosis of the Loss Of Fluid Test (LOFT) reactor, a small-scale pressurized water reactor, was developed for a personal computer. The knowledge of the system is represented using a production rule approach with a backward chaining inference engine. The database of the system includes simulated dependent state variables of the LOFT reactor model. Another system is designed to assist the operator in choosing the appropriate cooling mode and to diagnose the fault in the selected cooling system. Its knowledge base is built on the response tree, which links a list of very specific accident sequences to a set of generic emergency procedures that help the operator monitor system status, differentiate between accident sequences, and select the correct procedures. Both systems are written in the TURBO PROLOG language and can be run on an IBM PC compatible with 640k RAM, a 40 Mbyte hard disk, and color graphics.

  6. Active transfer fault zone linking a segmented extensional system (Betics, southern Spain): Insight into heterogeneous extension driven by edge delamination

    NASA Astrophysics Data System (ADS)

    Martínez-Martínez, José Miguel; Booth-Rea, Guillermo; Azañón, José Miguel; Torcal, Federico

    2006-08-01

    Pliocene and Quaternary tectonic structures mainly consisting of segmented northwest-southeast normal faults, and associated seismicity in the central Betics do not agree with the transpressive tectonic nature of the Africa-Eurasia plate boundary in the Ibero-Maghrebian region. Active extensional deformation here is heterogeneous, individual segmented normal faults being linked by relay ramps and transfer faults, including oblique-slip and both dextral and sinistral strike-slip faults. Normal faults extend the hanging wall of an extensional detachment that is the active segment of a complex system of successive WSW-directed extensional detachments which have thinned the Betic upper crust since middle Miocene. Two areas, which are connected by an active 40-km long dextral strike-slip transfer fault zone, concentrate present-day extension. Both the seismicity distribution and focal mechanisms agree with the position and regime of the observed faults. The activity of the transfer zone during middle Miocene to present implies a mode of extension which must have remained substantially the same over the entire period. Thus, the mechanisms driving extension should still be operating. Both the westward migration of the extensional loci and the high asymmetry of the extensional systems can be related to edge delamination below the south Iberian margin coupled with roll-back under the Alborán Sea; involving the asymmetric westward inflow of asthenospheric material under the margins.

  7. Application of fault tree approach for the causation mechanism of urban haze in Beijing--Considering the risk events related with exhausts of coal combustion.

    PubMed

    Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Qian, Yu

    2016-02-15

    Haze weather has become a serious environmental pollution problem in many Chinese cities. One of the most critical factors in the formation of haze weather is the exhaust of coal combustion, so it is meaningful to work out the causation mechanism linking urban haze and coal combustion exhaust. Based on these considerations, the fault tree analysis (FTA) approach was employed, for the first time, to study the causation mechanism of urban haze in Beijing by considering the risk events related to the exhausts of coal combustion. Using this approach, the fault tree of the urban haze causation system connected with coal combustion exhausts was first established; the risk events were then discussed and identified; next, the minimal cut sets were determined using Boolean algebra; finally, the structure, probability and critical importance degree analyses of the risk events were completed for qualitative and quantitative assessment. The results showed that FTA is an effective and simple tool for the causation mechanism analysis and risk management of urban haze in China. Copyright © 2015 Elsevier B.V. All rights reserved.
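
    A small sketch of the Boolean step described above: a top-down (MOCUS-style) expansion of gate definitions into cut sets, followed by removal of non-minimal sets. The gate structure and event names are illustrative only and do not reproduce the Beijing haze fault tree.

      # Gates are either ("OR", children) or ("AND", children); names not listed
      # in `gates` are basic events. The tree below is a made-up example.
      gates = {
          "HAZE":         ("OR",  ["EMISSIONS", "stagnant_weather"]),
          "EMISSIONS":    ("AND", ["coal_combustion", "WEAK_CONTROL"]),
          "WEAK_CONTROL": ("OR",  ["desulfurization_failure", "dust_removal_failure"]),
      }

      def cut_sets(node):
          """Recursively expand a gate into a list of cut sets (sets of basic events)."""
          if node not in gates:                       # basic event
              return [{node}]
          kind, children = gates[node]
          child_sets = [cut_sets(c) for c in children]
          if kind == "OR":
              return [s for sets in child_sets for s in sets]
          out = [set()]                               # AND: union of one set per child
          for sets in child_sets:
              out = [a | b for a in out for b in sets]
          return out

      def minimize(sets):
          """Keep only minimal cut sets (drop proper supersets of another set)."""
          return [s for s in sets if not any(t < s for t in sets)]

      print(minimize(cut_sets("HAZE")))               # three minimal cut sets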

  8. The Tjellefonna fault system of Western Norway: Linking late-Caledonian extension, post-Caledonian normal faulting, and Tertiary rock column uplift with the landslide-generated tsunami event of 1756

    NASA Astrophysics Data System (ADS)

    Redfield, T. F.; Osmundsen, P. T.

    2009-09-01

    On February 22, 1756, approximately 15.7 million cubic meters of bedrock were catastrophically released as a giant rockslide into the Langfjorden. Subsequently, three ˜ 40 meter high tsunami waves overwhelmed the village of Tjelle and several other local communities. Inherited structures had isolated a compartment in the hanging wall damage zone of the fjord-dwelling Tjellefonna fault. Because the region is seismically active in oblique-normal mode, and in accordance with scant historical sources, we speculate that an earthquake on a nearby fault may have caused the already-weakened Tjelle hillside to fail. From interpretation of structural, geomorphic, and thermo-chronological data we suggest that today's escarpment topography of Møre og Trøndelag is controlled to a first order by post-rift reactivation of faults parallel to the Mesozoic passive margin. In turn, a number of these faults reactivated Late Caledonian or early post-Caledonian fabrics. Normal-sense reactivation of inherited structures along much of coastal Norway suggests that a structural link exists between the processes that destroy today's mountains and those that created them. The Paleozoic Møre-Trøndelag Fault Complex was reactivated as a normal fault during the Mesozoic and, probably, throughout the Cenozoic until the present day. Its NE-SW trending strands crop out between the coast and the base of a c. 1.7 km high NW-facing topographic 'Great Escarpment.' Well-preserved kinematic indicators and multiple generations of fault products are exposed along the Tjellefonna fault, a well-defined structural and topographic lineament parallel to both the Langfjorden and the Great Escarpment. The slope instability that was formerly present at Tjelle, and additional instabilities currently present throughout the region, may be viewed as the direct product of past and ongoing development of tectonic topography in Møre og Trøndelag county. In the Langfjorden region in particular, structural geometry

  9. Structural styles of Paleozoic intracratonic fault reactivation: A case study of the Grays Point fault zone in southeastern Missouri, USA

    USGS Publications Warehouse

    Clendenin, C.W.; Diehl, S.F.

    1999-01-01

    A pronounced, subparallel set of northeast-striking faults occurs in southeastern Missouri, but little is known about these faults because of poor exposure. The Commerce fault system is the southernmost exposed fault system in this set and has an ancestry related to Reelfoot rift extension. Recent published work indicates that this fault system has a long history of reactivation. The northeast-striking Grays Point fault zone is a segment of the Commerce fault system and is well exposed along the southeast rim of an inactive quarry. Our mapping shows that the Grays Point fault zone also has a complex history of polyphase reactivation, involving three periods of Paleozoic reactivation that occurred in the Late Ordovician, the Devonian, and post-Mississippian time. Each period is characterized by divergent, right-lateral oblique-slip faulting. Petrographic examination of sidewall rip-out clasts in calcite-filled faults associated with the Grays Point fault zone supports a minimum of three periods of right-lateral oblique-slip. The reported observations imply that a genetic link exists between intracratonic fault reactivation and strain produced by Paleozoic orogenies affecting the eastern margin of Laurentia (North America). Interpretation of this link indicates that right-lateral oblique-slip has occurred on all of the northeast-striking faults in southeastern Missouri as a result of strain influenced by the convergence directions of the different Paleozoic orogenies.

  10. Sequential Test Strategies for Multiple Fault Isolation

    NASA Technical Reports Server (NTRS)

    Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.

    1997-01-01

    In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
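
    The sketch below conveys only the information-theoretic flavour of test sequencing, and only for the much simpler single-fault case: at each step the test with the lowest expected posterior entropy is applied and the candidate set is pruned by its outcome. The test/fault matrix and priors are made up, and the paper's multiple-fault machinery (Lagrangian relaxation, diagnostic digraphs) is not represented.

      import math

      def entropy(weights):
          """Shannon entropy of the normalized distribution over candidates."""
          total = sum(weights)
          return -sum((w / total) * math.log2(w / total) for w in weights if w > 0)

      def isolate(tests, priors, true_fault):
          """Greedy sequential diagnosis: pick the test with the lowest expected
          posterior entropy, observe its (simulated) outcome, prune candidates."""
          candidates = dict(priors)
          unused = set(tests)
          sequence = []
          while len(candidates) > 1 and unused:
              def expected_entropy(t):
                  total, exp_h = sum(candidates.values()), 0.0
                  for outcome in (0, 1):
                      side = [p for f, p in candidates.items()
                              if tests[t].get(f, 0) == outcome]
                      if side:
                          exp_h += (sum(side) / total) * entropy(side)
                  return exp_h
              t = min(unused, key=expected_entropy)
              unused.remove(t)
              outcome = tests[t].get(true_fault, 0)      # simulate the test result
              candidates = {f: p for f, p in candidates.items()
                            if tests[t].get(f, 0) == outcome}
              sequence.append((t, outcome))
          return sequence, sorted(candidates)

      tests = {                      # hypothetical test -> {fault: detected?}
          "t1": {"f1": 1, "f2": 1, "f3": 0},
          "t2": {"f1": 1, "f2": 0, "f3": 0},
          "t3": {"f1": 0, "f2": 0, "f3": 1},
      }
      priors = {"f1": 0.5, "f2": 0.3, "f3": 0.2}
      print(isolate(tests, priors, true_fault="f2"))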

  11. Fault compaction and overpressured faults: results from a 3-D model of a ductile fault zone

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Miller, S. A.

    2003-10-01

    investigated. Significant leakage perpendicular to the fault strike (in the case of a young fault), or cracks hydraulically linking the fault core to the damaged zone (for a mature fault) are probable mechanisms for keeping the faults strong and might play a significant role in modulating fault pore pressures. Therefore, fault-normal hydraulic properties of fault zones should be a future focus of field and numerical experiments.

  12. Triggered surface slips in the Coachella Valley area associated with the 1992 Joshua Tree and Landers, California, Earthquakes

    USGS Publications Warehouse

    Rymer, M.J.

    2000-01-01

    The Coachella Valley area was strongly shaken by the 1992 Joshua Tree (23 April) and Landers (28 June) earthquakes, and both events caused triggered slip on active faults within the area. Triggered slip associated with the Joshua Tree earthquake was on a newly recognized fault, the East Wide Canyon fault, near the southwestern edge of the Little San Bernardino Mountains. Slip associated with the Landers earthquake formed along the San Andreas fault in the southeastern Coachella Valley. Surface fractures formed along the East Wide Canyon fault in association with the Joshua Tree earthquake. The fractures extended discontinuously over a 1.5-km stretch of the fault, near its southern end. Sense of slip was consistently right-oblique, west side down, similar to the long-term style of faulting. Measured offset values were small, with right-lateral and vertical components of slip ranging from 1 to 6 mm and 1 to 4 mm, respectively. This is the first documented historic slip on the East Wide Canyon fault, which was first mapped only months before the Joshua Tree earthquake. Surface slip associated with the Joshua Tree earthquake most likely developed as triggered slip given its 5 km distance from the Joshua Tree epicenter and aftershocks. As revealed in a trench investigation, slip formed in an area with only a thin (<3 m thick) veneer of alluvium in contrast to earlier documented triggered slip events in this region, all in the deep basins of the Salton Trough. A paleoseismic trench study in an area of 1992 surface slip revealed evidence of two and possibly three surface faulting events on the East Wide Canyon fault during the late Quaternary, probably latest Pleistocene (first event) and mid- to late Holocene (second two events). About two months after the Joshua Tree earthquake, the Landers earthquake then triggered slip on many faults, including the San Andreas fault in the southeastern Coachella Valley. Surface fractures associated with this event formed discontinuous

  13. Linking Tree Growth Response to Measured Microclimate - A Field Based Approach

    NASA Astrophysics Data System (ADS)

    Martin, J. T.; Hoylman, Z. H.; Looker, N. T.; Jencso, K. G.; Hu, J.

    2015-12-01

    climate and annual ring formation, and suggest a rather immediate growth response to critical micro-meteorological conditions occurring at different times across the landscape by linking the timing and magnitude of tree growth responses to in situ measurements of environmental conditions.

  14. Fault and event tree analyses for process systems risk analysis: uncertainty handling formulations.

    PubMed

    Ferdous, Refaul; Khan, Faisal; Sadiq, Rehan; Amyotte, Paul; Veitch, Brian

    2011-01-01

    Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event (ETA) and fault tree analyses (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed. © 2010 Society for Risk Analysis.
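
    A minimal sketch of one common way to carry imprecise likelihoods through a fault tree: triangular fuzzy probabilities (lower, modal, upper) combined component-wise at AND and OR gates under an independence assumption. The operators shown are a standard textbook formulation and the numbers are invented; the article's evidence-theory and dependency-coefficient treatment is not reproduced here.

      def fuzzy_and(*ps):
          """AND gate: multiply (lower, modal, upper) components."""
          a = m = b = 1.0
          for (pa, pm, pb) in ps:
              a, m, b = a * pa, m * pm, b * pb
          return (a, m, b)

      def fuzzy_or(*ps):
          """OR gate: 1 - product of complements, component-wise (monotone in each input)."""
          ca = cm = cb = 1.0
          for (pa, pm, pb) in ps:
              ca, cm, cb = ca * (1 - pa), cm * (1 - pm), cb * (1 - pb)
          return (1 - ca, 1 - cm, 1 - cb)

      # Illustrative triangular fuzzy probabilities for three basic events
      pump_fail   = (0.010, 0.020, 0.040)
      valve_stuck = (0.005, 0.010, 0.020)
      sensor_bias = (0.020, 0.030, 0.050)

      top = fuzzy_or(fuzzy_and(pump_fail, valve_stuck), sensor_bias)
      print(top)    # (lower, modal, upper) bounds on the top-event probability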

  15. Coseismic fault slip associated with the 1992 M(sub w) 6.1 Joshua Tree, California, earthquake: Implications for the Joshua Tree-Landers earthquake sequence

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.; Reilinger, Robert E.; Rodi, William; Li, Yingping; Toksoz, M. Nafi; Hudnut, Ken

    1995-01-01

    Coseismic surface deformation associated with the M(sub w) 6.1, April 23, 1992, Joshua Tree earthquake is well represented by estimates of geodetic monument displacements at 20 locations independently derived from Global Positioning System and trilateration measurements. The rms signal to noise ratio for these inferred displacements is 1.8 with near-fault displacement estimates exceeding 40 mm. In order to determine the long-wavelength distribution of slip over the plane of rupture, a Tikhonov regularization operator is applied to these estimates which minimizes stress variability subject to purely right-lateral slip and zero surface slip constraints. The resulting slip distribution yields a geodetic moment estimate of 1.7 x 10(exp 18) N m with corresponding maximum slip around 0.8 m and compares well with independent and complementary information including seismic moment and source time function estimates and main shock and aftershock locations. From empirical Green's functions analyses, a rupture duration of 5 s is obtained which implies a rupture radius of 6-8 km. Most of the inferred slip lies to the north of the hypocenter, consistent with northward rupture propagation. Stress drop estimates are in the range of 2-4 MPa. In addition, predicted Coulomb stress increases correlate remarkably well with the distribution of aftershock hypocenters; most of the aftershocks occur in areas for which the mainshock rupture produced stress increases larger than about 0.1 MPa. In contrast, predicted stress changes are near zero at the hypocenter of the M(sub w) 7.3, June 28, 1992, Landers earthquake which nucleated about 20 km beyond the northernmost edge of the Joshua Tree rupture. Based on aftershock migrations and the predicted static stress field, we speculate that redistribution of Joshua Tree-induced stress perturbations played a role in the spatio-temporal development of the earthquake sequence culminating in the Landers event.
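
    A schematic example of a Tikhonov-regularized least-squares inversion of the kind referred to above, with a synthetic Green's function matrix, synthetic "geodetic" data, and a first-difference roughening operator standing in for the stress-variability operator and the slip constraints used in the study.

      import numpy as np

      rng = np.random.default_rng(1)
      n_obs, n_patches = 40, 20                 # displacement data, slip patches

      G = rng.normal(size=(n_obs, n_patches))   # synthetic Green's functions
      m_true = np.sin(np.linspace(0.0, np.pi, n_patches))     # smooth slip model
      d = G @ m_true + 0.05 * rng.normal(size=n_obs)          # noisy observations

      L = np.diff(np.eye(n_patches), axis=0)    # first-difference roughening operator
      lam = 1.0                                 # regularization weight

      # Solve min ||G m - d||^2 + lam^2 ||L m||^2 via an augmented least squares
      A = np.vstack([G, lam * L])
      b = np.concatenate([d, np.zeros(L.shape[0])])
      m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

      print("maximum estimated slip:", float(m_est.max()))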

  16. Influence of fault trend, fault bends, and fault convergence on shallow structure, geomorphology, and hazards, Hosgri strike-slip fault, offshore central California

    NASA Astrophysics Data System (ADS)

    Johnson, S. Y.; Watt, J. T.; Hartwell, S. R.

    2012-12-01

    We mapped a ~94-km-long portion of the right-lateral Hosgri Fault Zone from Point Sal to Piedras Blancas in offshore central California using high-resolution seismic reflection profiles, marine magnetic data, and multibeam bathymetry. The database includes 121 seismic profiles across the fault zone and is perhaps the most comprehensive reported survey of the shallow structure of an active strike-slip fault. These data document the location, length, and near-surface continuity of multiple fault strands, highlight fault-zone heterogeneity, and demonstrate the importance of fault trend, fault bends, and fault convergences in the development of shallow structure and tectonic geomorphology. The Hosgri Fault Zone is continuous through the study area passing through a broad arc in which fault trend changes from about 338° to 328° from south to north. The southern ~40 km of the fault zone in this area is more extensional, resulting in accommodation space that is filled by deltaic sediments of the Santa Maria River. The central ~24 km of the fault zone is characterized by oblique convergence of the Hosgri Fault Zone with the more northwest-trending Los Osos and Shoreline Faults. Convergence between these faults has resulted in the formation of local restraining and releasing fault bends, transpressive uplifts, and transtensional basins of varying size and morphology. We present a hypothesis that links development of a paired fault bend to indenting and bulging of the Hosgri Fault by a strong crustal block translated to the northwest along the Shoreline Fault. Two diverging Hosgri Fault strands bounding a central uplifted block characterize the northern ~30 km of the Hosgri Fault in this area. The eastern Hosgri strand passes through releasing and restraining bends; the releasing bend is the primary control on development of an elongate, asymmetric, "Lazy Z" sedimentary basin. The western strand of the Hosgri Fault Zone passes through a significant restraining bend and

  17. The Effects of Fault Bends on Rupture Propagation: A Parameter Study

    NASA Astrophysics Data System (ADS)

    Lozos, J. C.; Oglesby, D. D.; Duan, B.; Wesnousky, S. G.

    2008-12-01

    Segmented faults with stepovers are ubiquitous, and occur at a variety of scales, ranging from small stepovers on the San Jacinto Fault to the large-scale stepover of the San Andreas Fault between Tejon Pass and San Gorgonio Pass. Because this type of fault geometry is so prevalent, understanding how rupture propagates through such systems is important for evaluating seismic hazard at different points along these faults. In the present study, we systematically investigate how far rupture will propagate through a fault with a linked (i.e., continuous fault) stepover, based on the length of the linking fault segment and the angle that connects the linking segment to adjacent segments. We ran dynamic models of such systems using a two-dimensional finite element code (Duan and Oglesby 2007). The fault system in our models consists of three segments: two parallel 10-km-long faults linked at a specified angle by a linking segment of between 500 m and 5 km. This geometry was run both as an extensional system and as a compressional system. We observed several distinct rupture behaviors, with systematic differences between compressional and extensional cases. Both shear directions rupture straight through the stepover for very shallow stepover angles. In compressional systems with steeper angles, rupture may jump ahead from the stepover segment onto the far segment; whether or not rupture on this segment reaches critical patch size and slips fully is also a function of angle and stepover length. In some compressional cases, if the angle is steep enough and the stepover short enough, rupture may jump over the step entirely and propagate down the far segment without touching the linking segment. In extensional systems, rupture jumps from the nucleating segment onto the linking segment even at shallow angles, but at steeper angles, rupture propagates through without jumping. It is easier to propagate through a wider range of angles in extensional cases. In both

  18. Fault-scale controls on rift geometry: the Bilila-Mtakataka Fault, Malawi

    NASA Astrophysics Data System (ADS)

    Hodge, M.; Fagereng, A.; Biggs, J.; Mdala, H. S.

    2017-12-01

    Border faults that develop during initial stages of rifting determine the geometry of rifts and passive margins. At outcrop and regional scales, it has been suggested that border fault orientation may be controlled by reactivation of pre-existing weaknesses. Here, we perform a multi-scale investigation on the influence of anisotropic fabrics along a major developing border fault in the southern East African Rift, Malawi. The 130 km long Bilila-Mtakataka fault has been proposed to have slipped in a single MW 8 earthquake with 10 m of normal displacement. The fault is marked by an 11±7 m high scarp with an average trend that is oblique to the current plate motion. Variations in scarp height are greatest at lithological boundaries and where the scarp switches between following and cross-cutting high-grade metamorphic foliation. Based on the scarp's geometry and morphology, we define 6 geometrically distinct segments. We suggest that the segments link to at least one deeper structure that strikes parallel to the average scarp trend, an orientation consistent with the kinematics of an early phase of rift initiation. The slip required on a deep fault(s) to match the height of the current scarp suggests multiple earthquakes along the fault. We test this hypothesis by studying the scarp morphology using high-resolution satellite data. Our results suggest that during the earthquake(s) that formed the current scarp, the propagation of the fault toward the surface locally followed moderately-dipping foliation well oriented for reactivation. In conclusion, although well oriented pre-existing weaknesses locally influence shallow fault geometry, large-scale border fault geometry appears primarily controlled by the stress field at the time of fault initiation.

  19. Ultrareliable fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Webster, L. D.; Slykhouse, R. A.; Booth, L. A., Jr.; Carson, T. M.; Davis, G. J.; Howard, J. C.

    1984-01-01

    It is demonstrated that fault-tolerant computer systems based on redundant, independent operation, such as those on the Shuttles, are a viable alternative in fault-tolerant system design. The ultrareliable fault-tolerant control system (UFTCS) was developed and tested in laboratory simulations of a UH-1H helicopter. UFTCS includes asymptotically stable independent control elements in a parallel, cross-linked system environment, with static redundancy providing the fault tolerance. Polling is performed among the computers, the results allowing for time-delay channel variations within tight bounds. When compared with laboratory and actual flight data for the helicopter, the probability of a fault during the first 10 hr of flight, given quintuple computer redundancy, was found to be 1 in 290 billion. Two weeks of untended Space Station operations would experience a fault probability of 1 in 24 million. Techniques for avoiding channel divergence problems are identified.

  20. Support vector machines-based fault diagnosis for turbo-pump rotor

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng-Fa; Chu, Fu-Lei

    2006-05-01

    Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and have poor generalisation when fault samples are few. Support vector machines (SVM) constitute a new general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is designed for two-class classification, while most fault diagnosis problems are multi-class, a new multi-class SVM classification algorithm named 'one to others' is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, requires little repeated training, and speeds up both training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
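
    A rough sketch of the 'one to others' idea: binary SVMs arranged as a chain (a degenerate binary tree) ordered by fault priority, each separating one class from all remaining classes. The class names, synthetic data, and the use of scikit-learn are assumptions for illustration; this is not the turbo-pump dataset or the authors' implementation.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      classes = ["unbalance", "misalignment", "rub", "normal"]   # priority order
      X = rng.normal(size=(200, 8))                              # toy features
      y = rng.integers(0, len(classes), size=200)                # toy labels

      def train_chain(X, y, n_classes):
          """Train one binary SVM per class k, separating k from classes k+1..n-1."""
          chain = []
          for k in range(n_classes - 1):
              mask = y >= k                  # samples not claimed by earlier nodes
              chain.append(SVC(kernel="rbf").fit(X[mask], (y[mask] == k).astype(int)))
          return chain

      def predict_chain(chain, x):
          """Walk the tree: stop at the first node that claims the sample."""
          for k, clf in enumerate(chain):
              if clf.predict(x.reshape(1, -1))[0] == 1:
                  return k
          return len(chain)                  # fell through to the last class

      chain = train_chain(X, y, len(classes))
      print(classes[predict_chain(chain, X[0])])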

  1. Diel growth dynamics in tree stems: linking anatomy and ecophysiology.

    PubMed

    Steppe, Kathy; Sterck, Frank; Deslauriers, Annie

    2015-06-01

    Impacts of climate on stem growth in trees are studied in anatomical, ecophysiological, and ecological disciplines, but an integrative framework to assess those impacts remains lacking. In this opinion article, we argue that three research efforts are required to provide that integration. First, we need to identify the missing links in diel patterns in stem diameter and stem growth and relate those patterns to the underlying mechanisms that control water and carbon balance. Second, we should focus on the understudied mechanisms responsible for seasonal impacts on such diel patterns. Third, information on stem anatomy and ecophysiology should be integrated in the same experiments and mechanistic plant growth models to capture both diel and seasonal scales. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. TreeVector: scalable, interactive, phylogenetic trees for the web.

    PubMed

    Pethica, Ralph; Barker, Gary; Kovacs, Tim; Gough, Julian

    2010-01-28

    Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees. We introduce TreeVector, a Scalable Vector Graphics- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required, and can be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than an end graphic for print. TreeVector is fast and easy to use and is available to download precompiled, but is also open source. It can also be run from the web server listed below or the user's own web server. It has already been deployed on two recognized and widely used database Web sites.

  3. Learning from examples - Generation and evaluation of decision trees for software resource analysis

    NASA Technical Reports Server (NTRS)

    Selby, Richard W.; Porter, Adam A.

    1988-01-01

    A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
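
    In a similar spirit, the sketch below fits a decision tree that flags "high development effort" modules from a few module metrics. The feature names, the synthetic data and rule, and the use of scikit-learn's tree learner are assumptions; the study's own tree-generation and evaluation procedure is not reproduced.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(42)
      n = 300
      X = np.column_stack([
          rng.integers(50, 2000, n),      # source lines
          rng.integers(0, 40, n),         # number of changes
          rng.integers(1, 30, n),         # complexity-like metric
      ])
      # Synthetic rule: large, frequently changed modules tend to be high effort
      y = ((X[:, 0] > 1000) & (X[:, 1] > 15)).astype(int)

      clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
      print(export_text(clf, feature_names=["sloc", "changes", "complexity"]))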

  4. Analysis of a hardware and software fault tolerant processor for critical applications

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne B.

    1993-01-01

    Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.

  5. Link importance incorporated failure probability measuring solution for multicast light-trees in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo

    2018-03-01

    The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as single-link failure. However, few studies consider failure probability constraints when building LTs. It is worth noting that each link of an LT plays a different role under failure scenarios, so the importance of every link should be considered when calculating the failure probability of an LT. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under an independent failure model and a shared-risk-link-group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
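
    A baseline sketch of the failure probability of a multicast light-tree under an independent-link-failure model; the link list and probabilities are invented, and the paper's link-importance weighting and shared-risk-link-group handling are not included.

      def tree_failure_probability(link_fail_probs):
          """P(at least one tree link fails), links failing independently."""
          p_survive = 1.0
          for p in link_fail_probs:
              p_survive *= (1.0 - p)
          return 1.0 - p_survive

      # Links of a hypothetical light-tree: (u, v) -> failure probability
      tree_links = {("s", "a"): 1e-3, ("a", "d1"): 2e-3,
                    ("a", "b"): 1e-3, ("b", "d2"): 5e-4}
      print(tree_failure_probability(tree_links.values()))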

  6. Node degree distribution in spanning trees

    NASA Astrophysics Data System (ADS)

    Pozrikidis, C.

    2016-03-01

    A method is presented for computing the number of spanning trees involving one link or a specified group of links, and excluding another link or a specified group of links, in a network described by a simple graph in terms of derivatives of the spanning-tree generating function defined with respect to the eigenvalues of the Kirchhoff (weighted Laplacian) matrix. The method is applied to deduce the node degree distribution in a complete or randomized set of spanning trees of an arbitrary network. An important feature of the proposed method is that the explicit construction of spanning trees is not required. It is shown that the node degree distribution in the spanning trees of the complete network is described by the binomial distribution. Numerical results are presented for the node degree distribution in square, triangular, and honeycomb lattices.
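
    A small sketch of the underlying matrix-tree (Kirchhoff) machinery: the number of spanning trees of a connected graph equals any cofactor of its Laplacian. The generating-function derivatives used in the paper for including or excluding particular links are not shown; the complete-graph example is only a sanity check against Cayley's formula.

      import numpy as np

      def spanning_tree_count(A):
          """A: symmetric 0/1 adjacency matrix of a simple connected graph."""
          A = np.asarray(A, dtype=float)
          L = np.diag(A.sum(axis=1)) - A           # Kirchhoff (weighted Laplacian) matrix
          # Any cofactor: delete row 0 and column 0, take the determinant
          return int(round(np.linalg.det(L[1:, 1:])))

      # The complete graph K4 has 4**(4-2) = 16 spanning trees (Cayley's formula)
      K4 = np.ones((4, 4)) - np.eye(4)
      print(spanning_tree_count(K4))               # -> 16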

  7. Tree Colors: Color Schemes for Tree-Structured Data.

    PubMed

    Tennekes, Martijn; de Jonge, Edwin

    2014-12-01

    We present a method to map tree structures to colors from the Hue-Chroma-Luminance color model, which is known for its well balanced perceptual properties. The Tree Colors method can be tuned with several parameters, whose effect on the resulting color schemes is discussed in detail. We provide a free and open source implementation with sensible parameter defaults. Categorical data are very common in statistical graphics, and often these categories form a classification tree. We evaluate applying Tree Colors to tree structured data with a survey on a large group of users from a national statistical institute. Our user study suggests that Tree Colors are useful, not only for improving node-link diagrams, but also for unveiling tree structure in non-hierarchical visualizations.

  8. Fault tree analysis of failure cause of crushing plant and mixing bed hall at Khoy cement factory in Iran☆

    PubMed Central

    Nouri.Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad

    2014-01-01

    Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probabilities of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department are 73, 64, and 95 percent, respectively, and that the conveyor belt subsystem is the most failure-prone part of the system. Finally, maintenance is proposed as a method to control and prevent the occurrence of failure. PMID:26779433

  9. Fault tree analysis of failure cause of crushing plant and mixing bed hall at Khoy cement factory in Iran.

    PubMed

    Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad

    2014-04-01

    Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probabilities of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department are 73, 64, and 95 percent, respectively, and that the conveyor belt subsystem is the most failure-prone part of the system. Finally, maintenance is proposed as a method to control and prevent the occurrence of failure.

  10. LinkFinder: An expert system that constructs phylogenic trees

    NASA Technical Reports Server (NTRS)

    Inglehart, James; Nelson, Peter C.

    1991-01-01

    An expert system has been developed using the C Language Integrated Production System (CLIPS) that automates the process of constructing DNA sequence based phylogenies (trees or lineages) that indicate evolutionary relationships. LinkFinder takes as input homologous DNA sequences from distinct individual organisms. It measures variations between the sequences, selects appropriate proportionality constants, and estimates the time that has passed since each pair of organisms diverged from a common ancestor. It then designs and outputs a phylogenic map summarizing these results. LinkFinder can find genetic relationships between different species, and between individuals of the same species, including humans. It was designed to take advantage of the vast amount of sequence data being produced by the Genome Project, and should be of value to evolution theorists who wish to utilize this data, but who have no formal training in molecular genetics. Evolutionary theory holds that distinct organisms carrying a common gene inherited that gene from a common ancestor. Homologous genes vary from individual to individual and species to species, and the amount of variation is now believed to be directly proportional to the time that has passed since divergence from a common ancestor. The proportionality constant must be determined experimentally; it varies considerably with the types of organisms and DNA molecules under study. Given an appropriate constant, and the variation between two DNA sequences, a simple linear equation gives the divergence time.
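
    A toy sketch of the linear relation described: divergence time taken as proportional to the fraction of differing aligned sites. The rate constant, the factor of two for substitutions accruing on both lineages, and the example sequences are illustrative assumptions, not LinkFinder's calibrated constants.

      def fraction_diff(seq_a, seq_b):
          """Proportion of aligned positions at which the two sequences differ."""
          assert len(seq_a) == len(seq_b)
          return sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)

      def divergence_time(seq_a, seq_b, subs_per_site_per_myr=0.01):
          """Linear molecular-clock estimate: t = d / (2 * rate), with substitutions
          assumed to accumulate independently on both lineages."""
          return fraction_diff(seq_a, seq_b) / (2.0 * subs_per_site_per_myr)

      print(divergence_time("ACGTACGTAC", "ACGTACGTTT"), "Myr (illustrative)")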

  11. Linking definitions, mechanisms, and modeling of drought-induced tree death.

    PubMed

    Anderegg, William R L; Berry, Joseph A; Field, Christopher B

    2012-12-01

    Tree death from drought and heat stress is a critical and uncertain component in forest ecosystem responses to a changing climate. Recent research has illuminated how tree mortality is a complex cascade of changes involving interconnected plant systems over multiple timescales. Explicit consideration of the definitions, dynamics, and temporal and biological scales of tree mortality research can guide experimental and modeling approaches. In this review, we draw on the medical literature concerning human death to propose a water resource-based approach to tree mortality that considers the tree as a complex organism with a distinct growth strategy. This approach provides insight into mortality mechanisms at the tree and landscape scales and presents promising avenues into modeling tree death from drought and temperature stress. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Patterson, Jonathan D.; Johnson, Stephen B.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASAHDBK- 1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function." (Johnson 2011, 605) Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of relationships between SE, SHM, and FM provide hints to a modeling approach to

  13. Fault tree analysis of fire and explosion accidents for dual fuel (diesel/natural gas) ship engine rooms

    NASA Astrophysics Data System (ADS)

    Guan, Yifeng; Zhao, Jie; Shi, Tengfei; Zhu, Peipei

    2016-09-01

    In recent years, China's increased interest in environmental protection has led to the promotion of energy-efficient dual fuel (diesel/natural gas) ships on Chinese inland rivers. Natural gas as a ship fuel may pose dangers of fire and explosion if a gas leak occurs, and if explosions or fires occur in the engine rooms of a ship, heavy damage and losses will be incurred. In this paper, a fault tree model is presented that considers both fires and explosions in a dual fuel ship; in this model, fire and explosion in the dual fuel engine rooms are the top events. All the basic events along with the minimal cut sets are obtained through the analysis. The primary factors that affect accidents involving fires and explosions are determined by calculating the degree of structure importance of the basic events. According to these results, corresponding measures are proposed to ensure and improve the safety and reliability of Chinese inland dual fuel ships.

  14. Evolving transpressional strain fields along the San Andreas fault in southern California: implications for fault branching, fault dip segmentation and strain partitioning

    NASA Astrophysics Data System (ADS)

    Bergh, Steffen; Sylvester, Arthur; Damte, Alula; Indrevær, Kjetil

    2014-05-01

    , renewed strike-slip movements and contractile fold-thrust belt structures. Notably, the strike-slip movements on the San Andreas fault were transformed outward into the surrounding rocks as oblique-reverse faults to link up with the subsidiary Skeleton Canyon fault in the Mecca Hills. Instead of a classic flower structure model for this transpressional uplift, the San Andreas fault strands were segmented into domains that record: (i) early strike-slip motion, (ii) later oblique shortening with distributed deformation (en echelon fold domains), followed by (iii) localized fault-parallel deformation (strike-slip) and (iv) superposed out-of-sequence faulting and fault-normal, partitioned deformation (fold-thrust belt domains). These results bear on the question of whether spatial and temporal fold-fault branching and migration patterns evolving along non-vertical strike-slip fault segments can play a role in the localization of earthquakes along the San Andreas fault.

  15. Using fault tree analysis to identify contributing factors to engulfment in flowing grain in on-farm grain bins.

    PubMed

    Kingman, D M; Field, W E

    2005-11-01

    Findings reported by researchers at Illinois State University and Purdue University indicated that since 1980, an average of eight individuals per year have become engulfed and died in farm grain bins in the U.S. and Canada and that all these deaths are significant because they are believed to be preventable. During a recent effort to develop intervention strategies and recommendations for an ASAE farm grain bin safety standard, fault tree analysis (FTA) was utilized to identify contributing factors to engulfments in grain stored in on-farm grain bins. FTA diagrams provided a spatial perspective of the circumstances that occurred prior to engulfment incidents, a perspective never before presented in other hazard analyses. The FTA also demonstrated relationships and interrelationships of the contributing factors. FTA is a useful tool that should be applied more often in agricultural incident investigations to assist in the more complete understanding of the problem studied.

  16. The integration methods of fuzzy fault mode and effect analysis and fault tree analysis for risk analysis of yogurt production

    NASA Astrophysics Data System (ADS)

    Aprilia, Ayu Rizky; Santoso, Imam; Ekasari, Dhita Murita

    2017-05-01

    Yogurt is a milk-based product with beneficial health effects. The process for the production of yogurt is very susceptible to failure because it involves bacteria and fermentation. For an industry, these risks may cause harm and have a negative impact. For a product to be successful and profitable, the risks that may occur during the production process must be analyzed. Risk analysis can identify the risks in detail, help prevent them, and determine how they should be handled so that they can be minimized. Therefore, this study analyzes the risks of the production process through a case study in CV.XYZ. The methods used in this research are fuzzy Failure Mode and Effect Analysis (fuzzy FMEA) and Fault Tree Analysis (FTA). The results showed that there are 6 risks from equipment variables, raw material variables, and process variables. These include the critical risks of a non-aseptic process, in particular damage to the yogurt starter through contamination by fungus or other bacteria, and of inadequate equipment sanitation. The quantitative FTA showed that the highest probability is that of a non-aseptic process, at 3.902%. The recommendations for improvement include establishing SOPs (Standard Operating Procedures) covering the process, workers, and environment; controlling the yogurt starter; and improving production planning and equipment sanitation using hot-water immersion.

  17. Discovering the Complexity of Capable Faults in Northern Chile

    NASA Astrophysics Data System (ADS)

    Gonzalez, G.; del Río, I. A.; Rojas Orrego, C., Sr.; Astudillo, L. A., Sr.

    2017-12-01

    Great crustal earthquakes (Mw >7.0) in the upper plate of subduction zones are relatively uncommon and poorly documented. We hypothesize that crustal earthquakes are poorly represented in the instrumental record because they have long recurrence intervals. In northern Chile, the extreme long-term aridity permits extraordinary preservation of landforms related to fault activity, making this region a primary target to understand how upper plate faults work at subduction zones. To understand how these faults relate to crustal seismicity in the long term, we conducted a detailed palaeoseismological study, integrating trench logging with UAV-based photogrammetry. Optically stimulated luminescence (OSL) age determinations were performed to date deposits linked to faulting. In this contribution we present the case study of two primary faults located in the Coastal Cordillera of northern Chile between Iquique (21ºS) and Antofagasta (24ºS). We estimate the maximum moment magnitude of earthquakes generated on these upper plate faults, their recurrence intervals, and the fault-slip rates. We conclude that the studied upper plate faults show complex kinematics on geological timescales. Faults seem to change their kinematics from normal (extension) to reverse (compression) or from normal to transcurrent (compression) according to the stage of the subduction earthquake cycle. Normal displacement is related to coseismic stages, and compression is linked to the interseismic period. As a result of this complex interaction, these faults are capable of generating Mw 7.0 earthquakes, with recurrence times on the order of thousands of years, during every stage of the subduction earthquake cycle.

  18. Using certification trails to achieve software fault tolerance

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Masson, Gerald M.

    1993-01-01

    A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, they are accepted as correct; otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized and it was illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach was compared to other approaches to fault tolerance. Because of space limitations, we have omitted examples of our technique applied to the Huffman tree and convex hull problems. These can be found in the full version of this paper.
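    As a rough illustration of the two-phase idea, the sketch below computes a minimum spanning tree in a first phase, leaves behind a trail (the chosen edge indices), and then verifies the trail in a second phase using the cycle property; a corrupted trail yields an error indication rather than a wrong answer. This is a simplified stand-in under stated assumptions, not the authors' actual certification-trail construction or second-phase algorithm.

```python
# Simplified certification-trail sketch for minimum spanning trees (illustrative
# only). Phase 1: Kruskal's algorithm emits a trail of chosen edge indices.
# Phase 2: an independent checker verifies the trail is a spanning tree whose
# non-trail edges all satisfy the cycle property, or signals an error.

def mst_with_trail(n, edges):
    """Phase 1: return (total MST weight, trail of chosen edge indices)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    trail, total = [], 0
    for idx in sorted(range(len(edges)), key=lambda i: edges[i][2]):
        u, v, w = edges[idx]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            trail.append(idx)
            total += w
    return total, trail

def check_with_trail(n, edges, trail):
    """Phase 2: return the verified MST weight, or None as the error indication."""
    if (len(trail) != n - 1 or len(set(trail)) != n - 1
            or any(i < 0 or i >= len(edges) for i in trail)):
        return None
    adj = {i: [] for i in range(n)}
    for idx in trail:
        u, v, w = edges[idx]
        adj[u].append((v, w))
        adj[v].append((u, w))
    # Spanning check: every vertex must be reachable through trail edges.
    seen, stack = {0}, [0]
    while stack:
        for nxt, _ in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    if len(seen) != n:
        return None

    def max_on_path(src, dst):
        # Max edge weight on the unique src-dst path in the trail's tree.
        frontier, visited = [(src, float("-inf"))], {src}
        while frontier:
            node, mx = frontier.pop()
            if node == dst:
                return mx
            for nxt, w in adj[node]:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, max(mx, w)))
        return None

    # Cycle property: each non-trail edge must not be lighter than the heaviest
    # edge on the tree path between its endpoints, otherwise the tree is not minimal.
    trail_set = set(trail)
    for idx, (u, v, w) in enumerate(edges):
        if idx not in trail_set and w < max_on_path(u, v):
            return None
    return sum(edges[i][2] for i in trail)

if __name__ == "__main__":
    edges = [(0, 1, 4), (1, 2, 2), (0, 2, 5), (2, 3, 3), (1, 3, 7)]
    weight, trail = mst_with_trail(4, edges)
    assert check_with_trail(4, edges, trail) == weight
    print("MST weight", weight, "verified via certification trail", trail)
```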

  19. Fault Tree Analysis: Investigation of Epidemic Hemorrhagic Fever Infection Acquired in Animal Laboratories in China.

    PubMed

    Liu, Xiao Yu; Xue, Kang Ning; Rong, Rong; Zhao, Chi Hong

    2016-01-01

    Epidemic hemorrhagic fever has been an ongoing threat to laboratory personnel involved in animal care and use. Laboratory transmissions and severe infections have occurred over the past twenty years, even though standards and regulations for laboratory biosafety have been issued, upgraded, and implemented in China. Therefore, there is an urgent need to identify risk factors and to seek effective preventive measures that can curb the incidence of epidemic hemorrhagic fever among laboratory personnel. In the present study, we reviewed the literature on animal-laboratory-acquired hemorrhagic fever infections reported from 1995 to 2015 and analyzed these incidents using fault tree analysis (FTA). The analysis showed that purchasing qualified animals and guarding against wild rats, which together ensure that laboratory animals are free of hantaviruses, are the basic measures for preventing infections. In daily management, awareness of personal protection and the ability to apply it need to be further improved. Vaccination is undoubtedly the most direct and effective measure, but its role comes into play only after infection, so the prevention of infections cannot rely entirely on vaccination. Copyright © 2016 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.

  20. Active faults in Africa: a review

    NASA Astrophysics Data System (ADS)

    Skobelev, S. F.; Hanon, M.; Klerkx, J.; Govorova, N. N.; Lukina, N. V.; Kazmin, V. G.

    2004-03-01

    The active fault database and Map of active faults in Africa, at a scale of 1:5,000,000, were compiled according to the ILP Project II-2 "World Map of Major Active Faults". The data were collected in the Royal Museum of Central Africa, Tervuren, Belgium, and in the Geological Institute, Moscow, where the final edition was carried out. Active faults of Africa form three groups. The first group is represented by thrusts and reverse faults associated with compressed folds in northwestern Africa. They belong to the western part of the Alpine-Central Asian collision belt. The faults disturb only the Earth's crust and some of them do not penetrate deeper than the sedimentary cover. The second group comprises the faults of the Great African rift system. The faults form the known Western and Eastern branches, which are rifts with abnormal mantle below. The deep-seated mantle "hot" anomaly probably relates to the eastern volcanic branch. In the north, it joins with the Aden-Red Sea rift zone. Active faults in Egypt, Libya and Tunisia may represent a link between the East African rift system and the Pantellerian rift zone in the Mediterranean. The third group includes rare faults in western Equatorial Africa. The data were scarce, so most of the faults of this group were identified solely by interpretation of satellite imagery and seismicity. Some longer faults of the group may continue the transverse faults of the Atlantic and thus penetrate into the mantle; this seems evident for the Cameroon fault line.

  1. Development, Interaction and Linkage of Normal Fault Segments along the 100-km Bilila-Mtakataka Fault, Malawi

    NASA Astrophysics Data System (ADS)

    Fagereng, A.; Hodge, M.; Biggs, J.; Mdala, H. S.; Goda, K.

    2016-12-01

    Faults grow through the interaction and linkage of isolated fault segments. Continuous fault systems are those where segments interact, link and may slip synchronously, whereas non-continuous fault systems comprise isolated faults. As seismic moment is related to fault length (Wells and Coppersmith, 1994), understanding whether a fault system is continuous or not is critical in evaluating seismic hazard. Maturity may be a control on fault continuity: immature, low displacement faults are typically assumed to be non-continuous. Here, we study two overlapping, 20 km long, normal fault segments of the N-S striking Bilila-Mtakataka fault, Malawi, in the southern section of the East African Rift System. Despite the fault's relative immaturity, previous studies concluded that the Bilila-Mtakataka fault is continuous for its entire 100 km length, with the most recent event equating to an Mw 8.0 earthquake (Jackson and Blenkinsop, 1997). We explore whether segment geometry and relationship to pre-existing high-grade metamorphic foliation has influenced segment interaction and fault development. Fault geometry and scarp height are constrained by DEMs derived from SRTM, Pleiades and `Structure from Motion' photogrammetry using a UAV, alongside direct field observations. The segment strikes differ on average by 10°, but up to 55° at their adjacent tips. The southern segment is sub-parallel to the foliation, whereas the northern segment is highly oblique to the foliation. Geometrical surface discontinuities suggest two isolated faults; however, displacement-length profiles and Coulomb stress change models suggest segment interaction, with potential for linkage at depth. Further work must be undertaken on other segments to assess the continuity of the entire fault and to conclude whether an earthquake larger than the maximum instrumentally recorded event (the 1910 M7.4 Rukwa earthquake) is possible.

  2. An Intelligent Gear Fault Diagnosis Methodology Using a Complex Wavelet Enhanced Convolutional Neural Network.

    PubMed

    Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng

    2017-07-12

    As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction is generated in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering contents. Therefore, identifying the critical fault's characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal's features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experiment results of the recognition for gear faults show the feasibility and effectiveness of the proposed method, especially in the gear's weak fault features.

  3. The SCEC 3D Community Fault Model (CFM-v5): An updated and expanded fault set of oblique crustal deformation and complex fault interaction for southern California

    NASA Astrophysics Data System (ADS)

    Nicholson, C.; Plesch, A.; Sorlien, C. C.; Shaw, J. H.; Hauksson, E.

    2014-12-01

    Southern California represents an ideal natural laboratory to investigate oblique deformation in 3D owing to its comprehensive datasets, complex tectonic history, evolving components of oblique slip, and continued crustal rotations about horizontal and vertical axes. As the SCEC Community Fault Model (CFM) aims to accurately reflect this 3D deformation, we present the results of an extensive update to the model by using primarily detailed fault trace, seismic reflection, relocated hypocenter and focal mechanism nodal plane data to generate improved, more realistic digital 3D fault surfaces. The results document a wide variety of oblique strain accommodation, including various aspects of strain partitioning and fault-related folding, sets of both high-angle and low-angle faults that mutually interact, significant non-planar, multi-stranded faults with variable dip along strike and with depth, and active mid-crustal detachments. In places, closely-spaced fault strands or fault systems can remain surprisingly subparallel to seismogenic depths, while in other areas, major strike-slip to oblique-slip faults can merge, such as the S-dipping Arroyo Parida-Mission Ridge and Santa Ynez faults with the N-dipping North Channel-Pitas Point-Red Mountain fault system, or diverge with depth. Examples of the latter include the steep-to-west-dipping Laguna Salada-Indiviso faults with the steep-to-east-dipping Sierra Cucapah faults, and the steep southern San Andreas fault with the adjacent NE-dipping Mecca Hills-Hidden Springs fault system. In addition, overprinting by steep predominantly strike-slip faulting can segment which parts of intersecting inherited low-angle faults are reactivated, or result in mutual cross-cutting relationships. The updated CFM 3D fault surfaces thus help characterize a more complex pattern of fault interactions at depth between various fault sets and linked fault systems, and a more complex fault geometry than typically inferred or expected from

  4. Pipeline synthetic aperture radar data compression utilizing systolic binary tree-searched architecture for vector quantization

    NASA Technical Reports Server (NTRS)

    Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)

    1995-01-01

    A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched VQ. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.

  5. Design of physical and logical topologies with fault-tolerant ability in wavelength-routed optical network

    NASA Astrophysics Data System (ADS)

    Chen, Chunfeng; Liu, Hua; Fan, Ge

    2005-02-01

    In this paper we consider the problem of designing a network of optical cross-connects (OXCs) to provide end-to-end lightpath services to label switched routers (LSRs). Like some previous work, we select the number of OXCs as our objective. Compared with the previous studies, we take into account the fault-tolerant characteristics of the logical topology. First, we generate a tree from a randomly generated Prüfer sequence. By adding edges to the tree, we obtain a physical topology consisting of a certain number of OXCs and the fiber links connecting them. Notably, we limit, for the first time, the number of layers of the tree produced by this method. We then design logical topologies based on these physical topologies. In principle, we select the shortest path while also considering link load balancing and shared risk link group (SRLG) constraints. We apply the routing algorithm to the nodes in increasing order of node degree. For wavelength assignment, we adopt a commonly used graph-coloring heuristic. The problem is clearly computationally intractable, especially when the network is large, so we adopt a tabu search algorithm to find a near-optimal solution for our objective. We present numerical results for up to 1000 LSRs and for a wide range of system parameters such as the number of wavelengths supported by each fiber link and the traffic load. The results indicate that it is possible to build large-scale optical networks with rich connectivity in a cost-effective manner, using relatively few but properly dimensioned OXCs.
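    The Prüfer-sequence step mentioned above can be sketched as follows: a random sequence of length n-2 over the node labels is decoded into the edge list of a labelled tree, which could then serve as the starting physical topology. The layer-limiting and edge-augmentation steps of the paper are omitted, and the node count used here is a hypothetical parameter.

```python
# A small sketch of decoding a random Prüfer sequence into a labelled tree
# (illustrative only; the paper's layer limit and added edges are not modelled).
import heapq
import random

def prufer_to_tree(seq, n):
    """Decode a Prüfer sequence of length n-2 into the n-1 edges of a tree on nodes 0..n-1."""
    degree = [1] * n
    for node in seq:
        degree[node] += 1
    # Min-heap of current leaves (degree 1).
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for node in seq:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, node))
        degree[node] -= 1
        if degree[node] == 1:
            heapq.heappush(leaves, node)
    # The last two remaining leaves are joined by the final edge.
    u, v = heapq.heappop(leaves), heapq.heappop(leaves)
    edges.append((u, v))
    return edges

if __name__ == "__main__":
    n = 8  # number of OXCs (hypothetical)
    seq = [random.randrange(n) for _ in range(n - 2)]
    print("Prufer sequence:", seq)
    print("Tree edges:", prufer_to_tree(seq, n))
```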

  6. Fault linkage and continental breakup

    NASA Astrophysics Data System (ADS)

    Cresswell, Derren; Lymer, Gaël; Reston, Tim; Stevenson, Carl; Bull, Jonathan; Sawyer, Dale; Morgan, Julia

    2017-04-01

    The magma-poor rifted margin off the west coast of Galicia (NW Spain) has provided some of the key observations in the development of models describing the final stages of rifting and continental breakup. In 2013, we collected a 68 x 20 km 3D seismic survey across the Galicia margin, NE Atlantic. Processing through to 3D Pre-stack Time Migration (12.5 m bin-size) and 3D depth conversion reveals the key structures, including an underlying detachment fault (the S detachment), and the intra-block and inter-block faults. These data reveal multiple phases of faulting, which overlap spatially and temporally and have thinned the crust to between zero and a few kilometres thickness, producing 'basement windows' where crustal basement has been completely pulled apart and sediments lie directly on the mantle. Two approximately N-S trending fault systems are observed: 1) a margin proximal system of two linked faults that are the upward extension (breakaway faults) of the S; in the south they form one surface that splays northward to form two faults with an intervening fault block. These faults were thus demonstrably active at one time rather than sequentially. 2) An oceanward relay structure that shows clear along strike linkage. Faults within the relay trend NE-SW and heavily dissect the basement. The main block bounding faults can be traced from the S detachment through the basement into, and heavily deforming, the syn-rift sediments where they die out, suggesting that the faults propagated up from the S detachment surface. Analysis of the fault heaves and associated maps at different structural levels shows complementary fault systems. The pattern of faulting suggests a variation in main tectonic transport direction moving oceanward. This might be interpreted as a temporal change during sequential faulting; however, the transfer of extension between faults and the lateral variability of fault blocks suggest that many of the faults across the 3D volume were active at least in part

  7. Bayesian updating in a fault tree model for shipwreck risk assessment.

    PubMed

    Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M

    2017-07-15

    Shipwrecks containing oil and other hazardous substances have been deteriorating on the seabeds of the world for many years and are threatening to pollute the marine environment. The status of the wrecks and the potential volume of harmful substances present in the wrecks are affected by a multitude of uncertainties. Each shipwreck poses a unique threat, the nature of which is determined by the structural status of the wreck and possible damage resulting from hazardous activities that could potentially cause a discharge. Decision support is required to ensure the efficiency of the prioritisation process and the allocation of resources required to carry out risk mitigation measures. Whilst risk assessments can provide the requisite decision support, comprehensive methods that take into account key uncertainties related to shipwrecks are limited. The aim of this paper was to develop a method for estimating the probability of discharge of hazardous substances from shipwrecks. The method is based on Bayesian updating of generic information on the hazards posed by different activities in the surroundings of the wreck, with information on site-specific and wreck-specific conditions in a fault tree model. Bayesian updating is performed using Monte Carlo simulations for estimating the probability of a discharge of hazardous substances and formal handling of intrinsic uncertainties. An example application involving two wrecks located off the Swedish coast is presented. Results show the estimated probability of opening, discharge and volume of the discharge for the two wrecks and illustrate the capability of the model to provide decision support. Together with consequence estimations of a discharge of hazardous substances, the suggested model enables comprehensive and probabilistic risk assessments of shipwrecks to be made. Copyright © 2017 Elsevier B.V. All rights reserved.
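    A hedged sketch of the general workflow is given below: generic (prior) probabilities for damage-causing activities are updated with hypothetical site- and wreck-specific observations via Beta-binomial conjugacy, and Monte Carlo samples are propagated through a toy OR/AND structure to estimate the probabilities of opening and discharge. The priors, evidence counts, and gate structure are illustrative assumptions and do not reproduce the authors' model.

```python
# Illustrative Bayesian-updating / Monte Carlo fault-tree sketch for wreck
# discharge risk. All priors, observations, and probabilities are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Generic priors on annual probabilities of damage-causing activities (Beta parameters).
priors = {"trawling": (2.0, 50.0), "anchoring": (1.0, 80.0), "corrosion_breach": (3.0, 40.0)}

# Hypothetical site-specific evidence: (observed damaging incidents, observation years).
site_evidence = {"trawling": (1, 10), "anchoring": (0, 10), "corrosion_breach": (2, 10)}

# Bayesian update: Beta(a, b) prior + binomial evidence -> Beta(a + k, b + n - k) posterior.
posterior_samples = {}
for event, (a, b) in priors.items():
    k, n = site_evidence[event]
    posterior_samples[event] = rng.beta(a + k, b + n - k, size=N)

# Toy fault tree: the wreck opens if any damaging activity occurs (OR gate),
# and a discharge also requires oil to still be present (AND with a fixed probability).
p_oil_present = 0.7
p_opening = 1.0 - np.prod([1.0 - posterior_samples[e] for e in priors], axis=0)
p_discharge = p_opening * p_oil_present

print(f"P(opening):   mean={p_opening.mean():.4f}, "
      f"95% interval=({np.percentile(p_opening, 2.5):.4f}, {np.percentile(p_opening, 97.5):.4f})")
print(f"P(discharge): mean={p_discharge.mean():.4f}")
```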

  8. The relationship of near-surface active faulting to megathrust splay fault geometry in Prince William Sound, Alaska

    NASA Astrophysics Data System (ADS)

    Finn, S.; Liberty, L. M.; Haeussler, P. J.; Northrup, C.; Pratt, T. L.

    2010-12-01

    We interpret regionally extensive, active faults beneath Prince William Sound (PWS), Alaska, to be structurally linked to deeper megathrust splay faults, such as the one that ruptured in the 1964 M9.2 earthquake. Western PWS in particular is unique; the locations of active faulting offer insights into the transition at the southern terminus of the previously subducted Yakutat slab to Pacific plate subduction. Newly acquired high-resolution, marine seismic data show three seismic facies related to Holocene and older Quaternary to Tertiary strata. These sediments are cut by numerous high-angle normal faults in the hanging wall of the megathrust splay faults. Crustal-scale seismic reflection profiles show splay faults emerging from 20 km depth between the Yakutat block and North American crust and surfacing as the Hanning Bay and Patton Bay faults. A distinct boundary beneath Hinchinbrook Entrance coincides with a systematic change in fault trend from N30E in southwestern PWS to N70E in northeastern PWS. The fault trend change underneath Hinchinbrook Entrance may occur gradually or abruptly, and there is evidence for similar deformation near the Montague Strait Entrance. Landward of surface expressions of the splay fault, we observe subsidence, faulting, and landslides that record deformation associated with the 1964 and older megathrust earthquakes. Surface exposures of Tertiary rocks throughout PWS along with new apatite-helium dates suggest long-term and regional uplift with localized, fault-controlled subsidence.

  9. A fault is born: The Landers-Mojave earthquake line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nur, A.; Ron, H.

    1993-04-01

    The epicenter and the southern portion of the 1992 Landers earthquake fell on an approximately N-S earthquake line, defined by both epicentral locations and by the rupture directions of four previous M>5 earthquakes in the Mojave: the 1947 Manix, 1975 Galway Lake, 1979 Homestead Valley, and 1992 Joshua Tree events. Another M 5.2 earthquake epicenter in 1965 fell on this line where it intersects the Calico fault. In contrast, the northern part of the Landers rupture followed the NW-SE trending Camp Rock and parallel faults, exhibiting an apparently unusual rupture kink. The block tectonic model (Ron et al., 1984), combining fault kinematics and mechanics, explains both the alignment of the events and their ruptures (Nur et al., 1986, 1989), as well as the Landers kink (Nur et al., 1992). Accordingly, the now NW-oriented faults have rotated into their present direction away from the direction of maximum shortening, close to becoming locked, whereas a new fault set, optimally oriented relative to the direction of shortening, is developing to accommodate current crustal deformation. The Mojave-Landers line may thus be a new fault in formation. During the transition of faulting from the old, well-developed but weak and poorly oriented faults to the strong but favorably oriented new ones, both can slip simultaneously, giving rise to kinks such as Landers.

  10. Slip triggered on southern California faults by the 1992 Joshua Tree, Landers, and big bear earthquakes

    USGS Publications Warehouse

    Bodin, Paul; Bilham, Roger; Behr, Jeff; Gomberg, Joan; Hudnut, Kenneth W.

    1994-01-01

    Five out of six functioning creepmeters on southern California faults recorded slip triggered at the time of some or all of the three largest events of the 1992 Landers earthquake sequence. Digital creep data indicate that dextral slip was triggered within 1 min of each mainshock and that maximum slip velocities occurred 2 to 3 min later. The duration of triggered slip events ranged from a few hours to several weeks. We note that triggered slip occurs commonly on faults that exhibit fault creep. To account for the observation that slip can be triggered repeatedly on a fault, we propose that the amplitude of triggered slip may be proportional to the depth of slip in the creep event and to the available near-surface tectonic strain that would otherwise eventually be released as fault creep. We advance the notion that seismic surface waves, perhaps amplified by sediments, generate transient local conditions that favor the release of tectonic strain to varying depths. Synthetic strain seismograms are presented that suggest increased pore pressure during periods of fault-normal contraction may be responsible for triggered slip, since maximum dextral shear strain transients correspond to times of maximum fault-normal contraction.

  11. Effect of Fault Parameter Uncertainties on PSHA explored by Monte Carlo Simulations: A case study for southern Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Akinci, A.; Pace, B.

    2017-12-01

    In this study, we discuss the variability of seismic hazard, expressed as peak ground acceleration (PGA) at a 475-year return period, in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground-motion predictions for 10% probability of exceedance in 50 years. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution defined by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of a logic tree, is used in order to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in the different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. The logic tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip, and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of mean hazard results to the consideration of
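    The parameter-sampling step can be illustrated as follows: fault length, width, and slip rate are drawn from truncated normal distributions and mapped to a maximum magnitude through a Wells-and-Coppersmith-style area regression, showing how parameter uncertainty propagates into the magnitude input of the hazard calculation. All means, standard deviations, bounds, and regression coefficients below are illustrative, and the sketch is not the authors' implementation.

```python
# Truncated-normal Monte Carlo sampling of fault parameters (illustrative values).
import numpy as np
from scipy.stats import truncnorm

def tnorm(mean, sd, lower, upper, size, rng):
    """Truncated normal sampler (scipy takes the bounds in standard-deviation units)."""
    a, b = (lower - mean) / sd, (upper - mean) / sd
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=size, random_state=rng)

rng = np.random.default_rng(1)
n = 200  # simulations, mirroring the 200 logic-tree samples per parameter in the study

length_km = tnorm(mean=30.0, sd=5.0, lower=20.0, upper=40.0, size=n, rng=rng)
width_km = tnorm(mean=12.0, sd=2.0, lower=8.0, upper=16.0, size=n, rng=rng)
slip_mm_yr = tnorm(mean=0.8, sd=0.3, lower=0.2, upper=1.5, size=n, rng=rng)

# Illustrative magnitude-area scaling, M = c0 + c1 * log10(rupture area in km^2).
c0, c1 = 4.07, 0.98
m_max = c0 + c1 * np.log10(length_km * width_km)

print(f"Mmax: mean={m_max.mean():.2f}, 2.5-97.5% range="
      f"({np.percentile(m_max, 2.5):.2f}, {np.percentile(m_max, 97.5):.2f})")
print(f"Slip-rate samples span {slip_mm_yr.min():.2f}-{slip_mm_yr.max():.2f} mm/yr")
```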

  12. Achieving Agreement in Three Rounds With Bounded-Byzantine Faults

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2015-01-01

    A three-round algorithm is presented that guarantees agreement in a system of K >= 3F + 1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport et al., is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
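    The paper's three-round algorithm itself is not reproduced here; the sketch below instead illustrates the classic Oral Message algorithm OM(m) of Lamport, Shostak, and Pease, on which it is based, including the recursive relay rounds and the final majority vote. The traitor behaviour (lying only when acting as a commander) and the node values are simplifying assumptions.

```python
# Classic OM(m) sketch with a simple traitor model (illustrative only).
from collections import Counter

def om(commander, lieutenants, value, m, traitors):
    """Return {lieutenant: decided value} after running OM(m)."""
    # Round 1: the commander sends its value; a traitorous commander may lie to some.
    sent = {l: ((1 - value) if (commander in traitors and i % 2 == 0) else value)
            for i, l in enumerate(lieutenants)}
    if m == 0:
        return dict(sent)
    # Each lieutenant relays the value it received by acting as commander in OM(m-1).
    relayed = {}
    for j in lieutenants:
        others = [x for x in lieutenants if x != j]
        relayed[j] = om(j, others, sent[j], m - 1, traitors)
    # Each lieutenant takes the majority of its direct value and the relayed values.
    decided = {}
    for i in lieutenants:
        votes = [sent[i]] + [relayed[j][i] for j in lieutenants if j != i]
        decided[i] = Counter(votes).most_common(1)[0][0]
    return decided

if __name__ == "__main__":
    # 4 nodes, 1 traitor (node 2): OM(1) lets the loyal lieutenants agree on the
    # loyal commander's value, consistent with the n >= 3f + 1 bound.
    result = om(commander=0, lieutenants=[1, 2, 3], value=1, m=1, traitors={2})
    print(result)  # lieutenants 1 and 3 (loyal) decide the same value
```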

  13. Evolution of triangular topographic facets along active normal faults

    NASA Astrophysics Data System (ADS)

    Balogun, A.; Dawers, N. H.; Gasparini, N. M.; Giachetta, E.

    2011-12-01

    Triangular-shaped facets, which are generally formed by the erosion of fault-bounded mountain ranges, are arguably among the most prominent geomorphic features on active normal fault scarps. Some previous studies of triangular facet development have suggested that facet size and slope exhibit a strong linear dependency on fault slip rate, thus linking their growth directly to the kinematics of fault initiation and linkage. Other studies, however, generally conclude that there is no variation in triangular facet geometry (height and slope) with fault slip rate. The landscape of the northeastern Basin and Range Province of the western United States provides an opportunity for addressing this problem. This is due to the presence of well developed triangular facets along active normal faults, as well as spatial variations in fault scale and slip rate. In addition, the Holocene climatic record for this region suggests a dominant tectonic regime, as the faulted landscape shows little evidence of precipitation gradients associated with tectonic uplift. Using GIS-based analyses of USGS 30 m digital elevation data (DEMs) for east-central Idaho and southwestern Montana, we analyze triangular facet geometries along fault systems of varying numbers of constituent segments. This approach allows us to link these geometries with established patterns of along-strike slip rate variation. For this study, we consider major watersheds to include only catchments with upstream and downstream boundaries extending from the drainage divide to the mapped fault trace, respectively. In order to maintain consistency in the selection criteria for the analyzed triangular facets, only facets bounded on opposite sides by major watersheds were considered. Our preliminary observations reflect a general along-strike increase in the surface area, average slope, and relief of triangular facets from the tips of the fault towards the center. We attribute anomalies in the along-strike geometric

  14. Fault detection and diagnosis of induction motors using motor current signature analysis and a hybrid FMM-CART model.

    PubMed

    Seera, Manjeevan; Lim, Chee Peng; Ishak, Dahaman; Singh, Harapajan

    2012-01-01

    In this paper, a novel approach to detect and classify comprehensive fault conditions of induction motors using a hybrid fuzzy min-max (FMM) neural network and classification and regression tree (CART) is proposed. The hybrid model, known as FMM-CART, exploits the advantages of both FMM and CART for undertaking data classification and rule extraction problems. A series of real experiments is conducted, whereby the motor current signature analysis method is applied to form a database comprising stator current signatures under different motor conditions. The signal harmonics from the power spectral density are extracted as discriminative input features for fault detection and classification with FMM-CART. A comprehensive list of induction motor fault conditions, viz., broken rotor bars, unbalanced voltages, stator winding faults, and eccentricity problems, has been successfully classified using FMM-CART with good accuracy rates. The results are comparable, if not better, than those reported in the literature. Useful explanatory rules in the form of a decision tree are also elicited from FMM-CART to analyze and understand different fault conditions of induction motors.
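    The feature-extraction and tree-classification stages described above can be sketched with simulated data: harmonic amplitudes are read off the power spectral density of a synthetic stator-current signal and fed to a CART classifier. The fuzzy min-max stage of FMM-CART is omitted, and the sampling rate, fault signature, and frequency bins are illustrative assumptions rather than the authors' experimental setup.

```python
# Motor current signature analysis sketch: PSD harmonic features + CART classifier.
# All signals, fault signatures, and parameters are simulated and illustrative.
import numpy as np
from scipy.signal import welch
from sklearn.tree import DecisionTreeClassifier

FS, F0 = 5000.0, 50.0  # sampling rate and supply frequency (Hz), illustrative

def simulate_current(fault, rng, n=4096):
    """Healthy current is a 50 Hz sine; 'broken_bar' adds sidebands around 50 Hz."""
    t = np.arange(n) / FS
    x = np.sin(2 * np.pi * F0 * t) + 0.05 * rng.standard_normal(n)
    if fault == "broken_bar":
        x += 0.2 * np.sin(2 * np.pi * (F0 - 6) * t) + 0.2 * np.sin(2 * np.pi * (F0 + 6) * t)
    return x

def harmonic_features(x):
    """PSD amplitudes at the supply frequency and nearby sideband/harmonic bins."""
    f, pxx = welch(x, fs=FS, nperseg=2048)
    targets = [F0 - 6, F0, F0 + 6, 3 * F0, 5 * F0]
    return [pxx[np.argmin(np.abs(f - ft))] for ft in targets]

rng = np.random.default_rng(0)
X, y = [], []
for label in ["healthy", "broken_bar"]:
    for _ in range(50):
        X.append(harmonic_features(simulate_current(label, rng)))
        y.append(label)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
test = harmonic_features(simulate_current("broken_bar", rng))
print("Predicted condition:", clf.predict([test])[0])
```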

  15. Fault zone structure and fluid-rock interaction of a high angle normal fault in Carrara marble (NW Tuscany, Italy)

    NASA Astrophysics Data System (ADS)

    Molli, G.; Cortecci, G.; Vaselli, L.; Ottria, G.; Cortopassi, A.; Dinelli, E.; Mussi, M.; Barbieri, M.

    2010-09-01

    We studied the geometry, intensity of deformation and fluid-rock interaction of a high angle normal fault within Carrara marble in the Alpi Apuane, NW Tuscany, Italy. The fault comprises a core bounded by two major, non-parallel slip surfaces. The fault core, marked by crush breccia and cataclasites, asymmetrically grades to the host protolith through a damage zone, which is well developed only in the footwall block. On the contrary, the transition from the fault core to the hangingwall protolith is sharply defined by the upper main slip surface. Faulting was associated with fluid-rock interaction, as evidenced by kinematically related veins observable in the damage zone and fluid channelling within the fault core, where an orange-brownish cataclasite matrix can be observed. A chemical and isotopic study of veins and different structural elements of the fault zone (protolith, damage zone and fault core), including a mathematical model, was performed to document the type, role, and activity of fluid-rock interactions during deformation. The results of our studies suggest that the deformation pattern was mainly controlled by processes associated with a linking damage zone at a fault tip, the development of a fault core, and the localization and channelling of fluids within the fault zone. Syn-kinematic microstructural modification of the calcite microfabric possibly played a role in confining fluid percolation.

  16. The implication of gouge mineralogy evolution on fault creep: an example from The North Anatolian Fault, Turkey

    NASA Astrophysics Data System (ADS)

    Kaduri, M.; Gratier, J. P.; Renard, F.; Cakir, Z.; Lasserre, C.

    2015-12-01

    Aseismic creep is found along several sections of major active faults at shallow depth, such as the North Anatolian Fault in Turkey, the San Andreas Fault in California (USA), the Longitudinal Valley Fault in Taiwan, the Haiyuan fault in China and the El Pilar Fault in Venezuela. Identifying the mechanisms controlling creep and their evolution with time and space represents a major challenge for predicting the mechanical evolution of active faults, the interplay between creep and earthquakes, and the link between short-term observations from geodesy and the geological setting. Hence, studying the evolution of initial rock into damaged rock, then into gouge, is one of the key questions for understanding the origin of fault creep. In order to address this question we collected samples from a dozen well-preserved fault outcrops along creeping and locked sections of the North Anatolian Fault. We used various methods, such as microscopic and geological observations and EPMA and XRD analyses combined with image processing, to characterize their mineralogy and strain. We conclude that (1) there is a clear correlation between creep localization and gouge composition. The locked sections of the fault are mostly composed of massive limestone. The creeping sections comprise clay gouges with 40-80% low-friction minerals, such as smectite, saponite and kaolinite, which facilitate creep. (2) The fault gouge shows two main structures that evolve with displacement: anastomosing cleavage develops during the first stage of displacement; increasing displacement leads to the development of layering oblique or sub-parallel to the fault. (3) We demonstrate that the fault gouge results from a progressive evolution of the initial volcanic rocks, including dissolution of soluble species that move at least partially toward the damage zones, and alteration by fluid flow that weakens the gouge and strengthens the damage zone.

  17. Tectono-stratigraphic evolution of normal fault zones: Thal Fault Zone, Suez Rift, Egypt

    NASA Astrophysics Data System (ADS)

    Leppard, Christopher William

    propagation and early linkage of the precursor fault strands at depth before the fault segment broke surface, followed by the accumulation of displacement on the linked fault segment with minimal lateral propagation. This style of fault growth contrasts with conventional fault growth models, in which growth occurs through incremental increases in both displacement and length through time. The evolution of normal fault populations and fault zones exerts a first-order control on basin physiography and sediment supply, and therefore, the architecture and distribution of coeval syn-rift stratigraphy. The early syn-rift continental Abu Zenima Formation and shallow-marine Nukhul Formation show a pronounced westward increase in thickness, controlled by a series of synthetic and antithetic faults up to 3 km west of the present-day Thal fault. The orientation of these faults controlled the location of fluvial conglomerates, sandstones and mudstones that shifted into the topographic lows created. The progressive localisation of displacement onto the Sarbut El Gamal fault segment during the rift climax resulted in an overall change in basin geometry. Accelerated subsidence outpaced sedimentation, resulting in the development of a marine, sediment-starved, underfilled hangingwall depocentre characterised by slope-to-basinal depositional environments, with a laterally continuous slope apron in the immediate hangingwall and point-sourced submarine fans. Controls on the spatial distribution, three dimensional architecture, and facies stacking patterns of coeval syn-rift deposits are identified as: i) the structural style of the evolution and linkage of normal fault populations, ii) basin physiography, iii) the evolution of drainage catchments, iv) bedrock lithology, and v) variations in sea/lake level.

  18. Fault tolerance in space-based digital signal processing and switching systems: Protecting up-link processing resources, demultiplexer, demodulator, and decoder

    NASA Technical Reports Server (NTRS)

    Redinbo, Robert

    1994-01-01

    Fault tolerance features in the first three major subsystems appearing in the next generation of communications satellites are described. These satellites will contain extensive but efficient high-speed processing and switching capabilities to support the low signal strengths associated with very small aperture terminals. The terminals' numerous data channels are combined through frequency division multiplexing (FDM) on the up-links and are protected individually by forward error-correcting (FEC) binary convolutional codes. The front-end processing resources, demultiplexer, demodulators, and FEC decoders extract all data channels which are then switched individually, multiplexed, and remodulated before retransmission to earth terminals through narrow beam spot antennas. Algorithm based fault tolerance (ABFT) techniques, which relate real number parity values with data flows and operations, are used to protect the data processing operations. The additional checking features utilize resources that can be substituted for normal processing elements when resource reconfiguration is required to replace a failed unit.

  19. Seismic Hazard and Fault Length

    NASA Astrophysics Data System (ADS)

    Black, N. M.; Jackson, D. D.; Mualchin, L.

    2005-12-01

    If mx is the largest earthquake magnitude that can occur on a fault, then what is mp, the largest magnitude that should be expected during the planned lifetime of a particular structure? Most approaches to these questions rely on an estimate of the Maximum Credible Earthquake, obtained by regression (e.g. Wells and Coppersmith, 1994) of fault length (or area) and magnitude. Our work differs in two ways. First, we modify the traditional approach to measuring fault length, to allow for hidden fault complexity and multi-fault rupture. Second, we use a magnitude-frequency relationship to calculate the largest magnitude expected to occur within a given time interval. Often fault length is poorly defined and multiple faults rupture together in a single event. Therefore, we need to expand the definition of a mapped fault length to obtain a more accurate estimate of the maximum magnitude. In previous work, we compared fault length vs. rupture length for post-1975 earthquakes in Southern California. In that study, we found that mapped fault length and rupture length are often unequal, and in several cases rupture broke beyond the previously mapped fault traces. To expand the geologic definition of fault length we outlined several guidelines: 1) if a fault truncates at young Quaternary alluvium, the fault line should be inferred underneath the younger sediments; 2) faults striking within 45° of one another should be treated as a continuous fault line; and 3) a step-over can link together faults at least 5 km apart. These definitions were applied to fault lines in Southern California. For example, many of the along-strike fault lines in the Mojave Desert are treated as a single fault trending from the Pinto Mountain fault to the Garlock fault. In addition, the Rose Canyon and Newport-Inglewood faults are treated as a single fault line. We used these more generous fault lengths, and the Wells and Coppersmith regression, to estimate the maximum magnitude (mx) for the major faults in
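    The two quantities discussed above can be made concrete with a short worked sketch: a length-magnitude regression gives mx for a mapped versus an extended fault length, and a Gutenberg-Richter rate gives the magnitude mp expected about once during a planned lifetime. The regression coefficients follow the commonly quoted Wells-and-Coppersmith-style form for surface rupture length, and the a- and b-values and lengths used here are illustrative assumptions.

```python
# Worked sketch of mx (from fault length) and mp (from a magnitude-frequency rate).
import math

def magnitude_from_length(length_km, c0=5.08, c1=1.16):
    """Regression magnitude from (possibly extended) fault length in km."""
    return c0 + c1 * math.log10(length_km)

def expected_max_magnitude(a_value, b_value, years, m_cap):
    """Largest magnitude expected about once in `years`, capped at the fault's maximum.

    From the Gutenberg-Richter relation N(>=m) = 10**(a - b*m) events per year,
    the magnitude expected about once in T years solves N(>=m) * T = 1.
    """
    m_p = (a_value + math.log10(years)) / b_value
    return min(m_p, m_cap)

if __name__ == "__main__":
    # Treating two 40 km mapped traces linked by a step-over as one 80 km fault
    # raises the regression magnitude (lengths are illustrative):
    print(f"mx for 40 km: {magnitude_from_length(40):.2f}")
    print(f"mx for 80 km: {magnitude_from_length(80):.2f}")
    # Largest magnitude expected during a 50-year structure lifetime:
    m_x = magnitude_from_length(80)
    print(f"mp over 50 yr: {expected_max_magnitude(a_value=3.0, b_value=1.0, years=50, m_cap=m_x):.2f}")
```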

  20. Linking fault pattern with groundwater flow in crystalline rocks at the Grimsel Test Site (Switzerland)

    NASA Astrophysics Data System (ADS)

    Schneeberger, Raphael; Berger, Alfons; Mäder, Urs K.; Niklaus Waber, H.; Kober, Florian; Herwegh, Marco

    2017-04-01

    Water flow across crystalline bedrock is of major interest for deep-seated geothermal energy projects as well as for underground disposal of radioactive waste. In crystalline rocks enhanced fluid flow is related to zones of increased permeability, i.e. to fractures that are associated with fault zones. The flow regime around the Grimsel Test Site (GTS, Central Aar massif) was assessed by establishing a 3D fault zone pattern on a local scale in the GTS underground facility (decameter scale) and on a regional scale at the surface (km scale). The study reveals a dense fault zone network consisting of sub-vertically oriented major faults, several kilometers long and a few tens of centimeters to meters wide, that are connected by minor bridging faults tens to hundreds of meters long. This geometrical information was used as input for the generation of a 3D fault zone network model. The faults originate from ductile shear zones that were reactivated as brittle faults under retrograde conditions during exhumation. Embrittlement and associated dilatancy along the faults provide the pathways for today's groundwater flow. Detection of the actual 3D flow paths is, however, challenging since flow seems to be not planar but rather tube-like. Two strategies are applied to constrain the 3D geometry of the flow tubes: (i) characterization of the groundwater infiltrating into the GTS (location, yield, hydraulic head, and chemical composition) and (ii) stress modelling on the basis of the 3D structural model to unravel potential domains of enhanced fluid flow such as fault plane intersections and domains of dilatancy. At the Grimsel Test Site, hydraulic and structural data demonstrate that the groundwater flow is head-driven from the surface towards the GTS, located some 450 m below the surface. The residence time of the groundwater in this near-surface section is >60 years, as evidenced by the absence of detectable tritium. However, hydraulic heads obtained from interval pressure measurements

  1. Field-based Digital Mapping of the November 3, 2002 Susitna Glacier Fault Rupture - Integrating remotely sensed data, GIS, and photo-linking technologies

    NASA Astrophysics Data System (ADS)

    Staft, L. A.; Craw, P. A.

    2003-12-01

    In July 2003, the U.S. Geological Survey and the Alaska Division of Geological & Geophysical Surveys (DGGS) conducted field studies on the Susitna Glacier Fault (SGF), which ruptured in November 2002 during the M 7.9 Denali fault earthquake. The DGGS assumed responsibility for Geographic Information System (GIS) and data management, integrating remotely sensed imagery, GPS data, GIS, and photo-linking software to aid in planning and documentation of fieldwork. Pre-field preparation included acquisition of over 150 1:6,000-scale true-color aerial photographs taken shortly after the SGF rupture, 1:63,360-scale color-infrared (CIR) 1980 aerial photographs, and digital geographic information including a 15-minute Digital Elevation Model (DEM), 1:63,360-scale Digital Raster Graphics (DRG), and LandSat 7 satellite imagery. Using Orthomapper software, we orthorectified and mosaiced seven CIRs, creating a georeferenced, digital photo base of the study area. We used this base to reference the 1:6,000-scale aerial photography, to view locations of field sites downloaded from GPS, and to locate linked digital photographs that were taken in the field. Photos were linked using GPS-Photo Link software which "links" digital photographs to GPS data by correlating time stamps from the GPS track log or waypoint file to those of the digital photos, using the correlated point data to create a photo location ESRI shape file. When this file is opened in ArcMap or ArcView with the GPS-Photo Link utility enabled, a thumbnail image of the linked photo appears when the cursor is over the photo location. Viewing photographed features and scarp-profile locations in GIS allowed us to evaluate data coverage of the rupture daily. Using remotely sensed imagery in the field with GIS gave us the versatility to display data on a variety of bases, including topographic maps, air photos, and satellite imagery, during fieldwork. In the field, we downloaded, processed, and reviewed data as it was

  2. An Intelligent Gear Fault Diagnosis Methodology Using a Complex Wavelet Enhanced Convolutional Neural Network

    PubMed Central

    Sun, Weifang; Yao, Bin; Zeng, Nianyin; He, Yuchao; Cao, Xincheng; He, Wangpeng

    2017-01-01

    As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction is generated in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering contents. Therefore, identifying the critical fault’s characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault’s characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal’s features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experiment results of the recognition for gear faults show the feasibility and effectiveness of the proposed method, especially in the gear’s weak fault features. PMID:28773148

  3. Achieving Agreement in Three Rounds with Bounded-Byzantine Faults

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2017-01-01

    A three-round algorithm is presented that guarantees agreement in a system of K >= 3F+1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport, Shostak, and Pease, is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.

  4. The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications

    NASA Technical Reports Server (NTRS)

    Chau, Savio N.; Alkalai, Leon; Tai, Ann T.

    2000-01-01

    The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.

  5. Data-driven simultaneous fault diagnosis for solid oxide fuel cell system using multi-label pattern identification

    NASA Astrophysics Data System (ADS)

    Li, Shuanghong; Cao, Hongliang; Yang, Yupu

    2018-02-01

    Fault diagnosis is a key process for the reliability and safety of solid oxide fuel cell (SOFC) systems. However, it is difficult to rapidly and accurately identify faults in complicated SOFC systems, especially when simultaneous faults appear. In this research, a data-driven Multi-Label (ML) pattern identification approach is proposed to address the simultaneous fault diagnosis of SOFC systems. The framework of the simultaneous-fault diagnosis primarily includes two components: feature extraction and an ML-SVM classifier. The approach can be trained to diagnose simultaneous SOFC faults, such as fuel leakage and air leakage at different positions in the SOFC system, using training data sets that consist only of single faults, without requiring simultaneous-fault data. The experimental results show that the proposed framework can diagnose simultaneous SOFC system faults with high accuracy while requiring only a small amount of training data and a low computational burden. In addition, Fault Inference Tree Analysis (FITA) is employed to identify the correlations among possible faults and their corresponding symptoms at the system component level.
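    A minimal sketch of the multi-label idea, on synthetic data, is given below: per-fault classifiers are trained only on normal and single-fault examples, and a simultaneous fault is reported when several labels fire at once. The feature construction, fault names, and class separations are illustrative assumptions, and an off-the-shelf one-vs-rest SVM stands in for the paper's ML-SVM classifier.

```python
# Multi-label fault diagnosis sketch trained only on single-fault data (illustrative).
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import SVC

rng = np.random.default_rng(0)
FAULTS = ["fuel_leak", "air_leak"]

def sample(active, n):
    """Synthetic 3-feature operating point; each fault shifts one feature."""
    x = rng.normal(0.0, 0.3, size=(n, 3))
    if "fuel_leak" in active:
        x[:, 0] += 2.0   # e.g. a fuel-utilisation deviation (hypothetical feature)
    if "air_leak" in active:
        x[:, 1] += 2.0   # e.g. an air-excess-ratio deviation (hypothetical feature)
    return x

# Training set: normal operation and single faults only (no simultaneous faults).
X = np.vstack([sample([], 100), sample(["fuel_leak"], 100), sample(["air_leak"], 100)])
Y = [[]] * 100 + [["fuel_leak"]] * 100 + [["air_leak"]] * 100

mlb = MultiLabelBinarizer(classes=FAULTS)
clf = OneVsRestClassifier(SVC(kernel="linear")).fit(X, mlb.fit_transform(Y))

# Test on a simultaneous fault the model never saw during training.
pred = clf.predict(sample(["fuel_leak", "air_leak"], 1))
print("Diagnosed faults:", mlb.inverse_transform(pred)[0])
```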

  6. Heterogeneity in the Fault Damage Zone: a Field Study on the Borrego Fault, B.C., Mexico

    NASA Astrophysics Data System (ADS)

    Ostermeijer, G.; Mitchell, T. M.; Dorsey, M. T.; Browning, J.; Rockwell, T. K.; Aben, F. M.; Fletcher, J. M.; Brantut, N.

    2017-12-01

    The nature and distribution of damage around faults, and its impacts on fault zone properties, has been a hot topic of research over the past decade. Understanding the mechanisms that control the formation of off-fault damage can shed light on the processes during the seismic cycle and the nature of fault zone development. Recently published work has identified three broad zones of damage around most faults based on the type, intensity, and extent of fracturing: tip, wall, and linking damage. Although these zones are able to adequately characterise the general distribution of damage, little has been done to identify the nature of damage heterogeneity within those zones, often simplifying the distribution to fit log-normal linear decay trends. Here, we attempt to characterise the distribution of fractures that make up the wall damage around seismogenic faults. To do so, we investigate an extensive two-dimensional fracture network exposed on a river-cut platform along the Borrego Fault, BC, Mexico, 5 m wide and extending 20 m from the fault core into the damage zone. High-resolution fracture mapping of the outcrop, covering scales spanning three orders of magnitude (cm to m), has allowed for detailed observations of the 2D damage distribution within the fault damage zone. Damage profiles were obtained along several 1D transects perpendicular to the fault, and micro-damage was examined from thin-sections at various locations around the outcrop for comparison. Analysis of the resulting fracture network indicates heterogeneities in damage intensity at decimetre scales resulting from a patchy distribution of high and low intensity corridors and clusters. Such patchiness may contribute to inconsistencies in damage zone widths defined along 1D transects and the observed variability of fracture densities around decay trends. How this distribution develops with fault maturity and the scaling of heterogeneities above and below the observed range will likely play a key role in

  7. The Inference of Gene Trees with Species Trees

    PubMed Central

    Szöllősi, Gergely J.; Tannier, Eric; Daubin, Vincent; Boussau, Bastien

    2015-01-01

    This article reviews the various models that have been used to describe the relationships between gene trees and species trees. Molecular phylogeny has focused mainly on improving models for the reconstruction of gene trees based on sequence alignments. Yet, most phylogeneticists seek to reveal the history of species. Although the histories of genes and species are tightly linked, they are seldom identical, because genes duplicate, are lost or horizontally transferred, and because alleles can coexist in populations for periods that may span several speciation events. Building models describing the relationship between gene and species trees can thus improve the reconstruction of gene trees when a species tree is known, and vice versa. Several approaches have been proposed to solve the problem in one direction or the other, but in general neither gene trees nor species trees are known. Only a few studies have attempted to jointly infer gene trees and species trees. These models account for gene duplication and loss, transfer or incomplete lineage sorting. Some of them consider several types of events together, but none exists currently that considers the full repertoire of processes that generate gene trees along the species tree. Simulations as well as empirical studies on genomic data show that combining gene tree–species tree models with models of sequence evolution improves gene tree reconstruction. In turn, these better gene trees provide a more reliable basis for studying genome evolution or reconstructing ancestral chromosomes and ancestral gene sequences. We predict that gene tree–species tree methods that can deal with genomic data sets will be instrumental to advancing our understanding of genomic evolution. PMID:25070970

  8. The western limits of the Seattle fault zone and its interaction with the Olympic Peninsula, Washington

    USGS Publications Warehouse

    A.P. Lamb,; L.M. Liberty,; Blakely, Richard J.; Pratt, Thomas L.; Sherrod, B.L.; Van Wijk, K.

    2012-01-01

    We present evidence that the Seattle fault zone of Washington State extends to the west edge of the Puget Lowland and is kinematically linked to active faults that border the Olympic Massif, including the Saddle Mountain deformation zone. Newly acquired high-resolution seismic reflection and marine magnetic data suggest that the Seattle fault zone extends west beyond the Seattle Basin to form a >100-km-long active fault zone. We provide evidence for a strain transfer zone, expressed as a broad set of faults and folds connecting the Seattle and Saddle Mountain deformation zones near Hood Canal. This connection provides an explanation for the apparent synchroneity of M7 earthquakes on the two fault systems ~1100 yr ago. We redefine the boundary of the Tacoma Basin to include the previously termed Dewatto basin and show that the Tacoma fault, the southern part of which is a backthrust of the Seattle fault zone, links with a previously unidentified fault along the western margin of the Seattle uplift. We model this north-south fault, termed the Dewatto fault, along the western margin of the Seattle uplift as a low-angle thrust that initiated with exhumation of the Olympic Massif and today accommodates north-directed motion. The Tacoma and Dewatto faults likely control both the southern and western boundaries of the Seattle uplift. The inferred strain transfer zone linking the Seattle fault zone and Saddle Mountain deformation zone defines the northern margin of the Tacoma Basin, and the Saddle Mountain deformation zone forms the northwestern boundary of the Tacoma Basin. Our observations and model suggest that the western portions of the Seattle fault zone and Tacoma fault are complex, require temporal variations in principal strain directions, and cannot be modeled as a simple thrust and/or backthrust system.

  9. Fault diagnosis of helical gearbox using acoustic signal and wavelets

    NASA Astrophysics Data System (ADS)

    Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.

    2017-05-01

    Efficient transmission of power in machines is needed, and gears are an appropriate choice. Faults in gears result in loss of energy and money. Monitoring and fault diagnosis are done by analysing the acoustic and vibration signals, which are generally considered to be unwanted by-products. This study proposes the use of a machine learning algorithm for condition monitoring of a helical gearbox using the sound signals produced by the gearbox. Artificial faults were created and the resulting signals were captured by a microphone. An extensive study of different wavelet transformations for feature extraction from the acoustic signals was carried out, followed by wavelet selection and feature selection using a J48 decision tree; feature classification was then performed using the K-star algorithm. A classification accuracy of 100% was obtained in the study.
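
    The pipeline described above (wavelet feature extraction, decision-tree-based feature selection, then classification) can be sketched as follows. This is a minimal illustration and not the authors' code: pywt supplies the wavelet decomposition, a scikit-learn DecisionTreeClassifier stands in for J48, a nearest-neighbour classifier stands in for K-star, and the arrays `signals` and `y` are assumed to hold the recorded acoustic clips and their fault classes.

      # Sketch: wavelet sub-band energies -> tree-based feature selection -> classification.
      import numpy as np
      import pywt
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      def wavelet_features(signal, wavelet="db4", level=4):
          """Energy of each wavelet sub-band as a feature vector."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          return np.array([np.sum(c ** 2) for c in coeffs])

      def diagnose(signals, y, keep=4):
          X = np.array([wavelet_features(s) for s in signals])
          # Feature selection: keep the sub-bands the tree finds most informative.
          tree = DecisionTreeClassifier(random_state=0).fit(X, y)
          top = np.argsort(tree.feature_importances_)[-keep:]
          clf = KNeighborsClassifier(n_neighbors=3)   # stand-in for the K-star classifier
          return cross_val_score(clf, X[:, top], y, cv=5).mean()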

  10. A fault diagnosis scheme for planetary gearboxes using adaptive multi-scale morphology filter and modified hierarchical permutation entropy

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang

    2018-05-01

    Fault diagnosis of planetary gearboxes is crucial to reducing maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on an adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove the fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into a binary tree support vector machine (BT-SVM) to accomplish fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
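
    As an illustration of the kind of feature MHPE builds on, the following is a minimal sketch of ordinary permutation entropy for a single scale; the hierarchical and modified aspects of MHPE, the AMMF denoising, and the BT-SVM classifier described in the abstract are not reproduced here, and the function name is chosen for this sketch only.

      # Minimal permutation entropy (the building block generalized by MHPE).
      import itertools
      import math
      import numpy as np

      def permutation_entropy(x, order=3, delay=1):
          """Normalized permutation entropy of a 1-D signal (0 = regular, 1 = random)."""
          x = np.asarray(x)
          counts = dict.fromkeys(itertools.permutations(range(order)), 0)
          n = len(x) - (order - 1) * delay
          for i in range(n):
              window = x[i:i + order * delay:delay]
              counts[tuple(np.argsort(window))] += 1      # ordinal pattern of the window
          probs = np.array([c for c in counts.values() if c > 0], float) / n
          return -np.sum(probs * np.log(probs)) / math.log(math.factorial(order))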

  11. Redundancy management for efficient fault recovery in NASA's distributed computing system

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

    The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.

  12. Dating faults by quantifying shear heating

    NASA Astrophysics Data System (ADS)

    Maino, Matteo; Casini, Leonardo; Langone, Antonio; Oggiano, Giacomo; Seno, Silvio; Stuart, Finlay

    2017-04-01

    Dating brittle and brittle-ductile faults is crucial for developing seismic models and for understanding the geological evolution of a region. Improving the geochronological approaches to absolute fault dating, and their accuracy, is therefore a key objective for the geological community. Direct dating of ancient faults may be attained by exploiting the thermal effects associated with deformation. Heat generated during faulting - i.e. shear heating - is perhaps the best signal that provides a link between time and activity of a fault. However, other mechanisms not instantaneously related to fault motion can also generate heating (advection, upwelling of hot fluids), making it difficult to determine whether the thermal signal corresponds to the timing of fault movement. Recognizing the contribution of shear heating is a fundamental prerequisite for dating fault motion through thermochronometric techniques; therefore, a comprehensive thermal characterization of the fault zone is needed. Several methods have been proposed to assess radiometric ages of faulting from either newly grown crystals on fault gouges or surfaces (e.g. Ar/Ar dating), or thermochronometric resetting of existing minerals (e.g. zircon and apatite fission tracks). In this contribution we show two cases of brittle and brittle-ductile faulting, one shallow thrust from the SW Alps and one HT, pseudotachylite-bearing fault zone in Sardinia. We applied, in both examples, a multidisciplinary approach that integrates field and micro-structural observations, petrographical characterization, geochemical and mineralogical analyses, fluid inclusion microthermometry and numerical modeling with thermochronometric dating of the two fault zones. We used zircon (U-Th)/He thermochronometry to estimate the temperatures experienced by the shallow Alpine thrust. The ZHe thermochronometer has a closure temperature (Tc) of 180°C. Consequently, it is ideally suited to dating large heat-producing faults that were
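
    The order of magnitude of shear heating can be illustrated with a standard adiabatic estimate (heat conduction neglected), delta_T ~ tau * d / (rho * c * w), where tau is the shear stress, d the displacement, w the thickness of the deforming zone, rho the rock density and c its specific heat capacity. The numbers below are illustrative assumptions, not values from this study, and the real temperature rise is capped by conduction and by melting.

      # Adiabatic shear-heating estimate: delta_T = tau * slip / (rho * c * width).
      def shear_heating_dT(tau_pa, slip_m, width_m, rho=2700.0, c=1000.0):
          """Temperature rise (K) for slip distributed across a zone of given width."""
          return tau_pa * slip_m / (rho * c * width_m)

      # Illustrative numbers: 50 MPa shear stress, 1 m of slip, 1 cm thick slip zone.
      print(shear_heating_dT(50e6, 1.0, 0.01))   # ~1850 K upper bound -> easily resets ZHe (Tc ~180 degC)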

  13. Strike-slip fault propagation and linkage via work optimization with application to the San Jacinto fault, California

    NASA Astrophysics Data System (ADS)

    Madden, E. H.; McBeck, J.; Cooke, M. L.

    2013-12-01

    Over multiple earthquake cycles, strike-slip faults link to form through-going structures, as demonstrated by the continuous nature of the mature San Andreas fault system in California relative to the younger and more segmented San Jacinto fault system nearby. Despite its immaturity, the San Jacinto system accommodates between one third and one half of the slip along the boundary between the North American and Pacific plates. It therefore poses a significant seismic threat to southern California. Better understanding of how the San Jacinto system has evolved over geologic time and of current interactions between faults within the system is critical to assessing this seismic hazard accurately. Numerical models are well suited to simulating kilometer-scale processes, but models of fault system development are challenged by the multiple physical mechanisms involved. For example, laboratory experiments on brittle materials show that faults propagate and eventually join (hard-linkage) by both opening-mode and shear failure. In addition, faults interact prior to linkage through stress transfer (soft-linkage). The new algorithm GROW (GRowth by Optimization of Work) accounts for this complex array of behaviors by taking a global approach to fault propagation while adhering to the principles of linear elastic fracture mechanics. This makes GROW a powerful tool for studying fault interactions and fault system development over geologic time. In GROW, faults evolve to minimize the work (or energy) expended during deformation, thereby maximizing the mechanical efficiency of the entire system. Furthermore, the incorporation of both static and dynamic friction allows GROW models to capture fault slip and fault propagation in single earthquakes as well as over consecutive earthquake cycles. GROW models with idealized faults reveal that the initial fault spacing and the applied stress orientation control fault linkage propensity and linkage patterns. These models allow the gains in
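
    The growth-by-work-optimization idea can be caricatured as a greedy loop: at each step, trial extensions of the fault tip are scored by the work expended by the deformed system, and the most efficient increment is kept. The sketch below is schematic only; `external_work` is a hypothetical stand-in for the boundary-element evaluation a code like GROW actually performs, and all numbers are invented.

      # Schematic greedy fault growth by work minimization (stand-in energy function).
      import math

      def external_work(fault_pts):
          """Hypothetical placeholder for the mechanical-efficiency evaluation."""
          # Here: penalize deviation of the newest segment from the remote shear direction (x-axis).
          (x0, y0), (x1, y1) = fault_pts[-2], fault_pts[-1]
          return abs(math.atan2(y1 - y0, x1 - x0))

      def grow(fault_pts, n_steps=5, step=1.0, angles=range(-60, 61, 10)):
          for _ in range(n_steps):
              x, y = fault_pts[-1]
              trials = [fault_pts + [(x + step * math.cos(math.radians(a)),
                                      y + step * math.sin(math.radians(a)))]
                        for a in angles]
              fault_pts = min(trials, key=external_work)   # keep the most efficient increment
          return fault_pts

      print(grow([(0.0, 0.0), (1.0, 0.0)]))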

  14. Fault-tolerant Control of a Cyber-physical System

    NASA Astrophysics Data System (ADS)

    Roxana, Rusu-Both; Eva-Henrietta, Dulf

    2017-10-01

    Cyber-physical systems represent a new, emerging field in automatic control. Fault handling is a key component, because modern, large-scale processes must meet high standards of performance, reliability and safety. Fault propagation in large-scale chemical processes can lead to loss of production, energy and raw materials, and even to environmental hazards. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional-order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault-tolerant control system, while the operational model of the process was implemented in the Matlab/SIMULINK environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. In order to verify the performance and prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.
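
    A much-simplified illustration of the fault-tolerant idea (not the JADE/Matlab architecture of the paper) is a supervisor that watches the control error and switches from the nominal controller to a conservative backup when a fault is flagged. All names, gains and thresholds below are illustrative assumptions, and `plant_step` is a user-supplied process model.

      # Toy fault-tolerant control loop: switch controllers when the residual grows too large.
      def pi_controller(kp, ki):
          integral = 0.0
          def law(error, dt):
              nonlocal integral
              integral += error * dt
              return kp * error + ki * integral
          return law

      def supervise(plant_step, setpoint, dt=0.1, steps=200, threshold=5.0):
          nominal, backup = pi_controller(2.0, 0.5), pi_controller(0.5, 0.05)
          controller, y = nominal, 0.0
          for _ in range(steps):
              error = setpoint - y
              if abs(error) > threshold:        # crude fault detection on the residual
                  controller = backup           # reconfigure: fall back to safe gains
              y = plant_step(y, controller(error, dt), dt)
          return y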

  15. EDNA: Expert fault digraph analysis using CLIPS

    NASA Technical Reports Server (NTRS)

    Dixit, Vishweshwar V.

    1990-01-01

    Traditionally, fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate the real system dependencies and hence are easy to develop, validate and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find. Available algorithms tend to be complicated and slow. On the other hand, tree analysis (VGRH, Tayl) is well understood and rooted in a vast research effort and analytical techniques. Tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a (cyclic) digraph into trees (CLP, LP) is a viable approach to blend the advantages of the two representations. Neither digraphs nor trees provide the ability to handle heuristic knowledge. An expert system to capture the engineering knowledge is essential. We propose an approach here, namely expert network analysis, which combines the digraph representation with tree algorithms. The models are augmented by probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge. Mixed analysis, with only some nodes carrying probabilities, is possible. The tool provides a graphics interface for input, query, and update. With the combined approach it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.
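
    The core transformation alluded to above (unrolling a cyclic fault digraph into a tree so that standard fault-tree algorithms apply) can be sketched as a depth-first expansion that stops whenever a node already appears on the current path, followed by an ordinary AND/OR probability evaluation. This is a generic illustration, not the EDNA/CLIPS implementation; the cycle-cutting rule and the toy digraph are choices made here, and independence of basic events is assumed.

      # Unroll a cyclic fault digraph into a tree, then evaluate AND/OR gate probabilities.
      # digraph: name -> ("AND"/"OR", [children]) or ("BASIC", probability)
      def unroll(digraph, node, path=()):
          kind, body = digraph[node]
          if kind == "BASIC":
              return (node, "BASIC", body)
          if node in path:                        # cut the cycle at a repeat visit
              return (node, "BASIC", 0.0)         # modelling choice: re-entry contributes nothing
          return (node, kind, [unroll(digraph, c, path + (node,)) for c in body])

      def probability(tree):
          _, kind, body = tree
          if kind == "BASIC":
              return body
          probs = [probability(c) for c in body]
          p = 1.0
          if kind == "AND":
              for q in probs:
                  p *= q                          # independent events assumed
              return p
          for q in probs:
              p *= (1.0 - q)                      # OR gate via complement product
          return 1.0 - p

      g = {"TOP": ("OR", ["A", "B"]), "A": ("AND", ["B", "E1"]),
           "B": ("OR", ["A", "E2"]),              # A and B form a directed cycle
           "E1": ("BASIC", 0.1), "E2": ("BASIC", 0.05)}
      print(probability(unroll(g, "TOP")))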

  16. Differential growth responses to water balance of coexisting deciduous tree species are linked to wood density in a Bolivian tropical dry forest.

    PubMed

    Mendivelso, Hooz A; Camarero, J Julio; Royo Obregón, Oriol; Gutiérrez, Emilia; Toledo, Marisol

    2013-01-01

    A seasonal period of water deficit characterizes tropical dry forests (TDFs). There, sympatric tree species exhibit a diversity of growth rates, functional traits, and responses to drought, suggesting that each species may possess different strategies to grow under different conditions of water availability. The evaluation of the long-term growth responses to changes in the soil water balance should provide an understanding of how and when coexisting tree species respond to water deficit in TDFs. Furthermore, such differential growth responses may be linked to functional traits related to water storage and conductance. We used dendrochronology and climate data to retrospectively assess how the radial growth of seven coexisting deciduous tree species responded to the seasonal soil water balance in a Bolivian TDF. Linear mixed-effects models were used to quantify the relationships between basal area increment and seasonal water balance. We related these relationships with wood density and sapwood production to assess if they affect the growth responses to climate. The growth of all species responded positively to water balance during the wet season, but such responses differed among species as a function of their wood density. For instance, species with a strong growth response to water availability averaged a low wood density which may facilitate the storage of water in the stem. By contrast, species with very dense wood were those whose growth was less sensitive to water availability. Coexisting tree species thus show differential growth responses to changes in soil water balance during the wet season. Our findings also provide a link between wood density, a trait related to the ability of trees to store water in the stem, and wood formation in response to water availability.
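
    A minimal sketch of the kind of linear mixed-effects model used for this purpose is given below with statsmodels; the data frame `df` and its columns `bai` (basal area increment), `water_balance` and `species` are assumptions for illustration, and the study's actual model specification is more elaborate.

      # Mixed-effects model of growth vs. water balance, with species as the grouping factor.
      import pandas as pd
      import statsmodels.formula.api as smf

      def fit_growth_model(df: pd.DataFrame):
          """df columns assumed: bai, water_balance, species."""
          model = smf.mixedlm("bai ~ water_balance", df, groups=df["species"],
                              re_formula="~water_balance")   # random slope per species
          return model.fit()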

  17. Magnetotelluric Studies of Fault Zones Surrounding the 2016 Pawnee, Oklahoma Earthquake

    NASA Astrophysics Data System (ADS)

    Evans, R. L.; Key, K.; Atekwana, E. A.

    2016-12-01

    Since 2008, there has been a dramatic increase in earthquake activity in the central United States in association with major oil and gas operations. Oklahoma is now considered one of the most seismically active states. Although seismic networks are able to detect activity and map its locus, they are unable to image the distribution of fluids in the fault responsible for triggering seismicity. Electrical geophysical methods are ideally suited to image fluid-bearing faults since the injected waste-waters are highly saline and hence have a high electrical conductivity. To date, no study has imaged the fluids in the faults in Oklahoma and made a direct link to the seismicity. The 2016 M5.8 Pawnee, Oklahoma earthquake provides an unprecedented opportunity for scientists to provide that link. Several injection wells are located within a 20 km radius of the epicenter, and studies have suggested that injection of fluids in high-volume wells can trigger earthquakes as far away as 30 km. During late October to early November 2016, we are collecting magnetotelluric (MT) data with the aim of constraining the distribution of fluids in the fault zone. The MT technique uses naturally occurring electric and magnetic fields measured at Earth's surface to measure conductivity structure. We plan to carry out a series of short two-dimensional (2D) profiles of wideband MT acquisition located through areas where the fault recently ruptured and seismic activity is concentrated and also across the faults in the vicinity that did not rupture. The integration of our results and ongoing seismic studies will lead to a better understanding of the links between fluid injection and seismicity.
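
    The quantity an MT survey inverts for can be illustrated with the standard apparent-resistivity relation rho_a = |Z|^2 / (omega * mu0), where Z = E/H is the impedance estimated from the measured horizontal electric and magnetic fields at angular frequency omega. The sketch below is generic and not tied to the Pawnee data set; saline, fluid-bearing fault zones show up as anomalously low rho_a.

      # Apparent resistivity and phase from an MT impedance estimate.
      import numpy as np

      MU0 = 4e-7 * np.pi   # vacuum permeability (H/m)

      def apparent_resistivity(E, H, freq_hz):
          """E in V/m, H in A/m, at frequency freq_hz; returns (rho_a in ohm.m, phase in degrees)."""
          Z = E / H                              # complex impedance
          omega = 2.0 * np.pi * freq_hz
          rho_a = np.abs(Z) ** 2 / (omega * MU0)
          return rho_a, np.degrees(np.angle(Z))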

  18. Geology of Joshua Tree National Park geodatabase

    USGS Publications Warehouse

    Powell, Robert E.; Matti, Jonathan C.; Cossette, Pamela M.

    2015-09-16

    The database in this Open-File Report describes the geology of Joshua Tree National Park and was completed in support of the National Cooperative Geologic Mapping Program of the U.S. Geological Survey (USGS) and in cooperation with the National Park Service (NPS). The geologic observations and interpretations represented in the database are relevant to both the ongoing scientific interests of the USGS in southern California and the management requirements of NPS, specifically of Joshua Tree National Park (JOTR). Joshua Tree National Park is situated within the eastern part of California’s Transverse Ranges province and straddles the transition between the Mojave and Sonoran deserts. The geologically diverse terrain that underlies JOTR reveals a rich and varied geologic evolution, one that spans nearly two billion years of Earth history. The Park’s landscape is the current expression of this evolution, its varied landforms reflecting the differing origins of underlying rock types and their differing responses to subsequent geologic events. Crystalline basement in the Park consists of Proterozoic plutonic and metamorphic rocks intruded by a composite Mesozoic batholith of Triassic through Late Cretaceous plutons arrayed in northwest-trending lithodemic belts. The basement was exhumed during the Cenozoic and underwent differential deep weathering beneath a low-relief erosion surface, with the deepest weathering profiles forming on quartz-rich, biotite-bearing granitoid rocks. Disruption of the basement terrain by faults of the San Andreas system began ca. 20 Ma and the JOTR sinistral domain, preceded by basalt eruptions, began perhaps as early as ca. 7 Ma, but no later than 5 Ma. Uplift of the mountain blocks during this interval led to erosional stripping of the thick zones of weathered quartz-rich granitoid rocks to form etchplains dotted by bouldery tors—the iconic landscape of the Park. The stripped debris filled basins along the fault zones. Mountain ranges

  19. A-Priori Rupture Models for Northern California Type-A Faults

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J.; Field, Edward H.

    2008-01-01

    This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003; referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event, or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon and Concord-Green Valley) be modeled as Type B faults to be consistent with similarly poorly-known faults statewide
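
    The "closest to a-priori, but moment-balanced" step can be illustrated as a small non-negative least-squares problem: find rupture rates r >= 0 that stay near the a-priori rates while approximately satisfying G r = v, where each row of G holds the average slip of each rupture on a fault section and v holds the section slip rates. The sketch below uses a weighted stack and scipy's nnls; it is an illustration of the idea under these assumptions, not the WGCEP implementation.

      # Rupture rates closest to a-priori rates, softly constrained to match section slip rates.
      import numpy as np
      from scipy.optimize import nnls

      def balance_rates(r_apriori, G, v, weight=1e3):
          """Minimize ||r - r_apriori||^2 + weight^2 * ||G r - v||^2 subject to r >= 0."""
          n = len(r_apriori)
          A = np.vstack([np.eye(n), weight * G])
          b = np.concatenate([r_apriori, weight * v])
          r, _ = nnls(A, b)
          return r

      # Hypothetical case: two ruptures on one section, average slips 1 m and 3 m, slip rate 5 mm/yr.
      print(balance_rates(np.array([3e-3, 1e-3]), np.array([[1.0, 3.0]]), np.array([5e-3])))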

  20. A new multiscale noise tuning stochastic resonance for enhanced fault diagnosis in wind turbine drivetrains

    NASA Astrophysics Data System (ADS)

    Hu, Bingbing; Li, Bing

    2016-02-01

    It is very difficult to detect weak fault signatures because of the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method, originally based on the discrete wavelet transform (DWT), has disadvantages such as shift variance and aliasing effects in engineering applications. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis thanks to the merits of DTCWT (near shift invariance and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching a single wavelet basis function to the signal being analyzed, which may speed up the signal processing and allow on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer-ring and shaft-coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post-spectral analysis, and EMD-based spectral analysis methods.
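
    At the heart of any stochastic-resonance detector is the overdamped bistable system dx/dt = a*x - b*x^3 + s(t) + n(t), in which an appropriate noise level amplifies a weak periodic fault signature s(t). The sketch below integrates that core equation with a simple Euler-Maruyama scheme; the multiscale noise tuning and the dual-tree complex wavelet front end described in the paper are not reproduced, and all parameter values are illustrative.

      # Classic bistable stochastic-resonance core, Euler-Maruyama integrated.
      import numpy as np

      def bistable_sr(signal, dt, a=1.0, b=1.0, noise_std=0.5, seed=0):
          rng = np.random.default_rng(seed)
          x, out = 0.0, np.empty(len(signal), dtype=float)
          for i, s in enumerate(signal):
              x += dt * (a * x - b * x ** 3 + s) + noise_std * np.sqrt(dt) * rng.standard_normal()
              out[i] = x
          return out   # the spectrum of `out` shows an enhanced line at the fault frequency

      t = np.arange(0.0, 20.0, 0.01)
      weak_fault = 0.25 * np.sin(2 * np.pi * 0.5 * t)   # weak periodic fault signature
      response = bistable_sr(weak_fault, dt=0.01)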

  1. Unraveling the Earthquake History of the Denali Fault System, Alaska: Filling a Blank Canvas With Paleoearthquakes

    NASA Astrophysics Data System (ADS)

    Schwartz, D. P.; Haeussler, P. J.; Seitz, G. G.; Dawson, T. E.; Stenner, H. D.; Matmon, A.; Crone, A. J.; Personius, S.; Burns, P. B.; Cadena, A.; Thoms, E.

    2005-12-01

    Developing accurate rupture histories of long, high-slip-rate strike-slip faults is especially challenging where recurrence is relatively short (hundreds of years), adjacent segments may fail within decades of each other, and uncertainties in dating can be as large as, or larger than, the time between events. The Denali Fault system (DFS) is the major active structure of interior Alaska, but had received little study since pioneering fault investigations in the early 1970s. Until the summer of 2003 essentially no data existed on the timing or spatial distribution of past ruptures on the DFS. This changed with the occurrence of the M7.9 2002 Denali fault earthquake, which has been a catalyst for present paleoseismic investigations. It provided a well-constrained rupture length and slip distribution. Strike-slip faulting occurred along 290 km of the Denali and Totschunda faults, leaving unruptured ~140 km of the eastern Denali fault, ~180 km of the western Denali fault, and ~70 km of the eastern Totschunda fault. The DFS presents us with a blank canvas on which to fill a chronology of past earthquakes using modern paleoseismic techniques. Aware of correlation issues with potentially closely-timed earthquakes, we have a) investigated 11 paleoseismic sites that allow a variety of dating techniques, b) measured paleo offsets, which provide insight into magnitude and rupture length of past events, at 18 locations, and c) developed late Pleistocene and Holocene slip rates using exposure age dating to constrain long-term fault behavior models. We are in the process of: 1) radiocarbon-dating peats involved in faulting and liquefaction, and especially short-lived forest floor vegetation that includes outer rings of trees, spruce needles, and blueberry leaves killed and buried during paleoearthquakes; 2) supporting development of a 700-900 year tree-ring time-series for precise dating of trees used in event timing; 3) employing 210Pb for constraining the youngest ruptures in

  2. Fault-Related Sanctuaries

    NASA Astrophysics Data System (ADS)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (such as ground shaking and coseismic surface rupture, gas and flame emissions, and strong underground rumblings). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, cutting right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult sites and the content of the related myths suggest that specific points along the trace of active faults have been noticed in the past and worshiped as special `sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25,000 BC. The cult itself was later converted into various divinities, while the `sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

  3. Optical fiber-fault surveillance for passive optical networks in S-band operation window

    NASA Astrophysics Data System (ADS)

    Yeh, Chien-Hung; Chi, Sien

    2005-07-01

    An S-band (1470 to 1520 nm) fiber laser scheme, which uses multiple fiber Bragg grating (FBG) elements as feedback elements on each passive branch, is proposed and described for in-service fault identification in passive optical networks (PONs). By tuning a wavelength selective filter located within the laser cavity over a gain bandwidth, the fiber-fault of each branch can be monitored without affecting the in-service channels. In our experiment, an S-band four-branch monitoring tree-structured PON system is demonstrated and investigated experimentally.

  4. Optical fiber-fault surveillance for passive optical networks in S-band operation window.

    PubMed

    Yeh, Chien-Hung; Chi, Sien

    2005-07-11

    An S-band (1470 to 1520 nm) fiber laser scheme, which uses multiple fiber Bragg grating (FBG) elements as feedback elements on each passive branch, is proposed and described for in-service fault identification in passive optical networks (PONs). By tuning a wavelength selective filter located within the laser cavity over a gain bandwidth, the fiber-fault of each branch can be monitored without affecting the in-service channels. In our experiment, an S-band four-branch monitoring tree-structured PON system is demonstrated and investigated experimentally.
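
    Operationally, the monitoring scheme described in the two records above amounts to sweeping the tunable filter across the FBG wavelengths assigned to the branches and flagging any branch whose reflection fails to lase. A toy bookkeeping sketch is given below; the wavelength plan and the `lases_at` measurement function are hypothetical stand-ins, not part of the published system.

      # Toy branch-fault bookkeeping for an FBG-monitored PON.
      BRANCH_FBG_NM = {1: 1470.0, 2: 1486.0, 3: 1503.0, 4: 1520.0}   # hypothetical wavelength plan

      def faulty_branches(lases_at):
          """lases_at(wavelength_nm) -> bool, supplied by the monitoring receiver."""
          return [branch for branch, wl in BRANCH_FBG_NM.items() if not lases_at(wl)]

      # Example: pretend branch 3's fibre is broken, so its FBG reflection never returns.
      print(faulty_branches(lambda wl: wl != 1503.0))   # -> [3]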

  5. Link between the northward extension of Great Sumatra Fault and continental rifting in the Andaman Sea: new results from seismic reflection studies

    NASA Astrophysics Data System (ADS)

    Singh, S. C.; Moeremans, R. E.; McArdle, J.; Johansen, K.

    2012-12-01

    lie in the mantle down to 30 km depth, which, along with the presence of the volcanic arc just 15 km east of these faults, suggests that there is no generic link between the strike-slip fault and the volcanic arc.

  6. Seeing the forest and the trees: USGS scientist links local changes to global scale

    USGS Publications Warehouse

    Wilson, Jim; Allen, Craig D.

    2011-01-01

    The recent recipient of two major awards, Craig D. Allen, a research ecologist with the U.S. Geological Survey Fort Collins Science Center, has loved trees since childhood. He is now considered an expert of world renown on the twin phenomena of forest changes and tree mortality resulting from climate warming and drought, and in 2010 was twice recognized for his scientific contributions.In December 2010, Dr. Allen was named a 2010 Fellow of the American Association for the Advancement of Science “for outstanding leadership in the synthesis of global forest responses to climate change, built from worldwide collaboration and a deep understanding of the environmental history of the southwestern United States.”In March 2010, he was honored with the Meritorious Service Award from the U.S. Department of the Interior (DOI) in recognition of his outstanding vision, initiative, and scientific contributions to the USGS, DOI, and U.S. Department of Agriculture in establishing a model science program to support adaptive land management at the new Valles Caldera National Preserve in north-central New Mexico.Dr. Allen has authored more than 85 publications on landscape ecology and landscape change, from fire history and ecology to ecosystem responses to climate change. He has appeared on NOVA discussing fire ecology and on The Discovery Channel and Discovery Canada explaining the links between drought-induced tree mortality and climate warming, in addition to being interviewed and quoted in innumerable newspaper articles on both topics.But how did this unassuming scientist grow from nurturing maple saplings on 40 acres in Wisconsin to understanding forest system stress worldwide?

  7. Quantifying Anderson's fault types

    USGS Publications Warehouse

    Simpson, R.W.

    1997-01-01

    Anderson [1905] explained three basic types of faulting (normal, strike-slip, and reverse) in terms of the shape of the causative stress tensor and its orientation relative to the Earth's surface. Quantitative parameters can be defined which contain information about both shape and orientation [Célérier, 1995], thereby offering a way to distinguish fault-type domains on plots of regional stress fields and to quantify, for example, the degree of normal-faulting tendencies within strike-slip domains. This paper offers a geometrically motivated generalization of Angelier's [1979, 1984, 1990] shape parameters φ and ψ to new quantities named Aφ and Aψ. In their simple forms, Aφ varies from 0 to 1 for normal, 1 to 2 for strike-slip, and 2 to 3 for reverse faulting, and Aψ ranges from 0° to 60°, 60° to 120°, and 120° to 180°, respectively. After scaling, Aφ and Aψ agree to within 2% (or 1°), a difference of little practical significance, although Aφ has smoother analytical properties. A formulation distinguishing horizontal axes as well as the vertical axis is also possible, yielding an Aφ ranging from -3 to +3 and Aψ from -180° to +180°. The geometrically motivated derivation in three-dimensional stress space presented here may aid intuition and offers a natural link with traditional ways of plotting yield and failure criteria. Examples are given, based on models of Bird [1996] and Bird and Kong [1994], of the use of Anderson fault parameters Aφ and Aψ for visualizing tectonic regimes defined by regional stress fields. Copyright 1997 by the American Geophysical Union.
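
    A commonly cited form of the Aφ parameter is Aφ = (n + 0.5) + (-1)^n (φ - 0.5), where φ = (σ2 - σ3)/(σ1 - σ3) is the stress-shape ratio and n = 0, 1, 2 for normal, strike-slip and reverse regimes; this reproduces the 0-1, 1-2 and 2-3 ranges quoted above. The short check below illustrates that form under these assumptions; the function and argument names are chosen for this sketch.

      # A-phi from the stress-shape ratio phi and the Anderson regime.
      def a_phi(phi, regime):
          """phi = (s2 - s3)/(s1 - s3); regime: 'normal', 'strike-slip', or 'reverse'."""
          n = {"normal": 0, "strike-slip": 1, "reverse": 2}[regime]
          return (n + 0.5) + (-1) ** n * (phi - 0.5)

      print(a_phi(0.5, "normal"))       # 0.5  (mid-range normal faulting)
      print(a_phi(0.5, "strike-slip"))  # 1.5  (mid-range strike-slip)
      print(a_phi(0.5, "reverse"))      # 2.5  (mid-range reverse faulting)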

  8. Differential Growth Responses to Water Balance of Coexisting Deciduous Tree Species Are Linked to Wood Density in a Bolivian Tropical Dry Forest

    PubMed Central

    Mendivelso, Hooz A.; Camarero, J. Julio; Royo Obregón, Oriol; Gutiérrez, Emilia; Toledo, Marisol

    2013-01-01

    A seasonal period of water deficit characterizes tropical dry forests (TDFs). There, sympatric tree species exhibit a diversity of growth rates, functional traits, and responses to drought, suggesting that each species may possess different strategies to grow under different conditions of water availability. The evaluation of the long-term growth responses to changes in the soil water balance should provide an understanding of how and when coexisting tree species respond to water deficit in TDFs. Furthermore, such differential growth responses may be linked to functional traits related to water storage and conductance. We used dendrochronology and climate data to retrospectively assess how the radial growth of seven coexisting deciduous tree species responded to the seasonal soil water balance in a Bolivian TDF. Linear mixed-effects models were used to quantify the relationships between basal area increment and seasonal water balance. We related these relationships with wood density and sapwood production to assess if they affect the growth responses to climate. The growth of all species responded positively to water balance during the wet season, but such responses differed among species as a function of their wood density. For instance, species with a strong growth response to water availability averaged a low wood density which may facilitate the storage of water in the stem. By contrast, species with very dense wood were those whose growth was less sensitive to water availability. Coexisting tree species thus show differential growth responses to changes in soil water balance during the wet season. Our findings also provide a link between wood density, a trait related to the ability of trees to store water in the stem, and wood formation in response to water availability. PMID:24116001

  9. CLEAR: Communications Link Expert Assistance Resource

    NASA Technical Reports Server (NTRS)

    Hull, Larry G.; Hughes, Peter M.

    1987-01-01

    Communications Link Expert Assistance Resource (CLEAR) is a real time, fault diagnosis expert system for the Cosmic Background Explorer (COBE) Mission Operations Room (MOR). The CLEAR expert system is an operational prototype which assists the MOR operator/analyst by isolating and diagnosing faults in the spacecraft communication link with the Tracking and Data Relay Satellite (TDRS) during periods of realtime data acquisition. The mission domain, user requirements, hardware configuration, expert system concept, tool selection, development approach, and system design were discussed. Development approach and system implementation are emphasized. Also discussed are system architecture, tool selection, operation, and future plans.

  10. Automated Generation of Fault Management Artifacts from a Simple System Model

    NASA Technical Reports Server (NTRS)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
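
    The essence of the approach above (query a system model for components, their functions and their failure effects, then emit FMEA rows mechanically) can be caricatured with an ordinary dictionary standing in for the SysML model. The model content, component names and column headings below are invented for illustration and are not the SMAP model.

      # Generate FMEA rows by traversing a toy system model (stand-in for SysML queries).
      import csv

      MODEL = {   # hypothetical component model: function, failure modes, downstream effect
          "battery":   {"function": "store energy",     "modes": ["cell short", "depletion"],
                        "affects": "power bus"},
          "power bus": {"function": "distribute power", "modes": ["open circuit"],
                        "affects": "flight computer"},
      }

      def generate_fmea(model, path="fmea.csv"):
          with open(path, "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["Component", "Function", "Failure mode", "Local effect on"])
              for name, info in model.items():
                  for mode in info["modes"]:
                      writer.writerow([name, info["function"], mode, info["affects"]])

      generate_fmea(MODEL)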

  11. Geometric incompatibility in a fault system.

    PubMed Central

    Gabrielov, A; Keilis-Borok, V; Jackson, D D

    1996-01-01

    The interdependence between the geometry of a fault system, its kinematics, and seismicity is investigated. A quantitative measure is introduced for the inconsistency between a fixed configuration of faults and the slip rates on each fault. This measure, named geometric incompatibility (G), summarily depicts the instability near the fault junctions: their divergence or convergence ("unlocking" or "locking up") and the accumulation of stress and deformation. Accordingly, changes in G are connected with the dynamics of seismicity. Apart from geometric incompatibility, we consider the deviation K from the well-known Saint-Venant condition of kinematic compatibility. This deviation summarily depicts unaccounted-for stress and strain accumulation in the region and/or internal inconsistencies in a reconstruction of the block-and-fault system (its geometry and movements). The estimates of G and K provide a useful tool for bringing together the data on different types of movement in a fault system. An analog of the Stokes formula is found that allows determination of the total values of G and K in a region from data on its boundary. The phenomenon of geometric incompatibility implies that the nucleation of strong earthquakes is to a large extent controlled by processes near fault junctions. Junctions that have been locked up may act as transient asperities, and unlocked junctions may act as transient weakest links. Tentative estimates of K and G are made for each end of the Big Bend of the San Andreas fault system in Southern California. The recent strong earthquakes at Landers (1992, M = 7.3) and Northridge (1994, M = 6.7) both reduced K but had opposite impacts on G: Landers unlocked the area, whereas Northridge locked it up again. PMID:11607673

  12. Incremental Holocene slip rates from the Hope fault at Hossack Station, Marlborough fault zone, South Island, New Zealand

    NASA Astrophysics Data System (ADS)

    Hatem, A. E.; Dolan, J. F.; Langridge, R.; Zinke, R. W.; McGuire, C. P.; Rhodes, E. J.; Van Dissen, R. J.

    2015-12-01

    The Marlborough fault system, which links the Alpine fault with the Hikurangi subduction zone within the complex Australian-Pacific plate boundary zone, partitions strain between the Wairau, Awatere, Clarence and Hope faults. Previous best estimates of dextral strike-slip along the Hope fault are ≤ ~23 ± 4 mm/yr. Those rates, however, are poorly constrained and could be improved using better age determinations in conjunction with measurements of fault offsets using high-resolution imagery. In this study, we use airborne lidar- and field-based mapping together with the subsurface geometry of offset channels at the Hossack site 12 km ESE of Hanmer Springs to more precisely determine stream offsets that were previously identified by McMorran (1991). Specifically, we measured fault offsets of ~10 m, ~75 m, and ~195 m. Together with 65 radiocarbon ages on charcoal, peat, and wood and 25 pending post-IR50-IRSL225 luminescence ages from the channel deposits, these offsets yield three different fault slip rates for the early Holocene, the late Holocene, and the past ca. 500-1,000 years. Using the large number of age determinations, we document in detail the timing of initiation and abandonment of each channel, enhancing the geomorphic interpretation at the Hossack site as channels deform over many earthquake cycles. Our preliminary incremental slip rate results from the Hossack site may indicate temporally variable strain release along the Hope fault. This study is part of a broader effort aimed at determining incremental slip rates and paleo-earthquake ages and displacements from all four main Marlborough faults. Collectively, these data will allow us to determine how the four main Marlborough faults have worked together during the Holocene-late Pleistocene to accommodate plate-boundary deformation in time and space.
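
    Once each offset and the age bracketing its channel are in hand, the incremental slip rate and its uncertainty follow from simple division; a Monte Carlo propagation such as the sketch below is one common way to carry the dating uncertainty through. The ages and uncertainties used here are placeholders for illustration, not the (pending) Hossack results.

      # Slip rate with uncertainty from an offset and a dated channel (placeholder numbers).
      import numpy as np

      def slip_rate_mc(offset_m, offset_sd, age_yr, age_sd, n=100_000, seed=0):
          rng = np.random.default_rng(seed)
          rates = rng.normal(offset_m, offset_sd, n) / rng.normal(age_yr, age_sd, n)
          return np.percentile(rates * 1000.0, [2.5, 50, 97.5])   # mm/yr

      # Hypothetical example: 75 +/- 3 m offset on a channel dated to 4000 +/- 200 yr.
      print(slip_rate_mc(75.0, 3.0, 4000.0, 200.0))   # roughly 17-21 mm/yr at 95% confidence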

  13. Naive Bayes Bearing Fault Diagnosis Based on Enhanced Independence of Data

    PubMed Central

    Zhang, Nannan; Wu, Lifeng; Yang, Jing; Guan, Yong

    2018-01-01

    The bearing is the key component of rotating machinery, and its performance directly determines the reliability and safety of the system. Data-based bearing fault diagnosis has become a research hotspot. Naive Bayes (NB), which is based on an independence presumption, is widely used in fault diagnosis. However, bearing data are not completely independent, which reduces the performance of NB algorithms. In order to solve this problem, we propose an NB bearing fault diagnosis method based on enhanced independence of the data. The method treats the data vectors from two aspects: the attribute features and the sample dimension. After processing, the limitation that the independence hypothesis places on NB classification is reduced. First, we effectively extract the statistical characteristics of the original bearing signals. Then, the Decision Tree algorithm is used to select the important features of the time-domain signal, and low-correlation features are selected. Next, the Selective Support Vector Machine (SSVM) is used to prune the data dimensions and remove redundant vectors. Finally, we use NB to diagnose the fault with the low-correlation data. The experimental results show that the independence enhancement of the data is effective for bearing fault diagnosis. PMID:29401730
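
    The feature-handling chain (statistical features, tree-ranked selection of weakly correlated features, then Naive Bayes classification) looks roughly like the sketch below in scikit-learn terms. The SSVM pruning step of the paper is omitted, the correlation threshold is an assumption made here, and `segments`/`y` are assumed to hold vibration segments and their fault labels.

      # Sketch: time-domain features -> tree-ranked, low-correlation selection -> Gaussian NB.
      import numpy as np
      from scipy import stats
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      def time_features(sig):
          sig = np.asarray(sig, float)
          return np.array([sig.mean(), sig.std(), stats.skew(sig), stats.kurtosis(sig),
                           np.sqrt(np.mean(sig ** 2)), np.max(np.abs(sig))])

      def select_features(X, y, max_corr=0.8):
          """Rank by decision-tree importance, then drop features highly correlated
          with ones already kept (to better satisfy NB's independence assumption)."""
          order = np.argsort(DecisionTreeClassifier(random_state=0)
                             .fit(X, y).feature_importances_)[::-1]
          corr = np.abs(np.corrcoef(X, rowvar=False))
          kept = []
          for j in order:
              if all(corr[j, k] < max_corr for k in kept):
                  kept.append(j)
          return kept

      def diagnose(segments, y):
          X = np.array([time_features(s) for s in segments])
          kept = select_features(X, y)
          return cross_val_score(GaussianNB(), X[:, kept], y, cv=5).mean()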

  14. Combining task analysis and fault tree analysis for accident and incident analysis: a case study from Bulgaria.

    PubMed

    Doytchev, Doytchin E; Szwillus, Gerd

    2009-11-01

    Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand the human behaviour in incident occurrence we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform the tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the use of the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian Hydro power plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the afore-mentioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.

  15. Treelink: data integration, clustering and visualization of phylogenetic trees.

    PubMed

    Allende, Christian; Sohn, Erik; Little, Cedric

    2015-12-29

    Phylogenetic trees are central to a wide range of biological studies. In many of these studies, tree nodes need to be associated with a variety of attributes. For example, in studies concerned with viral relationships, tree nodes are associated with epidemiological information, such as location, age and subtype. Gene trees used in comparative genomics are usually linked with taxonomic information, such as functional annotations and events. A wide variety of tree visualization and annotation tools have been developed in the past; however, none of them is intended for an integrative and comparative analysis. Treelink is platform-independent software for linking datasets and sequence files to phylogenetic trees. The application allows an automated integration of datasets with trees for operations such as classifying a tree based on a field or showing the distribution of selected data attributes in branches and leaves. Genomic and proteomic sequences can also be linked to the tree and extracted from internal and external nodes. A novel clustering algorithm to simplify trees and display the most divergent clades was also developed, where validation can be achieved using the data integration and classification function. Integrated geographical information allows ancestral character reconstruction for phylogeographic plotting based on parsimony and likelihood algorithms. Our software can successfully integrate phylogenetic trees with different data sources, and perform operations to differentiate and visualize those differences within a tree. File support includes the most popular formats such as Newick and CSV. Exporting visualizations as images, cluster outputs and genomic sequences is supported. Treelink is available as a web and desktop application at http://www.treelinkapp.com.
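
    A minimal flavour of the data-integration step (not Treelink itself) can be had with Biopython: read a Newick tree, read a CSV keyed on leaf names, and attach the matching record to each leaf. The file names and the `name` column below are assumptions made for this sketch.

      # Attach CSV attributes to the leaves of a Newick tree (Treelink-style linking).
      import csv
      from Bio import Phylo

      def link_tree(newick_path="tree.nwk", csv_path="attributes.csv", key="name"):
          tree = Phylo.read(newick_path, "newick")
          with open(csv_path, newline="") as f:
              table = {row[key]: row for row in csv.DictReader(f)}
          for leaf in tree.get_terminals():
              leaf.comment = table.get(leaf.name)     # stash the linked record on the leaf
          return tree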

  16. How geometrical constraints contribute to the weakness of mature faults

    USGS Publications Warehouse

    Lockner, D.A.; Byerlee, J.D.

    1993-01-01

    Increasing evidence that the San Andreas fault has low shear strength [1] has fuelled considerable discussion regarding the role of fluid pressure in controlling fault strength. Byerlee [2,3] and Rice [4] have shown how fluid pressure gradients within a fault zone can produce a fault with low strength while avoiding hydraulic fracture of the surrounding rock due to excessive fluid pressure. It may not be widely realised, however, that the same analysis [2-4] shows that even in the absence of fluids, the presence of a relatively soft 'gouge' layer surrounded by harder country rock can also reduce the effective shear strength of the fault. As shown most recently by Byerlee and Savage [5], as the shear stress across a fault increases, the stress state within the fault zone evolves to a limiting condition in which the maximum shear stress within the fault zone is parallel to the fault, which then slips with a lower apparent coefficient of friction than the same material unconstrained by the fault. Here we confirm the importance of fault geometry in determining the apparent weakness of fault zones, by showing that the apparent friction on a sawcut granite surface can be predicted from the friction measured in intact rock, given only the geometrical constraints introduced by the fault surfaces. This link between the sliding friction of faults and the internal friction of intact rock suggests a new approach to understanding the microphysical processes that underlie friction in brittle materials.
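
    In a simplified, cohesionless version of that geometric argument, if the plane of maximum shear stress in the gouge is forced to lie parallel to the fault, the ratio of fault-parallel shear stress to fault-normal stress at failure is sin(arctan(mu_i)) = mu_i / sqrt(1 + mu_i^2), which is always less than the internal friction mu_i of the gouge material. The short check below illustrates that simplified relation, which is this sketch's reading of the argument rather than a reproduction of the cited analyses.

      # Apparent fault friction when the max-shear plane in the gouge parallels the fault
      # (cohesionless Coulomb gouge; simplified version of the geometric argument).
      import math

      def apparent_friction(mu_internal):
          return mu_internal / math.sqrt(1.0 + mu_internal ** 2)   # = sin(arctan(mu))

      for mu in (0.6, 0.85):
          print(mu, "->", round(apparent_friction(mu), 3))   # 0.6 -> 0.514, 0.85 -> 0.648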

  17. Analysis of Fault Lengths Across Valles Marineris, Mars

    NASA Astrophysics Data System (ADS)

    Fori, A. N.; Schultz, R. A.

    1996-03-01

    Summary. As part of a larger project to determine the history of stress and strain across Valles Marineris, Mars, the lengths of grabens located within the Valley are measured using a two-dimensional window-sampling method to investigate the depth of faulting and the accuracy of measurement. The resulting uncertainty in measuring lengths (±19 km, ~80% accuracy) is independent of the resolution at which the faults are measured, so data sets and the resultant statistical analyses from different scales or map areas can be compared. The cumulative length-frequency plots show that the geometry of Valley faults displays no evidence of a frictional stability transition at depth in the lithosphere if mechanical interaction between individual faults is not considered (an unphysical situation). If strongly interacting faults are linked and the composite lengths are used to re-create the cumulative length plots, a significant change in slope is apparent, suggesting the existence of a transition at about 35-65 km below the surface (assuming faults dipping from 50° to 70°). This suggests that the thermal gradient to the associated 300-400°C isotherm is roughly 5°C/km to 12°C/km.
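
    The quoted gradient range follows from simple arithmetic on the inferred transition depth and the assumed isotherm: the coolest estimate pairs 300°C with the deepest transition and the warmest pairs 400°C with the shallowest, as in the short check below (function name chosen for this sketch).

      # Thermal gradient bounds implied by a 300-400 degC isotherm at 35-65 km depth.
      def gradient_bounds(t_min=300.0, t_max=400.0, z_min=35.0, z_max=65.0):
          return t_min / z_max, t_max / z_min      # (coolest, warmest) in degC/km

      print(gradient_bounds())   # -> (~4.6, ~11.4) degC/km, i.e. roughly 5-12 degC/km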

  18. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes related to earthquake fault systems is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters, and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of the generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).
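
    As a flavour of what a quasi-static earthquake simulator does (drive a fault model slowly, let stress redistribute in avalanches, and collect the event sizes as a synthetic catalogue), the following is a generic toy cellular model in the Olami-Feder-Christensen spirit. It is not the Lower Rhine Embayment model of the study; grid size, loading rule and redistribution fraction are assumptions made here.

      # Toy quasi-static earthquake simulator (OFC-style cellular model).
      import numpy as np

      def synthetic_catalog(n=32, alpha=0.2, threshold=1.0, events=2000, seed=0):
          rng = np.random.default_rng(seed)
          stress = rng.uniform(0.0, threshold, (n, n))
          sizes = []
          for _ in range(events):
              stress += threshold - stress.max()          # slow tectonic loading up to failure
              size = 0
              while (over := np.argwhere(stress >= threshold)).size:
                  for i, j in over:
                      s, size = stress[i, j], size + 1
                      stress[i, j] = 0.0                  # the cell slips and drops its stress
                      for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                          if 0 <= i + di < n and 0 <= j + dj < n:
                              stress[i + di, j + dj] += alpha * s   # load the neighbours
              sizes.append(size)
          return np.array(sizes)    # avalanche sizes serve as a synthetic event catalogue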

  19. Kaltag fault, northern Yukon, Canada: Constraints on evolution of Arctic Alaska

    NASA Astrophysics Data System (ADS)

    Lane, Larry S.

    1992-07-01

    The Kaltag fault has been linked to several strike-slip models of evolution of the western Arctic Ocean. Hundreds of kilometres of Cretaceous-Tertiary displacement have been hypothesized in models that emplace Arctic Alaska into its present position by either left- or right-lateral strike slip. However, regional-scale displacement is precluded by new potential-field data. Postulated transform emplacement of Arctic Alaska cannot be accommodated by motion on the Kaltag fault or adjacent structures. The Kaltag fault of the northern Yukon is an eastward extrapolation of its namesake in west-central Alaska; however, a connection cannot be demonstrated. Cretaceous-Tertiary displacement on the Alaskan Kaltag fault is probably accommodated elsewhere.

  20. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Breckenridge, Jonathan T.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASAHDBK- 1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function." (Johnson 2011, 605) Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of relationships between SE, SHM, and FM provide hints to a modeling approach to

  1. Experimental fault characterization of a neural network

    NASA Technical Reports Server (NTRS)

    Tan, Chang-Huong

    1990-01-01

    The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to be increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages linearly decrease with increasing network size.
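
    Fault characterization of this kind is essentially fault injection plus measurement: corrupt a chosen set of links (weights) or nodes (units) and record how the misclassification rate moves. Below is a compact numpy sketch of that loop on a stand-in two-layer network, not the clustering/classification network of the study; the fault model (stuck-at-zero) and the fraction injected are assumptions.

      # Inject link (weight) or node (unit) faults into a toy network and measure the damage.
      import numpy as np

      def misclassification(W1, W2, X, y):
          hidden = np.maximum(X @ W1, 0.0)                 # ReLU hidden layer
          pred = (hidden @ W2).argmax(axis=1)
          return float(np.mean(pred != y))

      def inject_and_measure(W1, W2, X, y, kind="link", frac=0.05, seed=0):
          rng = np.random.default_rng(seed)
          W1f = W1.copy()
          if kind == "link":                               # stuck-at-zero weight faults
              mask = rng.random(W1f.shape) < frac
              W1f[mask] = 0.0
          else:                                            # node fault: kill whole hidden units
              dead = rng.choice(W1f.shape[1], max(1, int(frac * W1f.shape[1])), replace=False)
              W1f[:, dead] = 0.0
          return misclassification(W1f, W2, X, y)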

  2. Faulting along the southern margin of Reelfoot Lake, Tennessee

    USGS Publications Warehouse

    Van Arsdale, R.; Purser, J.; Stephenson, W.; Odum, J.

    1998-01-01

    The Reelfoot Lake basin, Tennessee, is structurally complex and of great interest seismologically because it is located at the junction of two seismicity trends of the New Madrid seismic zone. To better understand the structure at this location, a 7.5-km-long seismic reflection profile was acquired on roads along the southern margin of Reelfoot Lake. The seismic line reveals a westerly dipping basin bounded on the west by the Reelfoot reverse fault zone, the Ridgely right-lateral transpressive fault zone on the east, and the Cottonwood Grove right-lateral strike-slip fault in the middle of the basin. The displacement history of the Reelfoot fault zone appears to be the same as the Ridgely fault zone, thus suggesting that movement on these fault zones has been synchronous, perhaps since the Cretaceous. Since the Reelfoot and Ridgely fault systems are believed responsible for two of the mainshocks of 1811-1812, the fault history revealed in the Reelfoot Lake profile suggests that multiple mainshocks may be typical of the New Madrid seismic zone. The Ridgely fault zone consists of two northeast-striking faults that lie at the base of and within the Mississippi Valley bluff line. This fault zone has 15 m of post-Eocene, up-to-the-east displacement and appears to locally control the eastern limit of Mississippi River migration. The Cottonwood Grove fault zone passes through the center of the seismic line and has approximately 5 m up-to-the-east displacement. Correlation of the Cottonwood Grove fault with a possible fault scarp on the floor of Reelfoot Lake and the New Markham fault north of the lake suggests the Cottonwood Grove fault may change to a northerly strike at Reelfoot Lake, thereby linking the northeast-trending zones of seismicity in the New Madrid seismic zone.

  3. Fault Analysis on Bevel Gear Teeth Surface Damage of Aeroengine

    NASA Astrophysics Data System (ADS)

    Cheng, Li; Chen, Lishun; Li, Silu; Liang, Tao

    2017-12-01

    To investigate tooth surface damage on an aero-engine bevel gear, a fault tree of the tooth surface damage was drawn from the logical relations among possible causes, and the possible causes of the failure were analyzed. Scanning electron microscopy, energy spectrum analysis, metallographic examination, hardness measurement, and other analysis methods were used to investigate the spalled gear tooth. The results showed that the material composition, metallographic structure, micro-hardness, and carburization depth of the failed bevel gear met the technical requirements. A contact fatigue spalling defect caused the bevel gear tooth surface damage, the main contributing cause being the small interference fit between the accessory gearbox installation hole and the driving bevel gear bearing seat. Improvement measures were proposed and subsequently verified to be effective.

  4. San Andreas tremor cascades define deep fault zone complexity

    USGS Publications Warehouse

    Shelly, David R.

    2015-01-01

    Weak seismic vibrations - tectonic tremor - can be used to delineate some plate boundary faults. Tremor on the deep San Andreas Fault, located at the boundary between the Pacific and North American plates, is thought to be a passive indicator of slow fault slip. San Andreas Fault tremor migrates at up to 30 m s-1, but the processes regulating tremor migration are unclear. Here I use a 12-year catalogue of more than 850,000 low-frequency earthquakes to systematically analyse the high-speed migration of tremor along the San Andreas Fault. I find that tremor migrates most effectively through regions of greatest tremor production and does not propagate through regions with gaps in tremor production. I interpret the rapid tremor migration as a self-regulating cascade of seismic ruptures along the fault, which implies that tremor may be an active, rather than passive participant in the slip propagation. I also identify an isolated group of tremor sources that are offset eastwards beneath the San Andreas Fault, possibly indicative of the interface between the Monterey Microplate, a hypothesized remnant of the subducted Farallon Plate, and the North American Plate. These observations illustrate a possible link between the central San Andreas Fault and tremor-producing subduction zones.

  5. Displacement-length scaling of brittle faults in ductile shear.

    PubMed

    Grasemann, Bernhard; Exner, Ulrike; Tschegg, Cornelius

    2011-11-01

    Within a low-grade ductile shear zone, we investigated exceptionally well exposed brittle faults, which accumulated antithetic slip and rotated into the shearing direction. The foliation planes of the mylonitic host rock intersect the faults approximately at their centre and exhibit ductile reverse drag. Three types of brittle faults can be distinguished: (i) Faults developing on pre-existing K-feldspar/mica veins that are oblique to the shear direction. These faults have triclinic flanking structures. (ii) Wing cracks opening as mode I fractures at the tips of the triclinic flanking structures, perpendicular to the shear direction. These cracks are reactivated as faults with antithetic shear, extend from the parent K-feldspar/mica veins and form a complex linked flanking structure system. (iii) Joints forming perpendicular to the shearing direction are deformed to form monoclinic flanking structures. Triclinic and monoclinic flanking structures record elliptical displacement-distance profiles with steep displacement gradients at the fault tips by ductile flow in the host rocks, resulting in reverse drag of the foliation planes. These structures record one of the greatest maximum displacement/length ratios reported from natural fault structures. These exceptionally high ratios can be explained by localized antithetic displacement along brittle slip surfaces, which did not propagate during their rotation during surrounding ductile flow.

  6. Displacement–length scaling of brittle faults in ductile shear

    PubMed Central

    Grasemann, Bernhard; Exner, Ulrike; Tschegg, Cornelius

    2011-01-01

    Within a low-grade ductile shear zone, we investigated exceptionally well exposed brittle faults, which accumulated antithetic slip and rotated into the shearing direction. The foliation planes of the mylonitic host rock intersect the faults approximately at their centre and exhibit ductile reverse drag. Three types of brittle faults can be distinguished: (i) Faults developing on pre-existing K-feldspar/mica veins that are oblique to the shear direction. These faults have triclinic flanking structures. (ii) Wing cracks opening as mode I fractures at the tips of the triclinic flanking structures, perpendicular to the shear direction. These cracks are reactivated as faults with antithetic shear, extend from the parent K-feldspar/mica veins and form a complex linked flanking structure system. (iii) Joints forming perpendicular to the shearing direction are deformed to form monoclinic flanking structures. Triclinic and monoclinic flanking structures record elliptical displacement–distance profiles with steep displacement gradients at the fault tips by ductile flow in the host rocks, resulting in reverse drag of the foliation planes. These structures record one of the greatest maximum displacement/length ratios reported from natural fault structures. These exceptionally high ratios can be explained by localized antithetic displacement along brittle slip surfaces, which did not propagate during their rotation during surrounding ductile flow. PMID:26806996

  7. An update of Quaternary faults of central and eastern Oregon

    USGS Publications Warehouse

    Weldon, Ray J.; Fletcher, D.K.; Weldon, E.M.; Scharer, K.M.; McCrory, P.A.

    2002-01-01

    This is the online version of a CD-ROM publication. We have updated the eastern portion of our previous active fault map of Oregon (Pezzopane, Nakata, and Weldon, 1992) as a contribution to the larger USGS effort to produce digital maps of active faults in the Pacific Northwest region. The 1992 fault map has seen wide distribution and has been reproduced in essentially all subsequent compilations of active faults of Oregon. The new map provides a substantial update of known active or suspected active faults east of the Cascades. Improvements in the new map include (1) many newly recognized active faults, (2) a linked ArcInfo map and reference database, (3) more precise locations for previously recognized faults on shaded relief quadrangles generated from USGS 30-m digital elevation models (DEMs), (4) more uniform coverage resulting in more consistent grouping of the ages of active faults, and (5) a new category of 'possibly' active faults that share characteristics with known active faults, but have not been studied adequately to assess their activity. The distribution of active faults has not changed substantially from the original Pezzopane, Nakata and Weldon map. Most faults occur in the south-central Basin and Range tectonic province that is located in the backarc portion of the Cascadia subduction margin. These faults occur in zones consisting of numerous short faults with similar rates, ages, and styles of movement. Many active faults strongly correlate with the most active volcanic centers of Oregon, including Newberry Craters and Crater Lake.

  8. Award ER25750: Coordinated Infrastructure for Fault Tolerance Systems Indiana University Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumsdaine, Andrew

    2013-03-08

    The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults have typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.
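
    The following toy sketch illustrates the kind of cross-subsystem fault-information sharing described above: a network layer publishes a link-failure event and a job scheduler excludes the affected node from further allocations. It is a sketch of the coordination idea only, not of the CIFTS or Open MPI interfaces; all class and event names are hypothetical.

        from collections import defaultdict

        class FaultBroker:
            """Very small publish/subscribe channel for fault events."""
            def __init__(self):
                self.subscribers = defaultdict(list)

            def subscribe(self, event_type, callback):
                self.subscribers[event_type].append(callback)

            def publish(self, event_type, **details):
                for cb in self.subscribers[event_type]:
                    cb(details)

        class Scheduler:
            def __init__(self):
                self.excluded = set()

            def on_link_failure(self, details):
                self.excluded.add(details["node"])
                print("scheduler: excluding", details["node"], "from new allocations")

        broker = FaultBroker()
        scheduler = Scheduler()
        broker.subscribe("link_failure", scheduler.on_link_failure)

        # The network library detects a failed link and shares it system-wide.
        broker.publish("link_failure", node="node042", link="ib0")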

  9. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.

  10. Probabilistic seismic hazard study based on active fault and finite element geodynamic models

    NASA Astrophysics Data System (ADS)

    Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco

    2016-04-01

    We present a probabilistic seismic hazard analysis (PSHA) that is exclusively based on active faults and geodynamic finite element input models, whereas seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slow-deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters together with estimates of its slip rate. By default in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates, and final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters, constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2°-spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results and to what extent. The results of such comparison evidence the deformation model and
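
    A hedged sketch of the final combination step, assuming made-up branch hazard curves and weights (the real computation used 648 branches): weight the branch exceedance-rate curves, convert to the 50-year probability of exceedance, and read off the ground motion at the 10% level.

        import numpy as np

        pga = np.linspace(0.01, 1.0, 200)                      # ground motion levels (g)

        # Each logic-tree branch: (weight, annual exceedance-rate curve over the pga grid).
        branches = [
            (0.5, 0.02 * np.exp(-6 * pga)),
            (0.3, 0.03 * np.exp(-7 * pga)),
            (0.2, 0.015 * np.exp(-5 * pga)),
        ]

        weights = np.array([w for w, _ in branches])
        rates = np.array([r for _, r in branches])
        mean_rate = weights @ rates                            # weighted mean hazard curve

        T = 50.0                                               # exposure time (years)
        poe_50yr = 1.0 - np.exp(-mean_rate * T)                # Poisson probability of exceedance

        # Ground motion with 10% probability of exceedance in 50 years:
        target = 0.10
        pga_10in50 = np.interp(target, poe_50yr[::-1], pga[::-1])  # poe decreases with pga
        print("mean-hazard PGA at 10%% in 50 yr: %.3f g" % pga_10in50)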

  11. Seismic link at plate boundary

    NASA Astrophysics Data System (ADS)

    Ramdani, Faical; Kettani, Omar; Tadili, Benaissa

    2015-06-01

    Seismic triggering at plate boundaries has a very complex nature that includes seismic events at varying distances. The spatial orientation of triggering cannot be reduced to sequences from the main shocks. Seismic waves propagate at all times in all directions, particularly in highly active zones. No direct evidence can be obtained regarding which earthquakes trigger the shocks. The first approach is to determine the potential linked zones where triggering may occur. The second step is to determine the causality between the events and their triggered shocks. The spatial orientation of the links between events is established from pre-ordered networks and the adapted dependence of the spatio-temporal occurrence of earthquakes. Based on a coefficient of synchronous seismic activity between grid couples, we derive a link network for each threshold. The links at high thresholds are tested using the coherence of time series to determine the causality and related orientation. The resulting link orientations at the plate boundary indicate that causal triggering seems to be localized along a major fault, as a stress transfer between two major faults, and parallel to the geothermal area extension.
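
    A minimal sketch of the thresholding step, assuming a placeholder matrix of synchronous-activity coefficients between grid cells: pairs whose coefficient exceeds a threshold become links of the network, and raising the threshold prunes the network to the strongest connections.

        import numpy as np

        rng = np.random.default_rng(1)
        n_cells = 6
        coeff = rng.random((n_cells, n_cells))
        coeff = (coeff + coeff.T) / 2                 # symmetric coefficient of synchrony
        np.fill_diagonal(coeff, 0.0)

        def links_at_threshold(coeff, threshold):
            i, j = np.where(np.triu(coeff, k=1) >= threshold)
            return list(zip(i.tolist(), j.tolist()))

        for threshold in (0.6, 0.8, 0.9):
            print(threshold, links_at_threshold(coeff, threshold))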

  12. Geothermal induced seismicity: What links source mechanics and event magnitudes to faulting regime and injection rates?

    NASA Astrophysics Data System (ADS)

    Martinez-Garzon, Patricia; Kwiatek, Grzegorz; Bohnhoff, Marco; Dresen, Georg

    2017-04-01

    Improving estimates of seismic hazard associated with reservoir stimulation requires an advanced understanding of the physical processes governing induced seismicity, which can be better achieved by carefully processing large datasets. To this end, we investigate source-type processes (shear/tensile/compaction) and rupture geometries with respect to the local stress field using seismicity from The Geysers (TG) and Salton Sea geothermal reservoirs, California. Analysis of 869 well-constrained full moment tensors (MW 0.8-3.5) at TG reveals significant non-double-couple (NDC) components (>25%) for 65% of the events and remarkable diversity in the faulting mechanisms. Volumetric deformation is clearly governed by injection rates with larger NDC components observed near injection wells and during high injection periods. The overall volumetric deformation from the moment tensors increases with time, possibly reflecting a reservoir pore pressure increase after several years of fluid injection with no significant production nearby. The obtained source mechanisms and fault orientations are magnitude-dependent and vary significantly between faulting regimes. Normal faulting events (MW < 2) reveal substantial NDC components indicating dilatancy, and they occur on varying fault orientations. In contrast, strike-slip events dominantly reveal a double-couple source, larger magnitudes (MW > 2) and mostly occur on optimally oriented faults with respect to the local stress field. NDC components indicating closure of cracks and pore spaces in the source region are found for reverse faulting events with MW > 2.5. Our findings from TG are generally consistent with preliminary source-type results from a reduced subset of well-recorded seismicity at the Salton Sea geothermal reservoir. Combined results imply that source processes and magnitudes of geothermal-induced seismicity are strongly affected by and systematically related to the hydraulic operations and the local stress state.

  13. Linking root traits to nutrient foraging in arbuscular mycorrhizal trees in a temperate forest.

    PubMed

    Eissenstat, David M; Kucharski, Joshua M; Zadworny, Marcin; Adams, Thomas S; Koide, Roger T

    2015-10-01

    The identification of plant functional traits that can be linked to ecosystem processes is of wide interest, especially for predicting vegetational responses to climate change. Root diameter of the finest absorptive roots may be one plant trait that has wide significance. Do species with relatively thick absorptive roots forage in nutrient-rich patches differently from species with relatively fine absorptive roots? We measured traits related to nutrient foraging (root morphology and architecture, root proliferation, and mycorrhizal colonization) across six coexisting arbuscular mycorrhizal (AM) temperate tree species with and without nutrient addition. Root traits such as root diameter and specific root length were highly correlated with root branching intensity, with thin-root species having higher branching intensity than thick-root species. In both fertilized and unfertilized soil, species with thin absorptive roots and high branching intensity showed much greater root length and mass proliferation but lower mycorrhizal colonization than species with thick absorptive roots. Across all species, fertilization led to increased root proliferation and reduced mycorrhizal colonization. These results suggest that thin-root species forage more by root proliferation, whereas thick-root species forage more by mycorrhizal fungi. In mineral nutrient-rich patches, AM trees seem to forage more by proliferating roots than by mycorrhizal fungi. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  14. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  15. Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification

    NASA Astrophysics Data System (ADS)

    Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang

    2017-12-01

    To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used. These methods can detect the failure of rolling bearings effectively, but achieving better detection results often requires a lot of training samples. Based on the above, a novel clustering method is proposed in this paper. This novel method is able to find the correct number of clusters automatically. The effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types from small samples, and the diagnosis accuracy remains relatively high even for massive samples.
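
    The proposed clustering method itself is not reproduced here; as a hedged stand-in, the sketch below shows one common way to select the number of clusters automatically (silhouette score over candidate k) on synthetic bearing-like feature vectors.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        rng = np.random.default_rng(0)
        # Three synthetic fault classes in a 2-D feature space (e.g., RMS, kurtosis).
        X = np.vstack([rng.normal(c, 0.1, (50, 2)) for c in ((0, 0), (1, 1), (2, 0))])

        scores = {}
        for k in range(2, 7):
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
            scores[k] = silhouette_score(X, labels)

        best_k = max(scores, key=scores.get)
        print("silhouette scores:", {k: round(s, 3) for k, s in scores.items()})
        print("selected number of clusters:", best_k)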

  16. Study of Stand-Alone Microgrid under Condition of Faults on Distribution Line

    NASA Astrophysics Data System (ADS)

    Malla, S. G.; Bhende, C. N.

    2014-10-01

    The behavior of a stand-alone microgrid is analyzed under the condition of faults on distribution feeders. Because the battery is not able to maintain the dc-link voltage within limits during a fault, a resistive dump load control is presented to do so. An inverter control is proposed to maintain balanced voltages at the PCC under unbalanced load conditions and to reduce the voltage unbalance factor (VUF) at load points. The proposed inverter control also has the facility to protect itself from high fault currents. The existing maximum power point tracking (MPPT) algorithm is modified to limit the speed of the generator during a fault. Extensive simulation results using MATLAB/SIMULINK establish that the performance of the controllers is quite satisfactory under different fault conditions as well as unbalanced load conditions.

  17. Inferring patterns in mitochondrial DNA sequences through hypercube independent spanning trees.

    PubMed

    Silva, Eduardo Sant Ana da; Pedrini, Helio

    2016-03-01

    Given a graph G, a set of spanning trees rooted at a vertex r of G is said to be vertex/edge independent if, for each vertex v of G, v≠r, the paths from r to v in any pair of trees are internally vertex/edge disjoint. Independent spanning trees (ISTs) provide a number of advantages in data broadcasting due to their fault-tolerant properties. For this reason, some studies have addressed the issue by providing mechanisms for constructing independent spanning trees efficiently. In this work, we investigate how to construct independent spanning trees on hypercubes, which are generated based upon spanning binomial trees, and how to use them to predict mitochondrial DNA sequence parts through paths on the hypercube. The prediction works both for inferring mitochondrial DNA sequences composed of six bases and for inferring anomalies that probably do not belong to the mitochondrial DNA standard. Copyright © 2016 Elsevier Ltd. All rights reserved.
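
    The paper's IST construction from spanning binomial trees is not reproduced here; as a hedged illustration of the underlying fault-tolerance property, the sketch below builds, for every vertex of a small hypercube Q_n, n internally vertex-disjoint paths to the root (one per dimension) and verifies the disjointness by brute force.

        from itertools import combinations

        def disjoint_paths(v, n):
            """Return n paths from v to 0 in Q_n that are pairwise internally vertex-disjoint."""
            bits = [d for d in range(n) if v >> d & 1]
            paths = []
            for d in range(n):
                path, u = [v], v
                if v >> d & 1:                       # clear set bits cyclically, starting at d
                    order = sorted(bits, key=lambda b: (b - d) % n)
                    for b in order:
                        u ^= 1 << b
                        path.append(u)
                else:                                # detour through dimension d
                    u ^= 1 << d
                    path.append(u)
                    order = sorted(bits, key=lambda b: (b - d - 1) % n)
                    for b in order:
                        u ^= 1 << b
                        path.append(u)
                    u ^= 1 << d
                    path.append(u)
                paths.append(path)
            return paths

        n = 4
        for v in range(1, 2 ** n):
            paths = disjoint_paths(v, n)
            for p, q in combinations(paths, 2):
                assert not (set(p[1:-1]) & set(q[1:-1])), (v, p, q)
        print("all root-to-vertex path sets in Q_%d are internally vertex-disjoint" % n)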

  18. Fault Tolerance for VLSI Multicomputers

    DTIC Science & Technology

    1985-08-01

    that consists of hundreds or thousands of VLSI computation nodes interconnected by dedicated links. Some important applications of high-end computers...technology, and intended applications . A proposed fault tolerance scheme combines hardware that performs error detection and system-level protocols for...order to recover from the error and resume correct operation, a valid system state must be restored. A low-overhead, application -transparent error

  19. Application of fuzzy fault tree analysis based on modified fuzzy AHP and fuzzy TOPSIS for fire and explosion in the process industry.

    PubMed

    Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand

    2018-05-09

    This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), the probabilities of all basic events (BEs) must be available when the FTA is drawn. When failure data are scarce, expert judgment can be used as an alternative. The fuzzy analytical hierarchy process, as a standard technique, is used to give a specific weight to each expert, and fuzzy set theory is used to aggregate the expert opinions. In this regard, the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost, and benefit), an importance measurement technique and modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
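
    A minimal sketch of the Boolean-algebra step, assuming a made-up gate structure and crisp basic-event probabilities standing in for the aggregated fuzzy expert estimates:

        def or_gate(*probs):
            out = 1.0
            for p in probs:
                out *= (1.0 - p)
            return 1.0 - out                 # assumes independent basic events

        def and_gate(*probs):
            out = 1.0
            for p in probs:
                out *= p
            return out

        # Hypothetical example: TE = OR(leak AND ignition, overpressure)
        p_leak, p_ignition, p_overpressure = 1e-2, 5e-3, 1e-4
        p_te = or_gate(and_gate(p_leak, p_ignition), p_overpressure)
        print("top-event probability: %.2e" % p_te)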

  20. Quaternary low-angle slip on detachment faults in Death Valley, California

    USGS Publications Warehouse

    Hayman, N.W.; Knott, J.R.; Cowan, D.S.; Nemser, E.; Sarna-Wojcicki, A. M.

    2003-01-01

    Detachment faults on the west flank of the Black Mountains (Nevada and California) dip 29°-36° and cut subhorizontal layers of the 0.77 Ma Bishop ash. Steeply dipping normal faults confined to the hanging walls of the detachments offset layers of the 0.64 Ma Lava Creek B tephra and the base of 0.12-0.18 Ma Lake Manly gravel. These faults sole into and do not cut the low-angle detachments. Therefore the detachments accrued any measurable slip across the kinematically linked hanging-wall faults. An analysis of the orientations of hundreds of the hanging-wall faults shows that extension occurred at modest slip rates (<1 mm/yr) under a steep to vertically oriented maximum principal stress. The Black Mountain detachments are appropriately described as the basal detachments of near-critical Coulomb wedges. We infer that the formation of late Pleistocene and Holocene range-front fault scarps accompanied seismogenic slip on the detachments.

  1. Links to Literature--Huge Trees, Small Drawings: Ideas of Relative Sizes.

    ERIC Educational Resources Information Center

    Burton, Gail

    1996-01-01

    Discusses a unit integrating science, mathematics, and environmental education centered around "The Great Kapok Tree," by Lynne Cherry (1990). Ratios are used to make scale drawings of trees in a rain forest. Other activities include a terrarium and problem-solving activities based on eating habits of rain forest animals. (KMC)

  2. Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty

    NASA Astrophysics Data System (ADS)

    Woo, G.

    2005-12-01

    Insurance loss modeling for earthquakes utilizes available maps of active faulting produced by geoscientists. All such maps are subject to uncertainty, arising from lack of knowledge of fault geometry and rupture history. Field work to undertake geological fault investigations drains human and monetary resources, and this inevitably limits the resolution of fault parameters. Some areas are more accessible than others; some may be of greater social or economic importance than others; some areas may be investigated more rapidly or diligently than others; or funding restrictions may have curtailed the extent of the fault mapping program. In contrast with the aleatory uncertainty associated with the inherent variability in the dynamics of earthquake fault rupture, uncertainty associated with lack of knowledge of fault geometry and rupture history is epistemic. The extent of this epistemic uncertainty may vary substantially from one regional or national fault map to another. However aware the local cartographer may be, this uncertainty is generally not conveyed in detail to the international map user. For example, an area may be left blank for a variety of reasons, ranging from lack of sufficient investigation of a fault to lack of convincing evidence of activity. Epistemic uncertainty in fault parameters is of concern in any probabilistic assessment of seismic hazard, not least in insurance earthquake risk applications. A logic-tree framework is appropriate for incorporating epistemic uncertainty. Some insurance contracts cover specific high-value properties or transport infrastructure, and therefore are extremely sensitive to the geometry of active faulting. Alternative Risk Transfer (ART) to the capital markets may also be considered. In order for such insurance or ART contracts to be properly priced, uncertainty should be taken into account. Accordingly, an estimate is needed for the likelihood of surface rupture capable of causing severe damage. Especially where a

  3. Can diligent and extensive mapping of faults provide reliable estimates of the expected maximum earthquakes at these faults? No. (Invited)

    NASA Astrophysics Data System (ADS)

    Bird, P.

    2010-12-01

    The hope expressed in the title question above can be contradicted in 5 ways, listed below. To summarize, an earthquake rupture can be larger than anticipated either because the fault system has not been fully mapped, or because the rupture is not limited to the pre-existing fault network. 1. Geologic mapping of faults is always incomplete due to four limitations: (a) Map-scale limitation: Faults below a certain (scale-dependent) apparent offset are omitted; (b) Field-time limitation: The most obvious fault(s) get(s) the most attention; (c) Outcrop limitation: You can't map what you can't see; and (d) Lithologic-contrast limitation: Intra-formation faults can be tough to map, so they are often assumed to be minor and omitted. If mapping is incomplete, fault traces may be longer and/or better-connected than we realize. 2. Fault trace “lengths” are unreliable guides to maximum magnitude. Fault networks have multiply-branching, quasi-fractal shapes, so fault “length” may be meaningless. Naming conventions for main strands are unclear, and rarely reviewed. Gaps due to Quaternary alluvial cover may not reflect deeper seismogenic structure. Mapped kinks and other “segment boundary asperities” may be only shallow structures. Also, some recent earthquakes have jumped and linked “separate” faults (Landers, California 1992; Denali, Alaska, 2002) [Wesnousky, 2006; Black, 2008]. 3. Distributed faulting (“eventually occurring everywhere”) is predicted by several simple theories: (a) Viscoelastic stress redistribution in plate/microplate interiors concentrates deviatoric stress upward until they fail by faulting; (b) Unstable triple-junctions (e.g., between 3 strike-slip faults) in 2-D plate theory require new faults to form; and (c) Faults which appear to end (on a geologic map) imply distributed permanent deformation. This means that all fault networks evolve and that even a perfect fault map would be incomplete for future ruptures. 4. A recent attempt

  4. Salt movements and faulting of the overburden - can numerical modeling predict the fault patterns above salt structures?

    NASA Astrophysics Data System (ADS)

    Clausen, O. R.; Egholm, D. L.; Wesenberg, R.

    2012-04-01

    Salt deformation has been the topic of numerous studies through the 20th century and up until present because of the close relation between commercial hydrocarbons and salt structure provinces of the world (Hudec & Jackson, 2007). The fault distribution in sediments above salt structures influences among other things the productivity due to the segmentation of the reservoir (Stewart 2006). 3D seismic data above salt structures can map such fault patterns in great detail and studies have shown that a variety of fault patterns exists. Yet, most patterns fall between two end members: concentric and radiating fault patterns. Here we use a modified version of the numerical spring-slider model introduced by Malthe-Sørenssen et al.(1998a) for simulating the emergence of small scale faults and fractures above a rising salt structure. The three-dimensional spring-slider model enables us to control the rheology of the deforming overburden, the mechanical coupling between the overburden and the underlying salt, as well as the kinematics of the moving salt structure. In this presentation, we demonstrate how the horizontal component on the salt motion influences the fracture patterns within the overburden. The modeling shows that purely vertical movement of the salt introduces a mesh of concentric normal faults in the overburden, and that the frequency of radiating faults increases with the amount of lateral movements across the salt-overburden interface. The two end-member fault patterns (concentric vs. radiating) can thus be linked to two different styles of salt movement: i) the vertical rising of a salt indenter and ii) the inflation of a 'salt-balloon' beneath the deformed strata. The results are in accordance with published analogue and theoretical models, as well as natural systems, and the model may - when used appropriately - provide new insight into how the internal dynamics of the salt in a structure controls the generation of fault patterns above the structure. The

  5. Dynamic Reconfiguration and Link Fault Tolerance in a Transputer Network

    DTIC Science & Technology

    1989-06-01

    linkO and link3 are connected to the C004s. LinkI and link2 are routed to the P2 edge connector, labelled ConfigUp and ConfiDown for access to...various commands recieved PROC handle.screen (VAL BYTE link.byte, SEQ -place the first byte on screen (source) I F1 linki < 16 -- a link 0 SEQ line.num l...determine characters used on screen for -- display of source & dest IF ((INT(bytel)) < 32) linki : to.slot[INT(bytel)] otherwise linki : 10 IF ((INT(byte2

  6. Linking xylem water storage with anatomical parameters in five temperate tree species.

    PubMed

    Jupa, Radek; Plavcová, Lenka; Gloser, Vít; Jansen, Steven

    2016-06-01

    The release of water from storage compartments to the transpiration stream is an important functional mechanism that provides the buffering of sudden fluctuations in water potential. The ability of tissues to release water per change in water potential, referred to as hydraulic capacitance, is assumed to be associated with the anatomy of storage tissues. However, information about how specific anatomical parameters determine capacitance is limited. In this study, we measured sapwood capacitance (C) in terminal branches and roots of five temperate tree species (Fagus sylvatica L., Picea abies L., Quercus robur L., Robinia pseudoacacia L., Tilia cordata Mill.). Capacitance was calculated separately for water released mainly from capillary (CI; open vessels, tracheids, fibres, intercellular spaces and cracks) and elastic storage compartments (CII; living parenchyma cells), corresponding to two distinct phases of the moisture release curve. We found that C was generally higher in roots than branches, with CI being 3-11 times higher than CII. Sapwood density and the ratio of dead to living xylem cells were most closely correlated with C. In addition, the magnitude of CI was strongly correlated with fibre/tracheid lumen area, whereas CII was highly dependent on the thickness of axial parenchyma cell walls. Our results indicate that water released from capillary compartments predominates over water released from elastic storage in both branches and roots, suggesting the limited importance of parenchyma cells for water storage in juvenile xylem of temperate tree species. Contrary to intact organs, water released from open conduits in our small wood samples significantly increased CI at relatively high water potentials. Linking anatomical parameters with the hydraulic capacitance of a tissue contributes to a better understanding of water release mechanisms and their implications for plant hydraulics. © The Author 2016. Published by Oxford University Press. All rights

  7. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.
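
    FTAPE itself is not reproduced here; the sketch below only illustrates the errors/fault ratio measurement on a toy workload in which faults injected into the two least-significant data bits are logically masked by the computation. The workload, fault model, and counters are illustrative assumptions.

        import random

        def workload(data):
            # Toy computation that discards the two low-order bits of each value,
            # so some injected bit flips never propagate to the result.
            return sum(x >> 2 for x in data)

        def inject_bit_flip(data, rng):
            i = rng.randrange(len(data))
            bit = 1 << rng.randrange(16)
            corrupted = list(data)
            corrupted[i] ^= bit              # transient single-bit fault in workload data
            return corrupted

        rng = random.Random(0)
        data = list(range(1000))
        golden = workload(data)

        faults, errors = 200, 0
        for _ in range(faults):
            if workload(inject_bit_flip(data, rng)) != golden:
                errors += 1                  # fault propagated to an observable error

        print("errors/fault ratio: %.2f" % (errors / faults))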

  8. Nonbinary Tree-Based Phylogenetic Networks.

    PubMed

    Jetten, Laura; van Iersel, Leo

    2018-01-01

    Rooted phylogenetic networks are used to describe evolutionary histories that contain non-treelike evolutionary events such as hybridization and horizontal gene transfer. In some cases, such histories can be described by a phylogenetic base-tree with additional linking arcs, which can, for example, represent gene transfer events. Such phylogenetic networks are called tree-based. Here, we consider two possible generalizations of this concept to nonbinary networks, which we call tree-based and strictly-tree-based nonbinary phylogenetic networks. We give simple graph-theoretic characterizations of tree-based and strictly-tree-based nonbinary phylogenetic networks. Moreover, we show for each of these two classes that it can be decided in polynomial time whether a given network is contained in the class. Our approach also provides a new view on tree-based binary phylogenetic networks. Finally, we discuss two examples of nonbinary phylogenetic networks in biology and show how our results can be applied to them.

  9. Flight elements: Fault detection and fault management

    NASA Technical Reports Server (NTRS)

    Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

    1990-01-01

    Fault management for an intelligent computational system must be developed using a top-down, integrated engineering approach. The proposed approach integrates the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models, including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. A real-time intelligent fault detection and management system will be achieved through several objectives: development of fault-tolerant/FDIR requirements and specifications from a systems level that carry through from conceptual design to implementation and mission operations; implementation of monitoring, diagnosis, and reconfiguration at all system levels, providing fault isolation and system integration; optimization of system operations to manage degraded system performance through system integration; and lower development and operations costs through the implementation of an intelligent real-time fault detection and fault management system and an information management system.

  10. Dynamic 3D simulations of earthquakes on en echelon faults

    USGS Publications Warehouse

    Harris, R.A.; Day, S.M.

    1999-01-01

    One of the mysteries of earthquake mechanics is why earthquakes stop. This process determines the difference between small and devastating ruptures. One possibility is that fault geometry controls earthquake size. We test this hypothesis using a numerical algorithm that simulates spontaneous rupture propagation in a three-dimensional medium and apply our knowledge to two California fault zones. We find that the size difference between the 1934 and 1966 Parkfield, California, earthquakes may be the product of a stepover at the southern end of the 1934 earthquake and show how the 1992 Landers, California, earthquake followed physically reasonable expectations when it jumped across en echelon faults to become a large event. If there are no linking structures, such as transfer faults, then strike-slip earthquakes are unlikely to propagate through stepovers >5 km wide. Copyright 1999 by the American Geophysical Union.

  11. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    USGS Publications Warehouse

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; understanding the origin and distribution of clays in fault rocks is therefore of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from adjacent protolith, suggesting that formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening and potentially influence both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of the authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  12. FaultLab: Results on the crustal structure of the North Anatolian Fault from a dense seismic network

    NASA Astrophysics Data System (ADS)

    Thompson, David; Rost, Sebastian; Houseman, Greg; Cornwell, David; Türkelli, Niyazi; Uǧur, Teoman, Kahraman, Metin; Altuncu Poyraz, Selda; Gülen, Levent; Utkucu, Murat; Frederiksen, Andrew

    2013-04-01

    fault segments elsewhere, and models of geodetic strain-rate across the fault system. By linking together results from the complementary techniques being employed in the FaultLab project, we aim to produce a comprehensive picture of fault structure and dynamics throughout the crust and shallow upper mantle of this major active fault zone.

  13. Integral Sensor Fault Detection and Isolation for Railway Traction Drive.

    PubMed

    Garramiola, Fernando; Del Olmo, Jon; Poza, Javier; Madina, Patxi; Almandoz, Gaizka

    2018-05-13

    Due to the increasing importance of reliability and availability of electric traction drives in Railway applications, early detection of faults has become a key issue for Railway traction drive manufacturers. Sensor faults are important sources of failures. Among the different fault diagnosis approaches, in this article an integral diagnosis strategy for sensors in traction drives is presented. Such a strategy is composed of an observer-based approach for direct current (DC)-link voltage and catenary current sensors, a frequency analysis approach for motor current phase sensors and a hardware redundancy solution for speed sensors. None of them requires any hardware changes to the existing traction drive. All the fault detection and isolation approaches have been validated in a Hardware-in-the-loop platform comprising a Real Time Simulator and a commercial Traction Control Unit for a tram. In comparison to safety-critical systems in Aerospace applications, Railway applications do not need instantaneous detection, and the diagnosis is validated in a short time period for reliable decision. Combining the different approaches and existing hardware redundancy, an integral fault diagnosis solution is provided, to detect and isolate faults in all the sensors installed in the traction drive.
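
    A hedged sketch of the observer-based idea for the DC-link voltage sensor, assuming a simple capacitor current-balance model, illustrative gains, and an injected sensor offset fault; none of these values or signals come from the article.

        import numpy as np

        C, dt = 0.05, 1e-3                       # capacitance (F), time step (s)
        steps = 2000
        i_in = 10 + 2 * np.sin(2 * np.pi * 5 * np.arange(steps) * dt)
        i_out = 10 * np.ones(steps)

        v_true = np.empty(steps)
        v_true[0] = 600.0
        for k in range(1, steps):
            v_true[k] = v_true[k-1] + dt * (i_in[k-1] - i_out[k-1]) / C

        v_meas = v_true + np.random.default_rng(0).normal(0, 0.2, steps)
        v_meas[1200:] += 25.0                    # injected sensor offset fault at t = 1.2 s

        v_hat, L = np.empty(steps), 0.02         # observer state and correction gain
        v_hat[0] = v_meas[0]
        fault_flags = np.zeros(steps, bool)
        for k in range(1, steps):
            # Model-based prediction plus a small sensor correction; in practice the
            # correction would be disabled once a sensor fault has been isolated.
            v_hat[k] = v_hat[k-1] + dt * (i_in[k-1] - i_out[k-1]) / C \
                       + L * (v_meas[k-1] - v_hat[k-1])
            fault_flags[k] = abs(v_meas[k] - v_hat[k]) > 10.0

        print("first sample flagged as sensor fault:", np.argmax(fault_flags))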

  14. Integral Sensor Fault Detection and Isolation for Railway Traction Drive

    PubMed Central

    del Olmo, Jon; Poza, Javier; Madina, Patxi; Almandoz, Gaizka

    2018-01-01

    Due to the increasing importance of reliability and availability of electric traction drives in Railway applications, early detection of faults has become a key issue for Railway traction drive manufacturers. Sensor faults are important sources of failures. Among the different fault diagnosis approaches, in this article an integral diagnosis strategy for sensors in traction drives is presented. Such a strategy is composed of an observer-based approach for direct current (DC)-link voltage and catenary current sensors, a frequency analysis approach for motor current phase sensors and a hardware redundancy solution for speed sensors. None of them requires any hardware changes to the existing traction drive. All the fault detection and isolation approaches have been validated in a Hardware-in-the-loop platform comprising a Real Time Simulator and a commercial Traction Control Unit for a tram. In comparison to safety-critical systems in Aerospace applications, Railway applications do not need instantaneous detection, and the diagnosis is validated in a short time period for reliable decision. Combining the different approaches and existing hardware redundancy, an integral fault diagnosis solution is provided, to detect and isolate faults in all the sensors installed in the traction drive. PMID:29757251

  15. The Evergreen basin and the role of the Silver Creek fault in the San Andreas fault system, San Francisco Bay region, California

    USGS Publications Warehouse

    Jachens, Robert C.; Wentworth, Carl M.; Graymer, Russell W.; Williams, Robert; Ponce, David A.; Mankinen, Edward A.; Stephenson, William J.; Langenheim, Victoria

    2017-01-01

    The Evergreen basin is a 40-km-long, 8-km-wide Cenozoic sedimentary basin that lies mostly concealed beneath the northeastern margin of the Santa Clara Valley near the south end of San Francisco Bay (California, USA). The basin is bounded on the northeast by the strike-slip Hayward fault and an approximately parallel subsurface fault that is structurally overlain by a set of west-verging reverse-oblique faults which form the present-day southeastward extension of the Hayward fault. It is bounded on the southwest by the Silver Creek fault, a largely dormant or abandoned fault that splays from the active southern Calaveras fault. We propose that the Evergreen basin formed as a strike-slip pull-apart basin in the right step from the Silver Creek fault to the Hayward fault during a time when the Silver Creek fault served as a segment of the main route by which slip was transferred from the central California San Andreas fault to the Hayward and other East Bay faults. The dimensions and shape of the Evergreen basin, together with palinspastic reconstructions of geologic and geophysical features surrounding it, suggest that during its lifetime, the Silver Creek fault transferred a significant portion of the ∼100 km of total offset accommodated by the Hayward fault, and of the 175 km of total San Andreas system offset thought to have been accommodated by the entire East Bay fault system. As shown previously, at ca. 1.5–2.5 Ma the Hayward-Calaveras connection changed from a right-step, releasing regime to a left-step, restraining regime, with the consequent effective abandonment of the Silver Creek fault. This reorganization was, perhaps, preceded by development of the previously proposed basin-bisecting Mount Misery fault, a fault that directly linked the southern end of the Hayward fault with the southern Calaveras fault during extinction of pull-apart activity. Historic seismicity indicates that slip below a depth of 5 km is mostly transferred from the Calaveras

  16. SeaMARC II mapping of transform faults in the Cayman Trough, Caribbean Sea

    USGS Publications Warehouse

    Rosencrantz, Eric; Mann, Paul

    1992-01-01

    SeaMARC II maps of the southern wall of the Cayman Trough between Honduras and Jamaica show zones of continuous, well-defined fault lineaments adjacent and parallel to the wall, both to the east and west of the Cayman spreading axis. These lineaments mark the present, active traces of transform faults which intersect the southern end of the spreading axis at a triple junction. The Swan Islands transform fault to the west is dominated by two major lineaments that overlap with right-stepping sense across a large push-up ridge beneath the Swan Islands. The fault zone to the east of the axis, named the Walton fault, is more complex, containing multiple fault strands and a large pull-apart structure. The Walton fault links the spreading axis to Jamaican and Hispaniolan strike-slip faults, and it defines the southern boundary of a microplate composed of the eastern Cayman Trough and western Hispaniola. The presence of this microplate raises questions about the veracity of Caribbean plate velocities based primarily on Cayman Trough opening rates.

  17. Evaluation of the safety performance of highway alignments based on fault tree analysis and safety boundaries.

    PubMed

    Chen, Yikai; Wang, Kai; Xu, Chengcheng; Shi, Qin; He, Jie; Li, Peiqing; Shi, Ting

    2018-05-19

    To overcome the limitations of previous highway alignment safety evaluation methods, this article presents a highway alignment safety evaluation method based on fault tree analysis (FTA) and the characteristics of vehicle safety boundaries, within the framework of dynamic modeling of the driver-vehicle-road system. Approaches for categorizing the vehicle failure modes while driving on highways and the corresponding safety boundaries were comprehensively investigated based on vehicle system dynamics theory. Then, an overall crash probability model was formulated based on FTA considering the risks of 3 failure modes: losing steering capability, losing track-holding capability, and rear-end collision. The proposed method was implemented on a highway segment between Bengbu and Nanjing in China. A driver-vehicle-road multibody dynamics model was developed based on the 3D alignments of the Bengbu to Nanjing section of Ning-Luo expressway using Carsim, and the dynamics indices, such as sideslip angle and yaw rate, were obtained. Then, the average crash probability of each road section was calculated with a fixed-length method. Finally, the average crash probability was validated against the crash frequency per kilometer to demonstrate the accuracy of the proposed method. The results of the regression analysis and correlation analysis indicated good consistency between the results of the safety evaluation and the crash data and showed that the proposed method outperformed the safety evaluation methods used in previous studies. The proposed method has the potential to be used in practical engineering applications to identify crash-prone locations and alignment deficiencies on highways in the planning and design phases, as well as those in service.
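
    A minimal sketch of the overall crash-probability model, assuming the three failure modes are independent and combined through an OR gate; all numbers below are placeholders, not values from the study.

        def section_crash_probability(p_steering, p_track_holding, p_rear_end):
            """FTA top event: crash = losing steering OR losing track-holding OR rear-end collision."""
            return 1.0 - (1.0 - p_steering) * (1.0 - p_track_holding) * (1.0 - p_rear_end)

        # Hypothetical fixed-length sections of a highway segment.
        sections = [
            (0.002, 0.004, 0.001),
            (0.010, 0.015, 0.003),   # e.g., a sharp curve: higher track-holding risk
            (0.001, 0.002, 0.002),
        ]
        probs = [section_crash_probability(*s) for s in sections]
        for i, p in enumerate(probs):
            print("section %d crash probability: %.4f" % (i, p))
        print("segment average: %.4f" % (sum(probs) / len(probs)))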

  18. Evolution of groundwater chemistry along fault structures in sandstone

    NASA Astrophysics Data System (ADS)

    Dausse, A.; Guiheneuf, N.; Pierce, A. A.; Cherry, J. A.; Parker, B. L.

    2016-12-01

    Fluid-rock interaction across geological structures plays a major role in the evolution of groundwater chemistry and the physical properties of reservoirs. In particular, groundwater chemistry evolves toward different facies according to residence times, which can be linked to the hydraulic properties of the geological unit. In this study, we analyze groundwater samples collected at an 11 km² site located in southern California (USA) to evaluate the evolution of groundwater chemistry according to different geological structures. Major and minor elements were sampled during the same period from 40 wells located along the main structures in the northeast of the site, where major NE-SW trending faults and others oriented ESE-WNW are present in the sandstone Chatsworth formation. By analyzing the spatial distribution of ion concentrations at the site scale, several hydrochemical compartments (main- and sub-compartments) can be distinguished and are in agreement with structural and hydrological information. In particular, as previously observed from piezometric information, the shear zone fault serves as a barrier to groundwater flow and separates the site into two main compartments. In addition, the analysis along major faults oriented orthogonal to this shear zone (ESE-WNW) in the eastern part of the site shows an increase in mineralization following the hydraulic gradient. This salinization has been confirmed by ionic ratio and Gibbs plots and is attributed to fluid-rock interaction processes. In particular, groundwater chemistry seems to evolve from bicarbonate to sodium facies. Moreover, the gradient of concentrations varies depending on fault location and can be related to their hydraulic properties and hence to different characteristic times from point to point. To conclude, major faults across the site display different degrees of groundwater chemistry evolution, linked to their physical properties, which may in turn have a large impact on contaminant transport and attenuation.

  19. Paleoseismic Investigation of the Ranong and Khlong Marui faults, Chumphon Province, Southern Thailand

    NASA Astrophysics Data System (ADS)

    Fenton, C. H.; Sutiwanich, C.

    2005-12-01

    The Ranong and Khlong Marui faults are northeast-southwest trending structures in the Isthmus of Kra, southern Thailand, that apparently link the extensional regimes of the Mergui Basin in the Andaman Sea and the Gulf of Thailand. These faults are depicted commonly as strike-slip faults, acting as conjugate structures to the dominant northwest-southeast trending strike-slip faults, in Southeast Asia. These faults are parallel to the predominant structural grain in the Carboniferous rocks of peninsular Thailand. In addition, they appear to be bounding structures for several Tertiary basins, including the onshore parts of the Surat Thani basin and the offshore Chumphon basin. Initial remote sensing studies showed that both faults have relatively subdued geomorphic expressions. Field reconnaissance investigations indicated a lack of youthful tectonic geomorphology along the Khlong Marui fault and ambiguous evidence for recent movement along the Ranong fault. Fault exposures along both fault trends and on minor parallel faults in the region indicated that, rather than predominantly strike-slip motion, these faults have experienced up-to-the-west reverse movement. Because of its more youthful geomorphic expression, several sites along the Ranong fault were chosen for paleoseismic trenching. Initial trench exposures indicate an absence of Holocene movement. Some exposures indicate the possibility of Late Tertiary-Early Holocene vertical movement. These investigations are currently ongoing and we hope to report our conclusions at the Fall Meeting.

  20. Tree-ring 14C links seismic swarm to CO2 spike at Yellowstone, USA

    USGS Publications Warehouse

    Evans, William C.; Bergfeld, D.; McGeehin, J.P.; King, J.C.; Heasler, H.

    2010-01-01

    Mechanisms to explain swarms of shallow seismicity and inflation-deflation cycles at Yellowstone caldera (western United States) commonly invoke episodic escape of magma-derived brines or gases from the ductile zone, but no correlative changes in the surface efflux of magmatic constituents have ever been documented. Our analysis of individual growth rings in a tree core from the Mud Volcano thermal area within the caldera links a sharp ~25% drop in 14C to a local seismic swarm in 1978. The implied fivefold increase in CO2 emissions clearly associates swarm seismicity with upflow of magma-derived fluid and shows that pulses of magmatic CO2 can rapidly traverse the 5-km-thick brittle zone, even through Yellowstone's enormous hydrothermal reservoir. The 1978 event predates annual deformation surveys, but recognized connections between subsequent seismic swarms and changes in deformation suggest that CO2 might drive both processes. © 2010 Geological Society of America.

  1. Imaging the crustal structure of Haiti's transpressional fault system using seismicity and tomography

    NASA Astrophysics Data System (ADS)

    Possee, D.; Keir, D.; Harmon, N.; Rychert, C.; Rolandone, F.; Leroy, S. D.; Stuart, G. W.; Calais, E.; Boisson, D.; Ulysse, S. M. J.; Guerrier, K.; Momplaisir, R.; Prepetit, C.

    2017-12-01

    Oblique convergence of the Caribbean and North American plates has partitioned strain across an extensive transpressional fault system that bisects Haiti. Most recently, the 2010 MW 7.0 earthquake ruptured multiple thrust faults in southern Haiti. However, while the rupture mechanism has been well studied, how these faults are segmented and link to deformation across the plate boundary is still debated. Understanding the link between strain accumulation and faulting in Haiti is also key to future modelling of seismic hazards. To assess seismic activity and fault structures we used data from 31 broadband seismic stations deployed on Haiti for 16 months. Local earthquakes were recorded and hypocentre locations determined using a 1D velocity model. A high-quality subset of the data was then inverted using travel-time tomography for relocated hypocentres and 2D images of Vp and Vp/Vs crustal structure. Earthquake locations reveal two clusters of seismic activity: the first delineates faults associated with the 2010 earthquake and the second shows activity 100 km farther east along a thrust fault north of Lake Enriquillo (Dominican Republic). The velocity models show large variations in seismic properties across the plate boundary; shallow low-velocity zones with a 5-8% decrease in Vp and high Vp/Vs ratios of 1.85-1.95 correspond to sedimentary basins that form the low-lying terrain on Haiti. We also image a region with a 4-5% decrease in Vp and an increased Vp/Vs ratio of 1.80-1.85 dipping south to a depth of 20 km beneath southern Haiti. This feature matches the location of a major thrust fault and suggests a substantial damage zone around this fault. Beneath northern Haiti a transition to lower Vp/Vs values of 1.70-1.75 reflects a compositional change from mafic facies such as the Caribbean large igneous province in the south, to arc magmatic facies associated with the Greater Antilles arc in the north. Our seismic images are consistent with the fault system across

  2. Cholangiopathy and tumors of the pancreas, liver, and biliary tree in boys with X-linked immunodeficiency with hyper-IgM.

    PubMed

    Hayward, A R; Levy, J; Facchetti, F; Notarangelo, L; Ochs, H D; Etzioni, A; Bonnefoy, J Y; Cosyns, M; Weinberg, A

    1997-01-15

    We report an association between X-linked immunodeficiency with hyper-IgM (XHIM) and carcinomas affecting the liver, pancreas, biliary tree, and associated neuroectodermal endocrine cells. The tumors were fatal in eight of nine cases and in most instances were preceded by chronic cholangiopathy and/or cirrhosis. An additional group of subjects with XHIM had chronic inflammation of the liver or bile ducts but no malignancy. Many patients with XHIM were infected with cryptosporidia. CD40 is normally expressed on regenerating or inflamed bile duct epithelium. A CD40+ hepatocellular carcinoma cell line, HepG2, susceptible to cryptosporidia and CMV infection, became resistant when cell surface CD40 was cross-linked by a CD40 ligand fusion protein. Apoptosis was triggered in HepG2 cells if protein synthesis was blocked by cycloheximide or if the cells were infected by cryptosporidia. Ligation of CD40 on biliary epithelium may contribute to defense against infection by intracellular pathogens. We propose that the CD40 ligand mutations that cause XHIM deprive the biliary epithelium of one line of defense against intracellular pathogens and that malignant transformation in the biliary tree follows chronic infection or inflammation. The resulting tumors may then progress without check by an effective immune response. Patients with XHIM who have abnormal liver function tests should be considered at increased risk for cholangiopathy or malignancy.

  3. Seismic interpretation of the deep structure of the Wabash Valley Fault System

    USGS Publications Warehouse

    Bear, G.W.; Rupp, J.A.; Rudman, A.J.

    1997-01-01

    Interpretations of newly available seismic reflection profiles near the center of the Illinois Basin indicate that the Wabash Valley Fault System is rooted in a series of basement-penetrating faults. The fault system is composed predominantly of north-northeast-trending high-angle normal faults. The largest faults in the system bound the 22-km-wide, 40-km-long Grayville Graben. Structure contour maps drawn on the base of the Mount Simon Sandstone (Cambrian System) and a deeper pre-Mount Simon horizon show dip-slip displacements totaling at least 600 meters across the New Harmony fault. In contrast to previous interpretations, the N-S extent of significant fault offsets is restricted to a region north of 38° latitude and south of 38.35° latitude. This suggests that the graben is not a NE extension of the structural complex composed of the Rough Creek Fault System and the Reelfoot Rift as previously interpreted. Structural complexity on the graben floor also decreases to the south. Structural trends north of 38° latitude are offset laterally across several large faults, indicating strike-slip motions of 2 to 4 km. Some of the major faults are interpreted to penetrate to depths of 7 km or more. Correlation of these faults with steep potential field gradients suggests that the fault positions are controlled by major lithologic contacts within the basement and that the faults may extend into the depth range where earthquakes are generated, revealing a potential link between specific faults and recently observed low-level seismicity in the area.

  4. Linking clinical measurements and kinematic gait patterns of toe-walking using fuzzy decision trees.

    PubMed

    Armand, Stéphane; Watelain, Eric; Roux, Emmanuel; Mercier, Moïse; Lepoutre, François-Xavier

    2007-03-01

    Toe-walking is one of the most prevalent gait deviations and has been linked to many diseases. Three major ankle kinematic patterns have been identified in toe-walkers, but the relationships between the causes of toe-walking and these patterns remain unknown. This study aims to identify these relationships. Clearly, such knowledge would increase our understanding of this gait deviation, and could help clinicians plan treatment. The large quantity of data provided by gait analysis often makes interpretation a difficult task. Artificial intelligence techniques were used in this study to facilitate interpretation as well as to decrease subjective interpretation. Of the 716 limbs evaluated, 240 showed signs of toe-walking and met inclusion criteria. The ankle kinematic pattern of the evaluated limbs during gait was assigned to one of three toe-walking pattern groups to build the training data set. Toe-walker clinical measurements (range of movement, muscle spasticity and muscle strength) were coded in fuzzy modalities, and fuzzy decision trees were induced to create intelligible rules allowing toe-walkers to be assigned to one of the three groups. A stratified 10-fold cross validation situated the classification accuracy at 81%. Twelve rules depicting the causes of toe-walking were selected, discussed and characterized using kinematic, kinetic and EMG charts. This study proposes an original approach to linking the possible causes of toe-walking with gait patterns.
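
    As an illustration of the validation step described above (a decision-tree classifier over coded clinical measurements, scored by stratified 10-fold cross-validation), the sketch below uses scikit-learn's crisp DecisionTreeClassifier on synthetic data; the feature coding, group labels, and data are hypothetical stand-ins, and the fuzzy-tree induction of the study itself is not reproduced.

```python
# Minimal sketch of the cross-validated classification workflow: a decision
# tree assigning limbs to one of three toe-walking pattern groups, scored
# with stratified 10-fold cross-validation. Synthetic data and a crisp
# scikit-learn tree stand in for the paper's fuzzy decision trees.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_limbs = 240  # number of limbs meeting the inclusion criteria in the study

# Hypothetical coded clinical measurements: range of movement, spasticity, strength.
X = rng.normal(size=(n_limbs, 3))
# Hypothetical assignment to one of three toe-walking pattern groups,
# loosely driven by the measurements so the tree has something to learn.
y = np.digitize(X @ np.array([1.0, 0.5, -0.5]), bins=[-0.5, 0.5])

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean stratified 10-fold accuracy: {scores.mean():.2f}")
```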

  5. Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine

    NASA Technical Reports Server (NTRS)

    Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.

    2009-01-01

    The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
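
    The train/test protocol described above can be sketched as follows, with scikit-learn's CART-style tree standing in for C4.5 and random numbers standing in for the DRTM sensor channels; the leak classes and sensor model are hypothetical, and the confusion matrix simply mirrors the kind of isolation analysis the abstract reports.

```python
# Sketch: train a decision tree on labeled "simulations", test on unseen
# runs, and inspect which (hypothetical) leak locations get confused.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
classes = ["nominal", "leak_1", "leak_2", "leak_3", "leak_4", "leak_5"]

def simulate(label, n=200, n_sensors=8):
    """Fake sensor snapshots: each fault class shifts one sensor's mean."""
    offset = np.zeros(n_sensors)
    if label != "nominal":
        offset[int(label[-1]) - 1] = 1.5  # a leak perturbs one sensor most
    return rng.normal(offset, 1.0, size=(n, n_sensors))

X_train = np.vstack([simulate(c) for c in classes])
y_train = np.repeat(classes, 200)
X_test = np.vstack([simulate(c, n=50) for c in classes])
y_test = np.repeat(classes, 50)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(confusion_matrix(y_test, tree.predict(X_test), labels=classes))
```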

  6. Hydromechanical heterogeneities of a mature fault zone: impacts on fluid flow.

    PubMed

    Jeanne, Pierre; Guglielmi, Yves; Cappa, Frédéric

    2013-01-01

    In this paper, fluid flow is examined for a mature strike-slip fault zone with anisotropic permeability and internal heterogeneity. The hydraulic properties of the fault zone were first characterized in situ by microgeophysical (VP and σc ) and rock-quality measurements (Q-value) performed along a 50-m long profile perpendicular to the fault zone. Then, the local hydrogeological context of the fault was modified to conduct a water-injection test. The resulting fluid pressures and flow rates through the different fault-zone compartments were then analyzed with a two-phase fluid-flow numerical simulation. Fault hydraulic properties estimated from the injection test signals were compared to the properties estimated from the multiscale geological approach. We found that (1) the microgeophysical measurements that we made yield valuable information on the porosity and the specific storage coefficient within the fault zone and (2) the Q-value method highlights significant contrasts in permeability. Fault hydrodynamic behavior can be modeled by a permeability tensor rotation across the fault zone and by a storativity increase. The permeability tensor rotation is linked to the modification of the preexisting fracture properties and to the development of new fractures during the faulting process, whereas the storativity increase results from the development of micro- and macrofractures that lower the fault-zone stiffness and allows an increased extension of the pore space within the fault damage zone. Finally, heterogeneities internal to the fault zones create complex patterns of fluid flow that reflect the connections of paths with contrasting properties. © 2013, The Author(s). Ground Water © 2013, National Ground Water Association.

  7. Dipping San Andreas and Hayward faults revealed beneath San Francisco Bay, California

    USGS Publications Warehouse

    Parsons, T.; Hart, P.E.

    1999-01-01

    The San Francisco Bay area is crossed by several right-lateral strike-slip faults of the San Andreas fault zone. Fault-plane reflections reveal that two of these faults, the San Andreas and Hayward, dip toward each other below seismogenic depths at 60° and 70°, respectively, and persist to the base of the crust. Previously, a horizontal detachment linking the two faults in the lower crust beneath San Francisco Bay was proposed. The only near-vertical-incidence reflection data available prior to the most recent experiment in 1997 were recorded parallel to the major fault structures. When the new reflection data recorded orthogonal to the faults are compared with the older data, the highest-amplitude reflections show clear variations in moveout with recording azimuth. In addition, reflection times consistently increase with distance from the faults. If the reflectors were horizontal, reflection moveout would be independent of azimuth, and reflection times would be independent of distance from the faults. The best-fit solution from three-dimensional traveltime modeling is a pair of high-angle dipping surfaces. The close correspondence of these dipping structures with the San Andreas and Hayward faults leads us to conclude that they are the faults beneath seismogenic depths. If the faults retain their observed dips, they would converge into a single zone in the upper mantle ~45 km beneath the surface, although we can only observe them in the crust.

  8. Bearing faults identification and resonant band demodulation based on wavelet de-noising methods and envelope analysis

    NASA Astrophysics Data System (ADS)

    Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali

    2017-07-01

    The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis. Assessing fault severity is a challenging task. The Wavelet Transform (WT), as a multiresolution analysis tool, is able to compromise between the time and frequency information in the signals and serves as a de-noising method. The CWT scaling function gives different resolutions to discretely sampled signals, such as very fine resolution at lower scales but coarser resolution at higher scales; however, the computational cost increases because different signal resolutions must be produced. The DWT has a lower computational cost because its dilation function allows the signals to be decomposed through a tree of low- and high-pass filters without further analysing the high-frequency components. In this paper, a method for bearing fault identification is presented by combining the Continuous Wavelet Transform (CWT) and Discrete Wavelet Transform (DWT) with envelope analysis for bearing fault diagnosis. The experimental data were obtained from Case Western Reserve University. The analysis results showed that the proposed method is effective in detecting bearing faults, identifying the exact fault location, and assessing severity, especially for inner race and outer race faults.
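
    A minimal sketch of the denoise-then-envelope pipeline described above is given below, assuming PyWavelets for the discrete wavelet de-noising and a Hilbert-transform envelope spectrum; the synthetic impulsive signal and the 97 Hz "fault frequency" are illustrative stand-ins for the Case Western Reserve bearing data, not the authors' exact processing chain.

```python
# Sketch: wavelet-threshold de-noising followed by an envelope spectrum.
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 12_000                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
fault_freq = 97.0                             # hypothetical impact repetition rate
# Amplitude-modulated resonance (impulsive bursts) buried in noise.
signal = np.sin(2 * np.pi * 3000 * t) * (1 + np.cos(2 * np.pi * fault_freq * t))
signal += 0.8 * np.random.default_rng(0).normal(size=t.size)

# Discrete wavelet de-noising: soft-threshold the detail coefficients.
coeffs = pywt.wavedec(signal, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(signal.size))
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: signal.size]

# Envelope analysis: the envelope spectrum should peak near fault_freq.
envelope = np.abs(hilbert(denoised))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print(f"dominant envelope frequency: {freqs[spectrum.argmax()]:.1f} Hz")
```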

  9. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    USGS Publications Warehouse

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  10. A Self-Stabilizing Hybrid Fault-Tolerant Synchronization Protocol

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2015-01-01

    This paper presents a strategy for solving the Byzantine general problem for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults with various severities, including any number of arbitrary (Byzantine) faulty nodes. The strategy consists of two parts: first, converting Byzantine faults into symmetric faults, and second, using a proven symmetric-fault tolerant algorithm to solve the general case of the problem. A protocol (algorithm) is also presented that tolerates symmetric faults, provided that there are more good nodes than faulty ones. The solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that the interactions with other nodes are restricted to defined links and interfaces. The solution does not rely on assumptions about the initial state of the system, and no central clock nor centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. A mechanical verification of a proposed protocol is also presented. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period.

  11. Active Fault Topography and Fault Outcrops in the Central Part of the Nukumi fault, the 1891 Nobi Earthquake Fault System, Central Japan

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Ueta, K.; Inoue, D.; Aoyagi, Y.; Yanagida, M.; Ichikawa, K.; Goto, N.

    2010-12-01

    It is important to evaluate the magnitude of earthquakes caused by multiple active faults, taking their simultaneous rupture into account. The simultaneity of adjacent active faults is often judged on the basis of geometric distances, except where paleoseismic records are known. Since 2009 we have been studying the step area between the Nukumi fault and the Neodani fault, which ruptured consecutively in the 1891 Nobi earthquake. The purpose of this study is to develop an improved technique for evaluating the simultaneity of adjacent active faults, in addition to the paleoseismic record and the geometric distance. Geomorphological, geological and reconnaissance microearthquake surveys were conducted. The present work is intended to clarify the distribution of tectonic geomorphology along the Nukumi and Neodani faults through high-resolution interpretation of airborne LiDAR DEMs and aerial photographs, together with field surveys of outcrops and location surveys. The study area comprises the southeastern Nukumi fault and the northwestern Neodani fault. We interpreted the DEM using shaded relief maps and stereoscopic bird's-eye views made from 2-m mesh DEM data obtained with the airborne laser scanner of Kokusai Kogyo Co., Ltd. An aerial photographic survey using 1/16,000-scale photographs was used to confirm the DEM interpretation. The topographic survey revealed continuous tectonic topography, namely left-lateral displacement of ridge and valley lines and reverse scarplets, along the Nukumi and Neodani faults. From Ogotani, 2 km southeast of Nukumi pass, which previous work placed at the southeastern end of the surface rupture along the Nukumi fault, to Neooppa, 9 km southeast of Nukumi pass, we can interpret left-lateral topographies and small uphill-facing fault scarps on the terrace surface by detailed DEM investigation. These topographies were unrecognized in the aerial photographic survey because of heavy vegetation. We have found several new

  12. Upper crustal fault reactivation and the potential of triggered earthquakes on the Atacama Fault System, N-Chile

    NASA Astrophysics Data System (ADS)

    Victor, Pia; Ewiak, Oktawian; Thomas, Ziegenhagen; Monika, Sobiesiak; Bernd, Schurr; Gabriel, Gonzalez; Onno, Oncken

    2016-04-01

    The Atacama Fault System (AFS) is an active trench-parallel fault system, located in the forearc of N-Chile directly above the subduction zone interface. Due to its well-exposed position in the hyper arid forearc of N-Chile it is the perfect target to investigate the interaction between the deformation cycle in the overriding forearc and the subduction zone seismic cycle of the underlying megathrust. Although the AFS and large parts of the upper crust are devoid of any noteworthy seismicity, at least three M=7 earthquakes in the past 10 ky have been documented in the paleoseismological record, demonstrating the potential of large events in the future. We apply a two-fold approach to explore fault activation and reactivation patterns through time and to investigate the triggering potential of upper crustal faults. 1) A new methodology using high-resolution topographic data allows us to investigate the number of past earthquakes for any given segment of the fault system as well as the amount of vertical displacement of the last increment. This provides us with a detailed dataset of past earthquake rupture of upper plate faults which is potentially linked to large subduction zone earthquakes. 2) The IPOC Creepmeter array (http://www.ipoc-network.org/index.php/observatory/creepmeter.html) provides us with high-resolution time series of fault displacement accumulation for 11 stations along the 4 most active branches of the AFS. This array monitors the displacement across the fault with 2 samples/min with a resolution of 1μm. Collocated seismometers record the seismicity at two of the creepmeters, whereas the regional seismicity is provided by the IPOC Seismological Networks. Continuous time series of the creepmeter stations since 2009 show that the shallow segments of the fault do not creep permanently. Instead the accumulation of permanent deformation occurs by triggered slip caused by local or remote earthquakes. The 2014 Mw=8.2 Pisagua Earthquake, located close to

  13. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    A fault-injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey is presented of validation methodologies. The need for fault insertion based on validation methodologies is demonstrated. The origins and models of faults, and motivation for the FIAT concept are reviewed. FIAT employs a validation methodology which builds confidence in the system through first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults or the manifestation of faults to be inserted by either seeding faults into memory or triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving insertion of faults. There is a common system interface which allows ease of use to decrease experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
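
    As a toy illustration of software-implemented fault insertion in the spirit of the approach described above, the sketch below flips a single bit in an in-memory buffer and lets a simple checksum act as the error-detection mechanism; the buffer contents and detector are hypothetical and do not represent FIAT's actual interfaces.

```python
# Toy sketch: seed a fault into memory (one bit flip) and detect it with a
# checksum. This only illustrates the idea of software-implemented fault
# insertion; FIAT's real mechanisms and interfaces differ.
import random

def inject_bit_flip(buffer: bytearray, rng: random.Random) -> int:
    """Flip one random bit in the buffer and return the affected byte index."""
    i = rng.randrange(len(buffer))
    bit = rng.randrange(8)
    buffer[i] ^= 1 << bit
    return i

def checksum(buffer: bytes) -> int:
    """Trivial 16-bit additive checksum used as the error detector."""
    return sum(buffer) & 0xFFFF

rng = random.Random(42)
data = bytearray(b"nominal sensor frame payload")
baseline = checksum(data)

faulted_byte = inject_bit_flip(data, rng)
detected = checksum(data) != baseline
print(f"fault injected at byte {faulted_byte}, detected: {detected}")
```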

  14. Understory plant communities and the functional distinction between savanna trees, forest trees, and pines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veldman, Joseph W.; Mattingly, W. Brett; Brudvig, Lars A.

    Although savanna trees and forest trees are thought to represent distinct functional groups with different effects on ecosystem processes, few empirical studies have examined these effects. In particular, it remains unclear if savanna and forest trees differ in their ability to coexist with understory plants, which comprise the majority of plant diversity in most savannas. We used structural equation modeling (SEM) and data from 157 sites across three locations in the southeastern United States to understand the effects of broadleaf savanna trees, broadleaf forest trees, and pine trees on savanna understory plant communities. After accounting for underlying gradients in fire frequency and soil moisture, abundances (i.e., basal area and stem density) of forest trees and pines, but not savanna trees, were negatively correlated with the cover and density (i.e., local-scale species richness) of C4 graminoid species, a defining savanna understory functional group that is linked to ecosystem flammability. In analyses of the full understory community, abundances of trees from all functional groups were negatively correlated with species density and cover. For both the C4 and full communities, fire frequency promoted understory plants directly, and indirectly by limiting forest tree abundance. There was little indirect influence of fire on the understory mediated through savanna trees and pines, which are more fire tolerant than forest trees. We conclude that tree functional identity is an important factor that influences overstory tree relationships with savanna understory plant communities. In particular, distinct relationships between trees and C4 graminoids have implications for grass-tree coexistence and vegetation-fire feedbacks that maintain savanna environments and their associated understory plant diversity.

  15. One tree to link them all: a phylogenetic dataset for the European tetrapoda.

    PubMed

    Roquet, Cristina; Lavergne, Sébastien; Thuiller, Wilfried

    2014-08-08

    With the ever-increasing availability of phylogenetically informative data, the last decade has seen an upsurge of ecological studies incorporating information on evolutionary relationships among species. However, the detailed species-level phylogenies necessary for comprehensive large-scale eco-phylogenetic analyses are still lacking for many large groups and regions. Here, we provide a dataset of 100 dated phylogenetic trees for all European tetrapods based on a mixture of supermatrix and supertree approaches. Phylogenetic inference was performed separately for each of the main Tetrapoda groups of Europe except mammals (i.e. amphibians, birds, squamates and turtles) by means of maximum likelihood (ML) analyses of supermatrices, applying a tree constraint at the family (amphibians and squamates) or order (birds and turtles) level based on consensus knowledge. For each group, we inferred 100 ML trees to be able to provide a phylogenetic dataset that accounts for phylogenetic uncertainty, and assessed node support with bootstrap analyses. Each tree was dated using penalized likelihood and fossil calibration. The trees obtained were well supported by existing knowledge and previous phylogenetic studies. For mammals, we modified the most complete supertree dataset available in the literature to include a recent update of the Carnivora clade. As a final step, we merged the phylogenetic trees of all groups to obtain a set of 100 phylogenetic trees for all European Tetrapoda species for which data were available (91%). We provide this phylogenetic dataset (100 chronograms) for the purpose of comparative analyses, macro-ecological or community ecology studies aiming to incorporate phylogenetic information while accounting for phylogenetic uncertainty.

  16. The Local Wind Pump for Marginal Societies in Indonesia: A Perspective of Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Gunawan, Insan; Taufik, Ahmad

    2007-10-01

    There have been many efforts to reduce the investment cost of well-established hybrid wind pumps applied to rural areas. A recent study on a local wind pump (LWP) for marginal societies in Indonesia (traditional farmers, peasants and tribes) was one such effort, reporting a new application area. The objectives of the study were to measure the reliability of the LWP under fluctuating wind intensity and low wind speed, to account for the economic constraints of a prolonged economic crisis and the availability of local components for the LWP, and to sustain the economic productivity (agricultural output) of the society. In the study, a fault tree analysis (FTA) was deployed as one of three methods used for assessing the LWP. In this article, the FTA is discussed thoroughly in order to improve the performance of the LWP applied in the dry-land watering system of Mesuji district, Lampung province, Indonesia. In the early stage, all local components of the LWP were classified by function into four groups. All sub-components of each group were then subjected to the failure modes of the FTA, namely (1) primary failure modes, (2) secondary failure modes and (3) common failure modes. In the data processing stage, an available software package, ITEM, was deployed. It was observed that the component achieved a relatively long operational life of 1,666 hours. Moreover, to enhance the performance of the LWP, the maintenance schedule, the critical sub-components prone to failure, and overhaul priorities were identified quantitatively. From the year-long pilot project, it can be concluded that the LWP is a reliable product for societies seeking to enhance their economic productivity.
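
    A minimal fault-tree evaluation in the spirit of the analysis above is sketched below, assuming independent basic events and purely hypothetical component groupings and failure probabilities for the wind pump; it only demonstrates how AND/OR gate probabilities combine into a top-event probability and is not the ITEM software result.

```python
# Minimal fault-tree gate arithmetic under the independence assumption.
def and_gate(*probs):
    """Probability that all independent input events occur."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    """Probability that at least one independent input event occurs."""
    p = 1.0
    for x in probs:
        p *= (1.0 - x)
    return 1.0 - p

# Hypothetical per-year failure probabilities for wind-pump components.
p_blade, p_hub = 0.02, 0.01        # rotor parts: either failing stops the rotor (OR)
p_lift_a, p_lift_b = 0.04, 0.04    # two redundant lift assemblies: both must fail (AND)

rotor_fails = or_gate(p_blade, p_hub)
lift_fails = and_gate(p_lift_a, p_lift_b)
top_event = or_gate(rotor_fails, lift_fails)   # "pump fails to deliver water"
print(f"top-event probability: {top_event:.6f}")
```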

  17. A link representation for gravity amplitudes

    NASA Astrophysics Data System (ADS)

    He, Song

    2013-10-01

    We derive a link representation for all tree amplitudes in supergravity, from a recent conjecture by Cachazo and Skinner. The new formula explicitly writes amplitudes as contour integrals over constrained link variables, with an integrand naturally expressed in terms of determinants, or equivalently tree diagrams. Important symmetries of the amplitude, such as supersymmetry, parity and (partial) permutation invariance, are kept manifest in the formulation. We also comment on rewriting the formula in a GL(k)-invariant manner, which may serve as a starting point for the generalization to possible Grassmannian contour integrals.

  18. Robot Position Sensor Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.

    1997-01-01

    Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors which can affect joint design. A new method is proposed that utilizes analytical redundancy to allow for continued operation during joint position sensor failure. Joint torque sensors are used with a virtual passive torque controller to make the robot joint stable without position feedback and improve position tracking performance in the presence of unknown link dynamics and end-effector loading. Two Cartesian accelerometer based methods are proposed to determine the position of the joint. The joint specific position determination method utilizes two triaxial accelerometers attached to the link driven by the joint with the failed position sensor. The joint specific method is not computationally complex and the position error is bounded. The system wide position determination method utilizes accelerometers distributed on different robot links and the end-effector to determine the position of sets of multiple joints. The system wide method requires fewer accelerometers than the joint specific method to make all joint position sensors fault tolerant but is more computationally complex and has lower convergence properties. Experiments were conducted on a laboratory manipulator. Both position determination methods were shown to track the actual position satisfactorily. A controller using the position determination methods and the virtual passive torque controller was able to servo the joints to a desired position during position sensor failure.

  19. Simulated cavity tree dynamics under alternative timber harvest regimes

    Treesearch

    Zhaofei Fan; Stephen R Shifley; Frank R Thompson; David R Larsen

    2004-01-01

    We modeled cavity tree abundance on a landscape as a function of forest stand age classes and as a function of aggregate stand size classes. We explored the impact of five timber harvest regimes on cavity tree abundance on a 3261 ha landscape in southeast Missouri, USA, by linking the stand level cavity tree distribution model to the landscape age structure simulated by...

  20. An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution

    NASA Astrophysics Data System (ADS)

    Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan

    2013-04-01

    The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies, including smoothed seismicity approaches. Smoothed seismicity represents an alternative concept to express the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subduction zones. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: the first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density); the second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude, assuming that (1) the occurrence of past seismicity is a good proxy to forecast the occurrence of future seismicity and (2) future large-magnitude events are more likely to occur in the vicinity of known faults. Consequently
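
    Two core ingredients of the forecast described above can be sketched in a few lines: a maximum-likelihood b-value for a Gutenberg-Richter distribution (Aki/Utsu estimator with a half-bin correction) and Gaussian-kernel smoothing of epicentres onto a grid. The catalogue below is synthetic and the fixed kernel bandwidth replaces the variable (adaptive) smoothing used in the actual model.

```python
# Sketch: ML b-value estimate and kernel-smoothed location probability.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic catalogue: Gutenberg-Richter magnitudes, binned to dM, complete above Mc.
Mc, dM, b_true = 4.0, 0.1, 1.0
mags = (Mc - dM / 2) + rng.exponential(scale=1 / (b_true * np.log(10)), size=2000)
mags = np.round(mags / dM) * dM            # magnitudes reported in dM bins

# Aki/Utsu maximum-likelihood b-value with the half-bin correction.
b_ml = np.log10(np.e) / (mags.mean() - (Mc - dM / 2))
print(f"ML b-value: {b_ml:.2f} (true value {b_true})")

# Gaussian-kernel smoothing of epicentres onto a small grid (degrees).
lon = rng.uniform(10.0, 12.0, size=mags.size)
lat = rng.uniform(44.0, 46.0, size=mags.size)
glon, glat = np.meshgrid(np.linspace(10.0, 12.0, 50), np.linspace(44.0, 46.0, 50))
h = 0.1                                     # fixed kernel bandwidth in degrees
density = np.zeros_like(glon)
for x, y in zip(lon, lat):
    density += np.exp(-((glon - x) ** 2 + (glat - y) ** 2) / (2.0 * h ** 2))
density /= density.sum()                    # normalised location probability per cell
print(f"peak cell probability: {density.max():.4f}")
```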

  1. Fault finder

    DOEpatents

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.

  2. Exploring connections between trees and human health

    Treesearch

    Geoffrey Donovan; Marie Oliver

    2014-01-01

    Humans have intuitively understood the value of trees to their physical and mental health since the beginning of recorded time. A scientist with the Pacific Northwest Research Station wondered if such a link could be scientifically validated. His research team took advantage of an infestation of emerald ash borer, an invasive pest that kills ash trees, to conduct a...

  3. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    NASA Astrophysics Data System (ADS)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  4. San Antonio relay ramp: Area of stratal continuity between large-displacement barrier faults of the Edwards aquifer and Balcones fault zone, central Texas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, E.W.

    1996-09-01

    The San Antonio relay ramp, a gentle southwest-dipping monocline, formed between the tips of two en echelon master faults having maximum throws of >240 m. Structural analysis of this relay ramp is important to studies of Edwards aquifer recharge and ground-water flow because the ramp is an area of relatively good stratal continuity linking the outcrop belt recharge zone and unconfined aquifer with the downdip confined aquifer. Part of the relay ramp lies within the aquifer recharge zone and is crossed by several southeast-draining creeks, including Salado, Cibolo, and Comal Creeks, that supply water to the ramp recharge area. This feature is an analog for similar structures within the aquifer and for potential targets for hydrocarbons in other Gulf Coast areas. Defining the ramp is an ~13-km-wide right step of the Edwards Group outcrop belt and the en echelon master faults that bound the ramp. The master faults strike N55-75°E, and maximum displacement exceeds the ~165-m thickness of the Edwards Group strata. The faults therefore probably serve as barriers to Edwards ground-water flow. Within the ramp, tilted strata gently dip southwestward at ~5 m/km, and the total structural relief along the ramp's southwest-trending axis is <240 m. The ramp's internal framework is defined by three fault blocks that are ~4 to ~6 km wide and are bound by northeast-striking faults having maximum throws between 30 and 150 m. Within the fault blocks, local areas of high fracture permeability may exist where smaller faults and joints are well connected.

  5. Active faulting, earthquakes, and restraining bend development near Kerman city in southeastern Iran

    NASA Astrophysics Data System (ADS)

    Walker, Richard Thomas; Talebian, Morteza; Saiffori, Sohei; Sloan, Robert Alastair; Rasheedi, Ali; MacBean, Natasha; Ghassemi, Abbas

    2010-08-01

    We provide descriptions of strike-slip and reverse faulting, active within the late Quaternary, in the vicinity of Kerman city in southeastern Iran. The faults accommodate north-south, right-lateral, shear between central Iran and the Dasht-e-Lut depression. The regions that we describe have been subject to numerous earthquakes in the historical and instrumental periods, and many of the faults that are documented in this paper constitute hazards for local populations, including the city of Kerman itself (population ˜200,000). Faults to the north and east of Kerman are associated with the transfer of slip from the Gowk to the Kuh Banan right-lateral faults across a 40 km-wide restraining bend. Faults south and west of the city are associated with oblique slip on the Mahan and Jorjafk systems. The patterns of faulting observed along the Mahan-Jorjafk system, the Gowk-Kuh Banan system, and also the Rafsanjan-Rayen system further to the south, appear to preserve different stages in the development of these oblique-slip fault systems. We suggest that the faulting evolves through time. Topography is initially generated on oblique slip faults (as is seen on the Jorjafk fault). The shortening component then migrates to reverse faults situated away from the high topography whereas strike-slip continues to be accommodated in the high, mountainous, regions (as is seen, for example, on the Rafsanjan fault). The reverse faults may then link together and eventually evolve into new, through-going, strike-slip faults in a process that appears to be occurring, at present, in the bend between the Gowk and Kuh Banan faults.

  6. Response to comment on "No late Quaternary strike-slip motion along the northern Karakoram fault"

    NASA Astrophysics Data System (ADS)

    Robinson, Alexander C.; Owen, Lewis A.; Chen, Jie; Schoenbohm, Lindsay M.; Hedrick, Kathryn A.; Blisniuk, Kimberly; Sharp, Warren D.; Imrecke, Daniel B.; Li, Wenqiao; Yuan, Zhaode; Caffee, Marc W.; Mertz-Kraus, Regina

    2016-06-01

    In their comment on "No late Quaternary strike-slip motion along the northern Karakoram fault", while Chevalier et al. (2016) do not dispute any of the results or interpretations regarding our observations along the main strand of the northern Karakoram fault, they make several arguments as to why they interpret the Kongur Shan Extensional System (KES) to be kinematically linked to the Karakoram fault. These arguments center around how an "active" fault is defined, how slip on segments of the KES may be compatible with dextral shear related to continuation of the Karakoram fault, and suggestions as to how the two fault systems might still be connected. While we appreciate that there are still uncertainties in the regional geology, we address these comments and show that their arguments are inconsistent with all available data, known geologic relationships, and basic kinematics.

  7. Study on vibration characteristics and fault diagnosis method of oil-immersed flat wave reactor in Arctic area converter station

    NASA Astrophysics Data System (ADS)

    Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang

    2017-10-01

    Based on long-term vibration monitoring of the No. 2 oil-immersed flat wave reactor in the ±500 kV converter station in East Mongolia, the vibration signals in the normal state and in the core loose fault state were saved. Through time-frequency analysis of the signals, the vibration characteristics of the core loose fault were obtained, and a fault diagnosis method based on the dual-tree complex wavelet transform (DT-CWT) and support vector machine (SVM) was proposed. The vibration signals were analyzed by DT-CWT, and the energy entropy of the vibration signals was taken as the feature vector; the support vector machine was used to train and test the feature vector, and accurate identification of the core loose fault of the flat wave reactor was realized. Through the identification of many groups of normal and core loose fault state vibration signals, the diagnostic accuracy reached 97.36%. The effectiveness and accuracy of the method in the fault diagnosis of the flat wave reactor core are verified.
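
    The feature/classifier pairing described above can be sketched as follows, with the energy entropy of ordinary DWT sub-bands (PyWavelets) standing in for the dual-tree complex wavelet features and an SVM performing the normal-versus-core-loose classification; the two synthetic vibration classes and their spectral content are hypothetical.

```python
# Sketch: wavelet sub-band energy/entropy features fed to an SVM classifier.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def energy_entropy_features(signal, wavelet="db4", level=4):
    """Relative energy of each DWT sub-band plus the Shannon entropy of that distribution."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return np.append(p, entropy)

def make_signal(faulty, n=2048, fs=8192):
    """Synthetic vibration: base tone plus, for the 'faulty' class, sparse high-frequency bursts."""
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=n)
    if faulty:
        x += 0.5 * np.sin(2 * np.pi * 900 * t) * (rng.random(n) > 0.95)
    return x

X = np.array([energy_entropy_features(make_signal(f)) for f in [0, 1] * 100])
y = np.array([0, 1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```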

  8. Fault zone hydrogeology

    NASA Astrophysics Data System (ADS)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults to fluid flow patterns remains a challenge and requires a multidisciplinary research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the discipline of structural geology and hydrogeology to co-evolve and

  9. Faulting and hydration of the Juan de Fuca plate system

    NASA Astrophysics Data System (ADS)

    Nedimović, Mladen R.; Bohnenstiehl, DelWayne R.; Carbotte, Suzanne M.; Pablo Canales, J.; Dziak, Robert P.

    2009-06-01

    Multichannel seismic observations provide the first direct images of crustal scale normal faults within the Juan de Fuca plate system and indicate that brittle deformation extends up to ~ 200 km seaward of the Cascadia trench. Within the sedimentary layering steeply dipping faults are identified by stratigraphic offsets, with maximum throws of 110 ± 10 m found near the trench. Fault throws diminish both upsection and seaward from the trench. Long-term throw rates are estimated to be 13 ± 2 mm/kyr. Faulted offsets within the sedimentary layering are typically linked to larger offset scarps in the basement topography, suggesting reactivation of the normal fault systems formed at the spreading center. Imaged reflections within the gabbroic igneous crust indicate shallowing fault dips at depth. These reflections require local alteration to produce an impedance contrast, indicating that the imaged fault structures provide pathways for fluid transport and hydration. As the depth extent of imaged faulting within this young and sediment-insulated oceanic plate is primarily limited to approximately Moho depths, fault-controlled hydration appears to be largely restricted to crustal levels. If dehydration embrittlement is an important mechanism for triggering intermediate-depth earthquakes within the subducting slab, then the limited occurrence rate and magnitude of intraslab seismicity at the Cascadia margin may in part be explained by the limited amount of water embedded into the uppermost oceanic mantle prior to subduction. The distribution of submarine earthquakes within the Juan de Fuca plate system indicates that propagator wake areas are likely to be more faulted and therefore more hydrated than other parts of this plate system. However, being largely restricted to crustal levels, this localized increase in hydration generally does not appear to have a measurable effect on the intraslab seismicity along most of the subducted propagator wakes at the Cascadia margin.

  10. Fault diagnosis

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  11. Strike-slip faulting in the Inner California Borderlands, offshore Southern California.

    NASA Astrophysics Data System (ADS)

    Bormann, J. M.; Kent, G. M.; Driscoll, N. W.; Harding, A. J.; Sahakian, V. J.; Holmes, J. J.; Klotsko, S.; Kell, A. M.; Wesnousky, S. G.

    2015-12-01

    eastern margin of Avalon Knoll, where the fault is spatially coincident and potentially linked with the San Pedro Basin fault (SPBF). Kinematic linkage between the SDTF and the SPBF increases the potential rupture length for earthquakes on either fault and may allow events nucleating on the SDTF to propagate much closer to the LA Basin.

  12. AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment

    DTIC Science & Technology

    2014-10-01

    Only fragments of this record are available. The table of contents lists mappings to the OpenFTA format file, mappings to a generic XML format, and AADL and FTA mapping rules. The safety analyses discussed include PSSA, System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs).

  13. Fault Deformation and Segmentation of the Newport-Inglewood Rose Canyon, and San Onofre Trend Fault Systems from New High-Resolution 3D Seismic Imagery

    NASA Astrophysics Data System (ADS)

    Holmes, J. J.; Driscoll, N. W.; Kent, G. M.

    2016-12-01

    The Inner California Borderlands (ICB) is situated off the coast of southern California and northern Baja. The structural and geomorphic characteristics of the area record a middle Oligocene transition from subduction to microplate capture along the California coast. Marine stratigraphic evidence shows large-scale extension and rotation overprinted by modern strike-slip deformation. Geodetic and geologic observations indicate that approximately 6-8 mm/yr of Pacific-North American relative plate motion is accommodated by offshore strike-slip faulting in the ICB. The farthest inshore fault system, the Newport-Inglewood Rose Canyon (NIRC) Fault is a dextral strike-slip system that is primarily offshore for approximately 120 km from San Diego to the San Joaquin Hills near Newport Beach, California. Based on trenching and well data, the NIRC Fault Holocene slip rate is 1.5-2.0 mm/yr to the south and 0.5-1.0 mm/yr along its northern extent. An earthquake rupturing the entire length of the system could produce an Mw 7.0 earthquake or larger. West of the main segments of the NIRC Fault is the San Onofre Trend (SOT) along the continental slope. Previous work concluded that this is part of a strike-slip system that eventually merges with the NIRC Fault. Others have interpreted this system as deformation associated with the Oceanside Blind Thrust fault purported to underlie most of the region. In late 2013, we acquired the first high-resolution 3D Parallel Cable (P-Cable) seismic surveys of the NIRC and SOT faults as part of the Southern California Regional Fault Mapping project aboard the R/V New Horizon. Analysis of these data volumes provides important new insights and constraints on the fault segmentation and transfer of deformation. Based on this new data, we've mapped several small fault strands associated with the SOT that appear to link up with a westward jog in right-lateral fault splays of the NIRC Fault on the shelf and then narrowly radiate southwards. Our

  14. A New Kinematic Model for Polymodal Faulting: Implications for Fault Connectivity

    NASA Astrophysics Data System (ADS)

    Healy, D.; Rizzo, R. E.

    2015-12-01

    Conjugate, or bimodal, fault patterns dominate the geological literature on shear failure. Based on Anderson's (1905) application of the Mohr-Coulomb failure criterion, these patterns have been interpreted from all tectonic regimes, including normal, strike-slip and thrust (reverse) faulting. However, a fundamental limitation of the Mohr-Coulomb failure criterion - and others that assume faults form parallel to the intermediate principal stress - is that only plane strain can result from slip on the conjugate faults. Yet deformation in the Earth is widely accepted as being three-dimensional, with truly triaxial stresses and strains. Polymodal faulting, with three or more sets of faults forming and slipping simultaneously, can generate three-dimensional strains from truly triaxial stresses. Laboratory experiments and outcrop studies have verified the occurrence of polymodal fault patterns in nature. The connectivity of polymodal fault networks differs significantly from conjugate fault networks, and this presents challenges to our understanding of faulting and an opportunity to improve our understanding of seismic hazards and fluid flow. Polymodal fault patterns will, in general, have more connected nodes in 2D (and more branch lines in 3D) than comparable conjugate (bimodal) patterns. The anisotropy of permeability is therefore expected to be very different in rocks with polymodal fault patterns in comparison to conjugate fault patterns, and this has implications for the development of hydrocarbon reservoirs, the genesis of ore deposits and the management of aquifers. In this contribution, I assess the published evidence and models for polymodal faulting before presenting a novel kinematic model for general triaxial strain in the brittle field.

  15. Progress on Fault Mechanisms for Gear Transmissions in Coal Cutting Machines: From Macro to Nano Models.

    PubMed

    Jiang, Yu; Zhang, Xiaogang; Zhang, Chao; Li, Zhixiong; Sheng, Chenxing

    2017-04-01

    Numerical modeling has been recognized as an indispensable tool for mechanical fault mechanism analysis. Techniques, ranging from the macro to the nano level, include finite element modeling, boundary element modeling, modular dynamic modeling, nano dynamic modeling and so forth. This work first reviews progress on fault mechanism analysis for gear transmissions from the tribological and dynamic aspects. The literature review indicates that the tribological and dynamic properties have been investigated separately to explore the fault mechanism in gear transmissions. However, very limited work has addressed the links between the tribological and dynamic properties, and little research has been done for coal cutting machines. For this reason, the tribo-dynamic coupled model is introduced to bridge the gap between the tribological and dynamic models in fault mechanism analysis for gear transmissions in coal cutting machines. The modular dynamic modeling and nano dynamic modeling techniques are expected to establish the links between the tribological and dynamic models. Possible future research directions using the tribo-dynamic coupled model are summarized to provide potential references for researchers in the field.

  16. Northward expansion of Tibet beyond the Altyn Tagh Fault

    NASA Astrophysics Data System (ADS)

    Cunningham, D.; Zhang, J.; Yanfeng, L.; Vernon, R.

    2017-12-01

    For many tectonicists, the evolution of northern Tibet stops at the Altyn Tagh Fault (ATF). This study challenges that assumption. Structural field observations and remote sensing analysis indicate that the Sanweishan and Nanjieshan basement-cored ridges of the Archean Dunhuang Block, which interrupt the north Tibetan foreland directly north of the ATF, are bound and cut by an array of strike-slip, thrust and oblique-slip faults that have been active in the Quaternary and remain potentially active. The Sanweishan is essentially a SE-tilted block that is bound on its NW margin by a steep south-dipping thrust fault that has also accommodated sinistral strike-slip displacements. The Nanjieshan consists of parallel, but offset basement ridges that record NNW and SSE thrust displacements and sinistral strike-slip. Regional folds characterize the extreme eastern Nanjieshan perhaps above blind thrust faults which are emergent further west. At the surface, local fault reactivation of basement fabrics is an important control on the kinematics of deformation. Previously published magnetotelluric data for the region suggest that the major faults of the Sanweishan and Nanjieshan ultimately root to the south within conductive zones that merge into the ATF. Therefore, although the southern margin of the Dunhuang Block focuses significant deformation along the ATF, the adjacent cratonic basement to the north is also affected. Collectively, the ATF and structurally linked Sanweishan and Nanjieshan fault array represent a regional asymmetric half-flower structure that is dominated by non-strain partitioned sinistral transpression. The NW-trending Dengdengshan thrust fault array near Yumen City appears to define the northeastern limit of the Sanweishan-Nanjieshan block, which may be viewed regionally as the most northern, but early-stage expression of Tibetan Plateau growth into a reluctantly deforming, mechanically stiff Archean craton.

  17. Tree-Rings Link Climate and Carbon Storage in a Northern Mixed Hardwood Forest

    NASA Astrophysics Data System (ADS)

    Chiriboga, A.

    2007-12-01

    The terrestrial biosphere is a variable sink for atmospheric carbon dioxide. It is important to understand how carbon storage in trees is affected by natural climate variability to better characterize the sink. Quantifying the sensitivity of forest carbon storage to climate will improve carbon budgets and have implications for forest management practices. Here we explore how climate variability affects the ability of a northern mixed hardwood forest in Michigan to sequester atmospheric carbon dioxide in woody tissues. This site is ideal for studies of carbon sequestration; The University of Michigan Biological Station is an Ameriflux site, and has detailed meteorological and biometric records, as well as CO2 flux data. We have produced an 82-year aspen (Populus grandidentata) tree-ring chronology for this site, and measured ring widths at several heights up the bole. These measurements were used to estimate annual wood volume, which represents carbon allocated to aboveground carbon stores. Standard dendroclimatological techniques are used to identify environmental factors (e.g. temperature or precipitation) that drive tree-ring increment variability in the past century, and therefore annual carbon storage in this forest. Preliminary results show that marker years within the tree-ring chronology correspond with years that have cold spring temperatures. This suggests that trees at this site are temperature sensitive.

  18. Timing of late Holocene surface rupture of the Wairau Fault, Marlborough, New Zealand

    USGS Publications Warehouse

    Zachariasen, J.; Berryman, K.; Langridge, Rob; Prentice, C.; Rymer, M.; Stirling, M.; Villamor, P.

    2006-01-01

    Three trenches excavated across the central portion of the right-lateral strike-slip Wairau Fault in South Island, New Zealand, exposed a complex set of fault strands that have displaced a sequence of late Holocene alluvial and colluvial deposits. Abundant charcoal fragments provide age control for various stratigraphic horizons dating back to c. 5610 yr ago. Faulting relations from the Wadsworth trench show that the most recent surface rupture event occurred at least 1290 yr and at most 2740 yr ago. Drowned trees in landslide-dammed Lake Chalice, in combination with charcoal from the base of an unfaulted colluvial wedge at Wadsworth trench, suggest a narrower time bracket for this event of 1811-2301 cal. yr BP. The penultimate faulting event occurred between c. 2370 and 3380 yr, and possibly near 2680 ± 60 cal. yr BP, when data from both the Wadsworth and Dillon trenches are combined. Two older events have been recognised from Dillon trench but remain poorly dated. A probable elapsed time of at least 1811 yr since the last surface rupture, and an average slip rate estimate for the Wairau Fault of 3-5 mm/yr, suggests that at least 5.4 m and up to 11.5 m of elastic shear strain has accumulated since the last rupture. This is near to or greater than the single-event displacement estimates of 5-7 m. The average recurrence interval for surface rupture of the fault determined from the trench data is 1150-1400 yr. Although the uncertainties in the timing of faulting events and variability in inter-event times remain high, the time elapsed since the last event is in the order of 1-2 times the average recurrence interval, implying that the Wairau Fault is near the end of its interseismic period. © The Royal Society of New Zealand 2006.

  19. Multi-level tree analysis of pulmonary artery/vein trees in non-contrast CT images

    NASA Astrophysics Data System (ADS)

    Gao, Zhiyun; Grout, Randall W.; Hoffman, Eric A.; Saha, Punam K.

    2012-02-01

    Diseases like pulmonary embolism and pulmonary hypertension are associated with vascular dystrophy. Identifying such pulmonary artery/vein (A/V) tree dystrophy in terms of quantitative measures via CT imaging significantly facilitates early detection of disease or a treatment monitoring process. A tree structure, consisting of nodes and connected arcs, linked to the volumetric representation allows multi-level geometric and volumetric analysis of A/V trees. Here, a new theory and method is presented to generate multi-level A/V tree representations of volumetric data and to compute quantitative measures of A/V tree geometry and topology at various tree hierarchies. The new method is primarily designed on arc skeleton computation followed by a tree construction based on topologic and geometric analysis of the skeleton. The method starts with a volumetric A/V representation as input and generates its topologic and multi-level volumetric tree representations along with different multi-level morphometric measures. New recursive merging and pruning algorithms are introduced to detect bad junctions and noisy branches often associated with digital geometric and topologic analysis. Also, a new notion of shortest axial path is introduced to improve the skeletal arc joining two junctions. The accuracy of the multi-level tree analysis algorithm has been evaluated using computer generated phantoms and pulmonary CT images of a pig vessel cast phantom, while the reproducibility of the method is evaluated using multi-user A/V separation of in vivo contrast-enhanced CT images of a pig lung at different respiratory volumes.
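
    To illustrate the kind of pruning step the abstract describes, here is a minimal sketch of length-based removal of noisy terminal branches and merging of the resulting single-child junctions on a generic arc-node tree. The Node class, the length threshold and the toy tree are illustrative assumptions, not the authors' algorithm.

    ```python
    # Length-based pruning of noisy branches on a skeleton-derived tree (sketch only).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        name: str
        arc_length: float                 # length of the skeletal arc leading to this node (mm)
        children: List["Node"] = field(default_factory=list)

    def subtree_length(node: Node) -> float:
        # Total skeletal length carried by a node and all of its descendants.
        return node.arc_length + sum(subtree_length(c) for c in node.children)

    def prune(node: Node, min_length: float) -> Node:
        # Drop terminal branches whose whole subtree is shorter than the threshold;
        # a junction left with a single child is treated as spurious and merged.
        kept = [prune(c, min_length) for c in node.children
                if subtree_length(c) >= min_length]
        if len(kept) == 1:
            child = kept[0]
            child.arc_length += node.arc_length
            child.name = node.name + "+" + child.name
            return child
        node.children = kept
        return node

    # Usage: a tiny artery-like tree with one noisy 0.4 mm side branch.
    root = Node("root", 10.0, [
        Node("main", 8.0, [Node("noise", 0.4), Node("distal", 6.0)]),
    ])
    print(prune(root, min_length=1.0))
    ```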

  20. Faulting and groundwater in a desert environment: constraining hydrogeology using time-domain electromagnetic data

    USGS Publications Warehouse

    Bedrosian, Paul A.; Burgess, Matthew K.; Nishikawa, Tracy

    2013-01-01

    Within the south-western Mojave Desert, the Joshua Basin Water District is considering applying imported water into infiltration ponds in the Joshua Tree groundwater sub-basin in an attempt to artificially recharge the underlying aquifer. Scarce subsurface hydrogeological data are available near the proposed recharge site; therefore, time-domain electromagnetic (TDEM) data were collected and analysed to characterize the subsurface. TDEM soundings were acquired to estimate the depth to water on either side of the Pinto Mountain Fault, a major east-west trending strike-slip fault that transects the proposed recharge site. While TDEM is a standard technique for groundwater investigations, special care must be taken when acquiring and interpreting TDEM data in a two-dimensional (2D) faulted environment. A subset of the TDEM data consistent with a layered-earth interpretation was identified through a combination of three-dimensional (3D) forward modelling and diffusion time-distance estimates. Inverse modelling indicates an offset in water table elevation of nearly 40 m across the fault. These findings imply that the fault acts as a low-permeability barrier to groundwater flow in the vicinity of the proposed recharge site. Existing production wells on the south side of the fault, together with a thick unsaturated zone and permeable near-surface deposits, suggest the southern half of the study area is suitable for artificial recharge. These results illustrate the effectiveness of targeted TDEM in support of hydrological studies in a heavily faulted desert environment where data are scarce and the cost of obtaining these data by conventional drilling techniques is prohibitive.

  1. Structural Mapping Along the Central San Andreas Fault-zone Using Airborne Electromagnetics

    NASA Astrophysics Data System (ADS)

    Zamudio, K. D.; Bedrosian, P.; Ball, L. B.

    2017-12-01

    Investigations of active fault zones typically focus on either surface expressions or the associated seismogenic zones. However, the largely aseismic upper kilometer can hold significant insight into fault-zone architecture, strain partitioning, and fault-zone permeability. Geophysical imaging of the first kilometer provides a link between surface fault mapping and seismically-defined fault zones and is particularly important in geologically complex regions with limited surface exposure. Additionally, near surface imaging can provide insight into the impact of faulting on the hydrogeology of the critical zone. Airborne electromagnetic (AEM) methods offer a unique opportunity to collect a spatially-large, detailed dataset in a matter of days, and are used to constrain subsurface resistivity to depths of 500 meters or more. We present initial results from an AEM survey flown over a 60 kilometer long segment of the central San Andreas Fault (SAF). The survey is centered near Parkfield, California, the site of the SAFOD drillhole, which marks the transition between a creeping fault segment to the north and a locked zone to the south. Cross sections with a depth of investigation up to approximately 500 meters highlight the complex Tertiary and Mesozoic geology that is dismembered by the SAF system. Numerous fault-parallel structures are imaged across a more than 10 kilometer wide zone centered on the surface trace. Many of these features can be related to faults and folds within Plio-Miocene sedimentary rocks found on both sides of the fault. Northeast of the fault, rocks of the Mesozoic Franciscan and Great Valley complexes are extremely heterogeneous, with highly resistive volcanic rocks within a more conductive background. The upper 300 meters of a prominent fault-zone conductor, previously imaged to 1-3 kilometers depth by magnetotellurics, is restricted to a 20 kilometer long segment of the fault, but is up to 4 kilometers wide in places. Elevated fault

  2. Fault-tolerant cooperative output regulation for multi-vehicle systems with sensor faults

    NASA Astrophysics Data System (ADS)

    Qin, Liguo; He, Xiao; Zhou, D. H.

    2017-10-01

    This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed through three steps. A cooperative output regulation (COR) controller is designed based on the internal model principle when there are no sensor faults. A sufficient condition on the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators is proposed to isolate and estimate sensor faults of each follower. Unlike current distributed fault diagnosis schemes for multi-vehicle systems, the presented decentralised fault diagnosis scheme reduces the communication and computation load by only using the information of the vehicle itself. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, the simulation results demonstrate the effectiveness of the FTCOR controller.
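
    The following minimal sketch shows the two ingredients named above in isolation: a feedback gain obtained from the discrete-time algebraic Riccati equation and an observer-based residual generator that flags a sensor fault when the residual exceeds a threshold. The system matrices, noise level, fault size and threshold are toy assumptions, not the paper's multi-vehicle setup.

    ```python
    # DARE-based gain plus an observer-based residual generator (toy single-vehicle sketch).
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1], [0.0, 1.0]])     # double-integrator-like vehicle model, assumed
    B = np.array([[0.005], [0.1]])
    C = np.array([[1.0, 0.0]])
    Q, R = np.eye(2), np.array([[1.0]])

    # Feedback gain from the discrete-time algebraic Riccati equation (LQR form).
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

    # Observer gain from the dual Riccati equation.
    Po = solve_discrete_are(A.T, C.T, Q, R)
    L = np.linalg.solve(R + C @ Po @ C.T, C @ Po @ A.T).T

    x = np.array([[1.0], [0.0]])               # true state
    xh = x.copy()                              # observer starts at the true state (skips transient)
    threshold = 0.3                            # residual threshold, assumed

    for k in range(100):
        u = -K @ xh
        y = C @ x + 0.01 * np.random.randn(1, 1)
        if k >= 50:                            # additive sensor fault injected at step 50
            y = y + 1.0
        r = y - C @ xh                         # residual
        if abs(r[0, 0]) > threshold:
            print(f"step {k}: sensor fault flagged, residual = {r[0, 0]:.2f}")
            break
        x = A @ x + B @ u
        xh = A @ xh + B @ u + L @ r
    ```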

  3. Structural heritage, reactivation and distribution of fault and fracture network in a rifting context: Case study of the western shoulder of the Upper Rhine Graben

    NASA Astrophysics Data System (ADS)

    Bertrand, Lionel; Jusseaume, Jessie; Géraud, Yves; Diraison, Marc; Damy, Pierre-Clément; Navelot, Vivien; Haffen, Sébastien

    2018-03-01

    In fractured reservoirs in the basement of extensional basins, fault and fracture parameters like density, spacing and length distribution are key properties for modelling and prediction of reservoir properties and fluid flow. As only large faults are detectable using basin-scale geophysical investigations, these fine-scale parameters need to be inferred from faults and fractures in analogous rocks at the outcrop. In this study, we use the western shoulder of the Upper Rhine Graben as an outcropping analogue of several deep borehole projects in the basement of the graben. Geological regional data, DTM (Digital Terrain Model) mapping and outcrop studies with scanlines are used to determine the spatial arrangement of the faults from the regional to the reservoir scale. The data show that: 1) The fault network can be hierarchized into three different orders of scale and structural blocks with a characteristic structuration. This is consistent with studies of basement rocks in other rifting systems, allowing extrapolation of the parameters important for modelling. 2) Within the structural blocks, the fracture network associated with the faults reflects the interplay between rock facies variations inherited from rock emplacement and the rifting event.

  4. Monitoring of Microseismicity with Array Techniques in the Peach Tree Valley Region

    NASA Astrophysics Data System (ADS)

    Garcia-Reyes, J. L.; Clayton, R. W.

    2016-12-01

    This study is focused on the analysis of microseismicity along the San Andreas Fault in the PeachTree Valley region. This zone is part of the transition zone between the locked portion to the south (Parkfield, CA) and the creeping section to the north (Jovilet, et al., JGR, 2014). The data for the study comes from a 2-week deployment of 116 Zland nodes in a cross-shaped configuration along (8.2 km) and across (9 km) the Fault. We analyze the distribution of microseismicity using a 3D backprojection technique, and we explore the use of Hidden Markov Models to identify different patterns of microseismicity (Hammer et al., GJI, 2013). The goal of the study is to relate the style of seismicity to the mechanical state of the Fault. The results show the evolution of seismic activity as well as at least two different patterns of seismic signals.
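
    As a rough illustration of the back-projection step, here is a minimal 2-D grid-search sketch: each station trace is shifted by its predicted travel time to a candidate grid point and stacked, and the grid point with the largest stack is taken as the source location. The geometry, uniform velocity and synthetic impulse waveforms are stand-ins, not the actual node deployment or processing used in the study.

    ```python
    # Minimal 2-D grid back-projection sketch for microseismic location (synthetic data).
    import numpy as np

    fs, v = 500.0, 3000.0                      # sampling rate (Hz) and uniform velocity (m/s), assumed
    rng = np.random.default_rng(0)

    stations = rng.uniform(0, 8000, size=(20, 2))   # 20 nodes scattered in an 8 x 8 km box
    true_src = np.array([3000.0, 5000.0])
    nt, t0 = 3000, 1.0                              # samples per trace, origin time (s)

    traces = 0.05 * rng.standard_normal((len(stations), nt))
    for i, st in enumerate(stations):               # plant a simple impulse at each predicted arrival
        arrival = int((t0 + np.linalg.norm(st - true_src) / v) * fs)
        traces[i, arrival:arrival + 10] += 1.0

    cf = np.abs(traces)                             # characteristic function (envelope/STA-LTA also work)

    xs = np.arange(0, 8000, 250.0)
    ys = np.arange(0, 8000, 250.0)
    image = np.zeros((len(xs), len(ys)))
    for ix, x in enumerate(xs):
        for iy, y in enumerate(ys):
            tt = np.linalg.norm(stations - np.array([x, y]), axis=1) / v
            shifts = (tt * fs).astype(int)
            win = nt - shifts.max()
            # Stack each trace back-shifted by its predicted travel time.
            stack = sum(cf[i, s:s + win] for i, s in enumerate(shifts))
            image[ix, iy] = stack.max()

    best = np.unravel_index(image.argmax(), image.shape)
    print("back-projected source:", xs[best[0]], ys[best[1]], "  true source:", true_src)
    ```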

  5. Recent deformation on the San Diego Trough and San Pedro Basin fault systems, offshore Southern California: Assessing evidence for fault system connectivity.

    NASA Astrophysics Data System (ADS)

    Bormann, J. M.; Kent, G. M.; Driscoll, N. W.; Harding, A. J.

    2016-12-01

    The seismic hazard posed by offshore faults for coastal communities in Southern California is poorly understood and may be considerable, especially when these communities are located near long faults that have the ability to produce large earthquakes. The San Diego Trough fault (SDTF) and San Pedro Basin fault (SPBF) systems are active northwest striking, right-lateral faults in the Inner California Borderland that extend offshore between San Diego and Los Angeles. Recent work shows that the SDTF slip rate accounts for 25% of the 6-8 mm/yr of deformation accommodated by the offshore fault network, and seismic reflection data suggest that these two fault zones may be one continuous structure. Here, we use recently acquired CHIRP, high-resolution multichannel seismic (MCS) reflection, and multibeam bathymetric data in combination with USGS and industry MCS profiles to characterize recent deformation on the SDTF and SPBF zones and to evaluate the potential for an end-to-end rupture that spans both fault systems. The SDTF offsets young sediments at the seafloor for 130 km between the US/Mexico border and Avalon Knoll. The northern SPBF has robust geomorphic expression and offsets the seafloor in the Santa Monica Basin. The southern SPBF lies within a 25-km gap between high-resolution MCS surveys. Although there does appear to be a through-going fault at depth in industry MCS profiles, the low vertical resolution of these data inhibits our ability to confirm recent slip on the southern SPBF. Empirical scaling relationships indicate that a 200-km-long rupture of the SDTF and its southern extension, the Bahia Soledad fault, could produce a M7.7 earthquake. If the SDTF and the SPBF are linked, the length of the combined fault increases to >270 km. This may allow ruptures initiating on the SDTF to propagate within 25 km of the Los Angeles Basin. At present, the paleoseismic histories of the faults are unknown. We present new observations from CHIRP and coring surveys at

  6. Geomorphology of intraplate postglacial faults in Sweden

    NASA Astrophysics Data System (ADS)

    Ask, M. V. S.; Abdujabbar, M.; Lund, B.; Smith, C.; Mikko, H.; Munier, R.

    2015-12-01

    Melting of the Weichselian ice sheet at ≈10 000 BP is inferred to have induced large to great intraplate earthquakes in northern Fennoscandia. Over a dozen large so-called postglacial faults (PGF) have been found, mainly using aerial photogrammetry, trenching, and recognition of numerous paleolandslides in the vicinity of the faults (e.g. Lagerbäck & Sundh 2008). Recent LiDAR-based mapping led to the extension of known PGFs, the discovery of new segments of existing PGFs, and a number of new suspected PGFs (Smith et al. 2014; Mikko et al. 2015). The PGFs in Fennoscandia occur within 14-25°E and 61-69°N; the majority are within Swedish territory. PGFs generally are prominent features, up to 155 km in length and 30 m maximum surface offset. The most intense microseismic activity in Sweden occurs near PGFs. The seismogenic zone of the longest known PGF (Pärvie fault zone, PFZ) extends to ≈40 km depth. From fault geometry and earthquake scaling relations, the paleomagnitude of PFZ is estimated to 8.0±0.3 (Lindblom et al. 2015). The new high-resolution LiDAR-derived elevation model of Sweden offers an unprecedented opportunity to constrain the surface geometry of the PGFs. The objective is to reach more detailed knowledge of the surface offset across their scarps. This distribution provides a one-dimensional view of the slip distribution during the inferred paleorupture. The second objective is to analyze the pattern of vertical displacement of the hanging wall, to obtain a two-dimensional view of the displaced area that is linked to the fault geometry at depth. The anticipated results will further constrain the paleomagnitude of PGFs and will be incorporated into future modeling efforts to investigate the nature of PGFs. References: Lagerbäck & Sundh 2008. Early Holocene faulting and paleoseismicity in northern Sweden. http://resource.sgu.se/produkter/c/c836-rapport.pdf Smith et al. 2014. Surficial geology indicates early Holocene faulting and seismicity

  7. Integrated Approach To Design And Analysis Of Systems

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Iverson, David L.

    1993-01-01

    Object-oriented fault-tree representation unifies evaluation of reliability and diagnosis of faults. Programming/fault tree described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). Augmented fault tree object contains more information than fault tree object used in quantitative analysis of reliability. Additional information needed to diagnose faults in system represented by fault tree.
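
    As a minimal illustration of the object-oriented idea (not the ARC-12731 algorithm itself), the sketch below represents basic events and AND/OR gates as objects that evaluate the top-event probability, assuming independent basic events with assumed failure probabilities.

    ```python
    # Object-oriented fault-tree sketch: AND/OR gates evaluating the top-event probability.
    class BasicEvent:
        def __init__(self, name, prob):
            self.name, self.prob = name, prob

        def probability(self):
            return self.prob

    class Gate:
        def __init__(self, name, kind, children):
            self.name, self.kind, self.children = name, kind, children

        def probability(self):
            ps = [c.probability() for c in self.children]
            if self.kind == "AND":            # all inputs must fail
                p = 1.0
                for q in ps:
                    p *= q
                return p
            if self.kind == "OR":             # at least one input fails
                p_none = 1.0
                for q in ps:
                    p_none *= 1.0 - q
                return 1.0 - p_none
            raise ValueError(self.kind)

    # Usage: the system fails if power is lost OR both redundant pumps fail.
    top = Gate("system fails", "OR", [
        BasicEvent("loss of power", 1e-3),
        Gate("both pumps fail", "AND",
             [BasicEvent("pump A fails", 2e-2), BasicEvent("pump B fails", 2e-2)]),
    ])
    print(f"P(top event) = {top.probability():.3e}")
    ```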

  8. Fault geometries in basement-induced wrench faulting under different initial stress states

    NASA Astrophysics Data System (ADS)

    Naylor, M. A.; Mandl, G.; Supesteijn, C. H. K.

    Scaled sandbox experiments were used to generate models for relative ages, dip, strike and three-dimensional shape of faults in basement-controlled wrench faulting. The basic fault sequence runs from early en échelon Riedel shears and splay faults through 'lower-angle' shears to P shears. The Riedel shears are concave upwards and define a tulip structure in cross-section. In three dimensions, each Riedel shear has a helicoidal form. The sequence of faults and three-dimensional geometry are rationalized in terms of the prevailing stress field and Coulomb-Mohr theory of shear failure. The stress state in the sedimentary overburden before wrenching begins has a substantial influence on the fault geometries and on the final complexity of the fault zone. With the maximum compressive stress (σ1) initially parallel to the basement fault (transtension), Riedel shears are only slightly en échelon, sub-parallel to the basement fault, steeply dipping with a reduced helicoidal aspect. Conversely, with σ1 initially perpendicular to the basement fault (transpression), Riedel shears are strongly oblique to the basement fault strike, have lower dips and an exaggerated helicoidal form; the final fault zone is both wide and complex. We find good agreement between the models and both mechanical theory and natural examples of wrench faulting.

  9. Application of reliability-centered maintenance to boiling water reactor emergency core cooling systems fault-tree analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Y.A.; Feltus, M.A.

    1995-07-01

    Reliability-centered maintenance (RCM) methods are applied to boiling water reactor plant-specific emergency core cooling system probabilistic risk assessment (PRA) fault trees. RCM is a system function-based technique for improving a preventive maintenance (PM) program, which is applied on a component basis. Many PM programs are based on time-directed maintenance tasks, while RCM methods focus on component condition-directed maintenance tasks. Stroke time test data for motor-operated valves (MOVs) are used to address three aspects concerning RCM: (a) to determine if MOV stroke time testing was useful as a condition-directed PM task; (b) to determine the plant-specific MOV failure data from a broad RCM philosophy time period and compare them with a PM period and with generic industry MOV failure data; and (c) to determine the effects and impact of the plant-specific MOV failure data on core damage frequency (CDF) and system unavailabilities for these emergency systems. The MOV stroke time test data from four emergency core cooling systems [i.e., high-pressure coolant injection (HPCI), reactor core isolation cooling (RCIC), low-pressure core spray (LPCS), and residual heat removal/low-pressure coolant injection (RHR/LPCI)] were gathered from Philadelphia Electric Company's Peach Bottom Atomic Power Station Units 2 and 3 between 1980 and 1992. The analyses showed that MOV stroke time testing was not a predictor of imminent failure and should be considered a go/no-go test. The failure data from the broad RCM philosophy showed an improvement compared with the PM-period failure rates in the emergency core cooling system MOVs. Also, the plant-specific MOV failure rates for both maintenance philosophies were shown to be lower than the generic industry estimates.

  10. Fault zone property near Xinfengjiang Reservoir using dense, across-fault seismic array

    NASA Astrophysics Data System (ADS)

    Lee, M. H. B.; Yang, H.; Sun, X.

    2017-12-01

    Properties of fault zones are important to the understanding of the earthquake process. Around a fault is a damage zone characterised by lower seismic velocity. This is detectable as a low velocity zone and allows some physical properties of the fault zone, which are otherwise difficult to sample directly, to be measured. A dense, across-fault array of short-period seismometers was deployed on an inactive fault near Xinfengjiang Reservoir. Local events were manually picked. By computing synthetic arrival times, we were able to constrain the parameters of the fault zone. Preliminary results show that the fault zone is around 350 m wide with a P and S velocity increase of around 10%. The fault is geologically inferred, and this result suggests that it may be a geological layer. The other possibility is that the higher velocity is caused by a combination of fault zone healing and fluid intrusion. Whilst the result was not able to tell us the nature of the fault, it demonstrated that this method is able to derive properties of a fault zone.

  11. Rule-based fault diagnosis of hall sensors and fault-tolerant control of PMSM

    NASA Astrophysics Data System (ADS)

    Song, Ziyou; Li, Jianqiu; Ouyang, Minggao; Gu, Jing; Feng, Xuning; Lu, Dongbin

    2013-07-01

    Hall sensors are widely used for estimating the rotor phase of permanent magnet synchronous motors (PMSM). Rotor position is an essential parameter of the PMSM control algorithm, hence it is very dangerous if Hall sensor faults occur. However, there is scarcely any research focusing on fault diagnosis and fault-tolerant control of Hall sensors used in PMSMs. From this standpoint, the Hall sensor faults which may occur during PMSM operation are theoretically analyzed. According to the analysis results, a fault diagnosis algorithm for the Hall sensors, based on three rules, is proposed to classify the fault phenomena accurately. Rotor phase estimation algorithms based on one or two Hall sensor(s) are introduced to enable the fault-tolerant control algorithm. The fault diagnosis algorithm can detect 60 Hall fault phenomena in total, and all detections can be completed within 1/138 of a rotor rotation period. The fault-tolerant control algorithm achieves smooth torque production, i.e. the same control effect as the normal control mode (with three Hall sensors). Finally, a PMSM bench test verifies the accuracy and rapidity of the fault diagnosis and fault-tolerant control strategies. The fault diagnosis algorithm can detect all Hall sensor faults promptly, and the fault-tolerant control algorithm allows the PMSM to operate under failure of one or two Hall sensor(s). In addition, the transitions between normal control and fault-tolerant control conditions are smooth, without any additional noise and harshness. The proposed algorithms can deal with the Hall sensor faults of PMSMs in real applications, realizing fault diagnosis and fault-tolerant control of the PMSM.
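
    The sketch below gives a flavour of such a rule-based check, assuming three Hall sensors placed at 120 electrical degrees. The valid-state set, the commutation sequence and the three rules shown are one common convention used for illustration; they are not the paper's exact rules.

    ```python
    # Rule-based Hall-sensor fault check for a PMSM (illustrative convention, assumed).
    VALID_STATES = {1, 2, 3, 4, 5, 6}                  # 3-bit codes; 0b000 and 0b111 are impossible
    FORWARD_SEQ = [1, 3, 2, 6, 4, 5]                   # assumed single-bit-change sector order

    def classify(prev_state: int, state: int) -> str:
        # Rule 1: a stuck or open sensor produces an impossible code.
        if state not in VALID_STATES:
            return "fault: invalid state (sensor stuck low/high)"
        if prev_state in VALID_STATES and state != prev_state:
            # Rule 2: more than one bit changing between samples is physically impossible.
            if bin(prev_state ^ state).count("1") > 1:
                return "fault: illegal multi-bit transition"
            # Rule 3: a jump that skips sectors in the expected rotation order is suspect.
            i, j = FORWARD_SEQ.index(prev_state), FORWARD_SEQ.index(state)
            if (j - i) % 6 not in (1, 5):              # neither forward nor reverse neighbour
                return "fault: skipped sector"
        return "ok"

    # Usage: a healthy single-bit transition, then a sensor stuck low giving code 0b000.
    print(classify(0b101, 0b100))   # ok
    print(classify(0b100, 0b000))   # fault: invalid state
    ```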

  12. The Susitna Glacier thrust fault: Characteristics of surface ruptures on the fault that initiated the 2002 Denali fault earthquake

    USGS Publications Warehouse

    Crone, A.J.; Personius, S.F.; Craw, P.A.; Haeussler, P.J.; Staft, L.A.

    2004-01-01

    The 3 November 2002 Mw 7.9 Denali fault earthquake sequence initiated on the newly discovered Susitna Glacier thrust fault and caused 48 km of surface rupture. Rupture of the Susitna Glacier fault generated scarps on ice of the Susitna and West Fork glaciers and on tundra and surficial deposits along the southern front of the central Alaska Range. Based on detailed mapping, 27 topographic profiles, and field observations, we document the characteristics and slip distribution of the 2002 ruptures and describe evidence of pre-2002 ruptures on the fault. The 2002 surface faulting produced structures that range from simple folds on a single trace to complex thrust-fault ruptures and pressure ridges on multiple, sinuous strands. The deformation zone is locally more than 1 km wide. We measured a maximum vertical displacement of 5.4 m on the south-directed main thrust. North-directed backthrusts have more than 4 m of surface offset. We measured a well-constrained near-surface fault dip of about 19° at one site, which is considerably less than seismologically determined values of 35°-48°. Surface-rupture data yield an estimated magnitude of Mw 7.3 for the fault, which is similar to the seismological value of Mw 7.2. Comparison of field and seismological data suggests that the Susitna Glacier fault is part of a large positive flower structure associated with northwest-directed transpressive deformation on the Denali fault. Prehistoric scarps are evidence of previous rupture of the Susitna Glacier fault, but additional work is needed to determine if past failures of the Susitna Glacier fault have consistently induced rupture of the Denali fault.

  13. Geometry and kinematics of the eastern Lake Mead fault system in the Virgin Mountains, Nevada and Arizona

    USGS Publications Warehouse

    Beard, Sue; Campagna, David J.; Anderson, R. Ernest

    2010-01-01

    The Lake Mead fault system is a northeast-striking, 130-km-long zone of left-slip in the southeast Great Basin, active from before 16 Ma to Quaternary time. The northeast end of the Lake Mead fault system in the Virgin Mountains of southeast Nevada and northwest Arizona forms a partitioned strain field comprising kinematically linked northeast-striking left-lateral faults, north-striking normal faults, and northwest-striking right-lateral faults. Major faults bound large structural blocks whose internal strain reflects their position within a left step-over of the left-lateral faults. Two north-striking large-displacement normal faults, the Lakeside Mine segment of the South Virgin–White Hills detachment fault and the Piedmont fault, intersect the left step-over from the southwest and northeast, respectively. The left step-over in the Lake Mead fault system therefore corresponds to a right-step in the regional normal fault system. Within the left step-over, displacement transfer between the left-lateral faults and linked normal faults occurs near their junctions, where the left-lateral faults become oblique and normal fault displacement decreases away from the junction. Southward from the center of the step-over in the Virgin Mountains, down-to-the-west normal faults splay northward from left-lateral faults, whereas north and east of the center, down-to-the-east normal faults splay southward from left-lateral faults. Minimum slip is thus in the central part of the left step-over, between east-directed slip to the north and west-directed slip to the south. Attenuation faults parallel or subparallel to bedding cut Lower Paleozoic rocks and are inferred to be early structures that accommodated footwall uplift during the initial stages of extension. Fault-slip data indicate oblique extensional strain within the left step-over in the South Virgin Mountains, manifested as east-west extension; shortening is partitioned between vertical for extension-dominated structural

  14. Using fault tree analysis to identify causes of non-compliance: enhancing violation outcome data for the purposes of education and prevention.

    PubMed

    Emery, R J; Charlton, M A; Orders, A B; Hernandez, M

    2001-02-01

    An enhanced coding system for the characterization of notices of violation (NOV's) issued to radiation permit holders in the State of Texas was developed based on a series of fault tree analyses serving to identify a set of common causes. The coding system enhancement was retroactively applied to a representative sample (n = 185) of NOV's issued to specific licensees of radioactive materials in Texas during calendar year 1999. The results obtained were then compared to the currently available summary NOV information for the same year. In addition to identifying the most common NOV's, the enhanced coding system revealed that approximately 70% of the sampled NOV's were issued for non-compliance with a specific regulation as opposed to a permit condition. Furthermore, an underlying cause of 94% of the NOV's was the failure on the part of the licensee to execute a specific task. The findings suggest that opportunities exist to improve permit holder compliance through various means, including the creation of summaries which detail specific tasks to be completed, and revising training programs with more focus on the identification and scheduling of permit-related requirements. Broad application of these results is cautioned due to the bias associated with the restricted scope of the project.

  15. A fault diagnosis scheme for rolling bearing based on local mean decomposition and improved multiscale fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu

    2016-01-01

    This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS) and an improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of a time series over a range of scales based on fuzzy entropy. Besides, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically fulfill the fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize the different categories and severities of rolling bearing faults.
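
    For reference, the sketch below computes the standard multiscale fuzzy entropy of a signal (coarse-graining followed by fuzzy entropy at each scale). It is the plain MFE rather than the authors' improved variant (IMFE), and the embedding dimension, tolerance and fuzzy exponent are typical assumed choices.

    ```python
    # Standard multiscale fuzzy entropy (MFE) as a vibration feature (sketch, not IMFE).
    import numpy as np

    def fuzzy_entropy(x, m=2, r=0.15, n=2):
        x = np.asarray(x, dtype=float)
        tol = r * x.std()

        def phi(dim):
            # Embedded vectors with their own mean removed (the fuzzy baseline step).
            vecs = np.array([x[i:i + dim] for i in range(len(x) - dim)])
            vecs = vecs - vecs.mean(axis=1, keepdims=True)
            # Chebyshev distances between all pairs, mapped through the fuzzy membership.
            d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
            sim = np.exp(-(d ** n) / tol)
            np.fill_diagonal(sim, 0.0)
            return sim.sum() / (len(vecs) * (len(vecs) - 1))

        return np.log(phi(m)) - np.log(phi(m + 1))

    def multiscale_fuzzy_entropy(x, max_scale=8, **kw):
        x = np.asarray(x, dtype=float)
        feats = []
        for tau in range(1, max_scale + 1):
            n_seg = len(x) // tau                      # coarse-grain by non-overlapping averaging
            cg = x[:n_seg * tau].reshape(n_seg, tau).mean(axis=1)
            feats.append(fuzzy_entropy(cg, **kw))
        return np.array(feats)

    # Usage: white noise has higher entropy than a periodic, impact-like signal.
    rng = np.random.default_rng(1)
    t = np.arange(1000) / 1000.0
    print(multiscale_fuzzy_entropy(rng.standard_normal(1000))[:3])
    print(multiscale_fuzzy_entropy(np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(1000))[:3])
    ```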

  16. Architecture of the wood-wide web: Rhizopogon spp. genets link multiple Douglas-fir cohorts.

    PubMed

    Beiler, Kevin J; Durall, Daniel M; Simard, Suzanne W; Maxwell, Sheri A; Kretzer, Annette M

    2010-01-01

    The role of mycorrhizal networks in forest dynamics is poorly understood because of the elusiveness of their spatial structure. We mapped the belowground distribution of the fungi Rhizopogon vesiculosus and Rhizopogon vinicolor and interior Douglas-fir trees (Pseudotsuga menziesii var. glauca) to determine the architecture of a mycorrhizal network in a multi-aged old-growth forest. Rhizopogon spp. mycorrhizas were collected within a 30 x 30 m plot. Trees and fungal genets were identified using multi-locus microsatellite DNA analysis. Tree genotypes from mycorrhizas were matched to reference trees aboveground. Two trees were considered linked if they shared the same fungal genet(s). The two Rhizopogon species each formed 13-14 genets, each colonizing up to 19 trees in the plot. Rhizopogon vesiculosus genets were larger, occurred at greater depths, and linked more trees than genets of R. vinicolor. Multiple tree cohorts were linked, with young saplings established within the mycorrhizal network of Douglas-fir veterans. A strong positive relationship was found between tree size and connectivity, resulting in a scale-free network architecture with small-world properties. This mycorrhizal network architecture suggests an efficient and robust network, where large trees play a foundational role in facilitating conspecific regeneration and stabilizing the ecosystem.

  17. Growth and linkage of the quaternary Ubrique Normal Fault Zone, Western Gibraltar Arc: role on the along-strike relief segmentation

    NASA Astrophysics Data System (ADS)

    Jiménez-Bonilla, Alejandro; Balanya, Juan Carlos; Exposito, Inmaculada; Diaz-Azpiroz, Manuel; Barcos, Leticia

    2015-04-01

    Strain partitioning modes within migrating orogenic arcs may result in arc-parallel stretching that produces along-strike structural and topographic discontinuities. In the Western Gibraltar Arc, arc-parallel stretching has operated from the Lower Miocene up to recent times. In this study, we have reviewed the Colmenar Fault, located at the SW end of the Subbetic ranges, previously interpreted as a Middle Miocene low-angle normal fault. Our results allow us to identify younger normal fault segments, to analyse their kinematics, growth and segment linkage, and to discuss their role in the structural and relief drop at regional scale. The Colmenar Fault is folded by post-Serravallian NE-SW buckle folds. Both the SW-dipping fault surfaces and the SW-plunging fold axes contribute to the structural relief drop toward the SW. Nevertheless, at the NW tip of the Colmenar Fault, we have identified unfolded normal faults cutting Quaternary soils. They are grouped into a N110˚E striking brittle deformation band 15 km long and up to 3 km wide (hereafter Ubrique Normal Fault Zone; UNFZ). The UNFZ is divided into three sectors: (a) The western tip zone is formed by normal faults which usually dip to the SW and whose slip directions vary between N205˚E and N225˚E. These segments are linked to each other by left-lateral oblique faults interpreted as transfer faults. (b) The central part of the UNFZ is composed of a single N115˚E striking fault segment 2.4 km long. Slip directions are around N190˚E and the estimated throw is 1.25 km. The fault scarp is well-preserved, reaching up to 400 m in its central part and diminishing to 200 m at both segment terminations. This fault segment is linked to the western tip by an overlap zone characterized by tilted blocks limited by high-angle NNE-SSW and WNW-ESE striking faults interpreted as "box faults" [1]. (c) The eastern tip zone is formed by fault segments with oblique slip which also contribute to the downthrow of the SW block. This kinematic

  18. Displacement-length relationship of normal faults in Acheron Fossae, Mars: new observations with HRSC.

    NASA Astrophysics Data System (ADS)

    Charalambakis, E.; Hauber, E.; Knapmeyer, M.; Grott, M.; Gwinner, K.

    2007-08-01

    quality, especially the lighting conditions in the region, different errors can be made in determining the various values. Based on our experience, we estimate that the error in measuring the length of the fault is smaller than 10% and that the measurement error of the offset is smaller than 5%. Furthermore, the horizontal resolution of the HRSC images is 12.5 m/pixel or 25 m/pixel, and that of the DEM derived from HRSC images is 50 m/pixel because of re-sampling. That means that image resolution does not introduce a significant error at fault lengths in the kilometer range. For the case of Mars it is known that linkage is an essential process in the growth of fault populations [8]. We obtained the d/l values from selected examples of faults that were connected via a relay ramp. The error of ignoring an existing fault linkage is 20% to 50% if the elliptical fault model is used and 30% to 50% if only the dmax value is used to determine d/l. This shows an advantage of the elliptic model. The error increases if more faults are linked, because the underestimation of the relevant length gets worse the longer the linked system is. We obtained a value of gamma = d/l of about 2 · 10^-2 for the elliptic model and a value of approximately 2.7 · 10^-2 for the dmax model. The data show a relatively large scatter, but they can be compared to data from terrestrial faults (d/l ≈ 1 · 10^-2 to 5 · 10^-2; [9] and references therein). In a first inspection of the Acheron Fossae 2 region in orbit 1437 we could confirm our first observations [10]. If we consider fault linkage the d/l values shift towards lower d/l ratios, since linkage means that d remains essentially constant, but l increases significantly. We will continue to measure other faults and obtain values for linked faults and relay ramps. References: [1] Cowie, P. A. and Scholz, C. H. (1992) JSG, 14, 1133-1148. [2] Knapmeyer, M. et al. (2006) JGR, 111, E11006. [3] Neukum, G. et al. (2004) ESA SP-1240, 17-35. [4] Kronberg, P. et al. (2007) J

  19. Plio-Pleistocene synsedimentary fault compartments, foundation for the eastern Olduvai Basin paleoenvironmental mosaic, Tanzania.

    PubMed

    Stollhofen, Harald; Stanistreet, Ian G

    2012-08-01

    Normal faults displacing Upper Bed I and Lower Bed II strata of the Plio-Pleistocene Lake Olduvai were studied on the basis of facies and thickness changes as well as diversion of transport directions across them in order to establish criteria for their synsedimentary activity. Decompacted differential thicknesses across faults were then used to calculate average fault slip rates of 0.05-0.47 mm/yr for the Tuff IE/IF interval (Upper Bed I) and 0.01-0.13 mm/yr for the Tuff IF/IIA section (Lower Bed II). Considering fault recurrence intervals of ~1000 years, fault scarp heights potentially achieved average values of 0.05-0.47 m and a maximum value of 5.4 m during Upper Bed I, which dropped to average values of 0.01-0.13 m and a localized maximum of 0.72 m during Lower Bed II deposition. Synsedimentary faults were of importance to the form and paleoecology of landscapes utilized by early hominins, most traceably and provably Homo habilis as illustrated by the recurrent density and compositional pattern of Oldowan stone artifact assemblage variation across them. Two potential relationship factors are: (1) fault scarp topographies controlled sediment distribution, surface, and subsurface hydrology, and thus vegetation, so that a resulting mosaic of microenvironments and paleoecologies provided a variety of opportunities for omnivorous hominins; and (2) they ensured that the most voluminous and violent pyroclastic flows from the Mt. Olmoti volcano were dammed and conduited away from the Olduvai Basin depocenter, when otherwise a single or set of ignimbrite flows might have filled and devastated the topography that contained the central lake body. In addition, hydraulically active faults may have conduited groundwater, supporting freshwater springs and wetlands and favoring growth of trees. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. The Fault Block Model: A novel approach for faulted gas reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ursin, J.R.; Moerkeseth, P.O.

    1994-12-31

    The Fault Block Model was designed for the development of gas production from Sleipner Vest. The reservoir consists of marginal marine sandstone of the Hugine Formation. Modeling of highly faulted and compartmentalized reservoirs is severely impeded by the nature and extent of known and undetected faults and, in particular, their effectiveness as flow barriers. The model presented is efficient and superior to other models for highly faulted reservoirs, i.e. grid-based simulators, because it minimizes the effect of major undetected faults and geological uncertainties. In this article the authors present the Fault Block Model as a new tool to better understand the implications of geological uncertainty in faulted gas reservoirs with good productivity, with respect to uncertainty in well coverage and optimum gas recovery.

  1. Paleoseismic investigations in the Santa Cruz mountains, California: Implications for recurrence of large-magnitude earthquakes on the San Andreas fault

    USGS Publications Warehouse

    Schwartz, D.P.; Pantosti, D.; Okumura, K.; Powers, T.J.; Hamilton, J.C.

    1998-01-01

    Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that

  2. Current limiting behavior in three-phase transformer-type SFCLs using an iron core according to variety of fault

    NASA Astrophysics Data System (ADS)

    Cho, Yong-Sun; Jung, Byung-Ik; Ha, Kyoung-Hun; Choi, Soo-Geun; Park, Hyoung-Min; Choi, Hyo-Sang

    To apply the superconducting fault current limiter (SFCL) to the power system, the reliability of the fault-current-limiting operation must be ensured in diverse fault conditions. The SFCL must also be linked to the operation of the high-speed recloser in the power system. In this study, a three-phase transformer-type SFCL, which has a neutral line to improve the simultaneous quench characteristics of superconducting elements, was manufactured to analyze the fault-current-limiting characteristic according to the single, double, and triple line-to-ground faults. The transformer-type SFCL, wherein three-phase windings are connected to one iron core, reduced the burden on the superconducting element as the superconducting element on the sound phase was also quenched in the case of the single line-to-ground fault. In the case of double or triple line-to-ground faults, the flux from the faulted phase winding was interlinked with other faulted or sound phase windings, and the fault-current-limiting rate decreased because the windings of three phases were inductively connected by one iron core.

  3. Fault reactivation: The Picuris-Pecos fault system of north-central New Mexico

    NASA Astrophysics Data System (ADS)

    McDonald, David Wilson

    The PPFS is a N-trending fault system extending over 80 km in the Sangre de Cristo Mountains of northern New Mexico. Precambrian basement rocks are offset 37 km in a right-lateral sense; however, this offset includes dextral strike-slip (Precambrian), mostly normal dip-slip (Pennsylvanian), mostly reverse dip-slip (Early Laramide), limited strike-slip (Late Laramide) and mostly normal dip-slip (Cenozoic). The PPFS is broken into at least 3 segments by the NE-trending Embudo fault and by several Laramide age NW-trending tear faults. These segments are (from N to S): the Taos, the Picuris, and the Pecos segments. On the east side of the Picuris segment in the Picuris Mountains, the Oligocene-Miocene age Miranda graben developed and represents a complex extension zone south of the Embudo fault. Regional analysis of remotely sensed data and geologic maps indicate that lineaments subparallel to the trace of the PPFS are longer and less frequent than lineaments that trend orthogonal to the PPFS. Significant cross cutting faults and subtle changes in fault trends in each segment are clear in the lineament data. Detailed mapping in the eastern Picuris Mountains showed that the favorably oriented Picuris segment was not reactivated in the Tertiary development of the Rio Grande rift. Segmentation of the PPFS and post-Laramide annealing of the Picuris segment are interpreted to have resulted in the development of the subparallel La Serna fault. The Picuris segment of the PPFS is offset by several E-ESE trending faults. These faults are Late Cenozoic in age and interpreted to be related to the uplift of the Picuris Mountains and the continuing sinistral motion on the Embudo fault. Differential subsidence within the Miranda graben caused the development of several synthetic and orthogonal faults between the bounding La Serna and Miranda faults. Analysis of over 10,000 outcrop scale brittle structures reveals a strong correlation between faults and fracture systems. The dominant

  4. Stafford fault system: 120 million year fault movement history of northern Virginia

    USGS Publications Warehouse

    Powars, David S.; Catchings, Rufus D.; Horton, J. Wright; Schindler, J. Stephen; Pavich, Milan J.

    2015-01-01

    The Stafford fault system, located in the mid-Atlantic coastal plain of the eastern United States, provides the most complete record of fault movement during the past ~120 m.y. across the Virginia, Washington, District of Columbia (D.C.), and Maryland region, including displacement of Pleistocene terrace gravels. The Stafford fault system is close to and aligned with the Piedmont Spotsylvania and Long Branch fault zones. The dominant southwest-northeast trend of strong shaking from the 23 August 2011, moment magnitude Mw 5.8 Mineral, Virginia, earthquake is consistent with the connectivity of these faults, as seismic energy appears to have traveled along the documented and proposed extensions of the Stafford fault system into the Washington, D.C., area. Some other faults documented in the nearby coastal plain are clearly rooted in crystalline basement faults, especially along terrane boundaries. These coastal plain faults are commonly assumed to have undergone relatively uniform movement through time, with average slip rates from 0.3 to 1.5 m/m.y. However, there were higher rates during the Paleocene–early Eocene and the Pliocene (4.4–27.4 m/m.y), suggesting that slip occurred primarily during large earthquakes. Further investigation of the Stafford fault system is needed to understand potential earthquake hazards for the Virginia, Maryland, and Washington, D.C., area. The combined Stafford fault system and aligned Piedmont faults are ~180 km long, so if the combined fault system ruptured in a single event, it would result in a significantly larger magnitude earthquake than the Mineral earthquake. Many structures most strongly affected during the Mineral earthquake are along or near the Stafford fault system and its proposed northeastward extension.

  5. Dynamic rupture simulations on a fault network in the Corinth Rift

    NASA Astrophysics Data System (ADS)

    Durand, V.; Hok, S.; Boiselet, A.; Bernard, P.; Scotti, O.

    2017-03-01

    The Corinth rift (Greece) is made of a complex network of fault segments, typically 10-20 km long, separated by stepovers. Assessing the maximum magnitude possible in this region requires accounting for multisegment rupture. Here we apply numerical models of dynamic rupture to quantify the probability of a multisegment rupture in the rift, based on knowledge of the fault geometry and on the magnitude of historical and palaeoearthquakes. We restrict our application to dynamic rupture on the most recent and active fault network of the western rift, located on the southern coast. We first define several models, varying the main physical parameters that control the rupture propagation. We keep the regional stress field and stress drop constant, and we test several fault geometries, several positions of the faults in their seismic cycle, several values of the critical distance (and so several fracture energies) and two different hypocentres (thus testing two directivity hypotheses). We obtain different scenarios in terms of the number of ruptured segments and the final magnitude (from M = 5.8 for a single-segment rupture to M = 6.4 for a whole-network rupture), and find that the main parameter controlling the variability of the scenarios is the fracture energy. We then use a probabilistic approach to quantify the probability of each generated scenario. To do that, we implement a logic tree associating a weight with each model input hypothesis. Combining these weights, we compute the probability of occurrence of each scenario, and show that multisegment scenarios are very likely (52 per cent), but that the whole-network rupture scenario is unlikely (14 per cent).
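
    The weight-combination step lends itself to a very small sketch: enumerate every branch combination of the logic tree, multiply the branch weights, and accumulate the weight of whichever scenario that combination produces. The branches, weights and the toy rule mapping a branch combination to a rupture scenario below are hypothetical placeholders, not the study's actual tree.

    ```python
    # Logic-tree weight combination into scenario probabilities (toy branches and rule).
    from itertools import product

    # Each input hypothesis has alternative branches whose weights sum to 1.
    logic_tree = {
        "geometry":        [("connected", 0.6), ("segmented", 0.4)],
        "cycle_position":  [("late", 0.5), ("mid", 0.5)],
        "fracture_energy": [("low", 0.3), ("medium", 0.4), ("high", 0.3)],
    }

    def scenario(geometry, cycle_position, fracture_energy):
        # Toy rule standing in for a dynamic-rupture simulation result.
        if fracture_energy == "high":
            return "single segment (~M 5.8)"
        if geometry == "connected" and fracture_energy == "low" and cycle_position == "late":
            return "whole network (~M 6.4)"
        return "multi-segment (~M 6.0-6.2)"

    totals = {}
    for combo in product(*logic_tree.values()):
        names = [name for name, _ in combo]
        weight = 1.0
        for _, w in combo:
            weight *= w
        key = scenario(*names)
        totals[key] = totals.get(key, 0.0) + weight

    for outcome, p in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{outcome:28s} {p:.2f}")
    ```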

  6. Fault kinematics and localised inversion within the Troms-Finnmark Fault Complex, SW Barents Sea

    NASA Astrophysics Data System (ADS)

    Zervas, I.; Omosanya, K. O.; Lippard, S. J.; Johansen, S. E.

    2018-04-01

    The areas bounding the Troms-Finnmark Fault Complex are affected by complex tectonic evolution. In this work, the history of fault growth, reactivation, and inversion of major faults in the Troms-Finnmark Fault Complex and the Ringvassøy Loppa Fault Complex is interpreted from three-dimensional seismic data, structural maps and fault displacement plots. Our results reveal eight normal faults bounding rotated fault blocks in the Troms-Finnmark Fault Complex. Both the throw-depth and displacement-distance plots show that the faults exhibit complex configurations of lateral and vertical segmentation with varied profiles. Some of the faults were reactivated by dip-linkages during the Late Jurassic and exhibit polycyclic fault growth, including radial, syn-sedimentary, and hybrid propagation. Localised positive inversion is the main mechanism of fault reactivation occurring at the Troms-Finnmark Fault Complex. The observed structural styles include folds associated with extensional faults, folded growth wedges and inverted depocentres. Localised inversion was intermittent with rifting during the Middle Jurassic-Early Cretaceous at the boundaries of the Troms-Finnmark Fault Complex to the Finnmark Platform. Additionally, tectonic inversion was more intense at the boundaries of the two fault complexes, affecting Middle Triassic to Early Cretaceous strata. Our study shows that localised folding is either a product of compressional forces or of lateral movements in the Troms-Finnmark Fault Complex. Regional stresses due to the uplift in the Loppa High and halokinesis in the Tromsø Basin are likely additional causes of inversion in the Troms-Finnmark Fault Complex.

  7. Experimental investigation into the fault response of superconducting hybrid electric propulsion electrical power system to a DC rail to rail fault

    NASA Astrophysics Data System (ADS)

    Nolan, S.; Jones, C. E.; Munro, R.; Norman, P.; Galloway, S.; Venturumilli, S.; Sheng, J.; Yuan, W.

    2017-12-01

    Hybrid electric propulsion aircraft are proposed to improve overall aircraft efficiency, enabling future rising demands for air travel to be met. The development of appropriate electrical power systems to provide thrust for the aircraft is a significant challenge due to the much higher required power generation capacity levels and complexity of the aero-electrical power systems (AEPS). The efficiency and weight of the AEPS is critical to ensure that the benefits of hybrid propulsion are not mitigated by the electrical power train. Hence it is proposed that for larger aircraft (~200 passengers) superconducting power systems are used to meet target power densities. Central to the design of the hybrid propulsion AEPS is a robust and reliable electrical protection and fault management system. It is known from previous studies that the choice of protection system may have a significant impact on the overall efficiency of the AEPS. Hence an informed design process which considers the key trades between choice of cable and protection requirements is needed. To date the fault response of a voltage source converter interfaced DC link rail to rail fault in a superconducting power system has only been investigated using simulation models validated by theoretical values from the literature. This paper will present the experimentally obtained fault response for a variety of different types of superconducting tape for a rail to rail DC fault. The paper will then use these as a platform to identify key trades between protection requirements and cable design, providing guidelines to enable future informed decisions to optimise hybrid propulsion electrical power system and protection design.

  8. Measurement of tree canopy architecture

    NASA Technical Reports Server (NTRS)

    Martens, S. N.; Ustin, S. L.; Norman, J. M.

    1991-01-01

    The lack of accurate extensive geometric data on tree canopies has retarded development and validation of radiative transfer models. A stratified sampling method was devised to measure the three-dimensional geometry of 16 walnut trees which had received irrigation treatments of either 100 or 33 per cent of evapotranspirational (ET) demand for the previous two years. Graphic reconstructions of the three-dimensional geometry were verified by 58 independent measurements. The distributions of stem- and leaf-size classes, lengths, and angle classes were determined and used to calculate leaf area index (LAI), stem area, and biomass. Reduced irrigation trees have lower biomass of stems, leaves and fruit, lower LAI, steeper leaf angles and altered biomass allocation to large stems. These data can be used in ecological models that link canopy processes with remotely sensed measurements.

  9. Pseudo-fault signal assisted EMD for fault detection and isolation in rotating machines

    NASA Astrophysics Data System (ADS)

    Singh, Dheeraj Sharan; Zhao, Qing

    2016-12-01

    This paper presents a novel data driven technique for the detection and isolation of faults that generate impacts in rotating equipment. The technique is built upon the principles of empirical mode decomposition (EMD), envelope analysis, and a pseudo-fault signal for fault separation. Firstly, the most dominant intrinsic mode function (IMF) is identified using EMD of a raw signal, which contains all the necessary information about the faults. The envelope of this IMF is often modulated by multiple vibration sources and noise. A second level decomposition is performed by applying pseudo-fault signal (PFS) assisted EMD on the envelope. A pseudo-fault signal is constructed based on the known fault characteristic frequency of the particular machine. The objective of using an external (pseudo-fault) signal is to isolate the different fault frequencies present in the envelope. The pseudo-fault signal serves dual purposes: (i) it solves the mode mixing problem inherent in EMD, (ii) it isolates and quantifies a particular fault frequency component. The proposed technique is suitable for real-time implementation and has been validated on simulated fault data and on experimental data from a bearing and a gear-box set-up, respectively.
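
    As a rough illustration of the signal chain this record describes, the sketch below builds a synthetic impact-modulated vibration signal, extracts the dominant IMF with EMD, demodulates it with a Hilbert envelope, and then adds a pseudo-fault tone at an assumed 90 Hz fault characteristic frequency before a second decomposition. It assumes the PyEMD package (EMD-signal) is installed; the signal, frequencies, and selection heuristics are illustrative, not the authors' implementation.

      # Pseudo-fault-signal (PFS) assisted envelope decomposition, sketched on
      # synthetic data. The 90 Hz fault frequency and the IMF-selection rules
      # are assumptions for this example.
      import numpy as np
      from scipy.signal import hilbert
      from PyEMD import EMD

      fs = 12_000                               # sampling rate [Hz]
      t = np.arange(0, 1.0, 1 / fs)
      f_fault = 90.0                            # assumed fault characteristic frequency [Hz]
      rng = np.random.default_rng(0)

      # Synthetic raw vibration: a 3 kHz resonance amplitude-modulated at the
      # fault frequency, plus an unrelated 60 Hz component and noise.
      raw = (1 + 0.8 * np.cos(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * 3000 * t)
      raw += 0.5 * np.cos(2 * np.pi * 60 * t) + 0.2 * rng.standard_normal(t.size)

      # First-level EMD: take the most energetic IMF as the dominant one.
      imfs = EMD().emd(raw, t)
      dominant = imfs[np.argmax([np.sum(imf ** 2) for imf in imfs])]

      # Envelope of the dominant IMF (Hilbert demodulation).
      envelope = np.abs(hilbert(dominant))

      # Second-level, PFS-assisted EMD: add a pseudo-fault tone at the known
      # fault frequency so the fault-related component separates cleanly.
      pfs = np.std(envelope) * np.cos(2 * np.pi * f_fault * t)
      imfs2 = EMD().emd(envelope + pfs, t)

      # The IMF most correlated with the PFS carries the fault-frequency content.
      fault_imf = imfs2[np.argmax([abs(np.corrcoef(imf, pfs)[0, 1]) for imf in imfs2])]
      freqs = np.fft.rfftfreq(fault_imf.size, 1 / fs)
      peak = freqs[np.argmax(np.abs(np.fft.rfft(fault_imf)))]
      print(f"dominant frequency of isolated component: {peak:.1f} Hz")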

  10. Tools for developing a quality management program: proactive tools (process mapping, value stream mapping, fault tree analysis, and failure mode and effects analysis).

    PubMed

    Rath, Frank

    2008-01-01

    This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve through-put, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.

  11. Mapping tree and impervious cover using Ikonos imagery: links with water quality and stream health

    NASA Astrophysics Data System (ADS)

    Wright, R.; Goetz, S. J.; Smith, A.; Zinecker, E.

    2002-12-01

    Precisely georeferenced Ikonos satellite imagery was used to map tree cover and impervious surface area in Montgomery County, Maryland. The derived maps were used to assess riparian zone stream buffer tree cover and to predict, with multivariate logistic regression, stream health ratings across 246 small watersheds averaging 472 km2 in size. Stream health was assessed by state and county experts using a combination of physical measurements (e.g., dissolved oxygen) and biological indicators (e.g., benthic macroinvertebrates). We found it possible to create highly accurate (90+ per cent) maps of tree and impervious cover using decision tree classifiers, provided extensive field data were available for algorithm training. Impervious surface area was found to be the primary predictor of stream health, followed by tree cover in riparian buffers, and total tree cover within entire watersheds. A number of issues associated with mapping using Ikonos imagery were encountered, including differences in phenological and atmospheric conditions, shadowing within canopies and between scene elements, and limited spectral discrimination of cover types. We report on both the capabilities and limitations of Ikonos imagery for these applications, and considerations for extending these analyses to other areas.
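
    A minimal sketch of the two modelling steps summarised in this record is given below: a decision-tree classifier producing a per-pixel cover map from Ikonos-like bands, and a logistic regression relating watershed-level cover summaries to a stream-health rating. The data are synthetic and all variable names are placeholders; this is not the study's workflow, only its general shape expressed with scikit-learn.

      # Step 1: per-pixel cover classification; Step 2: watershed-level
      # stream-health model. Synthetic data throughout.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)

      # --- per-pixel cover classes: 0 = tree, 1 = impervious, 2 = other ---
      bands = rng.normal(size=(3000, 4))            # blue, green, red, NIR (scaled)
      cover = (bands[:, 3] < -0.3).astype(int)      # crude synthetic labels
      cover[bands[:, 2] > 0.8] = 2
      Xtr, Xte, ytr, yte = train_test_split(bands, cover, test_size=0.3, random_state=0)
      tree_clf = DecisionTreeClassifier(max_depth=5).fit(Xtr, ytr)
      print("cover-map accuracy:", accuracy_score(yte, tree_clf.predict(Xte)))

      # --- watershed-level stream-health model (246 watersheds, as in the study) ---
      n_sheds = 246
      impervious_frac = rng.uniform(0.0, 0.6, n_sheds)
      riparian_tree_frac = rng.uniform(0.1, 0.9, n_sheds)
      p_healthy = 1 / (1 + np.exp(8 * impervious_frac - 2 * riparian_tree_frac))
      healthy = (rng.random(n_sheds) < p_healthy).astype(int)
      X = np.column_stack([impervious_frac, riparian_tree_frac])
      health_model = LogisticRegression().fit(X, healthy)
      print("coefficients (impervious, riparian tree):", health_model.coef_[0])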

  12. Postglacial seismic activity along the Isovaara-Riikonkumpu fault complex

    NASA Astrophysics Data System (ADS)

    Ojala, Antti E. K.; Mattila, Jussi; Ruskeeniemi, Timo; Palmu, Jukka-Pekka; Lindberg, Antero; Hänninen, Pekka; Sutinen, Raimo

    2017-10-01

    Analysis of airborne LiDAR-based digital elevation models (DEMs), trenching of Quaternary deposits, and diamond drilling through faulted bedrock was conducted to characterize the geological structure and full slip profiles of the Isovaara-Riikonkumpu postglacial fault (PGF) complex in northern Finland. The PGF systems are recognized from LiDAR DEMs as a complex of surface ruptures striking SW-NE, cutting through late-Weichselian till, and associated with several postglacial landslides within 10 km. Evidence from the terrain rupture characteristics, the deformed and folded structure of late-Weichselian till, and the 14C age of 11,300 cal BP from buried organic matter underneath the Sotka landslide indicates a postglacial origin of the Riikonkumpu fault (PGF). The fracture frequency and lithology of drill cores and fault geometry in the trench log indicate that the Riikonkumpu PGF dips to WNW with a dip angle of 40-45° at the Riikonkumpu site and close to 60° at the Riikonvaara site. A fault length of 19 km and the mean and maximum cumulative vertical displacement of 1.3 m and 4.1 m, respectively, of the Riikonkumpu PGF system indicate that the fault potentially hosted an earthquake with a moment magnitude MW ≈ 6.7-7.3 assuming that slip was accumulated in one seismic event. Our interpretation further suggests that the Riikonkumpu PGF system is linked to the Isovaara PGF system and that, together, they form a larger Isovaara-Riikonkumpu fault complex. Relationships between the 38-km-long rupture of the Isovaara-Riikonkumpu complex and the fault offset parameters, with mean and maximum cumulative displacements of 1.5 m and 8.3 m, respectively, indicate that the earthquake(s) contributing to the PGF complex potentially had a moment magnitude of MW ≈ 6.9-7.5. In order to adequately sample the uncertainty space, the moment magnitude was also estimated for each major segment within the Isovaara-Riikonkumpu PGF complex. These estimates vary roughly between MW ≈ 5-8 for the individual
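
    The magnitude range quoted above can be reproduced approximately with the standard seismic-moment relations M0 = mu * A * D and Mw = (2/3)(log10 M0 - 9.1), as in the back-of-the-envelope sketch below. The rigidity, seismogenic thickness, dip, and the treatment of the cumulative displacement as single-event slip are assumptions made for this illustration, not values taken from the record.

      # Rough Mw estimate from rupture length, assumed down-dip width and slip.
      import math

      mu = 3.0e10                 # rigidity [Pa], assumed typical crustal value
      length = 38e3               # rupture length of the Isovaara-Riikonkumpu complex [m]
      dip = math.radians(50.0)    # assumed dip, within the reported 40-60 deg range
      seismo_depth = 15e3         # assumed seismogenic thickness [m]

      width = seismo_depth / math.sin(dip)        # down-dip rupture width [m]
      area = length * width                       # rupture area [m^2]

      for slip in (1.5, 8.3):                     # mean and maximum displacement [m]
          m0 = mu * area * slip                   # seismic moment [N m]
          mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
          print(f"slip {slip:4.1f} m  ->  M0 = {m0:.2e} N m,  Mw = {mw:.1f}")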

  13. GIS-based groundwater potential mapping using boosted regression tree, classification and regression tree, and random forest machine learning models in Iran.

    PubMed

    Naghibi, Seyed Amir; Pourghasemi, Hamid Reza; Dixon, Barnali

    2016-01-01

    Groundwater is considered one of the most valuable fresh water resources. The main objective of this study was to produce groundwater spring potential maps in the Koohrang Watershed, Chaharmahal-e-Bakhtiari Province, Iran, using three machine learning models: boosted regression tree (BRT), classification and regression tree (CART), and random forest (RF). Thirteen hydrological-geological-physiographical (HGP) factors that influence locations of springs were considered in this research. These factors include slope degree, slope aspect, altitude, topographic wetness index (TWI), slope length (LS), plan curvature, profile curvature, distance to rivers, distance to faults, lithology, land use, drainage density, and fault density. Subsequently, groundwater spring potential was modeled and mapped using CART, RF, and BRT algorithms. The predicted results from the three models were validated using the receiver operating characteristics curve (ROC). Of the 864 springs identified, 605 (≈70 %) were used for the spring potential mapping, while the remaining 259 (≈30 %) were used for model validation. The area under the curve (AUC) for the BRT model was calculated as 0.8103, and for CART and RF the AUC values were 0.7870 and 0.7119, respectively. Therefore, it was concluded that the BRT model produced the best predictions of spring locations, followed by the CART and RF models. Geospatially integrated BRT, CART, and RF methods proved to be useful in generating the spring potential map (SPM) with reasonable accuracy.
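
    The model comparison described in this record can be sketched with scikit-learn as below, where GradientBoostingClassifier stands in for BRT, DecisionTreeClassifier for CART, and RandomForestClassifier for RF; the ~70/30 split and ROC-AUC validation mirror the study's setup, but the data, factor values, and resulting AUCs here are purely synthetic.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(42)
      factors = ["slope", "aspect", "altitude", "twi", "slope_length", "plan_curv",
                 "profile_curv", "dist_rivers", "dist_faults", "lithology",
                 "land_use", "drainage_density", "fault_density"]
      n = 1728                                    # spring and non-spring cells (synthetic)
      X = rng.normal(size=(n, len(factors)))
      # Synthetic spring presence/absence, loosely driven by a few factors.
      p = 1 / (1 + np.exp(-(1.2 * X[:, 3] - 0.8 * X[:, 8] + 0.5 * X[:, 11])))
      y = (rng.random(n) < p).astype(int)

      # ~70 % of locations for training, ~30 % held out for validation.
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.30, random_state=1)

      models = {
          "BRT (gradient boosting)": GradientBoostingClassifier(),
          "CART (single tree)": DecisionTreeClassifier(max_depth=6),
          "RF (random forest)": RandomForestClassifier(n_estimators=300),
      }
      for name, model in models.items():
          model.fit(Xtr, ytr)
          auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
          print(f"{name:25s} AUC = {auc:.3f}")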

  14. Crustal faults exposed in the Pito Deep Rift: Conduits for hydrothermal fluids on the southeast Pacific Rise

    NASA Astrophysics Data System (ADS)

    Hayman, Nicholas W.; Karson, Jeffrey A.

    2009-02-01

    The escarpments that bound the Pito Deep Rift (northeastern Easter microplate) expose in situ upper oceanic crust that was accreted ˜3 Ma ago at the superfast spreading (˜142 mm/a, full rate) southeast Pacific Rise (SEPR). Samples and images of these escarpments were taken during transects utilizing the human-occupied vehicle Alvin and remotely operated vehicle Jason II. The dive areas were mapped with a "deformation intensity scale" revealing that the sheeted dike complex and the base of the lavas contain approximately meter-wide fault zones surrounded by fractured "damage zones." Fault zones are spaced several hundred meters apart, in places offset the base of the lavas, separate areas with differently oriented dikes, and are locally crosscut by (younger) dikes. Fault rocks are rich in interstitial amphibole, matrix and vein chlorite, prominent veins of quartz, and accessory grains of sulfides, oxides, and sphene. These phases form the fine-grained matrix materials for cataclasites and cements for breccias where they completely surround angular to subangular clasts of variably altered and deformed basalt. Bulk rock geochemical compositions of the fault rocks are largely governed by the abundance of quartz veins. When compositions are normalized to compensate for the excess silica, the fault rocks exhibit evidence for additional geochemical changes via hydrothermal alteration, including the loss of mobile elements and gain of some trace metals and magnesium. Microstructures and compositions suggest that the fault rocks developed over multiple increments of deformation and hydrothermal fluid flow in the subaxial environment of the SEPR; faults related to the opening of the Pito Deep Rift can be distinguished by their orientation and fault rock microstructure. Some subaxial deformation increments were likely linked with violent discharge events associated with fluid pressure fluctuations and mineral sealing within the fault zones. Other increments were linked with

  15. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between coverage provided by some higher metrics and the elimination of faults in the code. Investigation of back-to-back testing, an efficient mechanism for removal of uncorrelated faults and common-cause faults of variable span, was continued. Work also continued on software reliability estimation methods based on non-random sampling and on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance testing scheme.
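
    For readers unfamiliar with acceptance voting, the sketch below shows the basic adjudication idea on a toy example: the outputs of N independently written versions are first screened by an acceptance test, and the surviving outputs are then majority-voted. The versions, the acceptance test, and the numerical tolerance are invented for this illustration and are not drawn from the study.

      # Toy acceptance-voting adjudicator for N-version software.
      from collections import Counter
      from typing import Callable, Optional, Sequence

      def acceptance_vote(outputs: Sequence[float],
                          acceptable: Callable[[float], bool],
                          tol: float = 1e-6) -> Optional[float]:
          """Majority value among outputs passing the acceptance test, else None."""
          accepted = [x for x in outputs if acceptable(x)]
          if not accepted:
              return None
          # Group numerically close results before counting votes.
          buckets = Counter(round(x / tol) for x in accepted)
          key, votes = buckets.most_common(1)[0]
          return key * tol if votes > len(accepted) // 2 else None

      # Three independent "versions" of the same computation; version_c is faulty.
      version_a = lambda x: x * x
      version_b = lambda x: x ** 2
      version_c = lambda x: x * x + 1.0            # injected fault

      result = acceptance_vote(
          [v(3.0) for v in (version_a, version_b, version_c)],
          acceptable=lambda y: 0.0 <= y <= 100.0,  # simple range-check acceptance test
      )
      print("adjudicated result:", result)         # approximately 9.0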

  16. Urban trees and the risk of poor birth outcomes

    Treesearch

    Geoffrey H. Donovan; Yvonne L. Michael; David T. Butry; Amy D. Sullivan; John M. Chase

    2011-01-01

    This paper investigated whether greater tree-canopy cover is associated with reduced risk of poor birth outcomes in Portland, Oregon. Residential addresses were geocoded and linked to classified-aerial imagery to calculate tree-canopy cover in 50, 100, and 200 m buffers around each home in our sample (n=5696). Detailed data on maternal characteristics and additional...

  17. Imaging the North Anatolian Fault using the scattered teleseismic wavefield

    NASA Astrophysics Data System (ADS)

    Thompson, D. A.; Rost, S.; Houseman, G. A.; Cornwell, D. G.; Turkelli, N.; Teoman, U.; Kahraman, M.; Altuncu Poyraz, S.; Gülen, L.; Utkucu, M.; Frederiksen, A. W.; Rondenay, S.

    2013-12-01

    The North Anatolian Fault Zone (NAFZ) is a major continental strike-slip fault system, similar in size and scale to the San Andreas system, that extends ˜1200 km across Turkey. In 2012, a new multidisciplinary project (FaultLab) was initiated to better understand deformation throughout the entire crust in the NAFZ, in particular the expected transition from narrow zones of brittle deformation in the upper crust to possibly broader shear zones in the lower crust/upper mantle, and how these features contribute to the earthquake loading cycle. This contribution will discuss the first results from the seismic component of the project, a 73-station network encompassing the northern and southern branches of the NAFZ in the Sakarya region. The Dense Array for North Anatolia (DANA) is arranged as a 6×11 grid with a nominal station spacing of 7 km, with a further 7 stations located outside of the main grid. With the excellent resolution afforded by the DANA network, we will present images of crustal structure using the technique of teleseismic scattering tomography. The method uses a full waveform inversion of the teleseismic scattered wavefield coupled with array processing techniques to infer the properties and location of small-scale heterogeneities (with scales on the order of the seismic wavelength) within the crust. We will also present preliminary results of teleseismic scattering migration, another powerful method that benefits from the dense data coverage of the deployed seismic network. Images obtained using these methods together with other conventional imaging techniques will provide evidence for how the deformation is distributed within the fault zone at depth, providing constraints that can be used in conjunction with structural analyses of exhumed fault segments and models of geodetic strain-rate across the fault system. By linking together results from the complementary techniques being employed in the FaultLab project, we aim to produce a comprehensive

  18. A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hursey, Joshua J; Naughton, III, Thomas J; Vallee, Geoffroy R

    The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. The MPI standard does not define fault tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.
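
    The core idea of piggybacking an agreement on a reduction can be sketched with mpi4py as below: a single allreduce with a MIN operation over 0/1 success flags tells every rank whether all ranks succeeded, with no messages beyond the (log-scaling) collective itself. This shows only the failure-free path; the algorithm proposed in the paper additionally tolerates process failures, which this sketch does not attempt.

      # Agreement piggybacked on a reduction (failure-free path only).
      # Run with, e.g.:  mpiexec -n 4 python agree.py   (script name illustrative)
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      # Each process decides locally whether its part of the operation succeeded.
      local_ok = 1          # placeholder success flag: 1 = succeeded, 0 = failed

      # One collective: MIN over {0, 1} acts as a logical AND across all ranks.
      all_ok = comm.allreduce(local_ok, op=MPI.MIN)

      print(f"rank {rank}: agreement -> {'commit' if all_ok else 'abort'}")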

  19. Ground-Penetrating Radar Investigations Across the Sawmill Branch Fault Near Charleston, South Carolina

    NASA Astrophysics Data System (ADS)

    Dura-Gomez, I.; Addison, A.; Knapp, C. C.; Talwani, P.; Chapman, A.

    2005-12-01

    During the 1886 Charleston earthquake, two parallel tabby walls of Fort Dorchester broke left-laterally, and a strike of ~N25°W was inferred for the causative Sawmill Branch fault. To better define this fault, which does not have any surface expression, we planned to cut trenches across it. However, as Fort Dorchester is a protected archeological site, we were required to locate the fault accurately away from the fort, before permission could be obtained to cut short trenches. The present GPR investigations were planned as a preliminary step to determine locations for trenching. A pulseEKKO 100 GPR was used to collect data along eight profiles (varying in length from 10 m to 30 m) that were run across the projected strike of the fault, and one 50 m long profile that was run parallel to it. The locations of the profiles were obtained using a total station. To capture the signature of the fault, sixteen common-offset (COS) lines were acquired by using different antennas (50, 100 and 200 MHz) and stacking 64 times to increase the signal-to-noise ratio. The locations of trees and stumps were recorded. In addition, two common-midpoint (CMP) tests were carried out, and gave an average velocity of about 0.097 m/ns. Processing included the subtraction of the low frequency "wow" on the trace (dewow), automatic gain control (AGC) and the application of bandpass filters. The signals using the 50 MHz, 100 MHz and 200 MHz antennas were found to penetrate up to about 30 meters, 20 meters and 12 meters respectively. Vertically offset reflectors and disruptions of the electrical signal were used to infer the location of the fault(s). Comparisons of the locations of these disruptions on various lines were used to infer the presence of a N30°W fault zone. We plan to confirm these locations by cutting shallow trenches.
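
    The single-trace processing steps named in this record (dewow, AGC, bandpass filtering) and the time-to-depth conversion with the CMP-derived velocity can be sketched as follows. The synthetic trace, window lengths, and filter corner frequencies are assumptions for the illustration; only the 0.097 m/ns velocity comes from the record itself.

      # Dewow, AGC and zero-phase bandpass on a synthetic GPR trace, then a
      # simple two-way-time to depth conversion.
      import numpy as np
      from scipy.signal import butter, filtfilt

      dt_ns = 0.8                                   # sample interval [ns] (assumed)
      t_ns = np.arange(1024) * dt_ns
      rng = np.random.default_rng(3)
      trace = 0.5 * np.exp(-t_ns / 400.0) + 0.1 * rng.standard_normal(t_ns.size)
      trace += np.exp(-((t_ns - 250) / 40) ** 2) * np.sin(2 * np.pi * 0.1 * t_ns)  # "reflector"

      def dewow(x, win=41):
          """Subtract a running mean to remove the low-frequency 'wow'."""
          return x - np.convolve(x, np.ones(win) / win, mode="same")

      def agc(x, win=64):
          """Automatic gain control: normalise by a running RMS amplitude."""
          power = np.convolve(x ** 2, np.ones(win) / win, mode="same")
          return x / (np.sqrt(power) + 1e-12)

      def bandpass(x, lo_mhz, hi_mhz, dt_ns):
          """Zero-phase Butterworth bandpass; corner frequencies in MHz."""
          nyq_mhz = 0.5 * 1e3 / dt_ns
          b, a = butter(4, [lo_mhz / nyq_mhz, hi_mhz / nyq_mhz], btype="band")
          return filtfilt(b, a, x)

      processed = bandpass(agc(dewow(trace)), lo_mhz=25, hi_mhz=200, dt_ns=dt_ns)

      v = 0.097                                     # m/ns, from the CMP tests above
      depth_m = v * t_ns / 2.0                      # two-way travel time -> depth
      print(f"deepest sample on this trace corresponds to ~{depth_m[-1]:.1f} m")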

  20. Prediction and measurement of thermally induced cambial tissue necrosis in tree stems

    Treesearch

    Joshua L. Jones; Brent W. Webb; Bret W. Butler; Matthew B. Dickinson; Daniel Jimenez; James Reardon; Anthony S. Bova

    2006-01-01

    A model for fire-induced heating in tree stems is linked to a recently reported model for tissue necrosis. The combined model produces cambial tissue necrosis predictions in a tree stem as a function of heating rate, heating time, tree species, and stem diameter. Model accuracy is evaluated by comparison with experimental measurements in two hardwood and two softwood...

  1. Experimental study on propagation of fault slip along a simulated rock fault

    NASA Astrophysics Data System (ADS)

    Mizoguchi, K.

    2015-12-01

    Around pre-existing geological faults in the crust, we often observe off-fault damage zones containing many fractures at various scales, from ~mm to ~m, whose density typically increases with proximity to the fault. One of the fracture formation processes is considered to be dynamic shear rupture propagation on the faults, which leads to the occurrence of earthquakes. Here, I have conducted experiments on propagation of fault slip along a pre-cut rock surface to investigate the damaging behavior of rocks during slip propagation. For the experiments, I used a pair of metagabbro blocks from Tamil Nadu, India, whose contacting surfaces simulate a fault 35 cm in length and 1 cm in width. The experiments were done with a uniaxial loading configuration similar to that of Rosakis et al. (2007). The axial load σ is applied with the fault plane at an angle of 60° to the loading direction. When σ is 5 kN, the normal and shear stresses on the fault are 1.25 MPa and 0.72 MPa, respectively. The timing and direction of slip propagation on the fault during the experiments were monitored with several strain gauges arrayed at intervals along the fault. The gauge data were digitally recorded with a 1 MHz sampling rate and 16-bit resolution. When σ = 4.8 kN was applied, we observed several fault slip events in which slip nucleated spontaneously in a subsection of the fault and propagated across the whole fault. However, the propagation speed was about 1.2 km/s, much lower than the S-wave velocity of the rock. This indicates that the slip events were not earthquake-like dynamic ruptures. More efforts are needed to reproduce earthquake-like slip events in the experiments. This work is supported by the JSPS KAKENHI (26870912).
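
    The ~1.2 km/s figure quoted above is the kind of number that falls out of fitting slip-front arrival times against gauge position along the fault, as in the short sketch below; the gauge positions and pick times are invented to reproduce a speed of that order.

      # Average rupture propagation speed from strain-gauge arrival-time picks.
      import numpy as np

      gauge_pos_m = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])    # positions along the fault [m]
      pick_time_s = np.array([0.0, 4.2e-5, 8.5e-5, 12.4e-5, 16.8e-5, 20.9e-5])  # arrival picks [s]

      # Least-squares slope of position versus time gives the average speed.
      speed, _ = np.polyfit(pick_time_s, gauge_pos_m, 1)
      print(f"estimated propagation speed: {speed:.0f} m/s ({speed / 1000:.2f} km/s)")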

  2. Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.

    PubMed

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and are widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas path of the turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information-entropy-based decision tree algorithm to extract rules from fault samples. The experiments on some real-world data show the effectiveness of the proposed algorithms.
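
    Two of the ingredients this record describes can be sketched compactly: a Shannon-entropy uniformity index for the ring of exhaust-temperature probes, and a decision tree that converts labelled fault samples into readable rules. The kernelised extension used for feature selection is omitted here, and the temperatures, features, and labels are synthetic placeholders rather than the authors' data.

      # (1) Entropy of the normalised exhaust-temperature pattern as a
      #     uniformity index; (2) rule extraction with a decision tree.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      def temperature_entropy(temps):
          """Shannon entropy of the normalised temperature distribution; the
          maximum (log n) corresponds to perfectly uniform exhaust temperatures."""
          p = np.asarray(temps, dtype=float)
          p = p / p.sum()
          return float(-(p * np.log(p)).sum())

      healthy = [650, 652, 649, 651, 650, 648]      # fairly uniform probe readings [C]
      hot_spot = [650, 652, 780, 651, 650, 648]     # one overheating combustor can
      print("entropy, healthy :", temperature_entropy(healthy))
      print("entropy, hot spot:", temperature_entropy(hot_spot))   # lower = less uniform

      # Rule extraction from labelled fault samples (features are illustrative).
      rng = np.random.default_rng(7)
      X = rng.normal(size=(200, 3))                 # e.g. entropy, EGT spread, shaft speed
      y = (X[:, 0] < -0.2).astype(int)              # synthetic "gas-path fault" label
      tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
      print(export_text(tree, feature_names=["entropy", "egt_spread", "shaft_speed"]))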

  3. Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model

    PubMed Central

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and are widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas path of the turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information-entropy-based decision tree algorithm to extract rules from fault samples. The experiments on some real-world data show the effectiveness of the proposed algorithms. PMID:25258726

  4. The use of automatic programming techniques for fault tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Wild, C.

    1985-01-01

    It is conjectured that the production of software for ultra-reliable computing systems such as required by Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection as well as the automatic generation of assertions and test cases from abstract data type specifications is outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.

  5. Fault Network Reconstruction using Agglomerative Clustering: Applications to South Californian Seismicity

    NASA Astrophysics Data System (ADS)

    Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen

    2014-05-01

    We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity trying to explain the small scales, our method uses a bottom-up approach, by an initial sampling of the small scales and then reducing the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical cluster (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Modeling (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. The kernels that become ill-defined (fewer than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging of any pair will lessen the likelihood (calculated over the pdf of the kernels) but in turn will reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, where the matrix element d(i,j) gives the distance between points i and j, we can construct an MDL gain/loss matrix where m(i,j) gives the information gain/loss resulting from merging kernels i and j. Based on this
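
    The reconstruction loop summarised above can be condensed into the following sketch: build a Ward hierarchical-clustering tree over event locations, cut it at progressively more populated levels to initialise a Gaussian mixture, let EM converge, discard kernels supported by fewer than 4 events, and score competing models. BIC is used below as a convenient stand-in for the MDL criterion, and the event catalogue is synthetic; none of this reproduces the authors' exact procedure.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      # Two synthetic "faults": elongated clouds of epicentres (x, y in km).
      fault_a = rng.normal(0, 0.3, (150, 2)) + np.outer(np.linspace(0, 20, 150), [1.0, 0.2])
      fault_b = rng.normal(0, 0.3, (100, 2)) + np.outer(np.linspace(0, 12, 100), [0.3, 1.0]) + [5, -5]
      events = np.vstack([fault_a, fault_b])

      tree = linkage(events, method="ward")          # agglomerative hierarchical cluster tree

      best = None
      for k in range(1, 12):                         # atomise at increasingly populated levels
          labels = fcluster(tree, t=k, criterion="maxclust")
          means = np.array([events[labels == c].mean(axis=0) for c in np.unique(labels)])
          gmm = GaussianMixture(n_components=len(means), means_init=means,
                                random_state=0).fit(events)          # EM convergence
          well_defined = (gmm.weights_ * len(events) >= 4).all()     # drop ill-defined kernels
          score = gmm.bic(events)                                    # lower is better
          if well_defined and (best is None or score < best[0]):
              best = (score, len(means), gmm)

      print(f"selected {best[1]} kernels, BIC = {best[0]:.1f}")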

  6. Surface faulting along the Superstition Hills fault zone and nearby faults associated with the earthquakes of 24 November 1987

    USGS Publications Warehouse

    Sharp, R.V.

    1989-01-01

    The M6.2 Elmore Desert Ranch earthquake of 24 November 1987 was associated spatially and probably temporally with left-lateral surface rupture on many northeast-trending faults in and near the Superstition Hills in western Imperial Valley. Three curving discontinuous principal zones of rupture among these breaks extended northeastward from near the Superstition Hills fault zone as far as 9 km; the maximum observed surface slip, 12.5 cm, was on the northern of the three, the Elmore Ranch fault, at a point near the epicenter. Twelve hours after the Elmore Ranch earthquake, the M6.6 Superstition Hills earthquake occurred near the northwest end of the right-lateral Superstition Hills fault zone. We measured displacements over 339 days at as many as 296 sites along the Superstition Hills fault zone, and repeated measurements at 49 sites provided sufficient data to fit with a simple power law. The overall distributions of right-lateral displacement at 1 day and the estimated final slip are nearly symmetrical about the midpoint of the surface rupture. The average estimated final right-lateral slip for the Superstition Hills fault zone is ~54 cm. The average left-lateral slip for the conjugate faults trending northeastward is ~23 cm. The southernmost ruptured member of the Superstition Hills fault zone, newly named the Wienert fault, extends the known length of the zone by about 4 km. -from Authors

  7. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.

  8. Ontology and Knowledgebase of Fractures and Faults

    NASA Astrophysics Data System (ADS)

    Aydin, A.; Zhong, J.

    2007-12-01

    Fractures and faults are related to many societal and industrial problems including oil and gas exploration and production, CO2 sequestration, and waste isolation. Therefore, an ontology focusing on fractures and faults is desirable to facilitate sound education and communication among this highly diverse community. We developed an ontology for this field. Some high-level classes in our ontology include geological structure, deformation mechanism, and property or factor. Throughout the ontology, we emphasize the relationships among the classes, such as structures being formed by mechanisms and properties affecting which mechanism will occur. At this stage, there are about 1,000 classes, referencing about 150 articles or textbooks and supplemented by about 350 photographs, diagrams, and illustrations. With limited time and resources, we chose a simple application for our ontology - transforming it into a knowledgebase made of a series of web pages. Each web page corresponds to one class in the ontology, containing discussion, figures, links to subclasses and related concepts, as well as references. We believe that the knowledgebase is a valuable resource for finding information about fractures and faults, both for practicing geologists and for students interested in these issues in application, education, and training.

  9. Ring profiler: a new method for estimating tree-ring density for improved estimates of carbon storage

    Treesearch

    David W. Vahey; C. Tim Scott; J.Y. Zhu; Kenneth E. Skog

    2012-01-01

    Methods for estimating present and future carbon storage in trees and forests rely on measurements or estimates of tree volume or volume growth multiplied by specific gravity. Wood density can vary by tree ring and height in a tree. If data on density by tree ring could be obtained and linked to tree size and stand characteristics, it would be possible to more...

  10. Toward a Model-Based Approach for Flight System Fault Protection

    NASA Technical Reports Server (NTRS)

    Day, John; Meakin, Peter; Murray, Alex

    2012-01-01

    The approach uses SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. It uses the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the FP domain, which extends the UML/SysML languages to contain the FP concepts. UML/SysML, along with this profile, is then used to capture FP concepts and relationships in the model and to generate typical FP engineering products (the FMECA, Fault Tree, MRD, and V&V Matrices).

  11. Structural Evolution of Transform Fault Zones in Thick Oceanic Crust of Iceland

    NASA Astrophysics Data System (ADS)

    Karson, J. A.; Brandsdottir, B.; Horst, A. J.; Farrell, J.

    2017-12-01

    Spreading centers in Iceland are offset from the regional trend of the Mid-Atlantic Ridge by the Tjörnes Fracture Zone (TFZ) in the north and the South Iceland Seismic Zone (SISZ) in the south. Rift propagation away from the center of the Iceland hotspot has resulted in migration of these transform faults to the N and S, respectively. As they migrate, new transform faults develop in older crust between offset spreading centers. Active transform faults, and abandoned transform structures left in their wakes, show features that reflect different amounts (and durations) of slip that can be viewed as a series of snapshots of different stages of transform fault evolution in thick, oceanic crust. This crust has a highly anisotropic, spreading fabric with pervasive zones of weakness created by spreading-related normal faults, fissures and dike margins oriented parallel to the spreading centers where they formed. These structures have a strong influence on the mechanical properties of the crust. By integrating available data, we suggest a series of stages of transform development: 1) Formation of an oblique rift (or leaky transform) with magmatic centers, linked by bookshelf fault zones (antithetic strike-slip faults at a high angle to the spreading direction) (Grimsey Fault Zone, youngest part of the TFZ); 2) broad zone of conjugate faulting (tens of km) (Hreppar Block N of the SISZ); 3) narrower (~20 km) zone of bookshelf faulting aligned with the spreading direction (SISZ); 4) mature, narrow (~1 km) through-going transform fault zone bounded by deformation (bookshelf faulting and block rotations) distributed over 10 km to either side (Húsavík-Flatey Fault Zone in the TFZ). With continued slip, the transform zone becomes progressively narrower and more closely aligned with the spreading direction. The transform and non-transform (beyond spreading centers) domains may be truncated by renewed propagation and separated by subsequent spreading. This perspective

  12. Critical fault patterns determination in fault-tolerant computer systems

    NASA Technical Reports Server (NTRS)

    Mccluskey, E. J.; Losq, J.

    1978-01-01

    The method proposed tries to enumerate all the critical fault-patterns (successive occurrences of failures) without analyzing every single possible fault. The conditions for the system to be operating in a given mode can be expressed in terms of the static states. Thus, one can find all the system states that correspond to a given critical mode of operation. The next step consists in analyzing the fault-detection mechanisms, the diagnosis algorithm and the process of switch control. From them, one can find all the possible system configurations that can result from a failure occurrence. Thus, one can list all the characteristics, with respect to detection, diagnosis, and switch control, that failures must have to constitute critical fault-patterns. Such an enumeration of the critical fault-patterns can be directly used to evaluate the overall system tolerance to failures. Present research is focused on how to efficiently make use of these system-level characteristics to enumerate all the failures that verify these characteristics.

  13. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    NASA Astrophysics Data System (ADS)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation, as the fault dip affects estimates of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile, and I investigate its properties through numerical simulations. From numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. Thus, which eigenvector of the gravity gradient tensor should be used for estimating the fault dip is determined by the fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result consistent with conventional fault dip estimates from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
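
    A toy version of the eigen-analysis step only is sketched below: the gravity gradient tensor at a surface station is assembled from buried point masses standing in for the high-density side of a dipping structure, and the eigenvector of the largest eigenvalue is extracted. The geometry, excess masses, and units are arbitrary choices for the sketch, and it does not reproduce the paper's calibration between eigenvector dip and fault dip, which relies on full profile modelling.

      # Eigenvector of a 2-D (x, z) gravity gradient tensor built from point masses.
      import numpy as np

      G = 6.674e-11

      def point_mass_tensor(obs, src, m):
          """Gravity gradient tensor of a point mass at src, observed at obs."""
          d = np.asarray(obs, float) - np.asarray(src, float)
          r = np.linalg.norm(d)
          return G * m * (3.0 * np.outer(d, d) / r**5 - np.eye(2) / r**3)

      # High-density sources along a plane dipping 60 deg (x horizontal, z down).
      dip_true = np.radians(60.0)
      depths = np.linspace(200.0, 1200.0, 40)                  # m
      sources = np.column_stack([depths / np.tan(dip_true), depths])
      mass_each = 5e9                                          # kg, arbitrary excess mass

      station = np.array([150.0, 0.0])                         # surface observation point
      T = sum(point_mass_tensor(station, s, mass_each) for s in sources)

      eigvals, eigvecs = np.linalg.eigh(T)
      v_max = eigvecs[:, np.argmax(eigvals)]                   # maximum-eigenvalue eigenvector
      dip_est = np.degrees(np.arctan2(abs(v_max[1]), abs(v_max[0])))
      print(f"maximum eigenvector dips {dip_est:.0f} deg toward the high-density sources")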

  14. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    NASA Astrophysics Data System (ADS)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may not be completely appropriate for explaining shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, covering the range of the most frequent dip angles of active reverse faults in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly-spaced precuts. To test the repeatability of the processes and to obtain a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation method (D.I.C.) in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly-formed faults and whether and at what stage the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation. Then we compared

  15. Interface For Fault-Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Shaver, Charles; Williamson, Michael

    1989-01-01

    Interface unit and controller emulator developed for research on electronic helicopter-flight-control systems equipped with artificial intelligence. Interface unit interrupt-driven system designed to link microprocessor-based, quadruply-redundant, asynchronous, ultra-reliable, fault-tolerant control system (controller) with electronic servocontrol unit that controls set of hydraulic actuators. Receives digital feedforward messages from, and transmits digital feedback messages to, controller through differential signal lines or fiber-optic cables (thus far only differential signal lines have been used). Analog signals transmitted to and from servocontrol unit via coaxial cables.

  16. Scissoring Fault Rupture Properties along the Median Tectonic Line Fault Zone, Southwest Japan

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Nishizaka, N.; Onishi, K.; Sakamoto, J.; Takahashi, K.

    2017-12-01

    The Median Tectonic Line fault zone (hereinafter MTLFZ) is the longest and most active fault zone in Japan. The MTLFZ is a 400-km-long, trench-parallel, right-lateral strike-slip fault accommodating the lateral slip component of the Philippine Sea plate's oblique subduction beneath the Eurasian plate [Fitch, 1972; Yeats, 1996]. Complex fault geometry has evolved along the MTLFZ, and its geomorphic and geological characteristics change markedly along strike. Extensional step-overs and pull-apart basins develop in the western part of the MTLFZ, whereas a pop-up structure develops in the eastern part; we describe this contrast as "scissoring fault properties". Two main factors produce the scissoring fault properties along the MTLFZ: the regional stress condition and the presence of a preexisting fault. The direction of σ1 rotates anticlockwise from N170°E in the eastern Shikoku to Kinki areas [Famin et al., 2014], through N100°E in central Shikoku [Research Group for Crustal Stress in Western Japan, 1980], to N85°E in western Shikoku [Onishi et al., 2016]. Owing to this rotation of the principal stress directions, the western and eastern parts of the MTLFZ lie in transtensional and compressional regimes, respectively. The MTLFZ formed as a terrane boundary in the Cretaceous and has evolved through a long active history, with fault styles that have varied between left-lateral, thrust, normal, and right-lateral. Where a preexisting fault is present, rupture does not completely conform to Anderson's theory for a newly formed fault, as the theory would require either purely dip-slip motion on a 45°-dipping fault or strike-slip motion on a vertical fault. The rupture of the 2013 Balochistan earthquake in Pakistan is a rare example of large strike-slip reactivation on a relatively low-angle (thrust) fault, even though strike-slip faults generally have near-vertical planes [Avouac et al., 2014]. In this presentation, we first show deep subsurface

  17. A Linked Model for Simulating Stand Development and Growth Processes of Loblolly Pine

    Treesearch

    V. Clark Baldwin; Phillip M. Dougherty; Harold E. Burkhart

    1998-01-01

    Linking models of different scales (e.g., process, tree-stand-ecosystem) is essential for furthering our understanding of stand, climatic, and edaphic effects on tree growth and forest productivity. Moreover, linking existing models that differ in scale and levels of resolution quickly identifies knowledge gaps in information required to scale from one level to another...

  18. Perspective View, Garlock Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission (SRTM), flown in February 2000. The data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. These mountains are the southern end of the Sierra Nevada and the prominent canyon emerging at the lower right is Lone Tree Canyon. In the distance, the San Gabriel Mountains cut across from the left side of the image. At their base lies the San Andreas Fault which meets the Garlock Fault near the left edge at Tejon Pass. The dark linear feature running from lower right to upper left is State Highway 14 leading from the town of Mojave in the distance to Inyokern and the Owens Valley in the north. The lighter parallel lines are dirt roads related to power lines and the Los Angeles Aqueduct which run along the base of the mountains.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast

  19. Linking movement and oviposition behaviour to spatial population distribution in the tree hole mosquito Ochlerotatus triseriatus.

    PubMed

    Ellis, Alicia M

    2008-01-01

    tree hole mosquito larvae is determined in part by adult habitat selection (H(1)), but do not exclude additional effects from passive aggregation (H(4)), or spatial patterns in adult mortality (H(5)). 5. This research illustrates the importance of studying oviposition behaviour at the population scale to better evaluate its relative importance in determining population distribution and dynamics. Moreover, this study demonstrates the importance of linking behavioural and population dynamics for understanding evolutionary relationships among life-history traits (e.g. preference and offspring performance) and predicting when behaviour will be important in determining population phenomena.

  20. Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting

    NASA Technical Reports Server (NTRS)

    Bergman, Eric A.; Solomon, Sean C.

    1987-01-01

    The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compression jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.

  1. Global tree network for computing structures enabling global processing operations

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in an asynchronous or synchronized manner, and is physically and logically partitionable.
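
    A minimal software analogue of the two collective patterns named in this record (an upstream reduction toward the root followed by a downstream broadcast of the combined result) is sketched below on a small in-memory tree; the node layout and values are illustrative and are unrelated to the patented hardware topology.

      # Upstream reduction and downstream broadcast on a small tree of nodes.
      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class Node:
          value: int
          children: List["Node"] = field(default_factory=list)
          result: Optional[int] = None

      def reduce_up(node: Node) -> int:
          """Combine each node's value with its subtrees' results toward the root."""
          return node.value + sum(reduce_up(child) for child in node.children)

      def broadcast_down(node: Node, value: int) -> None:
          """Send the reduced result from the root back down to every node."""
          node.result = value
          for child in node.children:
              broadcast_down(child, value)

      # A root with two subtrees of two leaves each.
      leaves = [Node(v) for v in (3, 1, 4, 1)]
      root = Node(2, children=[Node(5, children=leaves[:2]), Node(9, children=leaves[2:])])

      global_sum = reduce_up(root)         # reduction: leaves -> root
      broadcast_down(root, global_sum)     # broadcast: root -> leaves
      print("global sum seen at every node:", leaves[0].result)   # -> 25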

  2. Earthquake Nucleation and Fault Slip: Possible Experiments on a Natural Fault

    NASA Astrophysics Data System (ADS)

    Germanovich, L. N.; Murdoch, L. C.; Garagash, D.; Reches, Z.; Martel, S. J.; Johnston, M. J.; Ebenhack, J.; Gwaba, D.

    2011-12-01

    High-resolution deformation and seismic observations are usually made only near the Earth's surface, kilometers away from where earthquakes nucleate on active faults, and are limited by inverse-cube distance attenuation and ground noise. We have developed an experimental approach that aims at reactivating faults in situ using thermal techniques and fluid injection, which modify in-situ stresses and the fault strength until the fault slips. Mines where in-situ stresses are sufficient to drive faulting present an opportunity to conduct such experiments. The former Homestake gold mine in South Dakota is a good example. During our recent field work in the Homestake mine, we found a large fault that intersects multiple mine levels. The size and distinct structure of this fault make it a promising target for in-situ reactivation, which would likely be localized on a crack-like patch. Slow patch propagation, moderated by the injection rate and the rate of change of the background stresses, may become unstable, leading to the nucleation of a dynamic earthquake rupture. Our analyses for the Homestake fault conditions indicate that this transition occurs at a patch size of ~1 m. This represents a fundamental limitation for laboratory experiments and necessitates larger-scale field tests at scales of ~10-100 m. The opportunity to observe earthquake nucleation on the Homestake Fault is feasible because slip could be initiated at a pre-defined location and time with instrumentation placed as close as a few meters from the nucleation site. Designing the experiment requires a detailed assessment of the state of stress in the vicinity of the fault. This is being conducted by simulating changes in pore pressure and effective stresses accompanying dewatering of the mine, and by evaluating in-situ stress measurements in light of a regional stress field modified by local perturbations caused by the mine workings.

  3. Transpressive mantle uplift at large offset oceanic transform faults

    NASA Astrophysics Data System (ADS)

    Maia, M.; Briais, A.; Brunelli, D.; Ligi, M.; Sichel, S. E.; Campos, T.

    2017-12-01

    Large-offset transform faults deform due to changes in plate motions and local processes. At the St. Paul transform, in the Equatorial Atlantic, a large body of ultramafic rocks composed of variably serpentinized and mylonitized peridotites is presently being tectonically uplifted. We recently discovered that the origin of the regional mantle uplift is linked to long-standing compressive stresses along the transform fault (1). A positive flower structure, mainly made of mylonitized mantle rocks, can be recognized along the 200 km push-up ridge. Compressive earthquake mechanisms reveal seismically active thrust faults on the southern flank of the ridge. The regional transpressive stress field affects a large portion of the ridge segment south of the transform, as revealed by the presence of faults and dykes striking obliquely to the direction of the central ridge axis. A smaller thrust, affecting recent sediments, was mapped south of this segment, suggesting a regionally active compressive stress field. The transpressive stress field is interpreted to derive from the propagation of the Mid-Atlantic Ridge (MAR) segment into the transform domain as a response to the enhanced melt supply at the ridge axis. The propagation forced the southward migration and segmentation of the transform fault and the formation of restraining step-overs. The process started after a counterclockwise change in plate motion at 11 Ma, which initially resulted in extension of the transform domain. A flexural transverse ridge formed in response. Shortly after plate reorganization, the MAR segment started to propagate southwards due to the interaction of the ridge and the Sierra Leone thermal anomaly. (1) Maia et al., 2016. Extreme mantle uplift and exhumation along a transpressive transform fault. Nature Geoscience, doi:10.1038/ngeo2759.

  4. Novel Coupled Thermochronometric and Geochemical Investigation of Blind Geothermal Resources in Fault-Controlled Dilational Corners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stockli, Daniel

    Geothermal plays in extensional and transtensional tectonic environments have long been a major target in the exploration of geothermal resources, and the Dixie Valley area has served as a classic natural laboratory for this type of geothermal play. In recent years, the interactions between normal faults and strike-slip faults acting as strain relay zones have attracted significant interest in geothermal exploration, as they commonly result in fault-controlled dilational corners with enhanced fracture permeability and thus have the potential to host blind geothermal prospects. Structural ambiguity, complications in fault linkage, and similar factors often make the selection of geothermal exploration drilling targets complicated and risky. Though simplistic, the three main ingredients of a viable utility-grade geothermal resource are heat, fluids, and permeability. Our new geological mapping and fault kinematic analysis yield a structural model suggesting a two-stage structural evolution, with (a) middle Miocene N-S trending normal faults (faults cutting across the modern range) tilting Oligo-Miocene volcanic and sedimentary sequences (similar in style to the East Range and S Stillwater Range), and (b) NE-trending range-front normal faulting initiated during the Pliocene, which both truncates the N-S trending normal faults and reactivates some of the former normal faults in a right-lateral fashion. Thus the two main fundamental differences from previous structural models are that (1) the N-S trending faults are pre-existing middle Miocene normal faults and (2) these faults are reactivated in a right-lateral fashion (NOT left-lateral) and are kinematically linked to the younger NE-trending range-bounding normal faults (Pliocene in age). More importantly, this study provides the first constraints on transient fluid flow through the novel application of apatite (U-Th)/He (AHe) and 4He/3He thermochronometry in the geothermally active Dixie Valley area in Nevada.

  5. Fault-zone structure and weakening processes in basin-scale reverse faults: The Moonlight Fault Zone, South Island, New Zealand

    NASA Astrophysics Data System (ADS)

    Alder, S.; Smith, S. A. F.; Scott, J. M.

    2016-10-01

    The >200 km long Moonlight Fault Zone (MFZ) in southern New Zealand was an Oligocene basin-bounding normal fault zone that reactivated in the Miocene as a high-angle reverse fault (present dip angle 65°-75°). Regional exhumation in the last c. 5 Ma has resulted in deep exposures of the MFZ that present an opportunity to study the structure and deformation processes that were active in a basin-scale reverse fault at basement depths. Syn-rift sediments are preserved only as thin fault-bound slivers. The hanging wall and footwall of the MFZ are mainly greenschist facies quartzofeldspathic schists that have a steeply-dipping (55°-75°) foliation subparallel to the main fault trace. In more fissile lithologies (e.g. greyschists), hanging-wall deformation occurred by the development of foliation-parallel breccia layers up to a few centimetres thick. Greyschists in the footwall deformed mainly by folding and formation of tabular, foliation-parallel breccias up to 1 m wide. Where the hanging-wall contains more competent lithologies (e.g. greenschist facies metabasite) it is laced with networks of pseudotachylyte that formed parallel to the host rock foliation in a damage zone extending up to 500 m from the main fault trace. The fault core contains an up to 20 m thick sequence of breccias, cataclasites and foliated cataclasites preserving evidence for the progressive development of interconnected networks of (partly authigenic) chlorite and muscovite. Deformation in the fault core occurred by cataclasis of quartz and albite, frictional sliding of chlorite and muscovite grains, and dissolution-precipitation. Combined with published friction and permeability data, our observations suggest that: 1) host rock lithology and anisotropy were the primary controls on the structure of the MFZ at basement depths and 2) high-angle reverse slip was facilitated by the low frictional strength of fault core materials. Restriction of pseudotachylyte networks to the hanging-wall of the

  6. AGSM Functional Fault Models for Fault Isolation Project

    NASA Technical Reports Server (NTRS)

    Harp, Janicce Leshay

    2014-01-01

    This project implements functional fault models to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  7. Seismic images and fault relations of the Santa Monica thrust fault, West Los Angeles, California

    USGS Publications Warehouse

    Catchings, R.D.; Gandhok, G.; Goldman, M.R.; Okaya, D.

    2001-01-01

    In May 1997, the US Geological Survey (USGS) and the University of Southern California (USC) acquired high-resolution seismic reflection and refraction images on the grounds of the Wadsworth Veterans Administration Hospital (WVAH) in the city of Los Angeles (Fig. 1a,b). The objective of the seismic survey was to better understand the near-surface geometry and faulting characteristics of the Santa Monica fault zone. In this report, we present seismic images, an interpretation of those images, and a comparison of our results with results from studies by Dolan and Pratt (1997), Pratt et al. (1998) and Gibbs et al. (2000). The Santa Monica fault is one of several northeast-southwest-trending, north-dipping, reverse faults that extend through the Los Angeles metropolitan area (Fig. 1a). Through much of the area, the Santa Monica fault trends subparallel to the Hollywood fault, but the two faults apparently join into a single fault zone to the southwest and to the northeast (Dolan et al., 1995). The Santa Monica and Hollywood faults may be part of a larger fault system that extends from the Pacific Ocean to the Transverse Ranges. Crook et al. (1983) refer to this fault system as the Malibu Coast-Santa Monica-Raymond-Cucamonga fault system. They suggest that these faults have not formed a contiguous zone since the Pleistocene and conclude that each of the faults should be treated as a separate fault with respect to seismic hazards. However, Dolan et al. (1995) suggest that the Hollywood and Santa Monica faults are capable of generating Mw 6.8 and Mw 7.0 earthquakes, respectively. Thus, regardless of whether the overall fault system is connected and capable of rupturing in one event, individually, each of the faults presents a sizable earthquake hazard to the Los Angeles metropolitan area. If, however, these faults are connected, and they were to rupture along a continuous fault rupture, the resulting hazard would be even greater. Although the Santa Monica fault represents

  8. A New Look at Spreading in Iceland: Propagating Rifts, Migrating Transform Faults, and Microplate Tectonics

    NASA Astrophysics Data System (ADS)

    Karson, J.; Horst, A. J.; Nanfito, A.

    2011-12-01

    Iceland has long been used as an analog for studies of seafloor spreading. Despite its thick (~25 km) oceanic crust and subaerial lavas, many features associated with accretion along mid-ocean ridge spreading centers, and the processes that generate them, are well represented in the actively spreading Neovolcanic Zone and deeply glaciated Tertiary crust that flanks it. Integrated results of structural and geodetic studies show that the plate boundary zone on Iceland is a complex array of linked structures bounding major crustal blocks or microplates, similar to oceanic microplates. Major rift zones propagate N and S from the hotspot centered beneath the Vatnajökull icecap in SE central Iceland. The southern propagator has extended southward beyond the South Iceland Seismic Zone transform fault to the Westman Islands, resulting in abandonment of the Eastern Rift Zone. Continued propagation may cause abandonment of the Reykjanes Ridge. The northern propagator is linked to the southern end of the receding Kolbeinsey Ridge to the north. The NNW-trending Kerlingar Pseudo-fault bounds the propagator system to the E. The Tjörnes Transform Fault links the propagator tip to the Kolbeinsey Ridge and appears to be migrating northward in incremental steps, leaving a swath of deformed crustal blocks in its wake. Block rotations, concentrated mainly to the west of the propagators, are clockwise to the N of the hotspot and counter-clockwise to the S, possibly resulting in a component of NS divergence across EW-oriented rift zones. These rotations may help accommodate adjustments of the plate boundary zone to the relative movements of the N American and Eurasian plates. The rotated crustal blocks are composed of highly anisotropic crust with rift-parallel internal fabric generated by spreading processes. Block rotations result in reactivation of spreading-related faults as major rift-parallel, strike-slip faults. Structural details found in Iceland can help provide information

  9. Transition from strike-slip faulting to oblique subduction: active tectonics at the Puysegur Margin, South New Zealand

    NASA Astrophysics Data System (ADS)

    Lamarche, Geoffroy; Lebrun, Jean-Frédéric

    2000-01-01

    South of New Zealand the Pacific-Australia (PAC-AUS) plate boundary runs along the intracontinental Alpine Fault, the Puysegur subduction front and the intraoceanic Puysegur Fault. The Puysegur Fault is located along Puysegur Ridge, which terminates at ca. 47°S against the continental Puysegur Bank in a complex zone of deformation called the Snares Zone. At Puysegur Trench, the Australian Plate subducts beneath Puysegur Bank and the Fiordland Massif. East of Fiordland and Puysegur Bank, the Moonlight Fault System (MFS) represents the Eocene strike-slip plate boundary. Interpretation of seafloor morphology and seismic reflection profiles acquired over Puysegur Bank and the Snares Zone allows study of the transition from intraoceanic strike-slip faulting along the Puysegur Ridge to oblique subduction at the Puysegur Trench and to better understand the genetic link between the Puysegur Fault and the MFS. Seafloor morphology is interpreted from a bathymetric dataset compiled from swath bathymetry data acquired during the 1993 Geodynz survey, and single beam echo soundings acquired by the NZ Royal Navy. The Snares Zone is the key transition zone from strike-slip faulting to subduction. It divides into three sectors, namely East, NW and SW sectors. A conspicuous 3600 m-deep trough (the Snares Trough) separates the NW and East sectors. The East sector is characterised by the NE termination of Puysegur Ridge into right-stepping en echelon ridges that accommodate a change of strike from the Puysegur Fault to the MFS. Between 48°S and 47°S, in the NW sector and the Snares Trough, a series of transpressional faults splay northwards from the Puysegur Fault. Between 49°50'S and 48°S, thrusts develop progressively at Puysegur Trench into a decollement. North of 48°S the Snares Trough develops between two splays of the Puysegur Fault, indicating superficial extension associated with the subsidence of Puysegur Ridge. Seismic reflection profiles and bathymetric maps show a

  10. Model-based fault detection and isolation for intermittently active faults with application to motion-based thruster fault detection and isolation for spacecraft

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2008-01-01

    The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The model is used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when such faults, if present, would affect the actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.
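
    As a rough illustration of the residual-based idea described above, the sketch below compares measurements against a no-fault model prediction and flags a fault when the deviation persists; it is not the patented algorithm, and the threshold, persistence count, and smoothed-derivative helper are illustrative assumptions.

```python
import numpy as np

def detect_fault(measured, predicted, threshold=0.5, persistence=3):
    """Flag sample indices where the measurement deviates from the no-fault
    model prediction by more than `threshold` for `persistence` consecutive
    samples (values are illustrative, not from the patent)."""
    residual = np.abs(np.asarray(measured) - np.asarray(predicted))
    flags, run = [], 0
    for i, r in enumerate(residual):
        run = run + 1 if r > threshold else 0
        if run >= persistence:
            flags.append(i)
    return flags

def smoothed_derivative(signal, dt, window=5):
    """Very simple derivative estimate for a noisy signal: moving-average
    smoothing followed by a finite difference (a stand-in for the filtering
    method mentioned in the abstract)."""
    smoothed = np.convolve(signal, np.ones(window) / window, mode="same")
    return np.gradient(smoothed, dt)
```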

  11. Cumulative co-seismic fault damage and feedbacks on earthquake rupture

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Aben, F. M.; Ostermeijer, G.; Rockwell, T. K.; Doan, M. L.

    2017-12-01

    The importance of the damage zone in the faulting and earthquake process is widely recognized, but our understanding of how damage zones are created, what their properties are, and how they feed back into the seismic cycle is remarkably poor. Firstly, damaged rocks have reduced elastic moduli, cohesion and yield strength, which can cause attenuation and potentially non-linear wave propagation effects during ruptures. Secondly, damaged fault rocks are generally more permeable than intact rocks, and hence play a key role in the migration of fluids in and around fault zones over the seismic cycle. Finally, the dynamic generation of damage as the earthquake propagates can itself influence the dynamics of rupture propagation, by increasing the amount of energy dissipation, decreasing the rupture velocity, modifying the size of the earthquake, changing the efficiency of weakening mechanisms such as thermal pressurisation of pore fluids, and even generating seismic waves itself. All of these effects imply that a feedback exists between the damage imparted immediately after rupture propagation, at the early stages of fault slip, and the effects of that damage on subsequent rupture dynamics. In recent years, much debate has been sparked by the identification of so-called 'pulverized rocks' described on various crustal-scale faults, a type of intensely damaged fault rock which has undergone minimal shear strain, and the occurrence of which has been linked to damage induced by transient high strain-rate stress perturbations during earthquake rupture. Damage induced by such transient stresses, whether compressional or tensional, likely constitutes heterogeneous modulations of the remote stresses that will impart significant changes on the strength, elastic and fluid flow properties of a fault zone immediately after rupture propagation, at the early stage of fault slip. In this contribution, we will demonstrate laboratory and field examples of two dynamic mechanisms

  12. Methodology for earthquake rupture rate estimates of fault networks: example for the western Corinth rift, Greece

    NASA Astrophysics Data System (ADS)

    Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien

    2017-10-01

    Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes into account this possibility by considering a system-level approach rather than an individual-fault-level approach, using the geological, seismological and geodetic information to invert the earthquake rates. In many places of the world, seismological and geodetic information along fault networks is often not well constrained. There is therefore a need to propose a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures and consider single faults or FtF ruptures as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed following two constraints: the magnitude frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the specific slip rate of each segment depending on the possible FtF ruptures. The modeled earthquake rates are then compared to the available independent data (geodetic, seismological and paleoseismological data) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advancements have been made in the understanding of the geological slip rates of the complex network of normal faults which are accommodating the ~15 mm yr-1 north
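
    To make the slip-rate and MFD constraints concrete, the following sketch distributes a single fault's geologic moment rate over an a priori Gutenberg-Richter magnitude-frequency shape; it ignores fault-to-fault ruptures and uses assumed values for the shear modulus, b-value, and fault dimensions, so it only illustrates the bookkeeping, not the published inversion.

```python
import numpy as np

MU = 3.0e10  # shear modulus in Pa (assumed)

def gr_rates_from_slip(area_m2, slip_rate_m_yr, mags, b_value=1.0):
    """Annual earthquake rates per magnitude bin such that (1) relative rates
    follow a Gutenberg-Richter distribution with slope `b_value` and (2) the
    summed moment release matches the fault's geologic moment rate."""
    moment_rate = MU * area_m2 * slip_rate_m_yr            # N*m per year
    bin_moments = 10.0 ** (1.5 * mags + 9.05)              # Hanks-Kanamori moment
    weights = 10.0 ** (-b_value * mags)
    weights /= weights.sum()
    scale = moment_rate / np.sum(weights * bin_moments)
    return weights * scale

# e.g. a 20 km x 15 km fault slipping at 3 mm/yr, magnitude bins from 5.0 to 7.0
mags = np.arange(5.0, 7.05, 0.1)
print(gr_rates_from_slip(20e3 * 15e3, 0.003, mags).sum(), "events/yr in total")
```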

  13. Mid Carboniferous lamprophyres, Cobequid Fault Zone, eastern Canada, linked to sodic granites, voluminous gabbro, and albitization

    NASA Astrophysics Data System (ADS)

    Pe-Piper, Georgia; Piper, David J. W.; Papoutsa, Angeliki

    2018-01-01

    Major intra-continental shear zones developed during the later stages of continental collision in a back-arc setting are sites of prolonged magmatism. Mantle metasomatism results from both melting of subducted sediments and oceanic crust. In the Cobequid Fault Zone of the northern Appalachians, back-arc A-type granites and gabbros dated ca. 360 Ma are locally intruded by lamprophyric dykes dated ca. 335 Ma. All the lamprophyres are kersantites with biotite and albite, lesser ilmenite, titanite and fluorapatite, and minor magmatic calcite, allanite, pyrite, magnetite, quartz and K-feldspar in some samples. The lamprophyres show enrichment in Rb, Ba, K, Th and REE and classify as calc-alkaline lamprophyre on the basis of biotite and whole rock chemistry. Pb isotopes lie on a mixing line between normal mantle-derived gabbro and OIB magma. Nd isotopes range from 1.3-3.5 εNdt, a little lower than in local gabbro. Most lamprophyres have δ18O = 3.8-4.4‰. Country rock is cut by pyrite-(Mg)-chlorite veins with euhedral allanite crystals that resemble the lamprophyres mineralogically, with the Mg-chlorite representing chloritized glass. Early Carboniferous unenriched mafic dykes and minor volcanic rocks are widespread along the major active strike-slip fault zones. The lamprophyres are geographically restricted to within 10 km of a small granitoid pluton with some sodic amphibole and widespread albitization. This was displaced by early Carboniferous strike-slip faulting from its original position close to the large Wentworth Pluton, the site of mantle-derived sodic amphibole granite, a major late gabbro pluton, and a volcanic carapace several kilometres thick, previously demonstrated to be the site of mantle upwelling and metasomatism. The age of the lamprophyres implies that enriched source material in upper lithospheric mantle or lower crust was displaced 50 km by crustal scale strike-slip faulting after enrichment by the mantle upwelling before lamprophyre emplacement

  14. Late Quaternary slip history of the Mill Creek strand of the San Andreas fault in San Gorgonio Pass, southern California: The role of a subsidiary left-lateral fault in strand switching

    USGS Publications Warehouse

    Kendrick, Katherine J.; Matti, Jonathan; Mahan, Shannon

    2015-01-01

    The fault history of the Mill Creek strand of the San Andreas fault (SAF) in the San Gorgonio Pass region, along with the reconstructed geomorphology surrounding this fault strand, reveals the important role of the left-lateral Pinto Mountain fault in the regional fault strand switching. The Mill Creek strand has 7.1–8.7 km total slip. Following this displacement, the Pinto Mountain fault offset the Mill Creek strand 1–1.25 km, as SAF slip transferred to the San Bernardino, Banning, and Garnet Hill strands. An alluvial complex within the Mission Creek watershed can be linked to palinspastic reconstruction of drainage segments to constrain slip history of the Mill Creek strand. We investigated surface remnants through detailed geologic mapping, morphometric and stratigraphic analysis, geochronology, and pedogenic analysis. The degree of soil development constrains the duration of surface stability when correlated to other regional, independently dated pedons. This correlation indicates that the oldest surfaces are significantly older than 500 ka. Luminescence dates of 106 ka and 95 ka from (respectively) 5 and 4 m beneath a younger fan surface are consistent with age estimates based on soil-profile development. Offset of the Mill Creek strand by the Pinto Mountain fault suggests a short-term slip rate of ∼10–12.5 mm/yr for the Pinto Mountain fault, and a lower long-term slip rate. Uplift of the Yucaipa Ridge block during the period of Mill Creek strand activity is consistent with thermochronologic modeled uplift estimates.

  15. Frictional heterogeneities on carbonate-bearing normal faults: Insights from the Monte Maggio Fault, Italy

    NASA Astrophysics Data System (ADS)

    Carpenter, B. M.; Scuderi, M. M.; Collettini, C.; Marone, C.

    2014-12-01

    Observations of heterogeneous and complex fault slip are often attributed to the complexity of fault structure and/or spatial heterogeneity of fault frictional behavior. Such complex slip patterns have been observed for earthquakes on normal faults throughout central Italy, where many of the Mw 6 to 7 earthquakes in the Apennines nucleate at depths where the lithology is dominated by carbonate rocks. To explore the relationship between fault structure and heterogeneous frictional properties, we studied the exhumed Monte Maggio Fault, located in the northern Apennines. We collected intact specimens of the fault zone, including the principal slip surface and hanging wall cataclasite, and performed experiments at a normal stress of 10 MPa under saturated conditions. Experiments designed to reactivate slip between the cemented principal slip surface and cataclasite show a 3 MPa stress drop as the fault surface fails, then velocity-neutral frictional behavior and significant frictional healing. Overall, our results suggest that (1) earthquakes may readily nucleate in areas of the fault where the slip surface separates massive limestone and are likely to propagate in areas where fault gouge is in contact with the slip surface; (2) postseismic slip is more likely to occur in areas of the fault where gouge is present; and (3) high rates of frictional healing and low creep relaxation observed between solid fault surfaces could lead to significant aftershocks in areas of low stress drop.

  16. ETE: a python Environment for Tree Exploration.

    PubMed

    Huerta-Cepas, Jaime; Dopazo, Joaquín; Gabaldón, Toni

    2010-01-13

    Many bioinformatics analyses, ranging from gene clustering to phylogenetics, produce hierarchical trees as their main result. These are used to represent the relationships among different biological entities, thus facilitating their analysis and interpretation. A number of standalone programs are available that focus on tree visualization or that perform specific analyses on them. However, such applications are rarely suitable for large-scale surveys, in which a higher level of automation is required. Currently, many genome-wide analyses rely on tree-like data representation and hence there is a growing need for scalable tools to handle tree structures at large scale. Here we present the Environment for Tree Exploration (ETE), a python programming toolkit that assists in the automated manipulation, analysis and visualization of hierarchical trees. ETE libraries provide a broad set of tree handling options as well as specific methods to analyze phylogenetic and clustering trees. Among other features, ETE allows for the independent analysis of tree partitions, has support for the extended newick format, provides an integrated node annotation system and permits linking trees to external data such as multiple sequence alignments or numerical arrays. In addition, ETE implements a number of built-in analytical tools, including phylogeny-based orthology prediction and cluster validation techniques. Finally, ETE's programmable tree drawing engine can be used to automate the graphical rendering of trees with customized node-specific visualizations. ETE provides a complete set of methods to manipulate tree data structures that extends current functionality in other bioinformatic toolkits of a more general purpose. ETE is free software and can be downloaded from http://ete.cgenomics.org.
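
    A minimal usage sketch of the toolkit described above is given below; the tree, traversal order, and output format are arbitrary examples, and the import path varies between ETE releases (ete2 for the version described in this paper, ete3 for later versions).

```python
# Minimal ETE sketch: build a tree from Newick, traverse it, and write it out.
# The import path differs across releases (ete2 in the original paper, ete3 later).
from ete3 import Tree

t = Tree("((A:1,B:1)Internal1:0.5,(C:1,D:1)Internal2:0.5)Root;", format=1)

# Post-order traversal, printing each leaf name and its distance from the root.
for node in t.traverse("postorder"):
    if node.is_leaf():
        print(node.name, t.get_distance(node))

# Export the tree back to Newick, keeping internal node names.
print(t.write(format=1))
```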

  17. ETE: a python Environment for Tree Exploration

    PubMed Central

    2010-01-01

    Background Many bioinformatics analyses, ranging from gene clustering to phylogenetics, produce hierarchical trees as their main result. These are used to represent the relationships among different biological entities, thus facilitating their analysis and interpretation. A number of standalone programs are available that focus on tree visualization or that perform specific analyses on them. However, such applications are rarely suitable for large-scale surveys, in which a higher level of automation is required. Currently, many genome-wide analyses rely on tree-like data representation and hence there is a growing need for scalable tools to handle tree structures at large scale. Results Here we present the Environment for Tree Exploration (ETE), a python programming toolkit that assists in the automated manipulation, analysis and visualization of hierarchical trees. ETE libraries provide a broad set of tree handling options as well as specific methods to analyze phylogenetic and clustering trees. Among other features, ETE allows for the independent analysis of tree partitions, has support for the extended newick format, provides an integrated node annotation system and permits linking trees to external data such as multiple sequence alignments or numerical arrays. In addition, ETE implements a number of built-in analytical tools, including phylogeny-based orthology prediction and cluster validation techniques. Finally, ETE's programmable tree drawing engine can be used to automate the graphical rendering of trees with customized node-specific visualizations. Conclusions ETE provides a complete set of methods to manipulate tree data structures that extends current functionality in other bioinformatic toolkits of a more general purpose. ETE is free software and can be downloaded from http://ete.cgenomics.org. PMID:20070885

  18. Late Quaternary faulting along the Death Valley-Furnace Creek fault system, California and Nevada

    USGS Publications Warehouse

    Brogan, George E.; Kellogg, Karl; Slemmons, D. Burton; Terhune, Christina L.

    1991-01-01

    The Death Valley-Furnace Creek fault system, in California and Nevada, has a variety of impressive late Quaternary neotectonic features that record a long history of recurrent earthquake-induced faulting. Although no neotectonic features of unequivocal historical age are known, paleoseismic features from multiple late Quaternary events of surface faulting are well developed throughout the length of the system. Comparison of scarp heights to amount of horizontal offset of stream channels and the relationships of both scarps and channels to the ages of different geomorphic surfaces demonstrate that Quaternary faulting along the northwest-trending Furnace Creek fault zone is predominantly right lateral, whereas that along the north-trending Death Valley fault zone is predominantly normal. These observations are compatible with tectonic models of Death Valley as a northwest-trending pull-apart basin. The largest late Quaternary scarps along the Furnace Creek fault zone, with vertical separation of late Pleistocene surfaces of as much as 64 m (meters), are in Fish Lake Valley. Despite the predominance of normal faulting along the Death Valley fault zone, vertical offset of late Pleistocene surfaces along the Death Valley fault zone apparently does not exceed about 15 m. Evidence for four to six separate late Holocene faulting events along the Furnace Creek fault zone and three or more late Holocene events along the Death Valley fault zone are indicated by rupturing of Q1B (about 200-2,000 years old) geomorphic surfaces. Probably the youngest neotectonic feature observed along the Death Valley-Furnace Creek fault system, possibly historic in age, is vegetation lineaments in southernmost Fish Lake Valley. Near-historic faulting in Death Valley, within several kilometers south of Furnace Creek Ranch, is represented by (1) a 2,000-year-old lake shoreline that is cut by sinuous scarps, and (2) a system of young scarps with free-faceted faces (representing several faulting

  19. Audio-frequency magnetotelluric imaging of the Hijima fault, Yamasaki fault system, southwest Japan

    NASA Astrophysics Data System (ADS)

    Yamaguchi, S.; Ogawa, Y.; Fuji-Ta, K.; Ujihara, N.; Inokuchi, H.; Oshiman, N.

    2010-04-01

    An audio-frequency magnetotelluric (AMT) survey was undertaken at ten sites along a transect across the Hijima fault, a major segment of the Yamasaki fault system, Japan. The data were subjected to dimensionality analysis, following which two-dimensional inversions for the TE and TM modes were carried out. The resulting resistivity model is characterized by (1) a clear resistivity boundary that coincides with the downward projection of the surface trace of the Hijima fault, (2) a resistive zone (>500 Ω m) that corresponds to Mesozoic sediment, and (3) two highly conductive zones (30-40 Ω m) along the fault, one shallow and one deep. The shallow conductive zone is a common feature of the Yamasaki fault system, whereas the deep conductor is a newly discovered feature at depths of 800-1,800 m to the southwest of the fault. The conductor is truncated by the Hijima fault to the northeast and is bounded above by the resistive zone. Both conductors are interpreted to represent a combination of clay minerals and a fluid network within a fault-related fracture zone. In terms of the development of the fluid networks, the fault core of the Hijima fault and the highly resistive zone may play important roles as barriers to fluid flow on the northeast and upper sides of the conductive zones, respectively.

  20. Quantifying Vertical Exhumation in Intracontinental Strike-Slip Faults: the Garlock fault zone, southern California

    NASA Astrophysics Data System (ADS)

    Chinn, L.; Blythe, A. E.; Fendick, A.

    2012-12-01

    New apatite fission-track ages show varying rates of vertical exhumation at the eastern terminus of the Garlock fault zone. The Garlock fault zone is a 260 km long east-northeast striking strike-slip fault with as much as 64 km of sinistral offset. The Garlock fault zone terminates in the east in the Avawatz Mountains, at the intersection with the dextral Southern Death Valley fault zone. Although motion along the Garlock fault west of the Avawatz Mountains is considered purely strike-slip, uplift and exhumation of bedrock in the Avawatz Mountains south of the Garlock fault, as recently as 5 Ma, indicates that transpression plays an important role at this location and is perhaps related to a restricting bend as the fault wraps around and terminates southeastward along the Avawatz Mountains. In this study we complement extant thermochronometric ages from within the Avawatz core with new low temperature fission-track ages from samples collected within the adjacent Garlock and Southern Death Valley fault zones. These thermochronometric data indicate that vertical exhumation rates vary within the fault zone. Two Miocene ages (10.2 (+5.0/-3.4) Ma, 9.0 (+2.2/-1.8) Ma) indicate at least ~3.3 km of vertical exhumation at ~0.35 mm/yr, assuming a 30°C/km geothermal gradient, along a 2 km transect parallel and adjacent to the Mule Spring fault. An older Eocene age (42.9 (+8.7/-7.3) Ma) indicates ~3.3 km of vertical exhumation at ~0.08 mm/yr. These results are consistent with published exhumation rates of 0.35 mm/yr between ~7 and ~4 Ma and 0.13 mm/yr between ~15 and ~9 Ma, as determined by apatite fission-track and U-Th/He thermochronometry in the hanging-wall of the Mule Spring fault. Similar exhumation rates on both sides of the Mule Spring fault support three separate models: 1) Thrusting is no longer active along the Mule Spring fault, 2) Faulting is dominantly strike-slip at the sample locations, or 3) Miocene-present uplift and exhumation is below detection levels

  1. Aftershocks of the 2014 South Napa, California, Earthquake: Complex faulting on secondary faults

    USGS Publications Warehouse

    Hardebeck, Jeanne L.; Shelly, David R.

    2016-01-01

    We investigate the aftershock sequence of the 2014 MW6.0 South Napa, California, earthquake. Low-magnitude aftershocks missing from the network catalog are detected by applying a matched-filter approach to continuous seismic data, with the catalog earthquakes serving as the waveform templates. We measure precise differential arrival times between events, which we use for double-difference event relocation in a 3D seismic velocity model. Most aftershocks are deeper than the mainshock slip, and most occur west of the mapped surface rupture. While the mainshock coseismic and postseismic slip appears to have occurred on the near-vertical, strike-slip West Napa fault, many of the aftershocks occur in a complex zone of secondary faulting. Earthquake locations in the main aftershock zone, near the mainshock hypocenter, delineate multiple dipping secondary faults. Composite focal mechanisms indicate strike-slip and oblique-reverse faulting on the secondary features. The secondary faults were moved towards failure by Coulomb stress changes from the mainshock slip. Clusters of aftershocks north and south of the main aftershock zone exhibit vertical strike-slip faulting more consistent with the West Napa Fault. The northern aftershocks correspond to the area of largest mainshock coseismic slip, while the main aftershock zone is adjacent to the fault area that has primarily slipped postseismically. Unlike most creeping faults, the zone of postseismic slip does not appear to contain embedded stick-slip patches that would have produced on-fault aftershocks. The lack of stick-slip patches along this portion of the fault may contribute to the low productivity of the South Napa aftershock sequence.
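
    The sketch below illustrates the matched-filter step in its simplest single-channel form: a template waveform is slid along continuous data and a detection is declared where the normalized cross-correlation exceeds a MAD-based threshold. The threshold factor and pure-numpy implementation are assumptions for illustration; the published workflow stacks correlations over many stations and channels and then performs precise relative relocation.

```python
import numpy as np

def matched_filter_detect(continuous, template, mad_factor=8.0):
    """Single-channel matched filter: normalized cross-correlation of a
    template against continuous data, with detections where the correlation
    exceeds `mad_factor` times the median absolute deviation."""
    n = len(template)
    tpl = (template - template.mean()) / (template.std() + 1e-12)
    cc = np.empty(len(continuous) - n)
    for i in range(len(cc)):
        win = continuous[i:i + n]
        win = (win - win.mean()) / (win.std() + 1e-12)
        cc[i] = np.dot(tpl, win) / n          # values in [-1, 1]
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.where(cc >= mad_factor * mad)[0], cc
```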

  2. Off-fault tip splay networks: a genetic and generic property of faults indicative of their long-term propagation, and a major component of off-fault damage

    NASA Astrophysics Data System (ADS)

    Perrin, C.; Manighetti, I.; Gaudemer, Y.

    2015-12-01

    Faults grow over the long term by accumulating displacement and lengthening, i.e., propagating laterally. We use fault maps and fault propagation evidence available in the literature to examine geometrical relations between parent faults and off-fault splays. The population includes 47 worldwide crustal faults with lengths from millimeters to thousands of kilometers and of different slip modes. We show that fault splays form adjacent to any propagating fault tip, whereas they are absent at non-propagating fault ends. Independent of parent fault length, slip mode, context, etc., tip splay networks have a similar fan shape widening in the direction of long-term propagation, a similar relative length and width (~30 and ~10 % of parent fault length, respectively), and a similar range of mean angles to the parent fault (10-20°). Tip splays more commonly develop on one side only of the parent fault. We infer that tip splay networks are a genetic and a generic property of faults indicative of their long-term propagation. We suggest that they represent the most recent damage off the parent fault, formed during the most recent phase of fault lengthening. The scaling relation between parent fault length and width of tip splay network implies that damage zones enlarge as parent fault length increases. Elastic properties of host rocks might thus be modified at large distances away from a fault, up to 10% of its length. During an earthquake, a significant fraction of coseismic slip and stress is dissipated into the permanent damage zone that surrounds the causative fault. We infer that coseismic dissipation might occur away from a rupture zone as far as a distance of 10% of the length of its causative fault. Coseismic deformations and stress transfers might thus be significant in broad regions about principal rupture traces. This work has been published in Comptes Rendus Geoscience under doi:10.1016/j.crte.2015.05.002 (http://www.sciencedirect.com/science/article/pii/S1631071315000528).

  3. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    PubMed

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss of effectiveness. The upper bounds of stuck faults, bias faults and loss of effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of the online estimations of the bounds and a state-dependent function. The estimations are adjusted online to compensate automatically the actuator faults. The state-dependent function solved by using real system data helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Contaminant gradients in trees: Directional tree coring reveals boundaries of soil and soil-gas contamination with potential applications in vapor intrusion assessment

    USGS Publications Warehouse

    Wilson, Jordan L.; Samaranayake, V.A.; Limmer, Matthew A.; Schumacher, John G.; Burken, Joel G.

    2017-01-01

    Contaminated sites pose ecological and human-health risks through exposure to contaminated soil and groundwater. Whereas we can readily locate, monitor, and track contaminants in groundwater, it is harder to perform these tasks in the vadose zone. In this study, tree-core samples were collected at a Superfund site to determine if the sample-collection location around a particular tree could reveal the subsurface location, or direction, of soil and soil-gas contaminant plumes. Contaminant-centroid vectors were calculated from tree-core data to reveal contaminant distributions in directional tree samples at a higher resolution, and vectors were correlated with soil-gas characterization collected using conventional methods. Results clearly demonstrated that directional tree coring around tree trunks can indicate gradients in soil and soil-gas contaminant plumes, and the strength of the correlations was directly proportional to the magnitude of tree-core concentration gradients (Spearman's coefficients of -0.61 and -0.55 for soil and tree-core gradients, respectively). Linear regression indicates that agreement between the concentration-centroid vectors is significantly affected by in-planta and soil concentration gradients and when concentration centroids in soil are closer to trees. Given the existing link between soil-gas and vapor intrusion, this study also indicates that directional tree coring might be applicable in vapor intrusion assessment.
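
    As a simple illustration of the concentration-centroid idea, the sketch below computes a concentration-weighted mean direction from directional core samples taken around a trunk; the azimuths, concentrations, and exact weighting are illustrative and may differ from the published vector definition.

```python
import numpy as np

def concentration_centroid(azimuths_deg, concentrations):
    """Concentration-weighted mean direction (bearing, degrees clockwise from
    north) and its relative magnitude (0 = uniform, 1 = fully one-sided)."""
    az = np.radians(azimuths_deg)
    w = np.asarray(concentrations, dtype=float)
    east = np.sum(w * np.sin(az)) / w.sum()
    north = np.sum(w * np.cos(az)) / w.sum()
    bearing = (np.degrees(np.arctan2(east, north)) + 360.0) % 360.0
    return bearing, float(np.hypot(east, north))

# e.g. four cores (N, E, S, W) with hypothetical tree-core concentrations
print(concentration_centroid([0, 90, 180, 270], [12.0, 30.0, 5.0, 8.0]))
```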

  5. Contaminant Gradients in Trees: Directional Tree Coring Reveals Boundaries of Soil and Soil-Gas Contamination with Potential Applications in Vapor Intrusion Assessment.

    PubMed

    Wilson, Jordan L; Samaranayake, V A; Limmer, Matthew A; Schumacher, John G; Burken, Joel G

    2017-12-19

    Contaminated sites pose ecological and human-health risks through exposure to contaminated soil and groundwater. Whereas we can readily locate, monitor, and track contaminants in groundwater, it is harder to perform these tasks in the vadose zone. In this study, tree-core samples were collected at a Superfund site to determine if the sample-collection location around a particular tree could reveal the subsurface location, or direction, of soil and soil-gas contaminant plumes. Contaminant-centroid vectors were calculated from tree-core data to reveal contaminant distributions in directional tree samples at a higher resolution, and vectors were correlated with soil-gas characterization collected using conventional methods. Results clearly demonstrated that directional tree coring around tree trunks can indicate gradients in soil and soil-gas contaminant plumes, and the strength of the correlations was directly proportional to the magnitude of tree-core concentration gradients (Spearman's coefficients of -0.61 and -0.55 for soil and tree-core gradients, respectively). Linear regression indicates that agreement between the concentration-centroid vectors is significantly affected by in planta and soil concentration gradients and when concentration centroids in soil are closer to trees. Given the existing link between soil-gas and vapor intrusion, this study also indicates that directional tree coring might be applicable in vapor intrusion assessment.

  6. Porosity variations in and around normal fault zones: implications for fault seal and geomechanics

    NASA Astrophysics Data System (ADS)

    Healy, David; Neilson, Joyce; Farrell, Natalie; Timms, Nick; Wilson, Moyra

    2015-04-01

    Porosity forms the building blocks for permeability, exerts a significant influence on the acoustic response of rocks to elastic waves, and fundamentally influences rock strength. And yet, published studies of porosity around fault zones or in faulted rock are relatively rare, and are hugely dominated by those of fault zone permeability. We present new data from detailed studies of porosity variations around normal faults in sandstone and limestone. We have developed an integrated approach to porosity characterisation in faulted rock exploiting different techniques to understand variations in the data. From systematic samples taken across exposed normal faults in limestone (Malta) and sandstone (Scotland), we combine digital image analysis on thin sections (optical and electron microscopy), core plug analysis (He porosimetry) and mercury injection capillary pressures (MICP). Our sampling includes representative material from undeformed protoliths and fault rocks from the footwall and hanging wall. Fault-related porosity can produce anisotropic permeability with a 'fast' direction parallel to the slip vector in a sandstone-hosted normal fault. Undeformed sandstones in the same unit exhibit maximum permeability in a sub-horizontal direction parallel to lamination in dune-bedded sandstones. Fault-related deformation produces anisotropic pores and pore networks with long axes aligned sub-vertically and this controls the permeability anisotropy, even under confining pressures up to 100 MPa. Fault-related porosity also has interesting consequences for the elastic properties and velocity structure of normal fault zones. Relationships between texture, pore type and acoustic velocity have been well documented in undeformed limestone. We have extended this work to include the effects of faulting on carbonate textures, pore types and P- and S-wave velocities (Vp, Vs) using a suite of normal fault zones in Malta, with displacements ranging from 0.5 to 90 m. Our results show a

  7. On Identifiability of Bias-Type Actuator-Sensor Faults in Multiple-Model-Based Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.

    2012-01-01

    This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
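
    A toy, discrete-time analogue of the multiple-model idea is sketched below: several scalar Kalman filters each assume a different constant sensor bias, and the hypothesis whose innovations are smallest is selected. The system model, noise levels, and candidate bias values are illustrative assumptions; the paper's filters are continuous-time Kalman-Bucy filters applied to full actuator and sensor fault patterns.

```python
import numpy as np

def innovation_score(z, bias_hypothesis, p0=1e-4, q=1e-6, r=2.5e-3):
    """Mean squared innovation of a scalar Kalman filter that assumes the
    sensor carries the constant bias `bias_hypothesis`."""
    x, p, sq_sum = 0.0, p0, 0.0
    for zk in z:
        p = p + q                               # predict (state modelled as near-constant)
        innov = (zk - bias_hypothesis) - x      # bias-corrected innovation
        k = p / (p + r)
        x, p = x + k * innov, (1.0 - k) * p     # update
        sq_sum += innov ** 2
    return sq_sum / len(z)

rng = np.random.default_rng(0)
z = 0.0 + 0.3 + 0.05 * rng.standard_normal(300)  # true state 0, sensor bias 0.3

hypotheses = [0.0, 0.1, 0.2, 0.3, 0.4]
scores = [innovation_score(z, b) for b in hypotheses]
print("identified sensor bias:", hypotheses[int(np.argmin(scores))])
```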

  8. Net dextral slip, Neogene San Gregorio–Hosgri fault zone, coastal California: Geologic evidence and tectonic implications

    USGS Publications Warehouse

    Dickinson, William R.; Ducea, M.; Rosenberg, Lewis I.; Greene, H. Gary; Graham, Stephan A.; Clark, Joseph C.; Weber, Gerald E.; Kidder, Steven; Ernst, W. Gary; Brabb, Earl E.

    2005-01-01

    elongated its northern end and offset its western margin delineated by the older Nacimiento fault, a sinistral strike-slip fault of latest Cretaceous to Paleocene age. North of its juncture with the San Andreas fault, dextral slip along the San Gregorio–Hosgri fault augments net San Andreas displacement. Alternate restorations of the Gualala block imply that nearly half the net San Gregorio–Hosgri slip was accommodated along the offshore Gualala fault strand lying west of the Gualala block, which is bounded on the east by the current master trace of the San Andreas fault. With San Andreas and San Gregorio–Hosgri slip restored, there remains an unresolved proto–San Andreas mismatch of ∼100 km between the offset northern end of the Salinian block and the southern end of the Sierran-Tehachapi block.On the south, San Gregorio–Hosgri strike slip is transposed into crustal shortening associated with vertical-axis tectonic rotation of fault-bounded crustal panels that form the western Transverse Ranges, and with kinematically linked deformation within the adjacent Santa Maria basin. The San Gregorio–Hosgri fault serves as the principal link between transrotation in the western Transverse Ranges and strike slip within the San Andreas transform system of central California.

  9. A data driven approach for condition monitoring of wind turbine blade using vibration signals through best-first tree algorithm and functional trees algorithm: A comparative study.

    PubMed

    Joshuva, A; Sugumaran, V

    2017-03-01

    Wind energy is one of the important renewable energy resources available in nature. It is one of the major resources for energy production because of its dependability, owing to technological development and relatively low cost. Wind energy is converted into electrical energy using rotating blades. Due to environmental conditions and their large structure, the blades are subjected to various vibration forces that may cause damage to the blades. This creates a liability in energy production and can lead to turbine shutdown. The downtime can be reduced when the blades are diagnosed continuously using structural health condition monitoring. This is treated as a pattern recognition problem consisting of three phases, namely feature extraction, feature selection, and feature classification. In this study, statistical features were extracted from vibration signals, feature selection was carried out using a J48 decision tree algorithm, and feature classification was performed using the best-first tree algorithm and the functional trees algorithm. The better-performing algorithm is suggested for fault diagnosis of the wind turbine blade. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
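
    The classification phase can be sketched with a generic decision-tree learner; scikit-learn's DecisionTreeClassifier is used below as a stand-in for the J48, best-first tree, and functional tree algorithms named in the abstract (which are Weka implementations), and the synthetic "vibration features" are purely illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# Hypothetical statistical features (mean, std, kurtosis, ...) from vibration signals
healthy = rng.normal(0.0, 1.0, size=(100, 5))
faulty = rng.normal(0.8, 1.2, size=(100, 5))
X = np.vstack([healthy, faulty])
y = np.array([0] * 100 + [1] * 100)      # 0 = healthy blade, 1 = faulty blade

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```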

  10. Fault Injection Campaign for a Fault Tolerant Duplex Framework

    NASA Technical Reports Server (NTRS)

    Sacco, Gian Franco; Ferraro, Robert D.; von Allmen, Paul; Rennels, Dave A.

    2007-01-01

    Fault tolerance is an efficient approach for avoiding or reducing the damage caused by a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is software developed by the UCLA group [1, 2] that uses a fault-tolerant approach and allows two replicas of the same process to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas and restarts the two replica processes if an inconsistency in their computation is detected. This approach is very cost efficient and can be adopted to control processes on spacecraft where the fault rate produced by cosmic rays is not very high.
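
    A single-machine sketch of the duplex idea is given below: the same computation runs as two "replicas" and a monitor accepts the result only when the two agree, re-running both otherwise. The real DF distributes the replicas and the monitor across cluster nodes; the task, retry limit, and process-pool mechanics here are illustrative assumptions.

```python
from concurrent.futures import ProcessPoolExecutor

def replica_task(step):
    # Placeholder for the real computation performed by each replica.
    return sum(i * i for i in range(step))

def monitored_step(step, max_retries=3):
    """Run the task twice, accept the result only if both replicas agree,
    otherwise restart both replicas (up to `max_retries` times)."""
    for _ in range(max_retries):
        with ProcessPoolExecutor(max_workers=2) as pool:
            a, b = pool.map(replica_task, [step, step])
        if a == b:          # results agree -> accept
            return a
        # disagreement (e.g. a transient upset): fall through and re-run both
    raise RuntimeError("replicas kept disagreeing")

if __name__ == "__main__":
    print(monitored_step(10_000))
```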

  11. The San Andreas Fault and a Strike-slip Fault on Europa

    NASA Technical Reports Server (NTRS)

    1998-01-01

    The mosaic on the right of the south polar region of Jupiter's moon Europa shows the northern 290 kilometers (180 miles) of a strike-slip fault named Astypalaea Linea. The entire fault is about 810 kilometers (500 miles) long, the size of the California portion of the San Andreas fault on Earth which runs from the California-Mexico border north to the San Francisco Bay.

    The left mosaic shows the portion of the San Andreas fault near California's San Francisco Bay that has been scaled to the same size and resolution as the Europa image. Each covers an area approximately 170 by 193 kilometers (105 by 120 miles). The red line marks the once active central crack of the Europan fault (right) and the line of the San Andreas fault (left).

    A strike-slip fault is one in which two crustal blocks move horizontally past one another, similar to two opposing lanes of traffic. The overall motion along the Europan fault seems to have followed a continuous narrow crack along the entire length of the feature, with a path resembling steps on a staircase crossing zones which have been pulled apart. The images show that about 50 kilometers (30 miles) of displacement have taken place along the fault. Opposite sides of the fault can be reconstructed like a puzzle, matching the shape of the sides as well as older individual cracks and ridges that had been broken by its movements.

    Bends in the Europan fault have allowed the surface to be pulled apart. This pulling-apart along the fault's bends created openings through which warmer, softer ice from below Europa's brittle ice shell surface, or frozen water from a possible subsurface ocean, could reach the surface. This upwelling of material formed large areas of new ice within the boundaries of the original fault. A similar pulling apart phenomenon can be observed in the geological trough surrounding California's Salton Sea, and in Death Valley and the Dead Sea. In those cases, the pulled apart regions can include upwelled

  12. P-wave velocity structure offshore central Sumatra: implications for compressional and strike-slip faulting

    NASA Astrophysics Data System (ADS)

    Karplus, M.; Henstock, T.; McNeill, L. C.; Vermeesch, P. M. T.; Barton, P. J.

    2014-12-01

    The Sunda subduction zone features significant along-strike structural variability including changes in accretionary prism and forearc morphology. Some of these changes have been linked to changes in megathrust faulting styles, and some have been linked to other thrust and strike-slip fault systems across this obliquely convergent margin (~54-58 mm/yr convergence rate, 40-45 mm/yr subduction rate). We examine these structural changes in detail across central Sumatra, from Siberut to Nias Island, offshore Indonesia. In this area the Investigator Fracture Zone and the Wharton Fossil Ridge, features with significant topography, are being subducted, which may affect sediment thickness variation and margin morphology. We present new seismic refraction P-wave velocity models using marine seismic data collected during Sonne cruise SO198 in 2008. The experiment geometry consisted of 57 ocean bottom seismometers, 23 land seismometers, and over 10,000 air gun shots recorded along ~1750 km of profiles. About 130,000 P-wave first arrival refractions were picked, and the picks were inverted using FAST (First Arrivals Refraction Tomography) 3-D to give a velocity model, best-resolved in the top 25 km. Moho depths, crustal composition, prism geometry, slab dip, and upper and lower plate structures provide insight into the past and present tectonic processes at this plate boundary. We specifically examine the relationships between velocity structure and faulting locations/ styles. These observations have implications for strain-partitioning along the boundary. The Mentawai Fault, located west of the forearc basin in parts of Central Sumatra, has been interpreted variably as a backthrust, strike-slip, and normal fault. We integrate existing data to evaluate these hypotheses. Regional megathrust earthquake ruptures indicate plate boundary segmentation in our study area. The offshore forearc west of Siberut is almost aseismic, reflecting the locked state of the plate interface, which

  13. Complex Paleotopography and Faulting near the Elsinore Fault, Coyote Mountains, southern California

    NASA Astrophysics Data System (ADS)

    Brenneman, M. J.; Bykerk-Kauffman, A.

    2012-12-01

    The Coyote Mountains of southern California are bounded on the southwest by the Elsinore Fault, an active dextral fault within the San Andreas Fault zone. According to Axen and Fletcher (1998) and Dorsey and others (2011), rocks exposed in these mountains comprise a portion of the hanging wall of the east-vergent Salton Detachment Fault, which was active from the late Miocene-early Pliocene to Ca. 1.1-1.3 Ma. Detachment faulting was accompanied by subsidence, resulting in deposition of a thick sequence of marine and nonmarine sedimentary rocks. Regional detachment faulting and subsidence ceased with the inception of the Elsinore Fault, which has induced uplift of the Coyote Mountains. Detailed geologic mapping in the central Coyote Mountains supports the above interpretation and adds some intriguing details. New discoveries include a buttress unconformity at the base of the Miocene/Pliocene section that locally cuts across strata at an angle so high that it could be misinterpreted as a fault. We thus conclude that the syn-extension strata were deposited on a surface with very rugged topography. We also discovered that locally-derived nonmarine gravel deposits exposed near the crest of the range, previously interpreted as part of the Miocene Split Mountain Group by Winker and Kidwell (1996), unconformably overlie units of the marine Miocene/Pliocene Imperial Group and must therefore be Pliocene or younger. The presence of such young gravel deposits on the crest of the range provides evidence for its rapid uplift. Additional new discoveries flesh out details of the structural history of the range. We mapped just two normal faults, both of which were relatively minor, thus supporting Axen and Fletcher's assertion that the hanging wall block of the Salton Detachment Fault had not undergone significant internal deformation during extension. We found abundant complex synthetic and antithetic strike-slip faults throughout the area, some of which offset Quaternary alluvial

  14. Fault Interaction and Stress Accumulation in Chaman Fault System, Balouchistan, Pakistan, Since 1892

    NASA Astrophysics Data System (ADS)

    Riaz, M. S.; Shan, B.; Xiong, X.; Xie, Z.

    2017-12-01

    The curved, left-lateral Chaman fault, approximately 1000 km long, forms the western boundary of the Indian plate. The Chaman fault is an active fault and also the locus of many catastrophic earthquakes. Since the inception of strike-slip movement at 20-25 Ma along the western collision boundary between the Indian and Eurasian plates, the average geologically constrained slip rate of 24 to 35 mm/yr accounts for a total displacement of 460±10 km along the Chaman fault system (Beun et al., 1979; Lawrence et al., 1992). Based on earthquake triggering theory, the change in Coulomb failure stress (ΔCFS) either retards (stress shadow) or advances (positive stress) the occurrence of subsequent earthquakes. Several major earthquakes have occurred in the Chaman fault system, yet the region remains poorly studied in terms of earthquake/fault interaction and hazard assessment. To address this, we analyzed the earthquake catalog and collected significant earthquakes with M ≥ 6.2 since 1892. We then compute the evolution of ΔCFS in the Chaman fault system by integrating coseismic static and postseismic viscoelastic relaxation stress transfer since 1892, using the code PSGRN/PSCMP (Wang et al., 2006). For the postseismic stress transfer simulation, we adopted a linear Maxwell rheology to calculate the viscoelastic effects. Our results indicate that three out of the four earthquakes were triggered by preceding earthquakes. The 1892 earthquake (Mw 6.8), which occurred on the northern segment of the Chaman fault, did not influence the 1935 earthquake, which occurred on the Ghazaband fault, a parallel fault 20 km east of the Chaman fault. The 1935 earthquake (Mw 7.7) significantly loaded both ends of its rupture with positive stress (ΔCFS ≥ 0.01 MPa), which later triggered the 1975 earthquake, with 23% of its rupture length where ΔCFS ≥ 0.01 MPa, on the Chaman fault, and the 1990 earthquake, with 58% of its rupture length where ΔCFS ≥ 0
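
    For reference, the Coulomb failure stress change on a receiver fault is commonly written as ΔCFS = Δτ + μ′Δσn, with Δτ the shear-stress change resolved in the slip direction, Δσn the normal-stress change (positive for unclamping), and μ′ an effective friction coefficient. The sketch below simply evaluates this expression with an assumed μ′ of 0.4 and illustrative stress values; it does not reproduce the PSGRN/PSCMP viscoelastic calculation used in the study.

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Change in Coulomb failure stress (MPa) on a receiver fault.
    d_shear_mpa: shear-stress change resolved in the slip direction.
    d_normal_mpa: normal-stress change, positive for unclamping (tension).
    mu_eff: effective friction coefficient (0.4 is a commonly assumed value)."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# e.g. +0.05 MPa shear loading with 0.02 MPa clamping on the receiver fault
dcfs = coulomb_stress_change(0.05, -0.02)
print(dcfs, "MPa,", "above" if dcfs >= 0.01 else "below", "the 0.01 MPa trigger level cited above")
```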

  15. Fluid involvement in normal faulting

    NASA Astrophysics Data System (ADS)

    Sibson, Richard H.

    2000-04-01

    Evidence of fluid interaction with normal faults comes from their varied role as flow barriers or conduits in hydrocarbon basins and as hosting structures for hydrothermal mineralisation, and from fault-rock assemblages in exhumed footwalls of steep active normal faults and metamorphic core complexes. These last suggest involvement of predominantly aqueous fluids over a broad depth range, with implications for fault shear resistance and the mechanics of normal fault reactivation. A general downwards progression in fault rock assemblages (high-level breccia-gouge (often clay-rich) → cataclasites → phyllonites → mylonite → mylonitic gneiss with the onset of greenschist phyllonites occurring near the base of the seismogenic crust) is inferred for normal fault zones developed in quartzo-feldspathic continental crust. Fluid inclusion studies in hydrothermal veining from some footwall assemblages suggest a transition from hydrostatic to suprahydrostatic fluid pressures over the depth range 3-5 km, with some evidence for near-lithostatic to hydrostatic pressure cycling towards the base of the seismogenic zone in the phyllonitic assemblages. Development of fault-fracture meshes through mixed-mode brittle failure in rock-masses with strong competence layering is promoted by low effective stress in the absence of thoroughgoing cohesionless faults that are favourably oriented for reactivation. Meshes may develop around normal faults in the near-surface under hydrostatic fluid pressures to depths determined by rock tensile strength, and at greater depths in overpressured portions of normal fault zones and at stress heterogeneities, especially dilational jogs. Overpressures localised within developing normal fault zones also determine the extent to which they may reutilise existing discontinuities (for example, low-angle thrust faults). Brittle failure mode plots demonstrate that reactivation of existing low-angle faults under vertical σ1 trajectories is only likely if

  16. Contradicting Estimates of Location, Geometry, and Rupture History of Highly Active Faults in Central Japan

    NASA Astrophysics Data System (ADS)

    Okumura, K.

    2011-12-01

    Accurate location and geometry of seismic sources are critical for estimating strong ground motion. A complete and precise rupture history is also critical for estimating the probability of future events. In order to better forecast future earthquakes and to reduce seismic hazards, we should consider all options and choose the most likely parameters. Multiple options for logic trees are acceptable only after thorough examination of contradicting estimates and should not result from easy compromise or suspension of judgment (epoché). In the process of preparation and revision of Japanese probabilistic and deterministic earthquake hazard maps by the Headquarters for Earthquake Research Promotion since 1996, many decisions were made to select plausible parameters, but many contradicting estimates have been left without thorough examination. There are several highly active faults in central Japan, such as the Itoigawa-Shizuoka Tectonic Line active fault system (ISTL), the West Nagano Basin fault system (WNBF), the Inadani fault system (INFS), and the Atera fault system (ATFS). The highest slip rate and the shortest recurrence interval are, respectively, ~1 cm/yr and 500 to 800 years, and the estimated maximum magnitude is 7.5 to 8.5. Those faults are very hazardous because almost the entire population and industry are located above the faults within tectonic depressions. As to fault location, most uncertainty arises from the interpretation of geomorphic features. Geomorphological interpretation without geological and structural insight often leads to wrong mapping. Though mapping a longer, non-existent fault may seem a safer estimate, such incorrectness harms the reliability of the forecast. It also does not greatly affect strong-motion estimates, but it is misleading for surface-displacement issues. Fault geometry, on the other hand, is very important for estimating the intensity distribution. For the middle portion of the ISTL, fast-moving left-lateral strike-slip up to 1 cm/yr is obvious. Recent seismicity possibly induced by 2011 Tohoku

  17. Implications of meso- to micro-scale deformation for fault sealing capacity: Insights from the Lenghu5 fold-and-thrust belt, Qaidam Basin, NE Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Xie, Liujuan; Pei, Yangwen; Li, Anren; Wu, Kongyou

    2018-06-01

    As faults can be barriers to or conduits for fluid flow, it is critical to understand fault seal processes and their effects on the sealing capacity of a fault zone. Apart from the stratigraphic juxtaposition between the hanging wall and footwall, the development of fault rocks is of great importance in changing the sealing capacity of a fault zone. Therefore, field-based structural analysis has been employed to identify meso-scale and micro-scale deformation features and to understand their effects on modifying the porosity of fault rocks. In this study, the Lenghu5 fold-and-thrust belt (northern Qaidam Basin, NE Tibetan Plateau), with well-exposed outcrops, was selected as an example for meso-scale outcrop mapping and SEM (Scanning Electron Microscope) micro-scale structural analysis. The detailed outcrop maps enabled us to link the samples with meso-scale fault architecture. The representative rock samples, collected in both the fault zones and the undeformed hanging walls/footwalls, were studied by SEM micro-structural analysis to identify the deformation features at the micro-scale and evaluate their influence on the fluid flow properties of the fault rocks. Based on the multi-scale structural analyses, the deformation mechanisms accounting for porosity reduction in the fault rocks have been identified: clay smearing, phyllosilicate-framework networking and cataclasis. The sealing capacity is highly dependent on the clay content: high concentrations of clay minerals in fault rocks are likely to form continuous clay smears or micro-clay smears between framework silicates, which can significantly decrease the porosity of the fault rocks. However, there is no direct link between the fault rocks and host rocks. Similar stratigraphic juxtapositions can generate fault rocks with very different magnitudes of porosity reduction. The resultant fault rocks can be predicted only when the fault throw is smaller than the thickness of a faulted bed, in

  18. EvolView, an online tool for visualizing, annotating and managing phylogenetic trees.

    PubMed

    Zhang, Huangkai; Gao, Shenghan; Lercher, Martin J; Hu, Songnian; Chen, Wei-Hua

    2012-07-01

    EvolView is a web application for visualizing, annotating and managing phylogenetic trees. First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html.

  19. Current microseismicity and generating faults in the Gyeongju area, southeastern Korea

    NASA Astrophysics Data System (ADS)

    Han, Minhui; Kim, Kwang-Hee; Son, Moon; Kang, Su Young

    2017-01-01

    A study of microseismicity in a 15 × 20 km2 subregion of Gyeongju, southeastern Korea, establishes a direct link between minor earthquakes and known fault structures. The study area has a complex history of tectonic deformation and has experienced large historic earthquakes, with small earthquakes recorded since the beginning of modern instrumental monitoring. From 5 years of continuously recorded local seismic data, 311 previously unidentified microearthquakes can be reliably located using the double-difference algorithm. These newly discovered events occur in linear streaks that can be spatially correlated with active faults, which could pose a serious hazard to nearby communities. At-risk infrastructure includes the largest industrial park in South Korea, nuclear power plants, and disposal facilities for radioactive waste. The current work suggests that the southern segment of the Yeonil Tectonic Line and segments of the Seokup and Waup Basin boundary faults are active. For areas with high rates of microseismic activity, reliably located hypocenters are spatially correlated with mapped faults; in less active areas, earthquake clusters tend to occur at fault intersections. Microearthquakes in stable continental regions are known to exist, but have been largely ignored in assessments of seismic hazard because their magnitudes are well below the detection thresholds of seismic networks. The total number of locatable microearthquakes could be dramatically increased by lowering the triggering thresholds of network detection algorithms. The present work offers an example of how microearthquakes can be reliably detected and located with advanced techniques. This could make it possible to create a new database to identify subsurface fault geometries and modes of fault movement, which could then be considered in the assessments of seismic hazard in regions where major earthquakes are rare.

  20. Fault connectivity, distributed shortening, and impacts on geologic- geodetic slip rate discrepancies in the central Mojave Desert, California

    NASA Astrophysics Data System (ADS)

    Selander, J.; Oskin, M. E.; Cooke, M. L.; Grette, K.

    2015-12-01

    Understanding off-fault deformation and the distribution of displacement rates associated with disconnected strike-slip faults requires a three-dimensional view of fault geometries. We address problems associated with distributed faulting by studying the Mojave segment of the East California Shear Zone (ECSZ), a region dominated by northwest-directed dextral shear along disconnected northwest-southeast striking faults. We use a combination of cross-sectional interpretations, 3D Boundary Element Method (BEM) models, and slip-rate measurements to test newly hypothesized fault connections. We find that reverse faulting acts as an important means of slip transfer between strike-slip faults, and show that the impacts of these structural connections on shortening, uplift, strike-slip rates, and off-fault deformation help to reconcile the overall strain budget across this portion of the ECSZ. In detail, we focus on the Calico and Blackwater faults, which together are hypothesized to represent the longest linked fault system in the Mojave ECSZ, connected by a restraining step at 35°N. Across this restraining step the system displays a pronounced displacement gradient, where dextral offset decreases from ~11.5 to <2 km from south to north. Cross-section interpretations show that ~40% of this displacement is transferred from the Calico fault to the Harper Lake and Blackwater faults via a set of north-dipping thrust ramps. Late Quaternary dextral slip rates follow a similar pattern, where 1.4 +0.8/-0.4 mm/yr of slip along the Calico fault south of 35°N is distributed to the Harper Lake, Blackwater, and Tin Can Alley faults. BEM model results using revised fault geometries for the Mojave ECSZ show areas of uplift consistent with contractional structures, and fault slip rates that more closely match geologic data. Overall, the revised fault connections and the addition of off-fault deformation greatly reduce the discrepancy between geodetic and geologic slip rates.

  1. Fault zone structure from topography: signatures of en echelon fault slip at Mustang Ridge on the San Andreas Fault, Monterey County, California

    USGS Publications Warehouse

    DeLong, Stephen B.; Hilley, George E.; Rymer, Michael J.; Prentice, Carol

    2010-01-01

    We used high-resolution topography to quantify the spatial distribution of scarps, linear valleys, topographic sinks, and oversteepened stream channels formed along an extensional step over on the San Andreas Fault (SAF) at Mustang Ridge, California. This location provides detail of both creeping fault landform development and complex fault zone kinematics. Here, the SAF creeps 10–14 mm/yr slower than at locations ∼20 km along the fault in either direction. This spatial change in creep rate is coincident with a series of en echelon oblique-normal faults that strike obliquely to the SAF and may accommodate the missing deformation. This study presents a suite of analyses that are helpful for proper mapping of faults in locations where high-resolution topographic data are available. Furthermore, our analyses indicate that two large subsidiary faults near the center of the step over zone appear to carry significant distributed deformation based on their large apparent vertical offsets, the presence of associated sag ponds and fluvial knickpoints, and the observation that they are rotating a segment of the main SAF. Several subsidiary faults in the southeastern portion of Mustang Ridge are likely less active; they have few associated sag ponds and have older scarp morphologic ages and subdued channel knickpoints. Several faults in the northwestern part of Mustang Ridge, though relatively small, are likely also actively accommodating active fault slip based on their young morphologic ages and the presence of associated sag ponds.

  2. Large earthquakes and creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  3. A novel KFCM based fault diagnosis method for unknown faults in satellite reaction wheels.

    PubMed

    Hu, Di; Sarosh, Ali; Dong, Yun-Feng

    2012-03-01

    Reaction wheels are among the most critical components of the satellite attitude control system; therefore, correct diagnosis of their faults is quintessential for efficient operation of these spacecraft. The known faults in any of the subsystems are often diagnosed by supervised learning algorithms; however, this method fails to work correctly when a new or unknown fault occurs. In such cases an unsupervised learning algorithm becomes essential for obtaining the correct diagnosis. Kernel Fuzzy C-Means (KFCM) is one such unsupervised algorithm, although it has its own limitations; in this paper a novel method is proposed for conditioning the KFCM method (C-KFCM) so that it can be used effectively for fault diagnosis of both known and unknown faults, as in satellite reaction wheels. The C-KFCM approach involves determination of exact class centers from the data of known faults; in this way a discrete number of fault classes is fixed at the start. Similarity parameters are then derived and determined for each fault data point. Thereafter, depending on the similarity threshold, each data point is assigned a class label. The high-similarity points fall into one of the 'known-fault' classes while the low-similarity points are labeled as 'unknown-faults'. Simulation results show that, compared to a supervised algorithm such as a neural network, the C-KFCM method can effectively cluster historical fault data (as in reaction wheels) and diagnose the faults to an accuracy of more than 91%. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
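
    The conditioning idea can be sketched in a few lines (a minimal illustration, not the authors' implementation: the Gaussian kernel, the 0.6 similarity threshold, and the two hypothetical reaction-wheel fault classes are all assumptions):

      import numpy as np

      def gaussian_kernel(x, c, sigma=1.0):
          """Kernel similarity between a data point x and a class center c (1 = identical)."""
          return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

      def condition_centers(known_fault_data):
          """Fix one center per known fault class from labelled historical data."""
          return {label: np.mean(samples, axis=0) for label, samples in known_fault_data.items()}

      def classify(points, centers, threshold=0.6, sigma=1.0):
          """Label each point with its most similar known-fault class, or 'unknown-fault'."""
          labels = []
          for x in points:
              sims = {label: gaussian_kernel(x, c, sigma) for label, c in centers.items()}
              best = max(sims, key=sims.get)
              labels.append(best if sims[best] >= threshold else "unknown-fault")
          return labels

      # Hypothetical telemetry features for two known reaction-wheel fault classes.
      known = {"friction-fault": np.array([[1.0, 0.1], [1.1, 0.2]]),
               "current-fault":  np.array([[0.1, 1.0], [0.2, 1.1]])}
      centers = condition_centers(known)
      print(classify(np.array([[1.05, 0.15], [3.0, 3.0]]), centers))
      # -> ['friction-fault', 'unknown-fault']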

  4. Temporal evolution of fault systems in the Upper Jurassic of the Central German Molasse Basin: case study Unterhaching

    NASA Astrophysics Data System (ADS)

    Budach, Ingmar; Moeck, Inga; Lüschen, Ewald; Wolfgramm, Markus

    2018-03-01

    The structural evolution of faults in foreland basins is linked to a complex basin history ranging from extension to contraction and inversion tectonics. Faults in the Upper Jurassic of the German Molasse Basin, a Cenozoic Alpine foreland basin, play a significant role for geothermal exploration and are therefore imaged, interpreted and studied using 3D seismic reflection data. Beyond this applied aspect, the analysis of these seismic data helps to better understand the temporal evolution of faults and the respective stress fields. In 2009, a 27 km2 3D seismic reflection survey was conducted around the Unterhaching Gt 2 well, south of Munich. The main focus of this study is an in-depth analysis of a prominent v-shaped fault block structure located at the center of the 3D seismic survey. Two methods were used to study the periodic fault activity and the relative age of the detected faults: (1) horizon flattening and (2) analysis of incremental fault throws. Slip and dilation tendency analyses were conducted afterwards to determine the stresses resolved on the faults in the current stress field. Two possible kinematic models explain the structural evolution: one model assumes a left-lateral strike-slip fault in a transpressional regime resulting in a positive flower structure; the other model incorporates crossing conjugate normal faults within a transtensional regime. The interpreted successive fault formation favors the latter model. The episodic fault activity may enhance fault zone permeability and hence reservoir productivity, implying that the analysis of periodically active faults represents an important part of successfully targeting geothermal wells.

  5. The Trees that surround us

    NASA Astrophysics Data System (ADS)

    Costa, M. E. G.; Rodrigues, M. A. S.

    2012-04-01

    In our school, activities linked with the sciences are developed in partnership with other school subjects. Interdisciplinary projects are valued from the beginning to the end of a project. It is common for teachers of different areas to work together on a science project. Researching articles written in English is very important, not only for the development of our students' scientific literacy but also as a way of widening knowledge and gaining different perspectives on life, rather than being limited to articles in the Portuguese language. In this study we are going to collect data about the predominant tree species in the region, especially the invasive acacia species, the native tree species and the commercial species. We are going to study the reasons for the appearance of each species and draw a chart of soil occupation in the council. This chart will also allow the study of the distribution and use of land for each tree species. This research work is the first stage of a contribution to warn the town council of the dangers that the invasive species pose to the future economy of the council.

  6. Identifying Conventionally Sub-Seismic Faults in Polygonal Fault Systems

    NASA Astrophysics Data System (ADS)

    Fry, C.; Dix, J.

    2017-12-01

    Polygonal Fault Systems (PFS) are prevalent in hydrocarbon basins globally and represent potential fluid pathways. However, the characterization of these pathways is subject to the limitations of conventional 3D seismic imaging, which is only capable of resolving features on a decametre scale horizontally and a metre scale vertically. While outcrop and core examples can identify smaller features, they are limited by the extent of the exposures. The disparity between these scales can allow smaller faults to be lost in a resolution gap, which could mean potential pathways are left unseen. Here the focus is upon PFS within the London Clay, a common bedrock that is tunnelled into and bears construction foundations for much of London. It is a continuation of the Ieper Clay, where PFS were first identified, and is found to approach the seafloor within the Outer Thames Estuary. This allows for direct analysis of PFS surface expressions, via the use of high-resolution 1 m bathymetric imaging in combination with high-resolution seismic imaging. Through use of these datasets, surface expressions of over 1500 faults within the London Clay have been identified, with the smallest fault measuring 12 m and the largest 612 m in length. The displacements over these faults, established from both bathymetric and seismic imaging, range from 30 cm to a couple of metres, scales that would typically be sub-seismic for conventional basin seismic imaging. The orientations and dimensions of the faults within this network have been directly compared to 3D seismic data of the Ieper Clay from the offshore Dutch sector, where it exists approximately 1 km below the seafloor. These have typical PFS attributes, with lengths of hundreds of metres to kilometres and throws of tens of metres, a magnitude larger than those identified in the Outer Thames Estuary. The similar orientations and polygonal patterns within both locations indicate that the smaller faults exist within typical PFS structure but are

  7. Influence of fault steps on rupture termination of strike-slip earthquake faults

    NASA Astrophysics Data System (ADS)

    Li, Zhengfang; Zhou, Bengang

    2018-03-01

    A statistical analysis was completed on the rupture data of 29 historical strike-slip earthquakes across the world. The purpose of this study is to examine the effects of fault steps on the rupture termination of these events. The results show good correlations between the type and length of steps and the seismic rupture, and a poor correlation between the number of steps and the seismic rupture. For different magnitude intervals, the smallest widths of the fault steps (Lt) that can terminate the rupture propagation are variable: Lt = 3 km for Ms 6.5-6.9, Lt = 4 km for Ms 7.0-7.5, Lt = 6 km for Ms 7.5-8.0, and Lt = 8 km for Ms 8.0-8.5. A dilational fault step is easier to rupture through than a compressional fault step. The smallest width of fault step required for rupture arrest can be used as an indicator to judge the scale of the rupture termination of seismic faults. This is helpful for research on fault segmentation, as well as for estimating the magnitude of potential earthquakes, and is thus of significance for the assessment of seismic risks.
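
    Used as an indicator in the way the authors suggest, the quoted bins reduce to a simple lookup (a sketch only; the handling of bin boundaries, which overlap slightly as quoted, is an assumption):

      def smallest_terminating_step_width(ms):
          """Smallest fault-step width Lt (km) expected to arrest rupture for magnitude Ms,
          using the bins quoted in the abstract (boundary handling assumed)."""
          if 6.5 <= ms < 7.0:
              return 3.0
          if 7.0 <= ms < 7.5:
              return 4.0
          if 7.5 <= ms < 8.0:
              return 6.0
          if 8.0 <= ms <= 8.5:
              return 8.0
          raise ValueError("Ms outside the 6.5-8.5 range covered by the statistics")

      def step_likely_terminates(ms, step_width_km):
          """True if a step of the given width is at least as wide as Lt for this Ms."""
          return step_width_km >= smallest_terminating_step_width(ms)

      print(step_likely_terminates(7.2, 5.0))   # True: 5 km step vs Lt = 4 km
      print(step_likely_terminates(8.2, 5.0))   # False: 5 km step vs Lt = 8 km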

  8. Providing the full DDF link protection for bus-connected SIEPON based system architecture

    NASA Astrophysics Data System (ADS)

    Hwang, I.-Shyan; Pakpahan, Andrew Fernando; Liem, Andrew Tanny; Nikoukar, AliAkbar

    2016-09-01

    Currently a massive amount of traffic per second is delivered through EPON systems, one of the prominent access network technologies for delivering the next generation network. It is therefore vital to keep the EPON optical distribution network (ODN) working by providing the necessary protection mechanisms in the deployed devices; otherwise, failures will cause great losses for both network operators and business customers. In this paper, we propose a bus-connected architecture to protect against and recover from distribution drop fiber (DDF) link faults or transceiver failures at ONU(s) in a SIEPON system. The proposed architecture is cost-effective, delivers high fault tolerance in handling multiple DDF faults, and provides flexibility in choosing the backup ONU assignments. Simulation results show that the proposed architecture provides reliability and maintains quality of service (QoS) performance in terms of mean packet delay, system throughput, packet loss and EF jitter when DDF link failures occur.

  9. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
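
    A heavily simplified sketch of the residual-plus-fusion idea described above (not the MBFTC implementation: the scalar residuals standing in for Extended Kalman Filter innovations, the two particular detectors, and the AND-style fusion rule are all assumptions):

      import numpy as np

      def residuals(measured, predicted):
          """Innovation sequence: sensor readings minus model predictions
          (in MBFTC these would come from an Extended Kalman Filter)."""
          return np.asarray(measured) - np.asarray(predicted)

      def threshold_detector(r, k=3.0):
          """Flag samples whose residual magnitude exceeds k times the baseline scatter."""
          return np.abs(r) > k * np.std(r[:10])          # first 10 samples taken as fault-free

      def cusum_detector(r, drift=0.05, h=1.0):
          """Flag samples where a one-sided cumulative sum of residuals exceeds h."""
          s, flags = 0.0, []
          for ri in r:
              s = max(0.0, s + abs(ri) - drift)
              flags.append(s > h)
          return np.array(flags)

      def fuse(*flag_arrays):
          """Simple fusion: declare a fault only where the detectors agree."""
          return np.logical_and.reduce(flag_arrays)

      rng = np.random.default_rng(0)
      # Healthy behaviour for 20 samples, then a sensor bias of 0.8 (an injected fault).
      r = residuals(measured=np.r_[np.zeros(20), np.full(10, 0.8)] + 0.01 * rng.standard_normal(30),
                    predicted=np.zeros(30))
      print(fuse(threshold_detector(r), cusum_detector(r)).any())   # True: fault detected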

  10. A Geologic and Geomorphic Mapping Approach to Understanding the Kinematic Role of Faulting in the Little San Bernardino Mountains in the Evolution of the San Andreas Fault System in Southern California

    NASA Astrophysics Data System (ADS)

    Powell, R. E.; Matti, J. C.

    2006-12-01

    The Little San Bernardino Mountains (LSBM) constitute a pivotal yet poorly understood structural domain along the right-lateral San Andreas Fault (SAF) in southern California. The LSBM, forming a dramatic escarpment between the eastern Transverse Ranges (ETR) and the Salton Trough, contain an array of N- to NW-trending faults that occupy the zone of intersections between the SAF and the coevolving E-trending left-slip faults of the ETR. One of the N-trending faults within the LSBM domain, the West Deception Canyon Fault, previously has been identified as the locus of the Joshua Tree earthquake (Mw 6.1) of 23 April 1992. That earthquake was the initial shock in the ensuing Landers earthquake sequence. During the evolution of the plate-margin shearing associated with the opening of the Gulf of California since about 5 Ma, the left-lateral faults of the ETR have provided the kinematic transition between the S end of the broad Eastern California Shear Zone (ECSZ) which extends northward through the Mojave Desert and along Walker Lane and the SAF proper in southern California. The long-term geologic record of cumulative displacement on the sinistral ETR faults and the dextral SAF and Mojave Desert faults indicates that these conjugate fault sets have mutually accommodated one another rather than exhibit cross-cutting relations. In contrast, the linear array of earthquakes that make up the dextral 1992 Landers sequence extends across the sinistral Pinto Mountain Fault and has been cited by some as evidence that ECSZ is coalescing southward along the N-trending dextral faults of the northern LSBM to join the ECSZ directly to southern SAF. To gain a better understanding of the array of faults in the LSBM, we are combining mapping within the crystalline basement terrane of the LSBM with mapping both of uplifted remnants of erosional surfaces developed on basement rocks and of volcanic and sedimentary rocks deposited on those surfaces. Our preliminary findings indicate the

  11. Detection of CMOS bridging faults using minimal stuck-at fault test sets

    NASA Technical Reports Server (NTRS)

    Ijaz, Nabeel; Frenzel, James F.

    1993-01-01

    The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.

  12. A methodological combined framework for roadmapping biosensor research: a fault tree analysis approach within a strategic technology evaluation frame.

    PubMed

    Siontorou, Christina G; Batzias, Fragiskos A

    2014-03-01

    Biosensor technology began in the 1960s to revolutionize instrumentation and measurement. Despite the market success of the glucose sensor, which revolutionized medical diagnostics, and the promise of the artificial pancreas, currently at the approval stage, the industry is reluctant to capitalize on other relevant university-produced knowledge and innovation. On the other hand, the scientific literature is extensive and persistent, while the number of university-hosted biosensor groups is growing. Considering the limited marketability of biosensors compared to the available research output, the biosensor field has been used by the present authors as a suitable paradigm for developing a methodological combined framework for "roadmapping" university research output in this discipline. This framework adopts the basic principles of the Analytic Hierarchy Process (AHP), replacing the lower level of technology alternatives with internal barriers (drawbacks, limitations, disadvantages), modeled through fault tree analysis (FTA) relying on fuzzy reasoning to account for uncertainty. The proposed methodology is validated retrospectively using ion-selective field effect transistor (ISFET)-based biosensors as a case example, and then applied prospectively to membrane biosensors, putting an emphasis on manufacturability issues. The analysis projected the trajectory of membrane platforms differently from the available market roadmaps, which, considering the vast industrial experience in tailoring and handling crystalline forms, suggest a technology path of biomimetic and synthetic materials. The results presented herein indicate that future trajectories lie with nanotechnology, especially nanofabrication and nano-bioinformatics, and focus more on the science path, that is, on controlling the natural process of self-assembly and the thermodynamics of bioelement-lipid interaction. This retains the nature-derived sensitivity of the biosensor platform, pointing out the differences
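
    The FTA building block used inside such a framework can be illustrated with interval-valued basic events, a crude stand-in for the fuzzy numbers the authors rely on; the gates, the example barriers and their probabilities below are invented purely for illustration:

      def and_gate(*events):
          """AND gate: the output occurs only if all inputs occur; for independent
          basic events given as (low, high) probability intervals, bounds multiply."""
          low = high = 1.0
          for (l, h) in events:
              low, high = low * l, high * h
          return (low, high)

      def or_gate(*events):
          """OR gate: 1 minus the product of complements, assuming independent events."""
          surv_low = surv_high = 1.0          # survival probabilities (no input occurs)
          for (l, h) in events:
              surv_low, surv_high = surv_low * (1.0 - l), surv_high * (1.0 - h)
          return (1.0 - surv_low, 1.0 - surv_high)

      # Invented barrier probabilities, reduced from fuzzy numbers to intervals.
      poor_stability = (0.10, 0.20)
      costly_fab     = (0.30, 0.40)
      no_mass_market = (0.05, 0.10)

      # Top event "platform not manufacturable" = (poor_stability OR costly_fab) AND no_mass_market
      top = and_gate(or_gate(poor_stability, costly_fab), no_mass_market)
      print(top)   # roughly (0.0185, 0.052)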

  13. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently proposed concept for fault detection in complex structures, is applied; it consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and an adaptive threshold are adopted for fault detection because of the limited prior knowledge of normal operational conditions and fault conditions. To isolate multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single faults and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
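
    The statistical-test-with-adaptive-threshold idea can be sketched generically for a chain of sensors forming a virtual beam (the variance feature and the median-plus-MAD threshold rule below are assumptions, not the paper's exact statistics):

      import numpy as np

      def feature(signal):
          """Simple vibration feature per sensor record: signal variance."""
          return np.var(signal)

      def adaptive_threshold(baseline_features, k=5.0):
          """Threshold set from fault-free data only: median + k * MAD,
          useful when little prior knowledge of fault conditions is available."""
          med = np.median(baseline_features)
          mad = np.median(np.abs(baseline_features - med))
          return med + k * mad

      def detect(virtual_beam_signals, threshold):
          """Flag the sensors along the virtual beam whose feature exceeds the threshold."""
          return [i for i, s in enumerate(virtual_beam_signals) if feature(s) > threshold]

      rng = np.random.default_rng(0)
      baseline = np.array([feature(rng.normal(0, 1.0, 2000)) for _ in range(30)])
      beam = [rng.normal(0, 1.0, 2000) for _ in range(5)]
      beam[3] = rng.normal(0, 2.5, 2000)                   # sensor 3 sits near a simulated fault
      print(detect(beam, adaptive_threshold(baseline)))    # -> [3] (the faulty sensor)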

  14. Fault Identification by Unsupervised Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Nandan, S.; Mannu, U.

    2012-12-01

    Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover (cities, deserts, vegetation, etc.) and to capture changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture. Many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For a better seismic risk evaluation it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine differences between the faults reconstructed by deterministic assignment in K-means and probabilistic assignment in the EM algorithm. The method is conceptually identical to the methodologies developed by Ouillion et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions. While Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and constrain the fault
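
    The deterministic (K-means) variant of this idea can be sketched on synthetic hypocenters; the EM variant and the isolated-event filtering are omitted, and the two synthetic fault planes below are invented for illustration:

      import numpy as np

      def kmeans(points, k, n_iter=50, seed=0):
          """Plain Lloyd's algorithm: assign hypocenters to k clusters."""
          rng = np.random.default_rng(seed)
          centers = points[rng.choice(len(points), k, replace=False)]
          for _ in range(n_iter):
              d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
              labels = d.argmin(axis=1)
              centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
          return labels, centers

      def fit_plane(points):
          """Best-fitting plane through a hypocenter cluster: centroid plus the normal,
          taken as the singular vector with the smallest singular value."""
          centroid = points.mean(axis=0)
          _, _, vt = np.linalg.svd(points - centroid)
          return centroid, vt[-1]            # (point on plane, unit normal)

      # Synthetic catalogue: two well-separated planar clusters with location noise.
      rng = np.random.default_rng(1)
      fault_a = np.c_[rng.uniform(0, 10, 200), rng.uniform(0, 10, 200), np.zeros(200)]
      fault_b = np.c_[np.full(200, 30.0), rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)]
      catalogue = np.vstack([fault_a, fault_b]) + 0.1 * rng.standard_normal((400, 3))

      labels, _ = kmeans(catalogue, k=2)
      for j in range(2):
          centroid, normal = fit_plane(catalogue[labels == j])
          print(j, np.round(normal, 2))      # one normal near ±(0, 0, 1), the other near ±(1, 0, 0)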

  15. A remote sensing study of active folding and faulting in southern Kerman province, S.E. Iran

    NASA Astrophysics Data System (ADS)

    Walker, Richard Thomas

    2006-04-01

    Geomorphological observations reveal a major oblique fold-and-thrust belt in Kerman province, S.E. Iran. The active faults appear to link the Sabzevaran right-lateral strike-slip fault in southeast Iran to other strike-slip faults within the interior of the country and may provide the means of distributing right-lateral shear between the Zagros and Makran mountains over a wider region of central Iran. The Rafsanjan fault is manifest at the Earth's surface as right-lateral strike-slip fault scarps and folding in alluvial sediments. Height changes across the anticlines, and widespread incision of rivers, are likely to result from hanging-wall uplift above thrust faults at depth. Scarps in recent alluvium along the northern margins of the folds suggest that the thrusts reach the surface and are active at the present-day. The observations from Rafsanjan are used to identify similar late Quaternary faulting elsewhere in Kerman province near the towns of Mahan and Rayen. No instrumentally recorded destructive earthquakes have occurred in the study region and only one historical earthquake (Lalehzar, 1923) is recorded. In addition GPS studies show that present-day rates of deformation are low. However, fault structures in southern Kerman province do appear to be active in the late Quaternary and may be capable of producing destructive earthquakes in the future. This study shows how widely available remote sensing data can be used to provide information on the distribution of active faulting across large areas of deformation.

  16. How fault evolution changes strain partitioning and fault slip rates in Southern California: Results from geodynamic modeling

    NASA Astrophysics Data System (ADS)

    Ye, Jiyang; Liu, Mian

    2017-08-01

    In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults (<2 Ma). The initiation of these young faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.

  17. Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures

    NASA Technical Reports Server (NTRS)

    Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.

    1990-01-01

    A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.
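
    The dual-channel coverage measurement lends itself to a toy Monte Carlo sketch (purely illustrative, not the mixed-mode electrical/logic simulator used in the study; the 16-bit word, the stand-in computation and the comparison-based detection are assumptions):

      import random

      WORD_BITS = 16

      def compute(x):
          """The 'controller' computation duplicated on both channels (toy stand-in)."""
          return (3 * x + 7) & 0xFFFF

      def inject_transient(value, n_flips=1):
          """Flip n random bits of a channel's result, modelling a transient upset."""
          for _ in range(n_flips):
              value ^= 1 << random.randrange(WORD_BITS)
          return value

      def run_trial(x, hit_both=False):
          """One injection experiment: corrupt channel A (and optionally B), then compare."""
          a, b = compute(x), compute(x)
          a = inject_transient(a)
          if hit_both:
              b = inject_transient(b)
          return a != b           # True means the dual-channel comparison caught the fault

      random.seed(0)
      trials = [run_trial(random.randrange(0xFFFF)) for _ in range(10_000)]
      print("single-transient coverage:", sum(trials) / len(trials))   # 1.0 for single upsets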

  18. Where's the Hayward Fault? A Green Guide to the Fault

    USGS Publications Warehouse

    Stoffer, Philip W.

    2008-01-01

    This report describes self-guided field trips to one of North America's most dangerous earthquake faults, the Hayward Fault. Locations were chosen because of their easy access using mass transit and/or their significance relating to the natural and cultural history of the East Bay landscape. This field-trip guidebook was compiled to help commemorate the 140th anniversary of an estimated M 7.0 earthquake that occurred on the Hayward Fault at approximately 7:50 AM, October 21st, 1868. Although many reports and on-line resources have been compiled about the science and engineering associated with earthquakes on the Hayward Fault, this report has been prepared to serve as an outdoor guide to the fault for the interested public and for educators. The first chapter is a general overview of the geologic setting of the fault. This is followed by ten chapters of field trips to selected areas along the fault, or in the vicinity, where landscape, geologic, and man-made features that have relevance to understanding the nature of the fault and its earthquake history can be found. A glossary is provided to define and illustrate scientific terms used throughout this guide. A 'green' theme helps conserve resources and promotes use of public transportation, where possible. Although access to all locations described in this guide is possible by car, alternative suggestions are provided. To help conserve paper, this guidebook is available on-line only; however, select pages or chapters (field trips) within this guide can be printed separately to take along on an excursion. The discussions in this paper highlight transportation alternatives to visit selected field trip locations. In some cases, combinations, such as a ride on BART and a bus, can be used instead of automobile transportation. For other locales, bicycles can be an alternative means of transportation. Transportation descriptions on selected pages are intended to help guide fieldtrip planners or participants choose trip

  19. The Iceland Plate Boundary Zone: Propagating Rifts, Migrating Transforms, and Rift-Parallel Strike-Slip Faults

    NASA Astrophysics Data System (ADS)

    Karson, J. A.

    2017-11-01

    Unlike most of the Mid-Atlantic Ridge, the North America/Eurasia plate boundary in Iceland lies above sea level where magmatic and tectonic processes can be directly investigated in subaerial exposures. Accordingly, geologic processes in Iceland have long been recognized as possible analogs for seafloor spreading in the submerged parts of the mid-ocean ridge system. Combining existing and new data from across Iceland provides an integrated view of this active, mostly subaerial plate boundary. The broad Iceland plate boundary zone includes segmented rift zones linked by transform fault zones. Rift propagation and transform fault migration away from the Iceland hotspot rearrange the plate boundary configuration resulting in widespread deformation of older crust and reactivation of spreading-related structures. Rift propagation results in block rotations that are accommodated by widespread, rift-parallel, strike-slip faulting. The geometry and kinematics of faulting in Iceland may have implications for spreading processes elsewhere on the mid-ocean ridge system where rift propagation and transform migration occur.

  20. Lifemap: Exploring the Entire Tree of Life.

    PubMed

    de Vienne, Damien M

    2016-12-01

    The Tree of Life (ToL) is meant to be a unique representation of the evolutionary relationships between all species on earth. Huge efforts are made to assemble such a large tree, helped by the decrease of sequencing costs and improved methods to reconstruct and combine phylogenies, but no tool exists today to explore the ToL in its entirety in a satisfying manner. By combining methods used in modern cartography, such as OpenStreetMap, with a new way of representing tree-like structures, I created Lifemap, a tool allowing the exploration of a complete representation of the ToL (between 800,000 and 2.2 million species depending on the data source) in a zoomable interface. A server version of Lifemap also allows users to visualize their own trees. This should help researchers in ecology and evolutionary biology in their everyday work, but may also permit the diffusion to a broader audience of our current knowledge of the evolutionary relationships linking all organisms.

  1. "HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes

    NASA Astrophysics Data System (ADS)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    We apply the concept of "Highly Optimized Tolerance" (HOT) to the investigation of spatio-temporal seismicity evolution, in particular mechanisms associated with the largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible in a conceptually comparable way for the development, evolution and maintenance of younger fault structures and primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy, whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and associated seismic response patterns within the HOT concept, and outline fundamental differences between this novel interpretation and more orthodox viewpoints like the criticality concept. The discussion is flanked by numerical simulations of a

  2. An Autonomous Distributed Fault-Tolerant Local Positioning System

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2017-01-01

    We describe a fault-tolerant, GPS-independent (Global Positioning System) distributed autonomous positioning system for static/mobile objects and present solutions for providing highly accurate geo-location data for the static/mobile objects in dynamic environments. The reliability and accuracy of a positioning system fundamentally depend on two factors: its timeliness in broadcasting signals and the knowledge of its geometry, i.e., the locations of and distances between the beacons. Existing distributed positioning systems either synchronize to a common external source like GPS or establish their own time synchrony using a master-slave scheme, designating a particular beacon as the master to which the other beacons synchronize, resulting in a single point of failure. Another drawback of existing positioning systems is their lack of addressing various fault manifestations, in particular communication link failures, which, as in wireless networks, increasingly dominate process failures and are typically transient and mobile, in the sense that they affect different messages to/from different processes over time.

  3. EvolView, an online tool for visualizing, annotating and managing phylogenetic trees

    PubMed Central

    Zhang, Huangkai; Gao, Shenghan; Lercher, Martin J.; Hu, Songnian; Chen, Wei-Hua

    2012-01-01

    EvolView is a web application for visualizing, annotating and managing phylogenetic trees. First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html. PMID:22695796

  4. Fault Analysis in Solar Photovoltaic Arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown, at times, to prevent the fault current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. The other is fault evolution in a PV array during a night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition" conditions. However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.

  5. Fault creep rates of the Chaman fault (Afghanistan and Pakistan) inferred from InSAR

    NASA Astrophysics Data System (ADS)

    Barnhart, William D.

    2017-01-01

    The Chaman fault is the major strike-slip structural boundary between the India and Eurasia plates. Despite sinistral slip rates similar to the North America-Pacific plate boundary, no major (>M7) earthquakes have been documented along the Chaman fault, indicating that the fault either creeps aseismically or is at a late stage in its seismic cycle. Recent work with remotely sensed interferometric synthetic aperture radar (InSAR) time series documented a heterogeneous distribution of fault creep and interseismic coupling along the entire length of the Chaman fault, including a 125 km long creeping segment and a 95 km long locked segment within the region documented in this study. Here I present additional InSAR time series results from the Envisat and ALOS radar missions spanning the southern and central Chaman fault in an effort to constrain the locking depth, dip, and slip direction of the Chaman fault. I find that the fault deviates little from a vertical geometry and accommodates little to no fault-normal displacement. Peak documented creep rates on the fault are 9-12 mm/yr, accounting for 25-33% of the total motion between India and Eurasia, and locking depths in creeping segments are commonly shallower than 500 m. The magnitude of the 1892 Chaman earthquake is well predicted by the total area of the 95 km long coupled segment. To a first order, the heterogeneous distribution of aseismic creep combined with consistently shallow locking depths suggests that the southern and central Chaman fault may only produce small to moderate earthquakes (

  6. Aftershocks illuminate the 2011 Mineral, Virginia, earthquake causative fault zone and nearby active faults

    USGS Publications Warehouse

    Horton, J. Wright; Shah, Anjana K.; McNamara, Daniel E.; Snyder, Stephen L.; Carter, Aina M

    2015-01-01

    Deployment of temporary seismic stations after the 2011 Mineral, Virginia (USA), earthquake produced a well-recorded aftershock sequence. The majority of aftershocks are in a tabular cluster that delineates the previously unknown Quail fault zone. Quail fault zone aftershocks range from ~3 to 8 km in depth and are in a 1-km-thick zone striking ~036° and dipping ~50°SE, consistent with a 028°, 50°SE main-shock nodal plane having mostly reverse slip. This cluster extends ~10 km along strike. The Quail fault zone projects to the surface in gneiss of the Ordovician Chopawamsic Formation just southeast of the Ordovician–Silurian Ellisville Granodiorite pluton tail. The following three clusters of shallow (<3 km) aftershocks illuminate other faults. (1) An elongate cluster of early aftershocks, ~10 km east of the Quail fault zone, extends 8 km from Fredericks Hall, strikes ~035°–039°, and appears to be roughly vertical. The Fredericks Hall fault may be a strand or splay of the older Lakeside fault zone, which to the south spans a width of several kilometers. (2) A cluster of later aftershocks ~3 km northeast of Cuckoo delineates a fault near the eastern contact of the Ordovician Quantico Formation. (3) An elongate cluster of late aftershocks ~1 km northwest of the Quail fault zone aftershock cluster delineates the northwest fault (described herein), which is temporally distinct, dips more steeply, and has a more northeastward strike. Some aftershock-illuminated faults coincide with preexisting units or structures evident from radiometric anomalies, suggesting tectonic inheritance or reactivation.

  7. Paleoseismicity of two historically quiescent faults in Australia: Implications for fault behavior in stable continental regions

    USGS Publications Warehouse

    Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.

    2003-01-01

    Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding of the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.

  8. Faulting processes in active faults - Evidences from TCDP and SAFOD drill core samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janssen, C.; Wirth, R.; Wenk, H. -R.

    The microstructures, mineralogy and chemistry of representative samples collected from the cores of the San Andreas Fault drill hole (SAFOD) and the Taiwan Chelungpu-Fault Drilling project (TCDP) have been studied using optical microscopy, TEM, SEM, XRD and XRF analyses. SAFOD samples provide a transect across undeformed host rock, the fault damage zone and currently active deforming zones of the San Andreas Fault. TCDP samples are retrieved from the principal slip zone (PSZ) and from the surrounding damage zone of the Chelungpu Fault. Substantial differences exist in the clay mineralogy of SAFOD and TCDP fault gouge samples. Amorphous material has been observed in SAFOD as well as TCDP samples. In line with previous publications, we propose that melt, observed in TCDP black gouge samples, was produced by seismic slip (melt origin) whereas amorphous material in SAFOD samples was formed by comminution of grains (crush origin) rather than by melting. Dauphiné twins in quartz grains of SAFOD and TCDP samples may indicate high seismic stress. The differences in the crystallographic preferred orientation of calcite between SAFOD and TCDP samples are significant. Microstructures resulting from dissolution-precipitation processes were observed in both faults but are more frequently found in SAFOD samples than in TCDP fault rocks. As already described for many other fault zones, clay-gouge fabrics are quite weak in SAFOD and TCDP samples. Clay-clast aggregates (CCAs), proposed to indicate frictional heating and thermal pressurization, occur in material taken from the PSZ of the Chelungpu Fault, as well as within and outside of the SAFOD deforming zones, indicating that these microstructures were formed over a wide range of slip rates.

  9. Misbheaving Faults: The Expanding Role of Geodetic Imaging in Unraveling Unexpected Fault Slip Behavior

    NASA Astrophysics Data System (ADS)

    Barnhart, W. D.; Briggs, R.

    2015-12-01

    Geodetic imaging techniques enable researchers to "see" details of fault rupture that cannot be captured by complementary tools such as seismology and field studies, thus providing increasingly detailed information about surface strain, slip kinematics, and how an earthquake may be transcribed into the geological record. For example, the recent Haiti, Sierra El Mayor, and Nepal earthquakes illustrate the fundamental role of geodetic observations in recording blind ruptures where purely geological and seismological studies provided incomplete views of rupture kinematics. Traditional earthquake hazard analyses typically rely on sparse paleoseismic observations and incomplete mapping, simple assumptions of slip kinematics from Andersonian faulting, and earthquake analogs to characterize the probabilities of forthcoming ruptures and the severity of ground accelerations. Spatially dense geodetic observations in turn help to identify where these prevailing assumptions regarding fault behavior break down and highlight new and unexpected kinematic slip behavior. Here, we focus on three key contributions of space geodetic observations to the analysis of co-seismic deformation: identifying near-surface co-seismic slip where no easily recognized fault rupture exists; discerning non-Andersonian faulting styles; and quantifying distributed, off-fault deformation. The 2013 Balochistan strike-slip earthquake in Pakistan illuminates how space geodesy precisely images non-Andersonian behavior and off-fault deformation. Through analysis of high-resolution optical imagery and DEMs, evidence emerges that a single fault may slip as both a strike-slip and a dip-slip fault across multiple seismic cycles. These observations likewise enable us to quantify on-fault deformation, which accounts for ~72% of the displacements in this earthquake. Nonetheless, the spatial distribution of on- and off-fault deformation in this event is highly spatially variable, a complicating factor for comparisons

  10. Links between tree species, symbiotic fungal diversity and ecosystem functioning in simplified tropical ecosystems.

    PubMed

    Lovelock, Catherine E; Ewel, John J

    2005-07-01

    We studied the relationships among plant and arbuscular mycorrhizal (AM) fungal diversity, and their effects on ecosystem function, in a series of replicate tropical forestry plots in the La Selva Biological Station, Costa Rica. Forestry plots were 12 yr old and were either monocultures of three tree species, or polycultures of the tree species with two additional understory species. Relationships among the AM fungal spore community, host species, plant community diversity and ecosystem phosphorus-use efficiency (PUE) and net primary productivity (NPP) were assessed. Analysis of the relative abundance of AM fungal spores found that host tree species had a significant effect on the AM fungal community, as did host plant community diversity (monocultures vs polycultures). The Shannon diversity index of the AM fungal spore community differed significantly among the three host tree species, but was not significantly different between monoculture and polyculture plots. Over all the plots, significant positive relationships were found between AM fungal diversity and ecosystem NPP, and between AM fungal community evenness and PUE. Relative abundance of two of the dominant AM fungal species also showed significant correlations with NPP and PUE. We conclude that the AM fungal community composition in tropical forests is sensitive to host species, and provide evidence supporting the hypothesis that the diversity of AM fungi in tropical forests and ecosystem NPP covary.

  11. Fault strength in Marmara region inferred from the geometry of the principal stress axes and fault orientations: A case study for the Prince's Islands fault segment

    NASA Astrophysics Data System (ADS)

    Pinar, Ali; Coskun, Zeynep; Mert, Aydin; Kalafat, Dogan

    2015-04-01

    The general consensus based on historical earthquake data is that the last major moment release on the Prince's Islands fault was in 1766, which in turn signals an increased seismic risk for the Istanbul metropolitan area, considering that most of the 20 mm/yr GPS-derived slip rate for the region is accommodated by that fault segment. The orientation of the Prince's Islands fault segment overlaps with the NW-SE direction of the maximum principal stress axis derived from the focal mechanism solutions of the large and moderate sized earthquakes that occurred in the Marmara region. As such, the NW-SE trending fault segment translates the motion between the two E-W trending branches of the North Anatolian fault zone; one extending from the Gulf of Izmit towards the Çınarcık basin and the other extending between offshore Bakırköy and Silivri. The basic relation between the orientation of the maximum and minimum principal stress axes, the shear and normal stresses, and the orientation of a fault provides clues about the strength of a fault, i.e., its frictional coefficient. Here, the angle between the fault normal and the maximum compressive stress axis is a key parameter; a fault-normal or fault-parallel maximum compressive stress might be a necessary and sufficient condition for a creeping event. That relation also implies that when the trend of the sigma-1 axis is close to the strike of the fault, the shear stress acting on the fault plane approaches zero. On the other hand, the ratio between the shear and normal stresses acting on a fault plane is proportional to the frictional coefficient of the fault. Accordingly, the geometry between the Prince's Islands fault segment and the maximum principal stress axis matches a weak fault model. In this presentation we analyze seismological data acquired in the Marmara region and interpret the results in conjunction with the above-mentioned weak fault model.
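
    The weak-fault inference above follows from the standard two-dimensional resolution of stresses on a plane; the relations below are a brief illustrative aside (textbook Mohr relations, not text from this record). For a fault whose normal makes an angle \theta with the maximum compressive stress \sigma_1,

        \sigma_n = \frac{\sigma_1 + \sigma_3}{2} + \frac{\sigma_1 - \sigma_3}{2}\cos 2\theta,
        \qquad
        \tau = \frac{\sigma_1 - \sigma_3}{2}\sin 2\theta,
        \qquad
        \mu_{\mathrm{apparent}} \approx \frac{\tau}{\sigma_n}.

    When \sigma_1 trends nearly parallel to the strike of a vertical fault, \theta approaches 90°, \sin 2\theta approaches zero, and the resolved shear stress vanishes; slip under such a geometry therefore requires a very low apparent frictional coefficient (or elevated pore pressure), which is the weak-fault situation described above.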

  12. Aeromagnetic anomaly patterns reveal buried faults along the eastern margin of the Wilkes Subglacial Basin (East Antarctica)

    USGS Publications Warehouse

    Armadillo, E.; Ferraccioli, F.; Zunino, A.; Bozzo, E.

    2007-01-01

    The Wilkes Subglacial Basin (WSB) is the major morphological feature recognized in the hinterland of the Transantarctic Mountains. The origin of this basin remains contentious and relatively poorly understood due to the lack of extensive geophysical exploration. We present a new aeromagnetic anomaly map over the transition between the Transantarctic Mountains and the WSB for an area adjacent to northern Victoria Land. The aeromagnetic map reveals the existence of subglacial faults along the eastern margin of the WSB. These inferred faults connect previously proposed fault zones over Oates Land with those mapped along the Ross Sea Coast. Specifically, we suggest a link between the Matusevich Fracture Zone and the Priestley Fault during the Cenozoic. The new evidence for structural control on the eastern margin of the WSB implies that a purely flexural origin for the basin is unlikely.

  13. Complex permeability structure of a fault zone crosscutting a sequence of sandstones and shales and its influence on hydraulic head distribution

    NASA Astrophysics Data System (ADS)

    Cilona, A.; Aydin, A.; Hazelton, G.

    2013-12-01

    Characterization of the structural architecture of a 5 km-long, N40°E-striking fault zone provides new insights for the interpretation of hydraulic heads measured across and along the fault. Of interest is the contaminant transport across a portion of the Upper Cretaceous Chatsworth Formation, a 1400 m-thick turbidite sequence of sandstones and shales exposed in the Simi Hills, southern California. Local bedding consistently dips about 20° to 30° to NW. Participating hydrogeologists monitor the local groundwater system by means of numerous boreholes used to define the 3D distribution of the groundwater table around the fault. Sixty hydraulic head measurements consistently show differences of tens of meters, except for a small area. In this presentation, we propose a link between this distribution and the fault zone architecture. Despite an apparent linear morphological trend, the fault is made up of at least three distinct segments named here as northern, central and southern segments. Key aspects of the fault zone architecture have been delineated at two sites. The first is an outcrop of the central segment and the second is a borehole intersecting the northern segment at depth. The first site shows the fault zone juxtaposing sandstones against shales. Here the fault zone consists of a 13 meter-wide fault rock including a highly deformed sliver of sandstone on the northwestern side. In the sandstone, shear offset was resolved along N42°E striking and SE dipping fracture surfaces localized within a 40 cm thick strand. Here the central core of the fault zone is 8 m-wide and contains mostly shale characterized by highly diffuse deformation. It shows a complex texture overprinted by N30°E-striking carbonate veins. At the southeastern edge of the fault zone exposure, a shale unit dipping 50° NW towards the fault zone provides the key information that the shale unit was incorporated into the fault zone in a manner consistent with shale smearing. At the second site, a

  14. Loading of the San Andreas fault by flood-induced rupture of faults beneath the Salton Sea

    USGS Publications Warehouse

    Brothers, Daniel; Kilb, Debi; Luttrell, Karen; Driscoll, Neal W.; Kent, Graham

    2011-01-01

    The southern San Andreas fault has not experienced a large earthquake for approximately 300 years, yet the previous five earthquakes occurred at ~180-year intervals. Large strike-slip faults are often segmented by lateral stepover zones. Movement on smaller faults within a stepover zone could perturb the main fault segments and potentially trigger a large earthquake. The southern San Andreas fault terminates in an extensional stepover zone beneath the Salton Sea—a lake that has experienced periodic flooding and desiccation since the late Holocene. Here we reconstruct the magnitude and timing of fault activity beneath the Salton Sea over several earthquake cycles. We observe coincident timing between flooding events, stepover fault displacement and ruptures on the San Andreas fault. Using Coulomb stress models, we show that the combined effect of lake loading, stepover fault movement and increased pore pressure could increase stress on the southern San Andreas fault to levels sufficient to induce failure. We conclude that rupture of the stepover faults, caused by periodic flooding of the palaeo-Salton Sea and by tectonic forcing, had the potential to trigger earthquake rupture on the southern San Andreas fault. Extensional stepover zones are highly susceptible to rapid stress loading and thus the Salton Sea may be a nucleation point for large ruptures on the southern San Andreas fault.

  15. Software-implemented fault insertion: An FTMP example

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1987-01-01

    This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of time of insertion and system workload. For the fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, software-implemented fault insertion can be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification and recovery. Although the software-inserted faults do not map directly to hardware-inserted faults, experiments show software-implemented fault insertion is capable of emulating hardware fault insertion, with greater ease and automation.
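
    The mechanism described above, corrupting state through software and timing how long the detection mechanisms take to respond, can be sketched in a few lines. The fragment below is a hypothetical, simplified illustration in Python (not the FTMP implementation, and the detector shown is a stand-in for whatever error-detection mechanism a real system provides):

        import time

        def inject_bit_flip(memory, address, bit):
            """Emulate a hardware-like fault by flipping one bit of a stored word."""
            memory[address] ^= (1 << bit)

        def run_experiment(memory, address, bit, detector, timeout=1.0):
            """Insert a fault and measure detection latency (None if undetected)."""
            t_inject = time.monotonic()
            inject_bit_flip(memory, address, bit)
            while time.monotonic() - t_inject < timeout:
                if detector(memory):  # e.g. a checksum or duplicate-compare check
                    return time.monotonic() - t_inject
            return None

        # Toy usage: the "detector" compares against a golden copy of the state.
        memory = {0x10: 0b1010}
        golden = dict(memory)
        print("detection latency:", run_experiment(memory, 0x10, 2, lambda m: m != golden))

    Because the fault is injected by directly altering state, the detection mechanisms are exercised immediately, which is consistent with the report's observation that software-inserted faults do not reproduce the error-manifestation delay of hardware-inserted faults.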

  16. Global strike-slip fault distribution on Enceladus reveals mostly left-lateral faults

    NASA Astrophysics Data System (ADS)

    Martin, E. S.; Kattenhorn, S. A.

    2013-12-01

    Within the outer solar system, normal faults are a dominant tectonic feature; however, strike-slip faults have played a role in modifying the surfaces of many icy bodies, including Europa, Ganymede, and Enceladus. Large-scale tectonic deformation in icy shells develops in response to stresses caused by a range of mechanisms including polar wander, despinning, volume changes, orbital recession/decay, diurnal tides, and nonsynchronous rotation (NSR). Icy shells often preserve this record of tectonic deformation as patterns of fractures that can be used to identify the source of stress responsible for creating the patterns. Previously published work on Jupiter's moon Europa found that right-lateral strike-slip faults predominantly formed in the southern hemisphere and left-lateral strike-slip faults in the northern hemisphere. This pattern suggested they were formed in the past by stresses induced by diurnal tidal forcing, and were then rotated into their current longitudinal positions by NSR. We mapped the distribution of strike-slip faults on Enceladus and used kinematic indicators, including tailcracks and en echelon fractures, to determine their sense of slip. Tailcracks are secondary fractures that form as a result of concentrations of stress at the tips of slipping faults with geometric patterns dictated by the slip sense. A total of 31 strike-slip faults were identified, nine of which were right-lateral faults, all distributed in a seemingly random pattern across Enceladus's surface, in contrast to Europa. Additionally, there is a dearth of strike-slip faults within the tectonized terrains centered at 90°W and within the polar regions north and south of 60°N and 60°S, respectively. The lack of strike-slip faults in the north polar region may be explained, in part, by limited data coverage. The south polar terrain (SPT), characterized by the prominent tiger stripes and south polar dichotomy, yielded no discrete strike-slip faults. This does not suggest that

  17. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    NASA Astrophysics Data System (ADS)

    Benesh, Nathan Philip

    This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. Using 3D seismic reflection data and new

  18. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  19. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
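
    As a rough illustration of the sensor-bias class of fault models listed above (a hypothetical Python sketch, not an actual OpenStudio measure), a bias can be represented as a constant offset applied to the reading seen by the control logic, so its effect on control decisions can be simulated:

        def biased_sensor(true_value, bias=2.0):
            """Sensor-bias fault model: the controller sees true_value + bias."""
            return true_value + bias

        def economizer_enabled(outdoor_temp_reading, setpoint=18.0):
            """Simplified economizer logic driven by the (possibly faulty) reading."""
            return outdoor_temp_reading < setpoint

        true_outdoor_temp = 17.0
        print(economizer_enabled(true_outdoor_temp))                 # fault-free case: True
        print(economizer_enabled(biased_sensor(true_outdoor_temp)))  # +2 degC bias: False

    The same pattern, wrapping a nominal model quantity with a fault transformation, extends to the other fault categories described in the report, such as set point changes or efficiency degradation.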

  1. Illite authigenesis during faulting and fluid flow - a microstructural study of fault rocks

    NASA Astrophysics Data System (ADS)

    Scheiber, Thomas; Viola, Giulio; van der Lelij, Roelant; Margreth, Annina

    2017-04-01

    Authigenic illite can form synkinematically during slip events along brittle faults. In addition it can also crystallize as a result of fluid flow and associated mineral alteration processes in hydrothermal environments. K-Ar dating of illite-bearing fault rocks has recently become a common tool to constrain the timing of fault activity. However, to fully interpret the derived age spectra in terms of deformation ages, a careful investigation of the fault deformation history and architecture at the outcrop-scale, ideally followed by a detailed mineralogical analysis of the illite-forming processes at the micro-scale, are indispensable. Here we integrate this methodological approach by presenting microstructural observations from the host rock immediately adjacent to dated fault gouges from two sites located in the Rolvsnes granodiorite (Bømlo, western Norway). This granodiorite experienced multiple episodes of brittle faulting and fluid-induced alteration, starting in the Mid Ordovician (Scheiber et al., 2016). Fault gouges are predominantly associated with normal faults accommodating mainly E-W extension. K-Ar dating of illites separated from representative fault gouges constrains deformation and alteration due to fluid ingress from the Permian to the Cretaceous, with a cluster of ages for the finest (<0.1 µm) fraction in the early to middle Jurassic. At site one, high-resolution thin section structural mapping reveals a complex deformation history characterized by several coexisting types of calcite veins and seven different generations of cataclasite, two of which contain a significant amount of authigenic and undoubtedly deformation-related illite. At site two, fluid ingress along and adjoining the fault core induced pervasive alteration of the host granodiorite. Quartz is crosscut by calcite veinlets whereas plagioclase, K-feldspar and biotite are almost completely replaced by the main alteration products kaolin, quartz and illite. Illite-bearing micro

  2. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    USGS Publications Warehouse

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M < ~3) earthquakes in southern California, the east San Francisco Bay, and the aftershock sequence of the 1989 Loma Prieta earthquake. I quantify the degree of mechanism variability on a range of length scales by comparing the hypocentral distance between every pair of events and the angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance < ~2 km) tend to have similar focal mechanisms, implying that although faults of many orientations may or may not be present, only similarly oriented fault planes produce earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2- to 50-km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  3. Spatiotemporal patterns of fault slip rates across the Central Sierra Nevada frontal fault zone

    NASA Astrophysics Data System (ADS)

    Rood, Dylan H.; Burbank, Douglas W.; Finkel, Robert C.

    2011-01-01

    Patterns in fault slip rates through time and space are examined across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38 and 39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and 10Be surface exposure dating, mean fault slip rates are defined, and by utilizing markers of different ages (generally, ~20 ka and ~150 ka), rates through time and interactions among multiple faults are examined over 10^4-10^5 year timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm/yr (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm/yr along the West Fork of the Carson River at Woodfords. Data permit rates that are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km between the northern Mono Basin (1.3 +0.6/-0.3 mm/yr at Lundy Canyon site) and the Bridgeport Basin (0.3 ± 0.1 mm/yr). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin is indicative of a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection

  4. Spatiotemporal Patterns of Fault Slip Rates Across the Central Sierra Nevada Frontal Fault Zone

    NASA Astrophysics Data System (ADS)

    Rood, D. H.; Burbank, D.; Finkel, R. C.

    2010-12-01

    We examine patterns in fault slip rates through time and space across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38-39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and Be-10 surface exposure dating, we define mean fault slip rates, and by utilizing markers of different ages (generally, ~20 ka and ~150 ka), we examine rates through time and interactions among multiple faults over 10-100 ky timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm/yr (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm/yr along the West Fork of the Carson River at Woodfords. Our data permit rates that are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km between the northern Mono Basin (1.3 +0.6/-0.3 mm/yr at Lundy Canyon site) and the Bridgeport Basin (0.3 ± 0.1 mm/yr). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin reflects a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection, extension is accommodated within a diffuse zone of

  5. Late Quaternary Faulting in Southeastern Louisiana: A Natural Laboratory for Understanding Shallow Faulting in Deltaic Materials

    NASA Astrophysics Data System (ADS)

    Dawers, N. H.; McLindon, C.

    2017-12-01

    A synthesis of late Quaternary faults within the Mississippi River deltaic plain aims to provide a more accurate assessment of regional and local fault architecture, and interactions between faulting, sediment loading, salt withdrawal and compaction. This effort was initiated by the New Orleans Geological Society and has resulted in access to industry 3D seismic reflection data, as well as fault trace maps, and various types of well data and biostratigraphy. An unexpected outgrowth of this project is a hypothesis that gravity-driven normal faults in deltaic settings may be good candidates for shallow aseismic and slow-slip phenomena. The late Quaternary fault population is characterized by several large, highly segmented normal fault arrays: the Baton Rouge-Tepetate fault zone, the Lake Pontchartrain-Lake Borgne fault zone, the Golden Meadow fault zone (GMFZ), and a major counter-regional salt withdrawal structure (the Bay Marchand-Timbalier Bay-Caillou Island salt complex and West Delta fault zone) that lies just offshore of southeastern Louisiana. In comparison to the other, more northerly fault zones, the GMFZ is still significantly salt-involved. Salt structures segment the GMFZ with fault tips ending near or within salt, resulting in highly localized fault and compaction related subsidence separated by shallow salt structures, which are inherently buoyant and virtually incompressible. At least several segments within the GMFZ are characterized by marsh breaks that formed aseismically over timescales of days to months, such as near Adams Bay and Lake Enfermer. One well-documented surface rupture adjacent to a salt dome propagated over a 3 day period in 1943. We suggest that Louisiana's coastal faults make excellent analogues for deltaic faults in general, and propose that a series of positive feedbacks keep them active in the near surface. These include differential sediment loading and compaction, weak fault zone materials, high fluid pressure, low elastic

  6. Fault-tolerant software - Experiment with the sift operating system. [Software Implemented Fault Tolerance computer

    NASA Technical Reports Server (NTRS)

    Brunelle, J. E.; Eckhardt, D. E., Jr.

    1985-01-01

    Results are presented of an experiment conducted in the NASA Avionics Integrated Research Laboratory (AIRLAB) to investigate the implementation of fault-tolerant software techniques on fault-tolerant computer architectures, in particular the Software Implemented Fault Tolerance (SIFT) computer. The N-version programming and recovery block techniques were implemented on a portion of the SIFT operating system. The results indicate that effective implementation of fault-tolerant software design techniques will impact system requirements, and suggest that retrofitting fault-tolerant software onto existing designs will be inefficient and may require system modification.
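
    N-version programming, one of the two techniques evaluated, amounts to running independently developed implementations of the same specification and voting on their outputs. The fragment below is a generic, hypothetical sketch of that idea in Python, not the SIFT code:

        from collections import Counter

        def n_version_execute(versions, inputs):
            """Run independently developed versions and return the majority result."""
            results = [version(*inputs) for version in versions]
            value, votes = Counter(results).most_common(1)[0]
            if votes <= len(versions) // 2:
                raise RuntimeError("no majority agreement among versions")
            return value

        # Three hypothetical implementations of the same specification; the last is faulty.
        versions = [lambda x: x * 2, lambda x: x + x, lambda x: 2 * x + 1]
        print(n_version_execute(versions, (21,)))  # majority result: 42

    A recovery block, the other technique mentioned, would instead run a primary implementation, check its result against an acceptance test, and fall back to an alternate implementation on failure.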

  7. Establishment of the mathematical model for diagnosing the engine valve faults by genetic programming

    NASA Astrophysics Data System (ADS)

    Yang, Wen-Xian

    2006-05-01

    Available machine fault diagnostic methods perform unsatisfactorily in both on-line and intelligent analyses because they involve intensive calculation and are labour intensive. Aiming at improving this situation, this paper describes the development of an intelligent approach using the Genetic Programming (abbreviated as GP) method. Because the constructed mathematical model is simple to evaluate, different kinds of machine faults may be diagnosed correctly and quickly. Moreover, human input is significantly reduced in the process of fault diagnosis. The effectiveness of the proposed strategy is validated by an illustrative example, in which three kinds of valve states inherent in a six-cylinder/four-stroke-cycle diesel engine, i.e. normal condition, valve-tappet clearance and gas leakage faults, are identified. In the example, 22 mathematical functions have been specially designed and 8 easily obtained signal features are used to construct the diagnostic model. Different from existing GPs, the diagnostic tree used in the algorithm is constructed in an intelligent way by applying a power-weight coefficient to each feature. The power-weight coefficients vary adaptively between 0 and 1 during the evolutionary process. Moreover, different evolutionary strategies are employed, respectively, for selecting the diagnostic features and functions, so that the mathematical functions are fully utilized while repeated use of signal features is avoided. The experimental results are illustrated diagrammatically in the following sections.
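
    The power-weight idea can be made concrete with a toy evaluation routine: each signal feature enters the expression raised to an adaptive exponent between 0 and 1, so an exponent near zero flattens that feature toward a constant and removes its influence. This is a hedged Python sketch of the concept with invented feature names, not the authors' GP system:

        import math

        def weighted_feature(value, weight):
            """Power-weight coefficient in [0, 1]; weight near 0 flattens the feature."""
            return math.copysign(abs(value) ** weight, value)

        def diagnostic_value(features, weights):
            """Toy diagnostic expression combining power-weighted features."""
            a = weighted_feature(features["rms"], weights["rms"])
            b = weighted_feature(features["kurtosis"], weights["kurtosis"])
            c = weighted_feature(features["peak"], weights["peak"])
            return a * b - c

        features = {"rms": 0.8, "kurtosis": 3.1, "peak": 1.9}  # invented signal features
        weights = {"rms": 0.9, "kurtosis": 0.2, "peak": 0.0}   # adaptive exponents
        print(diagnostic_value(features, weights))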

  8. The distribution of deformation in parallel fault-related folds with migrating axial surfaces: comparison between fault-propagation and fault-bend folding

    NASA Astrophysics Data System (ADS)

    Salvini, Francesco; Storti, Fabrizio

    2001-01-01

    In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures are influenced by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity, depending on their history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds above ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterland of the crest-to-forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

  9. Observations and Modelling of Alternative Tree Cover States of the Boreal Ecosystem

    NASA Astrophysics Data System (ADS)

    Abis, B.; Brovkin, V.

    2017-12-01

    Recently, multimodality of the tree cover distribution of the boreal forests has been detected, revealing the existence of three alternative vegetation modes. Identifying the regions with a potential for alternative tree cover states, and assessing the main factors underlying their existence, is important for projecting future change of natural vegetation cover and its effect on climate. Through the use of generalised additive models and phase-space analysis, we study the link between tree cover distribution and eight globally observed environmental factors, such as rainfall, temperature, and permafrost distribution. Using a classification based on these factors, we show the location of areas with potentially alternative tree cover states under the same environmental conditions in the boreal region. Furthermore, to explain the multimodality found in the data and the asymmetry between North America and Eurasia, we study a conceptual model based on tree species competition, and use it to simulate the sensitivity of tree cover to changes in environmental factors. We find that the link between individual environmental variables and tree cover differs regionally. Nonetheless, environmental conditions uniquely determine the vegetation state among the three dominant modes in ~95% of the cases. On the other hand, areas with potentially alternative tree cover states encompass ~1.1 million km², and correspond to possible transition zones with a reduced resilience to disturbances. Employing our conceptual model, we show that multimodality can be explained through competition between tree species with different adaptations to environmental factors and disturbances. Moreover, the model is able to reproduce the asymmetry in tree species distribution between Eurasia and North America. Finally, we find that changes in permafrost could be associated with bifurcation points of the model, corroborating the importance of permafrost in a changing climate.

  10. Drought-associated tree mortality: Global patterns and insights from tree-ring studies in the southwestern U.S.A

    NASA Astrophysics Data System (ADS)

    Macalady, Alison Kelly

    Forests play an important role in the earth system, regulating climate, maintaining biodiversity, and provisioning human communities with water, food and fuel. Interactions between climate and forest dynamics are not well constrained, and high uncertainty characterizes projections of global warming impacts on forests and associated ecosystem services. Recently observed tree mortality and forest die-off forewarn an acceleration of forest change with rising temperature and increased drought. However, the processes leading to tree death during drought are poorly understood, limiting our ability to anticipate future forest dynamics. The objective of this dissertation was to improve understanding of drought-associated tree mortality through literature synthesis and tree-ring studies on trees that survived and died during droughts in the southwestern USA. Specifically, this dissertation 1) documented global tree mortality patterns and identified emerging trends and research gaps; 2) quantified relationships between growth, climate, competition and mortality of pinon pine during droughts in New Mexico; 3) investigated tree defense anatomy as a potentially key element in pinon avoidance of death; and, 4) characterized the climate sensitivity of pinon resin ducts in order to gain insight into potential trends in tree defenses with climate variability and change. There has been an increase in studies reporting tree mortality linked to drought, heat, and the associated activity of insects and pathogens. Cases span the forested continents and occurred in water, light and temperature-limited forests. We hypothesized that increased tree mortality may be an emerging global phenomenon related to rising temperatures and drought (Appendix A). Recent radial growth was 53% higher on average in pinon that survived versus died during two episodes of drought-associated mortality, and statistical models of mortality risk based on average growth, growth variability, and abrupt growth

  11. Fault pattern at the northern end of the Death Valley - Furnace Creek fault zone, California and Nevada

    NASA Technical Reports Server (NTRS)

    Liggett, M. A. (Principal Investigator); Childs, J. F.

    1974-01-01

    The author has identified the following significant results. The pattern of faulting associated with the termination of the Death Valley-Furnace Creek Fault Zone in northern Fish Lake Valley, Nevada was studied in ERTS-1 MSS color composite imagery and color IR U-2 photography. Imagery analysis was supported by field reconnaissance and low altitude aerial photography. The northwest-trending right-lateral Death Valley-Furnace Creek Fault Zone changes northward to a complex pattern of discontinuous dip slip and strike slip faults. This fault pattern terminates to the north against an east-northeast trending zone herein called the Montgomery Fault Zone. No evidence for continuation of the Death Valley-Furnace Creek Fault Zone is recognized north of the Montgomery Fault Zone. Penecontemporaneous displacement in the Death Valley-Furnace Creek Fault Zone, the complex transitional zone, and the Montgomery Fault Zone suggests that the systems are genetically related. Mercury mineralization appears to have been localized along faults recognizable in ERTS-1 imagery within the transitional zone and the Montgomery Fault Zone.

  12. Study on the Evaluation Method for Fault Displacement: Probabilistic Approach Based on Japanese Earthquake Rupture Data - Principal fault displacements -

    NASA Astrophysics Data System (ADS)

    Kitada, N.; Inoue, N.; Tonagi, M.

    2016-12-01

    The purpose of Probabilistic Fault Displacement Hazard Analysis (PFDHA) is to estimate fault displacement values and the extent of their impact. There are two types of fault displacement related to an earthquake fault: principal fault displacement and distributed fault displacement. Distributed fault displacement should be evaluated for important facilities, such as nuclear installations. PFDHA estimates both principal and distributed fault displacement. For the estimation, PFDHA uses distance-displacement functions, which are constructed from field measurement data. We constructed slip-distance relations for principal fault displacement based on Japanese strike-slip and reverse-slip earthquakes, in order to apply them to the Japan region, which is a subduction setting. However, observed displacement data are sparse, especially for reverse faults. Takao et al. (2013) tried to estimate the relation using all fault types together (reverse and strike-slip faults). After Takao et al. (2013), several inland earthquakes occurred in Japan, so here we estimate distance-displacement functions separately for the strike-slip and reverse fault types, in particular adding the new fault displacement data set. To normalize the slip function data, several criteria have been proposed by different researchers. We normalized the principal fault displacement data using several methods and compared the resulting slip-distance functions. The Japanese reverse fault data normalized by total fault length did not show a particular trend in the slip-distance relation. In the case of segmented data, the slip-distance relationship indicated a trend similar to that of strike-slip faults. We will also discuss the relation between principal fault displacement distributions and source fault character. According to the slip distribution function of Petersen et al. (2011), for the strike-slip fault type the ratio of normalized displacement decreases toward the edge of the fault. However, the Japanese strike-slip fault data do not decrease as much toward the end of the fault

  13. Fault Current Distribution and Pole Earth Potential Rise (EPR) Under Substation Fault

    NASA Astrophysics Data System (ADS)

    Nnassereddine, M.; Rizk, J.; Hellany, A.; Nagrial, M.

    2013-09-01

    New high-voltage (HV) substations are fed by transmission lines. The position of these lines necessitates earthing design to ensure safety compliance of the system. Conductive structures such as steel or concrete poles are widely used in HV transmission mains. The earth potential rise (EPR) generated by a fault at the substation could result in an unsafe condition. This article discusses EPR due to a substation fault. Pole EPR under a substation fault is assessed with and without consideration of mutual impedance. Split factor determination with and without the mutual impedance of the line is also discussed. Furthermore, a simplified formula to compute the pole grid current under a substation fault is included. The article also introduces the n factor, which determines the number of poles that require earthing assessment under a substation fault. A case study is shown.

  14. Stress sensitivity of fault seismicity: A comparison between limited-offset oblique and major strike-slip faults

    USGS Publications Warehouse

    Parsons, T.; Stein, R.S.; Simpson, R.W.; Reasenberg, P.A.

    1999-01-01

    We present a new three-dimensional inventory of the southern San Francisco Bay area faults and use it to calculate stress applied principally by the 1989 M = 7.1 Loma Prieta earthquake and to compare fault seismicity rates before and after 1989. The major high-angle right-lateral faults exhibit a different response to the stress change than do minor oblique (right-lateral/thrust) faults. Seismicity on oblique-slip faults in the southern Santa Clara Valley thrust belt increased where the faults were unclamped. The strong dependence of seismicity change on normal stress change implies a high coefficient of static friction. In contrast, we observe that faults with significant offset (>50-100 km) behave differently; microseismicity on the Hayward fault diminished where right-lateral shear stress was reduced and where it was unclamped by the Loma Prieta earthquake. We observe a similar response on the San Andreas fault zone in southern California after the Landers earthquake sequence. Additionally, the offshore San Gregorio fault shows a seismicity rate increase where right-lateral/oblique shear stress was increased by the Loma Prieta earthquake despite also being clamped by it. These responses are consistent with either a low coefficient of static friction or high pore fluid pressures within the fault zones. We can explain the different behavior of the two styles of faults if those with large cumulative offset become impermeable through gouge buildup; coseismically pressurized pore fluids could be trapped and negate imposed normal stress changes, whereas in more limited offset faults, fluids could rapidly escape. The difference in behavior between minor and major faults may explain why frictional failure criteria that apply intermediate coefficients of static friction can be effective in describing the broad distributions of aftershocks that follow large earthquakes, since many of these events occur both inside and outside major fault zones.
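
    The friction argument here rests on the usual Coulomb failure stress framework; as a brief reminder of one common formulation (standard background, not reproduced from the record),

        \Delta \mathrm{CFS} = \Delta\tau + \mu' \, \Delta\sigma_n , \qquad \mu' = \mu \,(1 - B),

    where \Delta\tau is the change in shear stress in the slip direction, \Delta\sigma_n the change in fault-normal stress (positive for unclamping), \mu the static friction coefficient, and B the Skempton coefficient describing the pore-pressure response. A seismicity response dominated by the normal-stress term implies a large effective \mu', whereas insensitivity to unclamping on the mature, large-offset faults is consistent with a small \mu', i.e. low friction or trapped pore fluids with B approaching 1.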

  15. Deformation associated with continental normal faults

    NASA Astrophysics Data System (ADS)

    Resor, Phillip G.

    Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by satellite radar interferometry (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ~20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

  16. Fault healing promotes high-frequency earthquakes in laboratory experiments and on natural faults

    USGS Publications Warehouse

    McLaskey, Gregory C.; Thomas, Amanda M.; Glaser, Steven D.; Nadeau, Robert M.

    2012-01-01

    Faults strengthen or heal with time in stationary contact and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip.

  17. A Novel Dual Separate Paths (DSP) Algorithm Providing Fault-Tolerant Communication for Wireless Sensor Networks.

    PubMed

    Tien, Nguyen Xuan; Kim, Semog; Rhee, Jong Myung; Park, Sang Yoon

    2017-07-25

    Fault tolerance has long been a major concern for sensor communications in fault-tolerant cyber physical systems (CPSs). Network failure problems often occur in wireless sensor networks (WSNs) due to various factors such as the insufficient power of sensor nodes, the dislocation of sensor nodes, the unstable state of wireless links, and unpredictable environmental interference. Fault tolerance is thus one of the key requirements for data communications in WSN applications. This paper proposes a novel path redundancy-based algorithm, called dual separate paths (DSP), that provides fault-tolerant communication and improves network traffic performance for WSN applications, such as fault-tolerant CPSs. The proposed DSP algorithm establishes two separate paths between a source and a destination in a network based on the network topology information. These paths are node-disjoint and have optimal path distances. Unicast frames are delivered from the source to the destination through the dual paths, providing fault-tolerant communication and reducing redundant unicast traffic for the network. The DSP algorithm can be applied to wired and wireless networks, such as WSNs, to provide seamless fault-tolerant communication for mission-critical and life-critical applications such as fault-tolerant CPSs. Analysis and simulation results show that the DSP-based approach not only provides fault-tolerant communication, but also improves network traffic performance. For the case study in this paper, when the DSP algorithm was applied to high-availability seamless redundancy (HSR) networks, the proposed DSP-based approach reduced the network traffic by 80% to 88% compared with the standard HSR protocol, thus improving network traffic performance.
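
    The core step, deriving two node-disjoint paths from the topology information, can be sketched with breadth-first search: find one shortest path, then search again with that path's intermediate nodes removed. The Python fragment below is a simplified, greedy illustration of the idea on an invented topology, not the authors' DSP algorithm (a production implementation would use a disjoint-path algorithm such as Suurballe's, which is guaranteed to find a pair whenever one exists):

        from collections import deque

        def shortest_path(adj, src, dst, banned=frozenset()):
            """BFS shortest path in an unweighted graph, avoiding 'banned' nodes."""
            parent, queue = {src: None}, deque([src])
            while queue:
                node = queue.popleft()
                if node == dst:
                    path = []
                    while node is not None:
                        path.append(node)
                        node = parent[node]
                    return path[::-1]
                for nxt in adj[node]:
                    if nxt not in parent and nxt not in banned:
                        parent[nxt] = node
                        queue.append(nxt)
            return None

        def dual_separate_paths(adj, src, dst):
            """Greedy sketch: primary shortest path, then a node-disjoint backup."""
            primary = shortest_path(adj, src, dst)
            if primary is None:
                return None, None
            backup = shortest_path(adj, src, dst, banned=frozenset(primary[1:-1]))
            return primary, backup

        adj = {"S": ["A", "C"], "A": ["S", "B"], "B": ["A", "D"],
               "C": ["S", "D"], "D": ["B", "C"]}
        print(dual_separate_paths(adj, "S", "D"))  # (['S', 'C', 'D'], ['S', 'A', 'B', 'D'])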

  18. Neotectonics of Asia: Thin-shell finite-element models with faults

    NASA Technical Reports Server (NTRS)

    Kong, Xianghong; Bird, Peter

    1994-01-01

    As India pushed into and beneath the south margin of Asia in Cenozoic time, it added a great volume of crust, which may have been (1) emplaced locally beneath Tibet, (2) distributed as regional crustal thickening of Asia, (3) converted to mantle eclogite by high-pressure metamorphism, or (4) extruded eastward to increase the area of Asia. The amount of eastward extrusion is especially controversial: plane-stress computer models of finite strain in a continuum lithosphere show minimal escape, while laboratory and theoretical plane-strain models of finite strain in a faulted lithosphere show escape as the dominant mode. We suggest computing the present (or neo)tectonics by use of the known fault network and available data on fault activity, geodesy, and stress to select the best model. We apply a new thin-shell method which can represent a faulted lithosphere of realistic rheology on a sphere, and provides predictions of present velocities, fault slip rates, and stresses for various trial rheologies and boundary conditions. To minimize artificial boundaries, the models include all of Asia east of 40 deg E and span 100 deg on the globe. The primary unknowns are the friction coefficient of faults within Asia and the amounts of shear traction applied to Asia in the Himalayan and oceanic subduction zones at its margins. Data on Quaternary fault activity prove to be most useful in rating the models. Best results are obtained with a very low fault friction of 0.085. This major heterogeneity shows that unfaulted continuum models cannot be expected to give accurate simulations of the orogeny. But, even with such weak faults, only a fraction of the internal deformation is expressed as fault slip; this means that rigid microplate models cannot represent the kinematics either. A universal feature of the better models is that eastern China and southeast Asia flow rapidly eastward with respect to Siberia. The rate of escape is very sensitive to the level of shear traction in the

  19. Relationship between displacement and gravity change of Uemachi faults and surrounding faults of Osaka basin, Southwest Japan

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Kusumoto, S.; Itoh, Y.; Takemura, K.

    2011-12-01

    The Osaka basin, surrounded by the Rokko and Ikoma Ranges, is one of the typical Quaternary sedimentary basins in Japan. The Osaka basin has been filled by the Pleistocene Osaka Group and later sediments. Several large cities and metropolitan areas, such as Osaka and Kobe, are located in the Osaka basin. The basin is surrounded by E-W trending strike-slip faults and N-S trending reverse faults. The N-S trending, 42-km-long Uemachi faults traverse the central part of Osaka city. The Uemachi faults have been investigated for countermeasures against earthquake disaster. It is important to reveal detailed fault parameters, such as length, dip, recurrence interval, and so on, for strong ground motion simulation and disaster prevention. For strong ground motion simulation, the fault model of the Uemachi faults consists of two parts, a northern and a southern part, because there is no basement displacement in the central part of the faults. The Ministry of Education, Culture, Sports, Science and Technology started a project to survey the Uemachi faults, and the Disaster Prevention Institute of Kyoto University carried out various surveys over the three years from 2009 to 2012. The results of the last year revealed higher activity on a branch fault than on the main faults in the central part (see the poster "Subsurface Flexure of Uemachi Fault, Japan" by Kitada et al., this meeting). Kusumoto et al. (2001) reported that, based on a dislocation model, the surrounding faults can form a similar basement relief even without the Uemachi faults. We performed various parameter studies of dislocation and gravity change based on a simplified fault model, which was designed from the distribution of the real faults. The model consisted of 7 faults, including the Uemachi faults. The dislocation and gravity change were calculated following Okada et al. (1985) and Okubo et al. (1993), respectively. The results show a basement displacement pattern similar to the

  20. Variable modes of rifting in the eastern Basin and Range, USA from on-fault geological evidence

    NASA Astrophysics Data System (ADS)

    Stahl, T.; Niemi, N. A.

    2017-12-01

    Continental rifts are often divided along their axes into magmatic (or magma-assisted) and amagmatic (or magma-poor) segments. Less is known about magmatic versus non-magmatic extension across `wide' continental rift margins like the Basin and Range province of the USA. Paleoseismic trench investigations, Quaternary geochronology (10Be and 3He exposure-age, luminescence, and 40Ar/39Ar dating), and high-resolution topographic surveys (terrestrial laser scanning and UAV photogrammetry) were used to assess the timing and spatial variability of faulting at the Basin and Range-Colorado Plateau transition zone in central Utah. Results show that while the majority of strain is accommodated by a single, range- and province-bounding fault (the Wasatch fault zone, WFZ, slip rate of c. 3-4 mm/yr), a transition to magma-assisted rifting occurs near the WFZ southern termination, marked by a diffuse zone of faults associated with Pliocene to Holocene volcanism. Paleoseismic analysis of faults within and adjacent to this zone reveals recent (<18 ka) surface ruptures on these faults. A single event displacement of 10-15 m for the Tabernacle fault at c. 15-18 ka (3He exposure-age) and large fault displacement gradients imply that slip was coeval with lava emplacement and that the faults in this region are linked, at least in part, to dike injection in the uppermost crust rather than slip at seismogenic depths. These results have implications for the controversial nature of regional seismic hazard and the structural evolution of the eastern Basin and Range.

  1. 3D Dynamic Rupture Simulations along the Wasatch Fault, Utah, Incorporating Rough-fault Topography

    NASA Astrophysics Data System (ADS)

    Withers, Kyle; Moschetti, Morgan

    2017-04-01

    Studies have found that the Wasatch Fault has experienced successive large magnitude (>Mw 7.2) earthquakes, with an average recurrence interval near 350 years. To date, no large magnitude event has been recorded along the fault, with the last rupture along the Salt Lake City segment occurring 1300 years ago. Because of this, as well as the lack of strong ground motion records in basins and from normal-faulting earthquakes worldwide, seismic hazard in the region is not well constrained. Previous numerical simulations have modeled deterministic ground motion in the heavily populated regions of Utah, near Salt Lake City, but were primarily restricted to low frequencies (~1 Hz). Our goal is to better assess broadband ground motions from the Wasatch Fault Zone. Here, we extend deterministic ground motion prediction to higher frequencies (~5 Hz) in this region by using physics-based spontaneous dynamic rupture simulations along a normal fault with characteristics derived from geologic observations. We use a summation-by-parts finite difference code (Waveqlab3D) with rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) and include off-fault plasticity to simulate ruptures > Mw 6.5. Geometric complexity along fault planes has previously been shown to generate broadband sources with spectral energy matching that of observations. We investigate the impact of varying the hypocenter location, as well as the influence that multiple realizations of rough-fault topography have on the rupture process and resulting ground motion. We utilize Waveqlab3D's computational efficiency to model wave propagation to a significant distance from the fault with media heterogeneity at both long and short spatial wavelengths. These simulations generate a synthetic dataset of ground motions to compare with GMPEs, in terms of both the median and the interevent and intraevent variability.
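
    The self-similar rough-fault geometry mentioned above can be generated by spectral synthesis: draw random Fourier phases, impose a power-law amplitude spectrum, and transform back to a spatial profile. The Python fragment below is a minimal one-dimensional sketch of that procedure; the Hurst exponent, grid spacing, and roughness level are placeholder values, not the parameters used in the simulations:

        import numpy as np

        def self_similar_profile(n=1024, dx=25.0, hurst=1.0, rms_slope=0.01, seed=0):
            """1-D self-similar (fractal) fault profile via spectral synthesis."""
            rng = np.random.default_rng(seed)
            k = np.fft.rfftfreq(n, d=dx)                # spatial wavenumbers
            amp = np.zeros_like(k)
            amp[1:] = k[1:] ** (-(hurst + 0.5))         # power-law amplitude spectrum
            phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
            profile = np.fft.irfft(amp * np.exp(1j * phase), n=n)
            # Rescale so the profile has the requested RMS-slope (roughness) level.
            profile *= rms_slope / np.std(np.gradient(profile, dx))
            return profile

        h = self_similar_profile()
        print(h.shape, float(np.std(np.gradient(h, 25.0))))  # (1024,) and 0.01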

  2. Research on Robustness of Tree-based P2P Streaming

    NASA Astrophysics Data System (ADS)

    Chu, Chen; Yan, Jinyao; Ding, Kuangzheng; Wang, Xi

    Research on P2P streaming media is a hot topic in Internet technology, and P2P streaming has emerged as a promising technique. This paradigm brings a number of unique advantages such as scalability, resilience, and effectiveness in coping with dynamics and heterogeneity. However, there are also many problems in P2P streaming systems that use a traditional tree-based topology, such as the bandwidth limits between parent and child nodes; moreover, a node's joining or leaving has a great effect on the robustness of the tree-based topology. This paper introduces a method of measuring the robustness of a tree-based topology: using network measurement, we observe and record the bandwidth between all the nodes, analyse the correlation between all the sibling flows, and thereby measure the robustness of the tree-based topology (see the sketch below). The results show that in a tree-based topology, different links that have similar routing paths share a bandwidth bottleneck, which reduces the robustness of the tree-based topology.
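
    As a toy illustration of the measurement idea (the flow names and bandwidth values below are hypothetical), one can compute pairwise correlations between the bandwidth traces of sibling flows; strongly correlated siblings suggest a shared bottleneck on overlapping routing paths.

    ```python
    import itertools
    import numpy as np

    # Hypothetical bandwidth traces (kbit/s) for sibling flows, i.e. the flows
    # from one parent node to each of its children in the streaming tree.
    sibling_flows = {
        "child_A": np.array([820, 790, 760, 700, 680, 720]),
        "child_B": np.array([810, 800, 750, 690, 670, 730]),
        "child_C": np.array([400, 410, 405, 395, 398, 402]),
    }

    # Pairwise Pearson correlation between sibling flows: highly correlated
    # siblings likely share a bandwidth bottleneck, which weakens the tree.
    for (a, ra), (b, rb) in itertools.combinations(sibling_flows.items(), 2):
        r = np.corrcoef(ra, rb)[0, 1]
        print(f"{a} vs {b}: correlation = {r:.2f}")
    ```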

  3. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the basis for future regulations about the related hazard, similarly to other countries (e.g. the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e. faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land-use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria to assess fault capability for the Italian territory. As for the chronology, a review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, detailed analysis of the large number of works dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g. fault plane exposure) or on indirect investigations (geophysical data) are not sufficient, or are even unreliable, for defining the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. A test area for which active and capable faults can first be mapped based on such a classical but still effective methodological approach could be the central Apennines. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time

  4. ReprOlive: a database with linked data for the olive tree (Olea europaea L.) reproductive transcriptome

    PubMed Central

    Carmona, Rosario; Zafra, Adoración; Seoane, Pedro; Castro, Antonio J.; Guerrero-Fernández, Darío; Castillo-Castillo, Trinidad; Medina-García, Ana; Cánovas, Francisco M.; Aldana-Montes, José F.; Navas-Delgado, Ismael; Alché, Juan de Dios; Claros, M. Gonzalo

    2015-01-01

    Plant reproductive transcriptomes have been analyzed in different species due to the agronomical and biotechnological importance of plant reproduction. Here we present an olive tree reproductive transcriptome database with samples from pollen and pistil at different developmental stages, and leaf and root as control vegetative tissues (http://reprolive.eez.csic.es). It was developed from 2,077,309 raw reads and 1,549 Sanger sequences. Using a pre-defined workflow based on open-source tools, sequences were pre-processed, assembled, mapped, and annotated with expression data, descriptions, GO terms, InterPro signatures, EC numbers, KEGG pathways, ORFs, and SSRs. Tentative transcripts (TTs) were also annotated with the corresponding orthologs in Arabidopsis thaliana from the TAIR and RefSeq databases to enable Linked Data integration. The result is a reproductive transcriptome comprising 72,846 contigs with an average length of 686 bp, of which 63,965 (87.8%) included at least one functional annotation and 55,356 (75.9%) had an ortholog. A minimum of 23,568 different TTs was identified, 5,835 of which contain a complete ORF. The representative reproductive transcriptome can be reduced to 28,972 TTs for further gene expression studies. Partial transcriptomes from pollen, pistil, and vegetative control tissues were also constructed. ReprOlive provides free access to and download of these results. Retrieval mechanisms for sequences and transcript annotations are provided. Graphical localization of annotated enzymes onto KEGG pathways is also possible. Finally, ReprOlive includes a semantic conceptualisation by means of a Resource Description Framework (RDF), allowing a Linked Data search to extract the most up-to-date information related to enzymes, interactions, allergens, structures, and reactive oxygen species. PMID:26322066

  5. Deformation rates across the San Andreas Fault system, central California determined by geology and geodesy

    NASA Astrophysics Data System (ADS)

    Titus, Sarah J.

    The San Andreas fault system is a transpressional plate boundary characterized by sub-parallel dextral strike-slip faults separating internally deformed crustal blocks in central California. Both geodetic and geologic tools were used to understand the short- and long-term partitioning of deformation in both the crust and the lithospheric mantle across the plate boundary system. GPS data indicate that the short-term discrete deformation rate is ~28 mm/yr for the central creeping segment of the San Andreas fault and increases to 33 mm/yr at ±35 km from the fault. This gradient in deformation rates is interpreted to reflect elastic locking of the creeping segment at depth, distributed off-fault deformation, or some combination of these two mechanisms. These short-term fault-parallel deformation rates are slower than the expected geologic slip rate and the relative plate motion rate. Structural analysis of folds and transpressional kinematic modeling were used to quantify long-term distributed deformation adjacent to the Rinconada fault. Folding accommodates approximately 5 km of wrench deformation, which translates to a deformation rate of ~1 mm/yr since the start of the Pliocene. Integration with discrete offset on the Rinconada fault indicates that this portion of the San Andreas fault system is approximately 80% strike-slip partitioned. This kinematic fold model can be applied to the entire San Andreas fault system and may explain some of the across-fault gradient in deformation rates recorded by the geodetic data. Petrologic examination of mantle xenoliths from the Coyote Lake basalt near the Calaveras fault was used to link crustal plate boundary deformation at the surface with models for the accommodation of deformation in the lithospheric mantle. Seismic anisotropy calculations based on xenolith petrofabrics suggest that an anisotropic mantle layer thickness of 35-85 km is required to explain the observed shear wave splitting delay times in central
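
    The across-fault velocity gradient attributed to elastic locking is commonly interpreted with the classic screw-dislocation (Savage-Burford) model, v(x) = (s/π) arctan(x/D). The sketch below simply evaluates that formula with illustrative numbers (the deep slip rate and locking depth are assumptions); it is not the study's model or inversion.

    ```python
    import numpy as np

    def fault_parallel_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
        """Savage-Burford screw dislocation: v(x) = (s / pi) * arctan(x / D)."""
        return (slip_rate_mm_yr / np.pi) * np.arctan(np.asarray(x_km) / locking_depth_km)

    # Illustrative values only: deep slip rate and locking depth are assumptions.
    x = np.array([-35.0, -10.0, 0.0, 10.0, 35.0])   # distance from the fault (km)
    v = fault_parallel_velocity(x, slip_rate_mm_yr=33.0, locking_depth_km=10.0)
    for xi, vi in zip(x, v):
        print(f"x = {xi:6.1f} km   v = {vi:6.1f} mm/yr")  # across-fault velocity profile
    ```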

  6. Fault-tolerant processing system

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L. (Inventor)

    1996-01-01

    A fault-tolerant, fiber optic interconnect, or backplane, which serves as a via for data transfer between modules. Fault tolerance algorithms are embedded in the backplane by dividing the backplane into a read bus and a write bus and placing a redundancy management unit (RMU) between the read bus and the write bus so that all data transmitted by the write bus is subjected to the fault tolerance algorithms before the data is passed for distribution to the read bus. The RMU provides both backplane control and fault tolerance.
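
    The patent abstract does not spell out which fault-tolerance algorithms are embedded in the RMU, but majority voting over redundant copies of each transmitted word is one representative example. The Python sketch below is purely illustrative of that idea, not a description of the patented design.

    ```python
    from collections import Counter

    def redundancy_management_unit(replicated_words):
        """Majority-vote over redundant copies of a word taken from the write bus.

        Returns the majority value, or None if no majority exists. This is only
        a representative fault-tolerance algorithm; the patent does not specify
        the RMU's actual algorithms.
        """
        value, votes = Counter(replicated_words).most_common(1)[0]
        return value if votes > len(replicated_words) // 2 else None

    # Three redundant modules write the same word; one copy arrives corrupted.
    write_bus = [0xA5, 0xA5, 0x5A]
    print(hex(redundancy_management_unit(write_bus)))  # 0xa5 is passed to the read bus
    ```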

  7. Surface fault rupture during the Mw 7.8 Kaikoura earthquake, New Zealand, with specific comment on the Kekerengu Fault - one of the country's fastest slipping onland active faults

    NASA Astrophysics Data System (ADS)

    Van Dissen, Russ; Little, Tim

    2017-04-01

    The Mw 7.8 Kaikoura earthquake of 14 November 2016 (NZDT) was a complex event. It involved ground-surface (or seafloor) fault rupture on at least a dozen onland or offshore faults, and subsurface rupture on a handful of additional faults. Most of the surface ruptures involved previously known (or suspected) active faults, but surface rupture also occurred on at least two hitherto unrecognised active faults. The southwest to northeast extent of surface fault rupture, as generalised by two straight-line segments, is approximately 180 km, though this is a minimum for the collective length of surface rupture due to multiple overlapping faults with various orientations. Surface rupture displacements on specific faults involved in the Kaikoura earthquake span approximately two orders of magnitude. For example, maximum surface displacement on the Heaver's Creek Fault is cm- to dm-scale in size; whereas, maximum surface displacement on the nearby Kekerengu Fault is approximately 10-12 m (predominantly in a dextral sense). The Kekerengu Fault has a Late Pleistocene slip rate of 20-26 mm/yr, and is possibly the second fastest slipping onland fault in New Zealand, behind the Alpine Fault. Located in the northeastern South Island of New Zealand, the Kekerengu Fault - along with the Hope Fault to the southwest and the Needles Fault offshore to the northeast - comprises the fastest slipping elements of the Pacific-Australian plate boundary in this part of the country. In January 2016 (about ten months prior to the Kaikoura earthquake) three paleo-earthquake investigation trenches were excavated across pronounced traces of the Kekerengu Fault at two locations. These were the first such trenches dug and evaluated across the fault. All three trenches displayed abundant evidence of past surface fault ruptures (three surface ruptures in the last approximately 1,200 years, four now including the 2016 rupture). An interesting aspect of the 2016 rupture is that two of the trenches

  8. How Do Normal Faults Grow?

    NASA Astrophysics Data System (ADS)

    Jackson, C. A. L.; Bell, R. E.; Rotevatn, A.; Tvedt, A. B. M.

    2015-12-01

    Normal faulting accommodates stretching of the Earth's crust and is one of the fundamental controls on landscape evolution and sediment dispersal in rift basins. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, so assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain

  9. Mechanics of slip and fracture along small faults and simple strike-slip fault zones in granitic rock

    NASA Astrophysics Data System (ADS)

    Martel, Stephen J.; Pollard, David D.

    1989-07-01

    We exploit quasi-static fracture mechanics models for slip along pre-existing faults to account for the fracture structure observed along small exhumed faults and small segmented fault zones in the Mount Abbot quadrangle of California and to estimate stress drop and shear fracture energy from geological field measurements. Along small strike-slip faults, cracks that splay from the faults are common only near fault ends. In contrast, many cracks splay from the boundary faults at the edges of a simple fault zone. Except near segment ends, the cracks preferentially splay into a zone. We infer that shear displacement discontinuities (slip patches) along a small fault propagated to near the fault ends and caused fracturing there. Based on elastic stress analyses, we suggest that slip on one boundary fault triggered slip on the adjacent boundary fault, and that the subsequent interaction of the slip patches preferentially led to the generation of fractures that splayed into the zones away from segment ends and out of the zones near segment ends. We estimate the average stress drops for slip events along the fault zones as ~1 MPa and the shear fracture energy release rate during slip as 5 × 10² to 2 × 10⁴ J/m². This estimate is similar to those obtained from shear fracture of laboratory samples, but orders of magnitude less than those for large fault zones. These results suggest that the shear fracture energy release rate increases as the structural complexity of fault zones increases.

  10. How do normal faults grow?

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette

    2016-04-01

    Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, so assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate

  11. Late Holocene earthquakes on the Toe Jam Hill fault, Seattle fault zone, Bainbridge Island, Washington

    USGS Publications Warehouse

    Nelson, A.R.; Johnson, S.Y.; Kelsey, H.M.; Wells, R.E.; Sherrod, B.L.; Pezzopane, S.K.; Bradley, L.A.; Koehler, R. D.; Bucknam, R.C.

    2003-01-01

    Five trenches across a Holocene fault scarp yield the first radiocarbon-measured earthquake recurrence intervals for a crustal fault in western Washington. The scarp, the first to be revealed by laser imagery, marks the Toe Jam Hill fault, a north-dipping backthrust to the Seattle fault. Folded and faulted strata, liquefaction features, and forest soil A horizons buried by hanging-wall-collapse colluvium record three, or possibly four, earthquakes between 2500 and 1000 yr ago. The most recent earthquake is probably the 1050-1020 cal. (calibrated) yr B.P. (A.D. 900-930) earthquake that raised marine terraces and triggered a tsunami in Puget Sound. Vertical deformation estimated from stratigraphic and surface offsets at trench sites suggests late Holocene earthquake magnitudes near M7, corresponding to surface ruptures >36 km long. Deformation features recording poorly understood latest Pleistocene earthquakes suggest that they were smaller than late Holocene earthquakes. Postglacial earthquake recurrence intervals based on 97 radiocarbon ages, most on detrital charcoal, range from ~12,000 yr to as little as a century or less; corresponding fault-slip rates are 0.2 mm/yr for the past 16,000 yr and 2 mm/yr for the past 2500 yr. Because the Toe Jam Hill fault is a backthrust to the Seattle fault, it may not have ruptured during every earthquake on the Seattle fault. But the earthquake history of the Toe Jam Hill fault is at least a partial proxy for the history of the rest of the Seattle fault zone.

  12. A method of real-time fault diagnosis for power transformers based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie

    2015-11-01

    In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership to each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.
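
    As a rough sketch of the probability-output idea (not the paper's feature extraction or its binary-decision-tree arrangement of classifiers), an SVM with probability estimates can be trained on vibration feature vectors and report posterior class probabilities for a new sample. The data, class labels, and parameters below are synthetic assumptions.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-ins for feature vectors extracted from transformer
    # vibration signals (e.g. band energies); labels are hypothetical states.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, size=(40, 6)),    # "healthy" samples
                   rng.normal(1.5, 1.0, size=(40, 6))])   # "faulty" samples
    y = np.array([0] * 40 + [1] * 40)

    # SVM with probability outputs, echoing the model's posterior class
    # probabilities for a tested vibration sample.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, y)

    x_test = rng.normal(1.4, 1.0, size=(1, 6))   # a new vibration sample
    print(clf.predict_proba(x_test))             # P(healthy), P(faulty)
    ```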

  13. Spatial and temporal patterns of fault creep across an active salt system, Canyonlands National Park, Utah

    NASA Astrophysics Data System (ADS)

    Kravitz, K.; Mueller, K. J.; Furuya, M.; Tiampo, K. F.

    2017-12-01

    First order conditions that control creeping behavior on faults include the strength of faulted materials, fault maturity and stress changes associated with seismic cycles. We present mapping of surface strain from differential interferometric synthetic aperture radar (DInSAR) of actively creeping faults in Eastern Utah that form by reactivation of older joints and faults. A nine-year record of displacement across the region using descending ERS scenes from 1992-2001 suggests maximum slip rates of 1 mm/yr. Time series analysis shows near steady rates across the region, consistent with the proposed ultra-weak nature of these faults, as suggested by their dilational behavior inferred from sinkholes, pit chains and recently opened fissures along their lengths. Slip rates along the faults in the main part of the array are systematically faster with closer proximity to the Colorado River Canyon, consistent with mechanical modeling of the boundary conditions that control the overall salt system. Deeply incised side tributaries coincide with and control the edges of the region with higher strain rates. Comparison of D:L scaling at decadal scales in fault-bounded grabens (as defined by InSAR) with previous measurements of total slip (D) to length (L) is interpreted to suggest that faults reached nearly their current lengths relatively quickly (i.e. displaying low displacement-to-length scaling). We argue this may then have been followed by along-strike slip distributions in which the centers of the grabens slip more rapidly than their endpoints, resulting in a higher D:L ratio over time. InSAR mapping also points to an increase in creep rates in overlap zones where two faults became hard-linked at breached relay ramps. Additionally, we see evidence for soft-linkage, where displacement profiles along a graben coincide with obvious fault segments. While an endmember case (ultra-weak faults sliding above a plastic substrate), structures in this region highlight mechanical

  14. Subaqueous tectonic geomorphology along a 400 km stretch of the Queen Charlotte-Fairweather Fault System, southeastern Alaska

    NASA Astrophysics Data System (ADS)

    Brothers, D. S.; Ten Brink, U. S.; Andrews, B. D.; Kluesner, J.; Haeussler, P. J.; Watt, J. T.; Dartnell, P.; Miller, N. C.; Conrad, J. E.; East, A. E.; Maier, K. L.; Balster-Gee, A.; Ebuna, D. R.

    2016-12-01

    Seismic and geodetic monitoring of active fault systems does not typically extend beyond one seismic cycle, hence it is challenging to link the characteristics of individual earthquakes with long-term fault behavior. A compelling place to examine such linkages is the right-lateral Queen Charlotte-Fairweather Fault (QCFF), a 1200 km dextral strike-slip fault offshore southeastern Alaska and western British Columbia. The QCFF defines the North America-Pacific transform plate boundary and has experienced at least eight M>7 earthquakes in the last 130 years. During 2015-2016, the USGS conducted four high-resolution marine geophysical surveys (multibeam bathymetry, sparker multichannel seismic and Chirp) along a 400-km-long section of the QCFF from Icy Point to Noyes Canyon. The QCFF displays a nearly linear and continuous fault trace from Icy Point to the southern tip of Baranof Island, a distance of 315 km. Subtle changes in fault strike, particularly along the 200 km section of the fault south of Sitka Sound, are associated with pull-apart basins and compressional pop-up structures. Bathymetric imagery provides stunning views of strike-slip fault morphology along the continental shelf-edge and slope, including linear fault valleys and knife-edge lateral offset of submarine canyons, gullies, and ridges. We also observe pervasive evidence for small-scale (<1 km^2) submarine landslides along the margin and propose that they were seismically triggered. The glacially scoured southern wall of the Yakobi Sea Valley, formed 17 ka, is offset 925±25 m by the QCFF, providing a late Pleistocene-present slip-rate estimate of approximately 54 mm/yr. This suggests nearly the entire plate boundary motion is localized to a single, relatively narrow fault zone. We also constructed and analyzed a catalog of lateral piercing points along the fault to better understand long-term fault behavior, particularly along segments that have generated large historical earthquakes.
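
    The quoted slip rate follows directly from the offset and the age of the glacially scoured surface. A minimal check of that arithmetic (ignoring any age uncertainty, which the abstract does not quote) is sketched below.

    ```python
    offset_m, offset_err_m = 925.0, 25.0   # lateral offset of the Yakobi Sea Valley wall
    age_yr = 17_000.0                      # age of the glacially scoured surface (17 ka)

    rate_mm_yr = offset_m / age_yr * 1000.0
    rate_err_mm_yr = offset_err_m / age_yr * 1000.0
    print(f"slip rate ~ {rate_mm_yr:.1f} +/- {rate_err_mm_yr:.1f} mm/yr")  # ~54.4 +/- 1.5
    ```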

  15. Fault Model Development for Fault Tolerant VLSI Design

    DTIC Science & Technology

    1988-05-01

    BRIDGING FAULTS. A bridging fault in a digital circuit connects two or more conducting paths of the circuit. The resistance...

  16. A fault-based model for crustal deformation, fault slip-rates and off-fault strain rate in California

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2016-01-01

    We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19-22 mm/yr along Santa Cruz to the North Coast, 25-28 mm/yr along the central California creeping segment to the Carrizo Plain, 20-22 mm/yr along the Mojave, and 20-24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7-16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7-9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2-6.9 mm/yr across its east-west transects, which is ~1 mm/yr higher than the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1-5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88×10¹⁹ N·m/yr, with fast straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10¹⁹
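
    The inversion described is a geologically constrained least-squares fit of GPS velocities using buried elastic dislocations. The toy sketch below shows only the generic structure of such a constrained least-squares problem; the Green's functions, data, weights, and geologic rates are invented for illustration and bear no relation to the UCERF3 model.

    ```python
    import numpy as np

    # Toy system: invert surface velocities d for slip rates m on two faults,
    # d = G m, with geologic slip-rate estimates appended as damped constraints.
    G = np.array([[0.8, 0.1],
                  [0.5, 0.3],
                  [0.2, 0.7]])              # velocity per unit slip at 3 GPS sites
    d = np.array([18.0, 14.0, 12.0])        # observed GPS velocities (mm/yr)

    geologic_rates = np.array([20.0, 10.0]) # preferred geologic rates (mm/yr)
    w = 0.1                                 # weight given to the geologic constraints

    A = np.vstack([G, w * np.eye(2)])       # stacked least-squares system
    b = np.concatenate([d, w * geologic_rates])
    slip_rates, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(slip_rates)                       # estimated slip rates (mm/yr)
    ```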

  17. Upper air teleconnections to Ob River flows and tree rings

    NASA Astrophysics Data System (ADS)

    Meko, David; Panyushkina, Irina; Agafonov, Leonid

    2015-04-01

    The Ob River, one of the world's greatest rivers, with a catchment basin about the size of Western Europe, contributes 12% or more of the annual freshwater inflow to the Arctic Ocean. The input of heat and fresh water is important to the global climate system through effects on sea ice, salinity, and the thermohaline circulation of the ocean. As part of a tree-ring project to obtain multi-century long information on variability of Ob River flows, a network of 18 sites of Pinus, Larix, Populus and Salix was collected along the Ob in the summers of 2013 and 2014. Analysis of collections processed so far indicates a significant relationship of tree-growth to river discharge. Moderation of the floodplain air temperature regime by flooding appears to be an important driver of the tree-ring response. In unraveling the relationship of tree-growth to river flows, it is important to identify atmospheric circulation features directly linked to observed time series variations of flow and tree growth. In this study we examine statistical links between primary teleconnection modes of Northern Hemisphere upper-air (500 mb) circulation, Ob River flow, and tree-ring chronologies. Annual discharge at the mouth of the Ob River is found to be significantly positively related to the phase of the East Atlantic (EA) pattern, the second prominent mode of low-frequency variability over the North Atlantic. The EA pattern, consisting of a north-south dipole of pressure-anomaly centers spanning the North Atlantic from east to west, is associated with a low-pressure anomaly centered over the Ob River Basin, and with a pattern of positive precipitation anomaly over the same region. The positive correlation of discharge and EA is consistent with these known patterns, and is contrasted with generally negative (though smaller) correlations between EA and tree-ring chronologies. The signs of correlations are consistent with a conceptual model of river influence on tree growth through air

  18. Continental deformation accommodated by non-rigid passive bookshelf faulting: An example from the Cenozoic tectonic development of northern Tibet

    NASA Astrophysics Data System (ADS)

    Zuza, Andrew V.; Yin, An

    2016-05-01

    Collision-induced continental deformation commonly involves complex interactions between strike-slip faulting and off-fault deformation, yet this relationship has rarely been quantified. In northern Tibet, Cenozoic deformation is expressed by the development of the >1000-km-long east-striking left-slip Kunlun, Qinling, and Haiyuan faults. Each has a maximum slip in its central segment exceeding tens of kilometres to ~100 km, but a much smaller slip magnitude (≲10% of the maximum slip) at its terminations. The along-strike variation of fault offsets and pervasive off-fault deformation create a strain pattern that departs from the expectations of the classic plate-like rigid-body motion and flow-like distributed deformation end-member models for continental tectonics. Here we propose a non-rigid bookshelf-fault model for the Cenozoic tectonic development of northern Tibet. Our model, quantitatively relating discrete left-slip faulting to distributed off-fault deformation during regional clockwise rotation, explains several puzzling features, including: (1) the clockwise rotation of east-striking left-slip faults against the northeast-striking left-slip Altyn Tagh fault along the northwestern margin of the Tibetan Plateau; (2) alternating fault-parallel extension and shortening in the off-fault regions; and (3) the eastward-tapering map-view geometries of the Qimen Tagh, Qaidam, and Qilian Shan thrust belts that link with the three major left-slip faults in northern Tibet. We refer to this specific non-rigid bookshelf-fault system as a passive bookshelf-fault system because the rotating bookshelf panels are detached from the rigid bounding domains. As a consequence, the wallrock of the strike-slip faults deforms to accommodate both the clockwise rotation of the left-slip faults and off-fault strain that arises at the fault ends. An important implication of our model is that the style and magnitude of Cenozoic deformation in northern Tibet vary considerably in the east

  19. Active faulting on the Wallula fault zone within the Olympic-Wallowa lineament, Washington State, USA

    USGS Publications Warehouse

    Sherrod, Brian; Blakely, Richard J.; Lasher, John P.; Lamb, Andrew P.; Mahan, Shannon; Foit, Franklin F.; Barnett, Elizabeth

    2016-01-01

    The Wallula fault zone is an integral feature of the Olympic-Wallowa lineament, an ∼500-km-long topographic lineament oblique to the Cascadia plate boundary, extending from Vancouver Island, British Columbia, to Walla Walla, Washington. The structure and past earthquake activity of the Wallula fault zone are important because of nearby infrastructure, and also because the fault zone defines part of the Olympic-Wallowa lineament in south-central Washington and suggests that the Olympic-Wallowa lineament may have a structural origin. We used aeromagnetic and ground magnetic data to locate the trace of the Wallula fault zone in the subsurface and map a quarry exposure of the Wallula fault zone near Finley, Washington, to investigate past earthquakes along the fault. We mapped three main packages of rocks and unconsolidated sediments in an ∼10-m-high quarry exposure. Our mapping suggests at least three late Pleistocene earthquakes with surface rupture, and an episode of liquefaction in the Holocene along the Wallula fault zone. Faint striae on the master fault surface are subhorizontal and suggest reverse dextral oblique motion for these earthquakes, consistent with dextral offset on the Wallula fault zone inferred from offset aeromagnetic anomalies associated with ca. 8.5 Ma basalt dikes. Magnetic surveys show that the Wallula fault actually lies 350 m to the southwest of the trace shown on published maps, passes directly through deformed late Pleistocene or younger deposits exposed at Finley quarry, and extends uninterrupted over 120 km.

  20. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
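
    As a much-simplified illustration of why decoupling faults from residuals helps (this is not the paper's event-based multiple-fault framework), one can compare qualitative fault signatures under a single global model, where several faults can share a signature, with signatures from Possible-Conflict-style residuals that respond only to subsets of faults. All signature values below are invented.

    ```python
    from itertools import combinations

    # Invented qualitative fault signatures: the effect (+, -, 0) each fault has
    # on each residual. With a single global model several faults can share a
    # signature; residuals built from Possible Conflicts respond only to subsets
    # of faults, which separates the signatures.
    global_model = {
        "f1": ("+", "+", "+"),
        "f2": ("+", "+", "+"),   # indistinguishable from f1 under the global model
        "f3": ("-", "+", "0"),
    }
    pc_model = {
        "f1": ("+", "0", "0"),
        "f2": ("0", "+", "0"),
        "f3": ("0", "0", "-"),
    }

    def distinguishable_pairs(signatures):
        """Return the fault pairs whose signatures differ (hence are isolable)."""
        return [(a, b) for a, b in combinations(signatures, 2)
                if signatures[a] != signatures[b]]

    print("global model:", distinguishable_pairs(global_model))
    print("possible conflicts:", distinguishable_pairs(pc_model))
    ```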