Reconfigurable tree architectures using subtree oriented fault tolerance
NASA Technical Reports Server (NTRS)
Lowrie, Matthew B.
1987-01-01
An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures, for both single-chip and multichip tree implementations, while reducing redundancy in terms of both spare processors and links. VLSI layout is O(n) for binary trees, and the approach is directly extensible to N-ary trees and to fault tolerance through performance degradation.
Fault diagnosis of power transformer based on fault-tree analysis (FTA)
NASA Astrophysics Data System (ADS)
Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu
2017-05-01
Power transformers are important equipment in power plants and substations, forming a key hub of the link between power distribution and transmission. Their performance directly affects the quality, reliability, and stability of the power system. This paper classifies power transformer faults into five parts according to fault type, then divides transformer faults into three stages along the time dimension; DGA routine analysis and infrared diagnostic criteria are used to assess the running state of the power transformer. Finally, according to the needs of power transformer fault diagnosis, a dendritic fault tree for the power transformer is constructed by stepwise refinement from the general to the particular.
Locating hardware faults in a data communications network of a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-01-12
Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running a same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
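The localization rule in this record is concrete enough to sketch: if the test suite fails on a parent's test tree but passes on every child's test tree, a link out of that parent is implicated. The tree shape, node names, and `run_tests` fixture below are invented for illustration, not taken from the patent.

```python
def locate_faulty_parent(tree, run_tests):
    """Apply the rule: parent's suite fails but every child subtree passes."""
    suspects = []
    for parent, children in tree.items():
        if not run_tests(parent) and all(run_tests(c) for c in children):
            suspects.append(parent)
    return suspects

# Toy fixture (assumed, not from the patent): the link 1 -> 3 is defective.
TREE = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}
BROKEN_LINKS = {(1, 3)}

def run_tests(root):
    """A subtree's test suite fails iff it exercises a broken link."""
    stack = [root]
    while stack:
        node = stack.pop()
        for child in TREE.get(node, []):
            if (node, child) in BROKEN_LINKS:
                return False
            stack.append(child)
    return True
```

Running `locate_faulty_parent(TREE, run_tests)` flags node 1: its own test tree exercises the broken link, while the subtrees rooted at its children do not.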
Using minimal spanning trees to compare the reliability of network topologies
NASA Technical Reports Server (NTRS)
Leister, Karen J.; White, Allan L.; Hayhurst, Kelly J.
1990-01-01
Graph theoretic methods are applied to compute the reliability of several types of networks of moderate size. The graph theory methods used are minimal spanning trees for networks with bi-directional links and the related concept of strongly connected directed graphs for networks with uni-directional links. Ring networks and braided networks are compared, covering both the case where only the links fail and the case where both links and nodes fail. Two different failure modes for the links are considered: in one, the link no longer carries messages; in the other, the link delivers incorrect messages. Link-redundancy and path-redundancy are described and compared as methods for achieving reliability. All the computations are carried out by means of a fault tree program.
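The spanning-tree criterion above reduces to a connectivity check: a network with bi-directional links is operational iff the surviving links still contain a spanning tree. As a minimal sketch (exact enumeration, not the paper's fault tree program), here is the reliability of a small ring under independent link failures; the 4-node topology and failure probability are assumed for the demo.

```python
# Exact network reliability by enumerating link states; the network is up
# iff the surviving links keep the graph connected (i.e. contain a
# spanning tree). Feasible only for small link counts.
from itertools import product

def connected(nodes, up_links):
    adj = {n: set() for n in nodes}
    for a, b in up_links:
        adj[a].add(b); adj[b].add(a)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for m in adj[stack.pop()]:
            if m not in seen:
                seen.add(m); stack.append(m)
    return len(seen) == len(nodes)

def reliability(nodes, links, p_fail):
    """P(all nodes stay connected) when each link fails independently."""
    total = 0.0
    for state in product([True, False], repeat=len(links)):
        up = [l for l, ok in zip(links, state) if ok]
        prob = 1.0
        for ok in state:
            prob *= (1 - p_fail) if ok else p_fail
        if connected(nodes, up):
            total += prob
    return total

# A 4-node ring survives any single link failure but no pair of failures.
ring = reliability([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)], 0.1)
```

With link failure probability 0.1, the ring's reliability is 0.9^4 + 4(0.9^3)(0.1) = 0.9477; a braided network adds links precisely to raise this figure.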
Integrated Approach To Design And Analysis Of Systems
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1993-01-01
Object-oriented fault-tree representation unifies evaluation of reliability and diagnosis of faults. Programming/fault tree described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). Augmented fault tree object contains more information than fault tree object used in quantitative analysis of reliability. Additional information needed to diagnose faults in system represented by fault tree.
Chen, Gang; Song, Yongduan; Lewis, Frank L
2016-05-03
This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve the coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating simultaneously for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot arms are conducted and the results confirm the benefits and the effectiveness of the proposed distributed fault-tolerant control algorithms.
[The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].
Liu, Hongbin
2015-11-01
In this paper, the traditional fault tree analysis method is presented, and detailed instructions are given for the characteristics of its application in medical instrument maintenance. Significant changes are made when the traditional fault tree analysis method is introduced into medical instrument maintenance: the logic symbols, logic analysis, and calculation are given up, along with the complicated procedures, and only the visual and practical fault tree diagram is kept. The fault tree diagram itself also differs: the fault tree is no longer a logical tree but a thinking tree for troubleshooting, the definition of the fault tree's nodes is different, and the composition of the fault tree's branches is also different.
NASA Technical Reports Server (NTRS)
Lee, Charles; Alena, Richard L.; Robinson, Peter
2004-01-01
Starting from ISS fault tree examples, we present a method to convert fault trees to decision trees. The method shows that visualizing the root cause of a fault becomes easier and that tree manipulation becomes more programmatic via available decision tree programs. The decision tree visualization for diagnostics is straightforward and easy to understand. For ISS real-time fault diagnostics, the status of the systems can be shown by running the signals through the trees and seeing where they stop. Another advantage of decision trees is that they can learn fault patterns and predict future faults from historical data. The learning is not limited to static data sets: by accumulating real-time data, the decision trees can gain and store fault patterns and recognize them when they recur.
Automatic translation of digraph to fault-tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.
1992-01-01
The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
Fault-Tree Compiler (FTC) program is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forms basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
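The five gate types named in this record have standard probability semantics when basic events are statistically independent. The evaluator below is a minimal sketch of that computation, not the FTC tool itself; the tree encoding and event probabilities are assumptions, and EXCLUSIVE OR is taken as a two-input gate.

```python
# Top-event probability for a fault tree with AND, OR, EXCLUSIVE OR,
# INVERT, and M OF N gates, assuming independent basic events.
from itertools import combinations

def prob(node, basic):
    kind = node[0]
    if kind == "event":
        return basic[node[1]]
    if kind == "and":
        p = 1.0
        for child in node[1]:
            p *= prob(child, basic)
        return p
    if kind == "or":
        q = 1.0
        for child in node[1]:
            q *= 1.0 - prob(child, basic)
        return 1.0 - q
    if kind == "xor":          # exactly one of two inputs occurs
        a, b = (prob(c, basic) for c in node[1])
        return a * (1 - b) + b * (1 - a)
    if kind == "invert":
        return 1.0 - prob(node[1], basic)
    if kind == "moftn":        # at least m of the n inputs occur
        m, children = node[1], node[2]
        ps = [prob(c, basic) for c in children]
        total = 0.0
        for k in range(m, len(ps) + 1):
            for idx in combinations(range(len(ps)), k):
                t = 1.0
                for i, p in enumerate(ps):
                    t *= p if i in idx else (1 - p)
                total += t
        return total
    raise ValueError(kind)

# Top = (A AND B) OR C, with hypothetical event probabilities.
tree = ("or", [("and", [("event", "A"), ("event", "B")]), ("event", "C")])
top = prob(tree, {"A": 0.1, "B": 0.2, "C": 0.05})
```

For the example tree, the top event probability is 1 - (1 - 0.02)(1 - 0.05) = 0.069.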
Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance
NASA Astrophysics Data System (ADS)
Wang, Jian; Yang, Zhenwei; Kang, Mei
2018-01-01
This paper applies the fault tree analysis method to the corrective maintenance of grid communication systems. Based on a fault tree model of a typical system and on engineering experience, fault tree analysis theory is used to analyze the model, covering structure functions, probability importance, and so on. The results show that fault tree analysis enables fast fault location and effective repair of the system. The analysis also finds that the fault tree method has guiding significance for researching and upgrading the reliability of the system.
Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A
2001-10-12
As it is conventionally done, strategies for incorporating accident-prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps, risk assessment and hazard reduction (or safety) measures, are not linked interactively in the existing methodologies. This prevents a quantitative assessment of the impacts of safety measures on risk control. We have made an attempt to develop a methodology in which risk assessment steps are interactively linked with implementation of safety measures. The resultant system tells us the extent to which risk is reduced by each successive safety measure. It also indicates, based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA), whether a given unit can ever be made 'safe'. The application of the methodology has been illustrated with a case study.
Fault Tree in the Trenches, A Success Story
NASA Technical Reports Server (NTRS)
Long, R. Allen; Goodson, Amanda (Technical Monitor)
2000-01-01
Getting caught up in the explanation of Fault Tree Analysis (FTA) minutiae is easy. In fact, most FTA literature tends to address FTA concepts and methodology. Yet there seem to be few articles addressing actual design changes resulting from the successful application of fault tree analysis. This paper demonstrates how fault tree analysis was used to identify and solve a potentially catastrophic mechanical problem at a rocket motor manufacturer. While developing the fault tree given in this example, the analyst was told by several organizations that the piece of equipment in question had been evaluated by several committees and organizations, and that the analyst was wasting his time. The fault tree/cutset analysis resulted in a joint redesign of the control system by the tool engineering group and the fault tree analyst, as well as bragging rights for the analyst. (That the fault tree found problems where other engineering reviews had failed was not lost on the other engineering groups.) Even more interesting, this was the analyst's first fault tree, which further demonstrates how effective fault tree analysis can be in guiding (i.e., forcing) the analyst to take a methodical approach in evaluating complex systems.
SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis Smith; James Knudsen
As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion system operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation, or in this case the failure, of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and of how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude their use on real-world problems.
Reliability analysis of the solar array based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Jianing, Wu; Shaoze, Yan
2011-07-01
The solar array is an important device used in spacecraft, which influences the quality of in-orbit operation of the spacecraft and even the success of launches. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system, based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra, and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structure importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is significant for preventing faults. Furthermore, recommendations for improving reliability through damage limitation are discussed, which can be used for the redesign of the solar array and for reliability growth planning.
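The structure importance (SI) measure used in this record has a standard definition: the fraction of states of the other components in which a given basic event is critical, i.e. flipping it flips the top event. The sketch below illustrates that computation on a tiny invented structure function; the solar array's actual fault tree is not reproduced here.

```python
# Structural importance of basic event i in a coherent structure function
# phi: count the states of the other n-1 components where flipping event i
# flips the top event, divided by 2^(n-1).
from itertools import product

def structural_importance(phi, n, i):
    critical = 0
    for state in product([0, 1], repeat=n - 1):
        x = list(state)
        x.insert(i, 1)          # event i occurs
        top_hi = phi(x)
        x[i] = 0                # event i does not occur
        if top_hi != phi(x):
            critical += 1
    return critical / 2 ** (n - 1)

# Toy top event: system fails if event 0 occurs, or both 1 and 2 occur.
phi = lambda x: x[0] or (x[1] and x[2])
si = [structural_importance(phi, 3, i) for i in range(3)]
```

For this toy tree, event 0 (an OR input) has SI 0.75 while events 1 and 2 (AND inputs) each have SI 0.25; ranking basic events this way is how the paper singles out the hinge-related causes.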
NASA Astrophysics Data System (ADS)
LI, Y.; Yang, S. H.
2017-05-01
The Antarctic astronomical telescopes operate year-round at the unattended South Pole, with only one maintenance opportunity each year. Due to the complexity of the optical, mechanical, and electrical systems, the telescopes are hard to maintain and require multi-skilled expedition teams, so heightened attention to the reliability of the Antarctic telescopes is essential. Based on the fault mechanism and fault modes of the main-axis control system of the equatorial Antarctic astronomical telescope AST3-3 (Antarctic Schmidt Telescopes 3-3), the method of fault tree analysis is introduced in this article, and we obtain the importance degree of the top event from the structural importance of the bottom events. From these results, hidden problems and weak links can be effectively identified, indicating directions for improving the stability of the system and optimizing its design.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1992-01-01
FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature, simplifying description of tree and reducing execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
Tutorial: Advanced fault tree applications using HARP
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.
1993-01-01
Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
Locating hardware faults in a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-04-13
Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
Technology transfer by means of fault tree synthesis
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Since Fault Tree Analysis (FTA) attempts to model and analyze failure processes of engineering systems, it forms a common technique for good industrial practice. By contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a Knowledge Base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the later stages of an industrial procedure with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage, where the fault has been detected; (ii) the formation of an interface made of input faults that might occur upstream; (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance; and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus demonstrating the functionality of the developed methodology.
DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarity between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimal cut set for a node is a cut set such that, if any failure in the set were removed, the remaining failures would no longer cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree.
A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal root node. A subtree is created for each of the inputs to the digraph terminal node, and the roots of those subtrees are added as children of the top node of the fault tree. Every node in the digraph upstream of the terminal node will be visited and converted. During the conversion process, the algorithm keeps track of the path from the digraph terminal node to the current digraph node. If a node is visited twice, then the program has found a cycle in the digraph. This cycle is broken by finding the minimal cut sets of the twice-visited digraph node and forming those cut sets into subtrees. Another implementation of the algorithm resolves loops by building a subtree based on the digraph minimal cut sets calculation. It does not reduce the subtree to minimal cut set form. This second implementation produces larger fault trees, but runs much faster than the version using minimal cut sets since it does not spend time reducing the subtrees to minimal cut sets. The fault trees produced by DG TO FT will contain OR gates, AND gates, Basic Event nodes, and NOP gates. The results of a translation can be output as a text object description of the fault tree similar to the text digraph input format. The translator can also output a LISP language formatted file and an augmented LISP file which can be used by the FTDS (ARC-13019) diagnosis system, available from COSMIC, which performs diagnostic reasoning using the fault tree as a knowledge base. DG TO FT is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. DG TO FT is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette.
It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is provided on the distribution medium. DG TO FT was developed in 1992. Sun, and SunOS are trademarks of Sun Microsystems, Inc. DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc. System 7 is a trademark of Apple Computers Inc. Microsoft Word is a trademark of Microsoft Corporation.
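Minimal cut set calculation, which this record describes as the heart of both cycle-breaking strategies, can be sketched with the classic top-down (MOCUS-style) expansion: OR gates contribute alternative cut sets, AND gates enlarge every cut set, and non-minimal sets are absorbed. The gate and event names below are invented, not taken from a real digraph model.

```python
# MOCUS-style minimal cut set computation for an AND/OR fault tree.
def cut_sets(node):
    kind = node[0]
    if kind == "event":
        return [frozenset([node[1]])]
    if kind == "or":            # union of the children's cut sets
        sets = []
        for child in node[1]:
            sets.extend(cut_sets(child))
        return minimize(sets)
    if kind == "and":           # cross-product: every cut set grows
        sets = [frozenset()]
        for child in node[1]:
            sets = [a | b for a in sets for b in cut_sets(child)]
        return minimize(sets)
    raise ValueError(kind)

def minimize(sets):
    """Drop any cut set that strictly contains another (absorption)."""
    return sorted({s for s in sets
                   if not any(t < s for t in sets)}, key=sorted)

tree = ("or", [("and", [("event", "A"), ("event", "B")]),
               ("and", [("event", "A"), ("event", "B"), ("event", "C")]),
               ("event", "D")])
mcs = cut_sets(tree)            # {A,B} absorbs {A,B,C}
```

The absorption step is exactly the "reducing the subtrees to minimal cut sets" work that the faster second implementation skips.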
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems relies on model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models of the systems either by mathematical formulation or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data fed back from the system, and decisions are made based on threshold values using fault trees. Since those decision-making tasks are safety-critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
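The threshold-based decision making described above maps naturally onto learned decision trees: each split is a threshold on one telemetry channel. A one-level tree (a decision stump) is enough to show the mechanism; the sensor channels, readings, and fault labels below are invented for the demo, and a real system would grow deeper trees from mission data.

```python
# Learn the single best (feature, threshold) split from labelled telemetry,
# minimizing the number of misclassified samples -- the building block of
# a threshold-style decision tree.
def best_stump(samples):
    best = None
    for f in range(len(samples[0][0])):
        for thr in sorted({x[f] for x, _ in samples}):
            errs = sum((x[f] > thr) != fault for x, fault in samples)
            if best is None or errs < best[0]:
                best = (errs, f, thr)
    return best[1], best[2]

# (temperature, pressure) readings labelled with a fault flag (hypothetical).
samples = [((20.0, 1.0), False), ((22.0, 1.1), False),
           ((80.0, 1.0), True),  ((85.0, 0.9), True)]
feature, threshold = best_stump(samples)
```

Here the learner recovers the rule "fault iff temperature > 22.0", i.e. it discovers the threshold from data rather than reading it out of a hand-built fault tree.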
Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Ren, C. X.; Bolton, D. C.; Marone, C.; Johnson, P. A.
2017-12-01
Fault friction controls nearly all aspects of fault rupture, yet it can only be measured in the laboratory. Here we describe laboratory experiments in which acoustic emissions are recorded from the fault. We find that by applying a machine learning approach known as "extreme gradient boosting trees" to the continuous acoustic signal, the fault friction can be directly inferred, showing that instantaneous characteristics of the acoustic signal are a fingerprint of the frictional state. This machine-learning-based inference leads to a simple law that links the acoustic signal to the frictional state, and holds for every stress cycle the laboratory fault goes through. The approach uses no measured parameter other than instantaneous statistics of the acoustic signal. This finding may be important for inferring frictional characteristics from seismic waves in the Earth, where fault friction cannot be measured.
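The pipeline implied above is: window the continuous acoustic signal, compute instantaneous statistics per window, and regress friction on those features. As a hedged stdlib-only sketch, the snippet below uses a synthetic signal whose amplitude tracks an invented friction series, and a one-feature least-squares fit stands in for the gradient boosted trees the paper actually uses.

```python
# Windowed statistics of a synthetic acoustic signal, then a simple
# regression of friction on window variance. Signal, friction values, and
# window width are all illustrative assumptions.
import random

def window_stats(signal, width):
    feats = []
    for start in range(0, len(signal) - width + 1, width):
        w = signal[start:start + width]
        mean = sum(w) / width
        var = sum((v - mean) ** 2 for v in w) / width
        feats.append((mean, var))
    return feats

random.seed(0)
friction = [0.2 + 0.6 * i / 9 for i in range(10)]     # slowly rising
signal = []
for mu in friction:                                    # louder when stickier
    signal.extend(random.gauss(0.0, mu) for _ in range(100))

variances = [v for _, v in window_stats(signal, 100)]
# One-feature least squares: friction ~ a * variance + b
n = len(variances)
mx = sum(variances) / n
my = sum(friction) / n
denom = sum((x - mx) ** 2 for x in variances)
a = sum((x - mx) * (y - my) for x, y in zip(variances, friction)) / denom
b = my - a * mx
pred = [a * x + b for x in variances]
```

The positive slope `a` captures the paper's core observation in miniature: instantaneous signal statistics carry the frictional state.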
Fault trees and sequence dependencies
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.
1990-01-01
One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L
The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.
MacDonald Iii, Angus W; Zick, Jennifer L; Chafee, Matthew V; Netoff, Theoden I
2015-01-01
The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect for psychiatry's syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between diagnosis co-morbidity.
A dynamic fault tree model of a propulsion system
NASA Technical Reports Server (NTRS)
Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila
2006-01-01
We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing
2017-01-14
In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the staff generally lack professional knowledge and pay little attention to the equipment. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logical structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while one symptom-to-two faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.
Reliability computation using fault tree analysis
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
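The role of conditional probabilities for repeated events can be seen in a small sketch. When the same basic failure feeds more than one fault path, gate-by-gate multiplication overcounts; brute-force enumeration of basic-event states (a deliberately naive stand-in for Chelson's analytical derivation, practical only for small trees) stays exact:

```python
from itertools import product

def top_event_probability(tree, probs):
    """Exact top-event probability by enumerating all 2^n basic-event
    states. `tree` is a nested tuple ('AND'|'OR', child, ...) whose
    leaves are basic-event names; `probs` maps each name to its failure
    probability. Shared leaves are handled exactly because each state
    fixes every basic event once."""
    names = sorted(probs)

    def holds(node, state):
        if isinstance(node, str):
            return state[node]
        gate, *kids = node
        results = (holds(k, state) for k in kids)
        return all(results) if gate == 'AND' else any(results)

    total = 0.0
    for bits in product([False, True], repeat=len(names)):
        state = dict(zip(names, bits))
        if holds(tree, state):
            p = 1.0
            for n in names:
                p *= probs[n] if state[n] else 1.0 - probs[n]
            total += p
    return total

if __name__ == "__main__":
    # Basic event A appears in two fault paths: TOP = (A AND B) OR (A AND C).
    tree = ('OR', ('AND', 'A', 'B'), ('AND', 'A', 'C'))
    probs = {'A': 0.1, 'B': 0.2, 'C': 0.3}
    print(top_event_probability(tree, probs))
```

Gate-by-gate evaluation treating the two AND branches as independent would give 1 - (1 - 0.02)(1 - 0.03) ≈ 0.0494, while the exact answer is P(A) · P(B or C) = 0.1 × 0.44 = 0.044.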
Object-oriented fault tree evaluation program for quantitative analyses
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1988-01-01
Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
NASA Technical Reports Server (NTRS)
Martensen, Anna L.; Butler, Ricky W.
1987-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
The Fault Tree Compiler (FTC): Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1989-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
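For independent basic events, each of the five gate types has a closed-form probability. The sketch below illustrates only the gate algebra (it is not the FTC's actual solution technique, and the multi-input EXCLUSIVE OR is interpreted here as "exactly one input occurs"):

```python
from itertools import combinations
from math import prod

def gate_prob(gate, ps, m=None):
    """Probability that a gate's output event occurs, given independent
    input probabilities ps. Supports AND, OR, XOR (exactly one input
    occurs), INVERT (single input), and M_OF_N (at least m of n occur)."""
    if gate == 'AND':
        return prod(ps)
    if gate == 'OR':
        return 1.0 - prod(1.0 - p for p in ps)
    if gate == 'XOR':
        # Sum over choices of the single occurring input.
        return sum(p * prod(1.0 - q for j, q in enumerate(ps) if j != i)
                   for i, p in enumerate(ps))
    if gate == 'INVERT':
        (p,) = ps
        return 1.0 - p
    if gate == 'M_OF_N':
        # Sum the probability of every input subset of size >= m.
        n = len(ps)
        total = 0.0
        for k in range(m, n + 1):
            for idx in combinations(range(n), k):
                total += prod(ps[i] if i in idx else 1.0 - ps[i]
                              for i in range(n))
        return total
    raise ValueError(gate)

if __name__ == "__main__":
    print(gate_prob('AND', [0.5, 0.5]))                # 0.25
    print(gate_prob('OR', [0.5, 0.5]))                 # 0.75
    print(gate_prob('XOR', [0.5, 0.5]))                # 0.5
    print(gate_prob('M_OF_N', [0.5, 0.5, 0.5], m=2))   # 0.5
```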
Systems Theoretic Process Analysis Applied to an Offshore Supply Vessel Dynamic Positioning System
2016-06-01
additional safety issues that were either not identified or inadequately mitigated through the use of Fault Tree Analysis and Failure Modes and… [search-result excerpt; the document's table of contents also lists a review of hazard analysis techniques and a Fault Tree Analysis comparison]
An overview of the phase-modular fault tree approach to phased mission system analysis
NASA Technical Reports Server (NTRS)
Meshkat, L.; Xing, L.; Donohue, S. K.; Ou, Y.
2003-01-01
In this paper we look at how fault tree analysis (FTA), a primary means of performing reliability analysis of phased mission systems (PMS), can meet this challenge, by presenting an overview of the phase-modular approach to solving fault trees that represent PMS.
Try Fault Tree Analysis, a Step-by-Step Way to Improve Organization Development.
ERIC Educational Resources Information Center
Spitzer, Dean
1980-01-01
Fault Tree Analysis, a systems safety engineering technology used to analyze organizational systems, is described. Explains the use of logic gates to represent the relationship between failure events, qualitative analysis, quantitative analysis, and effective use of Fault Tree Analysis. (CT)
Fault Tree Analysis: A Research Tool for Educational Planning. Technical Report No. 1.
ERIC Educational Resources Information Center
Alameda County School Dept., Hayward, CA. PACE Center.
This ESEA Title III report describes fault tree analysis and assesses its applicability to education. Fault tree analysis is an operations research tool which is designed to increase the probability of success in any system by analyzing the most likely modes of failure that could occur. A graphic portrayal, which has the form of a tree, is…
Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.
Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C
2015-06-01
An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and are aiming at more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, that is, ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The fault tree developed here is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers in developing more refined and adequate control activities against FMD outbreaks and in finding optimum strategies for rapid control. Further application of the current tree will be one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component affecting system reliability. A weakest-t-norm-based intuitionistic fuzzy fault tree analysis is presented to calculate the fault interval of system components by integrating experts' knowledge and experience in terms of the possibility of failure of bottom events. It applies fault-tree analysis, the α-cut of intuitionistic fuzzy sets, and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The result of the proposed method is compared with existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
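The α-cut mechanics can be illustrated with ordinary triangular fuzzy numbers. The sketch below implements only standard α-cut interval arithmetic for AND/OR gates; it does not reproduce the paper's weakest-t-norm T(ω) operations or the intuitionistic (membership/non-membership) extension, and the bottom-event numbers are hypothetical:

```python
def alpha_cut(tfn, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, b, c) at level alpha."""
    a, b, c = tfn
    return (a + alpha * (b - a), c - alpha * (c - b))

def and_gate(iv1, iv2):
    # Both events must occur: multiply endpoint-wise (intervals lie in [0, 1]).
    return (iv1[0] * iv2[0], iv1[1] * iv2[1])

def or_gate(iv1, iv2):
    # At least one event occurs: 1 - (1-p)(1-q), endpoint-wise.
    return (1 - (1 - iv1[0]) * (1 - iv2[0]),
            1 - (1 - iv1[1]) * (1 - iv2[1]))

if __name__ == "__main__":
    # Hypothetical bottom events: TOP = (E1 AND E2) OR E3.
    e1, e2, e3 = (0.02, 0.05, 0.08), (0.10, 0.20, 0.30), (0.01, 0.02, 0.03)
    for alpha in (0.0, 0.5, 1.0):
        top = or_gate(and_gate(alpha_cut(e1, alpha), alpha_cut(e2, alpha)),
                      alpha_cut(e3, alpha))
        print(alpha, top)
```

At α = 1 the intervals collapse to the crisp point estimate; lower α widens the fault interval, which is how the method expresses imprecision in the bottom-event data.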
Software For Fault-Tree Diagnosis Of A System
NASA Technical Reports Server (NTRS)
Iverson, Dave; Patterson-Hine, Ann; Liao, Jack
1993-01-01
Fault Tree Diagnosis System (FTDS) computer program is automated-diagnostic-system program identifying likely causes of specified failure on basis of information represented in system-reliability mathematical models known as fault trees. Is modified implementation of failure-cause-identification phase of Narayanan's and Viswanadham's methodology for acquisition of knowledge and reasoning in analyzing failures of systems. Knowledge base of if/then rules replaced with object-oriented fault-tree representation. Enhancement yields more-efficient identification of causes of failures and enables dynamic updating of knowledge base. Written in C language, C++, and Common LISP.
Fault tree models for fault tolerant hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Tuazon, Jezus O.
1991-01-01
Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
Product Support Manager Guidebook
2011-04-01
package is being developed using supportability analysis concepts such as Failure Mode, Effects and Criticality Analysis (FMECA), Fault Tree Analysis (FTA)… [search-result excerpt; other analyses listed in the guidebook include Level of Repair Analysis (LORA), Condition Based Maintenance+ (CBM+), Maintenance Task Analysis (MTA), and Failure Reporting and Corrective Action System (FRACAS)]
MIRAP, microcomputer reliability analysis program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jehee, J.N.T.
1989-01-01
A program for a microcomputer is outlined that can determine minimal cut sets from a specified fault tree logic. The speed and memory limitations of the microcomputers on which the program is implemented (Atari ST and IBM) are addressed by reducing the fault tree's size and by storing the cut set data on disk. Extensive, well-proven fault tree restructuring techniques, such as the identification of sibling events and of independent gate events, reduce the fault tree's size but do not alter its logic. New methods are used for the Boolean reduction of the fault tree logic. Special criteria for combining events in the 'AND' and 'OR' logic avoid the creation of many subsuming cut sets which would all cancel out due to existing cut sets. Figures and tables illustrate these methods. 4 refs., 5 tabs.
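The cut-set core of such a program is a top-down expansion of the gates followed by Boolean absorption, in which any cut set containing another cut set is dropped as subsuming. A compact sketch of this standard MOCUS-style procedure (MIRAP's restructuring, sibling-event identification, and disk-based storage are not reproduced):

```python
def minimal_cut_sets(tree):
    """Top-down expansion of a fault tree into minimal cut sets.
    `tree` is ('AND'|'OR', child, ...) with string leaves. OR gates
    contribute alternative cut sets; AND gates take cross-products.
    Subsuming (non-minimal) sets are absorbed at the end."""
    def expand(node):
        if isinstance(node, str):
            return [frozenset([node])]
        gate, *kids = node
        kid_sets = [expand(k) for k in kids]
        if gate == 'OR':
            return [cs for sets in kid_sets for cs in sets]
        # AND: union one cut set from each child, in every combination.
        acc = [frozenset()]
        for sets in kid_sets:
            acc = [a | b for a in acc for b in sets]
        return acc

    candidates = set(expand(tree))
    # Boolean absorption: drop any cut set with a proper subset present.
    return {cs for cs in candidates
            if not any(other < cs for other in candidates)}

if __name__ == "__main__":
    # TOP = A OR (A AND B) OR (B AND C): the set {A, B} is absorbed by {A}.
    tree = ('OR', 'A', ('AND', 'A', 'B'), ('AND', 'B', 'C'))
    print(sorted(sorted(cs) for cs in minimal_cut_sets(tree)))
```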
The FTA Method And A Possibility Of Its Application In The Area Of Road Freight Transport
NASA Astrophysics Data System (ADS)
Poliaková, Adela
2015-06-01
The fault tree process utilizes logic diagrams to portray and analyse potentially hazardous events. Three basic symbols (logic gates) are adequate for diagramming any fault tree; however, additional recently developed symbols can be used to reduce the time and effort required for analysis. A fault tree is a graphical representation of the relationship between certain specific events and the ultimate undesired event (2). This paper gives a basic description of the Fault Tree Analysis method and provides a practical view of its possible application to quality improvement in a road freight transport company.
Fault Tree Analysis: Its Implications for Use in Education.
ERIC Educational Resources Information Center
Barker, Bruce O.
This study introduces the concept of Fault Tree Analysis as a systems tool and examines the implications of Fault Tree Analysis (FTA) as a technique for isolating failure modes in educational systems. A definition of FTA and discussion of its history, as it relates to education, are provided. The step by step process for implementation and use of…
Preventing medical errors by designing benign failures.
Grout, John R
2003-07-01
One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after a change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event: patient scalded while bathing. The second fault tree has a benign event: no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that the changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process: errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
Fault Tree Analysis Application for Safety and Reliability
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies a root cause in system components, but when software is identified as a root cause, it does not build trees into the software component. No commercial software tools have been built specifically for the development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
ERIC Educational Resources Information Center
Barker, Bruce O.; Petersen, Paul D.
This paper explores the fault-tree analysis approach to isolating failure modes within a system. Fault tree investigates potentially undesirable events and then looks for failures in sequence that would lead to their occurring. Relationships among these events are symbolized by AND or OR logic gates, AND used when single events must coexist to…
Evidential Networks for Fault Tree Analysis with Imprecise Knowledge
NASA Astrophysics Data System (ADS)
Yang, Jianping; Huang, Hong-Zhong; Liu, Yu; Li, Yan-Feng
2012-06-01
Fault tree analysis (FTA), as one of the powerful tools in reliability engineering, has been widely used to enhance system quality attributes. In most fault tree analyses, precise values are adopted to represent the probabilities of occurrence of events. Due to the lack of sufficient data or the imprecision of existing data at the early stage of product design, it is often difficult to accurately estimate the failure rates of individual events or the probabilities of their occurrence. Therefore, such imprecision and uncertainty need to be taken into account in reliability analysis. In this paper, evidential networks (EN) are employed to quantify and propagate the aforementioned uncertainty and imprecision in fault tree analysis. The detailed processes for converting typical fault tree (FT) logic gates to EN are described. The figures of the logic gates and the converted equivalent EN, together with the associated truth tables and conditional belief mass tables, are also presented in this work. A new epistemic importance measure is proposed to describe the effect of the degree of ignorance of an event. The fault tree of an aircraft engine damaged by oil filter plugs is presented to demonstrate the proposed method.
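The belief-mass idea can be sketched on the two-element frame {fail, ok}, where ignorance is the mass assigned to the whole frame. For an AND gate the output fails only if both inputs fail, so under input independence the [belief, plausibility] interval of failure multiplies endpoint-wise. This is a minimal illustration, not the paper's full evidential-network conversion, and the input masses are hypothetical:

```python
def bel_pl(mass):
    """Belief and plausibility of 'fail' from a mass assignment over the
    frame {fail, ok}: mass = (m_fail, m_ok, m_unknown), summing to 1."""
    m_fail, m_ok, m_unknown = mass
    assert abs(m_fail + m_ok + m_unknown - 1.0) < 1e-9
    return m_fail, m_fail + m_unknown

def and_interval(mass1, mass2):
    """Output fails iff both inputs fail, so the failure-probability
    bounds multiply endpoint-wise: [Bel1*Bel2, Pl1*Pl2]."""
    b1, p1 = bel_pl(mass1)
    b2, p2 = bel_pl(mass2)
    return (b1 * b2, p1 * p2)

def or_interval(mass1, mass2):
    """Output fails iff at least one input fails: 1-(1-x)(1-y) endpoint-wise."""
    b1, p1 = bel_pl(mass1)
    b2, p2 = bel_pl(mass2)
    return (1 - (1 - b1) * (1 - b2), 1 - (1 - p1) * (1 - p2))

if __name__ == "__main__":
    # Hypothetical events: 10% fail / 85% ok / 5% ignorance, and
    # 20% fail / 70% ok / 10% ignorance.
    e1, e2 = (0.10, 0.85, 0.05), (0.20, 0.70, 0.10)
    print(and_interval(e1, e2))   # bounds on P(both fail)
    print(or_interval(e1, e2))    # bounds on P(at least one fails)
```

The width of the output interval reflects exactly the ignorance mass in the inputs, which is the quantity the paper's epistemic importance measure is built to track.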
Object-oriented fault tree models applied to system diagnosis
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.
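The data-structure idea can be sketched with plain classes: each event object stores its own children, gate type, and reliability data, so a diagnosis pass walks object pointers rather than performing lookups on an indexed rule base, and the knowledge base is updated by mutating a node in place. This is an illustrative reconstruction in Python, not the authors' implementation:

```python
class Event:
    """One node in an augmented fault tree. Each object carries its
    structure (children), gate type, and reliability data, so lookups
    and updates are direct attribute accesses rather than rule scans."""

    def __init__(self, name, failure_prob=0.0, gate=None, children=()):
        self.name = name
        self.failure_prob = failure_prob   # used for basic events
        self.gate = gate                   # 'AND', 'OR', or None for a basic event
        self.children = list(children)

    def probability(self):
        """Evaluate this subtree assuming independent basic events."""
        if not self.children:
            return self.failure_prob
        ps = [c.probability() for c in self.children]
        out = 1.0
        if self.gate == 'AND':
            for p in ps:
                out *= p
            return out
        for p in ps:                       # OR gate
            out *= 1.0 - p
        return 1.0 - out

    def update(self, name, prob):
        """Dynamic knowledge-base update: mutate one event object in place."""
        if not self.children and self.name == name:
            self.failure_prob = prob
            return True
        return any(c.update(name, prob) for c in self.children)

    def likely_causes(self):
        """Basic events beneath this node, most probable first."""
        if not self.children:
            return [self]
        leaves = [leaf for c in self.children for leaf in c.likely_causes()]
        return sorted(leaves, key=lambda e: e.failure_prob, reverse=True)

if __name__ == "__main__":
    pump, valve = Event('pump', 0.01), Event('valve', 0.05)
    top = Event('loss-of-flow', gate='OR',
                children=[Event('hydraulics', gate='OR', children=[pump, valve]),
                          Event('power', 0.02)])
    print([e.name for e in top.likely_causes()])   # most suspect first
    top.update('pump', 0.2)                        # new field data arrives
    print([e.name for e in top.likely_causes()])   # ranking reflects the update
```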
Probabilistic fault tree analysis of a radiation treatment system.
Ekaette, Edidiong; Lee, Robert C; Cooke, David L; Iftody, Sandra; Craighead, Peter
2007-12-01
Inappropriate administration of radiation for cancer treatment can result in severe consequences such as premature death or appreciably impaired quality of life. There has been little study of vulnerable treatment process components and their contribution to the risk of radiation treatment (RT). In this article, we describe the application of probabilistic fault tree methods to assess the probability of radiation misadministration to patients at a large cancer treatment center. We conducted a systematic analysis of the RT process that identified four process domains: Assessment, Preparation, Treatment, and Follow-up. For the Preparation domain, we analyzed possible incident scenarios via fault trees. For each task, we also identified existing quality control measures. To populate the fault trees we used subjective probabilities from experts and compared results with incident report data. Both the fault tree and the incident report analysis revealed simulation tasks to be most prone to incidents, and the treatment prescription task to be least prone to incidents. The probability of a Preparation domain incident was estimated to be in the range of 0.1-0.7% based on incident reports, which is comparable to the mean value of 0.4% from the fault tree analysis using probabilities from the expert elicitation exercise. In conclusion, an analysis of part of the RT system using a fault tree populated with subjective probabilities from experts was useful in identifying vulnerable components of the system, and provided quantitative data for risk management.
Secure Embedded System Design Methodologies for Military Cryptographic Systems
2016-03-31
Fault-Tree Analysis (FTA); Built-In Self-Test (BIST). Secure access-control systems restrict operations to authorized users via methods… failures in the individual software/processor elements, the question of exactly how unlikely is difficult to answer. Fault-Tree Analysis (FTA) has a… [search-result excerpt; the document acknowledges Collins of Sandia National Laboratories for years of sharing his extensive knowledge of Fail-Safe Design Assurance and Fault-Tree Analysis]
NASA Astrophysics Data System (ADS)
Chen, Chunfeng; Liu, Hua; Fan, Ge
2005-02-01
In this paper we consider the problem of designing a network of optical cross-connects (OXCs) to provide end-to-end lightpath services to label switched routers (LSRs). Like some previous work, we select the number of OXCs as our objective. Compared with previous studies, we take into account the fault-tolerant characteristics of the logical topology. First, using a randomly generated Prüfer number, we generate a tree. By adding some edges to the tree, we obtain a physical topology consisting of a certain number of OXCs and the fiber links connecting them. Notably, we are the first to limit the number of layers of the tree produced by this method. We then design logical topologies based on these physical topologies. In principle, we select the shortest path, with some consideration of link load balancing and the constraints imposed by shared risk link groups (SRLGs). Notably, we run the routing algorithm over the nodes in increasing order of node degree. With regard to the wavelength assignment problem, we adopt a commonly used graph-coloring heuristic. Our problem is clearly computationally intractable, especially when the network is large, so we adopt a tabu search algorithm to find a near-optimal solution. We present numerical results for up to 1000 LSRs and for a wide range of system parameters, such as the traffic and the number of wavelengths supported by each fiber link. The results indicate that it is possible to build large-scale optical networks with rich connectivity in a cost-effective manner, using relatively few but properly dimensioned OXCs.
The 1992 Landers earthquake sequence; seismological observations
Egill Hauksson,; Jones, Lucile M.; Hutton, Kate; Eberhart-Phillips, Donna
1993-01-01
The (MW6.1, 7.3, 6.2) 1992 Landers earthquakes began on April 23 with the MW6.1 1992 Joshua Tree preshock and form the most substantial earthquake sequence to occur in California in the last 40 years. This sequence ruptured almost 100 km of both surficial and concealed faults and caused aftershocks over an area 100 km wide by 180 km long. The faulting was predominantly strike slip and three main events in the sequence had unilateral rupture to the north away from the San Andreas fault. The MW6.1 Joshua Tree preshock at 33°N58′ and 116°W19′ on 0451 UT April 23 was preceded by a tightly clustered foreshock sequence (M≤4.6) beginning 2 hours before the mainshock and followed by a large aftershock sequence with more than 6000 aftershocks. The aftershocks extended along a northerly trend from about 10 km north of the San Andreas fault, northwest of Indio, to the east-striking Pinto Mountain fault. The Mw7.3 Landers mainshock occurred at 34°N13′ and 116°W26′ at 1158 UT, June 28, 1992, and was preceded for 12 hours by 25 small M≤3 earthquakes at the mainshock epicenter. The distribution of more than 20,000 aftershocks, analyzed in this study, and short-period focal mechanisms illuminate a complex sequence of faulting. The aftershocks extend 60 km to the north of the mainshock epicenter along a system of at least five different surficial faults, and 40 km to the south, crossing the Pinto Mountain fault through the Joshua Tree aftershock zone towards the San Andreas fault near Indio. The rupture initiated in the depth range of 3–6 km, similar to previous M∼5 earthquakes in the region, although the maximum depth of aftershocks is about 15 km. The mainshock focal mechanism showed right-lateral strike-slip faulting with a strike of N10°W on an almost vertical fault. The rupture formed an arclike zone well defined by both surficial faulting and aftershocks, with more westerly faulting to the north. 
This change in strike is accomplished by jumping across dilational jogs connecting surficial faults with strikes rotated progressively to the west. A 20-km-long linear cluster of aftershocks occurred 10–20 km north of Barstow, or 30–40 km north of the end of the mainshock rupture. The most prominent off-fault aftershock cluster occurred 30 km to the west of the Landers mainshock. The largest aftershock was within this cluster, the Mw6.2 Big Bear aftershock occurring at 34°N10′ and 116°W49′ at 1505 UT June 28. It exhibited left-lateral strike-slip faulting on a northeast striking and steeply dipping plane. The Big Bear aftershocks form a linear trend extending 20 km to the northeast with a scattered distribution to the north. The Landers mainshock occurred near the southernmost extent of the Eastern California Shear Zone, an 80-km-wide, more than 400-km-long zone of deformation. This zone extends into the Death Valley region and accommodates about 10 to 20% of the plate motion between the Pacific and North American plates. The Joshua Tree preshock, its aftershocks, and Landers aftershocks form a previously missing link that connects the Eastern California Shear Zone to the southern San Andreas fault.
Rymer, M.J.
2000-01-01
The Coachella Valley area was strongly shaken by the 1992 Joshua Tree (23 April) and Landers (28 June) earthquakes, and both events caused triggered slip on active faults within the area. Triggered slip associated with the Joshua Tree earthquake was on a newly recognized fault, the East Wide Canyon fault, near the southwestern edge of the Little San Bernardino Mountains. Slip associated with the Landers earthquake formed along the San Andreas fault in the southeastern Coachella Valley. Surface fractures formed along the East Wide Canyon fault in association with the Joshua Tree earthquake. The fractures extended discontinuously over a 1.5-km stretch of the fault, near its southern end. Sense of slip was consistently right-oblique, west side down, similar to the long-term style of faulting. Measured offset values were small, with right-lateral and vertical components of slip ranging from 1 to 6 mm and 1 to 4 mm, respectively. This is the first documented historic slip on the East Wide Canyon fault, which was first mapped only months before the Joshua Tree earthquake. Surface slip associated with the Joshua Tree earthquake most likely developed as triggered slip given its 5 km distance from the Joshua Tree epicenter and aftershocks. As revealed in a trench investigation, slip formed in an area with only a thin (<3 m thick) veneer of alluvium in contrast to earlier documented triggered slip events in this region, all in the deep basins of the Salton Trough. A paleoseismic trench study in an area of 1992 surface slip revealed evidence of two and possibly three surface faulting events on the East Wide Canyon fault during the late Quaternary, probably latest Pleistocene (first event) and mid- to late Holocene (second two events). About two months after the Joshua Tree earthquake, the Landers earthquake then triggered slip on many faults, including the San Andreas fault in the southeastern Coachella Valley. 
Surface fractures associated with this event formed discontinuous breaks over a 54-km-long stretch of the fault, from the Indio Hills southeastward to Durmid Hill. Sense of slip was right-lateral; only locally was there a minor (~1 mm) vertical component of slip. Measured dextral displacement values ranged from 1 to 20 mm, with the largest amounts found in the Mecca Hills where large slip values have been measured following past triggered-slip events.
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Bolster, Diogo; Sanchez-Vila, Xavier; Nowak, Wolfgang
2011-05-01
Assessing health risk in hydrological systems is an interdisciplinary field. It relies on expertise in the fields of hydrology and public health and needs powerful translation concepts to support decision making and policy. Reliable health risk estimates need to account for the uncertainties and variabilities present in hydrological, physiological, and human behavioral parameters. Despite significant theoretical advancements in stochastic hydrology, there is still a dire need to further propagate these concepts to practical problems and to society in general. Following a recent line of work, we use fault trees to address the task of probabilistic risk analysis and to support related decision and management problems. Fault trees allow us to decompose the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural divide-and-conquer approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance, and stage of analysis. Three differences are highlighted in this paper when compared to previous works: (1) the fault tree proposed here accounts for the uncertainty in both hydrological and health components, (2) system failure within the fault tree is defined in terms of risk being above a threshold value, whereas previous studies that used fault trees relied on auxiliary events such as exceedance of critical concentration levels, and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
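The paper's central move, defining "system failure" as risk exceeding a threshold under joint hydrological and health uncertainty, can be sketched with a toy Monte Carlo loop. All distributions, parameter values, and the threshold below are illustrative assumptions, not taken from the article:

```python
# "System failure" is defined as risk exceeding a threshold: sample the
# uncertain hydrological quantity (concentration at the well) jointly
# with uncertain physiological/behavioral parameters, form the risk, and
# estimate P(risk > threshold). All numbers below are invented.

import random

random.seed(1)

THRESHOLD = 1e-5                 # acceptable lifetime risk (assumed)
N = 50000
exceed = 0
for _ in range(N):
    conc = random.lognormvariate(-3.0, 1.0)      # mg/L, hydrological model
    intake = random.uniform(1.0, 3.0)            # L/day drinking water
    weight = max(random.gauss(70.0, 10.0), 1.0)  # kg body weight
    slope = random.lognormvariate(-9.0, 0.5)     # potency, (mg/kg/day)^-1
    risk = conc * intake / weight * slope
    exceed += risk > THRESHOLD

print("P(risk > threshold):", exceed / N)
```

In a fault tree this exceedance event would sit at the top, with the hydrological and health modules feeding it as separately refinable subsystems.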
Planning effectiveness may grow on fault trees.
Chow, C W; Haddad, K; Mannino, B
1991-10-01
The first step of a strategic planning process--identifying and analyzing threats and opportunities--requires subjective judgments. By using an analytical tool known as a fault tree, healthcare administrators can reduce the unreliability of subjective decision making by creating a logical structure for problem solving and decision making. A case study of 11 healthcare administrators showed that an analysis technique called prospective hindsight can add to a fault tree's ability to improve a strategic planning process.
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Fault Tree Analysis (FTA) can be used for technology transfer when the relevant problem (called the 'top event' in FTA) is solved in a technology centre and the results are diffused to interested parties (usually Small and Medium Enterprises - SMEs) that lack the proper equipment and the required know-how to solve the problem on their own. Nevertheless, there is a significant drawback in this procedure: the information usually provided by the SMEs to the technology centre, about production conditions and corresponding quality characteristics of the product, and (sometimes) the relevant expertise in the Knowledge Base of this centre may be inadequate to form a complete fault tree. Since such cases are quite frequent in practice, we have developed a methodology for transforming an incomplete fault tree into an Ishikawa diagram, which is more flexible and less strict in establishing causal chains, because it uses a surface phenomenological level with a limited number of categories of faults. On the other hand, such an Ishikawa diagram can be extended to simulate a fault tree as relevant knowledge increases. An implementation of this transformation, referring to the anodization of aluminium, is presented.
A systematic risk management approach employed on the CloudSat project
NASA Technical Reports Server (NTRS)
Basilio, R. R.; Plourde, K. S.; Lam, T.
2000-01-01
The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.
Fault Tree Analysis: A Bibliography
NASA Technical Reports Server (NTRS)
2000-01-01
Fault tree analysis is a top-down approach to the identification of process hazards. It is one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis, risk, and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarrack, A.G.
The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems, were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support system failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the ''Facility Commitments'' section. The purpose of the ''Assumptions'' section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).
Graphical fault tree analysis for fatal falls in the construction industry.
Chi, Chia-Fen; Lin, Syuan-Zih; Dewi, Ratna Sari
2014-11-01
The current study applied a fault tree analysis to represent the causal relationships among events and causes that contributed to fatal falls in the construction industry. Four hundred and eleven work-related fatalities in the Taiwanese construction industry were analyzed in terms of age, gender, experience, falling site, falling height, company size, and the causes for each fatality. Given that most fatal accidents involve multiple events, the current study coded up to a maximum of three causes for each fall fatality. After the Boolean algebra and minimal cut set analyses, accident causes associated with each falling site can be presented as a fault tree to provide an overview of the basic causes, which could trigger fall fatalities in the construction industry. Graphical icons were designed for each falling site along with the associated accident causes to illustrate the fault tree in a graphical manner. A graphical fault tree can improve inter-disciplinary discussion of risk management and the communication of accident causation to first line supervisors. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario
2015-04-01
The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components by exploiting the physical knowledge of the SOFC system as a whole. This phase consists of the Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at the system component level. The main outcome of the FTA is an inferential isolation tool (Fault Signature Matrix - FSM), which univocally links the faults to the symptoms detected during system monitoring. In this work the FTA is considered as a starting point to develop an improved FSM. Making use of a model-based investigation, a fault-to-symptoms dependency study is performed. To this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated, one for the stack and four occurring at the balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
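The Fault Signature Matrix idea described above, a binary table that univocally links faults to symptom patterns, can be sketched as follows; the fault names and signatures are invented for illustration and are not the ones from the SOFC study:

```python
# A Fault Signature Matrix: rows are faults, columns are binary symptoms.
# A fault is isolated when the observed symptom vector matches exactly
# one row. The faults and signatures below are invented for illustration.

FSM = {
    "stack_degradation":    (1, 1, 0, 0),
    "air_blower_fault":     (1, 0, 1, 0),
    "fuel_leak":            (0, 1, 0, 1),
    "heat_exchanger_fault": (0, 0, 1, 1),
}

def isolate(observed):
    matches = [fault for fault, sig in FSM.items() if sig == observed]
    return matches[0] if len(matches) == 1 else None  # None: not isolable

print(isolate((1, 0, 1, 0)))  # air_blower_fault
```

Symptom thresholds, as in the paper, would decide whether each measured variable contributes a 0 or a 1 to the observed vector before the lookup.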
Fault Tree Analysis for an Inspection Robot in a Nuclear Power Plant
NASA Astrophysics Data System (ADS)
Ferguson, Thomas A.; Lu, Lixuan
2017-09-01
The life extension of current nuclear reactors has led to an increasing demand for inspection and maintenance of critical reactor components that are too expensive to replace. To reduce the exposure dosage to workers, robotics has become an attractive alternative as a preventative safety tool in nuclear power plants. It is crucial to understand the reliability of these robots in order to increase the veracity and confidence of their results. This study applies Fault Tree (FT) analysis to a coolant outlet pipe snake-arm inspection robot in a nuclear power plant. Fault trees were constructed for a qualitative analysis to determine the reliability of the robot. Insight on the applicability of fault tree methods for inspection robotics in the nuclear industry is gained through this investigation.
Interim reliability evaluation program, Browns Ferry fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, M.E.
1981-01-01
An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation programs, simplifying the recording and displaying of events, yet maintaining the system of identifying faults. The level of investigation is not changed. The analytical thought process inherent in the conventional method is not compromised. But the abbreviated method takes less time, and the fault modes are much more visible.
Object-Oriented Algorithm For Evaluation Of Fault Trees
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1992-01-01
Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
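As a rough illustration of why object-oriented evaluation can reduce repeated work, the sketch below caches each gate object's result so a shared subtree is traversed only once. This shows only the call-saving mechanism; correctly handling the probabilistic dependence that repeated events introduce is the subject of the actual algorithm and is not attempted here:

```python
# Sketch: fault tree nodes as objects whose value() result is memoized, so
# a subtree shared by several gates (a "repeated event") is traversed once.
# Note: the AND/OR formulas below assume independence, which repeated
# events violate; exact treatment of that dependence is what a direct
# evaluation algorithm addresses and is not shown here.

class Event:
    def __init__(self, prob):
        self.prob = prob
        self.calls = 0            # how often this leaf was queried
    def value(self):
        self.calls += 1
        return self.prob

class Gate:
    def __init__(self, kind, children):
        self.kind, self.children = kind, children
        self._cache = None        # memoized subtree result
    def value(self):
        if self._cache is None:
            vals = [c.value() for c in self.children]
            p = 1.0
            if self.kind == "AND":
                for v in vals:
                    p *= v
            else:                 # OR
                for v in vals:
                    p *= 1.0 - v
                p = 1.0 - p
            self._cache = p
        return self._cache

e1, e2 = Event(0.1), Event(0.2)
shared = Gate("AND", [e1, e2])    # referenced by two parent gates
top = Gate("OR", [shared, shared])
top.value()
print(e1.calls, e2.calls)  # 1 1 -- the shared subtree was solved once
```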
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating time to failures. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
Using Fault Trees to Advance Understanding of Diagnostic Errors.
Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep
2017-11-01
Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
Khan, F I; Abbasi, S A
2000-07-10
Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has been a powerful technique used traditionally in identifying hazards in nuclear installations and power industries. As the systematic articulation of the fault tree is associated with assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named AS-II), which makes the application of FTA simpler, quicker, and cheaper, thus opening up the possibility of its wider use in risk assessment in chemical process industries. Based on this methodology we have developed a computer-automated tool. The details are presented in this paper.
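The tree-of-events idea can be sketched in a few lines, assuming independent basic events and simple AND/OR gates; the event names and probabilities below are invented for illustration:

```python
# Minimal fault tree evaluation: a node is either a basic event (str) or a
# (gate, children) tuple. AND gates multiply probabilities; OR gates
# combine as 1 - prod(1 - p). Independence of basic events is assumed.

def evaluate(node, probs):
    if isinstance(node, str):
        return probs[node]
    gate, children = node
    values = [evaluate(c, probs) for c in children]
    if gate == "AND":
        out = 1.0
        for v in values:
            out *= v
        return out
    if gate == "OR":
        out = 1.0
        for v in values:
            out *= 1.0 - v
        return 1.0 - out
    raise ValueError("unknown gate: %s" % gate)

# Top event: (pump fails AND backup fails) OR offsite power is lost
tree = ("OR", [("AND", ["pump", "backup"]), "power"])
probs = {"pump": 0.01, "backup": 0.05, "power": 0.001}
print(evaluate(tree, probs))  # ~0.0015
```

Assigning a probability to each basic event and propagating them upward in this way is what turns the qualitative tree into the probabilistic risk assessment the abstract mentions.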
Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of the fault data. This is especially the case in the low to moderate seismicity regions of Europe, where slow slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in the fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow us to identify the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and also to provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool will be illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defensible interpretations. At the nodes of the logic tree, different options that could be considered at each step of the fault-related seismic hazard will be considered. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry/slip rate of faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments to break together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e.
minimum distance between faults) and a second one that relies on physically-based simulations. The following nodes represent, for each rupture scenario, different rupture forecast models (i.e., characteristic or Gutenberg-Richter) and, for a given rupture forecast, two probability models commonly used in seismic hazard assessment: Poissonian or time-dependent. The final node represents an exhaustive set of ground motion prediction equations chosen in order to be compatible with the region. Finally, the expected probability of exceeding a given ground motion level is computed at each site. Results will be discussed for a few specific localities of the West Corinth Gulf.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection, including loops, between the nodes of the graph. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node.
The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860) available from COSMIC are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc. DEC, DeCstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
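The recursive top-down parse that the record describes, OR gates pooling their children's cut sets, AND gates forming cross-products, followed by minimization, can be sketched in a few lines. This is an illustrative mini-implementation, not the CUTSETS code itself:

```python
# Top-down cut set expansion: OR gates pool their children's cut sets,
# AND gates take cross-products, and any set that contains another cut
# set as a proper subset is pruned as non-minimal.

from itertools import product

def cut_sets(node):
    if isinstance(node, str):                      # basic event
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":
        sets = [s for cs in child_sets for s in cs]
    else:                                          # AND
        sets = [frozenset().union(*combo) for combo in product(*child_sets)]
    minimal = [s for s in sets if not any(t < s for t in sets)]
    return list(dict.fromkeys(minimal))            # dedupe, keep order

tree = ("AND", [("OR", ["A", "B"]), ("OR", ["A", "C"])])
for s in cut_sets(tree):
    print(sorted(s))  # ['A'] then ['B', 'C']
```

Here the repeated event A makes {A} a single-point failure: the cross-product initially yields {A}, {A, C}, {A, B}, and {B, C}, and minimization removes the supersets of {A}.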
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
Fire safety in transit systems fault tree analysis
DOT National Transportation Integrated Search
1981-09-01
Fire safety countermeasures applicable to transit vehicles are identified and evaluated. This document contains fault trees which illustrate the sequences of events which may lead to a transit-fire related casualty. A description of the basis for the...
System Analysis by Mapping a Fault-tree into a Bayesian-network
NASA Astrophysics Data System (ADS)
Sheng, B.; Deng, C.; Wang, Y. H.; Tang, L. H.
2018-05-01
In view of the limitations of fault tree analysis in reliability assessment, Bayesian Network (BN) has been studied as an alternative technology. After a brief introduction to the method for mapping a Fault Tree (FT) into an equivalent BN, equations used to calculate the structure importance degree, the probability importance degree and the critical importance degree are presented. Furthermore, the correctness of these equations is proved mathematically. Starting from an aircraft landing gear's FT, an equivalent BN is developed and analysed. The results show that richer and more accurate information has been obtained through the BN method than through the FT, which demonstrates that the BN is a superior technique in both reliability assessment and fault diagnosis.
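The FT-to-BN mapping can be illustrated by giving each gate a deterministic conditional probability table (AND: fails iff all parents fail; OR: fails iff any parent fails) and computing the top-event marginal by enumerating the joint distribution; the marginal then matches the fault tree formula. The gates and probabilities below are invented:

```python
# Each fault tree gate becomes a BN node with a deterministic CPT
# (AND -> all parents failed, OR -> any parent failed); root nodes keep
# their prior failure probabilities. The top-event marginal is obtained
# by enumerating the joint distribution and matches the FT result.

from itertools import product

roots = {"A": 0.01, "B": 0.02, "C": 0.05}     # prior failure probabilities
gates = {"G1": ("OR", ["A", "B"]),            # listed in topological order
         "TOP": ("AND", ["G1", "C"])}

def top_marginal():
    p_top = 0.0
    names = list(roots)
    for states in product([0, 1], repeat=len(names)):
        joint = 1.0
        env = {}
        for n, s in zip(names, states):
            env[n] = s
            joint *= roots[n] if s else 1.0 - roots[n]
        for g, (kind, parents) in gates.items():
            vals = [env[p] for p in parents]
            env[g] = int(all(vals)) if kind == "AND" else int(any(vals))
        if env["TOP"]:
            p_top += joint
    return p_top

print(top_marginal())  # equals (1 - 0.99*0.98) * 0.05
```

The BN's advantage over the plain FT appears once evidence is entered (e.g. observing TOP = 1 and inferring the most probable root cause), which the enumeration above could be extended to support.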
A diagnosis system using object-oriented fault tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common Lisp using Flavors.
Reset Tree-Based Optical Fault Detection
Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon
2013-01-01
In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267
Fault tree applications within the safety program of Idaho Nuclear Corporation
NASA Technical Reports Server (NTRS)
Vesely, W. E.
1971-01-01
Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure or accident causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
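The Monte Carlo baseline the article compares against can be sketched as follows: each basic-event probability is drawn from a lognormal distribution parameterized, as is common in PRA, by a median and an error factor, pushed through the tree, and the percentiles of the top-event probability are read off the sorted samples. The tree and all parameters are illustrative assumptions:

```python
# Basic-event probabilities are drawn from lognormal distributions,
# parameterized by a median and an error factor EF = 95th pct / median,
# then pushed through the tree: top = (pump AND backup) OR power.
# The tree and all numbers here are illustrative.

import math
import random

random.seed(0)

def sample_lognormal(median, error_factor):
    sigma = math.log(error_factor) / 1.645    # EF = exp(1.645 * sigma)
    return min(median * math.exp(random.gauss(0.0, sigma)), 1.0)

samples = []
for _ in range(20000):
    p_pump = sample_lognormal(1e-2, 3.0)
    p_backup = sample_lognormal(5e-2, 3.0)
    p_power = sample_lognormal(1e-3, 10.0)
    p_and = p_pump * p_backup
    samples.append(1.0 - (1.0 - p_and) * (1.0 - p_power))
samples.sort()

print("median:", samples[len(samples) // 2])
print("95th  :", samples[int(0.95 * len(samples))])
```

The article's closed-form lognormal approximation targets exactly these percentiles without the sampling loop; the sketch shows the computational cost it avoids.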
Fault Tree Analysis as a Planning and Management Tool: A Case Study
ERIC Educational Resources Information Center
Witkin, Belle Ruth
1977-01-01
Fault Tree Analysis is an operations research technique used to analyze the most probable modes of failure in a system so that the system can be redesigned, or monitored more closely, to increase its likelihood of success. (Author)
Methodology for Designing Fault-Protection Software
NASA Technical Reports Server (NTRS)
Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin
2006-01-01
A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery, and has been successfully implemented in the Deep Impact spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notions of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion, categorized as no-opinion, acceptable, or unacceptable. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of a Symptom and its mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized, step-by-step fashion, relegating more system-level response to later tiers. Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, a MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic and logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a fault-tree analysis "top-down" approach and a functional failure-modes-and-effects analysis "bottom-up" approach. Via this process, the mitigation and recovery strategies for each Fault Containment Region scope the FP architecture in width versus depth.
Machine Learning of Fault Friction
NASA Astrophysics Data System (ADS)
Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.
2017-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to these acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus in which fault blocks surround fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick-slip and slow-slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774. Rouet-LeDuc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field. Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
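As an illustration of the kind of feature engineering such tree ensembles consume, the sketch below computes window statistics from a synthetic stand-in signal. The signal, window width, and feature choice are assumptions made here for illustration; window variance of the AE signal is reported as a dominant predictor in the cited work:

```python
import random

def window_features(signal, width):
    """Variance and kurtosis of non-overlapping windows of a continuous signal."""
    feats = []
    for i in range(0, len(signal) - width + 1, width):
        w = signal[i:i + width]
        mean = sum(w) / width
        var = sum((x - mean) ** 2 for x in w) / width
        kurt = (sum((x - mean) ** 4 for x in w) / width) / (var * var) if var else 0.0
        feats.append((var, kurt))
    return feats

random.seed(0)
# Hypothetical stand-in for a lab AE record: noise whose amplitude grows
# toward "failure", mimicking precursory acoustic emission.
signal = [random.gauss(0.0, 1.0 + t / 500.0) for t in range(1000)]
variances = [v for v, _ in window_features(signal, 100)]
print(variances[0], variances[-1])
```

In a real pipeline these per-window features, paired with the concurrent shear-stress measurement, would form the training set for the regression trees.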
NASA Astrophysics Data System (ADS)
Rodak, C. M.; McHugh, R.; Wei, X.
2016-12-01
The development and combination of horizontal drilling and hydraulic fracturing have unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating potential risks posed by directional hydraulic fracturing activities. These studies, balanced with the potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantification of the overall risk posed by hydraulic fracturing at the system level is rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is approached with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity- and quality-based impacts and subdivided based on the stage of the hydraulic fracturing process in which the data are relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The structure of the fault tree described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic. This project shows the utility of fault tree analysis for the visual communication of the potential risks of hydraulic fracturing activities on groundwater resources.
COMCAN: a computer program for common cause analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdick, G.R.; Marshall, N.H.; Wilson, J.R.
1976-05-01
The computer program, COMCAN, searches the fault tree minimal cut sets for shared susceptibility to various secondary events (common causes) and common links between components. In the case of common causes, a location check may also be performed by COMCAN to determine whether barriers to the common cause exist between components. The program can locate common manufacturers of components having events in the same minimal cut set. A relative ranking scheme for secondary event susceptibility is included in the program.
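COMCAN's screening idea — checking whether every component in a minimal cut set shares a susceptibility to the same secondary event, or a common manufacturer — reduces to set intersections. The component attributes and cut sets below are hypothetical illustrations:

```python
# Hypothetical component data for a COMCAN-style common-cause screen.
susceptibility = {
    "PUMP-A":  {"fire", "vibration"},
    "PUMP-B":  {"fire", "flood"},
    "VALVE-1": {"fire"},
    "VALVE-2": {"flood"},
}
manufacturer = {"PUMP-A": "Acme", "PUMP-B": "Acme",
                "VALVE-1": "Zenith", "VALVE-2": "Acme"}

min_cut_sets = [{"PUMP-A", "PUMP-B"}, {"PUMP-A", "VALVE-1"}, {"VALVE-1", "VALVE-2"}]

def common_causes(cut_set):
    """Secondary events every component in the cut set is susceptible to,
    plus a common manufacturer if all components share one."""
    shared = set.intersection(*(susceptibility[c] for c in cut_set))
    makers = {manufacturer[c] for c in cut_set}
    return shared, (next(iter(makers)) if len(makers) == 1 else None)

for cs in min_cut_sets:
    causes, maker = common_causes(cs)
    print(sorted(cs), sorted(causes), maker)
```

A cut set with a non-empty shared susceptibility is a common-cause candidate: one secondary event (e.g., a fire) could fail every component in it at once.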
Fault Tree Analysis: An Emerging Methodology for Instructional Science.
ERIC Educational Resources Information Center
Wood, R. Kent; And Others
1979-01-01
Describes Fault Tree Analysis, a tool for systems analysis which attempts to identify possible modes of failure in systems to increase the probability of success. The article defines the technique and presents the steps of FTA construction, focusing on its application to education. (RAO)
Program listing for fault tree analysis of JPL technical report 32-1542
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
The computer program listing for the MAIN program and those subroutines unique to the fault tree analysis are described. Some subroutines are used for analyzing the reliability block diagram. The program is written in FORTRAN 5 and is running on a UNIVAC 1108.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question.
The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values. None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The risk reduction ratio (RRR) for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event.
Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The risk increase ratio (RIR) for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
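A minimal sketch of the RRR and RIR importance measures described above, assuming the standard SAPHIRE-style definitions (risk reduction ratio = baseline top-event probability divided by its value with an event made perfect; risk increase ratio = its value with the event failed divided by baseline). The three-event tree and its probabilities are hypothetical:

```python
def top(p):
    """Hypothetical tree: TOP = (A AND B) OR C, independent basic events."""
    ab = p["A"] * p["B"]
    return ab + p["C"] - ab * p["C"]

base = {"A": 0.10, "B": 0.20, "C": 0.01}
f0 = top(base)   # baseline relative risk

importance = {}
for e in base:
    rrr = f0 / top({**base, e: 0.0})   # risk reduction ratio: event made perfect
    rir = top({**base, e: 1.0}) / f0   # risk increase ratio: event failed
    importance[e] = (rrr, rir)
    print(e, round(rrr, 2), round(rir, 2))
```

Note how the two rankings differ: improving A (inside the AND gate) buys the largest risk reduction, while failing C (a single-event path to the top) produces the largest risk increase — exactly the distinction the MSET evaluation exploits.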
The Role of Coseismic Coulomb Stress Changes in Shaping the Hard Link Between Normal Fault Segments
NASA Astrophysics Data System (ADS)
Hodge, M.; Fagereng, Å.; Biggs, J.
2018-01-01
The mechanism and evolution of fault linkage is important in the growth and development of large faults. Here we investigate the role of coseismic stress changes in shaping the hard links between parallel normal fault segments (or faults), by comparing numerical models of the Coulomb stress change from simulated earthquakes on two en echelon fault segments to natural observations of hard-linked fault geometry. We consider three simplified linking fault geometries: (1) fault bend, (2) breached relay ramp, and (3) strike-slip transform fault. We consider scenarios where either one or both segments rupture and vary the distance between segment tips. Fault bends and breached relay ramps are favored where segments underlap or when the strike-perpendicular distance between overlapping segments is less than 20% of their total length, matching all 14 documented examples. Transform fault linkage geometries are preferred when overlapping segments are laterally offset at larger distances. Few transform faults exist in continental extensional settings, and our model suggests that propagating faults or fault segments may first link through fault bends or breached ramps before reaching sufficient overlap for a transform fault to develop. Our results suggest that Coulomb stresses arising from multisegment ruptures or repeated earthquakes are consistent with natural observations of the geometry of hard links between parallel normal fault segments.
Expert systems for fault diagnosis in nuclear reactor control
NASA Astrophysics Data System (ADS)
Jalel, N. A.; Nicholson, H.
1990-11-01
An expert system for accident analysis and fault diagnosis for the Loss Of Fluid Test (LOFT) reactor, a small-scale pressurized water reactor, was developed for a personal computer. The knowledge of the system is represented using a production-rule approach with a backward-chaining inference engine. The data base of the system includes simulated dependent state variables of the LOFT reactor model. Another system is designed to assist the operator in choosing the appropriate cooling mode and to diagnose faults in the selected cooling system. Its knowledge base is built from the response tree, which links a list of very specific accident sequences to a set of generic emergency procedures that help the operator monitor system status, differentiate between accident sequences, and select the correct procedures. Both systems are written in the TURBO PROLOG language and can be run on an IBM PC compatible with 640K RAM, a 40-Mbyte hard disk, and color graphics.
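The production-rule, backward-chaining scheme described above can be sketched in a few lines. The rules and facts here are hypothetical stand-ins, not the LOFT knowledge base:

```python
# Hypothetical diagnostic rules: each goal maps to a list of alternative
# antecedent lists (rule bodies). Facts are observed plant conditions.
rules = {
    "loss_of_coolant": [["low_pressure", "high_containment_humidity"]],
    "select_emergency_cooling": [["loss_of_coolant"]],
}
facts = {"low_pressure", "high_containment_humidity"}

def prove(goal):
    """Backward chaining: a goal holds if it is a known fact, or if every
    antecedent of some rule concluding it can itself be proven."""
    if goal in facts:
        return True
    return any(all(prove(a) for a in antecedents)
               for antecedents in rules.get(goal, []))

print(prove("select_emergency_cooling"))
```

This is the same goal-driven search a Prolog engine performs natively, which is why TURBO PROLOG was a natural implementation choice.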
Direct evaluation of fault trees using object-oriented programming techniques
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1989-01-01
Object-oriented programming techniques are used in an algorithm for the direct evaluation of fault trees. The algorithm combines a simple bottom-up procedure for trees without repeated events with a top-down recursive procedure for trees with repeated events. The object-oriented approach results in a dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. An example is presented in which the results of the algorithm implemented with conventional techniques are compared to those of the object-oriented approach.
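A rough sketch of the two evaluation modes — direct bottom-up combination for independent subtrees, and top-down conditioning (Shannon expansion) when a basic event repeats — under the assumption of independent basic events and a hypothetical tree. This is not the paper's exact algorithm, only the underlying idea:

```python
from collections import Counter

# Fault tree as nested tuples ("AND"/"OR", child, ...) with string leaves.

def leaves(tree):
    """Multiset of basic-event names under this node."""
    if isinstance(tree, str):
        return Counter({tree: 1})
    total = Counter()
    for child in tree[1:]:
        total += leaves(child)
    return total

def prob(tree, probs, fixed=None):
    fixed = fixed or {}
    if isinstance(tree, str):
        return fixed.get(tree, probs[tree])
    repeated = [n for n, k in leaves(tree).items() if k > 1 and n not in fixed]
    if repeated:
        # Top-down step: condition (Shannon-expand) on a repeated event so the
        # conditioned subproblems contain no repeats and solve bottom-up.
        e, p = repeated[0], probs[repeated[0]]
        return (p * prob(tree, probs, {**fixed, e: 1.0})
                + (1.0 - p) * prob(tree, probs, {**fixed, e: 0.0}))
    acc = 1.0
    for child in tree[1:]:
        q = prob(child, probs, fixed)
        acc *= q if tree[0] == "AND" else (1.0 - q)
    return acc if tree[0] == "AND" else 1.0 - acc

probs = {"A": 0.1, "B": 0.2, "C": 0.3}
tree = ("OR", ("AND", "A", "C"), ("AND", "B", "C"))   # C is a repeated event
print(prob(tree, probs))   # exact: 0.3 * (1 - 0.9*0.8) = 0.084
```

Treating the two AND gates as independent would give 1 − (1 − 0.03)(1 − 0.06) ≈ 0.0882; conditioning on the shared event C recovers the exact 0.084, which is why repeated events need the top-down path.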
NASA Astrophysics Data System (ADS)
Guns, K. A.; Bennett, R. A.; Blisniuk, K.
2017-12-01
To better evaluate the distribution and transfer of strain and slip along the Southern San Andreas Fault (SSAF) zone in the northern Coachella Valley in southern California, we integrate geological and geodetic observations to test whether strain is being transferred away from the SSAF system toward the Eastern California Shear Zone through microblock rotation of the Eastern Transverse Ranges (ETR). The faults of the ETR consist of five east-west-trending left-lateral strike-slip faults that have measured cumulative offsets of up to 20 km and as low as 1 km. Current kinematic and block models yield a variety of slip-rate estimates, from as low as zero to as high as 7 mm/yr, suggesting a gap in our understanding of the role these faults play in the larger system. To determine whether present-day block rotation along these faults is contributing to strain transfer in the region, we are applying 10Be surface exposure dating methods to observed offset channel and alluvial fan deposits in order to estimate fault slip rates along two faults in the ETR. We present observations of offset geomorphic landforms using field mapping and LiDAR data at three sites along the Blue Cut Fault and one site along the Smoke Tree Wash Fault in Joshua Tree National Park which indicate recent Quaternary fault activity. Initial results of site mapping and clast count analyses reveal at least three stages of offset, including potential Holocene offsets, for one site along the Blue Cut Fault, while preliminary 10Be geochronology is in progress. This geologic slip rate data, combined with our new geodetic surface velocity field derived from updated campaign-based GPS measurements within Joshua Tree National Park, will allow us to construct a suite of elastic fault block models to elucidate rates of strain transfer away from the SSAF and how that strain transfer may be affecting the length of the interseismic period along the SSAF.
FAULT TREE ANALYSIS FOR EXPOSURE TO REFRIGERANTS USED FOR AUTOMOTIVE AIR CONDITIONING IN THE U.S.
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servic...
A Fault Tree Approach to Analysis of Organizational Communication Systems.
ERIC Educational Resources Information Center
Witkin, Belle Ruth; Stephens, Kent G.
Fault Tree Analysis (FTA) is a method of examing communication in an organization by focusing on: (1) the complex interrelationships in human systems, particularly in communication systems; (2) interactions across subsystems and system boundaries; and (3) the need to select and "prioritize" channels which will eliminate noise in the…
Applying fault tree analysis to the prevention of wrong-site surgery.
Abecassis, Zachary A; McElroy, Lisa M; Patel, Ronak M; Khorzad, Rebeca; Carroll, Charles; Mehrotra, Sanjay
2015-01-01
Wrong-site surgery (WSS) is a rare event that occurs to hundreds of patients each year. Despite national implementation of the Universal Protocol over the past decade, development of effective interventions remains a challenge. We performed a systematic review of the literature reporting root causes of WSS and used the results to perform a fault tree analysis to assess the reliability of the system in preventing WSS and identifying high-priority targets for interventions aimed at reducing WSS. Process components where a single error could result in WSS were labeled with OR gates; process aspects reinforced by verification were labeled with AND gates. The overall redundancy of the system was evaluated based on prevalence of AND gates and OR gates. In total, 37 studies described risk factors for WSS. The fault tree contains 35 faults, most of which fall into five main categories. Despite the Universal Protocol mandating patient verification, surgical site signing, and a brief time-out, a large proportion of the process relies on human transcription and verification. Fault tree analysis provides a standardized perspective of errors or faults within the system of surgical scheduling and site confirmation. It can be adapted by institutions or specialties to lead to more targeted interventions to increase redundancy and reliability within the preoperative process. Copyright © 2015 Elsevier Inc. All rights reserved.
An Application of the Geo-Semantic Micro-services in Seamless Data-Model Integration
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Liu, R.; Hu, Y.; Marini, L.; Peckham, S. D.; Hsu, L.
2016-12-01
Langenheim, Victoria E.; Rymer, Michael J.; Catchings, Rufus D.; Goldman, Mark R.; Watt, Janet T.; Powell, Robert E.; Matti, Jonathan C.
2016-03-02
We describe high-resolution gravity and seismic refraction surveys acquired to determine the thickness of valley-fill deposits and to delineate geologic structures that might influence groundwater flow beneath the Smoke Tree Wash area in Joshua Tree National Park. These surveys identified a sedimentary basin that is fault-controlled. A profile across the Smoke Tree Wash fault zone reveals low gravity values and seismic velocities that coincide with a mapped strand of the Smoke Tree Wash fault. Modeling of the gravity data reveals a basin about 2–2.5 km long and 1 km wide that is roughly centered on this mapped strand, and bounded by inferred faults. According to the gravity model the deepest part of the basin is about 270 m, but this area coincides with low velocities that are not characteristic of typical basement complex rocks. Most likely, the density contrast assumed in the inversion is too high or the uncharacteristically low velocities represent highly fractured or weathered basement rocks, or both. A longer seismic profile extending onto basement outcrops would help differentiate which scenario is more accurate. The seismic velocities also determine the depth to water table along the profile to be about 40–60 m, consistent with water levels measured in water wells near the northern end of the profile.
A Fault Tree Approach to Needs Assessment -- An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
A "failsafe" technology is presented based on a new unified theory of needs assessment. Basically the paper discusses fault tree analysis as a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur and then suggesting high priority avoidance strategies for those…
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu
2016-05-01
Recently, China has frequently experienced large-scale, severe, and persistent haze pollution due to surging urbanization and industrialization and rapid growth in the number of motor vehicles and in energy consumption. Vehicle emissions from the combustion of large quantities of fossil fuels are undoubtedly a critical factor in the haze pollution. This work focuses on the causation mechanism of haze pollution related to vehicle emissions for Guangzhou city, employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are discussed and identified using this deductive FTA method. Qualitative and quantitative assessments of the fault tree system are carried out based on the structure, probability, and critical importance degree analysis of the risk factors. The study may provide a new, simple, and effective tool/strategy for causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sanchez-Vila, X.; de Barros, F.; Bolster, D.; Nowak, W.
2010-12-01
Assessing the potential risk of hydro(geo)logical supply systems to human population is an interdisciplinary field. It relies on the expertise in fields as distant as hydrogeology, medicine, or anthropology, and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties in hydrological, physiological and human behavioral parameters. We propose the use of fault trees to address the task of probabilistic risk analysis (PRA) and to support related management decisions. Fault trees allow decomposing the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural “Divide and Conquer” approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance and stage of analysis. The separation in modules allows for a true inter- and multi-disciplinary approach. This presentation highlights the three novel features of our work: (1) we define failure in terms of risk being above a threshold value, whereas previous studies used auxiliary events such as exceedance of critical concentration levels, (2) we plot an integrated fault tree that handles uncertainty in both hydrological and health components in a unified way, and (3) we introduce a new form of stochastic fault tree that allows to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
A fuzzy decision tree for fault classification.
Zio, Enrico; Baraldi, Piero; Popescu, Irina C
2008-02-01
In plant accident management, the control room operators are required to identify the causes of the accident based on the different patterns of evolution developing in the monitored process variables. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
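The flavor of such transparent fuzzy if-then classification can be sketched as follows. The membership functions, process variables, and fault classes are invented for illustration, not mined from BWR feedwater data:

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(level, temp):
    """Classify a fault from two normalized process variables (0..1)."""
    low_level  = tri(level, 0.0, 0.2, 0.5)
    high_level = tri(level, 0.5, 0.8, 1.0)
    high_temp  = tri(temp, 0.5, 0.8, 1.0)
    # Transparent if-then rules; each fires with strength = min of its antecedents.
    rules = {
        "feedwater_leak":   min(low_level, high_temp),   # IF level low AND temp high
        "valve_stuck_open": high_level,                  # IF level high
        "normal":           1.0 - max(low_level, high_level),
    }
    return max(rules, key=rules.get)                     # class of the strongest rule

print(classify(0.2, 0.8))
```

Because each class is reached through an explicit linguistic rule, an operator can read off *why* a pattern was labeled a leak, which is the interpretability requirement the abstract emphasizes.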
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Mustafa Sacit; none,; Flanagan, George F.
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
FTC - THE FAULT-TREE COMPILER (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) to a user-specified number of digits of accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree, such as a component failure rate or a specific event probability, by allowing the user to vary one failure rate or failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. 
Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
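The gate semantics listed in the abstract are easy to make concrete. Below is a minimal sketch of top-event evaluation under the usual assumption of statistically independent basic events; the function names and probabilities are illustrative, not FTC's actual interface:

```python
# Sketch of top-event probability evaluation for a fault tree with
# independent basic events. Gate types follow the abstract: AND, OR,
# EXCLUSIVE OR, INVERT, and M OF N. All names and values are illustrative.
from itertools import combinations
from math import prod

def p_and(ps):                 # AND gate: all inputs fail
    return prod(ps)

def p_or(ps):                  # OR gate: at least one input fails
    return 1.0 - prod(1.0 - p for p in ps)

def p_xor(p1, p2):             # EXCLUSIVE OR: exactly one of two inputs fails
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)

def p_invert(p):               # INVERT gate: complement event
    return 1.0 - p

def p_m_of_n(ps, m):           # M OF N gate: at least m of the n inputs fail
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for failed in combinations(range(n), k):
            total += prod(ps[i] if i in failed else 1.0 - ps[i]
                          for i in range(n))
    return total

# Example tree: top = OR(AND(a, b), 2-of-3(c, d, e)), illustrative rates
a, b, c, d, e = 1e-3, 2e-3, 1e-2, 1e-2, 1e-2
top = p_or([p_and([a, b]), p_m_of_n([c, d, e], 2)])
```

The exact M-of-N enumeration above is exponential in n; a production tool like FTC would use a more efficient recursion, but the probabilities computed are the same.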
FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) to a user-specified number of digits of accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree, such as a component failure rate or a specific event probability, by allowing the user to vary one failure rate or failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. 
Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…
The engine fuel system fault analysis
NASA Astrophysics Data System (ADS)
Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei
2017-05-01
To improve the reliability of the engine fuel system, typical fault factors of the engine fuel system were analyzed from the points of view of structure and function. Fault characteristics were obtained by building the fuel system fault tree. By applying the failure mode and effects analysis (FMEA) method, several attributes of the key component, the fuel regulator, were obtained, including its fault modes, fault causes, and fault effects. All of this lays the foundation for the subsequent development of a fault diagnosis system.
Fault tree analysis: NiH2 aerospace cells for LEO mission
NASA Technical Reports Server (NTRS)
Klein, Glenn C.; Rash, Donald E., Jr.
1992-01-01
The Fault Tree Analysis (FTA) is one of several reliability analyses or assessments applied to battery cells to be utilized in typical Electric Power Subsystems for spacecraft in low Earth orbit missions. FTA is generally the process of reviewing and analytically examining a system or equipment in such a way as to emphasize the lower level fault occurrences which directly or indirectly contribute to the major fault or top level event. This qualitative FTA addresses the potential of occurrence for five specific top level events: hydrogen leakage through either discrete leakage paths or through pressure vessel rupture; and four distinct modes of performance degradation - high charge voltage, suppressed discharge voltage, loss of capacity, and high pressure.
Modular techniques for dynamic fault-tree analysis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Dugan, Joanne B.
1992-01-01
It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested using an americium-241 alpha radiation source. Furthermore, several parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability, and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
Learning from examples - Generation and evaluation of decision trees for software resource analysis
NASA Technical Reports Server (NTRS)
Selby, Richard W.; Porter, Adam A.
1988-01-01
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
NASA Astrophysics Data System (ADS)
Young, C. S.; Dawers, N. H.
2017-12-01
Fault growth is often accomplished by linking a series of en echelon faults through relay ramps. A relay ramp is the area between two overlapping fault segments that tilts and deforms as the faults accrue displacement. The structural evolution of breached normal fault relay ramps remains poorly understood because of the difficulty in defining how slip is partitioned between the most basinward fault (known as the outboard fault), the overlapping fault (inboard fault), and any ramp-breaching linking faults. Along the Warner Valley fault in south-central Oregon, two relay ramps displaying different fault linkage geometries are lined with a series of paleo-lacustrine shorelines that record a Pleistocene paleolake regression. The inner edges of these shorelines act as paleo-horizontal datums that have been deformed by fault activity, and are used to measure relative slip variations across the relay ramp bounding faults. By measuring the elevation changes of shoreline inner edges using a 10 m digital elevation model (DEM), we estimate the amount of slip partitioned between the inboard, outboard and ramp-breaching linking faults. In order to attribute shoreline deformation to fault activity we identify shoreline elevation anomalies, where deformation exceeds a ± 3.34 m window, which encompasses our conservative estimates of natural variability in the shoreline geomorphology and the error associated with the data collection. Fault activity along the main length of the fault for each ramp-breaching style is concentrated near the intersection of the linking fault and the outboard portion of the main fault segment. However, fault activity along the outboard fault tip varies according to breaching style. At a footwall breach the entire outboard fault tip appears relatively inactive. At a mid-ramp breach the outboard fault tip remains relatively active because of the proximity of the linking fault to this fault tip.
Decision tree and PCA-based fault diagnosis of rotating machinery
NASA Astrophysics Data System (ADS)
Sun, Weixiang; Chen, Jin; Li, Jiaqing
2007-04-01
After analysing the flaws of conventional fault diagnosis methods, data mining technology is introduced to the fault diagnosis field, and a new method based on the C4.5 decision tree and principal component analysis (PCA) is proposed. In this method, PCA is used to reduce features after data collection, preprocessing and feature extraction. Then, C4.5 is trained on the samples to generate a decision tree model with diagnosis knowledge. Finally, the tree model is used to perform diagnosis analysis. To validate the proposed method, six kinds of running states (normal or without any defect, unbalance, rotor radial rub, oil whirl, shaft crack, and a simultaneous state of unbalance and radial rub) are simulated on a Bently Rotor Kit RK4 to compare the C4.5 and PCA-based method against a back-propagation neural network (BPNN). The results show that the C4.5 and PCA-based diagnosis method has higher accuracy and needs less training time than the BPNN.
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)
1995-01-01
A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched VQ. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.
Fault tree analysis for system modeling in case of intentional EMI
NASA Astrophysics Data System (ADS)
Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.
2011-08-01
The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other hand increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically, since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is fault tree analysis (FTA). The FTA is used to determine the system failure probability and also the main contributors to its failure. In this paper the fault tree analysis is introduced and a possible application of the method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) for a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is modeled as a truncated normal random variable, characterized by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling by logic tree, is used in order to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. The logic tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. 
However, in this study we do not investigate the sensitivity of the mean hazard results to the choice of GMPE. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
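The truncated-normal treatment of fault-parameter uncertainty can be sketched with rejection sampling; the area-magnitude scaling relation and every number below are illustrative stand-ins, not the Southern Apennines model:

```python
# Sketch of Monte Carlo sampling of fault parameters: each parameter is
# drawn from a normal distribution truncated at +/- 2 standard deviations
# (rejection sampling), and a characteristic magnitude is recomputed per
# draw. The scaling constant and all values are illustrative.
import math
import random

def trunc_gauss(mean, sd, nsd=2.0, rng=random):
    """Sample a normal variate truncated to mean +/- nsd * sd."""
    while True:
        x = rng.gauss(mean, sd)
        if abs(x - mean) <= nsd * sd:
            return x

def char_magnitude(length_km, width_km):
    # Illustrative area-magnitude scaling: Mw = log10(A) + 4.0
    return math.log10(length_km * width_km) + 4.0

random.seed(42)
# 200 simulations varying fault length and width simultaneously
samples = [char_magnitude(trunc_gauss(30.0, 3.0), trunc_gauss(12.0, 1.0))
           for _ in range(200)]
```

Fixing one parameter at its mean while sampling the others, as the study describes, isolates that parameter's contribution to the overall hazard variability.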
Li, Jia; Wang, Deming; Huang, Zonghou
2017-01-01
Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents. PMID:28793348
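The fuzzy gate arithmetic can be sketched as pointwise combination of the four trapezoid parameters, a common approximation for AND/OR gates over fuzzy probabilities; the event values below are invented, not the CDE data:

```python
# Sketch of fuzzy fault-tree gate arithmetic with trapezoidal fuzzy
# probabilities (a <= b <= c <= d). Pointwise combination is the usual
# approximation; all event values are illustrative.
def fuzzy_and(*events):
    # AND gate: multiply corresponding trapezoid points
    result = (1.0, 1.0, 1.0, 1.0)
    for ev in events:
        result = tuple(r * e for r, e in zip(result, ev))
    return result

def fuzzy_or(*events):
    # OR gate: 1 - prod(1 - p), applied pointwise
    result = (0.0, 0.0, 0.0, 0.0)
    for ev in events:
        result = tuple(1.0 - (1.0 - r) * (1.0 - e) for r, e in zip(result, ev))
    return result

def defuzzify(trap):
    # simple point estimate: average of the four trapezoid parameters
    a, b, c, d = trap
    return (a + b + c + d) / 4.0

ignition = (0.01, 0.02, 0.03, 0.04)    # illustrative basic events
dust_cloud = (0.05, 0.06, 0.08, 0.10)
explosion = fuzzy_and(ignition, dust_cloud)
```

Expert weighting, as described in the abstract, would enter earlier: each expert's trapezoid is scaled by that expert's weight before the event trapezoids are aggregated and fed into the gates.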
Shi, Lei; Shuai, Jian; Xu, Kui
2014-08-15
Fire and explosion accidents of steel oil storage tanks (FEASOST) occur occasionally during the petroleum and chemical industry production and storage processes and often have devastating impact on lives, the environment and property. To contribute towards the development of a quantitative approach for assessing the occurrence probability of FEASOST, a fault tree of FEASOST is constructed that identifies various potential causes. Traditional fault tree analysis (FTA) can achieve quantitative evaluation if the failure data of all of the basic events (BEs) are available, which is almost impossible due to the lack of detailed data, as well as other uncertainties. This paper makes an attempt to perform FTA of FEASOST by a hybrid application between an expert elicitation based improved analysis hierarchy process (AHP) and fuzzy set theory, and the occurrence possibility of FEASOST is estimated for an oil depot in China. A comparison between statistical data and calculated data using fuzzy fault tree analysis (FFTA) based on traditional and improved AHP is also made. Sensitivity and importance analysis has been performed to identify the most crucial BEs leading to FEASOST that will provide insights into how managers should focus effective mitigation.
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
Fault tree analysis for urban flooding.
ten Veldhuis, J A E; Clemens, F H L R; van Gelder, P H A J M
2009-01-01
Traditional methods to evaluate flood risk generally focus on heavy storm events as the principal cause of flooding. Conversely, fault tree analysis is a technique that aims at modelling all potential causes of flooding. It quantifies both overall flood probability and relative contributions of individual causes of flooding. This paper presents a fault model for urban flooding and an application to the case of Haarlem, a city of 147,000 inhabitants. Data from a complaint register, rainfall gauges and hydrodynamic model calculations are used to quantify probabilities of basic events in the fault tree. This results in a flood probability of 0.78/week for Haarlem. It is shown that gully pot blockages contribute to 79% of flood incidents, whereas storm events contribute only 5%. This implies that for this case more efficient gully pot cleaning is a more effective strategy to reduce flood probability than enlarging drainage system capacity. Whether this is also the most cost-effective strategy can only be decided after risk assessment has been complemented with a quantification of consequences of both types of events. To do this will be the next step in this study.
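The structure of such a fault tree's top gate, and the derivation of relative contributions, can be sketched as follows; the per-cause weekly probabilities are invented placeholders, not the Haarlem figures:

```python
# Sketch of the top OR gate of an urban-flooding fault tree: per-cause
# weekly probabilities combine into an overall flood probability, and
# relative contributions rank the causes. All numbers are illustrative.
causes = {
    "gully pot blockage": 0.62,
    "sewer pipe blockage": 0.10,
    "storm exceeding capacity": 0.04,
    "pumping station failure": 0.02,
}

# Top event: flooding occurs if any cause occurs (independence assumed)
p_no_flood = 1.0
for p in causes.values():
    p_no_flood *= 1.0 - p
p_flood = 1.0 - p_no_flood

# Relative contribution of each cause (share of expected flood incidents)
total = sum(causes.values())
contrib = {name: p / total for name, p in causes.items()}
```

Ranking `contrib` is what supports the paper's management conclusion: the cheapest reduction in `p_flood` comes from attacking the dominant basic event, not necessarily the most dramatic one.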
NASA Astrophysics Data System (ADS)
Koji, Yusuke; Kitamura, Yoshinobu; Kato, Yoshikiyo; Tsutsui, Yoshio; Mizoguchi, Riichiro
In conceptual design, it is important to develop functional structures which reflect the rich experience in the knowledge from previous design failures. Especially, if a designer learns possible abnormal behaviors from a previous design failure, he or she can add an additional function which prevents such abnormal behaviors and faults. To do this, it is a crucial issue to share such knowledge about possible faulty phenomena and how to cope with them. In fact, a part of such knowledge is described in FMEA (Failure Mode and Effect Analysis) sheets, function structure models for systematic design and fault trees for FTA (Fault Tree Analysis).
Failure analysis of energy storage spring in automobile composite brake chamber
NASA Astrophysics Data System (ADS)
Luo, Zai; Wei, Qing; Hu, Xiaofeng
2015-02-01
This paper takes the energy storage spring of the parking brake cavity, part of an automobile composite brake chamber, as its research object. A fault tree model of parking brake failure caused by the energy storage spring was constructed using the fault tree analysis method. Next, the parking brake failure model of the energy storage spring was established by analyzing the working principle of the composite brake chamber. Finally, working-load and push-rod-stroke data measured on a comprehensive valve test bed were used to validate the failure model. The experimental results show that the failure model can distinguish whether the energy storage spring has failed.
A fast bottom-up algorithm for computing the cut sets of noncoherent fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corynen, G.C.
1987-11-01
An efficient procedure for finding the cut sets of large fault trees has been developed. Designed to address coherent or noncoherent systems, dependent events, and shared or common-cause events, the method, called SHORTCUT, is based on a fast algorithm for transforming a noncoherent tree into a quasi-coherent tree (COHERE) and on a new algorithm for reducing cut sets (SUBSET). To assure sufficient clarity and precision, the procedure is discussed in the language of simple sets, which is also developed in this report. Although the new method has not yet been fully implemented on the computer, we report theoretical worst-case estimates of its computational complexity. 12 refs., 10 figs.
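A generic top-down (MOCUS-style) cut-set expansion with subset reduction conveys the flavor of the cut-set reduction step; this is a textbook sketch for coherent trees, not the SHORTCUT procedure itself:

```python
# Generic top-down cut-set expansion with subset reduction, in the spirit
# of (but not a reproduction of) SHORTCUT's SUBSET step. A tree node is
# ("AND", children), ("OR", children), or a basic-event name.
def cut_sets(node):
    if isinstance(node, str):                      # basic event
        return [frozenset([node])]
    op, children = node
    child_sets = [cut_sets(c) for c in children]
    if op == "OR":                                 # union of alternatives
        result = [cs for group in child_sets for cs in group]
    elif op == "AND":                              # cross-product of children
        result = [frozenset()]
        for group in child_sets:
            result = [r | cs for r in result for cs in group]
    else:
        raise ValueError(op)
    return minimize(result)

def minimize(sets):
    """Subset reduction: drop any cut set that contains another."""
    out = []
    for s in sorted(sets, key=len):
        if not any(kept <= s for kept in out):
            out.append(s)
    return out

tree = ("OR", [("AND", ["A", "B"]), ("AND", ["A", "B", "C"]), "D"])
mcs = cut_sets(tree)   # minimal cut sets: {A, B} and {D}
```

Handling noncoherent trees, as SHORTCUT does via COHERE, would require carrying complemented events through this expansion, which is precisely the complication the report's quasi-coherent transformation removes.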
Electromagnetic Compatibility (EMC) in Microelectronics.
1983-02-01
[OCR fragments of the report's front matter; recoverable content follows.] References cited include "Fault Tree Analysis", System Safety Symposium, June 8-9, 1965, Seattle: The Boeing Company; and Fussell, J.B., "Fault Tree Analysis-Concepts and...". The report describes a procedure for assessing EMC in microelectronics; its contents include: 2.1 Background; 2.2 The Probabilistic Nature of EMC; 2.3 The Probabilistic Approach; 2.4 The Compatibility Factor; 3 Applying Probabilistic criteria.
A graphical language for reliability model generation
NASA Technical Reports Server (NTRS)
Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.
1990-01-01
A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
NASA Astrophysics Data System (ADS)
Wu, Jianing; Yan, Shaoze; Xie, Liyang
2011-12-01
To address the impact of solar array anomalies, it is important to analyze solar array reliability. This paper establishes fault tree analysis (FTA) and fuzzy reasoning Petri net (FRPN) models of a solar array mechanical system and analyzes reliability to identify the mechanisms of solar array faults. The indices final truth degree (FTD) and cosine matching function (CMF) are employed to resolve the issue of how to evaluate the importance and influence of different faults, and an improved reliability analysis method is developed based on sorting by FTD and CMF. An example is analyzed using the proposed method. The analysis results show that the harsh thermal environment and impact caused by particles in space are the most important causes of solar array faults. Furthermore, other fault modes and the corresponding improvement methods are discussed. The results reported in this paper could be useful for spacecraft designers, particularly in the process of redesigning the solar array and scheduling its reliability growth plan.
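A cosine matching function over truth-degree vectors is ordinary cosine similarity; the sketch below uses invented symptom vectors and fault patterns, not the paper's data:

```python
# Sketch of a cosine matching function (CMF) over truth-degree vectors,
# used here to rank candidate fault causes against observed symptoms.
# All vectors are illustrative.
import math

def cmf(u, v):
    """Cosine similarity between two equal-length truth-degree vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

observed = [0.9, 0.2, 0.1]   # observed symptom truth degrees (invented)
patterns = {
    "thermal environment": [0.8, 0.3, 0.1],   # hypothetical fault patterns
    "particle impact": [0.1, 0.2, 0.9],
}
best = max(patterns, key=lambda name: cmf(observed, patterns[name]))
```

Sorting candidate faults by a score like this, alongside their final truth degrees, is the kind of two-index ranking the improved method describes.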
Seera, Manjeevan; Lim, Chee Peng; Ishak, Dahaman; Singh, Harapajan
2012-01-01
In this paper, a novel approach to detect and classify comprehensive fault conditions of induction motors using a hybrid fuzzy min-max (FMM) neural network and classification and regression tree (CART) is proposed. The hybrid model, known as FMM-CART, exploits the advantages of both FMM and CART for undertaking data classification and rule extraction problems. A series of real experiments is conducted, whereby the motor current signature analysis method is applied to form a database comprising stator current signatures under different motor conditions. The signal harmonics from the power spectral density are extracted as discriminative input features for fault detection and classification with FMM-CART. A comprehensive list of induction motor fault conditions, viz., broken rotor bars, unbalanced voltages, stator winding faults, and eccentricity problems, has been successfully classified using FMM-CART with good accuracy rates. The results are comparable, if not better, than those reported in the literature. Useful explanatory rules in the form of a decision tree are also elicited from FMM-CART to analyze and understand different fault conditions of induction motors.
Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan
2014-03-01
Although reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in fault diagnosis of an induction motor when appropriate pre-processing is not performed. Therefore, a new boundary-analysis-based feature extraction method in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of only one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bars and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters
NASA Technical Reports Server (NTRS)
Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)
1998-01-01
We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while tree uplinks remain expensive.
Fault tree safety analysis of a large Li/SOCl2 spacecraft battery
NASA Technical Reports Server (NTRS)
Uy, O. Manuel; Maurer, R. H.
1987-01-01
The results of the safety fault tree analysis on the eight-module, 576 F-cell Li/SOCl2 battery, both on the spacecraft and in the integration and test environment on the ground prior to launch, are presented. The analysis showed that with the right combination of blocking diodes, electrical fuses, thermal fuses, thermal switches, cell balance, cell vents, and battery module vents, the probability of a single cell or a 72-cell module exploding can be reduced to 0.000001, essentially the probability of an explosion for unexplained reasons.
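The protective-device logic described above can be sketched as an AND gate: an explosion requires an initiating cell fault and the simultaneous failure of every independent protection, so the top-event probability is the product of the individual probabilities. A minimal illustration in Python; all numbers are invented for the example and are not from the study:

```python
# AND-gate combination of an initiating fault with independent protection
# failures. Probabilities below are illustrative assumptions only.
from math import prod

def and_gate(probabilities):
    """Probability that all independent input events occur (AND gate)."""
    return prod(probabilities)

p_initiating_fault = 1e-2          # e.g. internal short in one cell (assumed)
p_protection_failures = [          # each protection fails on demand (assumed)
    1e-1,  # blocking diode
    1e-1,  # electrical fuse
    1e-1,  # thermal fuse
    1e-1,  # cell vent
]

p_explosion = and_gate([p_initiating_fault] + p_protection_failures)
print(p_explosion)  # approximately 1e-6 with these illustrative numbers
```

Stacking independent protections is what drives the top-event probability down multiplicatively, which is the qualitative point of the abstract's 0.000001 result.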
The Effects of Fault Bends on Rupture Propagation: A Parameter Study
NASA Astrophysics Data System (ADS)
Lozos, J. C.; Oglesby, D. D.; Duan, B.; Wesnousky, S. G.
2008-12-01
Segmented faults with stepovers are ubiquitous and occur at a variety of scales, ranging from small stepovers on the San Jacinto Fault to the large-scale stepover of the San Andreas Fault between Tejon Pass and San Gorgonio Pass. Because this type of fault geometry is so prevalent, understanding how rupture propagates through such systems is important for evaluating seismic hazard at different points along these faults. In the present study, we systematically investigate how far rupture will propagate through a fault with a linked (i.e., continuous fault) stepover, as a function of the length of the linking fault segment and the angle that connects the linking segment to the adjacent segments. We conducted dynamic models of such systems using a two-dimensional finite element code (Duan and Oglesby 2007). The fault system in our models consists of three segments: two parallel 10-km-long faults linked at a specified angle by a linking segment of between 500 m and 5 km. This geometry was run both as an extensional system and as a compressional system. We observed several distinct rupture behaviors, with systematic differences between compressional and extensional cases. In both senses of slip, rupture proceeds straight through the stepover at very shallow stepover angles. In compressional systems with steeper angles, rupture may jump ahead from the stepover segment onto the far segment; whether or not rupture on this segment reaches critical patch size and slips fully is also a function of angle and stepover length. In some compressional cases, if the angle is steep enough and the stepover short enough, rupture may jump over the step entirely and propagate down the far segment without touching the linking segment. In extensional systems, rupture jumps from the nucleating segment onto the linking segment even at shallow angles, but at steeper angles, rupture propagates through without jumping. Rupture propagates through a wider range of angles in extensional cases.
In both extensional and compressional cases, for each stepover length there exists a maximum angle through which rupture can fully propagate; this maximum angle decreases asymptotically to a minimum value as the stepover length increases. We also found that a wave associated with a stopping phase coming from the far end of the fault may restart rupture and induce full propagation after a significant delay in some cases where the initial rupture terminated.
NASA Astrophysics Data System (ADS)
Nwosu, Cajethan M.; Ogbuka, Cosmas U.; Oti, Stephen E.
2017-08-01
This paper presents a control model design capable of inhibiting the phenomenal rise in the DC-link voltage during a grid-fault condition in a variable-speed wind turbine. In contrast to power-circuit protection strategies, which have inherent limitations in fault ride-through capability, a control algorithm is proposed that limits the DC-link voltage rise, whose dynamics in turn have a direct influence on the characteristics of the rotor voltage, especially during grid faults. The model results compare favorably with simulation results obtained in a MATLAB/SIMULINK environment. The generated model may therefore be used to predict, with near accuracy, the nature of DC-link voltage variations during a fault, given factors that include the speed and speed mode of operation and the value of the damping resistor relative to half the product of the inner-loop current-control bandwidth and the filter inductance.
NASA Astrophysics Data System (ADS)
Peacock, D. C. P.; Nixon, C. W.; Rotevatn, A.; Sanderson, D. J.; Zuluaga, L. F.
2017-04-01
The way that faults interact with each other controls fault geometries, displacements and strains. Faults rarely occur individually but as sets or networks, with the arrangement of these faults producing a variety of different fault interactions. Fault interactions are characterised in terms of the following: 1) Geometry - the spatial arrangement of the faults. Interacting faults may or may not be geometrically linked (i.e., physically connected, with the fault planes sharing an intersection line). 2) Kinematics - the displacement distributions of the interacting faults and whether the displacement directions are parallel, perpendicular or oblique to the intersection line. Interacting faults may or may not be kinematically linked, where the displacements, stresses and strains of one fault influence those of the other. 3) Displacement and strain in the interaction zone - whether the faults have the same or opposite displacement directions, and whether extension or contraction dominates in the acute bisector between the faults. 4) Chronology - the relative ages of the faults. This characterisation scheme is used to suggest a classification for interacting faults. Different types of interaction are illustrated using metre-scale faults from the Mesozoic rocks of Somerset and examples from the literature.
Fault tree analysis of most common rolling bearing tribological failures
NASA Astrophysics Data System (ADS)
Vencl, Aleksandar; Gašić, Vlada; Stojanović, Blaža
2017-02-01
Wear as a tribological process has a major influence on the reliability and life of rolling bearings. Field examinations of bearing failures due to wear indicate possible causes and point to the measures necessary for wear reduction or elimination. Wear itself is a very complex process initiated by the action of different mechanisms, and it can be manifested by different wear types which are often related. However, the dominant type of wear can be approximately determined. The paper presents a classification of the most common bearing damages according to the dominant wear type, i.e. abrasive wear, adhesive wear, surface fatigue wear, erosive wear, fretting wear and corrosive wear. The wear types are correlated with the terms used in the ISO 15243 standard. Each wear type is illustrated with an appropriate photograph, together with a description of its causes and manifestations. Possible causes of rolling bearing failure are used for the fault tree analysis (FTA), which was performed to determine the root causes of bearing failures. The constructed fault tree diagram for rolling bearing failure can be a useful tool for maintenance engineers.
Renjith, V R; Madhu, G; Nayagam, V Lakshmana Gomathi; Bhasi, A B
2010-11-15
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of these hazards related to chemical industries. Fault tree analysis (FTA) is an established technique in hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from storage and filling facility of chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation. Copyright © 2010 Elsevier B.V. All rights reserved.
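The fuzzy fault-tree idea of propagating expert-elicited probabilities through gates can be sketched with triangular fuzzy numbers (low, mode, high). The gate formulas below are the common component-wise approximations for independent events; the event names and numbers are invented for illustration and are not from the chlor-alkali study:

```python
# Minimal fuzzy fault-tree gates over triangular fuzzy probabilities
# (low, mode, high), as often used with expert elicitation. All inputs
# here are hypothetical.

def fuzzy_and(events):
    """AND gate: component-wise product of triangular fuzzy numbers."""
    low = mode = high = 1.0
    for l, m, u in events:
        low, mode, high = low * l, mode * m, high * u
    return (low, mode, high)

def fuzzy_or(events):
    """OR gate: 1 - product(1 - p), applied component-wise."""
    low = mode = high = 1.0
    for l, m, u in events:
        low, mode, high = low * (1 - l), mode * (1 - m), high * (1 - u)
    return (1 - low, 1 - mode, 1 - high)

# Toy tree: release occurs if a valve leaks, OR a gasket fails AND the
# leak-detection system also fails.
valve_leak = (0.001, 0.002, 0.004)
gasket_failure = (0.01, 0.02, 0.05)
detection_fails = (0.05, 0.10, 0.20)

top = fuzzy_or([valve_leak, fuzzy_and([gasket_failure, detection_fails])])
print(top)
```

The resulting triple can then be defuzzified (e.g. by centroid) to a crisp release probability, which is the role expert elicitation plays in the abstract.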
Development and validation of techniques for improving software dependability
NASA Technical Reports Server (NTRS)
Knight, John C.
1992-01-01
A collection of document abstracts are presented on the topic of improving software dependability through NASA grant NAG-1-1123. Specific topics include: modeling of error detection; software inspection; test cases; Magnetic Stereotaxis System safety specifications and fault trees; and injection of synthetic faults into software.
Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie
2006-01-01
A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk-driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exist. The approach is scalable, allowing inclusion of additional information as detailed data becomes available.
The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
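The Monte Carlo step can be sketched as follows: sample independent subsystem outcomes over many simulated missions and report the estimated system reliability with a confidence interval. This is an illustrative sketch, not the SAFE tool itself; the subsystem names and per-mission reliabilities are assumed:

```python
# Toy Monte Carlo reliability estimate for a launch architecture.
# Subsystems and their reliabilities are hypothetical.
import random
from math import sqrt

random.seed(42)  # reproducible sampling

subsystem_reliability = {
    "first_stage": 0.995,
    "second_stage": 0.99,
    "avionics": 0.999,
}

def mission_succeeds(rng):
    # A mission succeeds only if every subsystem works this mission.
    return all(rng.random() < r for r in subsystem_reliability.values())

n = 100_000
successes = sum(mission_succeeds(random) for _ in range(n))
p_hat = successes / n
stderr = sqrt(p_hat * (1 - p_hat) / n)
print(f"estimated reliability {p_hat:.4f} +/- {1.96 * stderr:.4f}")
```

The analytic value here is simply the product 0.995 × 0.99 × 0.999 ≈ 0.984; the value of sampling appears once time-dependent or scenario-dependent failure models replace the constant probabilities, as in the dynamic modeling the abstract describes.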
Survey of critical failure events in on-chip interconnect by fault tree analysis
NASA Astrophysics Data System (ADS)
Yokogawa, Shinji; Kunii, Kyousuke
2018-07-01
In this paper, a framework based on reliability physics is proposed for adopting fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the possibilities of failure on basic events, critical issues of on-chip interconnect reliability will be evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting the on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The moisture impact is evaluated as an external event.
Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-07-12
As a typical example of large and complex mechanical systems, rotating machinery is prone to diverse mechanical faults. Among these, faults generated in gear transmission chains are a prominent cause of malfunction. Although fault signatures can be collected via vibration signals, they are often submerged in overwhelming interfering content, so identifying the critical fault characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire multiscale signal features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise fault features from the multiscale signal features. The experimental results on gear fault recognition show the feasibility and effectiveness of the proposed method, especially for weak gear fault features.
Corridors of crestal and radial faults linking salt diapirs in the Espírito Santo Basin, SE Brazil
NASA Astrophysics Data System (ADS)
Mattos, Nathalia H.; Alves, Tiago M.
2018-03-01
This work uses high-quality 3D seismic data to assess the geometry of fault families around salt diapirs in SE Brazil (Espírito Santo Basin). It aims at evaluating the timings of fault growth, and suggests the generation of corridors for fluid migration linking discrete salt diapirs. Three salt diapirs, one salt ridge, and five fault families were identified based on their geometry and relative locations. Displacement-length (D-x) plots, Throw-depth (T-z) data and structural maps indicate that faults consist of multiple segments that were reactivated by dip-linkage following a preferential NE-SW direction. This style of reactivation and linkage is distinct from other sectors of the Espírito Santo Basin where the preferential mode of reactivation is by upwards vertical propagation. Reactivation of faults above a Mid-Eocene unconformity is also scarce in the study area. Conversely, two halokinetic episodes dated as Cretaceous and Paleogene are interpreted below a Mid-Eocene unconformity. This work is important as it recognises the juxtaposition of permeable strata across faults as marking the generation of fault corridors linking adjacent salt structures. In such a setting, fault modelling shows that fluid will migrate towards the shallower salt structures along the fault corridors first identified in this work.
Analysis of a hardware and software fault tolerant processor for critical applications
NASA Technical Reports Server (NTRS)
Dugan, Joanne B.
1993-01-01
Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
Determining preventability of pediatric readmissions using fault tree analysis.
Jonas, Jennifer A; Devon, Erin Pete; Ronan, Jeanine C; Ng, Sonia C; Owusu-McKenzie, Jacqueline Y; Strausbaugh, Janet T; Fieldston, Evan S; Hart, Jessica K
2016-05-01
Previous studies attempting to distinguish preventable from nonpreventable readmissions reported challenges in completing reviews efficiently and consistently. The objectives were to: (1) examine the efficiency and reliability of a Web-based fault tree tool designed to guide physicians through chart reviews to a determination about preventability, and (2) investigate root causes of general pediatrics readmissions and identify the percentage that are preventable. General pediatricians from The Children's Hospital of Philadelphia used a Web-based fault tree tool to classify root causes of all general pediatrics 15-day readmissions in 2014. The tool guided reviewers through a logical progression of questions, which resulted in 1 of 18 root causes of readmission, 8 of which were considered potentially preventable. Twenty percent of cases were cross-checked to measure inter-rater reliability. Of the 7252 discharges, 248 were readmitted, for an all-cause general pediatrics 15-day readmission rate of 3.4%. Of those readmissions, 15 (6.0%) were deemed potentially preventable, corresponding to 0.2% of total discharges. The most common cause of potentially preventable readmissions was premature discharge. For the 50 cross-checked cases, both reviews resulted in the same root cause for 44 (86%) of files (κ = 0.79; 95% confidence interval: 0.60-0.98). Completing one review using the tool took approximately 20 minutes. The Web-based fault tree tool helped physicians identify root causes of hospital readmissions and classify them as either preventable or not preventable in an efficient and consistent way. It also confirmed that only a small percentage of general pediatrics 15-day readmissions are potentially preventable. Journal of Hospital Medicine 2016;11:329-335. © 2016 Society of Hospital Medicine.
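The tool's guided question flow can be sketched as a small decision tree walked from a start node to a leaf (a root cause). The real tool distinguishes 18 root causes; the questions and causes below are invented placeholders, not the published instrument:

```python
# Toy yes/no question flow ending in a root-cause label.
# Node maps to (question, next-if-yes, next-if-no); a name not in the
# map is a leaf (root cause). All content here is hypothetical.
questions = {
    "start": ("Was the readmission planned?", "planned", "q2"),
    "q2": ("Was the patient discharged before clinically ready?",
           "premature discharge (potentially preventable)", "q3"),
    "q3": ("Did a new, unrelated condition cause readmission?",
           "new condition (not preventable)", "unclassified"),
}

def classify(answers):
    node = "start"
    while node in questions:
        prompt, if_yes, if_no = questions[node]
        node = if_yes if answers.get(prompt, False) else if_no
    return node

case = {"Was the patient discharged before clinically ready?": True}
print(classify(case))  # prematurely discharged path
```

Encoding the review as a fixed walk through yes/no questions is what makes the chart reviews fast (about 20 minutes each) and reproducible across reviewers.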
Risk Analysis of Return Support Material on Gas Compressor Platform Project
NASA Astrophysics Data System (ADS)
Silvianita; Aulia, B. U.; Khakim, M. L. N.; Rosyid, Daniel M.
2017-07-01
Fixed platform projects are carried out not by a single contractor but by two or more contractors. Cooperation in the construction of fixed platforms often does not go according to plan, which is caused by several factors, and good synergy between the contractors is needed to avoid miscommunication that may cause problems on the project. One example concerns the support material (sea fastening, skid shoes and shipping supports) used when shipping a jacket structure to its operating location, which often is not returned to the contractor. A systematic method is needed to overcome this support material problem. This paper analyses the causes and consequences of unreturned support material on a gas compressor platform project using Fault Tree Analysis (FTA) and Event Tree Analysis (ETA). From the fault tree analysis, the probability of the top event is 0.7783. From the event tree analysis diagram, the contractors lose Rp 350,000,000 to Rp 10,000,000,000.
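Chaining the two analyses can be sketched as follows: the fault tree supplies the top-event probability (0.7783, from the abstract), and the event tree weights each outcome branch by its conditional probability and loss. The branch definitions and loss assignments here are invented for illustration and are not the paper's event tree:

```python
# Toy event-tree expected-loss calculation downstream of a fault-tree
# top event. Branch probabilities and losses are hypothetical.
p_top = 0.7783  # support material not returned (fault-tree result)

# (description, conditional probability given the top event, loss in Rp)
branches = [
    ("material recovered late", 0.6, 350_000_000),
    ("material refabricated", 0.4, 10_000_000_000),
]

expected_loss = sum(p_top * p * loss for _, p, loss in branches)
print(f"expected loss: Rp {expected_loss:,.0f}")
```

The conditional branch probabilities must sum to one; the expected loss then gives a single figure for comparing mitigation options against their cost.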
Mines Systems Safety Improvement Using an Integrated Event Tree and Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Kumar, Ranjan; Ghosh, Achyuta Krishna
2017-04-01
Mine systems such as the ventilation system, strata support system, and flameproof safety equipment are exposed to dynamic operational conditions such as stress, humidity, dust, and temperature, and safety improvement of such systems is best done during the planning and design stage. However, existing safety analysis methods do not handle accident initiation and progression in mine systems explicitly. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine system design. The approach couples ET and FT modeling with a redundancy allocation technique. A concept of top-hazard probability is introduced for identifying the system failure probability, and redundancy is allocated to the system at either the component or the system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal accident scenarios and improve the safety of complex mine systems simultaneously.
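The redundancy-allocation step can be sketched as finding the smallest number of independent parallel units that drives a component's failure probability below a target derived from the allowed top-hazard probability. This is a hedged sketch with assumed numbers, not the paper's algorithm:

```python
# With n independent parallel units each failing with probability
# p_single, the component fails only if all n fail: p_single ** n.
# Numbers below are hypothetical.
def units_needed(p_single, p_target):
    """Smallest n such that p_single ** n <= p_target, plus that probability."""
    n, p = 1, p_single
    while p > p_target:
        n += 1
        p *= p_single
    return n, p

n, p = units_needed(p_single=0.05, p_target=1e-4)
print(n, p)  # 4 parallel units bring the 0.05-per-unit failure below 1e-4
```

In the integrated ET/FT setting, the target for each component comes from apportioning the acceptable top-hazard probability back down through the tree.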
NASA Astrophysics Data System (ADS)
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
The fault diagnosis of planetary gearboxes is crucial to reduce the maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is firstly adopted to remove the fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, Laplacian score (LS) approach is employed to refine the fault features. In the end, the obtained features are fed into the binary tree support vector machine (BT-SVM) to accomplish the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
Clustering of GPS velocities in the Mojave Block, southeastern California
NASA Astrophysics Data System (ADS)
Savage, J. C.; Simpson, R. W.
2013-04-01
We find subdivisions within the Mojave Block using cluster analysis to identify groupings in the velocities observed at GPS stations there. The clusters are represented on a fault map by symbols located at the positions of the GPS stations, each symbol representing the cluster to which the velocity of that GPS station belongs. Fault systems that separate the clusters are readily identified on such a map. The most significant representation as judged by the gap test involves 4 clusters within the Mojave Block. The fault systems bounding the clusters from east to west are 1) the faults defining the eastern boundary of the Northeast Mojave Domain extended southward to connect to the Hector Mine rupture, 2) the Calico-Paradise fault system, 3) the Landers-Blackwater fault system, and 4) the Helendale-Lockhart fault system. This division of the Mojave Block is very similar to that proposed by Meade and Hager []. However, no cluster boundary coincides with the Garlock Fault, the northern boundary of the Mojave Block. Rather, the clusters appear to continue without interruption from the Mojave Block north into the southern Walker Lane Belt, similar to the continuity across the Garlock Fault of the shear zone along the Blackwater-Little Lake fault system observed by Peltzer et al. []. Mapped traces of individual faults in the Mojave Block terminate within the block and do not continue across the Garlock Fault [Dokka and Travis, ].
Monitoring of Microseismicity with Array Techniques in the Peach Tree Valley Region
NASA Astrophysics Data System (ADS)
Garcia-Reyes, J. L.; Clayton, R. W.
2016-12-01
This study is focused on the analysis of microseismicity along the San Andreas Fault in the Peach Tree Valley region. This zone is part of the transition between the locked portion to the south (Parkfield, CA) and the creeping section to the north (Jolivet et al., JGR, 2014). The data for the study come from a 2-week deployment of 116 ZLand nodes in a cross-shaped configuration along (8.2 km) and across (9 km) the fault. We analyze the distribution of microseismicity using a 3D backprojection technique, and we explore the use of Hidden Markov Models to identify different patterns of microseismicity (Hammer et al., GJI, 2013). The goal of the study is to relate the style of seismicity to the mechanical state of the fault. The results show the evolution of seismic activity as well as at least two different patterns of seismic signals.
[Impact of water pollution risk in water transfer project based on fault tree analysis].
Liu, Jian-Chang; Zhang, Wei; Wang, Li-Min; Li, Dai-Qing; Fan, Xiu-Ying; Deng, Hong-Bing
2009-09-15
Methods to assess the water pollution risk of medium-scale water transfer projects are gradually being explored. The event-nature-proportion method was developed to evaluate the probability of a single event. Fault tree analysis, built on the calculations for single events, was employed to evaluate the overall water pollution risk for the channel water body. The results indicate that the risk posed by pollutants from towns and villages along the water transfer route is high, with a probability of 0.373, and would add pollution to the channel water body at rates of 64.53 mg/L COD, 4.57 mg/L NH4+-N and 0.066 mg/L volatile phenol, respectively. Measuring fault probability on the basis of the proportion method proves useful for assessing water pollution risk under high uncertainty.
Viewpoint on ISA TR84.0.02--simplified methods and fault tree analysis.
Summers, A E
2000-01-01
ANSI/ISA-S84.01-1996 and IEC 61508 require the establishment of a safety integrity level for any safety instrumented system or safety related system used to mitigate risk. Each stage of design, operation, maintenance, and testing is judged against this safety integrity level. Quantitative techniques can be used to verify whether the safety integrity level is met. ISA-dTR84.0.02 is a technical report under development by ISA, which discusses how to apply quantitative analysis techniques to safety instrumented systems. This paper discusses two of those techniques: (1) Simplified equations and (2) Fault tree analysis.
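As a rough sketch of the simplified-equation route, a common approximation for a single (1oo1) channel in low-demand mode is PFDavg ≈ λ_DU × TI / 2, and the resulting PFDavg maps to a SIL band. The failure rate and test interval below are assumed for illustration; consult ISA-TR84.0.02 for the full equations and their conditions of validity:

```python
# Simplified SIL verification sketch: PFDavg for a single channel,
# then mapping to the low-demand SIL bands. Inputs are hypothetical.
def pfd_avg_1oo1(lambda_du, test_interval_h):
    """Approximate average probability of failure on demand, 1oo1 channel."""
    return lambda_du * test_interval_h / 2

def sil_from_pfd(pfd):
    """Low-demand SIL bands: SIL n when 10^-(n+1) <= PFDavg < 10^-n."""
    bands = {4: (1e-5, 1e-4), 3: (1e-4, 1e-3), 2: (1e-3, 1e-2), 1: (1e-2, 1e-1)}
    for sil, (lo, hi) in bands.items():
        if lo <= pfd < hi:
            return sil
    return 0  # outside the defined bands

lambda_du = 2e-6   # dangerous undetected failure rate per hour (assumed)
ti = 8760          # annual proof test interval in hours
pfd = pfd_avg_1oo1(lambda_du, ti)
print(pfd, sil_from_pfd(pfd))
```

A fault-tree calculation of the same system would reproduce this number for simple architectures but also captures common-cause and shared-component effects that the simplified equations fold into correction factors.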
TH-EF-BRC-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomadsen, B.
2016-06-15
This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis - process mapping, failure modes and effects analysis, and fault tree analysis - will each be introduced with a 5-minute refresher presentation followed by a 30-minute small-group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants.
Estimating earthquake-induced failure probability and downtime of critical facilities.
Porter, Keith; Ramer, Kyle
2012-01-01
Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
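The combined failure-probability and downtime calculation can be sketched with a toy Monte Carlo model: each critical component fails with some probability in the earthquake, and the facility is down until the slowest failed component is repaired. The component list, failure probabilities, and repair times below are invented, not the paper's damageability and repair-time library:

```python
# Toy earthquake downtime model for a facility that needs all
# components working. All inputs are hypothetical.
import random

random.seed(1)

components = [  # (failure probability in the earthquake, repair days)
    (0.10, 2.0),   # cooling
    (0.05, 7.0),   # switchgear
    (0.02, 14.0),  # servers
]

def simulate_downtime(rng):
    repairs = [days for p, days in components if rng.random() < p]
    return max(repairs) if repairs else 0.0  # down until slowest repair

n = 50_000
samples = [simulate_downtime(random) for _ in range(n)]
p_inoperative = sum(s > 0 for s in samples) / n
mean_downtime = sum(samples) / n
print(p_inoperative, mean_downtime)
```

From the same samples one can also read off the probability of exceeding any specified outage duration, which is the planning question the abstract raises for primary/backup facility pairs.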
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Qian, Yu
2016-02-15
Haze weather has become a serious environmental pollution problem in many Chinese cities. One of the most critical factors in the formation of haze weather is the exhaust of coal combustion, so it is meaningful to work out the causation mechanism linking urban haze and coal combustion exhaust. Based on these considerations, the fault tree analysis (FTA) approach was employed for the first time to study the causation mechanism of urban haze in Beijing by considering the risk events related to coal combustion exhaust. Using this approach, the fault tree of the urban haze causation system connected with coal combustion exhaust was first established; the risk events were then discussed and identified; the minimal cut sets were determined using Boolean algebra; and finally, the structure, probability and critical importance degree analyses of the risk events were completed for qualitative and quantitative assessment. The results show that FTA is an effective and simple tool for the causation mechanism analysis and risk management of urban haze in China. Copyright © 2015 Elsevier B.V. All rights reserved.
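Deriving minimal cut sets with Boolean algebra can be sketched as a MOCUS-style top-down expansion: OR gates union their children's cut sets, AND gates combine them, and non-minimal sets are discarded. The toy fault tree below is invented for illustration and is not the tree from the study:

```python
# Minimal cut sets by top-down Boolean expansion over a toy haze tree.
# Event names are hypothetical placeholders.
from itertools import product

tree = {  # event -> (gate, children); names absent from the map are basic
    "HAZE": ("AND", ["EMISSIONS", "STAGNANT_AIR"]),
    "EMISSIONS": ("OR", ["COAL_BOILERS", "VEHICLES"]),
}

def cut_sets(event):
    if event not in tree:                 # basic event: one singleton cut set
        return [frozenset([event])]
    gate, children = tree[event]
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                      # union of the children's cut sets
        return [cs for sets in child_sets for cs in sets]
    # AND gate: merge every combination of one cut set per child
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal(sets):
    """Drop duplicates and any cut set that contains a smaller one."""
    unique = list(set(sets))
    return [s for s in unique if not any(t < s for t in unique)]

mcs = minimal(cut_sets("HAZE"))
print(sorted(sorted(s) for s in mcs))
```

Once the minimal cut sets are known, the top-event probability and each basic event's importance measures follow directly, which is the quantitative part of the abstract's analysis.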
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that affect human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the LNG regasification unit. In this case, failure is caused by Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank. The failure probability can be determined using Fault Tree Analysis (FTA). In addition, the impact of the generated heat radiation is calculated. Fault trees for BLEVE and jet fire on the storage tank have been constructed, yielding failure probabilities of 5.63 × 10⁻¹⁹ for BLEVE and 9.57 × 10⁻³ for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After customization, the failure probability obtained is 4.22 × 10⁻⁶.
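The failure-probability arithmetic behind an FTA of this kind reduces to combining independent basic-event probabilities through AND and OR gates. A minimal sketch with invented numbers (the paper's 5.63 × 10⁻¹⁹ and 9.57 × 10⁻³ values come from its own event data, which are not reproduced here):

```python
import math

def gate_or(probs):
    """Independent basic events under an OR gate: P = 1 - prod(1 - p_i)."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def gate_and(probs):
    """Independent basic events under an AND gate: P = prod(p_i)."""
    return math.prod(probs)

# Hypothetical numbers for illustration only: a jet fire needs a leak AND an
# ignition source; a leak arises from a flange failure OR a pipe rupture.
p_leak = gate_or([1e-2, 2e-3])        # flange failure OR pipe rupture
p_jet_fire = gate_and([p_leak, 0.5])  # leak AND ignition
```

Lowering any basic-event probability (for example by redesigning the piping and instrumentation, as the paper does) propagates directly through the gates to a lower top-event probability.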
NASA Astrophysics Data System (ADS)
Blakely, R. J.; Sherrod, B. L.; Glen, J. M. G.; Ritzinger, B. T.; Staisch, L.
2017-12-01
High-resolution aeromagnetic surveys of Washington and Oregon, acquired over the past two decades by the U.S. Geological Survey, serve as proxies for geologic mapping in a terrain modified by glacial and catastrophic flood processes and covered by vegetation and urban development. In concert with geologic mapping and ancillary geophysical measurements, these data show possible kinematic links between forearc and backarc regions and have improved understanding of Cascadia crustal framework. Here we investigate a possible link between the NW-striking Wallula fault zone (WFZ), a segment of the Olympic-Wallowa lineament (OWL), and the N-striking Hite fault in Cascadia's backarc. Strike-slip displacement on the WFZ is indicated by offset of NW-striking Ice Harbor dikes (8.5 Ma), as displayed in magnetic anomalies. An exposed dike immediately south of the Walla Walla River has been used by others to argue against strike-slip displacement; i.e., the exposure lies south of one strand of the WFZ but is not displaced with respect to its linear magnetic anomaly north of the fault. However, high-resolution magnetic anomalies and a recently discovered, 25-km-long LiDAR scarp show that the dike exposure actually lies north of the fault and thus is irrelevant in determining strike-slip displacement on the fault. Our most recent magnetic survey illuminates with unprecedented detail strands of the N-striking Hite fault system and structural links to the WFZ. The survey lies over an area underlain by strongly magnetic Miocene Columbia River flood basalts (CRB) and older intrusive and volcanic rocks. NW-striking magnetic anomalies associated with the WFZ do not extend eastward beyond the Hite fault, suggesting that this is the region at which strain is transferred from the OWL. Magnetic anomalies originating from CRB across the Hite fault serve as piercing points and indicate 1.5 to 2 km of sinistral slip since middle Miocene. 
Vertical offsets in depth to magnetic basement across the fault suggest that vertical displacement also was important. We conclude that the WFZ and Hite fault are kinematically linked and that both exhibit oblique-slip displacement. Faults north and south of the WFZ are dominantly compressional and extensional, respectively, suggesting that the Hite fault serves as a backstop to dextral slip on the OWL.
NASA Astrophysics Data System (ADS)
Li, Shuanghong; Cao, Hongliang; Yang, Yupu
2018-02-01
Fault diagnosis is a key process for the reliability and safety of solid oxide fuel cell (SOFC) systems. However, it is difficult to rapidly and accurately identify faults in complicated SOFC systems, especially when simultaneous faults appear. In this research, a data-driven Multi-Label (ML) pattern identification approach is proposed to address the simultaneous fault diagnosis of SOFC systems. The framework of the simultaneous-fault diagnosis primarily includes two components: feature extraction and an ML-SVM classifier. The approach can be trained to diagnose simultaneous SOFC faults, such as fuel leakage and air leakage at different positions in the SOFC system, using simple training data sets consisting of only single faults, without demanding simultaneous-fault data. The experimental results show that the proposed framework can diagnose simultaneous SOFC system faults with high accuracy while requiring only a small amount of training data and a low computational burden. In addition, Fault Inference Tree Analysis (FITA) is employed to identify the correlations among possible faults and their corresponding symptoms at the system component level.
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Haeussler, P. J.; Seitz, G. G.; Dawson, T. E.; Stenner, H. D.; Matmon, A.; Crone, A. J.; Personius, S.; Burns, P. B.; Cadena, A.; Thoms, E.
2005-12-01
Developing accurate rupture histories of long, high-slip-rate strike-slip faults is especially challenging where recurrence is relatively short (hundreds of years), adjacent segments may fail within decades of each other, and uncertainties in dating can be as large as, or larger than, the time between events. The Denali Fault system (DFS) is the major active structure of interior Alaska, but had received little study since pioneering fault investigations in the early 1970s. Until the summer of 2003 essentially no data existed on the timing or spatial distribution of past ruptures on the DFS. This changed with the occurrence of the M7.9 2002 Denali fault earthquake, which has been a catalyst for present paleoseismic investigations. It provided a well-constrained rupture length and slip distribution. Strike-slip faulting occurred along 290 km of the Denali and Totschunda faults, leaving unruptured ~140 km of the eastern Denali fault, ~180 km of the western Denali fault, and ~70 km of the eastern Totschunda fault. The DFS presents us with a blank canvas on which to fill in a chronology of past earthquakes using modern paleoseismic techniques. Aware of correlation issues with potentially closely timed earthquakes, we have a) investigated 11 paleoseismic sites that allow a variety of dating techniques, b) measured paleo-offsets, which provide insight into the magnitude and rupture length of past events, at 18 locations, and c) developed late Pleistocene and Holocene slip rates using exposure-age dating to constrain long-term fault behavior models.
We are in the process of: 1) radiocarbon-dating peats involved in faulting and liquefaction, and especially short-lived forest floor vegetation that includes outer rings of trees, spruce needles, and blueberry leaves killed and buried during paleoearthquakes; 2) supporting development of a 700-900 year tree-ring time-series for precise dating of trees used in event timing; 3) employing Pb-210 for constraining the youngest ruptures in sag ponds on the eastern and western Denali fault; and 4) using volcanic ashes in trenches for dating and correlation. Initial results are: 1) Large earthquakes occurred along the 2002 rupture section 350-700 yrb02 (2-sigma, calendar-corrected, years before 2002) with offsets about the same as 2002. The Denali penultimate rupture appears younger (350-570 yrb02) than the Totschunda (580-700 yrb02); 2) The western Denali fault is geomorphically fresh, its MRE likely occurred within the past 250 years, the penultimate event occurred 570-680 yrb02, and slip in each event was 4 m; 3) The eastern Denali MRE post-dates peat dated at 550-680 yrb02, is younger than the penultimate Totschunda event, and could be part of the penultimate Denali fault rupture or a separate earthquake; 4) A 120-km section of the Denali fault between the Nenana Glacier and the Delta River may be a zone of overlap for large events and/or capable of producing smaller earthquakes; its western part has fresh scarps with small (1 m) offsets. 2004/2005 field observations show there are longer datable records, with 4-5 events recorded in trenches on the eastern Denali fault and the west end of the 2002 rupture, 2-3 events on the western part of the fault in Denali National Park, and 3-4 events on the Totschunda fault. These and extensive datable material provide the basis to define the paleoseismic history of DFS earthquake ruptures through multiple and complete earthquake cycles.
Support vector machines-based fault diagnosis for turbo-pump rotor
NASA Astrophysics Data System (ADS)
Yuan, Sheng-Fa; Chu, Fu-Lei
2006-05-01
Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and have poor generalisation when fault samples are few. The support vector machine (SVM) is a general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is originally designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification algorithm named 'one to others' is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, requires little repeated training, and speeds up both training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
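The 'one to others' tree can be illustrated structurally without a full SVM implementation. In the sketch below, a nearest-centroid rule stands in for each two-class SVM (an assumption made purely to keep the example self-contained); what matters is the tree logic the abstract describes: classes are ordered by fault priority, and each node separates the highest-priority remaining class from all the others.

```python
# Sketch of the 'one to others' binary tree. A nearest-centroid rule stands in
# for each two-class SVM; the data and fault names below are invented.

def centroid(samples):
    n = len(samples)
    return [sum(x[i] for x in samples) / n for i in range(len(samples[0]))]

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def train_tree(data, priority):
    """data: {label: [feature vectors]}; priority: labels, highest first.
    Returns a list of (label, own_centroid, rest_centroid) node classifiers
    plus the label of the final leaf."""
    nodes, remaining = [], list(priority)
    while len(remaining) > 1:
        label = remaining.pop(0)
        own = centroid(data[label])
        rest = centroid([x for lb in remaining for x in data[lb]])
        nodes.append((label, own, rest))
    return nodes, remaining[0]

def classify(nodes, leaf, x):
    for label, own, rest in nodes:   # walk down the binary tree
        if dist2(x, own) <= dist2(x, rest):
            return label
    return leaf
```

Because each node is trained only on "its class versus the rest", adding a class only adds one node rather than retraining the whole tree, which is the low-repeated-training property the abstract claims for the SVM version.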
Distributed intrusion monitoring system with fiber link backup and on-line fault diagnosis functions
NASA Astrophysics Data System (ADS)
Xu, Jiwei; Wu, Huijuan; Xiao, Shunkun
2014-12-01
A novel multi-channel distributed optical fiber intrusion monitoring system with smart fiber link backup and on-line fault diagnosis functions was proposed. A 1×N optical switch was intelligently controlled by a peripheral interface controller (PIC) to expand the fiber link from one channel to several, lowering the cost of long or ultra-long distance intrusion monitoring and strengthening the intelligent link backup function. At the same time, a sliding-window auto-correlation method was presented to identify and locate the broken or faulty point of the cable. The experimental results showed that the proposed multi-channel system performed well, especially when a broken cable was detected: it could locate the broken or faulty point by itself accurately and switch to its backup sensing link immediately, ensuring that the security system operated stably without idle time. The system was successfully applied in a field test for security monitoring of the 220-km-long national borderline in China.
Computer hardware fault administration
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-09-14
Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
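The routing idea in this record (detour communications through the second network when a link in the first is defective) can be sketched with a plain breadth-first search. The adjacency maps and the `route` fallback below are an illustrative sketch, not the patented embodiment.

```python
from collections import deque

def bfs_path(adj, src, dst, bad_links=frozenset()):
    """Shortest-hop path in adjacency map `adj`, skipping defective links
    given as a frozenset of (u, v) pairs (checked in both directions)."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in seen and (u, v) not in bad_links and (v, u) not in bad_links:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return None

def route(net1, net2, src, dst, bad_links):
    """Prefer the first network; fall back to the second when the defective
    link disconnects the pair, as in the fault-administration scheme above."""
    return bfs_path(net1, src, dst, bad_links) or bfs_path(net2, src, dst)
```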
Missing link between the Hayward and Rodgers Creek faults
Watt, Janet; Ponce, David; Parsons, Tom; Hart, Patrick
2016-01-01
The next major earthquake to strike the ~7 million residents of the San Francisco Bay Area will most likely result from rupture of the Hayward or Rodgers Creek faults. Until now, the relationship between these two faults beneath San Pablo Bay has been a mystery. Detailed subsurface imaging provides definitive evidence of active faulting along the Hayward fault as it traverses San Pablo Bay and bends ~10° to the right toward the Rodgers Creek fault. Integrated geophysical interpretation and kinematic modeling show that the Hayward and Rodgers Creek faults are directly connected at the surface—a geometric relationship that has significant implications for earthquake dynamics and seismic hazard. A direct link enables simultaneous rupture of the Hayward and Rodgers Creek faults, a scenario that could result in a major earthquake (M = 7.4) that would cause extensive damage and loss of life with global economic impact. PMID:27774514
EDNA: Expert fault digraph analysis using CLIPS
NASA Technical Reports Server (NTRS)
Dixit, Vishweshwar V.
1990-01-01
Traditionally, fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate real system dependencies and hence are easy to develop, validate, and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find; available algorithms tend to be complicated and slow. On the other hand, tree analysis (VGRH, Tayl) is well understood and rooted in a vast research effort and analytical techniques. The tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a (cyclic) digraph into trees (CLP, LP) is a viable approach to blend the advantages of the two representations. Neither digraphs nor trees provide the ability to handle heuristic knowledge, so an expert system is essential to capture the engineering knowledge. We propose an approach here, namely expert network analysis, that combines the digraph representation with tree algorithms. The models are augmented by probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge; mixed analysis, with only some nodes assigned probabilities, is possible. The tool provides a graphics interface for input, query, and update. With the combined approach, it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.
NASA Astrophysics Data System (ADS)
Hu, Bingbing; Li, Bing
2016-02-01
It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method originally based on discrete wavelet transform (DWT) has disadvantages such as shift variance and the aliasing effects in engineering application. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis by the merits of DTCWT (nearly shift invariant and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching the single wavelet basis function to the signal being analyzed, which may speed up the signal processing and be employed in on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer ring and shaft coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post spectral analysis, and EMD-based spectral analysis methods.
A.P. Lamb,; L.M. Liberty,; Blakely, Richard J.; Pratt, Thomas L.; Sherrod, B.L.; Van Wijk, K.
2012-01-01
We present evidence that the Seattle fault zone of Washington State extends to the west edge of the Puget Lowland and is kinematically linked to active faults that border the Olympic Massif, including the Saddle Mountain deformation zone. Newly acquired high-resolution seismic reflection and marine magnetic data suggest that the Seattle fault zone extends west beyond the Seattle Basin to form a >100-km-long active fault zone. We provide evidence for a strain transfer zone, expressed as a broad set of faults and folds connecting the Seattle and Saddle Mountain deformation zones near Hood Canal. This connection provides an explanation for the apparent synchroneity of M7 earthquakes on the two fault systems ~1100 yr ago. We redefine the boundary of the Tacoma Basin to include the previously termed Dewatto basin and show that the Tacoma fault, the southern part of which is a backthrust of the Seattle fault zone, links with a previously unidentified fault along the western margin of the Seattle uplift. We model this north-south fault, termed the Dewatto fault, along the western margin of the Seattle uplift as a low-angle thrust that initiated with exhumation of the Olympic Massif and today accommodates north-directed motion. The Tacoma and Dewatto faults likely control both the southern and western boundaries of the Seattle uplift. The inferred strain transfer zone linking the Seattle fault zone and Saddle Mountain deformation zone defines the northern margin of the Tacoma Basin, and the Saddle Mountain deformation zone forms the northwestern boundary of the Tacoma Basin. Our observations and model suggest that the western portions of the Seattle fault zone and Tacoma fault are complex, require temporal variations in principal strain directions, and cannot be modeled as a simple thrust and/or backthrust system.
Model authoring system for fail safe analysis
NASA Technical Reports Server (NTRS)
Sikora, Scott E.
1990-01-01
The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, the circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which then are searched to generate either a graphical fault tree analysis or failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
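The effect of coverage that the paper quantifies can be seen in a minimal duplex model: each of two redundant units fails with probability p during the mission, and a single failure is survived only if it is covered (detected and recovered from) with probability c. The numbers below are illustrative, not the F18 FCS failure rates.

```python
def duplex_unreliability(p, c):
    """Mission unreliability of a duplex system: each unit fails with
    probability p (independently); a single failure is survived only if
    it is covered, which happens with probability c.
    Survive = no failures, or exactly one failure that is covered."""
    survive = (1 - p) ** 2 + 2 * p * (1 - p) * c
    return 1 - survive

perfect = duplex_unreliability(1e-3, 1.0)     # coverage ignored
imperfect = duplex_unreliability(1e-3, 0.99)  # 99% coverage
```

With p = 10⁻³, dropping coverage from 1.0 to 0.99 raises the unreliability from 10⁻⁶ to roughly 2.1 × 10⁻⁵, which illustrates why coverage dominates the reliability of highly redundant systems.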
The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications
NASA Technical Reports Server (NTRS)
Chau, Savio N.; Alkalai, Leon; Tai, Ann T.
2000-01-01
The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.
Fault-zone waves observed at the southern Joshua Tree earthquake rupture zone
Hough, S.E.; Ben-Zion, Y.; Leary, P.
1994-01-01
Waveform and spectral characteristics of several aftershocks of the M 6.1 22 April 1992 Joshua Tree earthquake recorded at stations just north of the Indio Hills in the Coachella Valley can be interpreted in terms of waves propagating within narrow, low-velocity, high-attenuation, vertical zones. Evidence for our interpretation consists of: (1) emergent P arrivals prior to and opposite in polarity to the impulsive direct phase; these arrivals can be modeled as headwaves indicative of a transfault velocity contrast; (2) spectral peaks in the S wave train that can be interpreted as internally reflected, low-velocity fault-zone wave energy; and (3) spatial selectivity of event-station pairs at which these data are observed, suggesting a long, narrow geologic structure. The observed waveforms are modeled using the analytical solution of Ben-Zion and Aki (1990) for a plane-parallel layered fault-zone structure. Synthetic waveform fits to the observed data indicate the presence of NS-trending vertical fault-zone layers characterized by a thickness of 50 to 100 m, a velocity decrease of 10 to 15% relative to the surrounding rock, and a P-wave quality factor in the range 25 to 50.
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
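One concrete possibility-to-probability transformation of the kind the article uses is the Dubois-Prade transformation for a finite possibility distribution. A minimal sketch follows; the article's hybrid propagation framework is considerably richer than this single step.

```python
def possibility_to_probability(poss):
    """Dubois-Prade transformation of a finite possibility distribution into
    a probability distribution. With possibilities sorted descending
    (pi_0 = 1 >= pi_1 >= ... >= pi_{n-1}, and pi_n = 0 appended):
        p_i = sum over j >= i of (pi_j - pi_{j+1}) / (j + 1).
    Returned probabilities correspond to the sorted order."""
    pi = sorted(poss, reverse=True) + [0.0]
    n = len(poss)
    return [sum((pi[j] - pi[j + 1]) / (j + 1) for j in range(i, n))
            for i in range(n)]
```

The result always sums to one and preserves the ordering of the possibility values, so the most possible outcome remains the most probable.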
Clendenin, C.W.; Diehl, S.F.
1999-01-01
A pronounced, subparallel set of northeast-striking faults occurs in southeastern Missouri, but little is known about these faults because of poor exposure. The Commerce fault system is the southernmost exposed fault system in this set and has an ancestry related to Reelfoot rift extension. Recently published work indicates that this fault system has a long history of reactivation. The northeast-striking Grays Point fault zone is a segment of the Commerce fault system and is well exposed along the southeast rim of an inactive quarry. Our mapping shows that the Grays Point fault zone also has a complex history of polyphase reactivation, involving three periods of Paleozoic reactivation in the Late Ordovician, the Devonian, and post-Mississippian time. Each period is characterized by divergent, right-lateral oblique-slip faulting. Petrographic examination of sidewall rip-out clasts in calcite-filled faults associated with the Grays Point fault zone supports a minimum of three periods of right-lateral oblique slip. The reported observations imply that a genetic link exists between intracratonic fault reactivation and strain produced by Paleozoic orogenies affecting the eastern margin of Laurentia (North America). Interpretation of this link indicates that right-lateral oblique slip has occurred on all of the northeast-striking faults in southeastern Missouri as a result of strain influenced by the convergence directions of the different Paleozoic orogenies.
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before finding the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system that performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base, and then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices.
If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
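The ranking FTDOTS produces can be illustrated with the classic sequencing result it builds on: when a single failure is present, inspecting candidates in decreasing probability-to-cost ratio minimizes the expected cost of finding it. The hypothesis names and numbers below are invented for illustration; this is a sketch of the ranking idea, not the FTDOTS implementation.

```python
def optimal_test_sequence(hypotheses):
    """hypotheses: (name, failure_probability, isolation_cost) triples.
    Checking candidates in decreasing probability/cost order minimizes the
    expected cost of locating the actual (single) failure."""
    return sorted(hypotheses, key=lambda h: h[1] / h[2], reverse=True)

seq = optimal_test_sequence([("power_supply", 0.5, 10.0),
                             ("sensor_board", 0.3, 2.0),
                             ("cable", 0.2, 1.0)])
```

Note that the most likely failure is not necessarily checked first: the cheap cable check comes ahead of the expensive power supply even though the power supply is the most probable cause.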
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
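The information-theoretic test selection mentioned in the abstract can be sketched as a one-step greedy heuristic: choose the next test whose expected posterior entropy over the fault candidates is lowest. The fault priors and test signatures below are invented single-fault examples, far simpler than the multiple-fault setting the paper actually treats.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def best_test(priors, tests):
    """One-step information heuristic for test sequencing.
    priors: {fault: prior probability}; tests: {test: set of faults it flags}.
    Picks the test minimizing the expected entropy of the posterior."""
    def expected_entropy(flagged):
        p_flag = sum(priors[f] for f in flagged)
        total = 0.0
        for group, mass in ((flagged, p_flag),
                            (set(priors) - flagged, 1 - p_flag)):
            if mass > 0:
                total += mass * entropy([priors[f] / mass for f in group])
        return total
    return min(tests, key=lambda t: expected_entropy(tests[t]))
```

Applied repeatedly after each test outcome, this greedy rule builds the kind of diagnostic tree (or, with state merging, digraph) that the paper's static strategies encode.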
Optical fiber-fault surveillance for passive optical networks in S-band operation window
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Chi, Sien
2005-07-01
An S-band (1470 to 1520 nm) fiber laser scheme, which uses multiple fiber Bragg grating (FBG) elements as feedback elements on each passive branch, is proposed and described for in-service fault identification in passive optical networks (PONs). By tuning a wavelength selective filter located within the laser cavity over a gain bandwidth, the fiber-fault of each branch can be monitored without affecting the in-service channels. In our experiment, an S-band four-branch monitoring tree-structured PON system is demonstrated and investigated experimentally.
Sun, Weifang; Yao, Bin; Zeng, Nianyin; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-01-01
As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction is generated in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering contents. Therefore, identifying the critical fault’s characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault’s characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal’s features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experiment results of the recognition for gear faults show the feasibility and effectiveness of the proposed method, especially in the gear’s weak fault features. PMID:28773148
NASA Astrophysics Data System (ADS)
Bertrand, Lionel; Jusseaume, Jessie; Géraud, Yves; Diraison, Marc; Damy, Pierre-Clément; Navelot, Vivien; Haffen, Sébastien
2018-03-01
In fractured reservoirs in the basement of extensional basins, fault and fracture parameters like density, spacing and length distribution are key properties for modelling and prediction of reservoir properties and fluid flow. As only large faults are detectable using basin-scale geophysical investigations, these fine-scale parameters need to be inferred from faults and fractures in analogous rocks at the outcrop. In this study, we use the western shoulder of the Upper Rhine Graben as an outcropping analogue of several deep borehole projects in the basement of the graben. Regional geological data, DTM (Digital Terrain Model) mapping and outcrop studies with scanlines are used to determine the spatial arrangement of the faults from the regional to the reservoir scale. The data show that: 1) the fault network can be hierarchized into three orders of scale and structural blocks with a characteristic structuration, consistent with basement-rock studies in other rift systems, allowing extrapolation of the parameters important for modelling; and 2) within the structural blocks, the fracture network associated with the faults reflects the interplay between rock-facies variations inherited from rock emplacement and the rifting event.
Interactions between Polygonal Normal Faults and Larger Normal Faults, Offshore Nova Scotia, Canada
NASA Astrophysics Data System (ADS)
Pham, T. Q. H.; Withjack, M. O.; Hanafi, B. R.
2017-12-01
Polygonal faults, small normal faults with polygonal arrangements that form in fine-grained sedimentary rocks, can influence ground-water flow and hydrocarbon migration. Using well and 3D seismic-reflection data, we have examined the interactions between polygonal faults and larger normal faults on the passive margin of offshore Nova Scotia, Canada. The larger normal faults strike approximately E-W to NE-SW. Growth strata indicate that the larger normal faults were active in the Late Cretaceous (i.e., during the deposition of the Wyandot Formation) and during the Cenozoic. The polygonal faults were also active during the Cenozoic because they affect the top of the Wyandot Formation, a fine-grained carbonate sedimentary rock, and the overlying Cenozoic strata. Thus, the larger normal faults and the polygonal faults were both active during the Cenozoic. The polygonal faults far from the larger normal faults have a wide range of orientations. Near the larger normal faults, however, most polygonal faults have preferred orientations, either striking parallel or perpendicular to the larger normal faults. Some polygonal faults nucleated at the tip of a larger normal fault, propagated outward, and linked with a second larger normal fault. The strike of these polygonal faults changed as they propagated outward, ranging from parallel to the strike of the original larger normal fault to orthogonal to the strike of the second larger normal fault. These polygonal faults hard-linked the larger normal faults at and above the level of the Wyandot Formation but not below it. We argue that the larger normal faults created stress-enhancement and stress-reorientation zones for the polygonal faults. Numerous small, polygonal faults formed in the stress-enhancement zones near the tips of larger normal faults. Stress-reorientation zones surrounded the larger normal faults far from their tips. 
Fewer polygonal faults are present in these zones, and, more importantly, most polygonal faults in these zones were either parallel or perpendicular to the larger faults.
Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua
2015-01-01
The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. 
This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.
Knowledge Representation Standards and Interchange Formats for Causal Graphs
NASA Technical Reports Server (NTRS)
Throop, David R.; Malin, Jane T.; Fleming, Land
2005-01-01
In many domains, automated reasoning tools must represent graphs of causally linked events. These include fault-tree analysis, probabilistic risk assessment (PRA), planning, procedures, medical reasoning about disease progression, and functional architectures. Each of these fields has its own requirements for the representation of causation, events, actors and conditions. The representations include ontologies of function and cause, data dictionaries for causal dependency, failure and hazard, and interchange formats between some existing tools. In none of the domains has a generally accepted interchange format emerged. The paper makes progress towards interoperability across the wide range of causal analysis methodologies. We survey existing practice and emerging interchange formats in each of these fields. Setting forth a set of terms and concepts that are broadly shared across the domains, we examine the several ways in which current practice represents them. Some phenomena are difficult to represent or to analyze in several domains. These include mode transitions, reachability analysis, positive and negative feedback loops, conditions correlated but not causally linked and bimodal probability distributions. We work through examples and contrast the differing methods for addressing them. We detail recent work in knowledge interchange formats for causal trees in aerospace analysis applications in early design, safety and reliability. Several examples are discussed, with a particular focus on reachability analysis and mode transitions. We generalize the aerospace analysis work across the several other domains. We also recommend features and capabilities for the next generation of causal knowledge representation standards.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stacey M. L. Hendrickson; April M. Whaley; Ronald L. Boring
The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method’s middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Song-Hua; Chang, James Y. H.; Boring, Ronald L.
2010-03-01
The Office of Nuclear Regulatory Research (RES) at the US Nuclear Regulatory Commission (USNRC) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
Failure mode effect analysis and fault tree analysis as a combined methodology in risk management
NASA Astrophysics Data System (ADS)
Wessiani, N. A.; Yoshio, F.
2018-04-01
Many studies have reported implementations of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most studies choose only one of these two methods in their risk management methodology, even though combining them reduces the drawbacks each method has when implemented separately. This paper aims to combine the methodologies of FMEA and FTA in assessing risk. A case study at a metal company illustrates how the combined methodology can be implemented; it assesses the internal risks that occur in the production process, and those internal risks should then be mitigated based on their risk levels.
Node degree distribution in spanning trees
NASA Astrophysics Data System (ADS)
Pozrikidis, C.
2016-03-01
A method is presented for computing the number of spanning trees that involve one link or a specified group of links, and exclude another link or a specified group of links, in a network described by a simple graph. The count is obtained in terms of derivatives of the spanning-tree generating function, defined with respect to the eigenvalues of the Kirchhoff (weighted Laplacian) matrix. The method is applied to deduce the node degree distribution in a complete or randomized set of spanning trees of an arbitrary network. An important feature of the proposed method is that the explicit construction of spanning trees is not required. It is shown that the node degree distribution in the spanning trees of the complete network is described by the binomial distribution. Numerical results are presented for the node degree distribution in square, triangular, and honeycomb lattices.
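The eigenvalue machinery this abstract builds on starts from Kirchhoff's matrix-tree theorem. A small sketch of its simplest instance (the total spanning-tree count from Laplacian eigenvalues, without the link inclusion/exclusion derivatives), assuming an unweighted simple graph:

```python
import numpy as np

def spanning_tree_count(adjacency):
    """Matrix-tree theorem: t(G) = (1/n) * product of the nonzero
    eigenvalues of the Kirchhoff (graph Laplacian) matrix."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigs = np.sort(np.linalg.eigvalsh(L))   # eigs[0] ~ 0 for a connected graph
    return int(round(np.prod(eigs[1:]) / A.shape[0]))

K4 = np.ones((4, 4)) - np.eye(4)            # complete graph on 4 nodes
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)  # 4-cycle
n_K4 = spanning_tree_count(K4)              # Cayley's formula gives 4**(4-2) = 16
n_C4 = spanning_tree_count(C4)              # a cycle on n nodes has n spanning trees
```

The generating-function approach in the paper extends exactly this quantity with per-link weights, so that differentiating picks out trees containing or avoiding specific links.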
Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.
2009-01-01
The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
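C4.5 grows its tree by choosing the split that maximizes information gain (strictly, gain ratio). A minimal sketch of the gain computation on a single hypothetical sensor feature, with toy data standing in for the DRTM simulations:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label list."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(xs, ys):
    """Threshold on one numeric feature that maximizes information gain."""
    base = entropy(ys)
    best = (0.0, None)
    for t in sorted(set(xs))[:-1]:          # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(ys)
        if gain > best[0]:
            best = (gain, t)
    return best

# toy "sensor" readings: low values nominal, high values indicate a leak
xs = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1]
ys = ["nominal"] * 3 + ["leak"] * 3
gain, threshold = best_split(xs, ys)
```

A perfectly separating threshold yields gain equal to the base entropy (1 bit for a balanced two-class set), which is the case in this toy data.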
Geology of Joshua Tree National Park geodatabase
Powell, Robert E.; Matti, Jonathan C.; Cossette, Pamela M.
2015-09-16
The database in this Open-File Report describes the geology of Joshua Tree National Park and was completed in support of the National Cooperative Geologic Mapping Program of the U.S. Geological Survey (USGS) and in cooperation with the National Park Service (NPS). The geologic observations and interpretations represented in the database are relevant to both the ongoing scientific interests of the USGS in southern California and the management requirements of NPS, specifically of Joshua Tree National Park (JOTR).Joshua Tree National Park is situated within the eastern part of California’s Transverse Ranges province and straddles the transition between the Mojave and Sonoran deserts. The geologically diverse terrain that underlies JOTR reveals a rich and varied geologic evolution, one that spans nearly two billion years of Earth history. The Park’s landscape is the current expression of this evolution, its varied landforms reflecting the differing origins of underlying rock types and their differing responses to subsequent geologic events. Crystalline basement in the Park consists of Proterozoic plutonic and metamorphic rocks intruded by a composite Mesozoic batholith of Triassic through Late Cretaceous plutons arrayed in northwest-trending lithodemic belts. The basement was exhumed during the Cenozoic and underwent differential deep weathering beneath a low-relief erosion surface, with the deepest weathering profiles forming on quartz-rich, biotite-bearing granitoid rocks. Disruption of the basement terrain by faults of the San Andreas system began ca. 20 Ma and the JOTR sinistral domain, preceded by basalt eruptions, began perhaps as early as ca. 7 Ma, but no later than 5 Ma. Uplift of the mountain blocks during this interval led to erosional stripping of the thick zones of weathered quartz-rich granitoid rocks to form etchplains dotted by bouldery tors—the iconic landscape of the Park. 
The stripped debris filled basins along the fault zones.Mountain ranges and basins in the Park exhibit an east-west physiographic grain controlled by left-lateral fault zones that form a sinistral domain within the broad zone of dextral shear along the transform boundary between the North American and Pacific plates. Geologic and geophysical evidence reveal that movement on the sinistral faults zones has resulted in left steps along the zones, resulting in the development of sub-basins beneath Pinto Basin and Shavers and Chuckwalla Valleys. The sinistral fault zones connect the Mojave Desert dextral faults of the Eastern California Shear Zone to the north and east with the Coachella Valley strands of the southern San Andreas Fault Zone to the west.Quaternary surficial deposits accumulated in alluvial washes and playas and lakes along the valley floors; in alluvial fans, washes, and sheet wash aprons along piedmonts flanking the mountain ranges; and in eolian dunes and sand sheets that span the transition from valley floor to piedmont slope. Sequences of Quaternary pediments are planed into piedmonts flanking valley-floor and upland basins, each pediment in turn overlain by successively younger residual and alluvial surficial deposits.
Varga, R.J.; Faulds, J.E.; Snee, L.W.; Harlan, S.S.; Bettison-Varga, L.
2004-01-01
Recent studies demonstrate that rifts are characterized by linked tilt domains, each containing a consistent polarity of normal faults and stratal tilt directions, and that the transition between domains is typically through formation of accommodation zones and generally not through production of throughgoing transfer faults. The mid-Miocene Black Mountains accommodation zone of southern Nevada and western Arizona is a well-exposed example of an accommodation zone linking two regionally extensive and opposing tilt domains. In the southeastern part of this zone near Kingman, Arizona, east dipping normal faults of the Whipple tilt domain and west dipping normal faults of the Lake Mead domain coalesce across a relatively narrow region characterized by a series of linked, extensional folds. The geometry of these folds in this strike-parallel portion of the accommodation zone is dictated by the geometry of the interdigitating normal faults of opposed polarity. Synclines formed where normal faults of opposite polarity face away from each other whereas anticlines formed where the opposed normal faults face each other. Opposed normal faults with small overlaps produced short folds with axial trends at significant angles to regional strike directions, whereas large fault overlaps produce elongate folds parallel to faults. Analysis of faults shows that the folds are purely extensional and result from east/northeast stretching and fault-related tilting. The structural geometry of this portion of the accommodation zone mirrors that of the Black Mountains accommodation zone more regionally, with both transverse and strike-parallel antithetic segments. Normal faults of both tilt domains lose displacement and terminate within the accommodation zone northwest of Kingman, Arizona. 
However, isotopic dating of growth sequences and crosscutting relationships show that the initiation of the two fault systems in this area was not entirely synchronous and that west dipping faults of the Lake Mead domain began to form between 1 m.y. to 0.2 m.y. prior to east dipping faults of the Whipple domain. The accommodation zone formed above an active and evolving magmatic center that, prior to rifting, produced intermediate-composition volcanic rocks and that, during rifting, produced voluminous rhyolite and basalt magmas. Copyright 2004 by the American Geophysical Union.
Improved FTA methodology and application to subsea pipeline reliability design.
Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan
2014-01-01
An innovative logic tree, Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different thinking approach for risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. Resulting improvements are summarized in comparison table form.
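For the quantitative side that FET aims to simplify, the underlying FTA calculation is still gate-by-gate probability propagation for independent basic events. A minimal sketch; the event names and probabilities are hypothetical, not from the subsea pipeline study:

```python
def and_gate(probs):
    """Top event occurs only if all independent inputs occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """Top event occurs if any independent input occurs."""
    survive = 1.0
    for q in probs:
        survive *= (1.0 - q)
    return 1.0 - survive

# hypothetical top event: corrosion OR (overpressure AND relief-valve failure)
top = or_gate([0.02, and_gate([0.1, 0.05])])
```

Nested calls mirror the tree structure, so the same two functions evaluate an arbitrarily deep coherent fault tree.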
Time-dependent seismic hazard analysis for the Greater Tehran and surrounding areas
NASA Astrophysics Data System (ADS)
Jalalalhosseini, Seyed Mostafa; Zafarani, Hamid; Zare, Mehdi
2018-01-01
This study presents a time-dependent approach to seismic hazard in Tehran and surrounding areas. Hazard is evaluated by combining background seismicity with larger earthquakes that may emanate from fault segments. Using available historical and paleoseismological data or empirical relations, the recurrence time and maximum magnitude of characteristic earthquakes for the major faults have been explored. The Brownian passage time (BPT) distribution has been used to calculate equivalent fictitious seismicity rates for major faults in the region. To include ground motion uncertainty, a logic tree and five ground motion prediction equations have been selected based on their applicability in the region. Finally, hazard maps are presented.
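A sketch of the BPT conditional-probability calculation such a model relies on, assuming the standard inverse-Gaussian density with mean recurrence mu and aperiodicity alpha; the recurrence parameters and elapsed time below are illustrative, not values from the Tehran study.

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian passage-time (inverse Gaussian) density with
    mean recurrence mu and aperiodicity (coefficient of variation) alpha."""
    return (math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3))
            * math.exp(-(t - mu) ** 2 / (2.0 * mu * alpha**2 * t)))

def integrate(f, a, b, steps=20000):
    """Plain trapezoid rule, adequate for this smooth density."""
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, steps))
    return total * h

def conditional_prob(elapsed, window, mu, alpha):
    """P(event in (elapsed, elapsed + window] | no event through `elapsed`)."""
    pdf = lambda t: bpt_pdf(t, mu, alpha)
    survival = 1.0 - integrate(pdf, 1e-9, elapsed)
    return integrate(pdf, elapsed, elapsed + window) / survival

# illustrative: mean recurrence 250 yr, aperiodicity 0.5,
# 200 yr since the last event, 50-yr forecast window
p50 = conditional_prob(elapsed=200.0, window=50.0, mu=250.0, alpha=0.5)
```

Dividing the windowed probability by the survival term is what makes the forecast time-dependent: the same window gives a different probability as the open interval grows.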
Architecture of the wood-wide web: Rhizopogon spp. genets link multiple Douglas-fir cohorts.
Beiler, Kevin J; Durall, Daniel M; Simard, Suzanne W; Maxwell, Sheri A; Kretzer, Annette M
2010-01-01
The role of mycorrhizal networks in forest dynamics is poorly understood because of the elusiveness of their spatial structure. We mapped the belowground distribution of the fungi Rhizopogon vesiculosus and Rhizopogon vinicolor and interior Douglas-fir trees (Pseudotsuga menziesii var. glauca) to determine the architecture of a mycorrhizal network in a multi-aged old-growth forest. Rhizopogon spp. mycorrhizas were collected within a 30 x 30 m plot. Trees and fungal genets were identified using multi-locus microsatellite DNA analysis. Tree genotypes from mycorrhizas were matched to reference trees aboveground. Two trees were considered linked if they shared the same fungal genet(s). The two Rhizopogon species each formed 13-14 genets, each colonizing up to 19 trees in the plot. Rhizopogon vesiculosus genets were larger, occurred at greater depths, and linked more trees than genets of R. vinicolor. Multiple tree cohorts were linked, with young saplings established within the mycorrhizal network of Douglas-fir veterans. A strong positive relationship was found between tree size and connectivity, resulting in a scale-free network architecture with small-world properties. This mycorrhizal network architecture suggests an efficient and robust network, where large trees play a foundational role in facilitating conspecific regeneration and stabilizing the ecosystem.
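The linkage rule in this study (two trees are connected if they share at least one fungal genet) is a bipartite projection. A toy sketch with made-up genet-to-tree colonization sets; the IDs and sets are purely illustrative:

```python
from itertools import combinations

# hypothetical sampling result: each fungal genet colonizes a set of tree IDs
genet_to_trees = {
    "Rv_genet1": {"T1", "T2", "T3"},
    "Rv_genet2": {"T3", "T4"},
    "Rvin_genet1": {"T2", "T5"},
}

# project onto trees: link every pair of trees sharing a genet
links = set()
for trees in genet_to_trees.values():
    for a, b in combinations(sorted(trees), 2):
        links.add((a, b))

# node degree in the projected tree-to-tree network
degree = {}
for a, b in links:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
```

Degree computed this way is the "connectivity" whose correlation with tree size the authors report.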
LIDAR Helps Identify Source of 1872 Earthquake Near Chelan, Washington
NASA Astrophysics Data System (ADS)
Sherrod, B. L.; Blakely, R. J.; Weaver, C. S.
2015-12-01
One of the largest historic earthquakes in the Pacific Northwest occurred on 15 December 1872 (M6.5-7) near the south end of Lake Chelan in north-central Washington State. Lack of recognized surface deformation suggested that the earthquake occurred on a blind, perhaps deep, fault. New LiDAR data show landslides and a ~6 km long, NW-side-up scarp in Spencer Canyon, ~30 km south of Lake Chelan. Two landslides in Spencer Canyon impounded small ponds. An historical account indicated that dead trees were visible in one pond in AD1884. Wood from a snag in the pond yielded a calibrated age of AD1670-1940. Tree ring counts show that the oldest living trees on each landslide are 130 and 128 years old. The larger of the two landslides obliterated the scarp and thus, post-dates the last scarp-forming event. Two trenches across the scarp exposed a NW-dipping thrust fault. One trench exposed alluvial fan deposits, Mazama ash, and scarp colluvium cut by a single thrust fault. Three charcoal samples from a colluvium buried during the last fault displacement had calibrated ages between AD1680 and AD1940. The second trench exposed gneiss thrust over colluvium during at least two, and possibly three fault displacements. The younger of two charcoal samples collected from a colluvium below gneiss had a calibrated age of AD1665- AD1905. For an historical constraint, we assume that the lack of felt reports for large earthquakes in the period between 1872 and today indicates that no large earthquakes capable of rupturing the ground surface occurred in the region after the 1872 earthquake; thus the last displacement on the Spencer Canyon scarp cannot post-date the 1872 earthquake. Modeling of the age data suggests that the last displacement occurred between AD1840 and AD1890. These data, combined with the historical record, indicate that this fault is the source of the 1872 earthquake. 
Analyses of aeromagnetic data reveal lithologic contacts beneath the scarp that form an ENE-striking, curvilinear zone ~2.5 km wide and ~55 km long. This zone coincides with monoclines mapped in Mesozoic bedrock and Miocene flood basalts. This study resolves the uncertainty regarding the source of the 1872 earthquake and provides important information for seismic hazard analyses of major infrastructure projects in Washington and British Columbia.
Fault detection and fault tolerance in robotics
NASA Technical Reports Server (NTRS)
Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.
1992-01-01
Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
NASA Astrophysics Data System (ADS)
Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang
2017-10-01
Based on long-term vibration monitoring of the No. 2 oil-immersed flat wave reactor in the ±500 kV converter station in East Mongolia, vibration signals in the normal state and in the core-loose fault state were recorded. Through time-frequency analysis of the signals, the vibration characteristics of the core-loose fault were obtained, and a fault diagnosis method based on the dual-tree complex wavelet transform (DT-CWT) and support vector machine (SVM) was proposed. The vibration signals were analyzed by DT-CWT, and the energy entropies of the vibration signals were taken as the feature vector; the support vector machine was used to train and test the feature vector, and accurate identification of the core-loose fault of the flat wave reactor was realized. Through the identification of many groups of normal and core-loose fault vibration signals, diagnostic accuracy reached 97.36%. The effectiveness and accuracy of the method for fault diagnosis of the flat wave reactor core is thus verified.
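The feature described here (energy entropy over wavelet sub-bands) reduces to the Shannon entropy of the normalized sub-band energy distribution. A sketch, assuming natural-log entropy (the base is a convention choice) and made-up energy values in place of real DT-CWT sub-bands:

```python
import math

def energy_entropy(subband_energies):
    """Shannon entropy of the normalized sub-band energy distribution.
    Concentrated energy -> low entropy; evenly spread energy -> high entropy."""
    total = sum(subband_energies)
    probs = [e / total for e in subband_energies if e > 0]
    return -sum(p * math.log(p) for p in probs)

# illustrative: a core-loose fault might spread vibration energy across bands
healthy = energy_entropy([9.0, 0.5, 0.3, 0.2])   # energy concentrated
faulty = energy_entropy([3.0, 2.5, 2.5, 2.0])    # energy spread out
```

Scalar features of this kind are what the SVM is then trained on, one entropy value per decomposition band set.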
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system, and computer-readable medium extend the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, and supports all common PRA analysis functions, including cut sets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
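The event-tree side of this idea can be sketched independently of any DFT solver: each pivot (which in DEFT may be a dynamic fault tree's top event) branches a sequence into failure/success, and probabilities multiply down the path. The pivot names and probabilities below are hypothetical:

```python
def event_tree_sequences(init_freq, pivots):
    """Enumerate all sequences of an event tree.
    Each pivot is (name, p_fail); branch probabilities multiply along the path."""
    seqs = {(): init_freq}
    for name, p_fail in pivots:
        nxt = {}
        for path, prob in seqs.items():
            nxt[path + ((name, "F"),)] = prob * p_fail         # pivot fails
            nxt[path + ((name, "S"),)] = prob * (1.0 - p_fail)  # pivot succeeds
        seqs = nxt
    return seqs

# hypothetical pivots; in DEFT, "cooling"'s p_fail might itself come from a DFT
seqs = event_tree_sequences(1.0, [("power", 0.01), ("cooling", 0.05)])
worst = seqs[(("power", "F"), ("cooling", "F"))]
```

The DEFT contribution is precisely that a pivot probability need not be a constant, it can be the solved probability of a dynamic fault tree with sequence dependencies.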
Fault diagnosis of helical gearbox using acoustic signal and wavelets
NASA Astrophysics Data System (ADS)
Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.
2017-05-01
The efficient transmission of power in machines is needed, and gears are an appropriate choice. Faults in gears result in loss of energy and money. Monitoring and fault diagnosis are done by analysis of the acoustic and vibration signals, which are generally considered to be unwanted by-products. This study proposes the use of a machine learning algorithm for condition monitoring of a helical gearbox using the sound signals produced by the gearbox. Artificial faults were created and the resulting signals were captured by a microphone. An extensive study using different wavelet transformations for feature extraction from the acoustic signals was done, followed by wavelet selection and feature selection using the J48 decision tree; feature classification was performed using the K-star algorithm. A classification accuracy of 100% was obtained in the study.
Inferring patterns in mitochondrial DNA sequences through hypercube independent spanning trees.
Silva, Eduardo Sant Ana da; Pedrini, Helio
2016-03-01
Given a graph G, a set of spanning trees rooted at a vertex r of G is said to be vertex/edge independent if, for each vertex v of G, v≠r, the paths from r to v in any pair of trees are vertex/edge disjoint. Independent spanning trees (ISTs) provide a number of advantages in data broadcasting due to their fault-tolerant properties. For this reason, some studies have addressed the issue by providing mechanisms for constructing independent spanning trees efficiently. In this work, we investigate how to construct independent spanning trees on hypercubes, generated from spanning binomial trees, and how to use them to predict mitochondrial DNA sequence parts through paths on the hypercube. The prediction works both for inferring mitochondrial DNA sequences comprised of six bases and for inferring anomalies that probably should not belong to the mitochondrial DNA standard.
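The fault-tolerance property that ISTs exploit rests on the hypercube's disjoint parallel paths. A hedged sketch (a classic Saad-Schultz-style routing, not necessarily the authors' exact IST construction) that builds and checks d internally vertex-disjoint paths from the root 0 to every vertex of Q_d:

```python
def disjoint_paths(v, d):
    """d pairwise internally vertex-disjoint paths from 0 to v in the
    d-dimensional hypercube Q_d (one path per dimension)."""
    set_bits = [b for b in range(d) if v >> b & 1]
    paths = []
    for j in range(d):
        node, path = 0, [0]
        order = sorted(set_bits, key=lambda b: (b - j) % d)  # cyclic order from j
        if v >> j & 1:
            for b in order:            # flip v's set bits, dimension j first
                node ^= 1 << b
                path.append(node)
        else:
            node ^= 1 << j             # detour through unused dimension j
            path.append(node)
            for b in order:
                node ^= 1 << b
                path.append(node)
            node ^= 1 << j             # leave the detour dimension
            path.append(node)
        paths.append(path)
    return paths

d = 4
for v in range(1, 1 << d):
    paths = disjoint_paths(v, d)
    for p in paths:
        assert p[0] == 0 and p[-1] == v
    interiors = [set(p[1:-1]) for p in paths]
    for i in range(d):
        for j in range(i + 1, d):
            assert not (interiors[i] & interiors[j])
```

Because the d paths share no interior vertices or edges, up to d-1 node or link failures still leave at least one intact route, which is the property a set of independent spanning trees packages as broadcast trees.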
Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-01-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probability of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department is 73, 64, and 95 percent respectively, and the conveyor belt subsystem was found to be the most failure-prone. Finally, maintenance is proposed as a method to control and prevent the occurrence of failures. PMID:26779433
Towards generating ECSS-compliant fault tree analysis results via ConcertoFLA
NASA Astrophysics Data System (ADS)
Gallina, B.; Haider, Z.; Carlsson, A.
2018-05-01
Attitude Control Systems (ACSs) maintain the orientation of a satellite in three-dimensional space. ACSs need to be engineered in compliance with ECSS standards and need to ensure a certain degree of dependability. Thus, dependability analysis is conducted at various levels and by using ECSS-compliant techniques. Fault Tree Analysis (FTA) is one of these techniques. FTA is being automated within various Model Driven Engineering (MDE)-based methodologies. The tool-supported CHESS methodology is one of them. This methodology incorporates ConcertoFLA, a dependability analysis technique enabling failure behaviour analysis and thus FTA-results generation. ConcertoFLA, however, similarly to other techniques, still belongs to the academic research niche. To promote this technique within the space industry, we apply it to an ACS and discuss its multi-faceted potential in the context of ECSS-compliant engineering.
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have mostly focused on meeting project budget and schedule objectives, the proposed approach addresses the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
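The hybrid idea of checking a Monte Carlo estimate against a fault-tree calculation can be illustrated on a toy system (the gate structure and failure probabilities below are invented for illustration and are unrelated to the actual airspace model):

```python
import random

# Toy system: top event = (A AND B) OR C, with independent failures.
pA, pB, pC = 0.05, 0.10, 0.01

# Analytic fault-tree probability of the top event.
p_and = pA * pB
p_top = p_and + pC - p_and * pC

# Standard Monte Carlo estimate of the same top event
# (each random() call is an independent Bernoulli draw).
random.seed(1)
n = 200_000
hits = sum(
    (random.random() < pA and random.random() < pB) or random.random() < pC
    for _ in range(n)
)
estimate = hits / n
assert abs(estimate - p_top) < 0.005
```

The analytic value serves as a cross-check on the sampling estimate; accelerated schemes like the one in the abstract aim to keep sampling fidelity while cutting the number of draws needed for rare events.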
Logic flowgraph methodology - A tool for modeling embedded systems
NASA Technical Reports Server (NTRS)
Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.
1991-01-01
The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of using such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.
Risk assessment techniques with applicability in marine engineering
NASA Astrophysics Data System (ADS)
Rudenko, E.; Panaitescu, F. V.; Panaitescu, M.
2015-11-01
Nowadays risk management is a carefully planned process. The task of risk management is organically woven into the general problem of increasing the efficiency of business. Passive attitudes to risk and mere awareness of its existence are being replaced by active management techniques. Risk assessment is one of the most important stages of risk management, since to manage risk it is first necessary to analyze and evaluate it. There are many definitions of this notion, but in the general case risk assessment refers to the systematic process of identifying the factors and types of risk and their quantitative assessment; that is, risk assessment methodology combines mutually complementary quantitative and qualitative approaches. Purpose of the work: in this paper we consider Fault Tree Analysis (FTA) as a risk assessment technique. The objectives are: understand the purpose of FTA, understand and apply the rules of Boolean algebra, analyse a simple system using FTA, and weigh the advantages and disadvantages of FTA. Research and methodology: the main purpose is to help identify potential causes of system failures before the failures actually occur, and to evaluate the probability of the top event. The steps of this analysis are: examination of the system from the top down, the use of symbols to represent events, the use of mathematical tools for critical areas, and the use of fault tree logic diagrams to identify the cause of the top event. Results: the study yields the critical areas, the fault tree logic diagrams, and the probability of the top event. These results can be used for risk assessment analyses.
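A minimal sketch of the Boolean-algebra step, enumerating the minimal cut sets of a toy fault tree (the gate structure below is invented for illustration):

```python
from itertools import product

# Toy fault tree in Boolean form: TOP = (A AND B) OR (A AND C).
def top(a, b, c):
    return (a and b) or (a and c)

events = ["A", "B", "C"]
minimal_cut_sets = []
# Enumerate event subsets by increasing size; keep those that raise the
# top event and contain no smaller cut set already found.
for states in sorted(product([False, True], repeat=3), key=sum):
    subset = {e for e, on in zip(events, states) if on}
    if top(*states) and not any(c <= subset for c in minimal_cut_sets):
        minimal_cut_sets.append(subset)

assert {frozenset(c) for c in minimal_cut_sets} == {
    frozenset({"A", "B"}), frozenset({"A", "C"})
}
```

Both minimal cut sets contain event A, flagging it as the critical area a top-down FTA would single out.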
Using certification trails to achieve software fault tolerance
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, they are accepted as correct, otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized and illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach is compared to other approaches to fault tolerance. Because of space limitations we have omitted examples of our technique applied to the Huffman tree and convex hull problems. These can be found in the full version of this paper.
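The two-phase structure can be sketched on a simpler problem than minimum spanning trees: sorting, where the certification trail is the sorting permutation and the second phase verifies it in linear time (an illustrative analogue of the idea, not the paper's MST construction):

```python
def phase_one(xs):
    """Solve the problem (here: sorting) and emit a certification trail:
    the permutation that sorts the input."""
    trail = sorted(range(len(xs)), key=lambda i: xs[i])
    result = [xs[i] for i in trail]
    return result, trail

def phase_two(xs, trail):
    """Re-solve using the trail in O(n): apply the claimed permutation
    and verify it, flagging an error if the trail is wrong."""
    if sorted(trail) != list(range(len(xs))):            # must be a permutation
        return None, "error"
    result = [xs[i] for i in trail]
    if any(a > b for a, b in zip(result, result[1:])):   # must be sorted
        return None, "error"
    return result, "ok"

xs = [5, 1, 4, 2]
res1, trail = phase_one(xs)
res2, status = phase_two(xs, trail)
assert status == "ok" and res1 == res2 == [1, 2, 4, 5]
# A corrupted trail is detected rather than silently accepted.
assert phase_two(xs, [0, 1, 2, 3])[1] == "error"
```

Note the essential property from the abstract: the second phase either reproduces a correct output or reports an error, even when the trail it receives is bad.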
Bodin, Paul; Bilham, Roger; Behr, Jeff; Gomberg, Joan; Hudnut, Kenneth W.
1994-01-01
Five out of six functioning creepmeters on southern California faults recorded slip triggered at the time of some or all of the three largest events of the 1992 Landers earthquake sequence. Digital creep data indicate that dextral slip was triggered within 1 min of each mainshock and that maximum slip velocities occurred 2 to 3 min later. The duration of triggered slip events ranged from a few hours to several weeks. We note that triggered slip occurs commonly on faults that exhibit fault creep. To account for the observation that slip can be triggered repeatedly on a fault, we propose that the amplitude of triggered slip may be proportional to the depth of slip in the creep event and to the available near-surface tectonic strain that would otherwise eventually be released as fault creep. We advance the notion that seismic surface waves, perhaps amplified by sediments, generate transient local conditions that favor the release of tectonic strain to varying depths. Synthetic strain seismograms are presented that suggest increased pore pressure during periods of fault-normal contraction may be responsible for triggered slip, since maximum dextral shear strain transients correspond to times of maximum fault-normal contraction.
A novel design for sap flux data acquisition in large research plots using open source components
NASA Astrophysics Data System (ADS)
Hawthorne, D. A.; Oishi, A. C.
2017-12-01
Sap flux sensors are a widely-used tool for estimating in-situ, tree-level transpiration rates. These probes are installed in the stems of multiple trees within a study area and are typically left in place throughout the year. Sensors vary in their design and theory of operation, but all require electrical power for a heating element and produce at least one analog signal that must be digitized for storage. There are two topologies traditionally adopted to energize these sensors and gather the data from them. In one, a single data logger and power source are used. Dedicated cables radiate out from the logger to supply power to each of the probes and retrieve analog signals. In the other layout, a standalone data logger is located at each monitored tree. Batteries must then be distributed throughout the plot to service these loggers. We present a hybrid solution based on industrial control systems that employs a central data logger and battery, but co-locates digitizing hardware with the sensors at each tree. Each hardware node is able to communicate and share power over wire links with neighboring nodes. The resulting network provides a fault-tolerant path between the logger and each sensor. The approach is optimized to limit disturbance of the study plot, protect signal integrity and to enhance system reliability. This open-source implementation is built on the Arduino micro-controller system and employs RS485 and Modbus communications protocols. It is supported by laptop based management software coded in Python. The system is designed to be readily fabricated and programmed by non-experts. It works with a variety of sap-flux measurement techniques and it is able to interface to additional environmental sensors.
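For context, thermal-dissipation sap flux probes convert the measured temperature difference between a heated and a reference needle into a flux density via Granier's empirical calibration; a sketch of that conversion (the exact calibration applied by this particular system is an assumption here):

```python
def sap_flux_density(dT, dT_max):
    """Granier-style thermal dissipation estimate (m^3 m^-2 s^-1).
    K is the dimensionless temperature ratio; the coefficients are
    Granier's published empirical calibration."""
    K = (dT_max - dT) / dT
    return 118.99e-6 * K ** 1.231

# Zero flow: the probe temperature difference is at its maximum.
assert sap_flux_density(8.0, 8.0) == 0.0
# Higher flow cools the heated probe, shrinking dT and raising the estimate.
assert sap_flux_density(5.0, 8.0) > sap_flux_density(7.0, 8.0)
```

In the network described above, each node would digitize dT locally and ship it over Modbus; the conversion itself can run on the logger or in post-processing.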
NASA Astrophysics Data System (ADS)
Jiménez-Bonilla, Alejandro; Balanya, Juan Carlos; Exposito, Inmaculada; Diaz-Azpiroz, Manuel; Barcos, Leticia
2015-04-01
Strain partitioning modes within migrating orogenic arcs may result in arc-parallel stretching that produces along-strike structural and topographic discontinuities. In the Western Gibraltar Arc, arc-parallel stretching has operated from the Lower Miocene up to recent times. In this study, we have reviewed the Colmenar Fault, located at the SW end of the Subbetic ranges, previously interpreted as a Middle Miocene low-angle normal fault. Our results allow us to identify younger normal fault segments, to analyse their kinematics, growth, and segment linkage, and to discuss their role in the structural and relief drop at regional scale. The Colmenar Fault is folded by post-Serravallian NE-SW buckle folds. Both the SW-dipping fault surfaces and the SW-plunging fold axes contribute to the structural relief drop toward the SW. Nevertheless, at the NW tip of the Colmenar Fault, we have identified unfolded normal faults cutting Quaternary soils. They are grouped into a N110˚E striking brittle deformation band 15 km long and up to 3 km wide (hereafter the Ubrique Normal Fault Zone; UNFZ). The UNFZ is divided into three sectors: (a) The western tip zone is formed by normal faults which usually dip to the SW and whose slip directions vary between N205˚E and N225˚E. These segments are linked to each other by left-lateral oblique faults interpreted as transfer faults. (b) The central part of the UNFZ is composed of a single N115˚E striking fault segment 2.4 km long. Slip directions are around N190˚E and the estimated throw is 1.25 km. The fault scarp is well preserved, reaching up to 400 m in its central part and diminishing to 200 m at both segment terminations. This fault segment is linked to the western tip by an overlap zone characterized by tilted blocks limited by high-angle NNE-SSW and WNW-ESE striking faults interpreted as "box faults" [1]. (c) The eastern tip zone is formed by fault segments with oblique slip which also contribute to the downthrow of the SW block.
This kinematic pattern seems to be related to other strike-slip fault systems developed to the E of the UNFZ. The structural revision, together with updated kinematic data, suggests that the Colmenar Fault is cut and downthrown by a younger normal fault zone, the UNFZ, which would have contributed to accommodating arc-parallel stretching until the Quaternary. This stretching provokes along-strike relief segmentation, with the UNFZ being the main fault zone causing the final drop of the Subbetic ranges towards the SW within the Western Gibraltar Arc. Our results show displacement variations in each fault segment of the UNFZ, diminishing toward their tips. This suggests that fault segment linkage finally evolved to build the nearly continuous current fault zone. The development of the current large through-going faults linked inside the UNFZ is similar to those simulated in some numerical modelling of rift systems [2]. Acknowledgements: RNM-415 and CGL-2013-46368-P. [1] Peacock, D.C.P., Knipe, R.J., Sanderson, D.J., 2000. Glossary of normal faults. Journal of Structural Geology, 22, 291-305. [2] Cowie, P.A., Gupta, S., Dawers, N.H., 2000. Implications of fault array evolution for synrift depocentre development: insights from a numerical fault growth model. Basin Research, 12, 241-261.
1983-04-01
… diagnostic/fault isolation devices … operation of the cannibalization point … with diagnostic software based on a "fault tree" representation of the M65 ThS to bridge the gap in diagnostics capability was demonstrated in 1980 and … identification friend or foe, which has much lower reliability than TSQ-73-peculiar hardware. Thus, as in other examples, reported readiness does not reflect …
AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment
2014-10-01
… Analysis Generator … 3.2.3 Mapping to OpenFTA Format File … 3.2.4 Mapping to Generic XML Format … 3.2.5 AADL and FTA Mapping Rules … 3.2.6 Issues … Preliminary System Safety Assessment (PSSA), System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs). …
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used. These methods can detect the failure of rolling bearings effectively, but to achieve better detection results they often require a large number of training samples. Based on the above, a novel clustering method is proposed in this paper. This novel method is able to find the correct number of clusters automatically. The effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types from small samples, while the diagnosis results remain relatively accurate even for massive samples.
Discovering the Complexity of Capable Faults in Northern Chile
NASA Astrophysics Data System (ADS)
Gonzalez, G.; del Río, I. A.; Rojas Orrego, C., Sr.; Astudillo, L. A., Sr.
2017-12-01
Great crustal earthquakes (Mw >7.0) in the upper plate of subduction zones are relatively uncommon and less well documented. We hypothesize that crustal earthquakes are poorly represented in the instrumental record because they have long recurrence intervals. In northern Chile, the extreme long-term aridity permits extraordinary preservation of landforms related to fault activity, making this region a primary target for understanding how upper plate faults work at subduction zones. To understand how these faults relate to crustal seismicity in the long term, we have conducted a detailed palaeoseismological study, integrating trench logging and photogrammetry based on UAVs. Optically stimulated luminescence (OSL) age determinations were performed for dating deposits linked to faulting. In this contribution we present the case study of two primary faults located in the Coastal Cordillera of northern Chile between Iquique (21ºS) and Antofagasta (24ºS). We estimate the maximum moment magnitude of earthquakes generated on these upper plate faults, their recurrence interval, and the fault-slip rate. We conclude that the studied upper plate faults show complex kinematics on geological timescales. Faults seem to change their kinematics from normal (extension) to reverse (compression) or from normal to transcurrent (compression) according to the stage of the subduction earthquake cycle. Normal displacement is related to coseismic stages and compression is linked to the interseismic period. As a result of this complex interaction, these faults are capable of generating Mw 7.0 earthquakes, with recurrence times on the order of thousands of years, during every stage of the subduction earthquake cycle.
Fault Analysis on Bevel Gear Teeth Surface Damage of Aeroengine
NASA Astrophysics Data System (ADS)
Cheng, Li; Chen, Lishun; Li, Silu; Liang, Tao
2017-12-01
To address bevel gear tooth surface damage in an aero-engine, a fault tree for the damage was drawn from the logical relations among possible causes, and scanning electron microscopy, energy spectrum analysis, metallographic examination, hardness measurement, and other analysis means were used to investigate the spalled gear tooth. The results showed that the material composition, metallographic structure, micro-hardness, and carburization depth of the faulty bevel gear meet the technical requirements. Contact fatigue spalling caused the tooth surface damage, mainly due to the small interference fit between the accessory gearbox mounting hole and the driving bevel gear bearing seat. Improvement measures were proposed and subsequently verified to be effective.
Goal-Function Tree Modeling for Systems Engineering and Fault Management
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Breckenridge, Jonathan T.
2013-01-01
This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides a means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT, and its ability to represent both nominal and off-nominal goals, enable the GFT to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and the definition of system failure scenarios.
Risk management of PPP project in the preparation stage based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Xing, Yuanzhi; Guan, Qiuling
2017-03-01
The risk management of a PPP (Public Private Partnership) project can improve the level of risk control between government departments and private investors, so as to make more beneficial decisions, reduce investment losses, and achieve mutual benefit. Therefore, this paper takes the risks of the PPP project preparation stage as the research object, identifying and confirming four types of risk. Fault tree analysis (FTA) is used to evaluate the risk factors belonging to different parts and to quantify the degree of influence of each risk on the basis of risk identification. In addition, the importance order of the risk factors is determined by calculating the structural importance of each unit in the PPP project preparation stage. The result shows that the accuracy of government decision-making, the rationality of private investors' fund allocation, and the instability of market returns are the main factors generating the shared risk in the project.
Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1994-01-01
The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
Enterprise architecture availability analysis using fault trees and stakeholder interviews
NASA Astrophysics Data System (ADS)
Närman, Per; Franke, Ulrik; König, Johan; Buschle, Markus; Ekstedt, Mathias
2014-01-01
The availability of enterprise information systems is a key concern for many organisations. This article describes a method for availability analysis based on Fault Tree Analysis and constructs from the ArchiMate enterprise architecture (EA) language. To test the quality of the method, several case studies within the banking and electrical utility industries were performed. Input data were collected through stakeholder interviews. The results from the case studies were compared with availability log data to determine the accuracy of the method's predictions. In the five cases where accurate log data were available, the yearly downtime estimates were within eight hours of the actual downtimes. The cost of performing the analysis was low; no case study required more than 20 man-hours of work, making the method ideal for practitioners with an interest in obtaining rapid availability estimates of their enterprise information systems.
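The core availability arithmetic behind such fault-tree evaluations can be sketched for a toy architecture (the component availabilities below are hypothetical, not figures from the case studies):

```python
def availability_series(components):
    """All components are needed: availabilities multiply."""
    a = 1.0
    for c in components:
        a *= c
    return a

def availability_parallel(components):
    """Redundant components: the system fails only if all of them fail."""
    u = 1.0
    for c in components:
        u *= 1.0 - c
    return 1.0 - u

# Hypothetical web tier (two redundant servers) in series with one database.
web = availability_parallel([0.99, 0.99])
system = availability_series([web, 0.999])
downtime_hours = (1.0 - system) * 8760   # expected yearly downtime
assert abs(system - 0.9989) < 1e-3
assert downtime_hours < 10.5
```

Mapping an ArchiMate model to a fault tree amounts to deciding, per dependency, whether components compose in series (AND of failures not required) or in parallel (redundancy), then folding the component availabilities through exactly this arithmetic.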
Uncertainty analysis in fault tree models with dependent basic events.
Pedroni, Nicola; Zio, Enrico
2013-06-01
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT, and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects of objective and epistemic dependences on the TE probability. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present).
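The Fréchet bounds used for unknown objective dependence have a simple closed form; a sketch for two basic events (toy probabilities, not the article's six-BE tree):

```python
def frechet_and(p, q):
    """Bounds on P(A AND B) when the dependence between A and B is unknown."""
    return max(0.0, p + q - 1.0), min(p, q)

def frechet_or(p, q):
    """Bounds on P(A OR B) under unknown dependence."""
    return max(p, q), min(1.0, p + q)

p, q = 0.2, 0.3
lo, hi = frechet_and(p, q)
assert lo <= p * q <= hi                  # the independent case lies inside
lo_or, hi_or = frechet_or(p, q)
assert lo_or <= p + q - p * q <= hi_or    # likewise for an OR gate
```

Propagating such intervals gate by gate, instead of point probabilities, is what turns an ordinary fault-tree evaluation into a bounded one that is robust to unknown dependence between basic events.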
A fault tree model to assess probability of contaminant discharge from shipwrecks.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I
2014-11-15
Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation.
Qualitative Importance Measures of Systems Components - A New Approach and Its Applications
NASA Astrophysics Data System (ADS)
Chybowski, Leszek; Gawdzińska, Katarzyna; Wiśnicki, Bogusz
2016-12-01
The paper presents an improved methodology for analysing the qualitative importance of components in the functional and reliability structures of a system. We present basic importance measures, i.e. Birnbaum's structural measure, the order of the smallest minimal cut set, the repetition count of the i-th event in the fault tree, and the streams measure. A subsystem of circulation pumps and fuel heaters in the main engine fuel supply system of a container vessel illustrates the qualitative importance analysis. We constructed a functional model and a fault tree which we analysed using qualitative measures. Additionally, we compared the calculated measures and introduced corrected measures as a tool for improving the analysis. We proposed scaled measures and a common measure taking into account the location of the component in the reliability and functional structures. Finally, we proposed an area where the measures could be applied.
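Birnbaum's structural measure, the first of the measures listed, can be computed directly from a system's structure function; a small sketch on an invented three-component system (not the vessel subsystem from the paper):

```python
from itertools import product

def structure(x):
    """Example coherent system: component 0 in series with
    a parallel pair (components 1 and 2)."""
    return x[0] and (x[1] or x[2])

def birnbaum_structural(phi, n, i):
    """Fraction of the other components' state vectors in which component i
    is critical, i.e. flipping i alone flips the system state."""
    critical = 0
    for rest in product([0, 1], repeat=n - 1):
        x_up = list(rest[:i]) + [1] + list(rest[i:])
        x_dn = list(rest[:i]) + [0] + list(rest[i:])
        critical += phi(x_up) != phi(x_dn)
    return critical / 2 ** (n - 1)

# The series component is structurally more important than either
# member of the redundant pair.
assert birnbaum_structural(structure, 3, 0) > birnbaum_structural(structure, 3, 1)
```

Here the series component is critical in 3 of the 4 states of the other two components (importance 0.75), while each parallel component is critical in only 1 of 4 (0.25), matching the intuition that single points of failure rank highest.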
Schwartz, D.P.; Pantosti, D.; Okumura, K.; Powers, T.J.; Hamilton, J.C.
1998-01-01
Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. 
Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that lower slip sections of large events such as 1906 must fill in on a periodic basis with smaller and more frequent earthquakes.
Moran, Michael J.; Wilson, Jon W.; Beard, L. Sue
2015-11-03
Several major faults, including the Salt Cedar Fault and the Palm Tree Fault, play an important role in the movement of groundwater. Groundwater may move along these faults and discharge where faults intersect volcanic breccias or fractured rock. Vertical movement of groundwater along faults is suggested as a mechanism for the introduction of heat energy present in groundwater from many of the springs. Groundwater altitudes in the study area indicate a potential for flow from Eldorado Valley to Black Canyon although current interpretations of the geology of this area do not favor such flow. If groundwater from Eldorado Valley discharges at springs in Black Canyon then the development of groundwater resources in Eldorado Valley could result in a decrease in discharge from the springs. Geology and structure indicate that it is not likely that groundwater can move between Detrital Valley and Black Canyon. Thus, the development of groundwater resources in Detrital Valley may not result in a decrease in discharge from springs in Black Canyon.
NASA Astrophysics Data System (ADS)
Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali
2017-07-01
The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis, but assessing fault severity is a challenging task. The Wavelet Transform (WT), as a multiresolution analysis tool, is able to trade off between the time and frequency information in the signals and also serves as a de-noising method. The CWT scaling function gives different resolutions to the discrete signals, such as very fine resolution at lower scales but coarser resolution at higher scales; however, the computational cost increases because different signal resolutions must be produced. The DWT has a lower computational cost, as the dilation function allows the signals to be decomposed through a tree of low- and high-pass filters without further analysing the high-frequency components. In this paper, a method for bearing fault identification is presented that combines the Continuous Wavelet Transform (CWT) and Discrete Wavelet Transform (DWT) with envelope analysis for bearing fault diagnosis. The experimental data were obtained from Case Western Reserve University. The analysis results showed that the proposed method is effective in detecting bearing faults and in identifying the exact fault location and severity, especially for inner race and outer race faults.
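The filter-tree decomposition and envelope step can be sketched as follows. This is a minimal sketch using Haar filters and a synthetic amplitude-modulated signal, not the paper's wavelet choice or the Case Western Reserve data; the sampling rate and the 120 Hz "fault" modulation rate are illustrative assumptions.

```python
import numpy as np

def haar_dwt_approx(x, levels):
    """Iterate the Haar analysis filters, keeping only the low-pass branch
    (the high-frequency detail coefficients are not analysed further)."""
    approx, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        approx = approx[: len(approx) // 2 * 2]          # force even length
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # low-pass, downsample
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # high-pass, downsample
        details.append(d)
        approx = a
    return approx, details

def envelope_spectrum(x, fs):
    """Crude envelope analysis: rectify, remove the DC offset, FFT magnitude."""
    env = np.abs(x) - np.mean(np.abs(x))
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec

# Synthetic "bearing" signal: a 3 kHz resonance amplitude-modulated at a
# 120 Hz fault-passage rate (both numbers are arbitrary choices).
fs = 12000
t = np.arange(0, 1, 1.0 / fs)
x = (1 + np.cos(2 * np.pi * 120 * t)) * np.sin(2 * np.pi * 3000 * t)

approx, details = haar_dwt_approx(x, levels=2)        # tree of filters
freqs, spec = envelope_spectrum(x, fs)
mask = (freqs > 0) & (freqs < 1000)                   # search the low band
peak = freqs[mask][np.argmax(spec[mask])]
print(peak)  # the envelope-spectrum peak sits at the 120 Hz modulation rate
```

The point of the sketch is the structure: the DWT tree repeatedly halves the data and only the approximation branch is carried forward, while the envelope spectrum recovers the low-frequency fault-passage rate hidden inside the high-frequency resonance.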
Experimental evaluation of the certification-trail method
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. A comprehensive attempt to assess experimentally the performance and overall value of the method is reported. The method is applied to algorithms for the following problems: Huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which an approach using certification trails allows for significantly faster overall program execution time than a basic time-redundancy approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed. In both cases, the algorithm which performs answer-validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis which enable comparison between the certification-trail method and the time-redundancy approach were presented. The analysis reveals some substantial and sometimes surprising advantages for the certification-trail method. Finally, the work our group performed on the design and implementation of fault-injection testbeds for experimental analysis of the certification-trail technique is discussed. This work employs two distinct methodologies: software fault injection (modification of instruction, data, and stack segments of programs on a Sun Sparcstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed at logical zero/one values). Our results indicate the viability of the certification-trail technique. It is also believed that the tools developed provide a solid base for additional exploration.
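For sorting, the certification-trail idea can be illustrated as follows: the primary execution records the sorting permutation as its trail, and a second execution validates the claimed answer in linear time instead of re-sorting (which is what plain time redundancy would do). This is a minimal illustration, not the paper's implementation.

```python
def sort_with_trail(xs):
    """Primary execution: sort, and also emit a certification trail
    (here, the permutation that maps input positions to output positions)."""
    perm = sorted(range(len(xs)), key=lambda i: xs[i])
    return [xs[i] for i in perm], perm

def check_with_trail(xs, ys, perm):
    """Secondary execution: validate the claimed answer in O(n) using the trail,
    instead of re-sorting from scratch (pure time redundancy)."""
    n = len(xs)
    if len(ys) != n or len(perm) != n:
        return False
    seen = [False] * n
    for i, p in enumerate(perm):
        # perm must be a valid permutation, and ys must be xs reordered by it
        if not (0 <= p < n) or seen[p] or ys[i] != xs[p]:
            return False
        seen[p] = True
    # the claimed output must be in non-decreasing order
    return all(ys[i] <= ys[i + 1] for i in range(n - 1))

data = [5, 1, 4, 1, 3]
answer, trail = sort_with_trail(data)
print(answer, check_with_trail(data, answer, trail))  # [1, 1, 3, 4, 5] True
```

A fault that corrupts either execution is caught because the checker rejects any output/trail pair that is not a sorted permutation of the input, while the check itself is asymptotically cheaper than recomputing the sort.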
Investigation of Fuel Oil/Lube Oil Spray Fires On Board Vessels. Volume 3.
1998-11-01
U.S. Coast Guard Research and Development Center 1082 Shennecossett Road, Groton, CT 06340-6096 Report No. CG-D-01-99, III Investigation of Fuel ...refinery). Developed the technical and mathematical specifications for BRAVO™2.0, a state-of-the-art Windows program for performing event tree and fault...tree analyses. Also managed the development of and prepared the technical specifications for QRA ROOTS™, a Windows program for storing, searching K-4
1992-01-01
boost plenum which houses the camshaft . The compressed mixture is metered by a throttle to intake valves of the engine. The engine is constructed from...difficulties associated with a time-tagged fault tree . In particular, recent work indicates that the multi-layer perception architecture can give good fdi...Abstract: In the past decade, wastepaper recycling has gained a wider acceptance. Depletion of tree stocks, waste water treatment demands and
Interim reliability evaluation program, Browns Ferry 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1981-01-01
Probabilistic risk analysis techniques, i.e., event tree and fault tree analysis, were utilized to provide a risk assessment of the Browns Ferry Nuclear Plant Unit 1. Browns Ferry 1 is a General Electric boiling water reactor of the BWR 4 product line with a Mark 1 (drywell and torus) containment. Within the guidelines of the IREP Procedure and Schedule Guide, dominant accident sequences that contribute to public health and safety risks were identified and grouped according to release categories.
Cost-effectiveness analysis of risk-reduction measures to reach water safety targets.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof; Pettersson, Thomas J R
2011-01-01
Identifying the most suitable risk-reduction measures in drinking water systems requires a thorough analysis of possible alternatives. In addition to the effects on the risk level, the economic aspects of the risk-reduction alternatives are also commonly considered important. Drinking water supplies are complex systems, and to avoid sub-optimisation of risk-reduction measures the entire system from source to tap needs to be considered. There is a lack of methods for quantification of water supply risk reduction in an economic context for entire drinking water systems. The aim of this paper is to present a novel approach for risk assessment in combination with economic analysis to evaluate risk-reduction measures based on a source-to-tap approach. The approach combines a probabilistic and dynamic fault tree method with cost-effectiveness analysis (CEA). The developed approach comprises the following main parts: (1) quantification of risk reduction of alternatives using a probabilistic fault tree model of the entire system; (2) combination of the modelling results with CEA; and (3) evaluation of the alternatives with respect to the risk reduction, the probability of not reaching water safety targets, and the cost-effectiveness. The fault tree method and CEA enable comparison of risk-reduction measures in the same quantitative unit and consider costs and uncertainties. The approach provides a structured and thorough analysis of risk-reduction measures that facilitates transparency and long-term planning of drinking water systems in order to avoid sub-optimisation of available resources for risk reduction. Copyright © 2010 Elsevier Ltd. All rights reserved.
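The coupling of a fault tree with CEA can be sketched as follows. All subsystem names, probabilities, and costs below are invented for illustration and do not come from the paper; the fault tree is reduced to a single OR gate over independent subsystem failures.

```python
def or_gate(ps):
    """Probability of the top event when any one of the independent events occurs."""
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

# Hypothetical annual probabilities of failing the water safety target,
# per subsystem of a source-to-tap fault tree (numbers are invented):
baseline = {"raw_water": 0.02, "treatment": 0.05, "distribution": 0.01}
p0 = or_gate(baseline.values())

# Risk-reduction alternatives: annualised cost and the basic-event
# probabilities after the measure is implemented.
alternatives = {
    "extra_barrier": (120_000, {**baseline, "treatment": 0.01}),
    "source_protection": (40_000, {**baseline, "raw_water": 0.005}),
    "network_renewal": (300_000, {**baseline, "distribution": 0.002}),
}

# Rank by cost-effectiveness ratio: cost per unit of risk reduction.
for name, (cost, events) in sorted(
        alternatives.items(),
        key=lambda kv: kv[1][0] / (p0 - or_gate(kv[1][1].values()))):
    reduction = p0 - or_gate(events.values())
    print(f"{name}: risk reduction {reduction:.4f}, ratio {cost / reduction:,.0f}")
```

Note that the cheapest measure is not automatically the most cost-effective: the ranking depends on how much top-event probability each measure actually removes from the tree.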
NASA Astrophysics Data System (ADS)
Xie, Liujuan; Pei, Yangwen; Li, Anren; Wu, Kongyou
2018-06-01
As faults can be barriers to or conduits for fluid flow, it is critical to understand fault seal processes and their effects on the sealing capacity of a fault zone. Apart from the stratigraphic juxtaposition between the hanging wall and footwall, the development of fault rocks is of great importance in changing the sealing capacity of a fault zone. Therefore, field-based structural analysis has been employed to identify the meso-scale and micro-scale deformation features and to understand their effects on modifying the porosity of fault rocks. In this study, the Lenghu5 fold-and-thrust belt (northern Qaidam Basin, NE Tibetan Plateau), with well-exposed outcrops, was selected as an example for meso-scale outcrop mapping and SEM (Scanning Electron Microscope) micro-scale structural analysis. The detailed outcrop maps enabled us to link the samples with meso-scale fault architecture. The representative rock samples, collected in both the fault zones and the undeformed hanging walls/footwalls, were studied by SEM micro-structural analysis to identify the deformation features at the micro-scale and evaluate their influences on the fluid flow properties of the fault rocks. Based on the multi-scale structural analyses, the deformation mechanisms accounting for porosity reduction in the fault rocks have been identified, which are clay smearing, phyllosilicate-framework networking and cataclasis. The sealing capacity is highly dependent on the clay content: high concentrations of clay minerals in fault rocks are likely to form continuous clay smears or micro-clay smears between framework silicates, which can significantly decrease the porosity of the fault rocks. However, there is no direct link between the fault rocks and host rocks. Similar stratigraphic juxtapositions can generate fault rocks with very different magnitudes of porosity reduction.
The resultant fault rocks can be predicted only when the fault throw is smaller than the thickness of a faulted bed, in which scenario self-juxtaposition forms between the hanging wall and footwall.
CARE3MENU- A CARE III USER FRIENDLY INTERFACE
NASA Technical Reports Server (NTRS)
Pierce, J. L.
1994-01-01
CARE3MENU generates an input file for the CARE III program. CARE III is used for reliability prediction of complex, redundant, fault-tolerant systems including digital computers, aircraft, nuclear and chemical control systems. The CARE III input file often becomes complicated and is not easily formatted with a text editor. CARE3MENU provides an easy, interactive method of creating an input file by automatically formatting a set of user-supplied inputs for the CARE III system. CARE3MENU provides detailed on-line help for most of its screen formats. The reliability model input process is divided into sections using menu-driven screen displays. Each stage, or set of identical modules comprising the model, must be identified and described in terms of number of modules, minimum number of modules for stage operation, and critical fault threshold. The fault-handling and fault-occurrence models are detailed in several screens by parameters such as transition rates, propagation and detection densities, Weibull or exponential characteristics, and model accuracy. The system fault tree and critical-pairs fault tree screens are used to define the governing logic and to identify modules affected by component failures. Additional CARE3MENU screens prompt the user for output options and run-time control values such as mission time and truncation values. There are fourteen major screens, many with default values and HELP options. The documentation includes: (1) a user's guide with several examples of CARE III models, the dialog required to input them to CARE3MENU, and the output files created; and (2) a maintenance manual for assistance in changing the HELP files and modifying any of the menu formats or contents. CARE3MENU is written in FORTRAN 77 for interactive execution and has been implemented on a DEC VAX series computer operating under VMS. This program was developed in 1985.
NASA Astrophysics Data System (ADS)
Martínez-Martínez, José Miguel; Booth-Rea, Guillermo; Azañón, José Miguel; Torcal, Federico
2006-08-01
Pliocene and Quaternary tectonic structures mainly consisting of segmented northwest-southeast normal faults, and associated seismicity in the central Betics do not agree with the transpressive tectonic nature of the Africa-Eurasia plate boundary in the Ibero-Maghrebian region. Active extensional deformation here is heterogeneous, individual segmented normal faults being linked by relay ramps and transfer faults, including oblique-slip and both dextral and sinistral strike-slip faults. Normal faults extend the hanging wall of an extensional detachment that is the active segment of a complex system of successive WSW-directed extensional detachments which have thinned the Betic upper crust since middle Miocene. Two areas, which are connected by an active 40-km long dextral strike-slip transfer fault zone, concentrate present-day extension. Both the seismicity distribution and focal mechanisms agree with the position and regime of the observed faults. The activity of the transfer zone during middle Miocene to present implies a mode of extension which must have remained substantially the same over the entire period. Thus, the mechanisms driving extension should still be operating. Both the westward migration of the extensional loci and the high asymmetry of the extensional systems can be related to edge delamination below the south Iberian margin coupled with roll-back under the Alborán Sea; involving the asymmetric westward inflow of asthenospheric material under the margins.
Magnetotelluric Studies of Fault Zones Surrounding the 2016 Pawnee, Oklahoma Earthquake
NASA Astrophysics Data System (ADS)
Evans, R. L.; Key, K.; Atekwana, E. A.
2016-12-01
Since 2008, there has been a dramatic increase in earthquake activity in the central United States in association with major oil and gas operations. Oklahoma is now considered one of the most seismically active states. Although seismic networks are able to detect activity and map its locus, they are unable to image the distribution of fluids in the fault responsible for triggering seismicity. Electrical geophysical methods are ideally suited to image fluid-bearing faults since the injected waste-waters are highly saline and hence have a high electrical conductivity. To date, no study has imaged the fluids in the faults in Oklahoma and made a direct link to the seismicity. The 2016 M5.8 Pawnee, Oklahoma earthquake provides an unprecedented opportunity for scientists to provide that link. Several injection wells are located within a 20 km radius of the epicenter, and studies have suggested that injection of fluids in high-volume wells can trigger earthquakes as far away as 30 km. During late October to early November 2016, we are collecting magnetotelluric (MT) data with the aim of constraining the distribution of fluids in the fault zone. The MT technique uses naturally occurring electric and magnetic fields measured at Earth's surface to measure conductivity structure. We plan to carry out a series of short two-dimensional (2D) profiles of wideband MT acquisition located through areas where the fault recently ruptured and seismic activity is concentrated, and also across the faults in the vicinity that did not rupture. The integration of our results and ongoing seismic studies will lead to a better understanding of the links between fluid injection and seismicity.
TU-AB-BRD-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunscombe, P.
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program.
Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are all about; and (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
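The basic quantitative step behind fault tree analysis can be sketched in a few lines. The gate structure and probabilities below are hypothetical illustrations, not Task Group 100 data; independence of basic events is assumed throughout.

```python
# A fault tree node is either a basic-event probability (float) or a gate:
# ("AND" | "OR", [child nodes]). Basic events are assumed independent.
def top_event_probability(node):
    """Evaluate the tree bottom-up: AND multiplies probabilities,
    OR combines them as 1 - prod(1 - p)."""
    if isinstance(node, float):
        return node
    gate, children = node
    ps = [top_event_probability(c) for c in children]
    prob = 1.0
    if gate == "AND":
        for p in ps:
            prob *= p
        return prob
    for p in ps:
        prob *= 1.0 - p
    return 1.0 - prob

# Hypothetical mistreatment tree: a planning error that the review misses,
# OR a delivery-device fault coinciding with an interlock failure.
tree = ("OR", [
    ("AND", [0.01, 0.1]),     # planning error, review fails to catch it
    ("AND", [0.001, 0.05]),   # device fault, interlock fails
])
print(top_event_probability(tree))  # about 0.00105
```

The AND gates show why layered controls matter: the top-event probability is dominated by whichever pathway has the weakest combination of barriers, which is exactly what the prospective ranking in the session is meant to expose.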
Neotectonic inversion of the Hindu Kush-Pamir mountain region
Ruleman, C.A.
2011-01-01
The Hindu Kush-Pamir region of southern Asia is one of Earth's most rapidly deforming regions and it is poorly understood. This study develops a kinematic model based on active faulting in this part of the Trans-Himalayan orogenic belt. Previous studies have described north-verging thrust faults and some strike-slip faults, reflected in the northward-convex geomorphologic and structural grain of the Pamir Mountains. However, this structural analysis suggests that contemporary tectonics are changing the style of deformation from north-verging thrusts formed during the initial contraction of the Himalayan orogeny to south-verging thrusts and a series of northwest-trending, dextral strike-slip faults in the modern transpressional regime. These northwest-trending fault zones are linked to the major right-lateral Karakoram fault, located to the east, as synthetic, conjugate shears that form a right-stepping en echelon pattern. Northwest-trending lineaments with dextral displacements extend continuously westward across the Hindu Kush-Pamir region indicating a pattern of systematic shearing of multiple blocks to the northwest as the deformation effects from Indian plate collision expands to the north-northwest. Locally, east-northeast- and northwest-trending faults display sinistral and dextral displacement, respectively, yielding conjugate shear pairs developed in a northwest-southeast compressional stress field. Geodetic measurements and focal mechanisms from historical seismicity support these surficial, tectono-morphic observations. The conjugate shear pairs may be structurally linked subsidiary faults and co-seismically slip during single large magnitude (> M7) earthquakes that occur on major south-verging thrust faults. This kinematic model provides a potential context for prehistoric, historic, and future patterns of faulting and earthquakes.
Evolution of triangular topographic facets along active normal faults
NASA Astrophysics Data System (ADS)
Balogun, A.; Dawers, N. H.; Gasparini, N. M.; Giachetta, E.
2011-12-01
Triangular shaped facets, which are generally formed by the erosion of fault-bounded mountain ranges, are arguably one of the most prominent geomorphic features on active normal fault scarps. Some previous studies of triangular facet development have suggested that facet size and slope exhibit a strong linear dependency on fault slip rate, thus linking their growth directly to the kinematics of fault initiation and linkage. Other studies, however, generally conclude that there is no variation in triangular facet geometry (height and slope) with fault slip rate. The landscape of the northeastern Basin and Range Province of the western United States provides an opportunity for addressing this problem, owing to the presence of well-developed triangular facets along active normal faults, as well as spatial variations in fault scale and slip rate. In addition, the Holocene climatic record for this region suggests a dominant tectonic regime, as the faulted landscape shows little evidence of precipitation gradients associated with tectonic uplift. Using GIS-based analyses of USGS 30 m digital elevation models (DEMs) for east-central Idaho and southwestern Montana, we analyze triangular facet geometries along fault systems with varying numbers of constituent segments. This approach allows us to link these geometries with established patterns of along-strike slip rate variation. For this study, we consider major watersheds to include only catchments with upstream and downstream boundaries extending from the drainage divide to the mapped fault trace. In order to maintain consistency in the selection criteria for the analyzed triangular facets, only facets bounded on opposite sides by major watersheds were considered. Our preliminary observations reflect a general along-strike increase in the surface area, average slope, and relief of triangular facets from the tips of a fault towards its center.
We attribute anomalies in the along-strike geometric measurements of the triangular facets to possible locations of fault segment linkage associated with normal fault evolution.
CLEAR: Communications Link Expert Assistance Resource
NASA Technical Reports Server (NTRS)
Hull, Larry G.; Hughes, Peter M.
1987-01-01
Communications Link Expert Assistance Resource (CLEAR) is a real-time fault-diagnosis expert system for the Cosmic Background Explorer (COBE) Mission Operations Room (MOR). The CLEAR expert system is an operational prototype which assists the MOR operator/analyst by isolating and diagnosing faults in the spacecraft communication link with the Tracking and Data Relay Satellite (TDRS) during periods of real-time data acquisition. The mission domain, user requirements, hardware configuration, expert system concept, tool selection, development approach, and system design are discussed. The development approach and system implementation are emphasized. Also discussed are system architecture, tool selection, operation, and future plans.
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is based exclusively on active faults and geodynamic finite element input models, whereas seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters, together with estimates of its slip rate. By default, in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates, and final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters, constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value at the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results, and to what extent.
The results of this comparison show that the deformation model, with its internal variability, together with the choice of the ground motion prediction equations (GMPEs), are the most influential parameters; both have a significant effect on the hazard results. Thus, good knowledge of the existence of active faults and of their geometric and activity characteristics is of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
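The logic-tree aggregation step (a weighted mean hazard plus 5th/95th percentile model limits over the branches) can be sketched as follows. The branch weights and PGA values are invented for illustration, and a real model would carry hundreds of branches per grid cell rather than four.

```python
import numpy as np

# Hypothetical logic-tree branches: (weight, PGA in g at the 10%-in-50-years
# hazard level for one grid cell). Weights must sum to 1.
branches = [(0.4, 0.18), (0.3, 0.22), (0.2, 0.15), (0.1, 0.30)]
weights = np.array([w for w, _ in branches])
pga = np.array([g for _, g in branches])

mean_hazard = float(np.sum(weights * pga))

def weighted_percentile(values, wts, q):
    """q-th percentile of the discrete weighted branch distribution
    (step-function CDF over the sorted branch values)."""
    order = np.argsort(values)
    cdf = np.cumsum(wts[order])
    return float(values[order][np.searchsorted(cdf, q / 100.0)])

low = weighted_percentile(pga, weights, 5)
high = weighted_percentile(pga, weights, 95)
print(mean_hazard, low, high)  # mean with 5th/95th percentile model limits
```

The spread between the percentiles is what the sensitivity analysis interrogates: branches that move the 5th-95th band the most (here, the deformation model and GMPE choices) are the most influential inputs.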
A fault is born: The Landers-Mojave earthquake line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nur, A.; Ron, H.
1993-04-01
The epicenter and the southern portion of the 1992 Landers earthquake rupture fell on an approximately N-S earthquake line, defined both by epicentral locations and by the rupture directions of four previous M>5 earthquakes in the Mojave: the 1947 Manix, 1975 Galway Lake, 1979 Homestead Valley, and 1992 Joshua Tree events. Another M 5.2 earthquake epicenter in 1965 fell on this line where it intersects the Calico fault. In contrast, the northern part of the Landers rupture followed the NW-SE trending Camp Rock and parallel faults, exhibiting an apparently unusual rupture kink. The block tectonic model (Ron et al., 1984), combining fault kinematics and mechanics, explains both the alignment of the events and their ruptures (Nur et al., 1986, 1989), as well as the Landers kink (Nur et al., 1992). Accordingly, the now NW-oriented faults have rotated into their present direction away from the direction of maximum shortening, close to becoming locked, whereas a new fault set, optimally oriented relative to the direction of shortening, is developing to accommodate current crustal deformation. The Mojave-Landers line may thus be a new fault in formation. During the transition of faulting from the old, well-developed, weak but poorly oriented faults to the strong but favorably oriented new ones, both can slip simultaneously, giving rise to kinks such as Landers.
Advanced information processing system
NASA Technical Reports Server (NTRS)
Lala, J. H.
1984-01-01
Design and performance details of the advanced information processing system (AIPS) for fault and damage tolerant data processing on aircraft and spacecraft are presented. AIPS comprises several computers distributed throughout the vehicle and linked by a damage tolerant data bus. Most I/O functions are available to all the computers, which run in a TDMA mode. Each computer performs separate specific tasks in normal operation and assumes other tasks in degraded modes. Redundant software assures that all fault monitoring, logging and reporting are automated, together with control functions. Redundant duplex links and damage-spread limitation provide the fault tolerance. Details of an advanced design of a laboratory-scale proof-of-concept system are described, including functional operations.
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zöller, G.
2012-04-01
As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a fully dynamic description of all relevant physical processes related to earthquake fault systems is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters, and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators, we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as to complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).
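The flavor of a quasi-static fault simulator can be conveyed by a cellular threshold model with nearest-neighbour stress transfer: uniform tectonic loading until the weakest cell fails, followed by a cascade that becomes one synthetic event. The cell count, transfer coefficient, and loading rule below are illustrative assumptions, not the LRE model.

```python
import random

def simulate_catalog(n_cells=100, steps=2000, transfer=0.45, seed=1):
    """Quasi-static cellular fault: load uniformly to the next failure
    threshold, then cascade stress to nearest neighbours; each cascade is
    recorded as one synthetic event of a given size (number of failed cells)."""
    random.seed(seed)
    stress = [random.random() for _ in range(n_cells)]
    catalog = []
    for t in range(steps):
        # load the whole fault until the most-stressed cell reaches threshold 1
        load = 1.0 - max(stress)
        stress = [s + load for s in stress]
        failed = {max(range(n_cells), key=stress.__getitem__)}
        active, size = list(failed), 0
        while active:
            i = active.pop()
            size += 1
            drop = stress[i]
            stress[i] = 0.0
            for j in (i - 1, i + 1):        # transfer part of the stress drop
                if 0 <= j < n_cells:
                    stress[j] += transfer * drop
                    if stress[j] >= 1.0 and j not in failed:
                        failed.add(j)
                        active.append(j)
        catalog.append((t, size))
    return catalog

catalog = simulate_catalog()
sizes = [s for _, s in catalog]
print(max(sizes), sum(sizes) / len(sizes))  # largest and mean event size
```

Because the total transferred fraction (2 x 0.45) is below one, cascades terminate, and the resulting size distribution is the kind of synthetic catalog statistic that can be compared against observed seismicity.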
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.
Risk Analysis Methods for Deepwater Port Oil Transfer Systems
DOT National Transportation Integrated Search
1976-06-01
This report deals with the risk analysis methodology for oil spills from the oil transfer systems in deepwater ports. Failure mode and effect analysis in combination with fault tree analysis are identified as the methods best suited for the assessmen...
Causes of Charcot-Marie-Tooth Disease (CMT)
... t always easy to trace through a family tree: X-linked, autosomal dominant and autosomal recessive. X- ... can be easy to recognize in the family tree. In contrast, X-linked or autosomal recessive types ...
A-Priori Rupture Models for Northern California Type-A Faults
Wills, Chris J.; Weldon, Ray J.; Field, Edward H.
2008-01-01
This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003, referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event, or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data: the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon and Concord-Green Valley, be modeled as Type B faults to be consistent with similarly poorly-known faults statewide.
As a result, the modified segmented models discussed here only concern the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive level of effort by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-priori models. We have modified the WGCEP-2002 models where necessary to match data that were not available or not used by that WGCEP, and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This appendix presents the minor changes needed to accommodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters and earthquake probabilities in the WGCEP-2002 report.
Jiang, Yu; Zhang, Xiaogang; Zhang, Chao; Li, Zhixiong; Sheng, Chenxing
2017-04-01
Numerical modeling has been recognized as an indispensable tool for mechanical fault mechanism analysis. Techniques, ranging from the macro to the nano level, include finite element modeling, boundary element modeling, modular dynamic modeling, nano-scale dynamic modeling, and so forth. This work first reviewed the progress on fault mechanism analysis for gear transmissions from the tribological and dynamic aspects. The literature review indicates that the tribological and dynamic properties have been separately investigated to explore the fault mechanism in gear transmissions. However, very limited work has been done to address the links between the tribological and dynamic properties, and little research has been done for coal cutting machines. For this reason, the tribo-dynamic coupled model was introduced to bridge the gap between the tribological and dynamic models in fault mechanism analysis for gear transmissions in coal cutting machines. The modular dynamic modeling and nano-scale dynamic modeling techniques are expected to establish the links between the tribological and dynamic models. Possible future research directions using the tribo-dynamic coupled model were summarized to provide potential references for researchers in the field.
Automated Generation of Fault Management Artifacts from a Simple System Model
NASA Technical Reports Server (NTRS)
Kennedy, Andrew K.; Day, John C.
2013-01-01
Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
Naive Bayes Bearing Fault Diagnosis Based on Enhanced Independence of Data
Zhang, Nannan; Wu, Lifeng; Yang, Jing; Guan, Yong
2018-01-01
The bearing is the key component of rotating machinery, and its performance directly determines the reliability and safety of the system. Data-based bearing fault diagnosis has become a research hotspot. Naive Bayes (NB), which is based on an independence presumption, is widely used in fault diagnosis. However, bearing data are not completely independent, which reduces the performance of NB algorithms. In order to solve this problem, we propose an NB bearing fault diagnosis method based on enhanced independence of data. The method deals with the data vector from two aspects: the attribute features and the sample dimension. After processing, the limitation that the independence hypothesis imposes on NB classification is reduced. First, we extract the statistical characteristics of the original bearing signal effectively. Then, the Decision Tree algorithm is used to select the important features of the time domain signal, and low-correlation features are selected. Next, the Selective Support Vector Machine (SSVM) is used to prune the dimension data and remove redundant vectors. Finally, we use NB to diagnose the fault with the low-correlation data. The experimental results show that the independence enhancement of data is effective for bearing fault diagnosis. PMID:29401730
NASA Astrophysics Data System (ADS)
Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu
2016-01-01
This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS) and improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of time series for a range of scales based on fuzzy entropy. Besides, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically fulfill the fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize the different categories and severities of rolling bearing faults.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumsdaine, Andrew
2013-03-08
The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults has typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.
Achieving Agreement in Three Rounds With Bounded-Byzantine Faults
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2015-01-01
A three-round algorithm is presented that guarantees agreement in a system of K nodes, where K ≥ 3F + 1 and F is the maximum number of simultaneous faults in the network, provided each faulty node induces no more than F faults and each good node experiences no more than F faults. The algorithm is based on the Oral Messages algorithm of Lamport et al., is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
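The classic Oral Messages algorithm of Lamport et al., on which the abstract's algorithm builds, can be sketched as a short recursive simulation. This is a minimal illustration, not the paper's three-round algorithm; it assumes binary values and one simple adversary strategy in which traitorous senders invert every bit they relay.

```python
from collections import Counter

def majority(votes):
    """Majority value of a list; ties fall back to a fixed default (0)."""
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return 0
    return counts[0][0]

def om(m, commander, lieutenants, value, traitors):
    """Lamport's OM(m) for binary orders: each lieutenant's decided value.

    Traitors flip every bit they send -- one possible adversary, chosen
    here only to keep the sketch deterministic and testable.
    """
    sent = (1 - value) if commander in traitors else value
    received = {l: sent for l in lieutenants}
    if m == 0:
        return received
    decided = {}
    for l in lieutenants:
        votes = [received[l]]
        for relay in lieutenants:
            if relay == l:
                continue
            rest = [x for x in lieutenants if x != relay]
            # relay re-broadcasts the value it received, as OM(m - 1)
            votes.append(om(m - 1, relay, rest, received[relay], traitors)[l])
        decided[l] = majority(votes)
    return decided

# 4 nodes, 1 traitorous lieutenant: OM(1) lets the loyal lieutenants agree
# on the loyal commander's value, matching the K >= 3F + 1 bound with F = 1.
result = om(1, 0, [1, 2, 3], 1, traitors={3})
```

With a traitorous commander instead, the loyal lieutenants still converge on a common (if arbitrary) value, which is the agreement property the bound protects.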
Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T
2018-03-05
Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
NASA Astrophysics Data System (ADS)
Krechowicz, Maria
2017-10-01
Nowadays, one of the characteristic features of the construction industry is the increased complexity of a growing number of projects. Almost each construction project is unique, with its project-specific purpose, its own structural complexity, owner's expectations, ground conditions unique to a certain location, and its own dynamics. Failure costs and costs resulting from unforeseen problems in complex construction projects are very high. Project complexity drivers pose many vulnerabilities to the successful completion of a number of projects. This paper discusses the process of effective risk management in complex construction projects in which renewable energy sources were used, on the example of the realization phase of the ENERGIS teaching-laboratory building, from the point of view of DORBUD S.A., its general contractor. This paper suggests a new approach to risk management for complex construction projects in which renewable energy sources were applied. The risk management process was divided into six stages: gathering information, identification of the top critical project risks resulting from the project complexity, construction of a fault tree for each top critical risk, logical analysis of the fault tree, quantitative risk assessment applying fuzzy logic, and development of a risk response strategy. A new methodology for the qualitative and quantitative assessment of top critical risks in complex construction projects was developed. Risk assessment was carried out applying Fuzzy Fault Tree analysis on the example of one top critical risk. Application of fuzzy set theory to the proposed model made it possible to decrease uncertainty and to avoid the difficulty, common in expert risk assessment, of obtaining crisp values for basic-event probabilities, with the objective of giving an exact risk score for the probability of each unwanted event.
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment failures, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied on the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
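The two evaluation routes the abstract mentions, minimal cut sets and Monte Carlo simulation, can be sketched as follows for independent basic events. The cut sets and probabilities below are hypothetical illustrations, not values from the Tehran study.

```python
import itertools
import random

def top_event_exact(cut_sets, p):
    """Top-event probability by inclusion-exclusion over minimal cut sets,
    assuming the basic events are mutually independent."""
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in itertools.combinations(cut_sets, r):
            events = set().union(*combo)
            term = 1.0
            for e in events:
                term *= p[e]
            total += (-1) ** (r + 1) * term
    return total

def top_event_mc(cut_sets, p, n=200_000, seed=1):
    """Monte Carlo estimate: sample basic events each trial and check
    whether any minimal cut set is fully realized."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        occurred = {e for e in p if rng.random() < p[e]}
        if any(cs <= occurred for cs in cut_sets):
            hits += 1
    return hits / n

# Hypothetical basic events and minimal cut sets
p = {"A": 0.1, "B": 0.2, "C": 0.05}
cuts = [{"A", "B"}, {"C"}]
exact = top_event_exact(cuts, p)   # 0.02 + 0.05 - 0.001 = 0.069
mc = top_event_mc(cuts, p)
```

The Monte Carlo estimate converges on the inclusion-exclusion value, which is why studies like this one can cross-check the two.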
Jetter, J J; Forte, R; Rubenstein, R
2001-02-01
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servicing. The number of refrigerant exposures of service technicians was estimated to be 135,000 per year. Exposures of vehicle occupants can occur when refrigerant enters passenger compartments due to sudden leaks in air-conditioning systems, leaks following servicing, or leaks caused by collisions. The total number of exposures of vehicle occupants was estimated to be 3,600 per year. The largest number of exposures of vehicle occupants was estimated for leaks caused by collisions, and the second largest number of exposures was estimated for leaks following servicing. Estimates used in the fault tree analysis were based on a survey of automotive air-conditioning service shops, the best available data from the literature, and the engineering judgement of the authors and expert reviewers from the Society of Automotive Engineers Interior Climate Control Standards Committee. Exposure concentrations and durations were estimated and compared with toxicity data for refrigerants currently used in automotive air conditioners. Uncertainty was high for the estimated numbers of exposures, exposure concentrations, and exposure durations. Uncertainty could be reduced in the future by conducting more extensive surveys, measurements of refrigerant concentrations, and exposure monitoring. Nevertheless, the analysis indicated that the risk of exposure of service technicians and vehicle occupants is significant, and it is recommended that no refrigerant that is substantially more toxic than currently available substitutes be accepted for use in vehicle air-conditioning systems, absent a means of mitigating exposure.
Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof
2009-04-01
Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
Naghibi, Seyed Amir; Pourghasemi, Hamid Reza; Dixon, Barnali
2016-01-01
Groundwater is considered one of the most valuable fresh water resources. The main objective of this study was to produce groundwater spring potential maps in the Koohrang Watershed, Chaharmahal-e-Bakhtiari Province, Iran, using three machine learning models: boosted regression tree (BRT), classification and regression tree (CART), and random forest (RF). Thirteen hydrological-geological-physiographical (HGP) factors that influence locations of springs were considered in this research. These factors include slope degree, slope aspect, altitude, topographic wetness index (TWI), slope length (LS), plan curvature, profile curvature, distance to rivers, distance to faults, lithology, land use, drainage density, and fault density. Subsequently, groundwater spring potential was modeled and mapped using the CART, RF, and BRT algorithms. The predicted results from the three models were validated using the receiver operating characteristics curve (ROC). Of the 864 springs identified, 605 (≈70 %) locations were used for the spring potential mapping, while the remaining 259 (≈30 %) springs were used for the model validation. The area under the curve (AUC) for the BRT model was calculated as 0.8103, and for CART and RF the AUC values were 0.7870 and 0.7119, respectively. Therefore, it was concluded that the BRT model produced the best prediction results when predicting locations of springs, followed by the CART and RF models, respectively. Geospatially integrated BRT, CART, and RF methods proved to be useful in generating the spring potential map (SPM) with reasonable accuracy.
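The AUC validation step used above reduces to the Mann-Whitney statistic: the probability that a randomly chosen spring location outscores a randomly chosen non-spring location. A minimal sketch with made-up scores (not the study's data):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve computed as the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs the model ranks correctly,
    counting ties as half-correct."""
    wins = ties = 0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1
            elif p == q:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical model outputs at held-out validation locations
springs = [0.9, 0.8, 0.4]       # positive sites (known springs)
non_springs = [0.5, 0.3, 0.2]   # negative sites
score = auc(springs, non_springs)   # 8 of 9 pairs ranked correctly
```

An AUC of 0.81, as reported for the BRT model, means roughly 81% of such pairs would be ranked correctly.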
Subsurface Tectonics and Pingos of Northern Alaska
NASA Astrophysics Data System (ADS)
Skirvin, S.; Casavant, R.; Burr, D.
2008-12-01
We describe preliminary results of a two-phase study that investigated links between subsurface structural and stratigraphic controls, and distribution of hydrostatic pingos on the central coastal plain of Arctic Alaska. Our 2300 km2 study area is underlain by a complete petroleum system that supports gas, oil and water production from 3 of the largest oil fields in North America. In addition, gas hydrate deposits exist in this area within and just below the permafrost interval at depths of 600 to 1800 feet below sea level. Phase 1 of the study compared locations of subsurface faults and pingos for evidence of linkages between faulting and pingo genesis and distribution. Several hundred discrete fault features were digitized from published data and georeferenced in a GIS database. Fault types were determined by geometry and sense of slip derived from well log and seismic maps. More than 200 pingos and surface sediment type associated with their locations were digitized from regional surficial geology maps within an area that included wire line and seismic data coverage. Beneath the pingos lies an assemblage of high-angle normal and transtensional faults that trend NNE and NW; subsidiary trends are EW and NNW. Quaternary fault reactivation is evidenced by faults that displaced strata at depths exceeding 3000 meters below sea level and intersect near-surface units. Unpublished seismic images and cross-section analysis support this interpretation. Kinematics and distribution of reactivated faults are linked to polyphase deformational history of the region that includes Mesozoic rift events, succeeded by crustal shortening and uplift of the Brooks Range to the south, and differential subsidence and segmentation of a related foreland basin margin beneath the study area. Upward fluid migration, a normal process in basin formation and fault reactivation, may play yet unrecognized roles in the genesis (e.g. fluid charging) of pingos and groundwater hydrology. 
Preliminary analysis shows that more than half the pingos occur within 150 m of the vertical projections of subsurface fault plane traces. In a previous, unpublished geostatistical study, comparison of pingo and random locations indicated a non-random NE-trending alignment of pingos. This trend in particular matches the dominant orientation of fault sets that are linked to the most recent tectonic deformation of the region. A concurrent Phase 2 of the study examines the potential role of near-surface stratigraphic units in regard to both pingos and faults. Both surface and subsurface coarse-grained deposits across the region are often controlled by fault structures; this study is the first to assess any relationship between reservoir rocks and pingo locations. Cross-sections were constructed from well log data to depths of 100 meters. Subsurface elements were compared with surface features. Although some studies have linked fine-grained surface sediments with pingo occurrence, our analysis hints that coarse-grained sediments underlie pingos and may be related to near-surface fluid transmissivity, as suggested by other researchers. We also investigated pingo occurrence in relationship to upthrown or downthrown fault blocks that vary in the degree of deformation and fluid transmission. Results will guide a proposed pingo drilling project to test linkages between pingos, subsurface geology, hydrology, and petroleum systems. Findings from this study could aid research and planning for field exploration of similar settings on Earth and Mars.
Seismic Hazards of the Upper Mississippi Embayment
1998-01-01
displacement of the Mississippi River; uplift of the Lake County uplift, Tiptonville dome, Blytheville arch; subsidence of Reelfoot Lake, Big Lake, and Lake St...slip faults within the Blytheville arch and western margin of the Reelfoot rift that are linked by the southwest-dipping Reelfoot reverse fault. The...Bootheel lineament and back thrusts of the Reelfoot fault may also have slipped in 1811-12. Geomorphic effects of the 1811-12 sequence include
Risk management of key issues of FPSO
NASA Astrophysics Data System (ADS)
Sun, Liping; Sun, Hai
2012-12-01
Risk analysis of key systems has become a growing topic of late because of the development of offshore structures. Equipment failures of the offloading system and fire accidents were analyzed based on the features of the floating production, storage and offloading (FPSO) unit. Fault tree analysis (FTA) and failure modes and effects analysis (FMEA) methods were examined based on information already researched in modules of Relex Reliability Studio (RRS). Because of the shortage of failure cases and statistical data, equipment failures were also analyzed qualitatively by establishing a fault tree and a Boolean structure function, and risk control measures were examined. Failure modes of fire accidents were classified according to the different areas of fire occurrence during the FMEA process, using risk priority number (RPN) methods to evaluate their severity rank. The qualitative FTA gave basic insight into how the failure modes of FPSO offloading form, and the fire FMEA gave priorities and suggested processes. The research has practical importance for the security analysis problems of FPSOs.
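The RPN ranking used in the FMEA step is simply the product of three ordinal ratings. A sketch with hypothetical FPSO fire-area failure modes and ratings (none of these values come from the paper):

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: each factor conventionally rated 1-10."""
    return severity * occurrence * detection

# Hypothetical failure modes with (severity, occurrence, detection) ratings
modes = {
    "offloading hose rupture": (9, 2, 3),
    "pump room gas leak": (7, 5, 4),
    "galley equipment fire": (5, 4, 6),
}
ranked = sorted(modes, key=lambda m: rpn(*modes[m]), reverse=True)
# Highest RPN first: pump room gas leak (140), galley (120), hose (54)
```

Note how a high-severity mode (the rupture) can still rank last once occurrence and detectability are factored in, which is the point of RPN-based prioritization.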
Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand
2018-05-09
This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), the probabilities of all the basic events (BEs) must be available when the fault tree is drawn. When such failure data are scarce, expert judgment can serve as an alternative. The fuzzy analytical hierarchy process, as a standard technique, is used to give a specific weight to each expert, and fuzzy set theory is employed to aggregate the expert opinions. In this way, the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost and benefit), an importance measurement technique and modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
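The aggregation step described, weighting each expert's fuzzy estimate and collapsing it to a single score, can be sketched with triangular fuzzy numbers. The expert opinions and weights below are illustrative; the weights are assumed to come from the fuzzy AHP step, and the further conversion of the defuzzified score into a failure probability is omitted.

```python
def aggregate(opinions, weights):
    """Weighted average of triangular fuzzy numbers (l, m, u)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "expert weights must sum to 1"
    return tuple(
        sum(w * o[i] for o, w in zip(opinions, weights)) for i in range(3)
    )

def defuzzify(tfn):
    """Centre-of-area defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3.0

# Two experts' fuzzy estimates of one basic event's possibility
experts = [(0.10, 0.20, 0.30), (0.20, 0.30, 0.40)]
weights = [0.6, 0.4]   # hypothetical weights, assumed from fuzzy AHP
fuzzy = aggregate(experts, weights)    # (0.14, 0.24, 0.34)
score = defuzzify(fuzzy)               # 0.24
```

The heavier-weighted expert pulls the aggregate toward their estimate, which is exactly the effect the AHP weighting is meant to have.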
NASA Astrophysics Data System (ADS)
Guan, Yifeng; Zhao, Jie; Shi, Tengfei; Zhu, Peipei
2016-09-01
In recent years, China's increased interest in environmental protection has led to the promotion of energy-efficient dual fuel (diesel/natural gas) ships on Chinese inland rivers. Natural gas as a ship fuel may pose dangers of fire and explosion if a gas leak occurs. If explosions or fires occur in the engine rooms of a ship, heavy damage and losses will be incurred. In this paper, a fault tree model is presented that considers both fires and explosions in a dual fuel ship; in this model, the dual fuel engine rooms are the top events. All the basic events, along with the minimum cut sets, are obtained through the analysis. The primary factors that affect accidents involving fires and explosions are determined by calculating the structural importance of the basic events. According to these results, corresponding measures are proposed to ensure and improve the safety and reliability of Chinese inland dual fuel ships.
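The structural importance mentioned above can be computed by enumerating the states of the fault tree's structure function: an event is more important the more often flipping it alone flips the top event. A minimal sketch with a hypothetical (not the paper's) engine-room fire tree:

```python
from itertools import product

def structural_importance(top, events):
    """Birnbaum-style structural importance: for each basic event, the
    fraction of states of the remaining events in which the top event
    is critical on that event (flipping it flips the outcome)."""
    imp = {}
    for e in events:
        others = [x for x in events if x != e]
        critical = 0
        for bits in product((0, 1), repeat=len(others)):
            state = dict(zip(others, bits))
            state[e] = 1
            up = top(state)
            state[e] = 0
            if up != top(state):
                critical += 1
        imp[e] = critical / 2 ** len(others)
    return imp

# Hypothetical top event: fire = gas_leak AND (spark OR hot_surface)
events = ["gas_leak", "spark", "hot_surface"]

def top(s):
    return s["gas_leak"] and (s["spark"] or s["hot_surface"])

ranks = structural_importance(top, events)
# gas_leak is critical in 3 of 4 states (0.75); each ignition source in 1 of 4
```

Ranking the basic events by these values is how such studies single out the components whose failure most directly drives the top event.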
Kingman, D M; Field, W E
2005-11-01
Findings reported by researchers at Illinois State University and Purdue University indicated that since 1980, an average of eight individuals per year have become engulfed and died in farm grain bins in the U.S. and Canada and that all these deaths are significant because they are believed to be preventable. During a recent effort to develop intervention strategies and recommendations for an ASAE farm grain bin safety standard, fault tree analysis (FTA) was utilized to identify contributing factors to engulfments in grain stored in on-farm grain bins. FTA diagrams provided a spatial perspective of the circumstances that occurred prior to engulfment incidents, a perspective never before presented in other hazard analyses. The FTA also demonstrated relationships and interrelationships of the contributing factors. FTA is a useful tool that should be applied more often in agricultural incident investigations to assist in the more complete understanding of the problem studied.
Fault tree analysis for data-loss in long-term monitoring networks.
Dirksen, J; ten Veldhuis, J A E; Schilperoort, R P S
2009-01-01
Prevention of data-loss is an important aspect of both the design and the operational phase of monitoring networks, since data-loss can seriously limit the intended information yield. In the literature, limited attention has been paid to the origin of unreliable or doubtful data from monitoring networks. Better understanding of the causes of data-loss points toward effective solutions for increasing data yield. This paper introduces FTA as a diagnostic tool to systematically deduce the causes of data-loss in long-term monitoring networks in urban drainage systems. To illustrate the effectiveness of FTA, a fault tree is developed for a monitoring network and FTA is applied to analyze the data yield of a UV/VIS submersible spectrophotometer. Although some of the causes of data-loss cannot be recovered because the historical database of metadata has been updated infrequently, the example shows that FTA is still a powerful tool for analyzing the causes of data-loss and provides useful information on effective data-loss prevention.
Accurate reliability analysis method for quantum-dot cellular automata circuits
NASA Astrophysics Data System (ADS)
Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo
2015-10-01
The probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of the novel field-coupled nanoelectronic device known as quantum-dot cellular automata (QCA), and it is difficult to obtain accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices according to different input signals. Binary decision diagrams (BDDs) are then used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly, and the crucial components of a circuit can be located precisely based on the importance values (IVs) of the components. This method thus contributes to the construction of reliable QCA circuits.
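As an illustrative aside (not the models from the abstract above), the bottom-up probability calculation underlying any quantitative fault-tree evaluation can be sketched in a few lines, assuming independent basic events; the gate structure and failure probabilities here are hypothetical.

```python
# Illustrative fault-tree evaluation with independent basic events.
# Gate rules: AND -> product of input probabilities;
#             OR  -> 1 - product of (1 - p) over inputs.

from functools import reduce

def p_and(*probs):
    """Probability that all independent input events occur."""
    return reduce(lambda a, b: a * b, probs, 1.0)

def p_or(*probs):
    """Probability that at least one independent input event occurs."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), probs, 1.0)

# Hypothetical basic-event failure probabilities for a small gate network:
# top = OR(AND(e1, e2), e3)
e1, e2, e3 = 0.01, 0.02, 0.005
top = p_or(p_and(e1, e2), e3)
print(round(top, 6))  # 1 - (1 - 0.0002) * (1 - 0.005) = 0.005199
```

A BDD-based evaluation, as used in the paper, computes the same top-event probability but handles shared basic events exactly rather than assuming independence between gate inputs.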
Betweenness centrality in a weighted network
NASA Astrophysics Data System (ADS)
Wang, Huijuan; Hernandez, Javier Martin; van Mieghem, Piet
2008-04-01
When transport in networks follows the shortest paths, the union of all shortest path trees G∪SPT can be regarded as the “transport overlay network.” Overlay networks such as peer-to-peer networks or virtual private networks can be considered as a subgraph of G∪SPT . The traffic through the network is examined by the betweenness Bl of links in the overlay G∪SPT . The strength of disorder can be controlled by, e.g., tuning the extreme value index α of the independent and identically distributed polynomial link weights. In the strong disorder limit (α→0) , all transport flows over a critical backbone, the minimum spanning tree (MST). We investigate the betweenness distributions of wide classes of trees, such as the MSTs of well-known network models and of various real-world complex networks. All these trees with different degree distributions (e.g., uniform, exponential, or power law) are found to possess a power law betweenness distribution Pr[Bl=j] ~ j^(-c) . The exponent c appears to be positively correlated with the degree variance of the tree and to be insensitive to the size N of the network. In the weak disorder regime, transport in the network traverses many links. We show that a link with smaller link weight tends to carry more traffic. This negative correlation between link weight and betweenness depends on α and the structure of the underlying topology.
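Because paths in a tree are unique, the link betweenness studied above has a simple closed form: removing a link splits the tree into components of sizes n1 and n2, and exactly n1*n2 node pairs route over that link. A minimal sketch (function names and the example tree are ours, not the paper's):

```python
# For a tree, the (unnormalized) betweenness of a link equals n1 * n2:
# the number of node pairs whose unique path crosses it, where n1 and n2
# are the sizes of the two components left after removing the link.

from collections import defaultdict

def link_betweenness(tree_edges, n):
    """tree_edges: list of (u, v) pairs forming a tree on n nodes."""
    adj = defaultdict(list)
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)

    def component_size(root, blocked):
        # Count nodes reachable from `root` without visiting `blocked`;
        # in a tree this is the component containing `root` after the
        # link (root, blocked) is removed.
        seen, stack = {root}, [root]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y != blocked and y not in seen:
                    seen.add(y)
                    stack.append(y)
        return len(seen)

    return {(u, v): (s := component_size(u, v)) * (n - s)
            for u, v in tree_edges}

# Path tree 0-1-2-3: the middle link carries the most pairs.
b = link_betweenness([(0, 1), (1, 2), (2, 3)], 4)
print(b[(1, 2)])  # 2 * 2 = 4 pairs
```

On a path of N nodes this reproduces the familiar pattern that central links carry O(N^2) pairs while leaf links carry only N-1.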
Study of Stand-Alone Microgrid under Condition of Faults on Distribution Line
NASA Astrophysics Data System (ADS)
Malla, S. G.; Bhende, C. N.
2014-10-01
The behavior of a stand-alone microgrid is analyzed under the condition of faults on distribution feeders. Since the battery is not able to maintain the dc-link voltage within limits during a fault, a resistive dump load control is presented to do so. An inverter control is proposed to maintain balanced voltages at the PCC under unbalanced load conditions and to reduce the voltage unbalance factor (VUF) at load points. The proposed inverter control also protects itself from high fault currents. The existing maximum power point tracking (MPPT) algorithm is modified to limit the speed of the generator during a fault. Extensive simulation results using MATLAB/SIMULINK establish that the performance of the controllers is quite satisfactory under different fault conditions as well as unbalanced load conditions.
Achieving Agreement in Three Rounds with Bounded-Byzantine Faults
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2017-01-01
A three-round algorithm is presented that guarantees agreement in a system of K greater than or equal to 3F+1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Messages algorithm of Lamport, Shostak, and Pease, is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
Providing the full DDF link protection for bus-connected SIEPON based system architecture
NASA Astrophysics Data System (ADS)
Hwang, I.-Shyan; Pakpahan, Andrew Fernando; Liem, Andrew Tanny; Nikoukar, AliAkbar
2016-09-01
Currently a massive amount of traffic is delivered every second through EPON systems, one of the prominent access network technologies for delivering the next-generation network. It is therefore vital to keep the EPON optical distribution network (ODN) working by providing the necessary protection mechanisms in the deployed devices; otherwise, failures will cause great losses for both network operators and business customers. In this paper, we propose a bus-connected architecture to protect against and recover from distribution drop fiber (DDF) link faults or transceiver failures at ONU(s) in a SIEPON system. The proposed architecture is cost-effective, delivers high fault tolerance in handling multiple DDF faults, and provides flexibility in choosing backup ONU assignments. Simulation results show that the proposed architecture provides reliability and maintains quality of service (QoS) performance in terms of mean packet delay, system throughput, packet loss, and EF jitter when DDF link failures occur.
Treelink: data integration, clustering and visualization of phylogenetic trees.
Allende, Christian; Sohn, Erik; Little, Cedric
2015-12-29
Phylogenetic trees are central to a wide range of biological studies. In many of these studies, tree nodes need to be associated with a variety of attributes. For example, in studies concerned with viral relationships, tree nodes are associated with epidemiological information, such as location, age and subtype. Gene trees used in comparative genomics are usually linked with taxonomic information, such as functional annotations and events. A wide variety of tree visualization and annotation tools have been developed in the past; however, none of them is intended for an integrative and comparative analysis. Treelink is a platform-independent software for linking datasets and sequence files to phylogenetic trees. The application allows an automated integration of datasets to trees for operations such as classifying a tree based on a field or showing the distribution of selected data attributes in branches and leaves. Genomic and proteomic sequences can also be linked to the tree and extracted from internal and external nodes. A novel clustering algorithm to simplify trees and display the most divergent clades was also developed, where validation can be achieved using the data integration and classification function. Integrated geographical information allows ancestral character reconstruction for phylogeographic plotting based on parsimony and likelihood algorithms. Our software can successfully integrate phylogenetic trees with different data sources, and perform operations to differentiate and visualize those differences within a tree. File support includes the most popular formats, such as Newick and CSV. Exporting visualizations as images, cluster outputs and genomic sequences is supported. Treelink is available as a web and desktop application at http://www.treelinkapp.com .
A Linked Model for Simulating Stand Development and Growth Processes of Loblolly Pine
V. Clark Baldwin; Phillip M. Dougherty; Harold E. Burkhart
1998-01-01
Linking models of different scales (e.g., process, tree-stand-ecosystem) is essential for furthering our understanding of stand, climatic, and edaphic effects on tree growth and forest productivity. Moreover, linking existing models that differ in scale and levels of resolution quickly identifies knowledge gaps in information required to scale from one level to another...
Ding, Ming; Zhu, Qianlong
2016-01-01
Hardware protection and control action are two kinds of low voltage ride-through technical proposals widely used in a permanent magnet synchronous generator (PMSG). This paper proposes an innovative clustering concept for the equivalent modeling of a PMSG-based wind power plant (WPP), in which the impacts of both the chopper protection and the coordinated control of active and reactive powers are taken into account. First, the post-fault DC link voltage is selected as a concentrated expression of unit parameters, incoming wind, and electrical distance to a fault point to reflect the transient characteristics of PMSGs. Second, we provide an effective method for calculating the post-fault DC link voltage based on the pre-fault wind energy and the terminal voltage dip. Third, PMSGs are divided into groups by analyzing the calculated DC link voltages, without any clustering algorithm. Finally, PMSGs of the same group are aggregated into one rescaled PMSG to realize the transient equivalent modeling of the PMSG-based WPP. Using the DIgSILENT PowerFactory simulation platform, the efficiency and accuracy of the proposed equivalent model are tested against the traditional equivalent WPP and the detailed WPP. The simulation results show the proposed equivalent model can be used to analyze offline electromechanical transients in power systems.
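The grouping step described above, dividing units by their calculated post-fault DC-link voltages without any clustering algorithm, can be sketched with simple fixed voltage bands; the band edges and per-unit values below are hypothetical illustrations, not values from the paper:

```python
# Sketch: group PMSGs by post-fault DC-link voltage using fixed
# voltage bands instead of a clustering algorithm (hypothetical bands).

def group_by_dc_voltage(voltages, bands=(1.05, 1.15, 1.25)):
    """Assign each unit a group index according to which band its
    per-unit post-fault DC-link voltage falls into."""
    groups = {}
    for unit, v in voltages.items():
        idx = sum(v > b for b in bands)   # index in 0..len(bands)
        groups.setdefault(idx, []).append(unit)
    return groups

# Hypothetical post-fault DC-link voltages (per unit) for four PMSGs:
dc = {"G1": 1.02, "G2": 1.10, "G3": 1.12, "G4": 1.30}
print(group_by_dc_voltage(dc))  # {0: ['G1'], 1: ['G2', 'G3'], 3: ['G4']}
```

Each resulting group would then be aggregated into one rescaled equivalent PMSG for the plant-level transient model.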
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2012 CFR
2012-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2014 CFR
2014-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix D to Part 236 - Independent Review of Verification and Validation
Code of Federal Regulations, 2010 CFR
2010-10-01
... standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure Mode and Effects... for each product vulnerability cited by the reviewer; (4) Identification of any documentation or... not properly followed; (6) Identification of the software verification and validation procedures, as...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
Toward a Model-Based Approach for Flight System Fault Protection
NASA Technical Reports Server (NTRS)
Day, John; Meakin, Peter; Murray, Alex
2012-01-01
SysML/UML is used to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. The UML Profile mechanism is used, defining Stereotypes to precisely express the concepts of the fault protection (FP) domain; this extends the UML/SysML languages to contain the FP concepts. UML/SysML, along with this profile, is then used to capture FP concepts and relationships in the model and to generate typical FP engineering products (the FMECA, Fault Tree, MRD, and V&V matrices).
Conversion of Questionnaire Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
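As a hedged sketch of this conversion idea: the adjectival categories come from the survey described above, but the numeric values below are illustrative placeholders spaced on a log scale, not the NUREG/CR-1278-derived figures.

```python
# Hypothetical mapping from adjectival performance ratings to basic-event
# failure probabilities, spaced on a log scale as is common in human
# reliability analysis. The actual conversion values used in the MPC&A
# assessment differ; these are placeholders for illustration only.
RATING_TO_PFAIL = {
    "perfect":           1e-5,  # near zero risk of failure
    "well":              1e-4,
    "adequate":          1e-3,
    "needs improvement": 1e-2,
    "not performed":     1.0,   # task in a state of failure
}

def convert(ratings):
    """Map questionnaire answers to numeric failure probabilities
    suitable for use as basic-event inputs to a fault tree."""
    return [RATING_TO_PFAIL[r.lower()] for r in ratings]

probs = convert(["Well", "Adequate", "Needs improvement"])
print(probs)  # [0.0001, 0.001, 0.01]
```

The resulting numbers would feed the basic events of the PRA fault tree, where standard AND/OR gate algebra propagates them to a total system risk.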
NASA Astrophysics Data System (ADS)
Cho, Yong-Sun; Jung, Byung-Ik; Ha, Kyoung-Hun; Choi, Soo-Geun; Park, Hyoung-Min; Choi, Hyo-Sang
To apply the superconducting fault current limiter (SFCL) to the power system, the reliability of the fault-current-limiting operation must be ensured in diverse fault conditions. The SFCL must also be linked to the operation of the high-speed recloser in the power system. In this study, a three-phase transformer-type SFCL, which has a neutral line to improve the simultaneous quench characteristics of superconducting elements, was manufactured to analyze the fault-current-limiting characteristic according to the single, double, and triple line-to-ground faults. The transformer-type SFCL, wherein three-phase windings are connected to one iron core, reduced the burden on the superconducting element as the superconducting element on the sound phase was also quenched in the case of the single line-to-ground fault. In the case of double or triple line-to-ground faults, the flux from the faulted phase winding was interlinked with other faulted or sound phase windings, and the fault-current-limiting rate decreased because the windings of three phases were inductively connected by one iron core.
Preliminary Isostatic Gravity Map of Joshua Tree National Park and Vicinity, Southern California
Langenheim, V.E.; Biehler, Shawn; McPhee, D.K.; McCabe, C.A.; Watt, J.T.; Anderson, M.L.; Chuchel, B.A.; Stoffer, P.
2007-01-01
This isostatic residual gravity map is part of an effort to map the three-dimensional distribution of rocks in Joshua Tree National Park, southern California. This map will serve as a basis for modeling the shape of basins beneath the Park and in adjacent valleys and also for determining the location and geometry of faults within the area. Local spatial variations in the Earth's gravity field, after accounting for variations caused by elevation, terrain, and deep crustal structure, reflect the distribution of densities in the mid- to upper crust. Densities often can be related to rock type, and abrupt spatial changes in density commonly mark lithologic or structural boundaries. High-density basement rocks exposed within the Eastern Transverse Ranges include crystalline rocks that range in age from Proterozoic to Mesozoic and these rocks are generally present in the mountainous areas of the quadrangle. Alluvial sediments, usually located in the valleys, and Tertiary sedimentary rocks are characterized by low densities. However, with increasing depth of burial and age, the densities of these rocks may become indistinguishable from those of basement rocks. Tertiary volcanic rocks are characterized by a wide range of densities, but, on average, are less dense than the pre-Cenozoic basement rocks. Basalt within the Park is as dense as crystalline basement, but is generally thin (less than 100 m thick; e.g., Powell, 2003). Isostatic residual gravity values within the map area range from about 44 mGal over Coachella Valley to about 8 mGal between the Mecca Hills and the Orocopia Mountains. Steep linear gravity gradients are coincident with the traces of several Quaternary strike-slip faults, most notably along the San Andreas Fault bounding the east side of Coachella Valley and east-west-striking, left-lateral faults, such as the Pinto Mountain, Blue Cut, and Chiriaco Faults (Fig. 1). 
Gravity gradients also define concealed basin-bounding faults, such as those beneath the Chuckwalla Valley (e.g. Rotstein and others, 1976). These gradients result from juxtaposing dense basement rocks against thick Cenozoic sedimentary rocks.
Kaltag fault, northern Yukon, Canada: Constraints on evolution of Arctic Alaska
NASA Astrophysics Data System (ADS)
Lane, Larry S.
1992-07-01
The Kaltag fault has been linked to several strike-slip models of evolution of the western Arctic Ocean. Hundreds of kilometres of Cretaceous-Tertiary displacement have been hypothesized in models that emplace Arctic Alaska into its present position by either left- or right-lateral strike slip. However, regional-scale displacement is precluded by new potential-field data. Postulated transform emplacement of Arctic Alaska cannot be accommodated by motion on the Kaltag fault or adjacent structures. The Kaltag fault of the northern Yukon is an eastward extrapolation of its namesake in west-central Alaska; however, a connection cannot be demonstrated. Cretaceous-Tertiary displacement on the Alaskan Kaltag fault is probably accommodated elsewhere.
NASA Astrophysics Data System (ADS)
Possee, D.; Keir, D.; Harmon, N.; Rychert, C.; Rolandone, F.; Leroy, S. D.; Stuart, G. W.; Calais, E.; Boisson, D.; Ulysse, S. M. J.; Guerrier, K.; Momplaisir, R.; Prepetit, C.
2017-12-01
Oblique convergence of the Caribbean and North American plates has partitioned strain across an extensive transpressional fault system that bisects Haiti. Most recently, the 2010 Mw 7.0 earthquake ruptured multiple thrust faults in southern Haiti. However, while the rupture mechanism has been well studied, how these faults are segmented and link to deformation across the plate boundary is still debated. Understanding the link between strain accumulation and faulting in Haiti is also key to future modelling of seismic hazards. To assess seismic activity and fault structures we used data from 31 broadband seismic stations deployed on Haiti for 16 months. Local earthquakes were recorded and hypocentre locations determined using a 1D velocity model. A high-quality subset of the data was then inverted using travel-time tomography for relocated hypocentres and 2D images of Vp and Vp/Vs crustal structure. Earthquake locations reveal two clusters of seismic activity: the first delineates faults associated with the 2010 earthquake and the second shows activity 100 km further east along a thrust fault north of Lake Enriquillo (Dominican Republic). The velocity models show large variations in seismic properties across the plate boundary; shallow low-velocity zones with a 5-8% decrease in Vp and high Vp/Vs ratios of 1.85-1.95 correspond to sedimentary basins that form the low-lying terrain on Haiti. We also image a region with a 4-5% decrease in Vp and an increased Vp/Vs ratio of 1.80-1.85 dipping south to a depth of 20 km beneath southern Haiti. This feature matches the location of a major thrust fault and suggests a substantial damage zone around this fault. Beneath northern Haiti a transition to lower Vp/Vs values of 1.70-1.75 reflects a compositional change from mafic facies, such as the Caribbean large igneous province in the south, to arc magmatic facies associated with the Greater Antilles arc in the north.
Our seismic images are consistent with the fault system across southern Haiti transitioning from a near vertical strike-slip fault in the west to a major south dipping oblique-slip fault in the east. Seismicity in southern Haiti broadly occurs on the thrust/oblique-slip faults. The results show evidence for significant variations in fault zone structure and kinematics along strike of a major transpressional plate boundary.
Quality-based Multimodal Classification Using Tree-Structured Sparsity
2014-03-08
Bahrampour, Soheil; Ray, Asok; Nasrabadi, Nasser M.
Assessing Institutional Ineffectiveness: A Strategy for Improvement.
ERIC Educational Resources Information Center
Cameron, Kim S.
1984-01-01
Based on the theory that institutional change and improvement are motivated more by knowledge of problems than by knowledge of successes, a fault tree analysis technique using Boolean logic for assessing institutional ineffectiveness by determining weaknesses in the system is presented. Advantages and disadvantages of focusing on weakness rather…
Beard, Sue; Campagna, David J.; Anderson, R. Ernest
2010-01-01
The Lake Mead fault system is a northeast-striking, 130-km-long zone of left-slip in the southeast Great Basin, active from before 16 Ma to Quaternary time. The northeast end of the Lake Mead fault system in the Virgin Mountains of southeast Nevada and northwest Arizona forms a partitioned strain field comprising kinematically linked northeast-striking left-lateral faults, north-striking normal faults, and northwest-striking right-lateral faults. Major faults bound large structural blocks whose internal strain reflects their position within a left step-over of the left-lateral faults. Two north-striking large-displacement normal faults, the Lakeside Mine segment of the South Virgin–White Hills detachment fault and the Piedmont fault, intersect the left step-over from the southwest and northeast, respectively. The left step-over in the Lake Mead fault system therefore corresponds to a right-step in the regional normal fault system. Within the left step-over, displacement transfer between the left-lateral faults and linked normal faults occurs near their junctions, where the left-lateral faults become oblique and normal fault displacement decreases away from the junction. Southward from the center of the step-over in the Virgin Mountains, down-to-the-west normal faults splay northward from left-lateral faults, whereas north and east of the center, down-to-the-east normal faults splay southward from left-lateral faults. Minimum slip is thus in the central part of the left step-over, between east-directed slip to the north and west-directed slip to the south.
Attenuation faults parallel or subparallel to bedding cut Lower Paleozoic rocks and are inferred to be early structures that accommodated footwall uplift during the initial stages of extension. Fault-slip data indicate oblique extensional strain within the left step-over in the South Virgin Mountains, manifested as east-west extension; shortening is partitioned between vertical for extension-dominated structural blocks and south-directed for strike-slip faults. Strike-slip faults are oblique to the extension direction due to structural inheritance from NE-striking fabrics in Proterozoic crystalline basement rocks. We hypothesize that (1) during early phases of deformation, oblique extension was partitioned to form east-west–extended domains bounded by left-lateral faults of the Lake Mead fault system, from ca. 16 to 14 Ma. (2) Beginning ca. 13 Ma, increased south-directed shortening impinged on the Virgin Mountains and forced uplift, faulting, and overturning along the north and west side of the Virgin Mountains. (3) By ca. 10 Ma, initiation of the younger Hen Spring to Hamblin Bay fault segment of the Lake Mead fault system accommodated westward tectonic escape, and the focus of south-directed shortening transferred to the western Lake Mead region. The shift from early partitioned oblique extension to south-directed shortening may have resulted from initiation of right-lateral shear of the eastern Walker Lane to the west coupled with left-lateral shear along the eastern margin of the Great Basin.
An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution
NASA Astrophysics Data System (ADS)
Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan
2013-04-01
The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies, including smoothed seismicity approaches. Smoothed seismicity represents an alternative concept that expresses the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e., the wealth of knowledge from geology is ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subduction zones. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities. The first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density). The second is obtained by smoothing fault moment rate contributions; the fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude, assuming that (1) the occurrence of past seismicity is a good proxy to forecast the occurrence of future seismicity and (2) future large-magnitude events occur more likely in the vicinity of known faults.
Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts and provides rates of events in the magnitude range of 5 <= m <= 8.5 for the entire region of interest and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
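The magnitude-dependent linear weighting of the two location densities can be sketched as follows; the weight function here is a hypothetical stand-in, not the weighting actually adopted in the SHARE model:

```python
# Sketch of a magnitude-dependent linear blend of two location densities:
# small magnitudes follow the smoothed-seismicity density, large
# magnitudes follow the smoothed fault-moment-rate density.
# The linear-in-magnitude weight is a hypothetical choice.

def combined_density(d_seismicity, d_faults, m, m_lo=5.0, m_hi=8.5):
    """Blend two normalized location densities over the same spatial bins."""
    w = (m_hi - m) / (m_hi - m_lo)      # weight on the seismicity density
    w = min(max(w, 0.0), 1.0)           # clamp to [0, 1]
    return [w * s + (1.0 - w) * f for s, f in zip(d_seismicity, d_faults)]

# Two toy densities over three spatial bins (each sums to 1):
seis = [0.5, 0.3, 0.2]
faults = [0.1, 0.2, 0.7]
print(combined_density(seis, faults, m=8.5))  # pure fault-based density
```

Scaling the blended density by the estimated a-value then yields an earthquake rate per longitude-latitude-magnitude bin, as described in the abstract.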
Transpressional Rupture Cascade of the 2016 Mw 7.8 Kaikoura Earthquake, New Zealand
NASA Astrophysics Data System (ADS)
Xu, Wenbin; Feng, Guangcai; Meng, Lingsen; Zhang, Ailin; Ampuero, Jean Paul; Bürgmann, Roland; Fang, Lihua
2018-03-01
Large earthquakes often do not occur on a simple planar fault but involve rupture of multiple geometrically complex faults. The 2016 Mw 7.8 Kaikoura earthquake, New Zealand, involved the rupture of at least 21 faults, propagating from southwest to northeast for about 180 km. Here we combine space geodesy and seismology techniques to study subsurface fault geometry, slip distribution, and the kinematics of the rupture. Our finite-fault slip model indicates that the fault motion changes from predominantly right-lateral slip near the epicenter to transpressional slip in the northeast with a maximum coseismic surface displacement of about 10 m near the intersection between the Kekerengu and Papatea faults. Teleseismic back projection imaging shows that rupture speed was overall slow (1.4 km/s) but faster on individual fault segments (approximately 2 km/s) and that the conjugate, oblique-reverse, north striking faults released the largest high-frequency energy. We show that the linking Conway-Charwell faults aided in propagation of rupture across the step over from the Humps fault zone to the Hope fault. Fault slip cascaded along the Jordan Thrust, Kekerengu, and Needles faults, causing stress perturbations that activated two major conjugate faults, the Hundalee and Papatea faults. Our results shed important light on the study of earthquakes and seismic hazard evaluation in geometrically complex fault systems.
Ultrareliable fault-tolerant control systems
NASA Technical Reports Server (NTRS)
Webster, L. D.; Slykhouse, R. A.; Booth, L. A., Jr.; Carson, T. M.; Davis, G. J.; Howard, J. C.
1984-01-01
It is demonstrated that fault-tolerant computer systems based on redundant, independent operation, such as those on the Shuttles, are a viable alternative in fault-tolerant system design. The ultrareliable fault-tolerant control system (UFTCS) was developed and tested in laboratory simulations of a UH-1H helicopter. UFTCS combines asymptotically stable independent control elements in a parallel, cross-linked system environment; static redundancy provides the fault tolerance. The computers are polled, with the results allowing for time-delay channel variations within tight bounds. Based on laboratory and actual flight data for the helicopter, the probability of a system fault during the first 10 hr of flight, given quintuple computer redundancy, was found to be 1 in 290 billion; two weeks of untended Space Station operations would experience a fault probability of 1 in 24 million. Techniques for avoiding channel divergence problems are identified.
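The arithmetic behind static redundancy can be illustrated with a majority-voting calculation: a system of n independent channels faults only when a majority of channels fail. The per-channel fault probability below is a made-up number for illustration, not the flight-qualified rate behind the study's 1-in-290-billion figure:

```python
from math import comb

def p_system_fault(n, k_fail, p):
    """Probability that at least k_fail of n independent channels fail,
    i.e. a majority-voting system with n channels loses its majority.
    p is a hypothetical per-channel fault probability."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_fail, n + 1))

# Quintuple redundancy: the system faults only if 3 or more of 5 channels fail
p = p_system_fault(5, 3, 1e-4)
```

The cubic dependence on the per-channel probability is what makes the quintuple-redundant system's fault probability so small.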
Response to comment on "No late Quaternary strike-slip motion along the northern Karakoram fault"
NASA Astrophysics Data System (ADS)
Robinson, Alexander C.; Owen, Lewis A.; Chen, Jie; Schoenbohm, Lindsay M.; Hedrick, Kathryn A.; Blisniuk, Kimberly; Sharp, Warren D.; Imrecke, Daniel B.; Li, Wenqiao; Yuan, Zhaode; Caffee, Marc W.; Mertz-Kraus, Regina
2016-06-01
In their comment on "No late Quaternary strike-slip motion along the northern Karakoram fault", while Chevalier et al. (2016) do not dispute any of the results or interpretations regarding our observations along the main strand of the northern Karakoram fault, they make several arguments as to why they interpret the Kongur Shan Extensional System (KES) to be kinematically linked to the Karakoram fault. These arguments center around how an "active" fault is defined, how slip on segments of the KES may be compatible with dextral shear related to continuation of the Karakoram fault, and suggestions as to how the two fault systems might still be connected. While we appreciate that there are still uncertainties in the regional geology, we address these comments and show that their arguments are inconsistent with all available data, known geologic relationships, and basic kinematics.
Quaternary low-angle slip on detachment faults in Death Valley, California
Hayman, N.W.; Knott, J.R.; Cowan, D.S.; Nemser, E.; Sarna-Wojcicki, A. M.
2003-01-01
Detachment faults on the west flank of the Black Mountains (Nevada and California) dip 29°-36° and cut subhorizontal layers of the 0.77 Ma Bishop ash. Steeply dipping normal faults confined to the hanging walls of the detachments offset layers of the 0.64 Ma Lava Creek B tephra and the base of 0.12-0.18 Ma Lake Manly gravel. These faults sole into and do not cut the low-angle detachments. Therefore the detachments accrued any measurable slip across the kinematically linked hanging-wall faults. An analysis of the orientations of hundreds of the hanging-wall faults shows that extension occurred at modest slip rates (<1 mm/yr) under a steep to vertically oriented maximum principal stress. The Black Mountain detachments are appropriately described as the basal detachments of near-critical Coulomb wedges. We infer that the formation of late Pleistocene and Holocene range-front fault scarps accompanied seismogenic slip on the detachments.
NASA Astrophysics Data System (ADS)
Albrecht, Franziska; Dorigo, Wouter; Gruber, Alexander; Wagner, Wolfgang; Kainz, Wolfgang
2014-05-01
Climate change induced drought variability impacts global forest ecosystems and forest carbon cycle dynamics. Physiological drought stress might even become an issue in regions generally not considered water-limited. The water balance at the soil surface is essential for forest growth, and soil moisture is a key driver linking precipitation and tree development. Tree-ring-based analyses are a potential approach to study the driving role of hydrological parameters for tree growth. However, at present two major research gaps are apparent: i) soil moisture records are hardly considered and ii) only a few studies link tree ring chronologies and satellite observations. Here we used tree ring chronologies obtained from the International Tree Ring Data Bank (ITRDB) and remotely sensed soil moisture observations (ECV_SM) to analyze the moisture-tree growth relationship. The ECV_SM dataset, which is being distributed through ESA's Climate Change Initiative for soil moisture, covers the period 1979 to 2010 at a spatial resolution of 0.25°. First analyses were performed for Mongolia, a country characterized by a continental arid climate. We extracted 13 tree ring chronologies suitable for our analysis from the ITRDB. Using monthly satellite-based soil moisture observations we confirmed previous studies on the seasonality of soil moisture in Mongolia. Further, we investigated the relationship between tree growth (as reflected by the tree ring width index) and remotely sensed soil moisture records by applying correlation analysis. Correlation coefficients show a strong response of tree growth to soil moisture conditions from current April to August, confirming a strong linkage between tree growth and soil water storage. The highest correlation was found for current April (R=0.44), indicating that sufficient water supply is vital for trees at the beginning of the growing season.
To verify these results, we related the chronologies to reanalysis precipitation and temperature datasets. Precipitation was important during both the current and previous growth season. Temperature showed the strongest correlation for previous (R=0.12) and current October (R=0.21). Hence, our results demonstrated that water supply is most likely limiting tree growth during the growing season, while temperature is determining its length. We are confident that long-term satellite based soil moisture observations can bridge spatial and temporal limitations that are inherent to in situ measurements, which are traditionally used for tree ring research. Our preliminary results are a foundation for further studies linking remotely sensed datasets and tree ring chronologies, an approach that has not been widely investigated among the scientific community.
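The correlation analysis described above amounts to computing Pearson's r between a tree-ring width index and a monthly soil moisture series. A minimal sketch on synthetic series follows; the data, coefficients, and 30-year span are invented stand-ins, not ITRDB or ECV_SM values:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum()))

# Synthetic stand-ins: 30 "years" of April soil moisture and a ring-width
# index constructed to depend partly on it (hypothetical relationship)
rng = np.random.default_rng(0)
soil_moisture = rng.normal(0.2, 0.05, 30)
ring_index = 0.8 * soil_moisture + rng.normal(0.0, 0.05, 30)
r = pearson_r(ring_index, soil_moisture)
```

In practice each chronology would be correlated against the soil moisture of every candidate month to find the season of strongest response.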
Managing Risk to Ensure a Successful Cassini/Huygens Saturn Orbit Insertion (SOI)
NASA Technical Reports Server (NTRS)
Witkowski, Mona M.; Huh, Shin M.; Burt, John B.; Webster, Julie L.
2004-01-01
I. Design: a) S/C designed to be largely single fault tolerant; b) Operate in flight demonstrated envelope, with margin; and c) Strict compliance with requirements & flight rules. II. Test: a) Baseline, fault & stress testing using flight system testbeds (H/W & S/W); b) In-flight checkout & demos to remove first time events. III. Failure Analysis: a) Critical event driven fault tree analysis; b) Risk mitigation & development of contingencies. IV. Residual Risks: a) Accepted pre-launch waivers to Single Point Failures; b) Unavoidable risks (e.g. natural disaster). V. Mission Assurance: a) Strict process for characterization of variances (ISAs, PFRs & Waivers); b) Full time Mission Assurance Manager reports to Program Manager: 1) Independent assessment of compliance with institutional standards; 2) Oversight & risk assessment of ISAs, PFRs & Waivers etc.; and 3) Risk Management Process facilitator.
NASA Astrophysics Data System (ADS)
Petrie, E. S.; Evans, J. P.; Richey, D.; Flores, S.; Barton, C.; Mozley, P.
2015-12-01
Sedimentary rocks in the San Rafael Swell, Utah, were deformed by Laramide compression and subsequent Neogene extension. We evaluate the effect of fault damage zone morphology as a function of structural position, and changes in mechanical stratigraphy on the distribution of secondary minerals across the reservoir-seal pair of the Navajo Sandstone and overlying Carmel Formation. We decipher paleo-fluid migration and examine the effect faults and fractures have on reservoir permeability and efficacy of top seal for a range of geo-engineering applications. Map-scale faults have an increased probability of allowing upward migration of fluids along the fault plane and within the damage zone, potentially bypassing the top seal. Field mapping, mesoscopic structural analyses, petrography, and geochemical observations demonstrate that fault zone thickness increases at structural intersections, fault relay zones, fault-related folds, and fault tips. Higher densities of faults with meters of slip and dense fracture populations are present in relay zones relative to single, discrete faults. Curvature analysis of the San Rafael monocline and fracture density data show that fracture density is highest where curvature is highest in the syncline hinge and near faults. Fractures cross the reservoir-seal interface where fracture density is highest, and structural diagenesis includes bleaching and calcite and gypsum mineralization events. The link between fracture distributions and structural setting implies that transmissive fractures have predictable orientations and density distributions. At the m- to cm-scale, deformation-band faults and joints in the Navajo Sandstone penetrate the reservoir-seal interface and transition into open-mode fractures in the caprock seal. Scanline analysis and petrography of veins provide evidence for subsurface mineralization and fracture reactivation, suggesting that the fractures act as loci for fluid flow through time.
Heterolithic caprock seals with variable fracture distributions and morphology highlight the strong link between the variation in material properties and the response to changing stress conditions. The variable connectivity of fractures and the changes in fracture density plays a critical role in subsurface fluid flow.
David W. Vahey; C. Tim Scott; J.Y. Zhu; Kenneth E. Skog
2012-01-01
Methods for estimating present and future carbon storage in trees and forests rely on measurements or estimates of tree volume or volume growth multiplied by specific gravity. Wood density can vary by tree ring and height in a tree. If data on density by tree ring could be obtained and linked to tree size and stand characteristics, it would be possible to more...
Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach for evaluating permanent faults prevents system designers from making decisions that minimize their occurrence. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements.
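Once a fault tree has been generated from the network topology, evaluating it reduces to propagating basic-event probabilities through AND/OR gates. The sketch below assumes a toy topology and made-up device failure probabilities; it illustrates the evaluation step only, not the paper's automatic generation method:

```python
# Minimal fault-tree evaluator (illustrative). Leaves hold a device's
# permanent-fault probability; an AND gate models redundancy (all
# children must fail), an OR gate models series dependence (any child
# failing brings the subtree down). Events are assumed independent.

def unavailability(node):
    kind = node[0]
    if kind == "basic":                       # ("basic", q)
        return node[1]
    qs = [unavailability(c) for c in node[1]]
    if kind == "and":                         # ("and", [children])
        q = 1.0
        for x in qs:
            q *= x
        return q
    if kind == "or":                          # ("or", [children])
        q = 1.0
        for x in qs:
            q *= (1.0 - x)
        return 1.0 - q
    raise ValueError(kind)

# Hypothetical case: a sensor reaches the sink via either of two redundant
# routers (AND of the pair failing), in series with the sink itself.
tree = ("or", [("and", [("basic", 0.05), ("basic", 0.05)]),
               ("basic", 0.01)])
q_top = unavailability(tree)   # probability the connection is down
```

Redundant routes enter as AND gates and so suppress the top-event probability, which is exactly the effect the methodology lets designers quantify.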
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. 
The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
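The scaled Taylor idea that gives STEM its name (evaluate exp(A/2^k) by a truncated Taylor series, then square k times) can be sketched in a few lines. The three-state Markov model and rates below are invented illustrations of a stiff fault/recovery model, not NASA's implementation or validated values:

```python
import numpy as np

def expm_scaled_taylor(A, terms=16, squarings=30):
    """Matrix exponential by scaling and squaring with a truncated Taylor
    series: a sketch of the idea behind STEM, not the NASA code."""
    A = np.asarray(A, float) / 2.0**squarings
    E = np.eye(A.shape[0])
    T = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        T = T @ A / k          # next Taylor term A^k / k!
        E = E + T
    for _ in range(squarings):
        E = E @ E              # undo the scaling by repeated squaring
    return E

# Hypothetical 3-state model: 0 = operational, 1 = recovering, 2 = failed.
# Fault arrivals are rare (1e-4/h) while recovery is fast (1e3/h); this
# rate disparity is what makes such models numerically stiff.
lam, mu, c = 1e-4, 1e3, 1e-1
Q = np.array([[-lam,       lam,  0.0],
              [  mu, -(mu + c),    c],
              [ 0.0,       0.0,  0.0]])     # generator matrix (rows sum to 0)
p0 = np.array([1.0, 0.0, 0.0])              # start operational
t = 10.0                                    # mission time, hours
p_t = p0 @ expm_scaled_taylor(Q * t)
p_fail = p_t[2]                             # probability of the death state
```

The aggressive scaling (2^30) keeps the Taylor series convergent even though the scaled generator mixes rates five orders of magnitude apart.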
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. 
The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
1988-08-20
"Increasing Reliability of Multiversion Fault-Tolerant Software Design by Modularization," Junryo Miyashita, Department of Computer Science, California State University at San Bernardino. ... They shall be referred to as "multiversion fault-tolerant software design". One problem of developing multiple versions of a program is the high cost
SeaMARC II mapping of transform faults in the Cayman Trough, Caribbean Sea
Rosencrantz, Eric; Mann, Paul
1992-01-01
SeaMARC II maps of the southern wall of the Cayman Trough between Honduras and Jamaica show zones of continuous, well-defined fault lineaments adjacent and parallel to the wall, both to the east and west of the Cayman spreading axis. These lineaments mark the present, active traces of transform faults which intersect the southern end of the spreading axis at a triple junction. The Swan Islands transform fault to the west is dominated by two major lineaments that overlap with right-stepping sense across a large push-up ridge beneath the Swan Islands. The fault zone to the east of the axis, named the Walton fault, is more complex, containing multiple fault strands and a large pull-apart structure. The Walton fault links the spreading axis to Jamaican and Hispaniolan strike-slip faults, and it defines the southern boundary of a microplate composed of the eastern Cayman Trough and western Hispaniola. The presence of this microplate raises questions about the accuracy of Caribbean plate velocities based primarily on Cayman Trough opening rates.
NASA Astrophysics Data System (ADS)
Cilona, A.; Aydin, A.; Hazelton, G.
2013-12-01
Characterization of the structural architecture of a 5 km-long, N40°E-striking fault zone provides new insights for the interpretation of hydraulic heads measured across and along the fault. Of interest is the contaminant transport across a portion of the Upper Cretaceous Chatsworth Formation, a 1400 m-thick turbidite sequence of sandstones and shales exposed in the Simi Hills, southern California. Local bedding consistently dips about 20° to 30° to NW. Participating hydrogeologists monitor the local groundwater system by means of numerous boreholes used to define the 3D distribution of the groundwater table around the fault. Sixty hydraulic head measurements consistently show differences of 10s of meters, except for a small area. In this presentation, we propose a link between this distribution and the fault zone architecture. Despite an apparent linear morphological trend, the fault is made up of at least three distinct segments named here as northern, central and southern segments. Key aspects of the fault zone architecture have been delineated at two sites. The first is an outcrop of the central segment and the second is a borehole intersecting the northern segment at depth. The first site shows the fault zone juxtaposing sandstones against shales. Here the fault zone consists of a 13 meter-wide fault rock including a highly deformed sliver of sandstone on the northwestern side. In the sandstone, shear offset was resolved along N42°E striking and SE dipping fracture surfaces localized within a 40 cm thick strand. Here the central core of the fault zone is 8 m-wide and contains mostly shale characterized by highly diffuse deformation. It shows a complex texture overprinted by N30°E-striking carbonate veins. At the southeastern edge of the fault zone exposure, a shale unit dipping 50° NW towards the fault zone provides the key information that the shale unit was incorporated into the fault zone in a manner consistent with shale smearing.
At the second site, a borehole more than 194 meters long intersects the fault zone at its bottom. Based on an optical televiewer image supplemented by limited recovered rock cores, a juxtaposition plane (dipping 75° SE) between a fractured sandstone and a highly-deformed shale fault rock has been interpreted as the southeastern boundary of the fault zone. The shale fault rock, estimated to be thicker than 4 meters, is highly folded and brecciated with locally complex cataclastic texture. The observations and interpretations of the fault architecture presented above suggest that the drop of hydraulic head detected across the fault segments is due primarily to the low-permeability shaly fault rock incorporated into the fault zone by a shale smearing mechanism. Interestingly, around the step between the northern and the central fault segments, where the fault offset is expected to diminish (no hard link and no significant shaly fault rock), the groundwater levels measured on either side of the fault zone are more-or-less equal.
Methods to enhance seismic faults and construct fault surfaces
NASA Astrophysics Data System (ADS)
Wu, Xinming; Zhu, Zhihui
2017-10-01
Faults are often apparent as reflector discontinuities in a seismic volume. Numerous types of fault attributes have been proposed to highlight fault positions from a seismic volume by measuring reflection discontinuities. These attribute volumes, however, can be sensitive to noise and stratigraphic features that are also apparent as discontinuities in a seismic volume. We propose a matched filtering method to enhance a precomputed fault attribute volume, and simultaneously estimate fault strikes and dips. In this method, a set of efficient 2D exponential filters, oriented by all possible combinations of strike and dip angles, are applied to the input attribute volume to find the maximum filtering responses at all samples in the volume. These maximum filtering responses are recorded to obtain the enhanced fault attribute volume while the corresponding strike and dip angles, that yield the maximum filtering responses, are recorded to obtain volumes of fault strikes and dips. By doing this, we assume that a fault surface is locally planar, and a 2D smoothing filter will yield a maximum response if the smoothing plane coincides with a local fault plane. With the enhanced fault attribute volume and the estimated fault strike and dip volumes, we then compute oriented fault samples on the ridges of the enhanced fault attribute volume, and each sample is oriented by the estimated fault strike and dip. Fault surfaces can be constructed by directly linking the oriented fault samples with consistent fault strikes and dips. For complicated cases with missing fault samples and noisy samples, we further propose to use a perceptual grouping method to infer fault surfaces that reasonably fit the positions and orientations of the fault samples. We apply these methods to 3D synthetic and real examples and successfully extract multiple intersecting fault surfaces and complete fault surfaces without holes.
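The maximum-response scan over orientations can be sketched in 2D: average the attribute along a short line at each trial angle, then keep the largest response and the angle that produced it at each sample. The line-filter construction, angle set, and synthetic image below are simplifying assumptions (the paper uses 2D exponential filters over strike/dip pairs in 3D):

```python
import numpy as np

def enhance_faults(attr, angles_deg, half_len=4):
    """Enhance a 2D fault-attribute image with oriented line filters.
    For each trial angle, average the attribute along a short line of
    2*half_len+1 samples; record the maximum response and its angle."""
    best = np.full(attr.shape, -np.inf)
    best_angle = np.zeros(attr.shape)
    for ang in angles_deg:
        t = np.deg2rad(ang)
        resp = np.zeros(attr.shape)
        for s in range(-half_len, half_len + 1):
            dy = int(round(s * np.sin(t)))
            dx = int(round(s * np.cos(t)))
            # shift the image so the line sample lands on each pixel
            resp += np.roll(np.roll(attr, dy, axis=0), dx, axis=1)
        resp /= 2 * half_len + 1
        take = resp > best
        best[take] = resp[take]
        best_angle[take] = ang
    return best, best_angle

# A synthetic vertical "fault" in an otherwise empty attribute image
img = np.zeros((32, 32))
img[:, 16] = 1.0
enh, ang = enhance_faults(img, angles_deg=range(0, 180, 15))
```

The filter aligned with the fault (90° here) preserves the full attribute value, while misaligned filters smear it out, which is the matched-filter effect exploited in the method.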
Communications and tracking expert systems study
NASA Technical Reports Server (NTRS)
Leibfried, T. F.; Feagin, Terry; Overland, David
1987-01-01
The original objectives of the study consisted of five broad areas of investigation: criteria and issues for explanation of communication and tracking system anomaly detection, isolation, and recovery; data storage simplification issues for fault detection expert systems; data selection procedures for decision tree pruning and optimization to enhance the abstraction of pertinent information for clear explanation; criteria for establishing levels of explanation suited to needs; and analysis of expert system interaction and modularization. Progress was made in all areas, but to a lesser extent in the criteria for establishing levels of explanation suited to needs. Among the types of expert systems studied were those related to anomaly or fault detection, isolation, and recovery.
[Medical Equipment Maintenance Methods].
Liu, Hongbin
2015-09-01
The high technology content and complexity of medical equipment, together with its safety and effectiveness requirements, place high demands on maintenance work. This paper introduces some basic methods of medical instrument maintenance, including fault tree analysis, the node method and the exclusion method, three important methods in medical equipment maintenance; using these three methods, hardware breakdown maintenance can be done easily for instruments that have circuit drawings. The paper also introduces methods for handling some special fault conditions, to help avoid detours when the same problems are encountered. Continued learning is very important for staff newly engaged in this area.
NASA Astrophysics Data System (ADS)
Blakely, R. J.; Wells, R. E.; Sherrod, B. L.; Brocher, T. M.
2016-12-01
Newly acquired potential-field data, geologic mapping, and recorded seismicity indicate that the Cascadia subduction zone is segmented in southwestern Washington by a left-stepping, possibly active crustal structure spanning nearly the entire onshore portion of the forearc. The east-striking, southward verging Doty thrust fault is an important part of this trans-forearc structure. As mapped, the eastern end of the 50-km-long Doty fault connects with the northwestern termination of ongoing seismicity on the north-northwest-striking Mt. St. Helens seismic zone (MSHSZ), suggesting that the Doty fault and MSHSZ may be kinematically linked. Westward, the mapped Doty fault terminates at and may link to mapped faults striking northwestward to 35 km north of Grays Harbor, a total northwest distance of 85 km. A newly acquired aeromagnetic survey over the Doty fault and MSHSZ, and existing gravity data, emphasize Crescent Formation and other Eocene volcanic rocks in the hanging wall of the Doty fault with up to 4 km of vertical throw. Most MSHSZ epicenters fall within a broad (5- to 10-km wide) magnetic low extending 50 km north-northwestward from Mt. St Helens. The magnetic low skirts around the western margin of the Miocene-age Spirit Lake pluton, but otherwise is not obviously associated with topography or mapped geology. We suggest that dextral slip on the MSHSZ is distributed across a broad, northwest-striking area that includes the magnetic low and is transferred to compressional slip on the Doty fault. The Doty fault demarcates a clear north-to-south decrease in the density of episodic tremor, suggesting that the thrust fault may intersect or modulate over-pressured fluids generated above the slab (Wells et al., in review). 
The Doty fault, MSHSZ, and neighboring structures are consistent with a dextral shear couple (Wells and Coe, 1985) and consequent clockwise crustal rotation extending across the entire landward portion of the Cascadia forearc, from the Pacific Coast to the Cascadia arc and from Grays Harbor to the Portland basin in northwestern Oregon.
Mori, J.
1996-01-01
Details of the M 4.3 foreshock to the Joshua Tree earthquake were studied using P waves recorded on the Southern California Seismic Network and the Anza network. Deconvolution, using an M 2.4 event as an empirical Green's function, corrected for complicated path and site effects in the seismograms and produced simple far-field displacement pulses that were inverted for a slip distribution. Both possible fault planes, north-south and east-west, for the focal mechanism were tested by a least-squares inversion procedure with a range of rupture velocities. The results showed that the foreshock ruptured the north-south plane, similar to the mainshock. The foreshock initiated a few hundred meters south of the mainshock and ruptured to the north, toward the mainshock hypocenter. The mainshock (M 6.1) initiated near the northern edge of the foreshock rupture 2 hr later. The foreshock had a high stress drop (320 to 800 bars) and broke a small portion of the fault adjacent to the mainshock but was not able to immediately initiate the mainshock rupture.
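Empirical Green's function deconvolution of the kind described above is commonly implemented as water-level spectral division: the small event's record is divided out of the larger event's record in the frequency domain, with a floor on the denominator for stability. The sketch below uses synthetic signals; the water level, path wavelet, and boxcar source are assumptions for illustration, not the study's processing:

```python
import numpy as np

def egf_deconvolve(mainshock, egf, water_level=0.01):
    """Water-level spectral division: deconvolve an empirical Green's
    function (small-event record) from a larger event's seismogram to
    recover the relative source-time function. Parameters are illustrative."""
    n = len(mainshock)
    M = np.fft.rfft(mainshock)
    G = np.fft.rfft(egf)
    # Clamp small spectral amplitudes of the EGF to stabilize the division
    floor = water_level * np.abs(G).max()
    denom = np.where(np.abs(G) < floor, floor, np.abs(G))
    R = M * np.conj(G) / denom**2
    return np.fft.irfft(R, n)

# Synthetic data: a boxcar source convolved with a shared "path" wavelet
path = np.exp(-np.arange(64) / 5.0)   # stands in for path and site effects
src = np.zeros(64)
src[5:15] = 1.0                       # boxcar source-time function
main = np.convolve(src, path)[:64]
stf = egf_deconvolve(main, path)      # should resemble the boxcar
```

Because the small event shares the path and site response of the large one, the division cancels those complications and leaves a simple displacement pulse, which is what makes the slip inversion tractable.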
NASA Astrophysics Data System (ADS)
Hayman, Nicholas W.; Karson, Jeffrey A.
2009-02-01
The escarpments that bound the Pito Deep Rift (northeastern Easter microplate) expose in situ upper oceanic crust that was accreted ~3 Ma ago at the superfast spreading (~142 mm/a, full rate) southeast Pacific Rise (SEPR). Samples and images of these escarpments were taken during transects utilizing the human-occupied vehicle Alvin and remotely operated vehicle Jason II. The dive areas were mapped with a "deformation intensity scale" revealing that the sheeted dike complex and the base of the lavas contain approximately meter-wide fault zones surrounded by fractured "damage zones." Fault zones are spaced several hundred meters apart, in places offset the base of the lavas, separate areas with differently oriented dikes, and are locally crosscut by (younger) dikes. Fault rocks are rich in interstitial amphibole, matrix and vein chlorite, prominent veins of quartz, and accessory grains of sulfides, oxides, and sphene. These phases form the fine-grained matrix materials for cataclasites and cements for breccias where they completely surround angular to subangular clasts of variably altered and deformed basalt. Bulk rock geochemical compositions of the fault rocks are largely governed by the abundance of quartz veins. When compositions are normalized to compensate for the excess silica, the fault rocks exhibit evidence for additional geochemical changes via hydrothermal alteration, including the loss of mobile elements and gain of some trace metals and magnesium. Microstructures and compositions suggest that the fault rocks developed over multiple increments of deformation and hydrothermal fluid flow in the subaxial environment of the SEPR; faults related to the opening of the Pito Deep Rift can be distinguished by their orientation and fault rock microstructure. Some subaxial deformation increments were likely linked with violent discharge events associated with fluid pressure fluctuations and mineral sealing within the fault zones.
Other increments were linked with the influx of relatively fresh seawater. The spacing of the faults is consistent with fault localization occurring every 7000 to 14,000 years, with long-term slip rates of <3 mm/a. Once spread from the ridge axis, the faults were probably not active, and damage zones likely played a more significant role in axial flank and off-axis crustal permeability.
NASA Astrophysics Data System (ADS)
Karson, J.; Horst, A. J.; Nanfito, A.
2011-12-01
Iceland has long been used as an analog for studies of seafloor spreading. Despite its thick (~25 km) oceanic crust and subaerial lavas, many features associated with accretion along mid-ocean ridge spreading centers, and the processes that generate them, are well represented in the actively spreading Neovolcanic Zone and deeply glaciated Tertiary crust that flanks it. Integrated results of structural and geodetic studies show that the plate boundary zone on Iceland is a complex array of linked structures bounding major crustal blocks or microplates, similar to oceanic microplates. Major rift zones propagate N and S from the hotspot centered beneath the Vatnajökull icecap in SE central Iceland. The southern propagator has extended southward beyond the South Iceland Seismic Zone transform fault to the Westman Islands, resulting in abandonment of the Eastern Rift Zone. Continued propagation may cause abandonment of the Reykjanes Ridge. The northern propagator is linked to the southern end of the receding Kolbeinsey Ridge to the north. The NNW-trending Kerlingar Pseudo-fault bounds the propagator system to the E. The Tjörnes Transform Fault links the propagator tip to the Kolbeinsey Ridge and appears to be migrating northward in incremental steps, leaving a swath of deformed crustal blocks in its wake. Block rotations, concentrated mainly to the west of the propagators, are clockwise to the N of the hotspot and counter-clockwise to the S, possibly resulting in a component of NS divergence across EW-oriented rift zones. These rotations may help accommodate adjustments of the plate boundary zone to the relative movements of the N American and Eurasian plates. The rotated crustal blocks are composed of highly anisotropic crust with rift-parallel internal fabric generated by spreading processes. Block rotations result in reactivation of spreading-related faults as major rift-parallel, strike-slip faults. 
Structural details found in Iceland can help provide information that is difficult or impossible to obtain in propagating systems of the deep seafloor.
Fault isolation through no-overhead link level CRC
Chen, Dong; Coteus, Paul W.; Gara, Alan G.
2007-04-24
A fault isolation technique for checking the accuracy of data packets transmitted between nodes of a parallel processor. An independent CRC is kept of all data sent from one processor to another and of all data received. At the end of each checkpoint, the CRCs are compared; if they do not match, an error occurred. The CRCs may be cleared and restarted at each checkpoint. In the preferred embodiment, the basic functionality is to calculate a CRC of all packet data that has been successfully transmitted across a given link. This CRC is computed on both ends of the link, thereby allowing an independent check on all data believed to have been correctly transmitted. Preferably, all links have this CRC coverage, and the CRC used in this link-level check is different from that used in the packet transfer protocol. This independent check, if successfully passed, virtually eliminates the possibility that any data errors were missed during the previous transfer period.
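The running link-level CRC can be sketched in software as follows (illustrative only; the invention describes a hardware mechanism, and the CRC-32 used here via zlib is merely a stand-in polynomial):

```python
import zlib

class LinkEndpoint:
    """Keeps a running CRC over all payload bytes seen on one end of a link."""
    def __init__(self):
        self.crc = 0

    def observe(self, payload: bytes):
        # Fold each successfully transferred packet into the running CRC.
        self.crc = zlib.crc32(payload, self.crc)

    def checkpoint(self):
        # Read out and clear the CRC, as done at each checkpoint boundary.
        value, self.crc = self.crc, 0
        return value

sender, receiver = LinkEndpoint(), LinkEndpoint()
for packet in [b"alpha", b"bravo", b"charlie"]:
    sender.observe(packet)    # CRC of what we believe we sent
    receiver.observe(packet)  # CRC of what actually arrived

# Matching checkpoint CRCs mean no silent corruption in the interval.
assert sender.checkpoint() == receiver.checkpoint()
```

Because this CRC is independent of the per-packet protocol CRC, it catches errors the packet-level check let through.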
Bedrosian, Paul A.; Burgess, Matthew K.; Nishikawa, Tracy
2013-01-01
Within the south-western Mojave Desert, the Joshua Basin Water District is considering applying imported water into infiltration ponds in the Joshua Tree groundwater sub-basin in an attempt to artificially recharge the underlying aquifer. Scarce subsurface hydrogeological data are available near the proposed recharge site; therefore, time-domain electromagnetic (TDEM) data were collected and analysed to characterize the subsurface. TDEM soundings were acquired to estimate the depth to water on either side of the Pinto Mountain Fault, a major east-west trending strike-slip fault that transects the proposed recharge site. While TDEM is a standard technique for groundwater investigations, special care must be taken when acquiring and interpreting TDEM data in a two-dimensional (2D) faulted environment. A subset of the TDEM data consistent with a layered-earth interpretation was identified through a combination of three-dimensional (3D) forward modelling and diffusion time-distance estimates. Inverse modelling indicates an offset in water table elevation of nearly 40 m across the fault. These findings imply that the fault acts as a low-permeability barrier to groundwater flow in the vicinity of the proposed recharge site. Existing production wells on the south side of the fault, together with a thick unsaturated zone and permeable near-surface deposits, suggest the southern half of the study area is suitable for artificial recharge. These results illustrate the effectiveness of targeted TDEM in support of hydrological studies in a heavily faulted desert environment where data are scarce and the cost of obtaining these data by conventional drilling techniques is prohibitive.
NASA Astrophysics Data System (ADS)
Fenton, C. H.; Sutiwanich, C.
2005-12-01
The Ranong and Khlong Marui faults are northeast-southwest trending structures in the Isthmus of Kra, southern Thailand, that apparently link the extensional regimes of the Mergui Basin in the Andaman Sea and the Gulf of Thailand. These faults are depicted commonly as strike-slip faults, acting as conjugate structures to the dominant northwest-southeast trending strike-slip faults in Southeast Asia. These faults are parallel to the predominant structural grain in the Carboniferous rocks of peninsular Thailand. In addition, they appear to be bounding structures for several Tertiary basins, including the onshore parts of the Surat Thani basin and the offshore Chumphon basin. Initial remote sensing studies showed that both faults have relatively subdued geomorphic expressions. Field reconnaissance investigations indicated a lack of youthful tectonic geomorphology along the Khlong Marui fault and ambiguous evidence for recent movement along the Ranong fault. Fault exposures along both fault trends and on minor parallel faults in the region indicated that, rather than predominantly strike-slip motion, these faults have experienced up-to-the-west reverse movement. Because of its more youthful geomorphic expression, several sites along the Ranong fault were chosen for paleoseismic trenching. Initial trench exposures indicate an absence of Holocene movement. Some exposures indicate the possibility of Late Tertiary-Early Holocene vertical movement. These investigations are currently ongoing and we hope to report our conclusions at the Fall Meeting.
Langridge, R.M.; Stenner, Heidi D.; Fumal, T.E.; Christofferson, S.A.; Rockwell, T.K.; Hartleb, R.D.; Bachhuber, J.; Barka, A.A.
2002-01-01
The Mw 7.4 17 August 1999 İzmit earthquake ruptured five major fault segments of the dextral North Anatolian Fault Zone. The 26-km-long, N86°W-trending Sakarya fault segment (SFS) extends from the Sapanca releasing step-over in the west to near the town of Akyazi in the east. The SFS emerges from Lake Sapanca as two distinct fault traces that rejoin to traverse the Adapazari Plain to Akyazi. Offsets were measured across 88 cultural and natural features that cross the fault, such as roads, cornfield rows, rows of trees, walls, rails, field margins, ditches, vehicle ruts, a dike, and ground cracks. The maximum displacement observed for the İzmit earthquake (∼5.1 m) was encountered on this segment. Dextral displacement for the SFS rises from less than 1 m at Lake Sapanca to greater than 5 m near Arifiye, only 3 km away. Average slip decreases uniformly to the east from Arifiye until the fault steps left from Sagir to Kazanci to the N75°W, 6-km-long Akyazi strand, where slip drops to less than 1 m. The Akyazi strand passes eastward into the Akyazi Bend, which consists of a high-angle bend (18°-29°) between the Sakarya and Karadere fault segments, a 6-km gap in surface rupture, and high aftershock energy release. Complex structural geometries exist between the İzmit, Düzce, and 1967 Mudurnu fault segments that have arrested surface ruptures on timescales ranging from 30 sec to 88 days to 32 yr. The largest of these step-overs may have acted as a rupture segmentation boundary in previous earthquake cycles.
Generic Fortran Containers (GFC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liakh, Dmitry
2016-09-01
The Fortran language does not provide a standard library that implements generic containers, like linked lists, trees, dictionaries, etc. The GFC software provides an implementation of generic Fortran containers natively written in Fortran 2003/2008 language. The following containers are either already implemented or planned: Stack (done), Linked list (done), Tree (done), Dictionary (done), Queue (planned), Priority queue (planned).
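GFC itself is written in Fortran 2003/2008, but the shape of one of its containers can be illustrated with a type-parameterized, linked-list-backed stack. This Python sketch is an analogy, not GFC's actual API:

```python
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

class Node(Generic[T]):
    """One cell of a singly linked list."""
    def __init__(self, value: T, next: "Optional[Node[T]]"):
        self.value, self.next = value, next

class Stack(Generic[T]):
    """Minimal generic stack backed by a linked list."""
    def __init__(self):
        self.head: Optional[Node[T]] = None

    def push(self, value: T):
        self.head = Node(value, self.head)

    def pop(self) -> T:
        if self.head is None:
            raise IndexError("pop from empty stack")
        value, self.head = self.head.value, self.head.next
        return value

s: Stack[int] = Stack()
for v in (1, 2, 3):
    s.push(v)
assert [s.pop(), s.pop(), s.pop()] == [3, 2, 1]  # LIFO order
```

The point of a generic library such as GFC is that the same container code serves any element type, which standard Fortran does not provide out of the box.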
Certification trails for data structures
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. Here, the applicability of the certification trail technique is significantly generalized. Previously, certification trails had to be customized to each algorithm application; instead, trails appropriate to wide classes of algorithms were developed. These certification trails are based on common data-structure operations, such as those carried out using balanced binary trees and heaps. Any algorithm using these sets of operations can therefore employ the certification trail method to achieve software fault tolerance. To exemplify the scope of the generalization of the certification trail technique provided, constructions of trails for abstract data types such as priority queues and union-find structures are given. These trails are applicable to any data-structure implementation of the abstract data type. It is also shown that these ideas lead naturally to monitors for data-structure operations.
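The flavor of a certification trail can be conveyed with sorting: the first execution produces an answer that doubles as the trail, and an independent checker verifies it in linear time. This is a simplified illustration, not one of the paper's constructions:

```python
from collections import Counter

def sort_with_trail(xs):
    """Primary computation: returns the answer, which doubles as the trail."""
    return sorted(xs)

def check_trail(xs, trail):
    """Secondary check: linear-time verification that the trail is a sorted
    permutation of the input -- cheaper than repeating the computation,
    which is the economy certification trails aim for."""
    in_order = all(a <= b for a, b in zip(trail, trail[1:]))
    same_multiset = Counter(xs) == Counter(trail)
    return in_order and same_multiset

data = [5, 3, 8, 1]
trail = sort_with_trail(data)
assert check_trail(data, trail)          # fault-free run passes
assert not check_trail(data, [1, 3, 5])  # corrupted trail is caught
```

The paper's constructions play the same game for priority-queue and union-find operation sequences, where the trail lets the checker avoid re-running the data structure.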
NASA Technical Reports Server (NTRS)
Braden, W. B.
1992-01-01
This talk discusses the importance of providing a process operator with concise information about a process fault including a root cause diagnosis of the problem, a suggested best action for correcting the fault, and prioritization of the problem set. A decision tree approach is used to illustrate one type of approach for determining the root cause of a problem. Fault detection in several different types of scenarios is addressed, including pump malfunctions and pipeline leaks. The talk stresses the need for a good data rectification strategy and good process models along with a method for presenting the findings to the process operator in a focused and understandable way. A real time expert system is discussed as an effective tool to help provide operators with this type of information. The use of expert systems in the analysis of actual versus predicted results from neural networks and other types of process models is discussed.
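A root-cause decision tree of the kind described can be represented as nested question/branch nodes leading to a diagnosis and a suggested best action. The symptoms, diagnoses, and actions below are hypothetical:

```python
# Internal node: (question, {answer: subtree}); leaf: (root cause, best action).
# All questions and outcomes here are invented for illustration.
TREE = ("flow low?", {
    "yes": ("discharge pressure low?", {
        "yes": ("suction pressure normal?", {
            "yes": ("pump malfunction", "inspect impeller and seals"),
            "no":  ("upstream blockage", "check suction strainer"),
        }),
        "no": ("pipeline leak", "isolate segment and dispatch crew"),
    }),
    "no": ("no fault detected", "continue monitoring"),
})

def diagnose(tree, answers):
    """Walk the tree using operator (or rectified-data) answers until a leaf."""
    node = tree
    while isinstance(node[1], dict):   # internal node: ask and descend
        question, branches = node
        node = branches[answers[question]]
    return node                        # leaf: (root cause, best action)

answers = {"flow low?": "yes", "discharge pressure low?": "yes",
           "suction pressure normal?": "yes"}
assert diagnose(TREE, answers) == ("pump malfunction", "inspect impeller and seals")
```

In the approach described, the answers at each branch would come from rectified process data and model predictions rather than manual entry.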
Modeling Off-Nominal Behavior in SysML
NASA Technical Reports Server (NTRS)
Day, John C.; Donahue, Kenneth; Ingham, Michel; Kadesch, Alex; Kennedy, Andrew K.; Post, Ethan
2012-01-01
Specification and development of fault management functionality in systems is performed in an ad hoc way - more of an art than a science. Improvements to system reliability, availability, safety and resilience will be limited without infusion of additional formality into the practice of fault management. Key to the formalization of fault management is a precise representation of off-nominal behavior. Using the upcoming Soil Moisture Active-Passive (SMAP) mission for source material, we have modeled the off-nominal behavior of the SMAP system during its initial spin-up activity, using the System Modeling Language (SysML). In the course of developing these models, we have developed generic patterns for capturing off-nominal behavior in SysML. We show how these patterns provide useful ways of reasoning about the system (e.g., checking for completeness and effectiveness) and allow the automatic generation of typical artifacts (e.g., success trees and FMECAs) used in system analyses.
An update of Quaternary faults of central and eastern Oregon
Weldon, Ray J.; Fletcher, D.K.; Weldon, E.M.; Scharer, K.M.; McCrory, P.A.
2002-01-01
This is the online version of a CD-ROM publication. We have updated the eastern portion of our previous active fault map of Oregon (Pezzopane, Nakata, and Weldon, 1992) as a contribution to the larger USGS effort to produce digital maps of active faults in the Pacific Northwest region. The 1992 fault map has seen wide distribution and has been reproduced in essentially all subsequent compilations of active faults of Oregon. The new map provides a substantial update of known active or suspected active faults east of the Cascades. Improvements in the new map include (1) many newly recognized active faults, (2) a linked ArcInfo map and reference database, (3) more precise locations for previously recognized faults on shaded relief quadrangles generated from USGS 30-m digital elevations models (DEM), (4) more uniform coverage resulting in more consistent grouping of the ages of active faults, and (5) a new category of 'possibly' active faults that share characteristics with known active faults, but have not been studied adequately to assess their activity. The distribution of active faults has not changed substantially from the original Pezzopane, Nakata and Weldon map. Most faults occur in the south-central Basin and Range tectonic province that is located in the backarc portion of the Cascadia subduction margin. These faults occur in zones consisting of numerous short faults with similar rates, ages, and styles of movement. Many active faults strongly correlate with the most active volcanic centers of Oregon, including Newberry Craters and Crater Lake.
Linking megathrust earthquakes to brittle deformation in a fossil accretionary complex
Dielforder, Armin; Vollstaedt, Hauke; Vennemann, Torsten; Berger, Alfons; Herwegh, Marco
2015-01-01
Seismological data from recent subduction earthquakes suggest that megathrust earthquakes induce transient stress changes in the upper plate that shift accretionary wedges into an unstable state. These stress changes have, however, never been linked to geological structures preserved in fossil accretionary complexes. The importance of coseismically induced wedge failure has therefore remained largely elusive. Here we show that brittle faulting and vein formation in the palaeo-accretionary complex of the European Alps record stress changes generated by subduction-related earthquakes. Early veins formed at shallow levels by bedding-parallel shear during coseismic compression of the outer wedge. In contrast, subsequent vein formation occurred by normal faulting and extensional fracturing at deeper levels in response to coseismic extension of the inner wedge. Our study demonstrates how mineral veins can be used to reveal the dynamics of outer and inner wedges, which respond in opposite ways to megathrust earthquakes by compressional and extensional faulting, respectively. PMID:26105966
Real-Time Distributed Embedded Oscillator Operating Frequency Monitoring
NASA Technical Reports Server (NTRS)
Pollock, Julie; Oliver, Brett; Brickner, Christopher
2012-01-01
A document discusses the utilization of embedded clocks inside operating network data links as an auxiliary clock source to satisfy local oscillator monitoring requirements. Modern network interfaces, typically serial network links, often contain embedded clocking information of very tight precision to recover data from the link. This embedded clocking data can be utilized by the receiving device to monitor the local oscillator for tolerance to required specifications, often important in high-integrity fault-tolerant applications. A device can utilize a received embedded clock to determine if the local or the remote device is out of tolerance by using a single link. The local device can determine if it is failing, assuming a single fault model, with two or more active links. Network fabric components, containing many operational links, can potentially determine faulty remote or local devices in the presence of multiple faults. Two methods of implementation are described. In one method, a recovered clock can be directly used to monitor the local clock as a direct replacement of an external local oscillator. This scheme is consistent with a general clock monitoring function whereby clock sources are clocking two counters and compared over a fixed interval of time. In another method, overflow/underflow conditions can be used to detect clock relationships for monitoring. These network interfaces often provide clock compensation circuitry to allow data to be transferred from the received (network) clock domain to the internal clock domain. This circuit could be modified to detect overflow/underflow conditions of the buffering required and report a fast or slow receive clock, respectively.
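The first method, two counters clocked by the local and recovered clocks and compared over a fixed interval, reduces to a ratio test. A sketch with hypothetical counts and tolerance:

```python
def clock_out_of_tolerance(local_counts, recovered_counts, ppm_limit=100):
    """Compare two free-running counters sampled over the same interval.
    If their ratio deviates from 1 by more than the allowed tolerance
    (in parts per million), one of the two clocks is out of spec; with
    two or more links, the faulty side can be identified by voting.
    The 100 ppm default is a hypothetical tolerance, not from the source."""
    deviation_ppm = abs(local_counts - recovered_counts) / recovered_counts * 1e6
    return deviation_ppm > ppm_limit

# Nominal 100 MHz clocks counted over a 10 ms window (1,000,000 ticks expected):
assert not clock_out_of_tolerance(1_000_050, 1_000_000)  # 50 ppm: in tolerance
assert clock_out_of_tolerance(1_001_200, 1_000_000)      # 1200 ppm: flag a fault
```

The second (overflow/underflow) method achieves the same comparison implicitly: the elastic buffer between clock domains fills or drains when the receive clock runs fast or slow.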
TreeVector: scalable, interactive, phylogenetic trees for the web.
Pethica, Ralph; Barker, Gary; Kovacs, Tim; Gough, Julian
2010-01-28
Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees. We introduce TreeVector, a Scalable Vector Graphics- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required, and can be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than an end graphic for print. TreeVector is fast and easy to use and is available to download precompiled, but is also open source. It can also be run from the web server listed below or the user's own web server. It has already been deployed on two recognized and widely used database Web sites.
Fault-tolerant Control of a Cyber-physical System
NASA Astrophysics Data System (ADS)
Roxana, Rusu-Both; Eva-Henrietta, Dulf
2017-10-01
Cyber-physical systems represent a new emerging field in automatic control. Fault handling is a key component, because modern, large-scale processes must meet high standards of performance, reliability and safety. Fault propagation in large-scale chemical processes can lead to loss of production, energy, raw materials and even environmental hazard. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault-tolerant control system, while the operational model of the process was implemented in the Matlab/SIMULINK environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. In order to verify the performance and to prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.
The 2013 Balochistan earthquake: An extraordinary or completely ordinary event?
NASA Astrophysics Data System (ADS)
Zhou, Yu; Elliott, John R.; Parsons, Barry; Walker, Richard T.
2015-08-01
The 2013 Balochistan earthquake, a predominantly strike-slip event, occurred on the arcuate Hoshab fault in the eastern Makran linking an area of mainly left-lateral shear in the east to one of shortening in the west. The difficulty of reconciling predominantly strike-slip motion with this shortening has led to a wide range of unconventional kinematic and dynamic models. Here we determine the vertical component of motion on the fault using a 1 m resolution elevation model derived from postearthquake Pleiades satellite imagery. We find a constant local ratio of vertical to horizontal slip through multiple past earthquakes, suggesting the kinematic style of the Hoshab fault has remained constant throughout the late Quaternary. We also find evidence for active faulting on a series of nearby, subparallel faults, showing that failure in large, distributed and rare earthquakes is the likely method of faulting across the eastern Makran, reconciling geodetic and long-term records of strain accumulation.
The kinematics of central-southern Turkey and northwest Syria revisited
NASA Astrophysics Data System (ADS)
Seyrek, Ali; Demir, Tuncer; Westaway, Rob; Guillou, Hervé; Scaillet, Stéphane; White, Tom S.; Bridgland, David R.
2014-03-01
Central-southern Turkey, NW Syria, and adjacent offshore areas in the NE Mediterranean region form the boundary zone between the Turkish, African and Arabian plates. A great deal of new information has emerged in recent years regarding senses and rates of active crustal deformation in this region, but this material has not hitherto been well integrated, so the interpretations of key localities by different teams remain contradictory. We have reviewed and synthesised this evidence, combining it with new investigations targeted at key areas of uncertainty. This work has led to the inference of previously unrecognised active faults and has clarified the roles of other structures within the framework of plate motions provided by GPS studies. Roughly one third of the relative motion between the Turkish and Arabian plates is accommodated on the Misis-Kyrenia Fault Zone, which links to the study region from the Kyrenia mountain range of northern Cyprus. Much of this motion passes NNE then eastward around the northern limit of the Amanos Mountains, as previously thought, but some of it splays northeastward to link into newly-recognised normal faulting within the Amanos Mountains. The remaining two thirds of the relative motion is accommodated along the Karasu Valley; some of this component steps leftward across the Amik Basin before passing southward onto the northern Dead Sea Fault Zone (DSFZ) but much of it continues southwestward, past the city of Antakya, then into offshore structures, ultimately linking to the subduction zone bounding the Turkish and African plates to the southwest of Cyprus. However, some of this offshore motion continues southward, west of the Syrian coast, before linking onshore into the southern DSFZ; this component of the relative motion is indeed the main reason why the slip rate on the northern DSFZ, measured geodetically, is so much lower than that on its southern counterpart. 
In some parts of this region, notably in the Karasu Valley, it is now clear how the expected relative plate motion has been accommodated on active faults during much of the Quaternary: rather than constant slip rates on individual faults, quite complex changes in the partitioning of this motion on timescales of hundreds of thousands of years are indicated. However, in other parts of the region it remains unclear whether additional major active faults remain unrecognised or whether significant relative motions are accommodated by distributed deformation or on the many smaller-scale structures present.
Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are among the most important devices in power engineering and are widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they operate with no one on duty, so it is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas paths of a gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.
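The uniformity measure can be sketched directly from Shannon's formula applied to normalized thermocouple readings (the temperature values below are illustrative, not the paper's data):

```python
import math

def temperature_entropy(temps):
    """Shannon entropy of the normalized exhaust-temperature distribution.
    Uniform readings across the thermocouple ring give maximum entropy;
    a hot or cold spot (a possible gas-path fault) lowers it."""
    total = sum(temps)
    probs = [t / total for t in temps]
    return -sum(p * math.log(p) for p in probs if p > 0)

healthy = [510.0, 512.0, 509.0, 511.0]  # nearly uniform ring of readings (deg C)
faulty  = [510.0, 512.0, 420.0, 511.0]  # one burner running cold

# A drop in entropy relative to the healthy baseline signals non-uniformity.
assert temperature_entropy(healthy) > temperature_entropy(faulty)
```

The kernelized extension in the paper applies the same information quantity to features mapped into kernel spaces for feature selection.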
Seismic link at plate boundary
NASA Astrophysics Data System (ADS)
Ramdani, Faical; Kettani, Omar; Tadili, Benaissa
2015-06-01
Seismic triggering at plate boundaries is complex, involving seismic events at varying distances; the spatial pattern of triggering cannot be reduced to mainshock-aftershock sequences. Seismic waves propagate at all times in all directions, particularly in highly active zones, and no direct evidence can be obtained regarding which earthquakes trigger which shocks. The first step is therefore to determine the potential linked zones where triggering may occur; the second is to determine the causality between events and the shocks they trigger. The spatial orientation of links between events is established from pre-ordered networks adapted to the spatio-temporal occurrence of earthquakes. Based on a coefficient of synchronous seismic activity computed for pairs of grid cells, we derive a link network at each threshold. Links at high thresholds are tested using time-series coherence to determine causality and its orientation. The resulting link orientations at the plate boundary indicate that causal triggering appears to be localized along a major fault, as a stress transfer between two major faults, and parallel to the extension of the geothermal area.
NASA Astrophysics Data System (ADS)
Gülerce, Zeynep; Buğra Soyman, Kadir; Güner, Barış; Kaymakci, Nuretdin
2017-12-01
This contribution provides an updated planar seismic source characterization (SSC) model to be used in the probabilistic seismic hazard assessment (PSHA) for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ) that are critical for the PSHA of Istanbul: segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, central Marmara, and Ganos/Saros segments. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested based on the observed seismicity that is associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b value, maximum magnitude, slip rate, weights of the rupture scenarios) is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect of earthquakes that are not associated with the defined rupture systems on the hazard, a background zone is introduced and the seismicity rates in the background zone are calculated using smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.
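Magnitude recurrence models of the kind established for each rupture system are commonly of truncated Gutenberg-Richter form; a sketch with hypothetical parameters (not the published model's values):

```python
def gr_rate(m, a, b, m_max):
    """Annual rate of events with magnitude >= m under a simple truncated
    Gutenberg-Richter model: log10 N(>=m) = a - b*m, with no events
    above m_max. A common form; the paper's recurrence models also fold
    in geological and geodetic slip-rate constraints."""
    if m > m_max:
        return 0.0
    return 10 ** (a - b * m) - 10 ** (a - b * m_max)

# Hypothetical parameters for illustration only:
a, b, m_max = 4.0, 1.0, 7.5
assert gr_rate(5.0, a, b, m_max) > gr_rate(6.0, a, b, m_max) > 0
assert gr_rate(8.0, a, b, m_max) == 0.0
```

The logic-tree uncertainty described in the abstract would vary a, b, m_max, and the scenario weights across branches.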
NASA Astrophysics Data System (ADS)
Dygert, Nick; Liang, Yan
2015-06-01
Mantle peridotites from ophiolites are commonly interpreted as having mid-ocean ridge (MOR) or supra-subduction zone (SSZ) affinity. Recently, an REE-in-two-pyroxene thermometer was developed (Liang et al., 2013) that has higher closure temperatures (designated as TREE) than major element based two-pyroxene thermometers for mafic and ultramafic rocks that experienced cooling. The REE-in-two-pyroxene thermometer has the potential to extract meaningful cooling rates from ophiolitic peridotites and thus shed new light on the thermal history of the different tectonic regimes. We calculated TREE for available literature data from abyssal peridotites, subcontinental (SC) peridotites, and ophiolites around the world (Alps, Coast Range, Corsica, New Caledonia, Oman, Othris, Puerto Rico, Russia, and Turkey), and augmented the data with new measurements for peridotites from the Trinity and Josephine ophiolites and the Mariana trench. TREE are compared to major element based thermometers, including the two-pyroxene thermometer of Brey and Köhler (1990) (TBKN). Samples with SC affinity have TREE and TBKN in good agreement. Samples with MOR and SSZ affinity have near-solidus TREE but TBKN hundreds of degrees lower. Closure temperatures for REE and Fe-Mg in pyroxenes were calculated to compare cooling rates among abyssal peridotites, MOR ophiolites, and SSZ ophiolites. Abyssal peridotites appear to cool more rapidly than peridotites from most ophiolites. On average, SSZ ophiolites have lower closure temperatures than abyssal peridotites and many ophiolites with MOR affinity. We propose that these lower temperatures can be attributed to the residence time in the cooling oceanic lithosphere prior to obduction. MOR ophiolites define a continuum spanning cooling rates from SSZ ophiolites to abyssal peridotites. 
Consistently high closure temperatures for abyssal peridotites and the Oman and Corsica ophiolites suggest that hydrothermal circulation and/or rapid cooling events (e.g., normal faulting, unroofing) control the late thermal histories of peridotites from transform faults and from slow and fast spreading centers, with or without a crustal section.
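The closure temperatures and cooling rates discussed above are conventionally related through Dodson's (1973) formulation, which can be solved by fixed-point iteration. A sketch with illustrative, hypothetical diffusion parameters (not those of the REE-in-two-pyroxene calibration):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dodson_tc(E, D0, a, dTdt, A=55.0, t0=1100.0, iters=50):
    """Closure temperature (K) from Dodson's equation, solved by
    fixed-point iteration on
        E/(R*Tc) = ln(A * R * Tc**2 * D0 / (E * dTdt * a**2))
    E    activation energy (J/mol)
    D0   pre-exponential diffusivity (m^2/s)
    a    effective grain radius (m)
    dTdt cooling rate (K/s)
    A    geometry factor (55 for spherical grains)
    """
    Tc = t0
    for _ in range(iters):
        Tc = (E / R) / math.log(A * R * Tc**2 * D0 / (E * dTdt * a**2))
    return Tc

# Hypothetical parameters, for illustration only:
E = 400e3              # J/mol
D0 = 1e-6              # m^2/s
a = 1e-3               # 1 mm grains
fast = 100.0 / 3.15e13 # ~100 K/Myr expressed in K/s
slow = 1.0 / 3.15e13   # ~1 K/Myr
tc_fast = dodson_tc(E, D0, a, fast)
tc_slow = dodson_tc(E, D0, a, slow)
# Faster cooling -> higher closure temperature, the basis for reading
# cooling rates from closure-temperature contrasts.
```

This is why, in the abstract above, lower closure temperatures are read as slower cooling (longer lithospheric residence) for SSZ ophiolites.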
NASA Technical Reports Server (NTRS)
Bennett, Richard A.; Reilinger, Robert E.; Rodi, William; Li, Yingping; Toksoz, M. Nafi; Hudnut, Ken
1995-01-01
Coseismic surface deformation associated with the M_w 6.1, April 23, 1992, Joshua Tree earthquake is well represented by estimates of geodetic monument displacements at 20 locations independently derived from Global Positioning System and trilateration measurements. The rms signal to noise ratio for these inferred displacements is 1.8, with near-fault displacement estimates exceeding 40 mm. In order to determine the long-wavelength distribution of slip over the plane of rupture, a Tikhonov regularization operator is applied to these estimates which minimizes stress variability subject to purely right-lateral slip and zero surface slip constraints. The resulting slip distribution yields a geodetic moment estimate of 1.7 × 10^18 N m with corresponding maximum slip around 0.8 m and compares well with independent and complementary information including seismic moment and source time function estimates and mainshock and aftershock locations. From empirical Green's functions analyses, a rupture duration of 5 s is obtained, which implies a rupture radius of 6-8 km. Most of the inferred slip lies to the north of the hypocenter, consistent with northward rupture propagation. Stress drop estimates are in the range of 2-4 MPa. In addition, predicted Coulomb stress increases correlate remarkably well with the distribution of aftershock hypocenters; most of the aftershocks occur in areas for which the mainshock rupture produced stress increases larger than about 0.1 MPa. In contrast, predicted stress changes are near zero at the hypocenter of the M_w 7.3, June 28, 1992, Landers earthquake, which nucleated about 20 km beyond the northernmost edge of the Joshua Tree rupture. Based on aftershock migrations and the predicted static stress field, we speculate that redistribution of Joshua Tree-induced stress perturbations played a role in the spatio-temporal development of the earthquake sequence culminating in the Landers event.
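The regularized slip inversion described above is, at its core, damped least squares: minimize the data misfit plus a weighted penalty (here a first-difference smoothing operator stands in for the stress-variability operator). A toy sketch with synthetic matrices in place of the real Green's functions and geodetic data:

```python
import numpy as np

def tikhonov_invert(G, d, L, lam):
    """Solve min ||G m - d||^2 + lam^2 ||L m||^2 for the slip vector m
    by stacking the regularization into an augmented least-squares system."""
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# Toy problem with synthetic "Green's functions" (not the Joshua Tree data):
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 10))            # 20 displacement data, 10 slip patches
m_true = np.linspace(0.0, 0.8, 10)       # slip ramping up to ~0.8 m
d = G @ m_true + 0.01 * rng.normal(size=20)
L = (np.eye(10) - np.eye(10, k=1))[:-1]  # first-difference roughening operator
m_est = tikhonov_invert(G, d, L, 0.1)
```

The weight lam trades data fit against model roughness; positivity or rake constraints, as used in the study, would require a constrained solver instead of plain least squares.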
Dipping San Andreas and Hayward faults revealed beneath San Francisco Bay, California
Parsons, T.; Hart, P.E.
1999-01-01
The San Francisco Bay area is crossed by several right-lateral strike-slip faults of the San Andreas fault zone. Fault-plane reflections reveal that two of these faults, the San Andreas and Hayward, dip toward each other below seismogenic depths at 60° and 70°, respectively, and persist to the base of the crust. Previously, a horizontal detachment linking the two faults in the lower crust beneath San Francisco Bay was proposed. The only near-vertical-incidence reflection data available prior to the most recent experiment in 1997 were recorded parallel to the major fault structures. When the new reflection data recorded orthogonal to the faults are compared with the older data, the highest-amplitude reflections show clear variations in moveout with recording azimuth. In addition, reflection times consistently increase with distance from the faults. If the reflectors were horizontal, reflection moveout would be independent of azimuth, and reflection times would be independent of distance from the faults. The best-fit solution from three-dimensional traveltime modeling is a pair of high-angle dipping surfaces. The close correspondence of these dipping structures with the San Andreas and Hayward faults leads us to conclude that they are the faults beneath seismogenic depths. If the faults retain their observed dips, they would converge into a single zone in the upper mantle ~45 km beneath the surface, although we can only observe them in the crust.
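The azimuth-dependence argument can be reproduced with the image-source construction: reflect the source across a candidate plane and measure the straight-line distance to each receiver. A sketch with illustrative geometry; the velocity, depth, and dip below are placeholders, not the study's model:

```python
import numpy as np

def reflection_time(src, rec, n, p0, v):
    """Traveltime of a reflection from the plane {x : n.(x - p0) = 0},
    via the image-source method: reflect the source across the plane and
    take the straight-line distance from the image to the receiver."""
    n = n / np.linalg.norm(n)
    img = src - 2.0 * np.dot(src - p0, n) * n
    return np.linalg.norm(rec - img) / v

v = 6.0                                  # km/s, illustrative crustal velocity
p0 = np.array([0.0, 0.0, -10.0])         # a point on the reflector, 10 km deep
flat = np.array([0.0, 0.0, 1.0])         # horizontal reflector normal
dip60 = np.array([np.sin(np.radians(60.0)), 0.0, np.cos(np.radians(60.0))])

src = np.array([0.0, 0.0, 0.0])
rec_dip = np.array([5.0, 0.0, 0.0])      # receiver along the dip azimuth
rec_strike = np.array([0.0, 5.0, 0.0])   # receiver along strike

t_flat_dip = reflection_time(src, rec_dip, flat, p0, v)
t_flat_strike = reflection_time(src, rec_strike, flat, p0, v)
t_dip_dip = reflection_time(src, rec_dip, dip60, p0, v)
t_dip_strike = reflection_time(src, rec_strike, dip60, p0, v)
# Horizontal reflector: identical times at both azimuths.
# Dipping reflector: the two azimuths give clearly different times.
```

This is exactly the diagnostic in the abstract: azimuth-invariant moveout rules dipping reflectors out, azimuth-dependent moveout rules them in.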
NASA Astrophysics Data System (ADS)
Molli, G.; Cortecci, G.; Vaselli, L.; Ottria, G.; Cortopassi, A.; Dinelli, E.; Mussi, M.; Barbieri, M.
2010-09-01
We studied the geometry, intensity of deformation and fluid-rock interaction of a high angle normal fault within Carrara marble in the Alpi Apuane, NW Tuscany, Italy. The fault consists of a core bounded by two major, non-parallel slip surfaces. The fault core, marked by crush breccia and cataclasites, grades asymmetrically to the host protolith through a damage zone, which is well developed only in the footwall block. In contrast, the transition from the fault core to the hangingwall protolith is sharply defined by the upper main slip surface. Faulting was associated with fluid-rock interaction, as evidenced by kinematically related veins observable in the damage zone and fluid channelling within the fault core, where an orange-brownish cataclasite matrix can be observed. A chemical and isotopic study of veins and different structural elements of the fault zone (protolith, damage zone and fault core), including a mathematical model, was performed to document the type, role, and activity of fluid-rock interactions during deformation. The results of our studies suggest that the deformation pattern was mainly controlled by processes associated with a linking damage zone at a fault tip, development of a fault core, and localization and channelling of fluids within the fault zone. Syn-kinematic microstructural modification of the calcite fabric possibly played a role in confining fluid percolation.
Active faults in Africa: a review
NASA Astrophysics Data System (ADS)
Skobelev, S. F.; Hanon, M.; Klerkx, J.; Govorova, N. N.; Lukina, N. V.; Kazmin, V. G.
2004-03-01
The active fault database and Map of active faults in Africa, at a scale of 1:5,000,000, were compiled according to the ILP Project II-2 "World Map of Major Active Faults". The data were collected in the Royal Museum of Central Africa, Tervuren, Belgium, and in the Geological Institute, Moscow, where the final edition was carried out. Active faults of Africa form three groups. The first group is represented by thrusts and reverse faults associated with compressed folds in northwest Africa. They belong to the western part of the Alpine-Central Asian collision belt. The faults disturb only the Earth's crust, and some of them do not penetrate deeper than the sedimentary cover. The second group comprises the faults of the Great African rift system. The faults form the known Western and Eastern branches, which are rifts with anomalous mantle below. The deep-seated mantle "hot" anomaly probably relates to the eastern volcanic branch. In the north, it joins with the Aden-Red Sea rift zone. Active faults in Egypt, Libya and Tunisia may represent a link between the East African rift system and the Pantellerian rift zone in the Mediterranean. The third group includes rare faults in the west of Equatorial Africa. Data for this group are scarce, so most of its faults were identified solely from interpretation of satellite imagery and seismicity. Some longer faults of the group may continue the transverse faults of the Atlantic and thus can penetrate into the mantle. This seems evident for the Cameroon fault line.
How geometrical constraints contribute to the weakness of mature faults
Lockner, D.A.; Byerlee, J.D.
1993-01-01
Increasing evidence that the San Andreas fault has low shear strength [1] has fuelled considerable discussion regarding the role of fluid pressure in controlling fault strength. Byerlee [2,3] and Rice [4] have shown how fluid pressure gradients within a fault zone can produce a fault with low strength while avoiding hydraulic fracture of the surrounding rock due to excessive fluid pressure. It may not be widely realised, however, that the same analysis [2-4] shows that even in the absence of fluids, the presence of a relatively soft 'gouge' layer surrounded by harder country rock can also reduce the effective shear strength of the fault. As shown most recently by Byerlee and Savage [5], as the shear stress across a fault increases, the stress state within the fault zone evolves to a limiting condition in which the maximum shear stress within the fault zone is parallel to the fault, which then slips with a lower apparent coefficient of friction than the same material unconstrained by the fault. Here we confirm the importance of fault geometry in determining the apparent weakness of fault zones, by showing that the apparent friction on a sawcut granite surface can be predicted from the friction measured in intact rock, given only the geometrical constraints introduced by the fault surfaces. This link between the sliding friction of faults and the internal friction of intact rock suggests a new approach to understanding the microphysical processes that underlie friction in brittle materials.
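One way to quantify the geometric effect described above, on my reading of the cited analysis (an assumption, not a quote of it): treat the gouge as a cohesionless Mohr-Coulomb material at the limiting state where the maximum shear stress is parallel to the fault. The fault plane then lies at 45° to the principal axes, so the shear/normal stress ratio on it is sin(phi) rather than tan(phi), i.e. the apparent friction is mu/sqrt(1 + mu^2):

```python
import math

def apparent_friction(mu_intact):
    """Apparent friction of a fault whose gouge deforms at the limiting
    state where the maximum shear stress is parallel to the fault.
    For cohesionless Mohr-Coulomb gouge with internal friction mu_intact:
      yield:  tau_max = sigma_mean * sin(phi),  phi = arctan(mu_intact)
      fault at 45 deg to principal axes -> tau = tau_max, sigma_n = sigma_mean
      hence tau / sigma_n = sin(phi) = mu / sqrt(1 + mu**2)."""
    phi = math.atan(mu_intact)
    return math.sin(phi)

# A Byerlee-type intact friction of ~0.85 maps to a noticeably lower
# apparent fault friction (~0.65):
mu_app = apparent_friction(0.85)
```

The reduction is modest for laboratory values of friction, so geometry alone cannot produce a very weak fault; the abstract's point is that it contributes on top of any fluid-pressure effect.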
NASA Astrophysics Data System (ADS)
Kibey, Sandeep A.
We present a hierarchical approach that spans multiple length scales to describe defect formation---in particular, formation of stacking faults (SFs) and deformation twins---in fcc crystals. We link the energy pathways (calculated here via ab initio density functional theory, DFT) associated with formation of stacking faults and twins to corresponding heterogeneous defect nucleation models (described through mesoscale dislocation mechanics). Through the generalized Peierls-Nabarro model, we first correlate the width of intrinsic SFs in fcc alloy systems to their nucleation pathways called generalized stacking fault energies (GSFE). We then establish a qualitative dependence of twinning tendency in fcc metals and alloys---specifically, in pure Cu and dilute Cu-xAl (x = 5.0 and 8.3 at.%)---on their twin-energy pathways called the generalized planar fault energies (GPFE). We also link the twinning behavior of Cu-Al alloys to their electronic structure by determining the effect of solute Al on the valence charge density redistribution at the SF through ab initio DFT. Further, while several efforts have been undertaken to incorporate twinning for predicting stress-strain response of fcc materials, a fundamental law for critical twinning stress has not yet emerged. We resolve this long-standing issue by linking quantitatively the twin-energy pathways (GPFE) obtained via ab initio DFT to heterogeneous, dislocation-based twin nucleation models. We establish an analytical expression that quantitatively predicts the critical twinning stress in fcc metals in agreement with experiments without requiring any empiricism at any length scale. Our theory connects twinning stress to twin-energy pathways and predicts a monotonic relation between stress and unstable twin stacking fault energy, revealing the physics of twinning. We further demonstrate that the theory holds for fcc alloys as well. 
Our theory inherently accounts for the directional nature of twinning, which available qualitative models do not necessarily capture. Finally, we extend the present work to martensitic transformations and determine the energy pathway for the B2→B19 transformation in NiTi. Based on our ab initio DFT calculations, we propose a combined distortion-shuffle pathway for the B2→B19 transformation in NiTi. Our results indicate that in NiTi, a barrier of 0.48 mRyd/atom (relative to the B2 phase) must be overcome to transform the parent B2 into the orthorhombic B19 phase.
Complex Plate Tectonic Features on Planetary Bodies: Analogs from Earth
NASA Astrophysics Data System (ADS)
Stock, J. M.; Smrekar, S. E.
2016-12-01
We review the types and scales of observations needed on other rocky planetary bodies (e.g., Mars, Venus, exoplanets) to evaluate evidence of present or past plate motions. Earth's plate boundaries were initially simplified into three basic types (ridges, trenches, and transform faults). Previous studies examined the Moon, Mars, Venus, Mercury and icy moons such as Europa for evidence of features including linear rifts, arcuate convergent zones, strike-slip faults, and distributed deformation (rifting or folding). Yet several aspects merit further consideration. 1) Is the feature active or fossil? Earth's active mid ocean ridges are bathymetric highs, and seafloor depth increases on either side; whereas fossil mid ocean ridges may be as deep as the surrounding abyssal plain with no major rift valley, although with a minor gravity low (e.g., Osbourn Trough, W. Pacific Ocean). Fossil trenches have less topographic relief than active trenches (e.g., the fossil trench along the Patton Escarpment, west of California). 2) On Earth, fault patterns of spreading centers depend on volcanism. Excess volcanism reduces faulting. Fault visibility increases as spreading rates slow, or as magmatism decreases, producing high-angle normal faults parallel to the spreading center. At magma-poor spreading centers, high resolution bathymetry shows low angle detachment faults with large scale mullions and striations parallel to plate motion (e.g., Mid Atlantic Ridge, Southwest Indian Ridge). 3) Sedimentation on Earth masks features that might be visible on a non-erosional planet. Subduction zones on Earth in areas of low sedimentation have clear trench-parallel faults causing flexural deformation of the downgoing plate; in highly sedimented subduction zones, no such faults can be seen, and there may be no bathymetric trench at all. 
4) Areas of Earth with broad upwelling, such as the North Fiji Basin, have complex plate tectonic patterns with many individual but poorly linked ridge segments and transform faults. These details and scales of features should be considered in planning future surveys of altimetry, reflectance, magnetics, compositional, and gravity data from other planetary bodies aimed at understanding the link between a planet's surface and interior, whether via plate tectonics or other processes.
Safety Study of TCAS II for Logic Version 6.04
1992-07-01
used in the fault tree of the 1983 study. The values given for Logic and Altimetry effects represent the site averages, and are based upon TCAS RAs always being...comparison with the results of Monte Carlo simulations. Five million iterations were carried out for each of the four cases (eqs. 3, 4, 6 and 7
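The snippet above describes checking fault-tree calculations against Monte Carlo simulation. A generic sketch of that cross-check on a toy tree (hypothetical events and probabilities, not the TCAS fault tree):

```python
import random

def top_event(a, b, c):
    """Toy fault tree (not the TCAS tree): TOP = A OR (B AND C)."""
    return a or (b and c)

def exact_prob(pa, pb, pc):
    """Exact top-event probability for independent basic events,
    by inclusion-exclusion on the two minimal cut sets {A}, {B,C}."""
    return pa + pb * pc - pa * pb * pc

def monte_carlo_prob(pa, pb, pc, n, seed=1):
    """Estimate the top-event probability by sampling basic events."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        if top_event(rng.random() < pa, rng.random() < pb, rng.random() < pc):
            hits += 1
    return hits / n

pa, pb, pc = 0.01, 0.05, 0.2
ref = exact_prob(pa, pb, pc)            # 0.0199 exactly
est = monte_carlo_prob(pa, pb, pc, 200_000)
```

With rare events, many iterations are needed for a stable estimate, which is presumably why the cited study ran five million iterations per case.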
Code of Federal Regulations, 2010 CFR
2010-10-01
..., national, or international standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure... cited by the reviewer; (4) Identification of any documentation or information sought by the reviewer...) Identification of the hardware and software verification and validation procedures for the PTC system's safety...
The Two-By-Two Array: An Aid in Conceptualization and Problem Solving
ERIC Educational Resources Information Center
Eberhart, James
2004-01-01
The fields of mathematics, science, and engineering are replete with diagrams of many varieties. They range in nature from the Venn diagrams of symbolic logic to the Periodic Chart of the Elements; and from the fault trees of risk assessment to the flow charts used to describe laboratory procedures, industrial processes, and computer programs. All…
Powell, Robert E.
2001-01-01
This data set maps and describes the geology of the Porcupine Wash 7.5 minute quadrangle, Riverside County, southern California. The quadrangle, situated in Joshua Tree National Park in the eastern Transverse Ranges physiographic and structural province, encompasses parts of the Hexie Mountains, Cottonwood Mountains, northern Eagle Mountains, and south flank of Pinto Basin. It is underlain by a basement terrane comprising Proterozoic metamorphic rocks, Mesozoic plutonic rocks, and Mesozoic and Mesozoic or Cenozoic hypabyssal dikes. The basement terrane is capped by a widespread Tertiary erosion surface preserved in remnants in the Eagle and Cottonwood Mountains and buried beneath Cenozoic deposits in Pinto Basin. Locally, Miocene basalt overlies the erosion surface. A sequence of at least three Quaternary pediments is planed into the north piedmont of the Eagle and Hexie Mountains, each in turn overlain by successively younger residual and alluvial deposits. The Tertiary erosion surface is deformed and broken by north-northwest-trending, high-angle, dip-slip faults and an east-west trending system of high-angle dip- and left-slip faults. East-west trending faults are younger than and perhaps in part coeval with faults of the northwest-trending set. The Porcupine Wash database was created using ARCVIEW and ARC/INFO, which are geographical information system (GIS) software products of Environmental Systems Research Institute (ESRI). The database consists of the following items: (1) a map coverage showing faults and geologic contacts and units, (2) a separate coverage showing dikes, (3) a coverage showing structural data, (4) a scanned topographic base at a scale of 1:24,000, and (5) attribute tables for geologic units (polygons and regions), contacts (arcs), and site-specific data (points). 
The database, accompanied by a pamphlet file and this metadata file, also includes the following graphic and text products: (1) A portable document file (.pdf) containing a navigable graphic of the geologic map on a 1:24,000 topographic base. The map is accompanied by a marginal explanation consisting of a Description of Map and Database Units (DMU), a Correlation of Map and Database Units (CMU), and a key to point- and line-symbols. (2) Separate .pdf files of the DMU and CMU, individually. (3) A PostScript graphic-file containing the geologic map on a 1:24,000 topographic base accompanied by the marginal explanation. (4) A pamphlet that describes the database and how to access it. Within the database, geologic contacts, faults, and dikes are represented as lines (arcs), geologic units as polygons and regions, and site-specific data as points. Polygon, arc, and point attribute tables (.pat, .aat, and .pat, respectively) uniquely identify each geologic datum and link it to other tables (.rel) that provide more detailed geologic information.
NASA Astrophysics Data System (ADS)
Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen
2014-05-01
We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity trying to explain the small scales, our method uses a bottom-up approach, by an initial sampling of the small scales and then reducing the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical cluster (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Modeling (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. The kernels that become ill defined (less than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging of any pair will lessen the likelihood (calculated over the pdf of the kernels) but in turn will reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, where the matrix element d_ij gives the distance between points i and j, we can construct an MDL gain/loss matrix where m_ij gives the information gain/loss resulting from the merging of kernels i and j. 
Based on this matrix, merging events resulting in MDL gain are performed in descending order until no gainful merging is possible anymore. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system and hopefully use the acquired insights for earthquake forecasting.
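A rough sketch of the bottom-up pipeline described above, using scikit-learn on synthetic data. This is a simplified stand-in: BIC takes the place of the MDL bookkeeping, and the greedy pairwise-merging step is reduced to model selection over component counts; the data are invented fault-like point clouds, not a seismicity catalog:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "seismicity": two linear, fault-like clouds of epicenters.
f1 = np.c_[np.linspace(0, 10, 200), np.linspace(0, 2, 200)]
f2 = np.c_[np.linspace(4, 8, 150), np.linspace(6, 1, 150)]
X = np.vstack([f1, f2]) + 0.1 * rng.normal(size=(350, 2))

# 1) Ward-linkage hierarchical tree, cut at a deliberately fine level
#    (the "atomization" into many small protoclusters).
fine_labels = AgglomerativeClustering(n_clusters=12, linkage="ward").fit_predict(X)

# 2) Initialize a 12-kernel GMM from that fine partition and let EM converge.
means = np.array([X[fine_labels == k].mean(axis=0) for k in range(12)])
gmm12 = GaussianMixture(n_components=12, means_init=means, random_state=0).fit(X)

# 3) Trade likelihood against complexity with an information criterion
#    (BIC here, standing in for the MDL gain/loss matrix of the paper).
scores = {n: GaussianMixture(n_components=n, n_init=3, random_state=0).fit(X).bic(X)
          for n in range(1, 13)}
best_n = min(scores, key=scores.get)
```

The paper's method differs in important ways (it merges specific kernel pairs by MDL gain and propagates event location uncertainties), but the likelihood-versus-complexity trade-off is the same.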
NASA Astrophysics Data System (ADS)
Shi, J. T.; Han, X. T.; Xie, J. F.; Yao, L.; Huang, L. T.; Li, L.
2013-03-01
A Pulsed High Magnetic Field Facility (PHMFF) has been established at the Wuhan National High Magnetic Field Center (WHMFC), and various protection measures are applied in its control system. To improve the reliability and robustness of the control system, a safety analysis of the PHMFF was carried out based on the Fault Tree Analysis (FTA) technique. The functions and realization of five protection systems are described: the sequence experiment operation system, the safety assistant system, the emergency stop system, the fault detection and processing system, and the accident isolation protection system. Tests and operation indicate that these measures improve the safety of the facility and ensure the safety of personnel.
NASA Astrophysics Data System (ADS)
Fagereng, A.; Hodge, M.; Biggs, J.; Mdala, H. S.; Goda, K.
2016-12-01
Faults grow through the interaction and linkage of isolated fault segments. Continuous fault systems are those where segments interact, link and may slip synchronously, whereas non-continuous fault systems comprise isolated faults. As seismic moment is related to fault length (Wells and Coppersmith, 1994), understanding whether a fault system is continuous or not is critical in evaluating seismic hazard. Maturity may be a control on fault continuity: immature, low displacement faults are typically assumed to be non-continuous. Here, we study two overlapping, 20 km long, normal fault segments of the N-S striking Bilila-Mtakataka fault, Malawi, in the southern section of the East African Rift System. Despite its relative immaturity, previous studies concluded the Bilila-Mtakataka fault is continuous for its entire 100 km length, with the most recent event equating to an Mw 8.0 earthquake (Jackson and Blenkinsop, 1997). We explore whether segment geometry and the relationship to pre-existing high-grade metamorphic foliation have influenced segment interaction and fault development. Fault geometry and scarp height are constrained by DEMs derived from SRTM, Pleiades and 'Structure from Motion' photogrammetry using a UAV, alongside direct field observations. The segment strikes differ on average by 10°, but by up to 55° at their adjacent tips. The southern segment is sub-parallel to the foliation, whereas the northern segment is highly oblique to the foliation. Geometrical surface discontinuities suggest two isolated faults; however, displacement-length profiles and Coulomb stress change models suggest segment interaction, with potential for linkage at depth. Further work must be undertaken on other segments to assess the continuity of the entire fault, and to conclude whether an earthquake larger than the maximum instrumentally recorded event (1910 M 7.4 Rukwa) is possible.
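The cited Wells and Coppersmith (1994) scaling can be applied directly to the segment lengths discussed above. The coefficients below are the commonly quoted all-slip-type surface-rupture-length regression, reproduced from memory; verify against the paper before any real hazard use:

```python
import math

def wc94_magnitude(srl_km, a=5.08, b=1.16):
    """Moment magnitude from surface rupture length (km) using the
    Wells & Coppersmith (1994) all-slip-type regression
        M = a + b * log10(SRL).
    Coefficients quoted from memory; check the paper before real use."""
    return a + b * math.log10(srl_km)

m_segment = wc94_magnitude(20.0)   # a single 20 km segment: M ~6.6
m_linked = wc94_magnitude(100.0)   # the fully linked 100 km fault: M ~7.4
```

This makes the hazard stakes of the continuity question concrete: whether the segments rupture alone or as one linked fault changes the expected magnitude by most of a unit.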
Dynamic 3D simulations of earthquakes on en echelon faults
Harris, R.A.; Day, S.M.
1999-01-01
One of the mysteries of earthquake mechanics is why earthquakes stop. This process determines the difference between small and devastating ruptures. One possibility is that fault geometry controls earthquake size. We test this hypothesis using a numerical algorithm that simulates spontaneous rupture propagation in a three-dimensional medium and apply our knowledge to two California fault zones. We find that the size difference between the 1934 and 1966 Parkfield, California, earthquakes may be the product of a stepover at the southern end of the 1934 earthquake and show how the 1992 Landers, California, earthquake followed physically reasonable expectations when it jumped across en echelon faults to become a large event. If there are no linking structures, such as transfer faults, then strike-slip earthquakes are unlikely to propagate through stepovers >5 km wide. Copyright 1999 by the American Geophysical Union.
Fault Tolerance for VLSI Multicomputers
1985-08-01
that consists of hundreds or thousands of VLSI computation nodes interconnected by dedicated links. Some important applications of high-end computers...technology, and intended applications. A proposed fault tolerance scheme combines hardware that performs error detection and system-level protocols for...order to recover from the error and resume correct operation, a valid system state must be restored. A low-overhead, application-transparent error
Powell, Robert E.
2001-01-01
This data set maps and describes the geology of the Conejo Well 7.5 minute quadrangle, Riverside County, southern California. The quadrangle, situated in Joshua Tree National Park in the eastern Transverse Ranges physiographic and structural province, encompasses part of the northern Eagle Mountains and part of the south flank of Pinto Basin. It is underlain by a basement terrane comprising Proterozoic metamorphic rocks, Mesozoic plutonic rocks, and Mesozoic and Mesozoic or Cenozoic hypabyssal dikes. The basement terrane is capped by a widespread Tertiary erosion surface preserved in remnants in the Eagle Mountains and buried beneath Cenozoic deposits in Pinto Basin. Locally, Miocene basalt overlies the erosion surface. A sequence of at least three Quaternary pediments is planed into the north piedmont of the Eagle Mountains, each in turn overlain by successively younger residual and alluvial deposits. The Tertiary erosion surface is deformed and broken by north-northwest-trending, high-angle, dip-slip faults in the Eagle Mountains and an east-west trending system of high-angle dip- and left-slip faults. In and adjacent to the Conejo Well quadrangle, faults of the northwest-trending set displace Miocene sedimentary rocks and basalt deposited on the Tertiary erosion surface and Pliocene and (or) Pleistocene deposits that accumulated on the oldest pediment. Faults of this system appear to be overlain by Pleistocene deposits that accumulated on younger pediments. East-west trending faults are younger than and perhaps in part coeval with faults of the northwest-trending set. The Conejo Well database was created using ARCVIEW and ARC/INFO, which are geographical information system (GIS) software products of Environmental Systems Research Institute (ESRI). 
The database consists of the following items: (1) a map coverage showing faults and geologic contacts and units, (2) a separate coverage showing dikes, (3) a coverage showing structural data, (4) a point coverage containing line ornamentation, and (5) a scanned topographic base at a scale of 1:24,000. The coverages include attribute tables for geologic units (polygons and regions), contacts (arcs), and site-specific data (points). The database, accompanied by a pamphlet file and this metadata file, also includes the following graphic and text products: (1) A portable document file (.pdf) containing a navigable graphic of the geologic map on a 1:24,000 topographic base. The map is accompanied by a marginal explanation consisting of a Description of Map and Database Units (DMU), a Correlation of Map and Database Units (CMU), and a key to point- and line-symbols. (2) Separate .pdf files of the DMU and CMU, individually. (3) A PostScript graphic-file containing the geologic map on a 1:24,000 topographic base accompanied by the marginal explanation. (4) A pamphlet that describes the database and how to access it. Within the database, geologic contacts, faults, and dikes are represented as lines (arcs), geologic units as polygons and regions, and site-specific data as points. Polygon, arc, and point attribute tables (.pat, .aat, and .pat, respectively) uniquely identify each geologic datum and link it to other tables (.rel) that provide more detailed geologic information.
NASA Astrophysics Data System (ADS)
Delle Piane, Claudio; Clennell, M. Ben; Keller, Joao V. A.; Giwelli, Ausama; Luzin, Vladimir
2017-10-01
The structure, frictional properties and permeability of faults within carbonate rocks exhibit a dynamic interplay that controls both seismicity and the exchange of fluid between different crustal levels. Here we review field and experimental studies focused on the characterization of fault zones in carbonate rocks with the aim of identifying the microstructural indicators of rupture nucleation and seismic slip. We highlight results from experimental research linked to observations on exhumed fault zones in carbonate rocks. From the analysis of these accumulated results we identify the meso- and microstructural deformation styles in carbonate rocks and link them to the lithology of the protolith and to their potential as seismic indicators. Although there has been significant success in the laboratory reproduction of deformation structures observed in the field, the range of slip rates and dynamic friction under which most of the potential seismic indicators are formed in the laboratory urges caution when using them as a diagnostic for seismic slip. We finally outline what we think are key topics for future research that would lead to a more in-depth understanding of the record of seismic slip in carbonate rocks.
Nelson, Alan R.; Personius, Stephen F.; Sherrod, Brian L.; Buck, Jason; Bradley, Lee-Ann; Henley, Gary; Liberty, Lee M.; Kelsey, Harvey M.; Witter, Robert C.; Koehler, R.D.; Schermer, Elizabeth R.; Nemser, Eliza S.; Cladouhos, Trenton T.
2008-01-01
As part of the effort to assess seismic hazard in the Puget Sound region, we map fault scarps on Airborne Laser Swath Mapping (ALSM, an application of LiDAR) imagery (with 2.5-m elevation contours on 1:4,000-scale maps) and show field and laboratory data from backhoe trenches across the scarps that are being used to develop a latest Pleistocene and Holocene history of large earthquakes on the Tacoma fault. We supplement previous Tacoma fault paleoseismic studies with data from five trenches on the hanging wall of the fault. In a new trench across the Catfish Lake scarp, broad folding of more tightly folded glacial sediment does not predate 4.3 ka because detrital charcoal of this age was found in stream-channel sand in the trench beneath the crest of the scarp. A post-4.3-ka age for scarp folding is consistent with previously identified uplift across the fault during AD 770-1160. In the trench across the younger of the two Stansberry Lake scarps, six maximum 14C ages on detrital charcoal in pre-faulting B and C soil horizons and three minimum ages on a tree root in post-faulting colluvium limit a single oblique-slip (right-lateral) surface faulting event to AD 410-990. Stratigraphy and sedimentary structures in the trench across the older scarp at the same site show eroded glacial sediments, probably cut by a meltwater channel, with no evidence of post-glacial deformation. At the northeast end of the Sunset Beach scarps, charcoal ages in two trenches across graben-forming scarps give a close maximum age of 1.3 ka for graben formation. The ages that best limit the time of faulting and folding in each of the trenches are consistent with the time of the large regional earthquake in southern Puget Sound about AD 900-930.
Monotone Boolean approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
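The best monotone bounds mentioned above admit a simple closed-form construction over the truth table: the smallest monotone-increasing upper bound is g(x) = OR of f(y) over all y <= x (componentwise), and the largest monotone-increasing lower bound is h(x) = AND of f(y) over all y >= x. A brute-force sketch of that construction (exponential in the number of variables, so only for small fault trees; the report's algorithms work on formulas instead):

```python
from itertools import product

def monotone_bounds(f, n):
    """Best monotone-increasing bounds of a Boolean function f on n variables.
    Upper bound: g(x) = OR of f(y) over all y <= x (componentwise).
    Lower bound: h(x) = AND of f(y) over all y >= x.
    Returns (h, g) as dicts keyed by 0/1 input tuples."""
    pts = list(product([0, 1], repeat=n))
    leq = lambda y, x: all(yi <= xi for yi, xi in zip(y, x))
    g = {x: any(f(*y) for y in pts if leq(y, x)) for x in pts}
    h = {x: all(f(*y) for y in pts if leq(x, y)) for x in pts}
    return h, g

# Noncoherent example: f = a XOR b, which is not monotone.
f = lambda a, b: bool(a) != bool(b)
h, g = monotone_bounds(f, 2)
```

For XOR, g comes out as a OR b and h collapses to the constant 0, bracketing the noncoherent function between two coherent (monotone) ones.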
Rockwell, Thomas K.; Lindvall, Scott; Dawson, Tim; Langridge, Rob; Lettis, William; Klinger, Yann
2002-01-01
Surveys of multiple tree lines within groves of poplar trees, planted in straight lines across the fault prior to the earthquake, show surprisingly large lateral variations. In one grove, slip increases by nearly 1.8 m, or 35% of the maximum measured value, over a lateral distance of nearly 100 m. This and other observations along the 1999 ruptures suggest that the lateral variability of slip observed from displaced geomorphic features in many earthquakes of the past may represent a combination of (1) actual differences in slip at the surface and (2) the difficulty in recognizing distributed nonbrittle deformation.
Grauch, V.J.S.; Bauer, Paul W.; Drenth, Benjamin J.; Kelson, Keith I.
2017-01-01
We present a detailed example of how a subbasin develops adjacent to a transfer zone in the Rio Grande rift. The Embudo transfer zone in the Rio Grande rift is considered one of the classic examples and has been used as the inspiration for several theoretical models. Despite this attention, the history of its development into a major rift structure is poorly known along its northern extent near Taos, New Mexico. Geologic evidence for all but its young rift history is concealed under Quaternary cover. We focus on understanding the pre-Quaternary evidence that is in the subsurface by integrating diverse pieces of geologic and geophysical information. As a result, we present a substantively new understanding of the tectonic configuration and evolution of the northern extent of the Embudo fault and its adjacent subbasin. We integrate geophysical, borehole, and geologic information to interpret the subsurface configuration of the rift margins formed by the Embudo and Sangre de Cristo faults and the geometry of the subbasin within the Taos embayment.
Key features interpreted include (1) an imperfect D-shaped subbasin that slopes to the east and southeast, with the deepest point ∼2 km below the valley floor located northwest of Taos at ∼36° 26′N latitude and 105° 37′W longitude; (2) a concealed Embudo fault system that extends as much as 7 km wider than is mapped at the surface, wherein fault strands disrupt or truncate flows of Pliocene Servilleta Basalt and step down into the subbasin with a minimum of 1.8 km of vertical displacement; and (3) a similar, wider than expected (5–7 km) zone of stepped, west-down normal faults associated with the Sangre de Cristo range front fault.From the geophysical interpretations and subsurface models, we infer relations between faulting and flows of Pliocene Servilleta Basalt and older, buried basaltic rocks that, combined with geologic mapping, suggest a revised rift history involving shifts in the locus of fault activity as the Taos subbasin developed. We speculate that faults related to north-striking grabens at the end of Laramide time formed the first west-down master faults. The Embudo fault may have initiated in early Miocene southwest of the Taos region. Normal-oblique slip on these early fault strands likely transitioned in space and time to dominantly left-lateral slip as the Embudo fault propagated to the northeast. During and shortly after eruption of Servilleta Basalt, proto-Embudo fault strands were active along and parallel to the modern, NE-aligned Rio Pueblo de Taos, ∼4–7 km basinward of the modern, mapped Embudo fault zone. Faults along the northeastern subbasin margin had northwest strikes for most of the period of subbasin formation and were located ∼5–7 km basinward of the modern Sangre de Cristo fault. The locus of fault activity shifted to more northerly striking faults within 2 km of the modern range front sometime after Servilleta volcanism had ceased. 
The northerly faults may have linked with the northeasterly proto-Embudo faults at this time, concurrent with the development of N-striking Los Cordovas normal faults within the interior of the subbasin. By middle Pleistocene(?) time, the Los Cordovas faults had become inactive, and the linked Embudo–Sangre de Cristo fault system migrated to the south, to the modern range front.
NASA Astrophysics Data System (ADS)
Lamarche, Geoffroy; Lebrun, Jean-Frédéric
2000-01-01
South of New Zealand the Pacific-Australia (PAC-AUS) plate boundary runs along the intracontinental Alpine Fault, the Puysegur subduction front and the intraoceanic Puysegur Fault. The Puysegur Fault is located along Puysegur Ridge, which terminates at ca. 47°S against the continental Puysegur Bank in a complex zone of deformation called the Snares Zone. At Puysegur Trench, the Australian Plate subducts beneath Puysegur Bank and the Fiordland Massif. East of Fiordland and Puysegur Bank, the Moonlight Fault System (MFS) represents the Eocene strike-slip plate boundary. Interpretation of seafloor morphology and seismic reflection profiles acquired over Puysegur Bank and the Snares Zone allows us to study the transition from intraoceanic strike-slip faulting along the Puysegur Ridge to oblique subduction at the Puysegur Trench, and to better understand the genetic link between the Puysegur Fault and the MFS. Seafloor morphology is interpreted from a bathymetric dataset compiled from swath bathymetry data acquired during the 1993 Geodynz survey, and single beam echo soundings acquired by the NZ Royal Navy. The Snares Zone is the key transition zone from strike-slip faulting to subduction. It divides into three sectors: the East, NW and SW sectors. A conspicuous 3600 m-deep trough (the Snares Trough) separates the NW and East sectors. The East sector is characterised by the NE termination of Puysegur Ridge into right-stepping en echelon ridges that accommodate a change of strike from the Puysegur Fault to the MFS. Between 48°S and 47°S, in the NW sector and the Snares Trough, a series of transpressional faults splay northwards from the Puysegur Fault. Between 49°50'S and 48°S, thrusts develop progressively at Puysegur Trench into a decollement. North of 48°S the Snares Trough develops between two splays of the Puysegur Fault, indicating superficial extension associated with the subsidence of Puysegur Ridge. 
Seismic reflection profiles and bathymetric maps show a series of transpressional faults that splay northwards across the Snares Fault, and terminate at the top of the Puysegur trench slope. Between ca. 48°S and 46°30'S, the relative plate motion appears to be distributed over the Puysegur subduction zone and the strike-slip faults located on the edge of the upper plate. Conversely, north of ca. 46°S, a lack of active strike-slip faulting along the MFS and across most of Puysegur Bank indicates that the subduction in the northern part of Puysegur Trench accounts for most of the oblique convergence. Hence, active transpression in the Snares fault zone indicates that the relative PAC-AUS plate motion is transferred from strike-slip faulting along the Puysegur Fault to subduction at Puysegur Trench. The progressive transition from thrusts at Puysegur Trench and strike-slip faulting at the Puysegur Fault to oblique subduction at Puysegur Trench suggests that the subduction interface progressively developed from a western shallow splay of the Puysegur Fault. It implies that the transfer fault links with the subduction interface at depth. A tectonic sliver is identified between Puysegur Trench and the Puysegur Fault. Its northwards motion relative to the Pacific Plate implies that it might collide with Puysegur Bank.
NASA Technical Reports Server (NTRS)
Patterson, Jonathan D.; Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) being presented by Dr. Stephen B. Johnson, described in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper will describe the core framework used to implement the GFT-based systems engineering process using the Systems Modeling Language (SysML). These two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described within the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required in the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals and requirements, functions, and its linkage to design. As a growing standard for systems engineering, it is important to develop methods to implement GFT in SysML. Proposed Method of Solution: Many of the central concepts of the SysML language are needed to implement a GFT for large complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.
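Although the paper's implementation target is SysML, the core GFT idea, goals recursively decomposed and each checked against a success criterion, can be sketched in a language-neutral way. The Python below is purely illustrative; the node names, criteria, and state values are invented and do not come from the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class GFTNode:
    """Minimal goal-function tree node: a goal is satisfied when its own
    success criterion holds on the system state AND every child goal is
    satisfied (recursive decomposition of goals into sub-goals)."""
    name: str
    criterion: Callable[[dict], bool] = lambda state: True
    children: List["GFTNode"] = field(default_factory=list)

    def satisfied(self, state: dict) -> bool:
        return self.criterion(state) and all(c.satisfied(state) for c in self.children)

# Hypothetical example: a top-level goal decomposed into two leaf goals.
gft = GFTNode("maintain thrust", children=[
    GFTNode("chamber pressure in band", lambda s: 180 <= s["p"] <= 220),
    GFTNode("propellant flow above floor", lambda s: s["flow"] >= 1.0),
])
ok = gft.satisfied({"p": 200, "flow": 1.2})
```

A fault-management view falls out naturally: the first unsatisfied node on a walk of the tree localizes which goal a detected anomaly violates.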
NASA Astrophysics Data System (ADS)
Jara, Nicolás; Vallejos, Reinaldo; Rubino, Gerardo
2017-11-01
The design of optical networks decomposes into different tasks, where the engineers must basically organize the way the main system's resources are used, minimizing the design and operation costs and respecting critical performance constraints. More specifically, network operators face the challenge of solving routing and wavelength dimensioning problems while aiming to simultaneously minimize the network cost and to ensure that the network performance meets the level established in the Service Level Agreement (SLA). We call this the Routing and Wavelength Dimensioning (R&WD) problem. Another important problem to be solved is how to deal with failures of links when the network is operating. When at least one link fails, a high rate of data loss may occur. To avoid it, the network must be designed in such a manner that upon one or multiple failures, the affected connections can still communicate using alternative routes, a mechanism known as Fault Tolerance (FT). When the mechanism can handle an arbitrary number of faults, we speak of Multiple Fault Tolerance (MFT). The tasks mentioned above are usually solved separately, or in some cases in pairs, leading to solutions that are not necessarily close to optimal ones. This paper proposes a novel method to solve all of them simultaneously, that is, the Routing, the Wavelength Dimensioning, and the Multiple Fault Tolerance problems. The method yields: a) all the primary routes by which each connection normally transmits its information, b) the additional routes, called secondary routes, used to keep each user connected in cases where one or more simultaneous failures occur, and c) the number of wavelengths available at each link of the network, calculated such that the blocking probability of each connection is lower than a pre-determined threshold (which is a network design parameter), despite the occurrence of simultaneous link failures. 
The solution obtained by the new algorithm is significantly more efficient than current methods, its implementation is notably simple and its on-line operation is very fast. In the paper, different examples illustrate the results provided by the proposed technique.
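The primary/secondary-route idea can be sketched in a few lines. This is a minimal illustration of single-link fault tolerance, not the paper's joint algorithm: find a shortest primary route, then ban its links and search again, so the secondary route is link-disjoint and survives any single failure on the primary path.

```python
from collections import deque

def bfs_path(adj, s, t, banned=frozenset()):
    """Shortest path (fewest hops) from s to t, avoiding 'banned' links."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:                      # reconstruct the route
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                q.append(v)
    return None                         # no route under these failures

def primary_and_backup(adj, s, t):
    """Primary route plus a link-disjoint secondary route."""
    primary = bfs_path(adj, s, t)
    used = {frozenset(e) for e in zip(primary, primary[1:])}
    return primary, bfs_path(adj, s, t, banned=used)

# Toy 4-node ring: the backup goes the other way around the ring.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
p, b = primary_and_backup(adj, 0, 2)
```

Wavelength dimensioning would then size each link for the load it carries under the worst tolerated failure set, which is where the blocking-probability threshold enters.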
NASA Astrophysics Data System (ADS)
Redfield, T. F.; Osmundsen, P. T.
2009-09-01
On February 22, 1756, approximately 15.7 million cubic meters of bedrock were catastrophically released as a giant rockslide into the Langfjorden. Subsequently, three ~40-m-high tsunami waves overwhelmed the village of Tjelle and several other local communities. Inherited structures had isolated a compartment in the hanging wall damage zone of the fjord-dwelling Tjellefonna fault. Because the region is seismically active in oblique-normal mode, and in accordance with scant historical sources, we speculate that an earthquake on a nearby fault may have caused the already-weakened Tjelle hillside to fail. From interpretation of structural, geomorphic, and thermochronological data we suggest that today's escarpment topography of Møre og Trøndelag is controlled to a first order by post-rift reactivation of faults parallel to the Mesozoic passive margin. In turn, a number of these faults reactivated Late Caledonian or early post-Caledonian fabrics. Normal-sense reactivation of inherited structures along much of coastal Norway suggests that a structural link exists between the processes that destroy today's mountains and those that created them. The Paleozoic Møre-Trøndelag Fault Complex was reactivated as a normal fault during the Mesozoic and, probably, throughout the Cenozoic until the present day. Its NE-SW trending strands crop out between the coast and the base of a c. 1.7 km high NW-facing topographic 'Great Escarpment.' Well-preserved kinematic indicators and multiple generations of fault products are exposed along the Tjellefonna fault, a well-defined structural and topographic lineament parallel to both the Langfjorden and the Great Escarpment. The slope instability that was formerly present at Tjelle, and additional instabilities currently present throughout the region, may be viewed as the direct product of past and ongoing development of tectonic topography in Møre og Trøndelag county. 
In the Langfjorden region in particular, structural geometry suggests additional unreleased rock compartments may be isolated and under normal fault control. Although post-glacial rebound and topographically-derived horizontal spreading stresses might in part help drive present-day oblique normal seismicity, the normal-fault-controlled escarpments of Norway were at least partly erected in pre-glacial times. Cretaceous to Early Tertiary post-rift subsidence was interrupted by normal faulting at the innermost portion of the passive margin, imposing a strong tectonic imprint on the developing landscape.
Seismic interpretation of the deep structure of the Wabash Valley Fault System
Bear, G.W.; Rupp, J.A.; Rudman, A.J.
1997-01-01
Interpretations of newly available seismic reflection profiles near the center of the Illinois Basin indicate that the Wabash Valley Fault System is rooted in a series of basement-penetrating faults. The fault system is composed predominantly of north-northeast-trending high-angle normal faults. The largest faults in the system bound the 22-km-wide, 40-km-long Grayville Graben. Structure contour maps drawn on the base of the Mount Simon Sandstone (Cambrian System) and a deeper pre-Mount Simon horizon show dip-slip displacements totaling at least 600 meters across the New Harmony fault. In contrast to previous interpretations, the N-S extent of significant fault offsets is restricted to a region north of 38° latitude and south of 38.35° latitude. This suggests that the graben is not a NE extension of the structural complex composed of the Rough Creek Fault System and the Reelfoot Rift as previously interpreted. Structural complexity on the graben floor also decreases to the south. Structural trends north of 38° latitude are offset laterally across several large faults, indicating strike-slip motions of 2 to 4 km. Some of the major faults are interpreted to penetrate to depths of 7 km or more. Correlation of these faults with steep potential field gradients suggests that the fault positions are controlled by major lithologic contacts within the basement and that the faults may extend into the depth range where earthquakes are generated, revealing a potential link between specific faults and recently observed low-level seismicity in the area.
NASA Astrophysics Data System (ADS)
Budach, Ingmar; Moeck, Inga; Lüschen, Ewald; Wolfgramm, Markus
2018-03-01
The structural evolution of faults in foreland basins is linked to a complex basin history ranging from extension to contraction and inversion tectonics. Faults in the Upper Jurassic of the German Molasse Basin, a Cenozoic Alpine foreland basin, play a significant role for geothermal exploration and are therefore imaged, interpreted and studied by 3D seismic reflection data. Beyond this applied aspect, the analysis of these seismic data helps to better understand the temporal evolution of faults and respective stress fields. In 2009, a 27 km² 3D seismic reflection survey was conducted around the Unterhaching Gt 2 well, south of Munich. The main focus of this study is an in-depth analysis of a prominent v-shaped fault block structure located at the center of the 3D seismic survey. Two methods were used to study the periodic fault activity and the relative ages of the detected faults: (1) horizon flattening and (2) analysis of incremental fault throws. Slip and dilation tendency analyses were conducted afterwards to determine the stresses resolved on the faults in the current stress field. Two possible kinematic models explain the structural evolution: one model assumes a left-lateral strike slip fault in a transpressional regime resulting in a positive flower structure. The other model incorporates crossing conjugate normal faults within a transtensional regime. The interpreted successive fault formation prefers the latter model. The episodic fault activity may enhance fault zone permeability and hence reservoir productivity, implying that the analysis of periodically active faults represents an important part of successfully targeting geothermal wells.
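Slip and dilation tendency have standard definitions: Ts = τ/σn (resolved shear over normal stress) and Td = (σ1 − σn)/(σ1 − σ3). A small NumPy sketch, with invented principal-stress magnitudes and fault orientation (not the Unterhaching values):

```python
import numpy as np

def tendencies(stress, normal):
    """Slip tendency Ts = tau / sigma_n and dilation tendency
    Td = (s1 - sigma_n) / (s1 - s3) for a plane with the given normal.
    Compression is taken positive."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    t = stress @ n                                  # traction on the plane
    sn = float(n @ t)                               # normal stress
    tau = float(np.sqrt(max(t @ t - sn * sn, 0.0))) # resolved shear stress
    s1, _, s3 = sorted(np.linalg.eigvalsh(stress), reverse=True)
    return tau / sn, (s1 - sn) / (s1 - s3)

# Hypothetical stress state: principal stresses 60/40/30 MPa along x/y/z,
# evaluated for a plane dipping 45 degrees (normal in the x-z plane).
stress = np.diag([60.0, 40.0, 30.0])
n45 = [np.sin(np.radians(45)), 0.0, np.cos(np.radians(45))]
Ts, Td = tendencies(stress, n45)
```

Planes with high Ts are candidates for reactivation in the current stress field; high Td flags orientations prone to opening, both relevant to permeability.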
NASA Astrophysics Data System (ADS)
Finn, S.; Liberty, L. M.; Haeussler, P. J.; Northrup, C.; Pratt, T. L.
2010-12-01
We interpret regionally extensive, active faults beneath Prince William Sound (PWS), Alaska, to be structurally linked to deeper megathrust splay faults, such as the one that ruptured in the 1964 M9.2 earthquake. Western PWS in particular is unique; the locations of active faulting offer insights into the transition at the southern terminus of the previously subducted Yakutat slab to Pacific plate subduction. Newly acquired high-resolution, marine seismic data show three seismic facies related to Holocene and older Quaternary to Tertiary strata. These sediments are cut by numerous high angle normal faults in the hanging wall of the megathrust splay. Crustal-scale seismic reflection profiles show splay faults emerging from 20 km depth between the Yakutat block and North American crust and surfacing as the Hanning Bay and Patton Bay faults. A distinct boundary beneath Hinchinbrook Entrance coincides with a systematic change in fault trend, from N30E in southwestern PWS to N70E in northeastern PWS. The fault trend change underneath Hinchinbrook Entrance may occur gradually or abruptly, and there is evidence for similar deformation near the Montague Strait Entrance. Landward of surface expressions of the splay fault, we observe subsidence, faulting, and landslides that record deformation associated with the 1964 and older megathrust earthquakes. Surface exposures of Tertiary rocks throughout PWS along with new apatite-helium dates suggest long-term and regional uplift with localized, fault-controlled subsidence.
Active faulting, earthquakes, and restraining bend development near Kerman city in southeastern Iran
NASA Astrophysics Data System (ADS)
Walker, Richard Thomas; Talebian, Morteza; Saiffori, Sohei; Sloan, Robert Alastair; Rasheedi, Ali; MacBean, Natasha; Ghassemi, Abbas
2010-08-01
We provide descriptions of strike-slip and reverse faulting, active within the late Quaternary, in the vicinity of Kerman city in southeastern Iran. The faults accommodate north-south, right-lateral, shear between central Iran and the Dasht-e-Lut depression. The regions that we describe have been subject to numerous earthquakes in the historical and instrumental periods, and many of the faults that are documented in this paper constitute hazards for local populations, including the city of Kerman itself (population ~200,000). Faults to the north and east of Kerman are associated with the transfer of slip from the Gowk to the Kuh Banan right-lateral faults across a 40-km-wide restraining bend. Faults south and west of the city are associated with oblique slip on the Mahan and Jorjafk systems. The patterns of faulting observed along the Mahan-Jorjafk system, the Gowk-Kuh Banan system, and also the Rafsanjan-Rayen system further to the south, appear to preserve different stages in the development of these oblique-slip fault systems. We suggest that the faulting evolves through time. Topography is initially generated on oblique slip faults (as is seen on the Jorjafk fault). The shortening component then migrates to reverse faults situated away from the high topography whereas strike-slip continues to be accommodated in the high, mountainous, regions (as is seen, for example, on the Rafsanjan fault). The reverse faults may then link together and eventually evolve into new, through-going, strike-slip faults in a process that appears to be occurring, at present, in the bend between the Gowk and Kuh Banan faults.
Faulting along the southern margin of Reelfoot Lake, Tennessee
Van Arsdale, R.; Purser, J.; Stephenson, W.; Odum, J.
1998-01-01
The Reelfoot Lake basin, Tennessee, is structurally complex and of great interest seismologically because it is located at the junction of two seismicity trends of the New Madrid seismic zone. To better understand the structure at this location, a 7.5-km-long seismic reflection profile was acquired on roads along the southern margin of Reelfoot Lake. The seismic line reveals a westerly dipping basin bounded on the west by the Reelfoot reverse fault zone, the Ridgely right-lateral transpressive fault zone on the east, and the Cottonwood Grove right-lateral strike-slip fault in the middle of the basin. The displacement history of the Reelfoot fault zone appears to be the same as the Ridgely fault zone, thus suggesting that movement on these fault zones has been synchronous, perhaps since the Cretaceous. Since the Reelfoot and Ridgely fault systems are believed responsible for two of the mainshocks of 1811-1812, the fault history revealed in the Reelfoot Lake profile suggests that multiple mainshocks may be typical of the New Madrid seismic zone. The Ridgely fault zone consists of two northeast-striking faults that lie at the base of and within the Mississippi Valley bluff line. This fault zone has 15 m of post-Eocene, up-to-the-east displacement and appears to locally control the eastern limit of Mississippi River migration. The Cottonwood Grove fault zone passes through the center of the seismic line and has approximately 5 m up-to-the-east displacement. Correlation of the Cottonwood Grove fault with a possible fault scarp on the floor of Reelfoot Lake and the New Markham fault north of the lake suggests the Cottonwood Grove fault may change to a northerly strike at Reelfoot Lake, thereby linking the northeast-trending zones of seismicity in the New Madrid seismic zone.
NASA Astrophysics Data System (ADS)
Phillips, Thomas B.; Jackson, Christopher A.-L.; Bell, Rebecca E.; Duffy, Oliver B.
2018-04-01
Pre-existing structures within sub-crustal lithosphere may localise stresses during subsequent tectonic events, resulting in complex fault systems at upper-crustal levels. As these sub-crustal structures are difficult to resolve at great depths, the evolution of kinematically and perhaps geometrically linked upper-crustal fault populations can offer insights into their deformation history, including when and how they reactivate and accommodate stresses during later tectonic events. In this study, we use borehole-constrained 2-D and 3-D seismic reflection data to investigate the structural development of the Farsund Basin, offshore southern Norway. We use throw-length (T-x) analysis and fault displacement backstripping techniques to determine the geometric and kinematic evolution of N-S- and E-W-striking upper-crustal fault populations during the multiphase evolution of the Farsund Basin. N-S-striking faults were active during the Triassic, prior to a period of sinistral strike-slip activity along E-W-striking faults during the Early Jurassic, which represents a hitherto undocumented phase of activity in this area. These E-W-striking upper-crustal faults were later obliquely reactivated under a dextral stress regime during the Early Cretaceous, with new faults also propagating away from pre-existing ones, representing a switch to a predominantly dextral sense of motion. The E-W faults within the Farsund Basin are interpreted to extend through the crust to the Moho and link with the Sorgenfrei-Tornquist Zone, a lithosphere-scale lineament, identified within the sub-crustal lithosphere, that extends > 1000 km across central Europe. Based on this geometric linkage, we infer that the E-W-striking faults represent the upper-crustal component of the Sorgenfrei-Tornquist Zone and that the Sorgenfrei-Tornquist Zone represents a long-lived lithosphere-scale lineament that is periodically reactivated throughout its protracted geological history. 
The upper-crustal component of the lineament is reactivated in a range of tectonic styles, including both sinistral and dextral strike-slip motions, with the geometry and kinematics of these faults often inconsistent with what may otherwise be inferred from regional tectonics alone. Understanding these different styles of reactivation not only allows us to better understand the influence of sub-crustal lithospheric structure on rifting but also offers insights into the prevailing stress field during regional tectonic events.
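Fault displacement backstripping of the kind used here can be illustrated with a toy calculation: subtracting the throw recorded on each successively younger horizon isolates the slip accrued in each interval, and an interval of zero incremental throw marks a quiescent phase between activity pulses. The horizon names and throw values below are hypothetical, not the Farsund data.

```python
def incremental_throws(throws):
    """Given cumulative throws (m) measured on horizons ordered oldest to
    youngest, return the throw accrued within each stratigraphic interval."""
    items = list(throws.items())
    return {f"{a}-{b}": t_old - t_young
            for (a, t_old), (b, t_young) in zip(items, items[1:])}

# Hypothetical cumulative throws across one fault, oldest horizon first.
throws = {"Triassic": 250.0, "Jurassic": 180.0,
          "Cretaceous": 180.0, "Paleocene": 40.0}
phases = incremental_throws(throws)
```

Here the Jurassic-Cretaceous interval shows zero incremental throw, i.e. the fault was inactive then and was later reactivated, which is the signature backstripping is designed to expose.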
Dynamic Reconfiguration and Link Fault Tolerance in a Transputer Network
1989-06-01
link0 and link3 are connected to the C004s. Link1 and link2 are routed to the P2 edge connector, labelled ConfigUp and ConfigDown for access to...various commands received PROC handle.screen (VAL BYTE link.byte, SEQ -- place the first byte on screen (source) IF link1 < 16 -- a link 0 SEQ line.num ... determine characters used on screen for -- display of source & dest IF ((INT(byte1)) < 32) link1 := to.slot[INT(byte1)] otherwise link1 := 10 IF ((INT(byte2
Prediction and measurement of thermally induced cambial tissue necrosis in tree stems
Joshua L. Jones; Brent W. Webb; Bret W. Butler; Matthew B. Dickinson; Daniel Jimenez; James Reardon; Anthony S. Bova
2006-01-01
A model for fire-induced heating in tree stems is linked to a recently reported model for tissue necrosis. The combined model produces cambial tissue necrosis predictions in a tree stem as a function of heating rate, heating time, tree species, and stem diameter. Model accuracy is evaluated by comparison with experimental measurements in two hardwood and two softwood...
Analysis of Fault Lengths Across Valles Marineris, Mars
NASA Astrophysics Data System (ADS)
Fori, A. N.; Schultz, R. A.
1996-03-01
Summary. As part of a larger project to determine the history of stress and strain across Valles Marineris, Mars, graben lengths located within the Valley are measured using a two-dimensional window-sampling method to investigate depth of faulting and accuracy of measurement. The resulting degree of uncertainty in measuring lengths (±19 km, or 80% accuracy) is independent of the resolution at which the faults are measured, so data sets and resultant statistical analyses from different scales or map areas can be compared. The cumulative length frequency plots show that the geometry of Valley faults displays no evidence of a frictional stability transition at depth in the lithosphere if mechanical interaction between individual faults (an unphysical situation) is not considered. If strongly interacting faults are linked and the composite lengths used to re-create the cumulative length plots, a significant change in slope is apparent, suggesting the existence of a transition at about 35-65 km below the surface (assuming faults dip from 50° to 70°). This suggests the thermal gradient to the associated 300-400°C isotherm is 5°C/km to 12°C/km.
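Both quantities in this abstract are simple to reproduce: a cumulative length-frequency plot is just N(≥ L) against sorted length, and the implied gradient divides the isotherm temperature by the transition depth (300 °C at 65 km and 400 °C at 35 km bracket roughly 5-12 °C/km). A short sketch, using hypothetical fault lengths:

```python
import numpy as np

def cumulative_length_freq(lengths_km):
    """Return sorted lengths L and counts N(>= L) for a cumulative plot."""
    L = np.sort(np.asarray(lengths_km, float))
    N = np.arange(len(L), 0, -1)        # every fault is >= the smallest L
    return L, N

# Hypothetical graben lengths (km); linking two strongly interacting
# faults would merge two entries into one longer composite length,
# changing the slope of the plot's tail.
L, N = cumulative_length_freq([12.0, 18.0, 25.0, 30.0, 30.0])

# Gradient to a 300-400 degC isotherm at a 35-65 km transition depth:
grad_lo = 300.0 / 65.0   # shallowest gradient (deg C per km)
grad_hi = 400.0 / 35.0   # steepest gradient
```

The slope break appears only after linking because composite lengths populate the long-length tail that isolated segments cannot reach.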
NASA Astrophysics Data System (ADS)
Folguera, AndréS.; Ramos, VíCtor A.; Hermanns, Reginald L.; Naranjo, José
2004-10-01
The Antiñir-Copahue fault zone (ACFZ) is the eastern orogenic front of the Andes between 38° and 37°S. It is formed by an east vergent fan of high-angle dextral transpressive and transtensive faults, which invert a Paleogene intra-arc rift system in an out of sequence order with respect to the Cretaceous to Miocene fold and thrust belt. 3.1-1.7 Ma volcanic rocks are folded and fractured through this belt, and recent indicators of fault activity in unconsolidated deposits suggest an ongoing deformation. In spite of the absence of substantial shallow seismicity associated with the orogenic front, neotectonic studies show the existence of active faults in the present mountain front. The low shallow seismicity could be linked to the high volumes of retroarc-derived volcanic rocks erupted through this fault system during Pliocene and Quaternary times. This thermally weakened basement accommodates the strain of the Antiñir-Copahue fault zone, absorbing the present convergence between the South America and Nazca plates.
The Mentawai forearc sliver off Sumatra: A model for a strike-slip duplex at a regional scale
NASA Astrophysics Data System (ADS)
Berglar, Kai; Gaedicke, Christoph; Ladage, Stefan; Thöle, Hauke
2017-07-01
At the Sumatran oblique convergent margin the Mentawai Fault and Sumatran Fault zones accommodate most of the trench parallel component of strain. These faults bound the Mentawai forearc sliver that extends from the Sunda Strait to the Nicobar Islands. Based on multi-channel reflection seismic data, swath bathymetry and high resolution sub-bottom profiling we identified a set of wrench faults obliquely connecting the two major fault zones. These wrench faults separate at least four horses of a regional strike-slip duplex forming the forearc sliver. Each horse comprises an individual basin of the forearc with differing subsidence and sedimentary history. Duplex formation started in Mid/Late Miocene southwest of the Sunda Strait. Initiation of new horses propagated northwards along the Sumatran margin over 2000 km until Early Pliocene. These results directly link strike-slip tectonics to forearc evolution and may serve as a model for basin evolution in other oblique subduction settings.
NASA Astrophysics Data System (ADS)
Kaduri, M.; Gratier, J. P.; Renard, F.; Cakir, Z.; Lasserre, C.
2015-12-01
Aseismic creep is found along several sections of major active faults at shallow depth, such as the North Anatolian Fault in Turkey, the San Andreas Fault in California (USA), the Longitudinal Valley Fault in Taiwan, the Haiyuan Fault in China and the El Pilar Fault in Venezuela. Identifying the mechanisms controlling creep and their evolution in time and space represents a major challenge for predicting the mechanical evolution of active faults, the interplay between creep and earthquakes, and the link between short-term observations from geodesy and the geological setting. Hence, studying the evolution of initial rock into damaged rock, then into gouge, is one of the key questions for understanding the origin of fault creep. In order to address this question we collected samples from a dozen well-preserved fault outcrops along creeping and locked sections of the North Anatolian Fault. We used various methods such as microscopic and geological observations, EPMA, and XRD analysis, combined with image processing, to characterize their mineralogy and strain. We conclude that (1) there is a clear correlation between creep localization and gouge composition: the locked sections of the fault are mostly composed of massive limestone, whereas the creeping sections comprise clay gouges with 40-80% low-friction minerals such as smectite, saponite, and kaolinite, which facilitate creep. (2) The fault gouge shows two main structures that evolve with displacement: anastomosing cleavage develops during the first stage of displacement; increasing displacement leads to the development of layering oblique or sub-parallel to the fault. (3) We demonstrate that the fault gouge results from a progressive evolution of initial volcanic rocks, including dissolution of soluble species that move at least partially toward the damage zones, and alteration transformations by fluid flow that weaken the gouge and strengthen the damage zone.
NASA Astrophysics Data System (ADS)
Mattos, Nathalia H.; Alves, Tiago M.; Omosanya, Kamaldeen O.
2016-10-01
This paper uses 2D and high-quality 3D seismic reflection data to assess the geometry and kinematics of the Samson Dome, offshore Norway, and revises the implications of the new data for hydrocarbon exploration in the Barents Sea. The study area was divided into three zones in terms of fault geometries and predominant strikes. Displacement-length (D-x) and throw-depth (T-z) plots showed faults to consist of several segments that were later dip-linked. Interpreted faults were categorised into three families: Type A comprising crestal faults, Type B representing large E-W faults, and Type C consisting of polygonal faults. The Samson Dome was formed in three major stages: a) a first stage recording buckling of the post-salt overburden and generation of radial faults; b) a second stage involving dissolution and collapse of the dome, causing subsidence of the overburden and linkage of initially isolated fault segments; and c) a final stage in which large fault segments developed. Late Cretaceous faults strike predominantly to the NW, whereas NE-trending faults comprise Triassic structures that were reactivated at a later stage. Our work provides scarce evidence for the escape of hydrocarbons in the Samson Dome. In addition, fault analyses based on present-day stress distributions indicate a tendency for 'locking' of faults at depth, with the largest leakage factors occurring close to the surface. The Samson Dome is an analogue to salt structures in the Barents Sea where oil and gas exploration has occurred with varying degrees of success.
Hydromechanical heterogeneities of a mature fault zone: impacts on fluid flow.
Jeanne, Pierre; Guglielmi, Yves; Cappa, Frédéric
2013-01-01
In this paper, fluid flow is examined for a mature strike-slip fault zone with anisotropic permeability and internal heterogeneity. The hydraulic properties of the fault zone were first characterized in situ by microgeophysical (VP and σc) and rock-quality measurements (Q-value) performed along a 50-m-long profile perpendicular to the fault zone. Then, the local hydrogeological context of the fault was modified to conduct a water-injection test. The resulting fluid pressures and flow rates through the different fault-zone compartments were then analyzed with a two-phase fluid-flow numerical simulation. Fault hydraulic properties estimated from the injection test signals were compared to the properties estimated from the multiscale geological approach. We found that (1) the microgeophysical measurements that we made yield valuable information on the porosity and the specific storage coefficient within the fault zone and (2) the Q-value method highlights significant contrasts in permeability. Fault hydrodynamic behavior can be modeled by a permeability tensor rotation across the fault zone and by a storativity increase. The permeability tensor rotation is linked to the modification of the preexisting fracture properties and to the development of new fractures during the faulting process, whereas the storativity increase results from the development of micro- and macrofractures that lower the fault-zone stiffness and allow an increased extension of the pore space within the fault damage zone. Finally, heterogeneities internal to the fault zones create complex patterns of fluid flow that reflect the connections of paths with contrasting properties. © 2013, The Author(s). Ground Water © 2013, National Ground Water Association.
NASA Technical Reports Server (NTRS)
Bruhn, Ronald L.; Sauber, Jeanne; Cotton, Michele M.; Pavlis, Terry L.; Burgess, Evan; Ruppert, Natalia; Forster, Richard R.
2012-01-01
The northwest-directed motion of the Pacific plate is accompanied by migration and collision of the Yakutat terrane into the cusp of southern Alaska. The nature and magnitude of accretion and translation on upper crustal faults and folds are poorly constrained, however, due to pervasive glaciation. In this study we used high-resolution topography, geodetic imaging, seismic, and geologic data to advance understanding of the transition from strike-slip motion on the Fairweather fault to plate-margin deformation on the Bagley fault, which cuts through the upper plate of the collisional suture above the subduction megathrust. The Fairweather fault terminates by oblique-extensional splay faulting within a structural syntaxis, allowing rapid tectonic upwelling of rocks driven by thrust faulting and crustal contraction. Plate motion is partly transferred from the Fairweather to the Bagley fault, which extends 125 km farther west as a dextral shear zone that is partly reactivated by reverse faulting. The Bagley fault dips steeply through the upper plate to intersect the subduction megathrust at depth, forming a narrow fault-bounded crustal sliver in the obliquely convergent plate margin. Since ~20 Ma the Bagley fault has accommodated more than 50 km of dextral shearing and several kilometers of reverse motion along its southern flank during terrane accretion. The fault is considered capable of generating earthquakes because it is linked to faults that generated large historic earthquakes, is suitably oriented for reactivation in the contemporary stress field, and is locally marked by seismicity. The fault may generate earthquakes of Mw ≤ 7.5.
Architecture Analysis with AADL: The Speed Regulation Case-Study
2014-11-01
Overview: Functional Hazard Analysis (FHA), a failures inventory with description, classification, etc.; Fault-Tree Analysis (FTA), dependencies between failures. Julien Delange, Pittsburgh, PA 15213.
Journal of Air Transportation, Volume 12, No. 2 (ATRS Special Edition)
NASA Technical Reports Server (NTRS)
Bowen, Brent D. (Editor); Kabashkin, Igor (Editor); Fink, Mary (Editor)
2007-01-01
Topics covered include: Competition and Change in the Long-Haul Markets from Europe; Insights into the Maintenance, Repair, and Overhaul Configurations of European Airlines; Validation of Fault Tree Analysis in Aviation Safety Management; An Investigation into Airline Service Quality Performance between U.S. Legacy Carriers and Their EU Competitors and Partners; and Climate Impact of Aircraft Technology and Design Changes.
Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA
Fuentes-Bargues, José Luis; González-Cruz, Mª Carmen; González-Gaya, Cristina; Baixauli-Pérez, Mª Piedad
2017-01-01
The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that analysis and control of the risks involved are required. This paper presents a methodology for risk analysis in the chemical and allied industries based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, i.e. fault tree analysis (FTA). Results from FTA allow prioritizing the preventive and corrective measures to minimize the probability of failure. A case study is analysed; it consists of the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). The HAZOP analysis shows that the loading and unloading areas are the most sensitive areas of the plant and that the most significant danger is a fuel spill. The FTA indicates that the most likely event is a fuel spill in the tank-truck loading area. A sensitivity analysis of the FTA results shows the importance of the human factor in all sequences of the possible accidents, so improving the training of plant staff should be mandatory. PMID:28665325
TH-EF-BRC-04: Quality Management Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yorke, E.
2016-06-15
This Hands-on Workshop will focus on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis (process mapping, Failure Modes and Effects Analysis, and fault-tree analysis) will be introduced with a 5-minute refresher presentation, and each presentation will be followed by a 30-minute small-group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huq, M.
2016-06-15
This Hands-on Workshop will focus on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis (process mapping, Failure Modes and Effects Analysis, and fault-tree analysis) will be introduced with a 5-minute refresher presentation, and each presentation will be followed by a 30-minute small-group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunscombe, P.
This Hands-on Workshop will focus on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis (process mapping, Failure Modes and Effects Analysis, and fault-tree analysis) will be introduced with a 5-minute refresher presentation, and each presentation will be followed by a 30-minute small-group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
Rath, Frank
2008-01-01
This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve through-put, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.
Liu, Xiao Yu; Xue, Kang Ning; Rong, Rong; Zhao, Chi Hong
2016-01-01
Epidemic hemorrhagic fever has been an ongoing threat to laboratory personnel involved in animal care and use. Laboratory transmissions and severe infections have occurred over the past twenty years, even though standards and regulations for laboratory biosafety have been issued, upgraded, and implemented in China. There is therefore an urgent need to identify risk factors and to seek effective preventive measures that can curb the incidence of epidemic hemorrhagic fever among laboratory personnel. In the present study, we reviewed the literature on laboratory-acquired hemorrhagic fever infections related to animal work reported from 1995 to 2015, and analyzed these incidents using fault tree analysis (FTA). The results of the data analysis showed that purchasing qualified animals and guarding against wild rats, which together ensure that laboratory animals are free of hantaviruses, are the basic measures to prevent infection. In daily management, personnel's awareness of personal protection and their ability to apply it need to be further improved. Vaccination is undoubtedly the most direct and effective method, although it only plays its role after infection; thus, avoiding infection cannot rely entirely on vaccination. Copyright © 2016 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
Fault tree analysis of the causes of waterborne outbreaks.
Risebro, Helen L; Doria, Miguel F; Andersson, Yvonne; Medema, Gertjan; Osborn, Keith; Schlosser, Olivier; Hunter, Paul R
2007-01-01
Prevention and containment of outbreaks requires examination of the contribution and interrelation of outbreak causative events. An outbreak fault tree was developed and applied to 61 enteric outbreaks related to public drinking water supplies in the EU. A mean of 3.25 causative events per outbreak were identified; each event was assigned a score based on percentage contribution per outbreak. Source and treatment system causative events often occurred concurrently (in 34 outbreaks). Distribution system causative events occurred less frequently (19 outbreaks) but were often solitary events contributing heavily towards the outbreak (a mean % score of 87.42). Livestock and rainfall in the catchment with no/inadequate filtration of water sources contributed concurrently to 11 of 31 Cryptosporidium outbreaks. Of the 23 protozoan outbreaks experiencing at least one treatment causative event, 90% of these events were filtration deficiencies; by contrast, for bacterial, viral, gastroenteritis and mixed pathogen outbreaks, 75% of treatment events were disinfection deficiencies. Roughly equal numbers of groundwater and surface water outbreaks experienced at least one treatment causative event (18 and 17 outbreaks, respectively). Retrospective analysis of multiple outbreaks of enteric disease can be used to inform outbreak investigations, facilitate corrective measures, and further develop multi-barrier approaches.
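The aggregation scheme described above (each outbreak's causative events are assigned percentage-contribution scores, then event-type statistics are pooled across outbreaks) can be sketched as follows. The outbreak records and scores below are invented for illustration; they are not the study's data.

```python
# Hypothetical outbreak records: causative event types with % contribution
# scores that sum to 100 within each outbreak.
outbreaks = [
    {"source": 60, "treatment": 40},
    {"distribution": 100},
    {"source": 30, "treatment": 50, "distribution": 20},
]

# Mean number of causative events per outbreak
mean_events = sum(len(o) for o in outbreaks) / len(outbreaks)

# Mean % score of distribution-system events, taken over the outbreaks
# in which such an event occurred
dist_scores = [o["distribution"] for o in outbreaks if "distribution" in o]
mean_dist_score = sum(dist_scores) / len(dist_scores)

print(mean_events, mean_dist_score)  # 2.0 60.0
```

A high mean score for a rarely occurring event type, as computed here, corresponds to the paper's observation that distribution-system events were infrequent but, when present, dominated the outbreak.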
Fault and event tree analyses for process systems risk analysis: uncertainty handling formulations.
Ferdous, Refaul; Khan, Faisal; Sadiq, Rehan; Amyotte, Paul; Veitch, Brian
2011-01-01
Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event (ETA) and fault tree analyses (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed. © 2010 Society for Risk Analysis.
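The contrast the abstract draws between crisp probabilities and imprecise likelihoods can be made concrete with a toy fault tree. The sketch below is not the authors' method: the gate structure and event probabilities are hypothetical, and it keeps the independence assumption that the article relaxes with dependency coefficients. It evaluates the top event once with point values and once with interval bounds, which propagate endpoint-wise because the gate formulas are monotone in each input.

```python
def or_gate(probs):
    # P(at least one event occurs), assuming independent basic events
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    # P(all events occur), assuming independent basic events
    p = 1.0
    for q in probs:
        p *= q
    return p

# Crisp analysis: top = (pump_fails AND valve_fails) OR power_loss
crisp = or_gate([and_gate([0.02, 0.05]), 0.001])

# Interval analysis: each basic event given as (low, high) bounds;
# gates are monotone, so bounds propagate endpoint-wise.
def and_gate_iv(ivs):
    return (and_gate([lo for lo, _ in ivs]), and_gate([hi for _, hi in ivs]))

def or_gate_iv(ivs):
    return (or_gate([lo for lo, _ in ivs]), or_gate([hi for _, hi in ivs]))

interval = or_gate_iv([and_gate_iv([(0.01, 0.03), (0.04, 0.06)]),
                       (0.0005, 0.002)])
print(crisp)     # point estimate of the top-event probability
print(interval)  # (lower, upper) bounds on the top-event probability
```

The interval output illustrates how partial ignorance about basic-event likelihoods widens into bounds on the top event, which is the kind of uncertainty the fuzzy and evidence-theory formulations are designed to represent.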
A link representation for gravity amplitudes
NASA Astrophysics Data System (ADS)
He, Song
2013-10-01
We derive a link representation for all tree amplitudes in supergravity, from a recent conjecture by Cachazo and Skinner. The new formula explicitly writes amplitudes as contour integrals over constrained link variables, with an integrand naturally expressed in terms of determinants, or equivalently tree diagrams. Important symmetries of the amplitude, such as supersymmetry, parity and (partial) permutation invariance, are kept manifest in the formulation. We also comment on rewriting the formula in a GL(k)-invariant manner, which may serve as a starting point for the generalization to possible Grassmannian contour integrals.
NASA Astrophysics Data System (ADS)
Karplus, M.; Henstock, T.; McNeill, L. C.; Vermeesch, P. M. T.; Barton, P. J.
2014-12-01
The Sunda subduction zone features significant along-strike structural variability including changes in accretionary prism and forearc morphology. Some of these changes have been linked to changes in megathrust faulting styles, and some have been linked to other thrust and strike-slip fault systems across this obliquely convergent margin (~54-58 mm/yr convergence rate, 40-45 mm/yr subduction rate). We examine these structural changes in detail across central Sumatra, from Siberut to Nias Island, offshore Indonesia. In this area the Investigator Fracture Zone and the Wharton Fossil Ridge, features with significant topography, are being subducted, which may affect sediment thickness variation and margin morphology. We present new seismic refraction P-wave velocity models using marine seismic data collected during Sonne cruise SO198 in 2008. The experiment geometry consisted of 57 ocean bottom seismometers, 23 land seismometers, and over 10,000 air gun shots recorded along ~1750 km of profiles. About 130,000 P-wave first arrival refractions were picked, and the picks were inverted using FAST (First Arrivals Refraction Tomography) 3-D to give a velocity model, best-resolved in the top 25 km. Moho depths, crustal composition, prism geometry, slab dip, and upper and lower plate structures provide insight into the past and present tectonic processes at this plate boundary. We specifically examine the relationships between velocity structure and faulting locations/styles. These observations have implications for strain partitioning along the boundary. The Mentawai Fault, located west of the forearc basin in parts of central Sumatra, has been interpreted variably as a backthrust, strike-slip, and normal fault. We integrate existing data to evaluate these hypotheses. Regional megathrust earthquake ruptures indicate plate boundary segmentation in our study area.
The offshore forearc west of Siberut is almost aseismic, reflecting the locked state of the plate interface, which last ruptured in 1797. The weakly-coupled Batu segment experiences sporadic clusters of events near the forearc slope break. The Nias segment in the north ruptured in the 2005 M8.7 earthquake. We compare P-wave velocity structure to the earthquake data to examine potential links between lithospheric structure and seismogenesis.
NASA Technical Reports Server (NTRS)
Redinbo, Robert
1994-01-01
Fault tolerance features in the first three major subsystems appearing in the next generation of communications satellites are described. These satellites will contain extensive but efficient high-speed processing and switching capabilities to support the low signal strengths associated with very small aperture terminals. The terminals' numerous data channels are combined through frequency division multiplexing (FDM) on the up-links and are protected individually by forward error-correcting (FEC) binary convolutional codes. The front-end processing resources, demultiplexer, demodulators, and FEC decoders extract all data channels which are then switched individually, multiplexed, and remodulated before retransmission to earth terminals through narrow beam spot antennas. Algorithm based fault tolerance (ABFT) techniques, which relate real number parity values with data flows and operations, are used to protect the data processing operations. The additional checking features utilize resources that can be substituted for normal processing elements when resource reconfiguration is required to replace a failed unit.
Armadillo, E.; Ferraccioli, F.; Zunino, A.; Bozzo, E.
2007-01-01
The Wilkes Subglacial Basin (WSB) is the major morphological feature recognized in the hinterland of the Transantarctic Mountains. The origin of this basin remains contentious and relatively poorly understood due to the lack of extensive geophysical exploration. We present a new aeromagnetic anomaly map over the transition between the Transantarctic Mountains and the WSB for an area adjacent to northern Victoria Land. The aeromagnetic map reveals the existence of subglacial faults along the eastern margin of the WSB. These inferred faults connect previously proposed fault zones over Oates Land with those mapped along the Ross Sea coast. Specifically, we suggest a link between the Matusevich Fracture Zone and the Priestley Fault during the Cenozoic. The new evidence for structural control on the eastern margin of the WSB implies that a purely flexural origin for the basin is unlikely.
NASA Astrophysics Data System (ADS)
Selander, J.; Oskin, M. E.; Cooke, M. L.; Grette, K.
2015-12-01
Understanding off-fault deformation and distribution of displacement rates associated with disconnected strike-slip faults requires a three-dimensional view of fault geometries. We address problems associated with distributed faulting by studying the Mojave segment of the East California Shear Zone (ECSZ), a region dominated by northwest-directed dextral shear along disconnected northwest- southeast striking faults. We use a combination of cross-sectional interpretations, 3D Boundary Element Method (BEM) models, and slip-rate measurements to test new hypothesized fault connections. We find that reverse faulting acts as an important means of slip transfer between strike-slip faults, and show that the impacts of these structural connections on shortening, uplift, strike-slip rates, and off-fault deformation, help to reconcile the overall strain budget across this portion of the ECSZ. In detail, we focus on the Calico and Blackwater faults, which are hypothesized to together represent the longest linked fault system in the Mojave ECSZ, connected by a restraining step at 35°N. Across this restraining step the system displays a pronounced displacement gradient, where dextral offset decreases from ~11.5 to <2 km from south to north. Cross-section interpretations show that ~40% of this displacement is transferred from the Calico fault to the Harper Lake and Blackwater faults via a set of north-dipping thrust ramps. Late Quaternary dextral slip rates follow a similar pattern, where 1.4 +0.8/-0.4 mm/yr of slip along the Calico fault south of 35°N is distributed to the Harper Lake, Blackwater, and Tin Can Alley faults. BEM model results using revised fault geometries for the Mojave ECSZ show areas of uplift consistent with contractional structures, and fault slip-rates that more closely match geologic data. Overall, revised fault connections and addition of off-fault deformation greatly reduces the discrepancy between geodetic and geologic slip rates.
NASA Astrophysics Data System (ADS)
Aprilia, Ayu Rizky; Santoso, Imam; Ekasari, Dhita Murita
2017-05-01
Yogurt is a milk-based product with beneficial health effects. The yogurt production process is very susceptible to failure because it involves bacteria and fermentation. For an industry, such risks may cause harm and have a negative impact. For a product to be successful and profitable, the risks that may occur during the production process must be analysed. Risk analysis can identify risks in detail, prevent them, and determine their handling, so that the risks can be minimized. This study therefore analyses the risks of the production process with a case study in CV.XYZ. The methods used in this research are Fuzzy Failure Mode and Effect Analysis (fuzzy FMEA) and Fault Tree Analysis (FTA). The results showed six risks arising from equipment, raw-material, and process variables. These include the critical risks of a lack of an aseptic process, specifically the yogurt starter being damaged through contamination by fungi or other bacteria, and a lack of equipment sanitation. The quantitative FTA shows that the most likely event is the lack of an aseptic process, with a probability of 3.902%. The recommendations for improvement include establishing SOPs (Standard Operating Procedures) covering the process, workers, and environment, controlling the yogurt starter, improving production planning, and sanitizing equipment using hot-water immersion.
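The FMEA half of this approach can be illustrated with the classical risk priority number, a simplified stand-in for the fuzzy FMEA the study applies (fuzzy FMEA replaces the crisp ratings with linguistic terms and membership functions). The failure modes and severity/occurrence/detection ratings below are hypothetical illustrations, not the study's data.

```python
# Classical FMEA sketch: each failure mode gets severity (S), occurrence (O),
# and detection (D) ratings on 1-10 scales; modes are ranked by RPN = S*O*D.
failure_modes = {
    "non-aseptic process":       (9, 6, 5),
    "contaminated starter":      (8, 4, 6),
    "poor equipment sanitation": (7, 5, 4),
}

rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
ranking = sorted(rpn, key=rpn.get, reverse=True)

print(ranking[0], rpn[ranking[0]])  # highest-priority failure mode and its RPN
```

Ranking by RPN singles out the mode deserving the first corrective measure, mirroring how the study's analysis flags the lack of an aseptic process as the critical risk.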
NASA Astrophysics Data System (ADS)
Molnár, László; Vásárhelyi, Balázs; Tóth, Tivadar M.; Schubert, Félix
2015-01-01
The integrated evaluation of borecores from the Mezősas-Furta fractured metamorphic hydrocarbon reservoir reveals significantly distinct microstructural and rock-mechanical features within the analysed fault rock samples. Statistical evaluation of the clast geometries revealed the dominantly cataclastic nature of the samples. The damage zone of the fault is characterised by an extremely brittle nature and low uniaxial compressive strength, coupled with a predominantly coarse fault breccia composition. In contrast, microstructural evidence of increasing deformation, coupled with higher uniaxial compressive strength, a strain-hardening nature and low brittleness, indicates a transitional interval between the weakly fragmented damage zone and the strongly ground fault core. Moreover, these attributes suggest that this unit is mechanically the strongest part of the fault zone. Gouge-rich cataclasites mark the core zone of the fault, with their widespread plastic nature and locally pseudo-ductile microstructure. Strain localization tends to be strongly linked to the presence of fault gouge ribbons. The fault zone, with a total thickness of ~15 m, can be defined as a significant migration pathway inside the fractured crystalline reservoir. Moreover, as a consequence of the distributed nature of the fault core, it may play a key role in compartmentalisation of the local hydraulic system.
Displacement-length scaling of brittle faults in ductile shear.
Grasemann, Bernhard; Exner, Ulrike; Tschegg, Cornelius
2011-11-01
Within a low-grade ductile shear zone, we investigated exceptionally well exposed brittle faults, which accumulated antithetic slip and rotated into the shearing direction. The foliation planes of the mylonitic host rock intersect the faults approximately at their centre and exhibit ductile reverse drag. Three types of brittle faults can be distinguished: (i) Faults developing on pre-existing K-feldspar/mica veins that are oblique to the shear direction. These faults have triclinic flanking structures. (ii) Wing cracks opening as mode I fractures at the tips of the triclinic flanking structures, perpendicular to the shear direction. These cracks are reactivated as faults with antithetic shear, extend from the parent K-feldspar/mica veins and form a complex linked flanking structure system. (iii) Joints forming perpendicular to the shearing direction are deformed to form monoclinic flanking structures. Triclinic and monoclinic flanking structures record elliptical displacement-distance profiles with steep displacement gradients at the fault tips by ductile flow in the host rocks, resulting in reverse drag of the foliation planes. These structures record one of the greatest maximum displacement/length ratios reported from natural fault structures. These exceptionally high ratios can be explained by localized antithetic displacement along brittle slip surfaces, which did not propagate during their rotation during surrounding ductile flow.
San Andreas tremor cascades define deep fault zone complexity
Shelly, David R.
2015-01-01
Weak seismic vibrations - tectonic tremor - can be used to delineate some plate boundary faults. Tremor on the deep San Andreas Fault, located at the boundary between the Pacific and North American plates, is thought to be a passive indicator of slow fault slip. San Andreas Fault tremor migrates at up to 30 m s⁻¹, but the processes regulating tremor migration are unclear. Here I use a 12-year catalogue of more than 850,000 low-frequency earthquakes to systematically analyse the high-speed migration of tremor along the San Andreas Fault. I find that tremor migrates most effectively through regions of greatest tremor production and does not propagate through regions with gaps in tremor production. I interpret the rapid tremor migration as a self-regulating cascade of seismic ruptures along the fault, which implies that tremor may be an active, rather than passive, participant in slip propagation. I also identify an isolated group of tremor sources offset eastwards beneath the San Andreas Fault, possibly indicative of the interface between the Monterey Microplate, a hypothesized remnant of the subducted Farallon Plate, and the North American Plate. These observations illustrate a possible link between the central San Andreas Fault and tremor-producing subduction zones.
NASA Astrophysics Data System (ADS)
Nicholson, C.; Plesch, A.; Sorlien, C. C.; Shaw, J. H.; Hauksson, E.
2014-12-01
Southern California represents an ideal natural laboratory to investigate oblique deformation in 3D owing to its comprehensive datasets, complex tectonic history, evolving components of oblique slip, and continued crustal rotations about horizontal and vertical axes. As the SCEC Community Fault Model (CFM) aims to accurately reflect this 3D deformation, we present the results of an extensive update to the model by using primarily detailed fault trace, seismic reflection, relocated hypocenter and focal mechanism nodal plane data to generate improved, more realistic digital 3D fault surfaces. The results document a wide variety of oblique strain accommodation, including various aspects of strain partitioning and fault-related folding, sets of both high-angle and low-angle faults that mutually interact, significant non-planar, multi-stranded faults with variable dip along strike and with depth, and active mid-crustal detachments. In places, closely-spaced fault strands or fault systems can remain surprisingly subparallel to seismogenic depths, while in other areas, major strike-slip to oblique-slip faults can merge, such as the S-dipping Arroyo Parida-Mission Ridge and Santa Ynez faults with the N-dipping North Channel-Pitas Point-Red Mountain fault system, or diverge with depth. Examples of the latter include the steep-to-west-dipping Laguna Salada-Indiviso faults with the steep-to-east-dipping Sierra Cucapah faults, and the steep southern San Andreas fault with the adjacent NE-dipping Mecca Hills-Hidden Springs fault system. In addition, overprinting by steep predominantly strike-slip faulting can segment which parts of intersecting inherited low-angle faults are reactivated, or result in mutual cross-cutting relationships. 
The updated CFM 3D fault surfaces thus help characterize a more complex pattern of fault interactions at depth between various fault sets and linked fault systems, and a more complex fault geometry than typically inferred or expected from projecting near-surface data down-dip, or modeled from surface strain and potential field data alone.
Timing of late Holocene surface rupture of the Wairau Fault, Marlborough, New Zealand
Zachariasen, J.; Berryman, K.; Langridge, Rob; Prentice, C.; Rymer, M.; Stirling, M.; Villamor, P.
2006-01-01
Three trenches excavated across the central portion of the right-lateral strike-slip Wairau Fault in South Island, New Zealand, exposed a complex set of fault strands that have displaced a sequence of late Holocene alluvial and colluvial deposits. Abundant charcoal fragments provide age control for various stratigraphic horizons dating back to c. 5610 yr ago. Faulting relations from the Wadsworth trench show that the most recent surface rupture event occurred at least 1290 yr and at most 2740 yr ago. Drowned trees in landslide-dammed Lake Chalice, in combination with charcoal from the base of an unfaulted colluvial wedge at Wadsworth trench, suggest a narrower time bracket for this event of 1811-2301 cal. yr BP. The penultimate faulting event occurred between c. 2370 and 3380 yr, and possibly near 2680 ± 60 cal. yr BP, when data from both the Wadsworth and Dillon trenches are combined. Two older events have been recognised from Dillon trench but remain poorly dated. A probable elapsed time of at least 1811 yr since the last surface rupture, and an average slip rate estimate for the Wairau Fault of 3-5 mm/yr, suggests that at least 5.4 m and up to 11.5 m of elastic shear strain has accumulated since the last rupture. This is near to or greater than the single-event displacement estimates of 5-7 m. The average recurrence interval for surface rupture of the fault determined from the trench data is 1150-1400 yr. Although the uncertainties in the timing of faulting events and variability in inter-event times remain high, the time elapsed since the last event is in the order of 1-2 times the average recurrence interval, implying that the Wairau Fault is near the end of its interseismic period. © The Royal Society of New Zealand 2006.
NASA Astrophysics Data System (ADS)
Dura-Gomez, I.; Addison, A.; Knapp, C. C.; Talwani, P.; Chapman, A.
2005-12-01
During the 1886 Charleston earthquake, two parallel tabby walls of Fort Dorchester broke left-laterally, and a strike of ~N25°W was inferred for the causative Sawmill Branch fault. To better define this fault, which does not have any surface expression, we planned to cut trenches across it. However, as Fort Dorchester is a protected archeological site, we were required to locate the fault accurately away from the fort before permission could be obtained to cut short trenches. The present GPR investigations were planned as a preliminary step to determine locations for trenching. A pulseEKKO 100 GPR was used to collect data along eight profiles (varying in length from 10 m to 30 m) that were run across the projected strike of the fault, and one 50 m long profile that was run parallel to it. The locations of the profiles were obtained using a total station. To capture the signature of the fault, sixteen common-offset (COS) lines were acquired by using different antennas (50, 100 and 200 MHz) and stacking 64 times to increase the signal-to-noise ratio. The locations of trees and stumps were recorded. In addition, two common-midpoint (CMP) tests were carried out, and gave an average velocity of about 0.097 m/ns. Processing included the subtraction of the low-frequency "wow" on the trace (dewow), automatic gain control (AGC) and the application of bandpass filters. The signals using the 50 MHz, 100 MHz and 200 MHz antennas were found to penetrate up to about 30 meters, 20 meters and 12 meters respectively. Vertically offset reflectors and disruptions of the electrical signal were used to infer the location of the fault(s). Comparisons of the locations of these disruptions on various lines were used to infer the presence of a N30°W fault zone. We plan to confirm these locations by cutting shallow trenches.
Stollhofen, Harald; Stanistreet, Ian G
2012-08-01
Normal faults displacing Upper Bed I and Lower Bed II strata of the Plio-Pleistocene Lake Olduvai were studied on the basis of facies and thickness changes, as well as diversion of transport directions across them, in order to establish criteria for their synsedimentary activity. Decompacted differential thicknesses across faults were then used to calculate average fault slip rates of 0.05-0.47 mm/yr for the Tuff IE/IF interval (Upper Bed I) and 0.01-0.13 mm/yr for the Tuff IF/IIA section (Lower Bed II). Considering fault recurrence intervals of ~1000 years, fault scarp heights potentially achieved average values of 0.05-0.47 m and a maximum value of 5.4 m during Upper Bed I, which dropped to average values of 0.01-0.13 m and a localized maximum of 0.72 m during Lower Bed II deposition. Synsedimentary faults were of importance to the form and paleoecology of landscapes utilized by early hominins, most traceably Homo habilis, as illustrated by the recurrent density and compositional pattern of Oldowan stone artifact assemblage variation across them. Two potential connections are: (1) fault scarp topographies controlled sediment distribution, surface and subsurface hydrology, and thus vegetation, so that a resulting mosaic of microenvironments and paleoecologies provided a variety of opportunities for omnivorous hominins; and (2) they ensured that the most voluminous and violent pyroclastic flows from the Mt. Olmoti volcano were dammed and conduited away from the Olduvai Basin depocenter, when otherwise a single ignimbrite flow or set of flows might have filled and devastated the topography that contained the central lake body. In addition, hydraulically active faults may have conduited groundwater, supporting freshwater springs and wetlands and favoring growth of trees. Copyright © 2011 Elsevier Ltd. All rights reserved.
The use of automatic programming techniques for fault tolerant computing systems
NASA Technical Reports Server (NTRS)
Wild, C.
1985-01-01
It is conjectured that the production of software for ultra-reliable computing systems such as those required by the Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection, as well as the automatic generation of assertions and test cases from abstract data type specifications, are outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts are given on the use of knowledge-based systems for the global detection of abnormal behavior using expectations, and on the goal-directed reconfiguration of resources to meet critical mission objectives. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.
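The assertion-synthesis idea above can be illustrated with a small sketch (entirely invented, not the paper's system): a runtime check derived from an abstract data type axiom, here the stack law pop(push(s, x)) == s, is attached to the implementation for error detection.

```python
# Invented illustration (not the paper's system) of assertion-based error
# detection: a runtime check derived from the abstract data type axiom
# pop(push(s, x)) == s is attached to the stack implementation.

def push(s, x):
    return s + [x]

def pop(s):
    assert s, "pop on empty stack"
    return s[:-1]

def checked_push(s, x):
    result = push(s, x)
    # check auto-derivable from the axiom pop(push(s, x)) == s
    assert pop(result) == s, "stack axiom violated"
    return result

print(checked_push([1, 2], 3))  # [1, 2, 3]
```

A faulty push implementation (e.g. one that dropped or duplicated an element) would trip the generated assertion at runtime rather than propagating a silent error.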
A method of real-time fault diagnosis for power transformers based on vibration analysis
NASA Astrophysics Data System (ADS)
Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie
2015-11-01
In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership to each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.
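A rough sketch of the routing role the binary decision tree plays in such a model (this is not the authors' code: the class names, weights and biases are invented, and each node's SVM is stood in for by a bare linear decision function rather than a trained classifier):

```python
# Sketch of a binary decision tree (BDT) that routes a vibration feature
# vector to a health-state class; every numeric value here is invented.

def node_score(x, w, b):
    # Stand-in for an SVM decision function: signed distance to a hyperplane.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(x):
    # Root node: normal vs. faulty
    if node_score(x, (1.0, -0.5), -0.2) < 0:
        return "normal"
    # Second level: mechanical vs. electrical fault
    if node_score(x, (0.3, 1.0), -1.0) < 0:
        return "mechanical fault"
    return "electrical fault"

print(classify((0.1, 0.1)))  # normal
print(classify((1.0, 0.5)))  # mechanical fault
print(classify((2.0, 2.0)))  # electrical fault
```

In the paper's model each node is a trained SVM producing posterior class-membership probabilities (e.g. via Platt-style calibration) rather than a hard threshold.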
Simulated cavity tree dynamics under alternative timber harvest regimes
Zhaofei Fan; Stephen R Shifley; Frank R Thompson; David R Larsen
2004-01-01
We modeled cavity tree abundance on a landscape as a function of forest stand age classes and as a function of aggregate stand size classes. We explored the impact of five timber harvest regimes on cavity tree abundance on a 3261 ha landscape in southeast Missouri, USA, by linking the stand-level cavity tree distribution model to the landscape age structure simulated by...
Kenah, Eben; Britton, Tom; Halloran, M. Elizabeth; Longini, Ira M.
2016-01-01
Recent work has attempted to use whole-genome sequence data from pathogens to reconstruct the transmission trees linking infectors and infectees in outbreaks. However, transmission trees from one outbreak do not generalize to future outbreaks. Reconstruction of transmission trees is most useful to public health if it leads to generalizable scientific insights about disease transmission. In a survival analysis framework, estimation of transmission parameters is based on sums or averages over the possible transmission trees. A phylogeny can increase the precision of these estimates by providing partial information about who infected whom. The leaves of the phylogeny represent sampled pathogens, which have known hosts. The interior nodes represent common ancestors of sampled pathogens, which have unknown hosts. Starting from assumptions about disease biology and epidemiologic study design, we prove that there is a one-to-one correspondence between the possible assignments of interior node hosts and the transmission trees simultaneously consistent with the phylogeny and the epidemiologic data on person, place, and time. We develop algorithms to enumerate these transmission trees and show these can be used to calculate likelihoods that incorporate both epidemiologic data and a phylogeny. A simulation study confirms that this leads to more efficient estimates of hazard ratios for infectiousness and baseline hazards of infectious contact, and we use these methods to analyze data from a foot-and-mouth disease virus outbreak in the United Kingdom in 2001. These results demonstrate the importance of data on individuals who escape infection, which is often overlooked. The combination of survival analysis and algorithms linking phylogenies to transmission trees is a rigorous but flexible statistical foundation for molecular infectious disease epidemiology. PMID:27070316
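A toy sketch of the correspondence described above, under strong simplifying assumptions (a binary phylogeny, one sampled pathogen per host, and hypothetical host labels "A", "B", "C"): each interior node of the phylogeny is assigned the host of one of its children, and enumerating these choices enumerates the candidate transmission trees.

```python
# Toy enumeration of interior-node host assignments for a tiny phylogeny.
# Hosts and topology are invented; real algorithms also enforce
# epidemiologic constraints on person, place, and time.

phylogeny = (("A", "B"), "C")  # nested tuples; leaves are host labels

def interior_choices(tree):
    """Yield (host at this node, list of interior-node host assignments)."""
    if isinstance(tree, str):
        yield tree, []  # leaf: host is known, nothing to choose
        return
    left, right = tree
    for lh, lc in interior_choices(left):
        for rh, rc in interior_choices(right):
            for host in (lh, rh):  # interior node takes one child's host
                yield host, lc + rc + [host]

assignments = {tuple(choice) for _, choice in interior_choices(phylogeny)}
print(sorted(assignments))
```

Each assignment corresponds to one transmission tree consistent with the phylogeny; summing likelihood contributions over these assignments is what permits the combined epidemiologic-phylogenetic estimation described in the abstract.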
Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty
NASA Astrophysics Data System (ADS)
Woo, G.
2005-12-01
Insurance loss modeling for earthquakes utilizes available maps of active faulting produced by geoscientists. All such maps are subject to uncertainty, arising from lack of knowledge of fault geometry and rupture history. Field work to undertake geological fault investigations drains human and monetary resources, and this inevitably limits the resolution of fault parameters. Some areas are more accessible than others; some may be of greater social or economic importance than others; some areas may be investigated more rapidly or diligently than others; or funding restrictions may have curtailed the extent of the fault mapping program. In contrast with the aleatory uncertainty associated with the inherent variability in the dynamics of earthquake fault rupture, uncertainty associated with lack of knowledge of fault geometry and rupture history is epistemic. The extent of this epistemic uncertainty may vary substantially from one regional or national fault map to another. However aware the local cartographer may be, this uncertainty is generally not conveyed in detail to the international map user. For example, an area may be left blank for a variety of reasons, ranging from lack of sufficient investigation of a fault to lack of convincing evidence of activity. Epistemic uncertainty in fault parameters is of concern in any probabilistic assessment of seismic hazard, not least in insurance earthquake risk applications. A logic-tree framework is appropriate for incorporating epistemic uncertainty. Some insurance contracts cover specific high-value properties or transport infrastructure, and therefore are extremely sensitive to the geometry of active faulting. Alternative Risk Transfer (ART) to the capital markets may also be considered. In order for such insurance or ART contracts to be properly priced, uncertainty should be taken into account. Accordingly, an estimate is needed for the likelihood of surface rupture capable of causing severe damage. 
Especially where a high deductible is in force, this requires estimation of the epistemic uncertainty on fault geometry and activity. Transport infrastructure insurance is of practical interest in seismic countries. On the North Anatolian Fault in Turkey, there is uncertainty over an unbroken segment between the eastern end of the Düzce Fault and Bolu. This may have ruptured during the 1944 earthquake. Existing hazard maps may simply use a question mark to flag uncertainty. However, a far more informative type of hazard map might express spatial variations in the confidence level associated with a fault map. Through such visual guidance, an insurance risk analyst would be better placed to price earthquake cover, allowing for epistemic uncertainty.
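The logic-tree treatment of epistemic uncertainty mentioned above can be sketched as follows (all branch definitions, weights, and rupture rates are invented for illustration):

```python
# Illustrative logic tree for epistemic fault uncertainty: each branch is
# a mutually exclusive hypothesis about the fault, weighted by degree of
# belief. Every number below is invented.

branches = [
    # (hypothesis, weight, annual rate of damaging surface rupture)
    ("fault is active, full-segment ruptures", 0.5, 1.0 / 1000),
    ("fault is active, partial ruptures only", 0.3, 1.0 / 2500),
    ("fault is inactive",                      0.2, 0.0),
]

# Sanity check: branch weights must sum to 1
assert abs(sum(w for _, w, _ in branches) - 1.0) < 1e-9

# Weight-averaged annual rupture rate used to price the cover
mean_rate = sum(w * r for _, w, r in branches)
print(f"mean annual rupture rate: {mean_rate:.6f}")
```

The same tree also yields the spread between branches, which is one way to visualize the confidence levels the abstract proposes mapping spatially.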
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien
2017-10-01
Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes this possibility into account by considering a system-level approach rather than an individual-fault-level approach, using geological, seismological and geodetic information to invert the earthquake rates. In many places of the world, seismological and geodetic information along fault networks is not well constrained. There is therefore a need for a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures, and single-fault or FtF ruptures are considered as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed under two constraints: the magnitude-frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the slip rate of each segment, depending on the possible FtF ruptures.
The modeled earthquake rates are then compared to the available independent data (geodetic, seismological and paleoseismological) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advances have been made in understanding the geological slip rates of the complex network of normal faults accommodating the ~15 mm yr⁻¹ north-south extension. Modeling results show that geological, seismological and paleoseismological rates of earthquakes cannot be reconciled with single-fault-rupture scenarios alone and require hypothesizing a large spectrum of possible FtF rupture sets. In order to fit the imposed regional Gutenberg-Richter (GR) MFD target, some of the slip along certain faults needs to be accommodated either by interseismic creep or by post-seismic processes. Furthermore, the computed MFDs of individual faults differ depending on the position of each fault in the system and the possible FtF ruptures associated with it. Finally, a comparison of modeled earthquake rupture rates with those deduced from regional and local earthquake catalog statistics and local paleoseismological data indicates a better fit with the FtF rupture set constructed with a 5 km distance criterion rather than 3 km, suggesting a high connectivity of faults in the WCR fault system.
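The distance criterion for defining FtF ruptures might be sketched like this (fault traces and coordinates in km are invented; the real model works on full 3D fault geometries):

```python
import math

# Sketch of a distance criterion for fault-to-fault (FtF) ruptures: two
# faults may rupture together if their traces approach within a threshold
# distance. Fault names and vertex coordinates (km) are invented.

faults = {
    "A": [(0.0, 0.0), (8.0, 0.0)],
    "B": [(12.0, 0.0), (20.0, 0.0)],
    "C": [(0.0, 10.0), (8.0, 10.0)],
}

def min_vertex_distance(trace_a, trace_b):
    return min(math.dist(p, q) for p in trace_a for q in trace_b)

def ftf_pairs(faults, threshold_km):
    names = sorted(faults)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if min_vertex_distance(faults[a], faults[b]) <= threshold_km]

print(ftf_pairs(faults, 3.0))  # no FtF ruptures allowed
print(ftf_pairs(faults, 5.0))  # faults A and B may rupture together
```

As in the abstract's comparison, relaxing the threshold from 3 km to 5 km admits more multi-fault ruptures, which enlarges the set of rupture scenarios available to the rate inversion.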
Metric Ranking of Invariant Networks with Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Changxia; Ge, Yong; Song, Qinbao
The management of large-scale distributed information systems relies on the effective use and modeling of monitoring data collected at various points in the system. A promising approach is to discover invariant relationships among the monitoring data and generate invariant networks, where a node is a monitoring data source (metric) and a link indicates an invariant relationship between two metrics. Such an invariant network representation can help system experts to localize and diagnose system faults by examining broken invariant relationships and their related metrics, because system faults usually propagate among the monitoring data and eventually lead to broken invariant relationships. However, at any one time there are usually many broken links (invariant relationships) within an invariant network. Without proper guidance, it is difficult for system experts to manually inspect this large number of broken links. Thus, a critical challenge is how to effectively and efficiently rank the metrics (nodes) of an invariant network according to their anomaly levels. The ranked list of metrics provides system experts with useful guidance for localizing and diagnosing system faults. To this end, we propose to model the nodes and the broken links as a Markov Random Field (MRF), and develop an iterative algorithm to infer the anomaly of each node based on belief propagation (BP). Finally, we validate the proposed algorithm on both real-world and synthetic data sets to illustrate its effectiveness.
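A much-simplified stand-in for the ranking idea (this is not the paper's MRF/belief-propagation formulation; the network and broken-link pattern are invented): each metric starts with an anomaly score equal to the fraction of its incident invariant links that are broken, then scores are smoothed with neighbour scores for a few rounds, loosely mimicking message passing.

```python
# Invented 4-metric invariant network; True marks a broken invariant link.
edges = {
    ("m1", "m2"): True,
    ("m1", "m3"): True,
    ("m2", "m3"): False,
    ("m3", "m4"): False,
}

nodes = sorted({n for pair in edges for n in pair})
neighbors = {n: [] for n in nodes}
for (a, b), broken in edges.items():
    neighbors[a].append((b, broken))
    neighbors[b].append((a, broken))

# Initial anomaly score: fraction of incident links that are broken
score = {n: sum(broken for _, broken in neighbors[n]) / len(neighbors[n])
         for n in nodes}

for _ in range(3):  # fixed-point smoothing in place of true BP messages
    score = {n: 0.5 * score[n]
                + 0.5 * sum(score[m] for m, _ in neighbors[n]) / len(neighbors[n])
             for n in nodes}

ranking = sorted(nodes, key=score.get, reverse=True)
print(ranking)  # most anomalous metric first
```

Here m1, touching two broken links, ranks first, which is the kind of guidance the ranked list is meant to give an operator.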
Evolution of groundwater chemistry along fault structures in sandstone
NASA Astrophysics Data System (ADS)
Dausse, A.; Guiheneuf, N.; Pierce, A. A.; Cherry, J. A.; Parker, B. L.
2016-12-01
Fluid-rock interaction across geological structures plays a major role in the evolution of groundwater chemistry and the physical properties of reservoirs. In particular, groundwater chemistry evolves toward different facies according to residence times, which can be linked to the hydraulic properties of the geological unit. In this study, we analyze groundwater samples collected at an 11 km² site located in southern California (USA) to evaluate the evolution of groundwater chemistry across different geological structures. Major and minor elements were sampled over the same period from 40 wells located along the main structures in the northeast of the site, where major NE-SW trending faults and others oriented ESE-WNW are present in the sandstone Chatsworth Formation. By analyzing the spatial distribution of ion concentrations at the site scale, several hydrochemical compartments (main and sub-compartments) can be distinguished, in agreement with structural and hydrological information. In particular, as previously observed from piezometric information, the shear zone fault acts as a barrier to groundwater flow and separates the site into two main compartments. In addition, analysis along major faults oriented orthogonal to this shear zone (ESE-WNW) in the eastern part of the site shows an increase in mineralization following the hydraulic gradient. This salinization has been confirmed by ionic ratios and Gibbs plots and is attributed to fluid-rock interaction processes. In particular, groundwater chemistry seems to evolve from a bicarbonate to a sodium facies. Moreover, the concentration gradients vary depending on fault location and can be related to the faults' hydraulic properties, and hence to different characteristic times from point to point. To conclude, major faults across the site display different degrees of groundwater chemistry evolution, linked to their physical properties, which may in turn have a large impact on contaminant transport and attenuation.
NASA Astrophysics Data System (ADS)
Yang, Wen-Xian
2006-05-01
Available machine fault diagnostic methods perform unsatisfactorily in both on-line and intelligent analyses because their operation involves intensive calculation and is labour intensive. To improve this situation, this paper describes the development of an intelligent approach using the Genetic Programming (GP) method. Owing to the simplicity of the constructed mathematical model, different kinds of machine faults may be diagnosed correctly and quickly. Moreover, human input is significantly reduced in the process of fault diagnosis. The effectiveness of the proposed strategy is validated by an illustrative example, in which three kinds of valve states inherent in a six-cylinder, four-stroke diesel engine, i.e. normal condition, valve-tappet clearance and gas leakage faults, are identified. In the example, 22 mathematical functions have been specially designed and 8 easily obtained signal features are used to construct the diagnostic model. Unlike existing GPs, the diagnostic tree used in the algorithm is constructed in an intelligent way by applying a power-weight coefficient to each feature. The power-weight coefficients vary adaptively between 0 and 1 during the evolutionary process. Moreover, different evolutionary strategies are employed for selecting the diagnostic features and functions, respectively, so that the mathematical functions are fully utilized while repeated use of signal features is avoided. The experimental results are illustrated diagrammatically.
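The power-weight coefficient idea can be illustrated in isolation (feature values and weights are invented; in the paper the weights evolve adaptively during the GP run rather than being fixed):

```python
# A feature f enters the diagnostic model as f ** w with w in [0, 1]:
# w = 0 mutes the feature (f ** 0 == 1), w = 1 uses it fully, and
# intermediate values attenuate it. Inputs below are invented.

def weighted_features(features, weights):
    return [f ** w for f, w in zip(features, weights)]

print(weighted_features([4.0, 9.0, 2.0], [1.0, 0.5, 0.0]))  # [4.0, 3.0, 1.0]
```

Letting the evolutionary process tune these exponents gives a smooth, continuous form of feature selection inside each candidate diagnostic tree.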
NASA Astrophysics Data System (ADS)
Okumura, K.
2011-12-01
Accurate location and geometry of seismic sources are critical to estimating strong ground motion. A complete and precise rupture history is also critical to estimating the probability of future events. In order to better forecast future earthquakes and to reduce seismic hazards, we should consider all options and choose the most likely parameters. Multiple options for logic trees are acceptable only after thorough examination of contradicting estimates, and should not result from easy compromise or suspension of judgment. In the process of preparing and revising the Japanese probabilistic and deterministic earthquake hazard maps by the Headquarters for Earthquake Research Promotion since 1996, many decisions were made to select plausible parameters, but many contradicting estimates have been left without thorough examination. There are several highly active faults in central Japan, such as the Itoigawa-Shizuoka Tectonic Line active fault system (ISTL), the West Nagano Basin fault system (WNBF), the Inadani fault system (INFS), and the Atera fault system (ATFS). The highest slip rate and the shortest recurrence interval are ~1 cm/yr and 500 to 800 years, respectively, and the estimated maximum magnitude is 7.5 to 8.5. These faults are very hazardous because almost the entire population and industry are located above the faults within tectonic depressions. As to fault location, most uncertainty arises from the interpretation of geomorphic features. Geomorphological interpretation without geological and structural insight often leads to wrong mapping. Though assuming a longer, possibly non-existent fault may be the safer estimate, incorrectness harms the reliability of the forecast. This does not greatly affect strong-motion estimates, but it is misleading for surface displacement issues. Fault geometry, on the other hand, is very important for estimating intensity distribution. For the middle portion of the ISTL, fast left-lateral strike-slip of up to 1 cm/yr is obvious.
Recent seismicity, possibly induced by the 2011 Tohoku earthquake, shows pure strike-slip. However, thrusts are modeled from seismic profiles and gravity anomalies. Therefore, two contradicting models are presented for strong-motion estimates. There should be a unique solution for the geometry, which will be discussed. As to the rupture history, there is plenty of paleoseismological evidence that supports segmentation of the faults above. However, in most fault zones, the largest and sometimes possibly less frequent earthquakes are modeled. Segmentation and modeling of coming earthquakes should be more carefully examined without leaving them in contradiction.
Material and Stress Rotations: Anticipating the 1992 Landers, CA Earthquake
NASA Astrophysics Data System (ADS)
Nur, A. M.
2014-12-01
"Rotations make nonsense of the two-dimensional reconstructions that are still so popular among structural geologists". (McKenzie, 1990, p. 109-110) I present a comprehensive tectonic model for the strike-slip fault geometry, seismicity, material rotation, and stress rotation, in which new, optimally oriented faults can form when older ones have rotated about a vertical axis out of favorable orientations. The model was successfully tested in the Mojave region using stress rotation and three independent data sets: the alignment of epicenters and fault plane solutions from the six largest central Mojave earthquakes since 1947, material rotations inferred from paleomagnetic declination anomalies, and rotated dike strands of the Independence dike swarm. The model led not only to the anticipation of the 1992 M7.3 Landers, CA earthquake but also accounts for the great complexity of the faulting and seismicity of this event. The implication of this model for crustal deformation in general is that rotations of material (faults and the blocks between them) and of stress provide the key link between the complexity of faults systems in-situ and idealized mechanical theory of faulting. Excluding rotations from the kinematical and mechanical analysis of crustal deformation makes it impossible to explain the complexity of what geologists see in faults, or what seismicity shows us about active faults. However, when we allow for rotation of material and stress, Coulomb's law becomes consistent with the complexity of faults and faulting observed in situ.
NASA Technical Reports Server (NTRS)
Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
This paper describes the core framework used to implement a Goal-Function Tree (GFT) based systems engineering process using the Systems Modeling Language (SysML). It builds on the theoretical approach described in the InfoTech 2013 ISHM paper titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management" presented by Dr. Stephen B. Johnson. The principles in this paper describe extensions of the SysML language as a baseline in order to: hierarchically describe a system, describe that system functionally within success space, and allocate detection mechanisms to success functions for system protection.
Earthquake Rupture Forecast of M>= 6 for the Corinth Rift System
NASA Astrophysics Data System (ADS)
Scotti, O.; Boiselet, A.; Lyon-Caen, H.; Albini, P.; Bernard, P.; Briole, P.; Ford, M.; Lambotte, S.; Matrullo, E.; Rovida, A.; Satriano, C.
2014-12-01
Fourteen years of multidisciplinary observations and data collection in the Western Corinth Rift (WCR) near-fault observatory have recently been synthesized (Boiselet, Ph.D. 2014) for the purpose of providing earthquake rupture forecasts (ERF) of M>=6 in the WCR. The main contribution of this work consisted of paving the way toward a "community-based" fault model reflecting the level of knowledge gathered thus far by the WCR working group. The most relevant available data used for this exercise are: - onshore/offshore fault traces, based on geological and high-resolution seismics, revealing a complex network of E-W striking, ~10 km long fault segments; microseismicity recorded by a dense network ( > 60000 events; 1.5
Joshuva, A; Sugumaran, V
2017-03-01
Wind energy is one of the important renewable energy resources available in nature. It is a major resource for energy production because of its dependability, owing to technological development and relatively low cost. Wind energy is converted into electrical energy using rotating blades. Due to environmental conditions and the large structure, the blades are subjected to various vibration forces that may damage them. This leads to losses in energy production and to turbine shutdown. The downtime can be reduced when the blades are monitored continuously using structural health condition monitoring. Blade fault diagnosis is treated as a pattern recognition problem consisting of three phases: feature extraction, feature selection, and feature classification. In this study, statistical features were extracted from vibration signals, feature selection was carried out using a J48 decision tree algorithm, and feature classification was performed using the best-first tree and functional trees algorithms. The better-performing algorithm is suggested for fault diagnosis of wind turbine blades. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
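The three-phase pipeline described above (statistical feature extraction followed by tree-based classification) can be sketched as follows. This is a minimal illustration using synthetic vibration signals and an off-the-shelf scikit-learn decision tree; it is not the authors' J48/best-first/functional-tree setup or their dataset.

```python
import numpy as np
from scipy import stats
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def statistical_features(signal):
    """Common statistical descriptors of a vibration signal."""
    return np.array([
        signal.mean(),
        signal.std(),
        stats.skew(signal),
        stats.kurtosis(signal),          # impulsive faults raise kurtosis
        np.sqrt(np.mean(signal ** 2)),   # RMS
        signal.max() - signal.min(),     # peak-to-peak
    ])

def make_signal(faulty):
    """Synthetic vibration: healthy = Gaussian noise; faulty adds sparse impacts."""
    s = rng.normal(0.0, 1.0, 2048)
    if faulty:
        s += rng.normal(0.0, 3.0, 2048) * (rng.random(2048) < 0.02)
    return s

labels = np.array([0, 1] * 100)
X = np.array([statistical_features(make_signal(f)) for f in labels])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

On this synthetic data the kurtosis feature alone separates the classes well, which is why tree classifiers are a natural fit for impulsive blade faults.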
NASA Astrophysics Data System (ADS)
Katopody, D. T.; Oldow, J. S.
2015-12-01
The northwest-striking Furnace Creek - Fish Lake Valley (FC-FLV) fault system stretches for >250 km from southeastern California to western Nevada, forms the eastern boundary of the northern segment of the Eastern California Shear Zone, and has contemporary displacement. The FC-FLV fault system initiated in the mid-Miocene (10-12 Ma) and shows a south to north decrease in displacement from a maximum of 75-100 km to less than 10 km. Coeval elongation by extension on north-northeast striking faults within the adjoining blocks to the FC-FLV fault both supply and remove cumulative displacement measured at the northern end of the transcurrent fault system. Elongation and displacement transfer in the eastern block, constituting the southern Walker Lane of western Nevada, exceeds that of the western block and results in the net south to north decrease in displacement on the FC-FLV fault system. Elongation in the eastern block is accommodated by late Miocene to Pliocene detachment faulting followed by extension on superposed, east-northeast striking, high-angle structures. Displacement transfer from the FC-FLV fault system to the northwest-trending faults of the central Walker Lane to the north is accomplished by motion on a series of west-northwest striking transcurrent faults, named the Oriental Wash, Sylvania Mountain, and Palmetto Mountain fault systems. The west-northwest striking transcurrent faults cross-cut earlier detachment structures and are kinematically linked to east-northeast high-angle extensional faults. The transcurrent faults are mapped along strike for 60 km to the east, where they merge with north-northwest faults forming the eastern boundary of the southern Walker Lane. The west-northwest trending transcurrent faults have 30-35 km of cumulative left-lateral displacement and are a major contributor to the decrease in right-lateral displacement on the FC-FLV fault system.
Fault-scale controls on rift geometry: the Bilila-Mtakataka Fault, Malawi
NASA Astrophysics Data System (ADS)
Hodge, M.; Fagereng, A.; Biggs, J.; Mdala, H. S.
2017-12-01
Border faults that develop during initial stages of rifting determine the geometry of rifts and passive margins. At outcrop and regional scales, it has been suggested that border fault orientation may be controlled by reactivation of pre-existing weaknesses. Here, we perform a multi-scale investigation on the influence of anisotropic fabrics along a major developing border fault in the southern East African Rift, Malawi. The 130 km long Bilila-Mtakataka fault has been proposed to have slipped in a single MW 8 earthquake with 10 m of normal displacement. The fault is marked by an 11±7 m high scarp with an average trend that is oblique to the current plate motion. Variations in scarp height are greatest at lithological boundaries and where the scarp switches between following and cross-cutting high-grade metamorphic foliation. Based on the scarp's geometry and morphology, we define 6 geometrically distinct segments. We suggest that the segments link to at least one deeper structure that strikes parallel to the average scarp trend, an orientation consistent with the kinematics of an early phase of rift initiation. The slip required on a deep fault(s) to match the height of the current scarp suggests multiple earthquakes along the fault. We test this hypothesis by studying the scarp morphology using high-resolution satellite data. Our results suggest that during the earthquake(s) that formed the current scarp, the propagation of the fault toward the surface locally followed moderately-dipping foliation well oriented for reactivation. In conclusion, although well oriented pre-existing weaknesses locally influence shallow fault geometry, large-scale border fault geometry appears primarily controlled by the stress field at the time of fault initiation.
NASA Astrophysics Data System (ADS)
Zhao, H.; Wu, L.; Xiao, A.
2016-12-01
We present a detailed structural analysis of the fault geometry and Cenozoic development of the Dongping area, northwestern Qaidam Basin, based on precise 3-D seismic interpretation, remote sensing images, and seismic attribute analysis. Two conflicting fault systems with different orientations (EW-striking and NNW-striking) and opposing senses of shear are recognized and discussed, and the interaction between them provides new insights into the intracontinental deformation of the Qaidam Basin within the NE Tibetan Plateau. The EW-striking fault system constitutes the southern part of the Altyn left-slip positive flower structure. Faulting on the EW-striking faults has dominated the northwestern Qaidam since ~40 Ma in response to the inception of the Altyn Tagh fault system as a ductile shear zone, tilting the south slope of the Altyn Tagh. The NNW-striking fault system, in contrast, became the dominant structure after the mid-Miocene (~15 Ma), induced by large-scale strike-slip motion on the Altyn Tagh fault, which led to NE-SW directed compression of the Qaidam Basin. This implies a structural conversion within the NE Tibetan Plateau since the mid-Miocene (~15 Ma). Interestingly, the preexisting faults possibly restrained the development of the later faults, while the latter tended to track and link to the former fault traces. Taking the large-scale sinistral strike-slip East Kunlun fault system into account, the late Cenozoic intracontinental deformation in the Qaidam Basin, which shows a dextral transpressional character, is suggested to be the consequence of the combined effect of its two bordering sinistral strike-slip faults, which further favors a continuous, lateral-extrusion mechanism for the growth of the NE Tibetan Plateau.
Probabilistic Seismic Hazard Assessment of the Chiapas State (SE Mexico)
NASA Astrophysics Data System (ADS)
Rodríguez-Lomelí, Anabel Georgina; García-Mayordomo, Julián
2015-04-01
The Chiapas State, in southeastern Mexico, is a very active seismic region due to the interaction of three tectonic plates: North America, Cocos, and Caribbean. We present a probabilistic seismic hazard assessment (PSHA) specifically performed to evaluate seismic hazard in the Chiapas State. The PSHA was based on a composite seismic catalogue homogenized to Mw and used a logic-tree procedure for the consideration of different seismogenic source models and ground motion prediction equations (GMPEs). The results were obtained in terms of peak ground acceleration as well as spectral accelerations. The earthquake catalogue was compiled from the International Seismological Centre and the Servicio Sismológico Nacional de México sources. Two different seismogenic source zone (SSZ) models were devised based on a revision of the tectonics of the region and the available geomorphological and geological maps. The SSZs were finally defined by the analysis of geophysical data, resulting in two main SSZ models. The Gutenberg-Richter parameters for each SSZ were calculated from the declustered and homogenized catalogue, while the maximum expected earthquake was assessed from both the catalogue and geological criteria. Several worldwide and regional GMPEs for subduction and crustal zones were reviewed. For each SSZ model we considered four possible combinations of GMPEs. Finally, hazard was calculated in terms of PGA and SA for 500-, 1000-, and 2500-year return periods for each branch of the logic tree using the CRISIS2007 software. The final hazard maps represent the mean values obtained from the two seismogenic and four attenuation models considered in the logic tree. For the three return periods analyzed, the maps locate the most hazardous areas in the Chiapas Central Pacific Zone, the Pacific Coastal Plain, and the Motagua and Polochic Fault Zone; intermediate hazard values occur in the Chiapas Batholith Zone and in the Strike-Slip Faults Province.
The hazard decreases towards the northeast across the Reverse Faults Province and up to the Yucatan Platform, where the lowest values are reached. We also produced uniform hazard spectra (UHS) for the three main cities of Chiapas. Tapachula presents the highest spectral accelerations, while Tuxtla Gutiérrez and San Cristóbal de las Casas show similar values. We conclude that seismic hazard in Chiapas is chiefly controlled by the subduction of the Cocos plate beneath the North America and Caribbean plates, which makes the coastal areas the most hazardous. Additionally, the Motagua and Polochic Fault Zones are also important, increasing the hazard particularly in southeastern Chiapas.
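The Gutenberg-Richter source characterization mentioned above can be illustrated with a short sketch. The catalogue below is synthetic, and the maximum-likelihood b-value estimator (Aki/Utsu) is a standard choice; the paper's actual catalogue, zonation, and CRISIS2007 inputs are not reproduced here.

```python
import numpy as np

def gr_b_value(mags, m_min, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= m_min,
    with the usual half-bin correction dm/2 for binned magnitudes."""
    m = np.asarray(mags)
    m = m[m >= m_min]
    b = np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))
    return b, m.size

def annual_exceedance_rate(n_events, years, b, m_min, m):
    """Annual rate of M >= m, anchored at the observed rate of M >= m_min."""
    return (n_events / years) * 10.0 ** (-b * (m - m_min))

# Synthetic 40-year catalogue with a true b-value of ~1
rng = np.random.default_rng(1)
mags = 4.0 + rng.exponential(scale=1.0 / np.log(10.0), size=500)
b, n = gr_b_value(mags, m_min=4.0)
rate_m6 = annual_exceedance_rate(n, 40.0, b, 4.0, 6.0)
print(f"b = {b:.2f}, annual rate of M >= 6: {rate_m6:.3f}")
```

In a full PSHA these recurrence rates would be combined with GMPEs and integrated over all sources, with the logic tree weighting the alternative source and attenuation models.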
NASA Astrophysics Data System (ADS)
Charalambakis, E.; Hauber, E.; Knapmeyer, M.; Grott, M.; Gwinner, K.
2007-08-01
For Earth, data sets and models have shown that for a fault loaded by a constant remote stress, the maximum displacement on the fault is linearly related to its length by d = gamma · l [1]. The scaling and structure are self-similar through time [1]. The displacement-length relationship can provide useful information about the tectonic regime. We intend to use it to estimate the seismic moment released during the formation of Martian fault systems and to improve the seismicity model [2]. Only a few data sets have been measured for extraterrestrial faults. One reason is the limited number of reliable topographic data sets. We used high-resolution Digital Elevation Models (DEM) [3] derived from HRSC image data taken from Mars Express orbit 1437. This orbit covers an area in the Acheron Fossae region, a rift-like graben system north of Olympus Mons with a "banana"-shaped topography [4]. It has a fault trend which runs approximately WNW-ESE. With an interactive IDL-based software tool [5] we measured the fault length and the vertical offset for 34 faults. We evaluated the height profile by plotting the fault lengths l vs. their observed maximum displacement (dmax-model). Additionally, we computed the maximum displacement of an elliptical fault scarp whose plane has the same area as in the observed case (elliptical model). The integration over the entire fault length necessary for the computation of the area suppresses the "noise" introduced by local topographic effects like erosion or cratering. We should also mention that fault planes dipping 60° are usually assumed for Mars [e.g., 6] and even shallower dips have been found for normal fault planes [7]. This dip angle is used to compute displacement from vertical offset via d = h/sin α, where h is the observed topographic step height and α is the fault dip angle. If a fault dip of 30° is considered instead, the computed displacement differs by ~40% from that for a 60° dip.
Depending on the data quality, especially the lighting conditions in the region, different errors can be made in determining the various values. Based on our experience, we estimate that the error in measuring the length of a fault is smaller than 10% and that the measurement error of the offset is smaller than 5%. Furthermore, the horizontal resolution of the HRSC images is 12.5 m/pixel or 25 m/pixel, and that of the DEM derived from HRSC images is 50 m/pixel because of re-sampling. That means that image resolution does not introduce a significant error at fault lengths in the kilometer range. For Mars it is known that linkage is an essential process in the growth of fault populations [8]. We obtained the d/l values from selected examples of faults that were connected via a relay ramp. The error of ignoring an existing fault linkage is 20% to 50% if the elliptical fault model is used and 30% to 50% if only the dmax value is used to determine d/l. This shows an advantage of the elliptical model. The error increases if more faults are linked, because the underestimation of the relevant length gets worse the longer the linked system is. We obtained a value of gamma = d/l of about 2 · 10^-2 for the elliptical model and a value of approximately 2.7 · 10^-2 for the dmax model. The data show a relatively large scatter, but they can be compared to data from terrestrial faults (d/l ≈ 1 · 10^-2 to 5 · 10^-2; [9] and references therein). In a first inspection of the Acheron Fossae 2 region in orbit 1437 we could confirm our first observations [10]. If we consider fault linkage, the d/l values shift towards lower d/l ratios, since linkage means that d remains essentially constant, but l increases significantly. We will continue to measure other faults and obtain values for linked faults and relay ramps. References: [1] Cowie, P. A. and Scholz, C. H. (1992) JSG, 14, 1133-1148. [2] Knapmeyer, M. et al. (2006) JGR, 111, E11006. [3] Neukum, G. et al. (2004) ESA SP-1240, 17-35.
[4] Kronberg, P. et al. (2007) J. Geophys. Res., 112, E04005, doi:10.1029/2006JE002780. [5] Hauber, E. et al. (2007) LPSC, XXXVIII, abstract 1338. [6] Wilkins, S. J. et al. (2002) GRL, 29, 1884, doi: 10.1029/2002GL015391. [7] Fueten, F. et al. (2007) LPSC, XXXVIII, abstract 1388. [8] Schultz, R. A. (2000) Tectonophysics, 316, 169-193. [9] Schultz, R. A. et al. (2006) JSG, 28, 2182-2193. [10] Hauber, E. et al. (2007) 7th Mars Conference, submitted.
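The two pieces of arithmetic used in this abstract, converting a vertical scarp offset h to dip-slip displacement via d = h/sin α and forming the ratio gamma = d/l, can be sketched as follows. The numbers are illustrative, not measured values from the study.

```python
import math

def displacement_from_offset(h_m, dip_deg):
    """Dip-slip displacement d = h / sin(alpha) from step height h and fault dip."""
    return h_m / math.sin(math.radians(dip_deg))

def gamma(d_m, l_m):
    """Displacement-length ratio d/l (dimensionless)."""
    return d_m / l_m

h = 500.0  # illustrative vertical offset, m
d60 = displacement_from_offset(h, 60.0)
d30 = displacement_from_offset(h, 30.0)
print(f"d(60 deg) = {d60:.0f} m, d(30 deg) = {d30:.0f} m")
print(f"relative difference: {(d30 - d60) / d30:.0%}")  # ~42%, cf. the ~40% quoted above
print(f"d/l for a 25-km-long fault: {gamma(d60, 25e3):.1e}")
```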
China Report, Science and Technology, No. 197.
1983-05-13
eucalyptus trees both make excellent lumber for ship building. Bark from the casuarina equisetifolia contains 13-18 percent tannic acid which can be... NOTE: JPRS publications contain information primarily from foreign newspapers, periodicals and books, but also from news agency transmissions... depression zone, Hainan Island-easterly extension of Hainan Island-Dongsha continental slope uplift zone, northern Xisha Islands faulted trough and Zhongsha
1981-01-01
are applied to determine what system states (usually failed states) are possible; deductive methods are applied to determine how a given system state...Similar considerations apply to the single failures of CVA, BVB and CVB and this important additional information has been displayed in the principal...way. The point "maximum tolerable failure" corresponds to the survival point of the company building the aircraft. Above that point, only intolerable
Links to Literature--Huge Trees, Small Drawings: Ideas of Relative Sizes.
ERIC Educational Resources Information Center
Burton, Gail
1996-01-01
Discusses a unit integrating science, mathematics, and environmental education centered around "The Great Kapok Tree," by Lynne Cherry (1990). Ratios are used to make scale drawings of trees in a rain forest. Other activities include a terrarium and problem-solving activities based on eating habits of rain forest animals. (KMC)
Structural Mapping Along the Central San Andreas Fault-zone Using Airborne Electromagnetics
NASA Astrophysics Data System (ADS)
Zamudio, K. D.; Bedrosian, P.; Ball, L. B.
2017-12-01
Investigations of active fault zones typically focus on either surface expressions or the associated seismogenic zones. However, the largely aseismic upper kilometer can hold significant insight into fault-zone architecture, strain partitioning, and fault-zone permeability. Geophysical imaging of the first kilometer provides a link between surface fault mapping and seismically-defined fault zones and is particularly important in geologically complex regions with limited surface exposure. Additionally, near surface imaging can provide insight into the impact of faulting on the hydrogeology of the critical zone. Airborne electromagnetic (AEM) methods offer a unique opportunity to collect a spatially-large, detailed dataset in a matter of days, and are used to constrain subsurface resistivity to depths of 500 meters or more. We present initial results from an AEM survey flown over a 60 kilometer long segment of the central San Andreas Fault (SAF). The survey is centered near Parkfield, California, the site of the SAFOD drillhole, which marks the transition between a creeping fault segment to the north and a locked zone to the south. Cross sections with a depth of investigation up to approximately 500 meters highlight the complex Tertiary and Mesozoic geology that is dismembered by the SAF system. Numerous fault-parallel structures are imaged across a more than 10 kilometer wide zone centered on the surface trace. Many of these features can be related to faults and folds within Plio-Miocene sedimentary rocks found on both sides of the fault. Northeast of the fault, rocks of the Mesozoic Franciscan and Great Valley complexes are extremely heterogeneous, with highly resistive volcanic rocks within a more conductive background. The upper 300 meters of a prominent fault-zone conductor, previously imaged to 1-3 kilometers depth by magnetotellurics, is restricted to a 20 kilometer long segment of the fault, but is up to 4 kilometers wide in places. 
Elevated fault-zone conductivity may be related to damage within the fault zone, Miocene marine shales, or some combination of the two.
Dickinson, William R.; Ducea, M.; Rosenberg, Lewis I.; Greene, H. Gary; Graham, Stephan A.; Clark, Joseph C.; Weber, Gerald E.; Kidder, Steven; Ernst, W. Gary; Brabb, Earl E.
2005-01-01
Reinterpretation of onshore and offshore geologic mapping, examination of a key offshore well core, and revision of cross-fault ties indicate Neogene dextral strike slip of 156 ± 4 km along the San Gregorio–Hosgri fault zone, a major strand of the San Andreas transform system in coastal California. Delineating the full course of the fault, defining net slip across it, and showing its relationship to other major tectonic features of central California helps clarify the evolution of the San Andreas system. San Gregorio–Hosgri slip rates over time are not well constrained, but were greater than at present during early phases of strike slip following fault initiation in late Miocene time. Strike slip took place southward along the California coast from the western flank of the San Francisco Peninsula to the Hosgri fault in the offshore Santa Maria basin without significant reduction by transfer of strike slip into the central California Coast Ranges. Onshore coastal segments of the San Gregorio–Hosgri fault include the Seal Cove and San Gregorio faults on the San Francisco Peninsula, and the Sur and San Simeon fault zones along the flank of the Santa Lucia Range. Key cross-fault ties include porphyritic granodiorite and overlying Eocene strata exposed at Point Reyes and at Point Lobos, the Nacimiento fault contact between Salinian basement rocks and the Franciscan Complex offshore within the outer Santa Cruz basin and near Esalen on the flank of the Santa Lucia Range, Upper Cretaceous (Campanian) turbidites of the Pigeon Point Formation on the San Francisco Peninsula and the Atascadero Formation in the southern Santa Lucia Range, assemblages of Franciscan rocks exposed at Point Sur and at Point San Luis, and a lithic assemblage of Mesozoic rocks and their Tertiary cover exposed near Point San Simeon and at Point Sal, as restored for intrabasinal deformation within the onshore Santa Maria basin. Slivering of the Salinian block by San Gregorio–Hosgri displacements
elongated its northern end and offset its western margin delineated by the older Nacimiento fault, a sinistral strike-slip fault of latest Cretaceous to Paleocene age. North of its juncture with the San Andreas fault, dextral slip along the San Gregorio–Hosgri fault augments net San Andreas displacement. Alternate restorations of the Gualala block imply that nearly half the net San Gregorio–Hosgri slip was accommodated along the offshore Gualala fault strand lying west of the Gualala block, which is bounded on the east by the current master trace of the San Andreas fault. With San Andreas and San Gregorio–Hosgri slip restored, there remains an unresolved proto–San Andreas mismatch of ∼100 km between the offset northern end of the Salinian block and the southern end of the Sierran-Tehachapi block. On the south, San Gregorio–Hosgri strike slip is transposed into crustal shortening associated with vertical-axis tectonic rotation of fault-bounded crustal panels that form the western Transverse Ranges, and with kinematically linked deformation within the adjacent Santa Maria basin. The San Gregorio–Hosgri fault serves as the principal link between transrotation in the western Transverse Ranges and strike slip within the San Andreas transform system of central California.
Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang
2017-01-01
By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme that takes both robot kinematics and robot dynamics into account is presented and investigated for fault-tolerant motion planning of redundant manipulators. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration-level fault-tolerant scheme, the resultant QP, and the corresponding discrete-time recurrent neural network. PMID:28955217
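A generic sketch of the kind of computation the abstract describes: a bound- and equality-constrained QP solved by a simple discrete-time primal-dual projection iteration, the style of update commonly implemented as a recurrent neural network. The toy problem and iteration below are illustrative assumptions, not the authors' network or their manipulator formulation.

```python
import numpy as np

def solve_qp_projection(Q, c, A, b, lb, ub, h=0.05, iters=5000):
    """Primal-dual projected iteration for
       min 1/2 x'Qx + c'x  s.t.  A x = b,  lb <= x <= ub."""
    x = np.zeros(Q.shape[0])
    lam = np.zeros(A.shape[0])
    for _ in range(iters):
        # primal step: projected gradient descent on the Lagrangian
        x = np.clip(x - h * (Q @ x + c + A.T @ lam), lb, ub)
        # dual step: ascent on the equality-constraint residual
        lam = lam + h * (A @ x - b)
    return x

# Toy QP: minimize 1/2 ||x||^2 subject to x1 + x2 = 1, 0 <= x <= 1
Q = np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = solve_qp_projection(Q, c, A, b, lb=np.zeros(2), ub=np.ones(2))
print(x)  # converges to [0.5, 0.5]
```

In the manipulator setting, Q and c would encode the kinematic/dynamic minimization objective, A x = b the end-effector task constraint, and the bounds the joint limits after a fault.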
NASA Astrophysics Data System (ADS)
Swain, Snehaprava; Ray, Pravat Kumar
2016-12-01
In this paper, a three-phase fault analysis is performed on a DFIG-based grid-integrated wind energy system. A Novel Active Crowbar Protection (NACB_P) system is proposed to enhance the fault ride-through (FRT) capability of the DFIG for both symmetrical and unsymmetrical grid faults, thereby improving the power quality of the system. Unlike the conventional crowbar (CB), which has only resistors, the proposed protection scheme is designed with a capacitor in series with the resistor. The major function of the capacitor in the protection circuit is to eliminate the ripples generated in the rotor current and to protect the converter as well as the DC-link capacitor. It also compensates the reactive power required by the DFIG during the fault. Due to these advantages, the proposed scheme enhances the FRT capability of the DFIG and improves the power quality of the whole system. Experimentally, the fault analysis is performed on a 3 hp slip-ring induction generator; simulations are carried out on a 1.7 MVA DFIG-based WECS under different types of grid faults in MATLAB/Simulink, and the functionality of the proposed scheme is verified.
Earthquake and volcano clustering via stress transfer at Yucca Mountain, Nevada
Parsons, T.; Thompson, G.A.; Cogbill, A.H.
2006-01-01
The proposed national high-level nuclear waste repository at Yucca Mountain is close to Quaternary cinder cones and faults with Quaternary slip. Volcano eruption and earthquake frequencies are low, with indications of spatial and temporal clustering, making probabilistic assessments difficult. In an effort to identify the most likely intrusion sites, we based a three-dimensional finite-element model on the expectation that faulting and basalt intrusions are sensitive to the magnitude and orientation of the least principal stress in extensional terranes. We found that in the absence of fault slip, variation in overburden pressure caused a stress state that preferentially favored intrusions at Crater Flat. However, when we allowed central Yucca Mountain faults to slip in the model, we found that magmatic clustering was not favored at Crater Flat or in the central Yucca Mountain block. Instead, we calculated that the stress field was most encouraging to intrusions near fault terminations, consistent with the location of the most recent volcanism at Yucca Mountain, the Lathrop Wells cone. We found this linked fault and magmatic system to be mutually reinforcing in the model in that Lathrop Wells feeder dike inflation favored renewed fault slip. © 2006 Geological Society of America.
Ryan, Holly F.; Conrad, James E.; Paull, C.K.; McGann, Mary
2012-01-01
The San Diego trough fault zone (SDTFZ) is part of a 90-km-wide zone of faults within the inner California Borderland that accommodates motion between the Pacific and North American plates. Along with most faults offshore southern California, the slip rate and paleoseismic history of the SDTFZ are unknown. We present new seismic reflection data that show that the fault zone steps across a 5-km-wide stepover to continue for an additional 60 km north of its previously mapped extent. The 1986 Oceanside earthquake swarm is located within the 20-km-long restraining stepover. Farther north, at the latitude of Santa Catalina Island, the SDTFZ bends 20° to the west and may be linked via a complex zone of folds with the San Pedro basin fault zone (SPBFZ). In a cooperative program between the U.S. Geological Survey (USGS) and the Monterey Bay Aquarium Research Institute (MBARI), we measure and date the coseismic offset of a submarine channel that intersects the fault zone near the SDTFZ–SPBFZ junction. We estimate a horizontal slip rate of about 1.5 ± 0.3 mm/yr over the past 12,270 yr.
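The slip-rate arithmetic behind the final sentence (rate = offset / age, with first-order uncertainty propagation) can be sketched as follows; the channel offset and its uncertainty below are assumed values chosen only to be consistent with the reported 1.5 ± 0.3 mm/yr over 12,270 yr, not the paper's measurements.

```python
import math

def slip_rate_mm_per_yr(offset_m, offset_err_m, age_yr, age_err_yr):
    """Slip rate = offset / age, with first-order (relative) error propagation."""
    rate = offset_m / age_yr * 1000.0
    rel_err = math.hypot(offset_err_m / offset_m, age_err_yr / age_yr)
    return rate, rate * rel_err

# Assumed offset of 18.4 +/- 3.5 m over 12,270 +/- 500 yr (illustrative values)
rate, err = slip_rate_mm_per_yr(18.4, 3.5, 12270.0, 500.0)
print(f"{rate:.1f} +/- {err:.1f} mm/yr")
```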
Integral Sensor Fault Detection and Isolation for Railway Traction Drive.
Garramiola, Fernando; Del Olmo, Jon; Poza, Javier; Madina, Patxi; Almandoz, Gaizka
2018-05-13
Due to the increasing importance of reliability and availability of electric traction drives in Railway applications, early detection of faults has become an important key for Railway traction drive manufacturers. Sensor faults are important sources of failures. Among the different fault diagnosis approaches, in this article an integral diagnosis strategy for sensors in traction drives is presented. Such strategy is composed of an observer-based approach for direct current (DC)-link voltage and catenary current sensors, a frequency analysis approach for motor current phase sensors and a hardware redundancy solution for speed sensors. None of them requires any hardware change requirement in the actual traction drive. All the fault detection and isolation approaches have been validated in a Hardware-in-the-loop platform comprising a Real Time Simulator and a commercial Traction Control Unit for a tram. In comparison to safety-critical systems in Aerospace applications, Railway applications do not need instantaneous detection, and the diagnosis is validated in a short time period for reliable decision. Combining the different approaches and existing hardware redundancy, an integral fault diagnosis solution is provided, to detect and isolate faults in all the sensors installed in the traction drive.
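A minimal sketch of the observer-based idea described for the DC-link voltage sensor: a discrete Luenberger-style observer tracks a simple first-order model, and a residual threshold flags the sensor fault. The model, gains, threshold, and stuck-sensor fault below are all illustrative assumptions, not the authors' traction-drive design.

```python
import numpy as np

# First-order DC-link voltage model x[k+1] = a*x[k] + b*u[k] (assumed)
a, b_in, L = 0.99, 0.01, 0.5   # model parameters and observer gain (illustrative)
threshold = 5.0                # residual threshold, volts (illustrative)

rng = np.random.default_rng(2)
x = x_hat = 600.0              # true voltage and observer estimate
fault_flags = []
for k in range(400):
    u = 600.0                                       # regulated setpoint input
    x = a * x + b_in * u + rng.normal(0.0, 0.2)     # plant with process noise
    y = x if k < 200 else 550.0                     # sensor sticks at k = 200
    residual = abs(y - x_hat)                       # innovation before update
    fault_flags.append(residual > threshold)
    x_hat = a * x_hat + b_in * u + L * (y - x_hat)  # Luenberger-style observer

print("first flagged step:", fault_flags.index(True))  # detects the fault at k = 200
```

Because the observer is driven by the model as well as the measurement, a sudden sensor fault produces a large transient residual even though the estimate later re-converges toward the faulty reading; detection therefore relies on the transient, as in the short validation window mentioned above.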
NASA Astrophysics Data System (ADS)
Zuza, Andrew V.; Yin, An
2016-05-01
Collision-induced continental deformation commonly involves complex interactions between strike-slip faulting and off-fault deformation, yet this relationship has rarely been quantified. In northern Tibet, Cenozoic deformation is expressed by the development of the > 1000-km-long east-striking left-slip Kunlun, Qinling, and Haiyuan faults. Each has a maximum slip in its central segment of tens of kilometers to ~100 km, but a much smaller slip magnitude (less than ~10% of the maximum slip) at its terminations. The along-strike variation of fault offsets and pervasive off-fault deformation create a strain pattern that departs from the expectations of the classic plate-like rigid-body motion and flow-like distributed deformation end-member models for continental tectonics. Here we propose a non-rigid bookshelf-fault model for the Cenozoic tectonic development of northern Tibet. Our model, quantitatively relating discrete left-slip faulting to distributed off-fault deformation during regional clockwise rotation, explains several puzzling features, including: (1) the clockwise rotation of east-striking left-slip faults against the northeast-striking left-slip Altyn Tagh fault along the northwestern margin of the Tibetan Plateau, (2) alternating fault-parallel extension and shortening in the off-fault regions, and (3) the eastward-tapering map-view geometries of the Qimen Tagh, Qaidam, and Qilian Shan thrust belts that link with the three major left-slip faults in northern Tibet. We refer to this specific non-rigid bookshelf-fault system as a passive bookshelf-fault system because the rotating bookshelf panels are detached from the rigid bounding domains. As a consequence, the wallrock of the strike-slip faults deforms to accommodate both the clockwise rotation of the left-slip faults and off-fault strain that arises at the fault ends.
An important implication of our model is that the style and magnitude of Cenozoic deformation in northern Tibet vary considerably in the east-west direction. Thus, any single north-south cross section and its kinematic reconstruction through the region do not properly quantify the complex deformational processes of plateau formation.
NASA Astrophysics Data System (ADS)
Neely, Thomas G.; Erslev, Eric A.
2009-09-01
Horizontally-shortened, basement-involved foreland orogens commonly exhibit anastomosing networks of bifurcating basement highs (here called arches) whose structural culminations are linked by complex transition zones of diversely-oriented faults and folds. The 3D geometry and kinematics of the southern Beartooth arch transition zone of north-central Wyoming were studied to understand the fold mechanisms and the controls on basement-involved arches. Data from 1581 slickensided minor faults are consistent with a single regional shortening direction of 065°. Evidence for oblique-slip, vertical axis rotations and stress refraction at anomalously-oriented folds suggests formation over reactivated pre-existing weaknesses. Restorable cross-sections and 3D surfaces, constrained by surface, well, and seismic data, document blind, ENE-directed basement thrusting and associated thin-skinned backthrusting and folding along the Beartooth and Oregon Basin fault systems. Between these systems, the basement-cored Rattlesnake Mountain backthrust followed basement weaknesses and rotated a basement chip toward the basin before the ENE-directed Line Creek fault system broke through and connected the Beartooth and Oregon Basin fault systems. Slip was transferred at the terminations of the Rattlesnake Mountain fault block by pivoting to the north and tear faulting to the south. In summary, unidirectional Laramide compression and pre-existing basement weaknesses combined with fault-propagation and rotational fault-bend folding to create an irregular yet continuous basement arch transition.
Fault compaction and overpressured faults: results from a 3-D model of a ductile fault zone
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Miller, S. A.
2003-10-01
A model of a ductile fault zone is incorporated into a forward 3-D earthquake model to better constrain fault-zone hydraulics. The conceptual framework of the model fault zone was chosen such that two distinct parts are recognized. The fault core, characterized by a relatively low permeability, is composed of a coseismic fault surface embedded in a visco-elastic volume that can creep and compact. The fault core is surrounded by, and mostly sealed from, a high-permeability damaged zone. The model fault properties correspond explicitly to those of the coseismic fault core. Porosity and pore pressure evolve to account for the viscous compaction of the fault core, while stresses evolve in response to the applied tectonic loading and to shear creep of the fault itself. A small diffusive leakage is allowed in and out of the fault zone. Coseismically, porosity is created to account for frictional dilatancy. We show that, in the case of a 3-D fault model with no in-plane flow and constant fluid compressibility, pore pressures do not drop to hydrostatic levels after a seismic rupture, leading to an overpressured weak fault. Since pore pressure plays a key role in the fault behaviour, we investigate coseismic hydraulic property changes. In the full 3-D model, pore pressures vary instantaneously by the poroelastic effect during the propagation of the rupture. Once the stress state stabilizes, pore pressures are incrementally redistributed in the failed patch. We show that the significant effect of pressure-dependent fluid compressibility in the no in-plane flow case becomes a secondary effect when the other spatial dimensions are considered, because in-plane flow with a near-lithostatically pressured neighbourhood equilibrates at a pressure much higher than hydrostatic levels, forming persistent high-pressure fluid compartments. 
If the observed faults are not all overpressured and weak, other mechanisms, not included in this model, must be at work in nature and need to be investigated. Significant leakage perpendicular to the fault strike (in the case of a young fault), or cracks hydraulically linking the fault core to the damaged zone (for a mature fault), are probable mechanisms for keeping the faults strong and might play a significant role in modulating fault pore pressures. Therefore, fault-normal hydraulic properties of fault zones should be a future focus of field and numerical experiments.
NASA Astrophysics Data System (ADS)
Hakimi Asiabar, Saeid; Bagheriyan, Siyamak
2018-03-01
The Alborz range in northern Iran stretches along the southern coast of the Caspian Sea and finally runs northeast and merges into the Pamir mountains in Afghanistan. The Alborz mountain belt is a doubly vergent orogen formed along the northern edge of the Iranian plateau in response to the closure of the Neo-Tethys ocean and continental collision between Arabia and Eurasia. The south Caspian depression—the Alborz basin of Mesozoic age (with W-E trend) in northern Iran—inverted in response to the Arabia-Eurasia collision. Pre-existing extensional faults of the south Caspian-Alborz system preferentially reactivated as contractional faults because of tectonic inversion. These contractional structures tend to run parallel to the trends of pre-existing extensional faults and acquire W and WNW-ESE orientations across the previous accommodation zones that were imposed by the reactivation of adjacent extensional faults with different directions. The NNE- to N-dipping faults show evidence of reactivation. The Deylaman fault is one of the important faults of the western Alborz in Iran and is an example of the inversion-tectonic style of deformation in the western Alborz mountain range. The Deylaman fault, with an E-W trend, contains three discontinuous fault segments in the area under investigation. These fault segments show evidence of oblique right-lateral reverse motion and link eastward to the dextral Kandavan thrust. The importance of this fault is due to its effect on the sedimentation of several rock units from the Jurassic to the Neogene in the western Alborz; the rock facies on each side of this fault are very different and illustrate different parts of the tectonic history.
NASA Astrophysics Data System (ADS)
Polun, S. G.; Stockman, M. B.; Hickcox, K.; Horrell, D.; Tesfaye, S.; Gomez, F. G.
2015-12-01
As the only subaerial exposure of a ridge - ridge - ridge triple junction, the Afar region of Ethiopia and Djibouti offers a rare opportunity to assess strain partitioning within this type of triple junction. Here, the plate boundaries do not link discretely; rather, the East African rift meets the Red Sea and Gulf of Aden rifts in a zone of diffuse normal faulting characterized by a lack of magmatic activity, referred to as the central Afar. An initial assessment of Late Quaternary strain partitioning is based on faulted landforms in the Dobe - Hanle graben system in Ethiopia and Djibouti. These two extensional basins are connected by an imbricated accommodation zone. Several fault scarps occur within terraces formed during the last highstand of Lake Dobe, around 5 ka; they provide a means of calibrating a numerical model of fault scarp degradation. Additional timing constraints will be provided by pending exposure ages. The spreading rates of both grabens are equivalent; however, in the Dobe graben, extension is partitioned 2:1 between the northern, south-dipping faults and the southern, north-dipping fault. Extension in the Hanle graben is primarily focused on the north-dipping Hanle fault. On the north margin of the Dobe graben, the boundary fault bifurcates, and the basin-bordering fault displays a significantly higher modeled uplift rate than the more distal fault, suggesting a basinward propagation of faulting. On the southern Dobe fault, surveyed fault scarps have ages ranging from 30 to 5 ka with uplift rates of 0.71, 0.47, and 0.68 mm/yr, suggesting no secular variation in slip rates from the late Pleistocene through the Holocene. These rates are converted into horizontal stretching estimates, which are compared with regional strain estimated from velocities of relatively sparse GPS data.
NASA Astrophysics Data System (ADS)
Naim, Nani Fadzlina; Ab-Rahman, Mohammad Syuhaimi; Kamaruddin, Nur Hasiba; Bakar, Ahmad Ashrif A.
2013-09-01
Optical networks are becoming denser, and detecting faulty branches in tree-structured networks has become problematic. Conventional methods are inconvenient because they require an engineer to visit the failure site and check the optical fiber with an optical time-domain reflectometer. An innovative monitoring technique for tree-structured network topologies in Ethernet passive optical networks (EPONs) is demonstrated, in which an erbium-doped fiber amplifier amplifies the traffic signal while its residual amplified spontaneous emission spectrum serves as the input signal for monitoring the optical cable from the central office. Fiber Bragg gratings with distinct center wavelengths are employed to reflect the monitoring signals. Faulty branches of the tree-structured EPON can be identified using a simple, low-cost receiver. We show that this technique is capable of monitoring up to 32 optical network units using a power meter with a sensitivity of -65 dBm while maintaining a bit error rate of 10⁻¹³.
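A rough link-budget sketch illustrates why a -65 dBm receiver can cover a 1x32 split: the reflected monitoring signal crosses the splitter and feeder fiber twice. All numeric values below (source power, fiber loss, excess and connector losses) are assumptions for the example, not figures reported in the study.

```python
import math

def splitter_loss_db(ports: int, excess_db: float = 0.5) -> float:
    """Ideal 1xN power-splitting loss plus a small excess-loss allowance."""
    return 10 * math.log10(ports) + excess_db

def reflected_power_dbm(tx_dbm: float, ports: int, fiber_km: float,
                        fiber_db_per_km: float = 0.35,
                        fbg_reflection_db: float = 1.0,
                        connectors_db: float = 1.0) -> float:
    """Power of the monitoring signal reflected by a branch FBG, as seen
    back at the central office: splitter and fiber losses count twice
    (downstream plus the reflected upstream path)."""
    one_way = splitter_loss_db(ports) + fiber_km * fiber_db_per_km
    return tx_dbm - 2 * one_way - fbg_reflection_db - connectors_db

# A 1x32 split over an assumed 20 km feeder with a 0 dBm monitoring source:
p = reflected_power_dbm(tx_dbm=0.0, ports=32, fiber_km=20.0)
print(f"received monitoring power: {p:.1f} dBm")
# A branch is declared faulty when its FBG peak drops below the
# receiver sensitivity (-65 dBm in the study).
```

Under these assumed losses the reflected peak stays well above -65 dBm, leaving margin for drop fibers and aging.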
Active tectonics of the northern Mojave Desert: The 2017 Desert Symposium field trip road log
Miller, David; Reynolds, R.E.; Phelps, Geoffrey; Honke, Jeff; Cyr, Andrew J.; Buesch, David C.; Schmidt, Kevin M.; Losson, G.
2017-01-01
The 2017 Desert Symposium field trip will highlight recent work by U.S. Geological Survey geologists and geophysicists, who have been mapping young sediment and geomorphology associated with active tectonic features in the least well-known part of the eastern California Shear Zone (ECSZ). This area, stretching from Barstow eastward in a giant arc to end near the Granite Mountains on the south and the Avawatz Mountains on the north (Fig. 1-1), encompasses the two major structural components of the ECSZ—east-striking sinistral faults and northwest-striking dextral faults—as well as reverse-oblique and normal-oblique faults that are associated with topographic highs and sags, respectively. In addition, folds and stepovers (both restraining stepovers that form pop-up structures and releasing stepovers that create narrow basins) have been identified. The ECSZ is a segment in the ‘soft’ distributed deformation of the North American plate east of the San Andreas fault (Fig. 1-1), where it takes up approximately 20-25% of plate motion in a broad zone of right-lateral shear (Sauber et al., 1994). The ECSZ (sensu stricto) begins in the Joshua Tree area and passes north through the Mojave Desert, past the Owens Valley-to-Death Valley swath and northward, where it is termed the Walker Lane. It has been defined as the locus of active faulting (Dokka and Travis, 1990), but when the full history from about 10 Ma forward is considered, it lies in a broader zone of right shear that passes westward in the Mojave Desert to the San Andreas fault (Mojave strike-slip province of Miller and Yount, 2002) and passes eastward to the Nevada state line or beyond (Miller, this volume). We will visit several accessible highlights for newly studied faults, signs of young deformation, and packages of syntectonic sediments. 
These pieces of a complex active tectonic puzzle have yielded some answers to longstanding questions such as: How is fault slip transfer in this area accommodated between northwest-striking dextral faults and east-striking sinistral faults? How is active deformation on the Ludlow fault transferred northward, presumably to connect to the southern Death Valley fault zone? When were faults in this area of the central Mojave Desert initiated? Are faults in this area more or less active than faults in the ECSZ to the west? What is the role of NNW-striking faults and when did they form? How has fault slip changed over time? Locations and fault names are provided in figure 1-2. Important turns and locations are identified with locations in the projection: UTM, zone 11; datum NAD 83: (578530 3917335).
A remote sensing study of active folding and faulting in southern Kerman province, S.E. Iran
NASA Astrophysics Data System (ADS)
Walker, Richard Thomas
2006-04-01
Geomorphological observations reveal a major oblique fold-and-thrust belt in Kerman province, S.E. Iran. The active faults appear to link the Sabzevaran right-lateral strike-slip fault in southeast Iran to other strike-slip faults within the interior of the country and may provide the means of distributing right-lateral shear between the Zagros and Makran mountains over a wider region of central Iran. The Rafsanjan fault is manifest at the Earth's surface as right-lateral strike-slip fault scarps and folding in alluvial sediments. Height changes across the anticlines, and widespread incision of rivers, are likely to result from hanging-wall uplift above thrust faults at depth. Scarps in recent alluvium along the northern margins of the folds suggest that the thrusts reach the surface and are active at the present day. The observations from Rafsanjan are used to identify similar late Quaternary faulting elsewhere in Kerman province near the towns of Mahan and Rayen. No instrumentally recorded destructive earthquakes have occurred in the study region, and only one historical earthquake (Lalehzar, 1923) is recorded. In addition, GPS studies show that present-day rates of deformation are low. However, fault structures in southern Kerman province do appear to be active in the late Quaternary and may be capable of producing destructive earthquakes in the future. This study shows how widely available remote sensing data can be used to provide information on the distribution of active faulting across large areas of deformation.
Tree Colors: Color Schemes for Tree-Structured Data.
Tennekes, Martijn; de Jonge, Edwin
2014-12-01
We present a method to map tree structures to colors from the Hue-Chroma-Luminance color model, which is known for its well-balanced perceptual properties. The Tree Colors method can be tuned with several parameters, whose effect on the resulting color schemes is discussed in detail. We provide a free and open source implementation with sensible parameter defaults. Categorical data are very common in statistical graphics, and often these categories form a classification tree. We evaluate applying Tree Colors to tree-structured data with a survey of a large group of users from a national statistical institute. Our user study suggests that Tree Colors are useful, not only for improving node-link diagrams, but also for unveiling tree structure in non-hierarchical visualizations.
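The core idea, recursively partitioning the hue wheel among subtrees so sibling branches receive related colors, can be sketched as follows. This is a simplified illustration of the principle, not the authors' published algorithm; the depth-based chroma/luminance ramps and all constants are invented for the example.

```python
# Sketch of a Tree-Colors-style scheme: each node owns a hue interval,
# subdivided among its children; chroma fades and luminance rises with
# depth so ancestors and descendants remain visually related.

def assign_hues(tree, lo=0.0, hi=360.0, depth=0, out=None):
    """tree: (name, [children]). Returns {name: (hue, chroma, luminance)}."""
    if out is None:
        out = {}
    name, children = tree
    hue = (lo + hi) / 2.0                        # midpoint of this node's range
    chroma = max(20.0, 80.0 - 15.0 * depth)      # fade chroma with depth
    luminance = min(90.0, 40.0 + 12.0 * depth)   # lighten with depth
    out[name] = (hue, chroma, luminance)
    if children:
        step = (hi - lo) / len(children)         # split hue range among children
        for i, child in enumerate(children):
            assign_hues(child, lo + i * step, lo + (i + 1) * step,
                        depth + 1, out)
    return out

tree = ("root", [("A", [("A1", []), ("A2", [])]), ("B", [])])
colors = assign_hues(tree)
print(colors["A1"])  # A1 and A2 fall inside A's hue interval
```

Converting the resulting (hue, chroma, luminance) triples to displayable RGB would require an HCL-to-RGB transform, which is omitted here.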
Inversion of Coeval Shear and Normal Stress of Piton de la Fournaise Flank Displacement
NASA Astrophysics Data System (ADS)
Cayol, V.; Tridon, M.; Froger, J. L.; Augier, A.; Bachelery, P.
2016-12-01
The April 2007 eruption of Piton de la Fournaise was the volcano's biggest eruptive crisis of the 20th and 21st centuries. InSAR captured a large (1.4 m) co-eruptive seaward slip of the volcano's eastern flank, which continued for more than a year at a decreasing rate. Co-eruptive uplift and post-eruptive subsidence were also observed. While it is generally agreed that flank displacement is induced by fault slip, we investigate whether this flank displacement might instead have been induced by a sheared sill, as suggested by observations of sheared sills at Piton des Neiges. To test this hypothesis, we develop a new method to invert for a quadrangular curved source subjected to coeval pressure and shear stress changes. This method, based on boundary elements, is applied to co-eruptive and post-eruptive InSAR data. We find that the co-eruptive displacement is explained by a 2 km by 2 km detachment fault, parallel to the flank and probably coincident with a lithological discontinuity. The fracture is shallow enough to induce the coeval uplift characteristic of a detachment fold. We determine that the co-eruptive overpressure is zero, which indicates that the fracture is not a sheared sill. This finding confirms a previous determination obtained using a decision tree based on ratios of maximum displacements. The determined shear stress change of 2 MPa is consistent with the eastern flank being loaded by previously intruded rift dikes. Post-eruptive displacement is well explained by slip and closure of the same fracture but over a larger area (5 km by 8 km). This displacement is consistent with relaxation following the co-eruptive flank displacement, and the causal link between the two displacements is investigated.
Parsons, Tom; Dreger, Douglas S.
2000-01-01
The proximity in time (∼7 years) and space (∼20 km) between the 1992 M=7.3 Landers earthquake and the 1999 M=7.1 Hector Mine event suggests a possible link between the quakes. We thus calculated the static stress changes following the 1992 Joshua Tree/Landers/Big Bear earthquake sequence on the 1999 M=7.1 Hector Mine rupture plane in southern California. Resolving the stress tensor into rake-parallel and fault-normal components and comparing with changes in the post-Landers seismicity rate allows us to estimate a coefficient of friction on the Hector Mine plane. Seismicity following the 1992 sequence increased at Hector Mine where the fault was unclamped. This increase occurred despite a calculated reduction in right-lateral shear stress. The dependence of seismicity change primarily on normal stress change implies a high coefficient of static friction (µ≥0.8). We calculated the Coulomb stress change using µ=0.8 and found that the Hector Mine hypocenter was mildly encouraged (0.5 bars) by the 1992 earthquake sequence. In addition, the region of peak slip during the Hector Mine quake occurred where Coulomb stress is calculated to have increased by 0.5–1.5 bars. In general, slip was more limited where Coulomb stress was reduced, though there was some slip where the strongest stress decrease was calculated. Interestingly, many smaller earthquakes nucleated at or near the 1999 Hector Mine hypocenter after 1992, but only in 1999 did an event spread to become a M=7.1 earthquake.
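The reasoning above rests on the standard Coulomb failure criterion, ΔCFS = Δτ + μ·Δσₙ, with the normal-stress change taken positive for unclamping. A minimal sketch (the stress values are invented for illustration, not taken from the study) shows how unclamping can encourage failure even when right-lateral shear stress drops, as described for the Hector Mine plane:

```python
def coulomb_stress_change(d_shear_bar: float, d_normal_bar: float,
                          mu: float) -> float:
    """Delta CFS = delta tau + mu * delta sigma_n, with the normal-stress
    change positive for unclamping (reduced clamping on the fault)."""
    return d_shear_bar + mu * d_normal_bar

# Illustrative numbers only: a small shear-stress decrease combined
# with unclamping of the receiver fault.
d_tau, d_sigma_n = -0.5, 1.5   # bars (assumed for the example)
for mu in (0.0, 0.4, 0.8):
    print(mu, coulomb_stress_change(d_tau, d_sigma_n, mu))
# With high friction (mu = 0.8) the unclamping term dominates and the
# net Coulomb stress change is positive, i.e. failure is encouraged.
```

The inference in the abstract runs the logic in reverse: because seismicity tracked the normal-stress change rather than the shear-stress change, the friction coefficient must be high (µ ≥ 0.8).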
Robust Routing Protocol For Digital Messages
NASA Technical Reports Server (NTRS)
Marvit, Maclen
1994-01-01
Refinement of digital-message-routing protocol increases fault tolerance of polled networks. AbNET-3 is latest of generic AbNET protocols for transmission of messages among computing nodes. AbNET concept described in "Multiple-Ring Digital Communication Network" (NPO-18133). Specifically aimed at increasing fault tolerance of network in broadcast mode, in which one node broadcasts message to and receives responses from all other nodes. Communication in network of computers maintained even when links fail.
NASA Astrophysics Data System (ADS)
Jung, Byung Ik; Cho, Yong Sun; Park, Hyoung Min; Chung, Dong Chul; Choi, Hyo Sang
2013-01-01
The South Korean power grid has a network structure for the flexible operation of the system. The continuously increasing power demand necessitated additional power facilities, which decreased the impedance of the power system. As a result, the fault current in the event of a system fault increased. As this increased fault current threatens the breaking capacity of the circuit breaker, the main protective device, a solution to this problem is needed. The superconducting fault current limiter (SFCL) has been designed to address this problem. The SFCL supports the stable operation of the circuit breaker through its excellent fault-current-limiting operation [1-5]. In this paper, the quench and fault-current-limiting characteristics of the flux-coupling-type SFCL with one three-phase transformer were compared with those of the same SFCL type but with three single-phase transformers. In the case of the three-phase transformer, the superconducting elements of both the fault and sound phases were quenched, whereas in the case of the single-phase transformers, only that of the fault phase was quenched. For the fault-current-limiting rate, both cases showed similar rates for the single line-to-ground fault, but for the three-wire earth fault, the fault-current-limiting rate with the single-phase transformers was over 90%, whereas that with the three-phase transformer was about 60%. It appears that when the three-phase transformer was used, the limiting rate decreased because the fluxes produced by the fault current of each phase were linked in one core. When the power loads of the superconducting elements were compared by fault type, the initial (half-cycle) load was greater when the single-phase transformers were applied, whereas for the three-phase transformer, the power load was slightly lower at the initial stage but became greater after the half fault cycle.
Fault linkage and continental breakup
NASA Astrophysics Data System (ADS)
Cresswell, Derren; Lymer, Gaël; Reston, Tim; Stevenson, Carl; Bull, Jonathan; Sawyer, Dale; Morgan, Julia
2017-04-01
The magma-poor rifted margin off the west coast of Galicia (NW Spain) has provided some of the key observations in the development of models describing the final stages of rifting and continental breakup. In 2013, we collected a 68 x 20 km 3D seismic survey across the Galicia margin, NE Atlantic. Processing through to 3D pre-stack time migration (12.5 m bin size) and 3D depth conversion reveals the key structures, including an underlying detachment fault (the S detachment) and the intra-block and inter-block faults. These data reveal multiple phases of faulting that overlap spatially and temporally and have thinned the crust to between zero and a few kilometers thickness, producing 'basement windows' where crustal basement has been completely pulled apart and sediments lie directly on the mantle. Two approximately N-S trending fault systems are observed: 1) a margin-proximal system of two linked faults that are the upward extension (breakaway faults) of the S; in the south they form one surface that splays northward to form two faults with an intervening fault block. These faults were thus demonstrably active at one time rather than sequentially. 2) An oceanward relay structure that shows clear along-strike linkage. Faults within the relay trend NE-SW and heavily dissect the basement. The main block-bounding faults can be traced from the S detachment through the basement into, and heavily deforming, the syn-rift sediments where they die out, suggesting that the faults propagated up from the S detachment surface. Analysis of the fault heaves and associated maps at different structural levels shows complementary fault systems. The pattern of faulting suggests a variation in the main tectonic transport direction moving oceanward. 
This might be interpreted as a temporal change during sequential faulting; however, the transfer of extension between faults and the lateral variability of fault blocks suggest that many of the faults across the 3D volume were active at least in part simultaneously. Alternatively, extension may have varied in direction spatially if it were a rotation about a pole located to the north.
Exploring connections between trees and human health
Geoffrey Donovan; Marie Oliver
2014-01-01
Humans have intuitively understood the value of trees to their physical and mental health since the beginning of recorded time. A scientist with the Pacific Northwest Research Station wondered if such a link could be scientifically validated. His research team took advantage of an infestation of emerald ash borer, an invasive pest that kills ash trees, to conduct a...
Where to plant urban trees? A spatially explicit methodology to explore ecosystem service tradeoffs
E.W. Bodnaruk; C.N. Kroll; Y. Yang; S. Hirabayashi; David Nowak; T.A. Endreny
2017-01-01
Urban trees can help mitigate some of the environmental degradation linked to the rapid urbanization of humanity. Many municipalities are implementing ambitious tree planting programs to help remove air pollution, mitigate urban heat island effects, and provide other ecosystem services and benefits but lack quantitative tools to explore priority planting locations and...
Urban trees and the risk of poor birth outcomes
Geoffrey H. Donovan; Yvonne L. Michael; David T. Butry; Amy D. Sullivan; John M. Chase
2011-01-01
This paper investigated whether greater tree-canopy cover is associated with reduced risk of poor birth outcomes in Portland, Oregon. Residential addresses were geocoded and linked to classified-aerial imagery to calculate tree-canopy cover in 50, 100, and 200 m buffers around each home in our sample (n=5696). Detailed data on maternal characteristics and additional...
Deepa S. Pureswaran; Brian T. Sullivan; Matthew P. Ayres
2008-01-01
Aggregation via pheromone signaling is essential for tree-killing bark beetles to overcome tree defenses and reproduce within hosts. Pheromone production is a trait that is linked to fitness, so high individual variation is paradoxical. One explanation is that the technique of measuring static pheromone pools overestimates true variation among individuals. An...
NASA Astrophysics Data System (ADS)
Ishiyama, Tatsuya; Mueller, Karl; Togo, Masami; Okada, Atsumasa; Takemura, Keiji
2004-12-01
We combine surface mapping of fault and fold scarps that deform late Quaternary alluvial strata with interpretation of a high-resolution seismic reflection profile to develop a kinematic model and determine fault slip rates for an active blind wedge thrust system that underlies Kuwana anticline in central Japan. Surface fold scarps on Kuwana anticline are closely correlated with narrow fold limbs and angular hinges on the seismic profile that suggest at least ˜1.3 km of fault slip completely consumed by folding in the upper 4 km of the crust. The close coincidence and kinematic link between folded terraces and the underlying thrust geometry indicate that Kuwana anticline has accommodated slip at an average rate of 2.2 ± 0.5 mm/yr on a 27°, west-dipping thrust fault since early-middle Pleistocene time. In contrast to classical fault-bend folds, the fault slip budget in the stacked wedge thrusts also indicates that (1) the fault tip propagated upward at a low rate relative to the accrual of fault slip and (2) fault slip is partly absorbed by numerous bedding-plane flexural-slip faults above the tips of wedge thrusts. An historic earthquake that occurred on the Kuwana blind thrust system, possibly in A.D. 1586, is shown to have produced coseismic surface deformation above the doubly vergent wedge tip. Structural analyses of Kuwana anticline coupled with tectonic geomorphology at 10³-10⁵ year timescales illustrate the significance of active folds as indicators of slip on underlying blind thrust faults and thus their otherwise inaccessible seismic hazards.
NASA Astrophysics Data System (ADS)
Marín-Lechado, C.; Pedrera, A.; Peláez, J. A.; Ruiz-Constán, A.; González-Ramón, A.; Henares, J.
2017-06-01
The tectonic structure of the Guadalquivir foreland basin becomes complex eastward, evolving from a single depocenter to a compartmented basin. The deformation pattern within the eastern Guadalquivir foreland basin has been characterized by combining seismic reflection profiles, boreholes, and structural field data to produce a 3-D model. High-dipping NNE-SSW to NE-SW trending normal and reverse fault arrays deform the Variscan basement of the basin. These faults generally affect Tortonian sediments, which show syntectonic features sealed by the latest Miocene units. Curved and S-shaped fault traces are abundant and caused by the linkage of nearby fault segments during lateral fault propagation. Preexisting faults were reactivated either as normal or reverse faults depending on their position within the foreland. At Tortonian time, reverse faults deformed the basin forebulge, while normal faults predominated within the backbulge. Along-strike variation of the Betic foreland basin geometry is supported by an increasing mechanical coupling of the two plates (Alborán Domain and Variscan basement) toward the eastern part of the cordillera. Thus, subduction would have progressed in the western Betics, while it would have failed in the eastern one. There, the initially subducted Iberian paleomargin (Nevado-Filábride Complex) was incorporated into the upper plate, promoting the transmission of collision-related compressional stresses into the foreland since the middle Miocene. Nowadays, compression is still active and produces low-magnitude earthquakes likely linked to NNE-SSW to NE-SW preexisting faults reactivated with reverse oblique-slip kinematics. Seismicity is mostly concentrated around fault tips that are frequently curved in overstepping zones.
NASA Astrophysics Data System (ADS)
Karson, J. A.
2017-11-01
Unlike most of the Mid-Atlantic Ridge, the North America/Eurasia plate boundary in Iceland lies above sea level where magmatic and tectonic processes can be directly investigated in subaerial exposures. Accordingly, geologic processes in Iceland have long been recognized as possible analogs for seafloor spreading in the submerged parts of the mid-ocean ridge system. Combining existing and new data from across Iceland provides an integrated view of this active, mostly subaerial plate boundary. The broad Iceland plate boundary zone includes segmented rift zones linked by transform fault zones. Rift propagation and transform fault migration away from the Iceland hotspot rearrange the plate boundary configuration resulting in widespread deformation of older crust and reactivation of spreading-related structures. Rift propagation results in block rotations that are accommodated by widespread, rift-parallel, strike-slip faulting. The geometry and kinematics of faulting in Iceland may have implications for spreading processes elsewhere on the mid-ocean ridge system where rift propagation and transform migration occur.
Strike-slip faulting in the Inner California Borderlands, offshore Southern California.
NASA Astrophysics Data System (ADS)
Bormann, J. M.; Kent, G. M.; Driscoll, N. W.; Harding, A. J.; Sahakian, V. J.; Holmes, J. J.; Klotsko, S.; Kell, A. M.; Wesnousky, S. G.
2015-12-01
In the Inner California Borderlands (ICB), offshore of Southern California, modern dextral strike-slip faulting overprints a prominent system of basins and ridges formed during plate boundary reorganization 30-15 Ma. Geodetic data indicate faults in the ICB accommodate 6-8 mm/yr of Pacific-North American plate boundary deformation; however, the hazard posed by the ICB faults is poorly understood due to unknown fault geometry and loosely constrained slip rates. We present observations from high-resolution and reprocessed legacy 2D multichannel seismic (MCS) reflection datasets and multibeam bathymetry to constrain the modern fault architecture and tectonic evolution of the ICB. We use a sequence stratigraphy approach to identify discrete episodes of deformation in the MCS data and present the results of our mapping in a regional fault model that distinguishes active faults from relict structures. Significant differences exist between our model of modern ICB deformation and existing models. From east to west, the major active faults are the Newport-Inglewood/Rose Canyon, Palos Verdes, San Diego Trough, and San Clemente fault zones. Localized deformation on the continental slope along the San Mateo, San Onofre, and Carlsbad trends results from geometrical complexities in the dextral fault system. Undeformed early to mid-Pleistocene age sediments onlap and overlie deformation associated with the northern Coronado Bank fault (CBF) and the breakaway zone of the purported Oceanside Blind Thrust. Therefore, we interpret the northern CBF to be inactive, and slip rate estimates based on linkage with the Holocene active Palos Verdes fault are unwarranted. In the western ICB, the San Diego Trough fault (SDTF) and San Clemente fault have robust linear geomorphic expression, which suggests that these faults may accommodate a significant portion of modern ICB slip in a westward temporal migration of slip. 
The SDTF offsets young sediments between the US/Mexico border and the eastern margin of Avalon Knoll, where the fault is spatially coincident and potentially linked with the San Pedro Basin fault (SPBF). Kinematic linkage between the SDTF and the SPBF increases the potential rupture length for earthquakes on either fault and may allow events nucleating on the SDTF to propagate much closer to the LA Basin.
Experimental evaluation of certification trails using abstract data type validation
NASA Technical Reports Server (NTRS)
Wilson, Dwight S.; Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
Certification trails are a recently introduced and promising approach to fault-detection and fault-tolerance. Recent experimental work reveals many cases in which a certification-trail approach allows for significantly faster program execution time than a basic time-redundancy approach. Algorithms for answer-validation of abstract data types allow a certification trail approach to be used for a wide variety of problems. An attempt to assess the performance of algorithms utilizing certification trails on abstract data types is reported. Specifically, this method was applied to the following problems: heapsort, Huffman tree, shortest path, and skyline. Previous results used certification trails specific to a particular problem and implementation. The approach allows certification trails to be localized to 'data structure modules,' making the use of this technique transparent to the user of such modules.
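The core idea, in which a first execution emits a trail that lets a second execution validate the answer far more cheaply than recomputing it, can be sketched for sorting. This is an illustrative case only; the paper's algorithms operate on abstract data type operations, not this exact check:

```python
def sort_with_trail(xs):
    # First execution: sort, and record the permutation applied as the trail.
    trail = sorted(range(len(xs)), key=lambda i: xs[i])
    return [xs[i] for i in trail], trail

def validate(xs, result, trail):
    # Second execution: O(n) validation using the trail instead of re-sorting.
    if len(trail) != len(xs) or set(trail) != set(range(len(xs))):
        return False  # trail is not a permutation of the input indices
    if result != [xs[i] for i in trail]:
        return False  # claimed output does not match the trail
    # Finally, confirm the claimed output is actually in order.
    return all(result[i] <= result[i + 1] for i in range(len(result) - 1))

data = [5, 1, 4, 1, 3]
out, tr = sort_with_trail(data)
assert validate(data, out, tr)                   # honest run passes
assert not validate(data, [1, 1, 3, 5, 4], tr)   # corrupted output is caught
```

The validation pass is linear in the input size, which is the source of the speedup over running the full computation twice.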
Generating Scenarios When Data Are Missing
NASA Technical Reports Server (NTRS)
Mackey, Ryan
2007-01-01
The Hypothetical Scenario Generator (HSG) is being developed in conjunction with other components of artificial-intelligence systems for automated diagnosis and prognosis of faults in spacecraft, aircraft, and other complex engineering systems. The HSG accepts, as input, possibly incomplete data on the current state of a system (see figure). The HSG models a potential fault scenario as an ordered disjunctive tree of conjunctive consequences, wherein the ordering is based upon the likelihood that a particular conjunctive path will be taken for the given set of inputs. The computation of likelihood is based partly on a numerical ranking of the degree of completeness of data with respect to satisfaction of the antecedent conditions of prognostic rules. The results from the HSG are then used by a model-based artificial-intelligence subsystem to predict realistic scenarios and states.
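The ordering idea, ranking each conjunctive branch by how completely the available data satisfies its antecedent conditions, might be sketched as follows. The rule format, scoring function, and fault names are illustrative assumptions, not the HSG's actual representation:

```python
def completeness(antecedents, data):
    """Score = fraction of antecedent conditions confirmed by available data.
    Missing readings (absent keys) count as unconfirmed, not as failures."""
    confirmed = sum(1 for key, pred in antecedents
                    if key in data and pred(data[key]))
    return confirmed / len(antecedents)

def order_paths(paths, data):
    # Order the disjunctive branches (each a conjunction of conditions)
    # by descending completeness score.
    return sorted(paths, key=lambda p: completeness(p["conditions"], data),
                  reverse=True)

paths = [
    {"fault": "pump degradation",
     "conditions": [("flow", lambda v: v < 2.0),
                    ("vibration", lambda v: v > 5.0)]},
    {"fault": "sensor drift",
     "conditions": [("flow", lambda v: v < 2.0),
                    ("redundant_flow", lambda v: v >= 2.0)]},
]
data = {"flow": 1.4, "vibration": 7.1}   # redundant_flow reading is missing
ranked = order_paths(paths, data)
# "pump degradation" ranks first (score 1.0 vs 0.5 for "sensor drift")
```

Incomplete telemetry thus lowers a branch's rank without excluding it, which matches the abstract's emphasis on reasoning over possibly incomplete data.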
NASA Astrophysics Data System (ADS)
Hatem, A. E.; Dolan, J. F.; Langridge, R.; Zinke, R. W.; McGuire, C. P.; Rhodes, E. J.; Van Dissen, R. J.
2015-12-01
The Marlborough fault system, which links the Alpine fault with the Hikurangi subduction zone within the complex Australian-Pacific plate boundary zone, partitions strain between the Wairau, Awatere, Clarence and Hope faults. Previous best estimates of dextral strike-slip along the Hope fault are ≤ ~23 ± 4 mm/yr. Those rates, however, are poorly constrained and could be improved using better age determinations in conjunction with measurements of fault offsets using high-resolution imagery. In this study, we use airborne lidar- and field-based mapping together with the subsurface geometry of offset channels at the Hossack site 12 km ESE of Hanmer Springs to more precisely determine stream offsets that were previously identified by McMorran (1991). Specifically, we measured fault offsets of ~10 m, ~75 m, and ~195 m. Together with 65 radiocarbon ages on charcoal, peat, and wood and 25 pending post-IR50-IRSL225 luminescence ages from the channel deposits, these offsets yield three different fault slip rates for the early Holocene, the late Holocene, and the past ca. 500-1,000 years. Using the large number of age determinations, we document in detail the timing of initiation and abandonment of each channel, enhancing the geomorphic interpretation at the Hossack site as channels deform over many earthquake cycles. Our preliminary incremental slip rate results from the Hossack site may indicate temporally variable strain release along the Hope fault. This study is part of a broader effort aimed at determining incremental slip rates and paleo-earthquake ages and displacements from all four main Marlborough faults. Collectively, these data will allow us to determine how the four main Marlborough faults have worked together during the Holocene-late Pleistocene to accommodate plate-boundary deformation in time and space.
Reliability Practice at NASA Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Pruessner, Paula S.; Li, Ming
2008-01-01
This paper briefly describes the Reliability and Maintainability (R&M) programs performed directly by the reliability branch at Goddard Space Flight Center (GSFC). The mission assurance requirements flow-down is explained. GSFC practices for PRA; reliability prediction, fault tree analysis, and reliability block diagrams; FMEA; part stress and derating analysis; worst-case analysis; trend analysis; and limited-life items are presented. Lessons learned are summarized and recommendations for improvement are identified.
Online Performance-Improvement Algorithms
1994-08-01
fault rate as the request sequence length approaches infinity. Their algorithms are based on an innovative use of the classical Ziv-Lempel [85] data ...Report CS-TR-348-91. [85] J. Ziv and A. Lempel. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory, 24:530-536, 1978. 94...Deferred Data Structuring Recall that our incremental multi-trip algorithm spreads the building of the fence-tree over several trips in order to
ERIC Educational Resources Information Center
Torres, Edgardo E.; And Others
This comprehensive investigation into the reasons behind the crucial problem of the student dropout in foreign language programs focuses on seven interrelated areas. These are: (1) student, (2) teacher, (3) administration, (4) counselor, (5) parent, (6) community, and (7) teacher training. A fault-tree analysis of the dropout problem provides a…
2015-09-01
Table of contents and report documentation page excerpts: "...Turbine Exhaust System Maintenance Strategy for the CG-47 Ticonderoga Class Cruiser"; author: Sparks, Robert D.; keywords: condition-based maintenance, condition-directed, failure finding, fault tree analysis; 133 pages.
An integrated approach to system design, reliability, and diagnosis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1990-01-01
The requirement for ultradependability of computer systems in future avionics and space applications necessitates a top-down, integrated systems engineering approach for design, implementation, testing, and operation. The functional analyses of hardware and software systems must be combined by models that are flexible enough to represent their interactions and behavior. The information contained in these models must be accessible throughout all phases of the system life cycle in order to maintain consistency and accuracy in design and operational decisions. One approach being taken by researchers at Ames Research Center is the creation of an object-oriented environment that integrates information about system components required in the reliability evaluation with behavioral information useful for diagnostic algorithms. Procedures have been developed at Ames that perform reliability evaluations during design and failure diagnoses during system operation. These procedures utilize information from a central source, structured as object-oriented fault trees. Fault trees were selected because they are a flexible model widely used in aerospace applications and because they give a concise, structured representation of system behavior. The utility of this integrated environment for aerospace applications in light of our experiences during its development and use is described. The techniques for reliability evaluation and failure diagnosis are discussed, and current extensions of the environment and areas requiring further development are summarized.
Doytchev, Doytchin E; Szwillus, Gerd
2009-11-01
Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand the human behaviour in incident occurrence, we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform the tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the use of the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian hydropower plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the aforementioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.
Emery, R J; Charlton, M A; Orders, A B; Hernandez, M
2001-02-01
An enhanced coding system for the characterization of notices of violation (NOVs) issued to radiation permit holders in the State of Texas was developed based on a series of fault tree analyses serving to identify a set of common causes. The coding system enhancement was retroactively applied to a representative sample (n = 185) of NOVs issued to specific licensees of radioactive materials in Texas during calendar year 1999. The results obtained were then compared to the currently available summary NOV information for the same year. In addition to identifying the most common NOVs, the enhanced coding system revealed that approximately 70% of the sampled NOVs were issued for non-compliance with a specific regulation as opposed to a permit condition. Furthermore, an underlying cause of 94% of the NOVs was the failure on the part of the licensee to execute a specific task. The findings suggest that opportunities exist to improve permit holder compliance through various means, including the creation of summaries that detail specific tasks to be completed, and revising training programs with more focus on the identification and scheduling of permit-related requirements. Caution is advised in applying these results broadly, due to the bias associated with the restricted scope of the project.
Bayesian-network-based safety risk assessment for steel construction projects.
Leu, Sou-Sen; Chang, Ching-Miao
2013-05-01
There are four primary accident types at steel building construction (SC) projects: falls (tumbles), object falls, object collapse, and electrocution. Several systematic safety risk assessment approaches, such as fault tree analysis (FTA) and failure mode and effect criticality analysis (FMECA), have been used to evaluate safety risks at SC projects. However, these traditional methods ineffectively address dependencies among safety factors at various levels and thus fail to provide early warnings to prevent occupational accidents. To overcome the limitations of traditional approaches, this study develops a safety risk-assessment model for SC projects by establishing Bayesian networks (BN) based on fault tree (FT) transformation. The BN-based safety risk-assessment model was validated against the safety inspection records of six SC building projects and nine projects in which site accidents occurred. The ranks of posterior probabilities from the BN model were highly consistent with the accidents that occurred at each project site. The model supports site safety management by calculating the probabilities of safety risks and further analyzing the causes of accidents based on their relationships in BNs. In practice, based on the analysis of accident risks and significant safety factors, proper preventive safety management strategies can be established to reduce the occurrence of accidents on SC sites.
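The FT-to-BN transformation the model builds on maps basic events to root nodes with prior probabilities and logic gates to nodes with deterministic conditional probability tables. A minimal sketch with an invented accident subtree and invented probabilities (not the paper's model or data):

```python
def or_gate(child_probs):
    # OR gate: the event occurs if any child event occurs (children independent).
    p_none = 1.0
    for q in child_probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def and_gate(child_probs):
    # AND gate: the event occurs only if all child events occur.
    p_all = 1.0
    for q in child_probs:
        p_all *= q
    return p_all

# Invented subtree: fall = scaffold defect AND (no harness OR anchor failure)
p_no_harness, p_anchor, p_scaffold = 0.10, 0.02, 0.05
p_protection_lost = or_gate([p_no_harness, p_anchor])   # 0.118
p_fall = and_gate([p_scaffold, p_protection_lost])      # 0.0059

# What the BN adds over the plain FT is diagnostic (backward) inference, e.g.
# P(no_harness | protection_lost)
#   = P(protection_lost | no_harness) * P(no_harness) / P(protection_lost)
p_diag = 1.0 * p_no_harness / p_protection_lost         # ~0.847
```

The forward pass reproduces what classical FTA computes; the backward (diagnostic) direction is what lets a BN rank likely causes once a symptom or accident is observed.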
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stockli, Daniel
Geothermal plays in extensional and transtensional tectonic environments have long been a major target in the exploration of geothermal resources, and the Dixie Valley area has served as a classic natural laboratory for this type of geothermal play. In recent years, the interactions between normal faults and strike-slip faults acting as strain relay zones have attracted significant interest in geothermal exploration, as they commonly result in fault-controlled dilational corners with enhanced fracture permeability and thus have the potential to host blind geothermal prospects. Structural ambiguity, complications in fault linkage, etc. often make the selection of geothermal exploration drilling targets complicated and risky. Though simplistic, the three main ingredients of a viable utility-grade geothermal resource are heat, fluids, and permeability. Our new geological mapping and fault kinematic analysis yield a structural model suggesting a two-stage structural evolution: (a) middle Miocene N-S trending normal faults (faults cutting across the modern range) that tilted Oligo-Miocene volcanic and sedimentary sequences (similar in style to the East Range and southern Stillwater Range), and (b) NE-trending range-front normal faulting that initiated during the Pliocene and is both truncating the N-S trending normal faults and reactivating some former normal faults in a right-lateral fashion. Thus the two main fundamental differences from previous structural models are (1) the N-S trending faults are pre-existing middle Miocene normal faults, and (2) these faults are reactivated in a right-lateral fashion (NOT left-lateral) and kinematically linked to the younger NE-trending range-bounding normal faults (Pliocene in age). More importantly, this study provides the first constraints on transient fluid flow through the novel application of apatite (U-Th)/He (AHe) and 4He/3He thermochronometry in the geothermally active Dixie Valley area in Nevada.
NASA Astrophysics Data System (ADS)
Yin, An; Kelty, Thomas K.; Davis, Gregory A.
1989-09-01
Geologic mapping in southern Glacier National Park, Montana, reveals the presence of two duplexes sharing the same floor thrust fault, the Lewis thrust. The westernmost duplex (Brave Dog Mountain) includes the low-angle Brave Dog roof fault and Elk Mountain imbricate system, and the easternmost (Rising Wolf Mountain) duplex includes the low-angle Rockwell roof fault and Mt. Henry imbricate system. The geometry of these duplexes suggests that they differ from previously described geometric-kinematic models for duplex development. Their low-angle roof faults were preexisting structures that were locally utilized as roof faults during the formation of the imbricate systems. Crosscutting of the Brave Dog fault by the Mt. Henry imbricate system indicates that the two duplexes formed at different times. The younger Rockwell-Mt. Henry duplex developed 20 km east of the older Brave Dog-Elk Mountain duplex; the roof fault of the former is at a higher structural level. Field relations confirm that the low-angle Rockwell fault existed across the southern Glacier Park area prior to localized formation of the Mt. Henry imbricate thrusts beneath it. These thrusts kinematically link the Rockwell and Lewis faults and may be analogous to P shears that form between two synchronously active faults bounding a simple shear system. The abandonment of one duplex and its replacement by another with a new and higher roof fault may have been caused by (1) warping of the older and lower Brave Dog roof fault during the formation of the imbricate system (Elk Mountain) beneath it, (2) an upward shifting of the highest level of a simple shear system in the Lewis plate to a new decollement level in subhorizontal belt strata (= the Rockwell fault) that lay above inclined strata within the first duplex, and (3) a reinitiation of P-shear development (= Mt. Henry imbricate faults) between the Lewis thrust and the subparallel, synkinematic Rockwell fault.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas
2016-02-01
In the current energetic scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmentally friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gas emissions. However, the main limitations to their diffusion into the mass market are high maintenance and production costs and short lifetime. To improve these aspects, current research activity focuses on the development of robust and generalizable diagnostic techniques aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with a consequent increase in lifetime and reduction in maintenance costs. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and validated by means of experimental induction of faulty states in controlled conditions.
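The isolation step behind a Fault Signature Matrix can be sketched as follows: rows are symptoms (residuals that have crossed a threshold), columns are faults, and an observed symptom vector is matched against the columns. The faults, symptoms, and matrix entries below are invented for illustration, not taken from the paper:

```python
FAULTS = ["air_leak", "fuel_leak", "blower_degradation"]

# Fault Signature Matrix: for each symptom, whether it fires (1) or not (0)
# under each fault, in the column order of FAULTS.
FSM = {
    "stack_temp_high":   [1, 0, 1],
    "voltage_low":       [1, 1, 0],
    "blower_speed_high": [0, 0, 1],
}

def isolate(observed):
    # Return every fault whose signature column matches the observed
    # symptom vector exactly; multiple matches mean ambiguous isolation.
    return [fault for j, fault in enumerate(FAULTS)
            if all(FSM[s][j] == observed[s] for s in FSM)]

observed = {"stack_temp_high": 1, "voltage_low": 1, "blower_speed_high": 0}
isolate(observed)  # -> ["air_leak"]
```

A fault is isolable only if its column is unique; improving the matrix through fault simulations, as the paper describes, amounts to refining entries so that columns become distinguishable.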
1993-07-01
version tree is formed that permits users to go back to any previous version. There are methods for traversing the version tree of a particular...workspace. Workspace objects are linked (or nested) hierarchically into a workspace tree . Applications can set the access privileges to parts of this...workspace tree to control access (and hence change). There must be a default global workspace. Workspace objects are then allocated within the context
Project delay analysis of HRSG
NASA Astrophysics Data System (ADS)
Silvianita; Novega, A. S.; Rosyid, D. M.; Suntoyo
2017-08-01
Completion of an HRSG (Heat Recovery Steam Generator) fabrication project does not always meet the target date written in the contract. A delay in the fabrication process can cause several disadvantages for the fabricator, including penalty payments, delay of the HRSG construction process, and ultimately delay of HRSG trials. In this paper, the authors apply a semi-quantitative analysis to HRSG pressure-part fabrication delay, for a plant configuration of 1 GT (Gas Turbine) + 1 HRSG + 1 STG (Steam Turbine Generator), using the bow-tie analysis method. Bow-tie analysis combines FTA (Fault Tree Analysis) and ETA (Event Tree Analysis) to develop the risk matrix of the HRSG. The result of the FTA is used to identify threats for preventive measures, and the result of the ETA is used to assess the impacts of fabrication delay.
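The bow-tie quantification can be sketched numerically: the fault tree side aggregates threat probabilities into a top-event probability, and the event tree side propagates the top event through mitigation barriers into consequence probabilities. The threats, barriers, and all numbers below are invented for illustration:

```python
def or_gate(probs):
    # Fault tree OR gate over independent threat events.
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# FTA side (left of the bow-tie): delay if late material delivery OR
# welding rework OR manpower shortage (invented probabilities).
p_delay = or_gate([0.10, 0.05, 0.02])

# ETA side (right of the bow-tie): barriers applied after the top event,
# each succeeding with some probability (invented).
barriers = [("schedule_recovery", 0.6), ("penalty_negotiation", 0.5)]

def consequences(p_top, barriers):
    # Walk the event tree: each barrier either stops the escalation
    # (success branch) or passes it on (failure branch).
    outcomes = {}
    p = p_top
    for name, p_success in barriers:
        outcomes[f"stopped_by_{name}"] = p * p_success
        p *= (1.0 - p_success)
    outcomes["full_impact"] = p
    return outcomes

outcomes = consequences(p_delay, barriers)
# The outcome probabilities partition p_delay, so they sum back to it.
```

The FTA result points at which threats to target with preventive measures, while the ETA branches quantify how much each barrier reduces the residual impact.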
2015-02-26
This image from NASA's Terra spacecraft shows Prince Patrick Island, which is located in the Canadian Arctic Archipelago and is the westernmost of the Queen Elizabeth Islands, in the Northwest Territories of Canada. The island is underlain by sedimentary rocks, cut by still-active faults. The streams follow a dendritic drainage system: there are many contributing streams (analogous to the twigs of a tree), which are then joined together into the tributaries of the main river (the branches and the trunk of the tree, respectively). They develop where the river channel follows the slope of the terrain. The image covers an area of 22 by 27 km, was acquired July 2, 2011, and is located at 76.9 degrees north, 118.9 degrees west. http://photojournal.jpl.nasa.gov/catalog/PIA19222
Dokas, Ioannis M; Panagiotakopoulos, Demetrios C
2006-08-01
The available expertise on managing and operating solid waste management (SWM) facilities varies among countries and among types of facilities. Few experts are willing to record their experience, while few researchers systematically investigate the chains of events that could trigger operational failures in a facility; expertise acquisition and dissemination, in SWM, is neither popular nor easy, despite the great need for it. This paper presents a knowledge acquisition process aimed at capturing, codifying and expanding reliable expertise and propagating it to non-experts. The knowledge engineer (KE), the person performing the acquisition, must identify the events (or causes) that could trigger a failure, determine whether a specific event could trigger more than one failure, and establish how various events are related among themselves and how they are linked to specific operational problems. The proposed process, which utilizes logic diagrams (fault trees) widely used in system safety and reliability analyses, was used for the analysis of 24 common landfill operational problems. The acquired knowledge led to the development of a web-based expert system (Landfill Operation Management Advisor, http://loma.civil.duth.gr), which estimates the occurrence possibility of operational problems, provides advice and suggests solutions.
NASA Technical Reports Server (NTRS)
Hughes, Peter M.; Luczak, Edward C.
1991-01-01
Flight Operations Analysts (FOAs) in the Payload Operations Control Center (POCC) are responsible for monitoring a satellite's health and safety. As satellites become more complex and data rates increase, FOAs are quickly approaching a level of information saturation. The FOAs in the spacecraft control center for the COBE (Cosmic Background Explorer) satellite are currently using a fault isolation expert system named the Communications Link Expert Assistance Resource (CLEAR) to assist in isolating and correcting communications link faults. Due to the success of CLEAR and several other systems in the control center domain, many other monitoring and fault isolation expert systems will likely be developed to support control center operations during the early 1990s. To facilitate the development of these systems, a project was initiated to develop a domain-specific tool, named the Generic Spacecraft Analyst Assistant (GenSAA). GenSAA will enable spacecraft analysts to easily build simple real-time expert systems that perform spacecraft monitoring and fault isolation functions. Lessons learned during the development of several expert systems at Goddard are described; these lessons established the foundation of GenSAA's objectives and offer insight into how problems may be avoided in future projects. This is followed by a description of the capabilities, architecture, and usage of GenSAA, along with a discussion of its application to future NASA missions.
Sarah Wilkinson; Jerome Ogee; Jean-Christophe Domec; Mark Rayment; Lisa Wingate
2015-01-01
Process-based models that link seasonally varying environmental signals to morphological features within tree rings are essential tools to predict tree growth response and commercially important wood quality traits under future climate scenarios. This study evaluated model portrayal of radial growth and wood anatomy observations within a mature maritime pine (Pinus...
Rick G. Kelsey; Douglas J. Westlind
2017-01-01
The lethal temperature limit is 60 degrees Celsius (°C) for plant tissues, including trees, with lower temperatures causing heat stress. As fire injury increases on tree stems, there is an accompanying rise in tissue ethanol concentrations, physiologically linked to impaired mitochondrial oxidative phosphorylation energy production. We theorize that sublethal tissue...
Klemen Novak; Martin de Luis; Miguel A. Saz; Luis A. Longares; Roberto Serrano-Notivoli; Josep Raventos; Katarina Cufar; Jozica Gricar; Alfredo Di Filippo; Gianluca Piovesan; Cyrille B.K. Rathgeber; Andreas Papadopoulos; Kevin T. Smith
2016-01-01
Climate predictions for the Mediterranean Basin include increased temperatures, decreased precipitation, and increased frequency of extreme climatic events (ECE). These conditions are associated with decreased tree growth and increased vulnerability to pests and diseases. The anatomy of tree rings responds to these environmental conditions. Quantitatively, the width of...
3D Model of the McGinness Hills Geothermal Area
Faulds, James E.
2013-12-31
The McGinness Hills geothermal system lies in a ~8.5 km wide, north-northeast trending accommodation zone defined by east-dipping normal faults bounding the Toiyabe Range to the west and west-dipping normal faults bounding the Simpson Park Mountains to the east. Within this broad accommodation zone lies a fault step-over defined by north-northeast striking, west-dipping normal faults which step to the left at roughly the latitude of the McGinness Hills geothermal system. The McGinness Hills 3D model consists of 9 geologic units and 41 faults. The basal geologic units are metasediments of the Ordovician Valmy and Vinini Formations (undifferentiated in the model), which are intruded by Jurassic granitic rocks. Unconformably overlying is a several-hundred-meter-thick section of Tertiary andesitic lava flows and four Oligocene-to-Miocene ash-flow tuffs: the Rattlesnake Canyon Tuff, the tuff of Sutcliffe, the Campbell Creek Tuff, and the Nine Hill Tuff. Overlying are sequences of pre- to syn-extensional Quaternary alluvium and post-extensional Quaternary alluvium. The 10-15° eastward dip of the Tertiary stratigraphy is controlled by the predominant west-dipping fault set. Geothermal production comes from two west-dipping normal faults in the northern limb of the step-over. Injection is into west-dipping faults in the southern limb of the step-over. Production and injection sites are in hydrologic communication, but at a deep level, as the northwest striking fault that links the southern and northern limbs of the step-over has no permeability.
NASA Astrophysics Data System (ADS)
Webb, J.; Gardner, T.
2016-12-01
In northwest Tasmania, well-preserved mid-Holocene beach ridges with maximum radiocarbon ages of 5.25 ka occur along the coast; inland are a parallel set of lower relief beach ridges of probable MIS 5e age. The latter are cut by northeast-striking faults clearly visible on LIDAR images, with a maximum vertical displacement (evident as a difference in topographic elevation) of 3 m. Also distinct on the LIDAR images are large sand boils along the fault lines; they are up to 5 m in diameter and 2-3 m high and mostly occur on the hanging wall close to the fault traces. Without LIDAR it would have been almost impossible to distinguish either the fault scarps or the sand boils. Excavations through the sand boils show that they are massive, with no internal structure, suggesting that they formed in a single event. They are composed of well-sorted, very fine white sand, identical to the sand in the underlying beach ridges. The sand boils overlie a peaty paleosol; this formed in the tea-tree swamp that formerly covered the area, and has been offset along the faults. Radiocarbon dating of the buried organic-rich paleosol gave ages of 14.8-7.2 ka, suggesting that the faulting is latest Pleistocene to early Holocene in age; it occurred prior to deposition of the mid-Holocene beach ridges, which are not offset. The beach ridge sediments are up to 7 m thick and contain an iron-cemented hard pan 1-3 m below the surface. The water table is very shallow and close to the ground surface, so the sands of the beach ridges are mostly saturated. During faulting these sands experienced extensive liquefaction. The resulting sand boils rose to a substantial height of 2-3 m, probably reflecting the elevation of the potentiometric surface within the confined part of the beach ridge sediments below the iron-cemented hard pan.
Motion on the faults was predominantly dip slip (shown by an absence of horizontal offset) and probably reverse, which is consistent with the present-day northwest-southeast compressive stress in this area.
Probabilistic Seismic Hazard Assessment for a NPP in the Upper Rhine Graben, France
NASA Astrophysics Data System (ADS)
Clément, Christophe; Chartier, Thomas; Jomard, Hervé; Baize, Stéphane; Scotti, Oona; Cushing, Edward
2015-04-01
The southern part of the Upper Rhine Graben (URG), straddling the border between eastern France and western Germany, presents relatively important seismic activity for an intraplate area. A magnitude 5 or greater earthquake shakes the URG every 25 years, and in 1356 a magnitude greater than 6.5 struck the city of Basel. Several potentially active faults have been identified in the area and documented in the French Active Fault Database (website under construction). These faults are located along the Graben boundaries and also inside the Graben itself, beneath heavily populated areas and critical facilities (including the Fessenheim Nuclear Power Plant). These faults are prone to produce earthquakes with magnitude 6 and above. Published regional models and preliminary geomorphological investigations provided provisional assessments of slip rates for the individual faults (0.1-0.001 mm/a), resulting in recurrence times of 10,000 years or greater for magnitude 6+ earthquakes. Using a fault model, ground motion response spectra are calculated for annual frequencies of exceedance (AFE) ranging from 10^-4 to 10^-8 per year, typical for design basis and probabilistic safety analyses of NPPs. A logic tree is implemented to evaluate uncertainties in seismic hazard assessment. The choice of ground motion prediction equations (GMPEs) and the range of slip rate uncertainty are the main sources of seismic hazard variability at the NPP site. In fact, the hazard for AFE lower than 10^-4 is mostly controlled by the potentially active nearby Rhine River fault. Compared with areal source zone models, a fault model localizes the hazard around the active faults and changes the shape of the Uniform Hazard Spectrum at the site.
Seismic hazard deaggregations are performed to identify the earthquake scenarios (including magnitude, distance and the number of standard deviations from the median ground motion as predicted by GMPEs) that contribute to the exceedance of spectral acceleration for the different AFE levels. These scenarios are finally examined with respect to the seismicity data available in paleoseismic, historic and instrumental catalogues.
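The logic-tree step described above can be sketched in a few lines: each end branch (a combination of GMPE choice, slip rate, etc.) yields a hazard curve, and the mean hazard is the weight-averaged curve. The branch names, weights, and AFE values below are illustrative, not figures from the study.

```python
# Hedged sketch: combining hazard curves across logic-tree branches by
# weighted averaging, as in PSHA. Each branch supplies an annual frequency
# of exceedance (AFE) at each ground-motion level; all numbers are invented.

def combine_hazard_curves(branches):
    """Weighted mean AFE at each ground-motion level.

    branches: list of (weight, [afe_level_0, afe_level_1, ...]).
    Weights must sum to 1.
    """
    total_w = sum(w for w, _ in branches)
    assert abs(total_w - 1.0) < 1e-9, "branch weights must sum to 1"
    n_levels = len(branches[0][1])
    return [sum(w * curve[i] for w, curve in branches) for i in range(n_levels)]

# Two GMPE choices crossed with two slip-rate hypotheses -> four end branches.
branches = [
    (0.30, [1e-3, 1e-4, 1e-5]),   # GMPE A, high slip rate
    (0.20, [5e-4, 5e-5, 5e-6]),   # GMPE A, low slip rate
    (0.30, [2e-3, 2e-4, 2e-5]),   # GMPE B, high slip rate
    (0.20, [1e-3, 1e-4, 1e-5]),   # GMPE B, low slip rate
]
mean_curve = combine_hazard_curves(branches)
```

The spread between the branch curves at a given ground-motion level is what the abstract identifies as the hazard variability driven by GMPE choice and slip-rate uncertainty.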
NASA Astrophysics Data System (ADS)
Deckers, Jef
2016-06-01
The Roer Valley Graben is a Mesozoic continental rift basin that was reactivated during the Late Oligocene. The study area is located in the graben area of the southwestern part of the Roer Valley Graben. Rifting initiated in the study area with the development of a large number of faults in the prerift strata. Some of these faults were rooted in preexisting zones of weakness in the Mesozoic strata. Early in the Late Oligocene, several faults died out in the study area as strain became focused upon others, some of which were able to link into several-kilometer-long systems. Within the Late Oligocene to Early Miocene northwestward prograding shallow marine syn-rift deposits, the number of active faults further decreased with time. A relatively strong decrease was observed around the Oligocene/Miocene boundary and represents a further focus of strain onto the long fault systems. Miocene extensional strain was not accommodated by further growth, but predominantly by displacements along the long fault systems. Since the Oligocene/Miocene boundary coincides with a radical change in the European intraplate stress field, the latter might have contributed significantly to the simultaneous change of fault kinematics in the study area.
NASA Astrophysics Data System (ADS)
Gulen, L.; EMME WP2 Team*
2011-12-01
The Earthquake Model of the Middle East (EMME) Project is a regional project of the GEM (Global Earthquake Model) initiative (http://www.emme-gem.org/). The EMME project covers Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE projects overlap, with Turkey serving as a bridge connecting the two. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years, causing casualties in the millions. The EMME project consists of three main modules: hazard, risk, and socio-economic. It uses a PSHA approach for earthquake hazard, and the existing source models have been revised or modified through the incorporation of newly acquired data. The aspect that most distinguishes the EMME project from previous ones is its dynamic character, accomplished through the design of a flexible and scalable database that permits continuous update, refinement, and analysis. An up-to-date earthquake catalog of the Middle East region has been prepared and declustered by the WP1 team. The EMME WP2 team has prepared a digital active fault map of the Middle East region in ArcGIS format and constructed a database of parameters for active faults capable of generating earthquakes above a threshold magnitude of Mw ≥ 5.5. The EMME project database includes information on the geometry and rates of movement of faults in a "Fault Section Database", which contains 36 entries for each fault section. The "Fault Section" concept has a physical significance: if one or more fault parameters change, a new fault section is defined along the fault zone. So far, 6,991 fault sections have been defined and 83,402 km of faults fully parameterized in the Middle East region.
A separate "Paleo-Sites Database" includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library, which includes the PDF files of relevant papers, reports, and maps, has also been prepared. A logic tree approach is utilized to encompass different interpretations for areas where there is no consensus. Finally, seismic source zones in the Middle East region have been delineated using all available data. *EMME Project WP2 Team: Levent Gülen, Murat Utkucu, M. Dinçer Köksal, Hilal Yalçin, Yigit Ince, Mine Demircioglu, Shota Adamia, Nino Sadradze, Aleksandre Gvencadze, Arkadi Karakhanyan, Mher Avanesyan, Tahir Mammadli, Gurban Yetirmishli, Arif Axundov, Khaled Hessami, M. Asif Khan, M. Sayab.
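The "Fault Section" rule (a new section begins wherever any fault parameter changes) can be sketched as a simple merge over along-strike segments. The field names below are a hypothetical subset of the 36 database entries, chosen only for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch of the "Fault Section" concept: a section is the
# along-strike span over which all parameters are constant; a change in any
# parameter starts a new section. Field names are assumed, not the actual
# EMME schema.

@dataclass(frozen=True)
class FaultSection:
    name: str
    length_km: float
    dip_deg: float
    rake_deg: float
    slip_rate_mm_yr: float

def split_into_sections(fault_name, segments):
    """Merge consecutive segments with identical parameters into sections.

    segments: list of (length_km, dip_deg, rake_deg, slip_rate_mm_yr).
    """
    sections = []
    for length, dip, rake, rate in segments:
        last = sections[-1] if sections else None
        if last and (last.dip_deg, last.rake_deg, last.slip_rate_mm_yr) == (dip, rake, rate):
            # Same parameters: extend the current section.
            sections[-1] = FaultSection(last.name, last.length_km + length,
                                        dip, rake, rate)
        else:
            # Any parameter changed: define a new section.
            sections.append(FaultSection(f"{fault_name}-S{len(sections) + 1}",
                                         length, dip, rake, rate))
    return sections

segs = [(12, 60, -90, 1.0), (8, 60, -90, 1.0), (15, 45, -90, 1.0)]
sections = split_into_sections("ExampleFault", segs)
```

Here the first two segments share parameters and collapse into one 20 km section, while the dip change at the third segment starts a second section.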
Variable modes of rifting in the eastern Basin and Range, USA from on-fault geological evidence
NASA Astrophysics Data System (ADS)
Stahl, T.; Niemi, N. A.
2017-12-01
Continental rifts are often divided along their axes into magmatic (or magma-assisted) and amagmatic (or magma-poor) segments. Less is known about magmatic versus non-magmatic extension across 'wide' continental rift margins like the Basin and Range province of the USA. Paleoseismic trench investigations, Quaternary geochronology (¹⁰Be and ³He exposure-age, luminescence, and ⁴⁰Ar/³⁹Ar dating), and high-resolution topographic surveys (terrestrial laser scanning and UAV photogrammetry) were used to assess the timing and spatial variability of faulting at the Basin and Range-Colorado Plateau transition zone in central Utah. Results show that while the majority of strain is accommodated by a single, range- and province-bounding fault (the Wasatch fault zone, WFZ, slip rate of c. 3-4 mm yr⁻¹), a transition to magma-assisted rifting occurs near the WFZ southern termination, marked by a diffuse zone of faults associated with Pliocene to Holocene volcanism. Paleoseismic analysis of faults within and adjacent to this zone reveals recent (<18 ka) surface ruptures on these faults. A single-event displacement of 10-15 m for the Tabernacle fault at c. 15-18 ka (³He exposure age) and large fault displacement gradients imply that slip was coeval with lava emplacement and that the faults in this region are linked, at least in part, to dike injection in the uppermost crust rather than slip at seismogenic depths. These results have implications for the controversial nature of regional seismic hazard and the structural evolution of the eastern Basin and Range.
Observations and Modelling of Alternative Tree Cover States of the Boreal Ecosystem
NASA Astrophysics Data System (ADS)
Abis, B.; Brovkin, V.
2017-12-01
Recently, multimodality of the tree cover distribution of the boreal forests has been detected, revealing the existence of three alternative vegetation modes. Identifying the regions with a potential for alternative tree cover states, and the main factors underlying their existence, is important for projecting future change of natural vegetation cover and its effect on climate. Through the use of generalised additive models and phase-space analysis, we study the link between tree cover distribution and eight globally observed environmental factors, such as rainfall, temperature, and permafrost distribution. Using a classification based on these factors, we show the location of areas with potentially alternative tree cover states under the same environmental conditions in the boreal region. Furthermore, to explain the multimodality found in the data and the asymmetry between North America and Eurasia, we study a conceptual model based on tree species competition and use it to simulate the sensitivity of tree cover to changes in environmental factors. We find that the link between individual environmental variables and tree cover differs regionally. Nonetheless, environmental conditions uniquely determine the vegetation state among the three dominant modes in ˜95% of the cases. On the other hand, areas with potentially alternative tree cover states encompass ˜1.1 million km², corresponding to possible transition zones with reduced resilience to disturbances. Employing our conceptual model, we show that multimodality can be explained through competition between tree species with different adaptations to environmental factors and disturbances. Moreover, the model is able to reproduce the asymmetry in tree species distribution between Eurasia and North America. Finally, we find that changes in permafrost could be associated with bifurcation points of the model, corroborating the importance of permafrost in a changing climate.
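The multimodality detection at the heart of this abstract can be illustrated with a minimal histogram-based mode count: a distribution with three alternative tree cover states shows three local maxima. The synthetic tree-cover fractions below (sparse, savanna-like, dense clusters) are invented stand-ins, not the study's data.

```python
# Minimal sketch: count modes of a tree-cover distribution as strict local
# maxima of a histogram over [0, 1]. Data are synthetic, not observational.
import random

def count_modes(values, n_bins=10):
    """Count strict local maxima of a histogram of values in [0, 1]."""
    counts = [0] * n_bins
    for v in values:
        counts[min(int(v * n_bins), n_bins - 1)] += 1
    modes = 0
    for i in range(n_bins):
        left = counts[i - 1] if i > 0 else -1
        right = counts[i + 1] if i < n_bins - 1 else -1
        if counts[i] > left and counts[i] > right:
            modes += 1
    return modes

random.seed(0)
# Three synthetic clusters of tree-cover fraction: treeless (~5%),
# intermediate (~45%), and dense forest (~85%).
sample = ([random.gauss(0.05, 0.04) for _ in range(300)]
          + [random.gauss(0.45, 0.04) for _ in range(300)]
          + [random.gauss(0.85, 0.04) for _ in range(300)])
sample = [min(max(v, 0.0), 1.0) for v in sample]   # clip to [0, 1]
n_modes = count_modes(sample)
```

A real analysis would use kernel density estimates rather than raw histograms, but the idea of locating alternative states as separate density peaks is the same.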
Witter, Robert C.; Zhang, Yinglong J.; Wang, Kelin; Priest, George R.; Goldfinger, Chris; Stimely, Laura; English, John T.; Ferro, Paul A.
2013-01-01
Characterizations of tsunami hazards along the Cascadia subduction zone hinge on uncertainties in megathrust rupture models used for simulating tsunami inundation. To explore these uncertainties, we constructed 15 megathrust earthquake scenarios using rupture models that supply the initial conditions for tsunami simulations at Bandon, Oregon. Tsunami inundation varies with the amount and distribution of fault slip assigned to rupture models, including models where slip is partitioned to a splay fault in the accretionary wedge and models that vary the updip limit of slip on a buried fault. Constraints on fault slip come from onshore and offshore paleoseismological evidence. We rank each rupture model using a logic tree that evaluates a model’s consistency with geological and geophysical data. The scenarios provide inputs to a hydrodynamic model, SELFE, used to simulate tsunami generation, propagation, and inundation on unstructured grids with <5–15 m resolution in coastal areas. Tsunami simulations delineate the likelihood that Cascadia tsunamis will exceed mapped inundation lines. Maximum wave elevations at the shoreline varied from ∼4 m to 25 m for earthquakes with 9–44 m slip and Mw 8.7–9.2. Simulated tsunami inundation agrees with sparse deposits left by the A.D. 1700 and older tsunamis. Tsunami simulations for large (22–30 m slip) and medium (14–19 m slip) splay fault scenarios encompass 80%–95% of all inundation scenarios and provide reasonable guidelines for land-use planning and coastal development. The maximum tsunami inundation simulated for the greatest splay fault scenario (36–44 m slip) can help to guide development of local tsunami evacuation zones.
Dynamic rupture simulations on a fault network in the Corinth Rift
NASA Astrophysics Data System (ADS)
Durand, V.; Hok, S.; Boiselet, A.; Bernard, P.; Scotti, O.
2017-03-01
The Corinth rift (Greece) is made of a complex network of fault segments, typically 10-20 km long, separated by stepovers. Assessing the maximum magnitude possible in this region requires accounting for multisegment rupture. Here we apply numerical models of dynamic rupture to quantify the probability of a multisegment rupture in the rift, based on the knowledge of the fault geometry and on the magnitude of the historical and palaeoearthquakes. We restrict our application to dynamic rupture on the most recent and active fault network of the western rift, located on the southern coast. We first define several models, varying the main physical parameters that control the rupture propagation. We keep the regional stress field and stress drop constant, and we test several fault geometries, several positions of the faults in their seismic cycle, several values of the critical distance (and thus several fracture energies) and two different hypocentres (thus testing two directivity hypotheses). We obtain different scenarios in terms of the number of ruptured segments and the final magnitude (from M = 5.8 for a single-segment rupture to M = 6.4 for a whole-network rupture), and find that the main parameter controlling the variability of the scenarios is the fracture energy. We then use a probabilistic approach to quantify the probability of each generated scenario. To do so, we implement a logic tree associating a weight with each model input hypothesis. Combining these weights, we compute the probability of occurrence of each scenario, and show that multisegment scenarios are very likely (52 per cent), but that the whole-network rupture scenario is unlikely (14 per cent).
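The weighting scheme described above (a logic tree whose end-branch probability is the product of the weights of its input hypotheses) can be sketched directly. The hypothesis names and weights below are illustrative placeholders, not the study's values.

```python
# Sketch of logic-tree scenario weighting: each input hypothesis gets branch
# weights summing to 1, and a scenario's probability is the product of its
# branch weights. All names and weights are invented for illustration.
from itertools import product

branch_sets = {
    "geometry":        [("planar", 0.5), ("listric", 0.5)],
    "cycle_position":  [("early", 0.3), ("late", 0.7)],
    "fracture_energy": [("low", 0.4), ("high", 0.6)],
    "hypocentre":      [("west", 0.5), ("east", 0.5)],
}

def scenario_probabilities(branch_sets):
    """Enumerate every end branch and its probability."""
    names = list(branch_sets)
    probs = {}
    for combo in product(*(branch_sets[n] for n in names)):
        labels = tuple(label for label, _ in combo)
        p = 1.0
        for _, w in combo:
            p *= w
        probs[labels] = p
    return probs

probs = scenario_probabilities(branch_sets)

# Probability of a scenario class = sum over matching end branches,
# e.g. every combination with low fracture energy.
p_low_ge = sum(p for labels, p in probs.items() if "low" in labels)
```

Summing end-branch probabilities over the branches whose dynamic rupture simulations produced multisegment ruptures is how a figure like the study's 52 per cent would be obtained.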
Pana, D.
2006-01-01
Re-examination of selected MVT outcrops and cores in the Interior Plains and Rocky Mountains of Alberta, corroborated by previous paragenetic, isotopic and structural data, suggests Laramide structural channelling of dolomitizing and mineralizing fluids into strained carbonate rocks. At Pine Point, extensional faults underlying the trends of MVT ore bodies and brittle faults overprinting the Great Slave Lake Shear Zone define a pinnate fault geometry and appear to be kinematically linked. Chemical and isotopic characteristics of MVT parental fluids are consistent with seawater and brine convection within fault-confined vertical aquifers, strong water-basement rock interaction, metal leaching from the basement, and focused release of hydrothermal fluids within linear zones of strained carbonate caprocks. Zones of recurrent strain in the basement and a cap of carbonate strata constitute the critical criteria for MVT exploration target selection in the WCSB.
Faulting and hydration of the Juan de Fuca plate system
NASA Astrophysics Data System (ADS)
Nedimović, Mladen R.; Bohnenstiehl, DelWayne R.; Carbotte, Suzanne M.; Pablo Canales, J.; Dziak, Robert P.
2009-06-01
Multichannel seismic observations provide the first direct images of crustal-scale normal faults within the Juan de Fuca plate system and indicate that brittle deformation extends up to ~200 km seaward of the Cascadia trench. Within the sedimentary layering, steeply dipping faults are identified by stratigraphic offsets, with maximum throws of 110 ± 10 m found near the trench. Fault throws diminish both upsection and seaward from the trench. Long-term throw rates are estimated to be 13 ± 2 mm/kyr. Faulted offsets within the sedimentary layering are typically linked to larger offset scarps in the basement topography, suggesting reactivation of the normal fault systems formed at the spreading center. Imaged reflections within the gabbroic igneous crust indicate shallowing fault dips at depth. These reflections require local alteration to produce an impedance contrast, indicating that the imaged fault structures provide pathways for fluid transport and hydration. As the depth extent of imaged faulting within this young and sediment-insulated oceanic plate is primarily limited to approximately Moho depths, fault-controlled hydration appears to be largely restricted to crustal levels. If dehydration embrittlement is an important mechanism for triggering intermediate-depth earthquakes within the subducting slab, then the limited occurrence rate and magnitude of intraslab seismicity at the Cascadia margin may in part be explained by the limited amount of water embedded in the uppermost oceanic mantle prior to subduction. The distribution of submarine earthquakes within the Juan de Fuca plate system indicates that propagator wake areas are likely to be more faulted, and therefore more hydrated, than other parts of this plate system. However, being largely restricted to crustal levels, this localized increase in hydration generally does not appear to have a measurable effect on the intraslab seismicity along most of the subducted propagator wakes at the Cascadia margin.
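The long-term throw rate quoted above is, in essence, total stratigraphic throw divided by the age of the offset horizon. The 110 m maximum throw comes from the abstract; the ~8.5 Myr horizon age below is an assumed illustrative value, not a figure from the study.

```python
# Back-of-the-envelope sketch of a long-term fault throw rate.
# Note the unit shortcut: mm/kyr is numerically equal to m/Myr
# (1 m = 1000 mm and 1 Myr = 1000 kyr, so the factors cancel).

def throw_rate_mm_per_kyr(throw_m, age_myr):
    """Throw in metres accumulated over age in Myr, returned in mm/kyr."""
    return throw_m / age_myr

# 110 m of throw (from the abstract) over an assumed ~8.5 Myr horizon age
# gives ~12.9 mm/kyr, consistent with the reported 13 +/- 2 mm/kyr.
rate = throw_rate_mm_per_kyr(110, 8.5)
```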
Experimental fault characterization of a neural network
NASA Technical Reports Server (NTRS)
Tan, Chang-Huong
1990-01-01
The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to be increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages linearly decrease with increasing network size.
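The link-fault versus node-fault comparison can be illustrated with a deliberately tiny fault-injection experiment: zero one incoming weight (a stuck link) versus zero all of a unit's weights (a dead node) and compare misclassification rates. The one-unit network and Gaussian data below are synthetic stand-ins, not the networks or measurements from the study.

```python
# Toy fault-injection sketch: compare a single link fault (one weight stuck
# at zero) with a node fault (all weights of a unit stuck at zero) on a
# one-unit linear classifier over synthetic, linearly separable data.
import random

random.seed(1)
# Two Gaussian classes in 2-D, centred at (1, 1) and (-1, -1).
data = ([((random.gauss(1, 0.3), random.gauss(1, 0.3)), 1) for _ in range(200)]
        + [((random.gauss(-1, 0.3), random.gauss(-1, 0.3)), 0) for _ in range(200)])

def classify(weights, x):
    # Single output unit: threshold the weighted sum at zero.
    return 1 if weights[0] * x[0] + weights[1] * x[1] >= 0 else 0

def error_rate(weights):
    wrong = sum(1 for x, label in data if classify(weights, x) != label)
    return wrong / len(data)

healthy = [1.0, 1.0]
link_fault = [0.0, 1.0]   # one incoming link stuck at zero
node_fault = [0.0, 0.0]   # whole unit dead: output is always class 1
```

With redundant, correlated inputs, losing one link barely moves the error rate, while killing the node drives it to chance level, echoing the abstract's finding that link faults matter far less than node faults.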
Gerlach, T.M.; Doukas, M.P.; McGee, K.A.; Kessler, R.
1998-01-01
We used the closed-chamber method to measure soil CO2 efflux over a three-year period at the Horseshoe Lake tree kill (HLTK), the largest tree kill on Mammoth Mountain in central eastern California. Efflux contour maps show a significant decline in the areas and rates of CO2 emission from 1995 to 1997. The emission rate fell from 350 t d⁻¹ (metric tons per day) in 1995 to 130 t d⁻¹ in 1997. The trend suggests a return to background soil CO2 efflux levels by early to mid-1999 and may reflect exhaustion of CO2 in a deep reservoir of accumulated gas and/or mechanical closure or sealing of fault conduits transmitting gas to the surface. However, emissions rose to 220 t d⁻¹ on 23 September 1997 at the onset of a degassing event that lasted until 5 December 1997. Recent reservoir recharge and/or extension-enhanced gas flow may have caused the degassing event.
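The kind of extrapolation behind the "return to background by early to mid-1999" statement can be sketched by assuming an exponential decay of the emission rate. The 1995 and 1997 rates are from the abstract; the decay form and the ~40 t/d background level are illustrative assumptions, not the authors' method.

```python
# Sketch: assume the emission rate decays exponentially, r(t) = r0*exp(-k*t),
# calibrate k from two measurements, and solve for when r(t) reaches an
# assumed background level. Only the 350 and 130 t/d rates are from the
# abstract; the background value is invented.
import math

def years_to_reach(r0, r1, dt, target):
    """Years after the first measurement until r0*exp(-k*t) hits target,
    with k calibrated from rates r0 and r1 measured dt years apart."""
    k = math.log(r0 / r1) / dt          # decay constant, 1/yr
    return math.log(r0 / target) / k

t = years_to_reach(350, 130, 2, 40)     # measured 1995 -> 1997; 40 t/d assumed
year = 1995 + t                         # lands in early-to-mid 1999
```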
A tectonic model for the Tertiary evolution of strike slip faults and rift basins in SE Asia
NASA Astrophysics Data System (ADS)
Morley, C. K.
2002-04-01
Models for the Tertiary evolution of SE Asia fall into two main types: a pure escape tectonics model with no proto-South China Sea, and subduction of proto-South China Sea oceanic crust beneath Borneo. A related problem is which, if any, of the main strike-slip faults (Mae Ping, Three Pagodas and Ailao Shan-Red River (ASRR)) cross Sundaland to the NW Borneo margin to facilitate continental extrusion. Recent results investigating strike-slip faults, rift basins, and metamorphic core complexes are reviewed and a revised tectonic model for SE Asia proposed. Key points of the new model include: (1) The ASRR shear zone was mainly active in the Eocene-Oligocene in order to link with extension in the South China Sea. The ASRR was less active during the Miocene (tens of kilometres of sinistral displacement), with minor amounts of South China Sea spreading centre extension transferred to the ASRR shear zone. (2) At least three important regions of metamorphic core complex development affected Indochina from the Oligocene-Miocene (Mogok gneiss belt; Doi Inthanon and Doi Suthep; around the ASRR shear zone). Hence, Paleogene crustal thickening, buoyancy-driven crustal collapse, and lower crustal flow are important elements of the Tertiary evolution of Indochina. (3) Subduction of a proto-South China Sea oceanic crust during the Eocene-Early Miocene is necessary to explain the geological evolution of NW Borneo and must be built into any model for the region. (4) The Eocene-Oligocene collision of NE India with Burma activated extrusion tectonics along the Three Pagodas, Mae Ping, Ranong and Klong Marui faults and right lateral motion along the Sumatran subduction zone. (5) The only strike-slip fault link to the NW Borneo margin occurred along the trend of the ASRR fault system, which passes along strike into a right lateral transform system including the Baram line.
Evidence for Near-Road Air Pollution Abatement by Tree Cover
Urbanized areas represent concentrated demand for ecosystem services to buffer hazards and promote healthful lifestyles. Urban tree cover has been linked to multiple local health benefits including clean air and water, flood and drought protection, heat mitigation, and opportuni...
A Case Study of a Combat Helicopter’s Single Unit Vulnerability.
1987-03-01
(Figure list residue: 2.6 Generic Fault Tree Diagram; 2.7 Example Kill Diagram; 2.8 Example EEA Summary.) ...that of the vulnerability program, a susceptibility program is subdivided into three major tasks. First is an essential elements analysis (EEA)... which leads to the final undesired event in much the same manner as an FTA. An example EEA is provided in Figure 2.8. [Ref. 1: p. 226]
Probabilistic Risk Assessment: A Bibliography
NASA Technical Reports Server (NTRS)
2000-01-01
Probabilistic risk analysis is an integration of failure modes and effects analysis (FMEA), fault tree analysis, and other techniques to assess the potential for failure and to find ways to reduce risk. This bibliography references 160 documents in the NASA STI Database that contain the major concepts (probabilistic risk assessment, risk, and probability theory) in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
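The fault-tree component of probabilistic risk assessment reduces, in its simplest form, to propagating basic-event probabilities through AND/OR gates under an independence assumption. The event names and probabilities below are illustrative, not drawn from any of the cited documents.

```python
# Minimal fault-tree sketch: top-event probability from AND/OR gates,
# assuming independent basic events. All numbers are invented.

def and_gate(probs):
    """All inputs must fail: product of probabilities."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """At least one input fails: 1 - P(none fail), for independent events."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

# Hypothetical top event: loss of cooling =
#   (pump A fails AND pump B fails) OR valve sticks closed.
p_pumps = and_gate([1e-2, 1e-2])     # redundant pumps: 1e-4
p_top = or_gate([p_pumps, 1e-4])     # combined with an independent valve fault
```

Redundancy shows up directly in the arithmetic: the AND gate drives the pump pair's contribution down to 1e-4, and the OR gate makes the top event only about twice as likely as either remaining single-point contributor.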
Gaining Insight Into Femtosecond-scale CMOS Effects using FPGAs
2015-03-24
paths or detecting gross path delay faults, but for characterizing subtle aging effects, there is a need to isolate very short paths and detect very... data using COTS FPGAs and novel self-test. Hardware experiments using a 28 nm FPGA demonstrate isolation of small sets of transistors, detection of... hold the static configuration data specifying the LUT function. A set of inverters drives the SRAM contents into a pass-gate multiplexor tree; we
EvolView, an online tool for visualizing, annotating and managing phylogenetic trees.
Zhang, Huangkai; Gao, Shenghan; Lercher, Martin J; Hu, Songnian; Chen, Wei-Hua
2012-07-01
EvolView is a web application for visualizing, annotating and managing phylogenetic trees. First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html.
Linking definitions, mechanisms, and modeling of drought-induced tree death.
Anderegg, William R L; Berry, Joseph A; Field, Christopher B
2012-12-01
Tree death from drought and heat stress is a critical and uncertain component in forest ecosystem responses to a changing climate. Recent research has illuminated how tree mortality is a complex cascade of changes involving interconnected plant systems over multiple timescales. Explicit consideration of the definitions, dynamics, and temporal and biological scales of tree mortality research can guide experimental and modeling approaches. In this review, we draw on the medical literature concerning human death to propose a water resource-based approach to tree mortality that considers the tree as a complex organism with a distinct growth strategy. This approach provides insight into mortality mechanisms at the tree and landscape scales and presents promising avenues into modeling tree death from drought and temperature stress.
Interface For Fault-Tolerant Control System
NASA Technical Reports Server (NTRS)
Shaver, Charles; Williamson, Michael
1989-01-01
Interface unit and controller emulator developed for research on electronic helicopter-flight-control systems equipped with artificial intelligence. Interface unit interrupt-driven system designed to link microprocessor-based, quadruply-redundant, asynchronous, ultra-reliable, fault-tolerant control system (controller) with electronic servocontrol unit that controls set of hydraulic actuators. Receives digital feedforward messages from, and transmits digital feedback messages to, controller through differential signal lines or fiber-optic cables (thus far only differential signal lines have been used). Analog signals transmitted to and from servocontrol unit via coaxial cables.