Automatic translation of digraph to fault-tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.
1992-01-01
The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
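For readers unfamiliar with the loop-breaking step mentioned above, the short Python sketch below illustrates how a top-down walk of a digraph failure model can detect the cycles that force the translator to fall back on minimal cut sets. The dictionary representation, node names, and the `find_cycle_nodes` helper are illustrative assumptions, not the NTRS implementation.

```python
def find_cycle_nodes(digraph, target):
    """Return the set of nodes lying on a cycle reachable upstream of `target`.

    `digraph` maps each node to the list of nodes that feed into it
    (its upstream causes), mirroring a failure-space digraph.
    """
    on_cycle = set()

    def visit(node, path):
        if node in path:                          # node already on the current path -> cycle
            on_cycle.update(path[path.index(node):])
            return
        for upstream in digraph.get(node, []):
            visit(upstream, path + [node])

    visit(target, [])
    return on_cycle

# Toy digraph: pump_fail and valve_fail feed each other (a loop); both feed loss_of_flow.
example = {
    "loss_of_flow": ["pump_fail", "valve_fail"],
    "pump_fail": ["valve_fail", "motor_fail"],
    "valve_fail": ["pump_fail"],
    "motor_fail": [],
}
print(find_cycle_nodes(example, "loss_of_flow"))  # {'pump_fail', 'valve_fail'}
```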
Tutorial: Advanced fault tree applications using HARP
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.
1993-01-01
Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance
NASA Astrophysics Data System (ADS)
Wang, Jian; Yang, Zhenwei; Kang, Mei
2018-01-01
This paper attempts to apply the fault tree analysis method to the corrective maintenance of grid communication systems. Through the establishment of a fault tree model of a typical system and the use of engineering experience, fault tree analysis theory is applied to the model, covering structure functions, probability importance, and related measures. The results show that fault tree analysis enables fast fault localization and effective repair of the system. The analysis also shows that the fault tree method offers useful guidance for researching and upgrading the reliability of the system.
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems draws on model-based reasoning, fault tree analysis, rule-based inference, and other approaches. Model-based reasoning builds models of the system either from mathematical formulations or from experimental models. Fault tree analysis identifies the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds its model from expert knowledge. These models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. When an error occurs, fault diagnosis is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made based on threshold values using fault trees. Since these decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time, capturing the contents of the fault trees as the initial state of the decision trees.
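As a rough illustration of the idea of learning threshold-style fault decisions from data, the sketch below trains a decision tree on synthetic two-channel telemetry. The feature names, thresholds, and data are invented for illustration and do not come from the ISS system described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
pressure = rng.normal(100.0, 5.0, n)      # hypothetical sensor channels
temperature = rng.normal(20.0, 2.0, n)
# Label a "fault" whenever either channel drifts past a (hidden) threshold.
fault = ((pressure < 92.0) | (temperature > 24.0)).astype(int)

X = np.column_stack([pressure, temperature])
clf = DecisionTreeClassifier(max_depth=3).fit(X, fault)

# The learned tree recovers threshold rules analogous to fault-tree branch points.
print(export_text(clf, feature_names=["pressure", "temperature"]))
print(clf.predict([[90.0, 21.0]]))        # expected to be flagged as a fault
```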
Fault trees and sequence dependencies
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.
1990-01-01
One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are described, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because the trees are solved by conversion to a Markov model. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
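The sketch below shows, under assumed failure rates, why a sequence-dependent construct such as a cold-spare gate needs a Markov solution rather than a static combinatorial one: the spare can only fail after the primary has failed. This is a minimal stand-in for the approach, not HARP itself.

```python
import numpy as np
from scipy.linalg import expm

lam_primary, lam_spare = 1e-3, 1e-3          # failures per hour (assumed)
# States: 0 = primary up (spare cold), 1 = primary down / spare active, 2 = both down.
Q = np.array([[-lam_primary, lam_primary, 0.0],
              [0.0,          -lam_spare,  lam_spare],
              [0.0,           0.0,        0.0]])

t = 1000.0                                    # mission time in hours
P = expm(Q * t)                               # state transition probabilities over [0, t]
print("cold-spare gate failure probability:", P[0, 2])

# A (wrong) static AND of two independent components gives a different number,
# because it ignores the fact that the cold spare cannot fail while dormant.
p = 1.0 - np.exp(-lam_primary * t)
print("naive static AND:", p * p)
```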
A dynamic fault tree model of a propulsion system
NASA Technical Reports Server (NTRS)
Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila
2006-01-01
We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
Fault tree models for fault tolerant hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Tuazon, Jezus O.
1991-01-01
Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
Technology transfer by means of fault tree synthesis
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Since Fault Tree Analysis (FTA) attempts to model and analyze failure processes of engineering systems, it forms a common technique of good industrial practice. By contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a Knowledge Base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the later stages of an industrial process with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage where the fault has been detected, (ii) the formation of an interface made of input faults that might occur upstream, (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance, and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus proving the functionality of the developed methodology.
Software For Fault-Tree Diagnosis Of A System
NASA Technical Reports Server (NTRS)
Iverson, Dave; Patterson-Hine, Ann; Liao, Jack
1993-01-01
Fault Tree Diagnosis System (FTDS) computer program is automated-diagnostic-system program identifying likely causes of specified failure on basis of information represented in system-reliability mathematical models known as fault trees. Is modified implementation of failure-cause-identification phase of Narayanan's and Viswanadham's methodology for acquisition of knowledge and reasoning in analyzing failures of systems. Knowledge base of if/then rules replaced with object-oriented fault-tree representation. Enhancement yields more-efficient identification of causes of failures and enables dynamic updating of knowledge base. Written in C language, C++, and Common LISP.
Using Fault Trees to Advance Understanding of Diagnostic Errors.
Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep
2017-11-01
Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
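A minimal numerical sketch of the comparison described above is given below: the top-event distribution of a rare-event OR gate with lognormal basic events is estimated by Monte Carlo sampling and by a moment-matched lognormal closed form. The three-event gate, medians, and error factors are illustrative assumptions, and the closed form shown is a simple moment match rather than necessarily the approximation derived in the article.

```python
import numpy as np

rng = np.random.default_rng(1)
# Basic event probabilities ~ lognormal(median, error factor); EF = p95 / p50.
medians = np.array([1e-4, 3e-4, 5e-5])
error_factors = np.array([3.0, 5.0, 10.0])
sigmas = np.log(error_factors) / 1.645           # EF corresponds to the 95th percentile
mus = np.log(medians)

# Monte Carlo: for a rare-event OR gate the top probability ~ sum of basic event probabilities.
samples = np.exp(mus + sigmas * rng.standard_normal((200_000, 3)))
top_mc = samples.sum(axis=1)

# Closed-form alternative: match the mean and variance of the sum with a lognormal.
means = np.exp(mus + sigmas**2 / 2)
variances = (np.exp(sigmas**2) - 1) * np.exp(2 * mus + sigmas**2)
m, v = means.sum(), variances.sum()
sigma_top = np.sqrt(np.log(1 + v / m**2))
mu_top = np.log(m) - sigma_top**2 / 2

for q, z in [(0.50, 0.0), (0.95, 1.645)]:
    print(f"q={q}: MC={np.quantile(top_mc, q):.3e} "
          f"lognormal fit={np.exp(mu_top + z * sigma_top):.3e}")
```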
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection between nodes in the graph, including loops and cycles. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860), available from COSMIC, are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc.
DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
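The sketch below illustrates the recursive, top-down cut set combination described in the CUTSETS abstract above: OR gates union their children's cut sets, AND gates form cross-products, and non-minimal sets are pruned. The dictionary-based tree format, gate names, and helper function are assumptions for illustration, not the CUTSETS input format.

```python
from itertools import product

def cut_sets(node, tree):
    kind, children = tree[node]
    if kind == "BASIC":
        return [frozenset([node])]
    child_sets = [cut_sets(c, tree) for c in children]
    if kind == "OR":
        combined = [cs for sets in child_sets for cs in sets]
    elif kind == "AND":
        combined = [frozenset().union(*combo) for combo in product(*child_sets)]
    else:
        raise ValueError(f"unsupported gate type {kind}")
    # Keep only minimal sets: drop any set that is a proper superset of another.
    minimal = [s for s in combined
               if not any(other < s for other in combined)]
    return list(dict.fromkeys(minimal))       # remove duplicates, keep order

# Toy fault tree: TOP = (A AND B) OR C
tree = {
    "TOP": ("OR",  ["G1", "C"]),
    "G1":  ("AND", ["A", "B"]),
    "A":   ("BASIC", []), "B": ("BASIC", []), "C": ("BASIC", []),
}
print([sorted(s) for s in cut_sets("TOP", tree)])   # [['A', 'B'], ['C']]
```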
Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing
2017-01-14
In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the staff generally lack professional knowledge and pay limited attention to these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logical structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while the one symptom-to-two faults pattern performs less well but is still worth further research. This model implements diagnosis for most kinds of faults in the aquaculture IoT.
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis, and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
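As a small illustration of the probabilistic importance concepts mentioned above, the sketch below computes Fussell-Vesely and Birnbaum measures from minimal cut sets under the rare-event approximation. The cut sets, probabilities, and helper functions are assumed for the example and are not taken from the IMPORTANCE code.

```python
from math import prod

def top_probability(cut_sets, p):
    # Rare-event approximation: top probability ~ sum of minimal cut set probabilities.
    return sum(prod(p[e] for e in cs) for cs in cut_sets)

cut_sets = [{"A", "B"}, {"C"}]              # TOP = (A AND B) OR C
p = {"A": 1e-2, "B": 5e-3, "C": 1e-4}

q_top = top_probability(cut_sets, p)
for event in p:
    # Fussell-Vesely: fraction of the top probability carried by cut sets containing the event.
    fv = sum(prod(p[e] for e in cs) for cs in cut_sets if event in cs) / q_top
    # Birnbaum: sensitivity of the top probability to the event's own probability.
    birnbaum = (top_probability(cut_sets, {**p, event: 1.0})
                - top_probability(cut_sets, {**p, event: 0.0}))
    print(f"{event}: Fussell-Vesely={fv:.3f}  Birnbaum={birnbaum:.2e}")
```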
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarrack, A.G.
The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support systems failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses tomore » calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the ''Facility Commitments'' section. The purpose of the ''Assumptions'' section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).« less
DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarity between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimal cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimal cut set for a node is any cut set with the property that, if any one of the failures in the set were removed, the occurrence of the other failures in the set would not cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failures. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal root node. A subtree is created for each of the inputs to the digraph terminal node, and the roots of those subtrees are added as children of the top node of the fault tree. Every node in the digraph upstream of the terminal node will be visited and converted. During the conversion process, the algorithm keeps track of the path from the digraph terminal node to the current digraph node. If a node is visited twice, then the program has found a cycle in the digraph. This cycle is broken by finding the minimal cut sets of the twice-visited digraph node and forming those cut sets into subtrees. Another implementation of the algorithm resolves loops by building a subtree based on the digraph minimal cut sets calculation; it does not reduce the subtree to minimal cut set form. This second implementation produces larger fault trees, but runs much faster than the version using minimal cut sets since it does not spend time reducing the subtrees to minimal cut sets. The fault trees produced by DG TO FT will contain OR gates, AND gates, Basic Event nodes, and NOP gates. The results of a translation can be output as a text object description of the fault tree similar to the text digraph input format. The translator can also output a LISP language formatted file and an augmented LISP file which can be used by the FTDS (ARC-13019) diagnosis system, available from COSMIC, which performs diagnostic reasoning using the fault tree as a knowledge base. DG TO FT is written in C-language to be machine independent.
It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. DG TO FT is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is provided on the distribution medium. DG TO FT was developed in 1992. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc. System 7 is a trademark of Apple Computer, Inc. Microsoft Word is a trademark of Microsoft Corporation.
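A compressed sketch of the translation idea is shown below: each digraph node becomes an OR gate over its own basic failure and subtrees built from its upstream inputs, and a node revisited on the current path signals a cycle. For brevity the sketch simply stops expanding at a cycle, whereas the tool described above substitutes the node's minimal cut sets; the data format and names are assumptions.

```python
import json

def digraph_to_fault_tree(digraph, node, path=()):
    # Each digraph node's failure is an OR of its own basic failure and
    # subtrees built from its upstream causes.
    children = [{"type": "BASIC", "name": node + "_failure"}]
    for upstream in digraph.get(node, []):
        if upstream in path:          # cycle detected: stop expanding this branch
            continue
        children.append(digraph_to_fault_tree(digraph, upstream, path + (node,)))
    return {"type": "OR", "name": node, "children": children}

digraph = {
    "loss_of_flow": ["pump_fail", "valve_fail"],
    "pump_fail": ["valve_fail"],
    "valve_fail": ["pump_fail"],      # loop with pump_fail
}
print(json.dumps(digraph_to_fault_tree(digraph, "loss_of_flow"), indent=2))
```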
Integrated Approach To Design And Analysis Of Systems
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1993-01-01
Object-oriented fault-tree representation unifies evaluation of reliability and diagnosis of faults. Program and fault tree described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). Augmented fault tree object contains more information than fault tree object used in quantitative analysis of reliability. Additional information needed to diagnose faults in system represented by fault tree.
Interim reliability evaluation program, Browns Ferry fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, M.E.
1981-01-01
An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation programs, simplifying the recording and displaying of events, yet maintaining the system of identifying faults. The level of investigation is not changed. The analytical thought process inherent in the conventional method is not compromised. But the abbreviated method takes less time, and the fault modes are much more visible.
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
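To make the semi-Markov idea concrete, the sketch below simulates a three-state degradation process with non-exponential (Weibull) sojourn times and estimates the resulting time-to-failure distribution. The states, Weibull parameters, and sample size are illustrative assumptions, not the four-element model studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
# nominal -> degraded -> failed, with Weibull-distributed sojourn times (hours).
t_nominal  = 500.0 * rng.weibull(2.0, N)     # holding time in the nominal state
t_degraded = 100.0 * rng.weibull(1.5, N)     # holding time once degraded
time_to_failure = t_nominal + t_degraded     # time to reach the absorbing failed state

print("mean time to failure  :", time_to_failure.mean())
print("5th / 95th percentiles:", np.percentile(time_to_failure, [5, 95]))
```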
[The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].
Liu, Hongbin
2015-11-01
In this paper, the traditional fault tree analysis method is presented, and its application characteristics in medical equipment maintenance are described in detail. Significant changes are made when the traditional fault tree analysis method is introduced into medical equipment maintenance: the logic symbols, logic analysis, and calculations are given up, along with the complicated procedures, and only the intuitive, practical fault tree diagram is kept. The fault tree diagram itself also differs: the fault tree is no longer a logical tree but a thinking tree for troubleshooting, the definition of the fault tree's nodes is different, and the composition of the fault tree's branches is also different.
NASA Technical Reports Server (NTRS)
Lee, Charles; Alena, Richard L.; Robinson, Peter
2004-01-01
Starting from an ISS fault tree example, we present a method to convert fault trees to decision trees. The method shows that visualizing the root causes of faults becomes easier and that tree manipulation becomes more programmatic via available decision tree programs. The decision tree visualization used for diagnostics is straightforward and easy to understand. For real-time ISS fault diagnostics, the status of the systems can be shown by running the signals through the trees and seeing where they stop. Another advantage of using decision trees is that the trees can learn fault patterns and predict future faults from historical data. The learning is not limited to static data sets but can also be done online: by accumulating real-time data sets, the decision trees can gain and store fault patterns and recognize them when they recur.
Object-oriented fault tree models applied to system diagnosis
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.
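A minimal object-oriented sketch in the spirit of the representation described above is given below: each gate object holds its children and can forward-chain a set of observed basic failures up to the top event. Class names and the example tree are assumptions; the actual system is implemented in Common LISP with Flavors.

```python
class Node:
    def __init__(self, name, kind, children=()):
        self.name, self.kind, self.children = name, kind, list(children)

    def occurs(self, observed):
        """Forward-chain observed basic failures up through the gates."""
        if self.kind == "BASIC":
            return self.name in observed
        votes = [child.occurs(observed) for child in self.children]
        return all(votes) if self.kind == "AND" else any(votes)

# TOP = (sensor_fail AND backup_fail) OR power_loss
top = Node("TOP", "OR", [
    Node("G1", "AND", [Node("sensor_fail", "BASIC"),
                       Node("backup_fail", "BASIC")]),
    Node("power_loss", "BASIC"),
])
print(top.occurs({"sensor_fail"}))                  # False: the AND gate is not satisfied
print(top.occurs({"sensor_fail", "backup_fail"}))   # True: top event explained
```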
Modular techniques for dynamic fault-tree analysis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Dugan, Joanne B.
1992-01-01
It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis Smith; James Knudsen
As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion system operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation, or in this case the failure, of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude their use on real-world problems.
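The sketch below shows the kind of N-out-of-M evaluation mentioned above, applied per phase under the simplifying assumption that phases are independent, which is precisely the shortcut that real phased-mission analysis (and the cut set editing discussed in the paper) must avoid. Phase durations, failure rates, and success criteria are assumptions, not the benchmark values.

```python
from math import comb, exp

def k_of_n_success(p_unit_ok, k, n):
    """Probability that at least k of n identical, independent units survive."""
    return sum(comb(n, j) * p_unit_ok**j * (1 - p_unit_ok)**(n - j)
               for j in range(k, n + 1))

lam = 5e-5                         # thruster-assembly failure rate per hour (assumed)
phases = [(2000.0, 3), (5000.0, 4), (1000.0, 2)]   # (duration h, assemblies required)

p_mission = 1.0
for duration, k_required in phases:
    p_unit = exp(-lam * duration)                  # exponential survival over the phase
    p_mission *= k_of_n_success(p_unit, k_required, n=5)

print("optimistic phase-independent mission success:", p_mission)
```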
Failure analysis of energy storage spring in automobile composite brake chamber
NASA Astrophysics Data System (ADS)
Luo, Zai; Wei, Qing; Hu, Xiaofeng
2015-02-01
This paper takes the energy storage spring of the parking brake cavity, part of the automobile composite brake chamber, as its research object. A fault tree model of the energy storage spring failures that cause parking brake failure is constructed based on the fault tree analysis method. Next, the parking brake failure model of the energy storage spring is established by analyzing the working principle of the composite brake chamber. Finally, the working load and push rod stroke data measured on the comprehensive valve test bed are used to validate the failure model. The experimental result shows that the failure model can distinguish whether the energy storage spring is faulty.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
Fault-Tree Compiler (FTC) program is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
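For orientation, the sketch below evaluates the five gate types named above for statistically independent basic events; independence is what makes the gate-by-gate formulas valid, and the probabilities used are arbitrary. This is a toy illustration, not the FTC solution technique, which also supports hierarchical trees and sensitivity sweeps.

```python
from math import comb, prod

def p_and(ps):              # all inputs fail
    return prod(ps)

def p_or(ps):               # at least one input fails
    return 1.0 - prod(1.0 - p for p in ps)

def p_xor(p1, p2):          # exactly one of two inputs fails
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)

def p_invert(p):            # complement of the input event
    return 1.0 - p

def p_m_of_n(p, m, n):      # at least m of n identical, independent inputs fail
    return sum(comb(n, j) * p**j * (1.0 - p)**(n - j) for j in range(m, n + 1))

# Toy tree: TOP = OR( AND(a, b), 2-of-3 channels each failing with probability c )
a, b, c = 1e-3, 2e-3, 5e-3
print("top event probability:", p_or([p_and([a, b]), p_m_of_n(c, 2, 3)]))
print("XOR / INVERT examples:", p_xor(a, b), p_invert(a))
```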
Fault Tree in the Trenches, A Success Story
NASA Technical Reports Server (NTRS)
Long, R. Allen; Goodson, Amanda (Technical Monitor)
2000-01-01
Getting caught up in the explanation of Fault Tree Analysis (FTA) minutiae is easy. In fact, most FTA literature tends to address FTA concepts and methodology. Yet there seems to be few articles addressing actual design changes resulting from the successful application of fault tree analysis. This paper demonstrates how fault tree analysis was used to identify and solve a potentially catastrophic mechanical problem at a rocket motor manufacturer. While developing the fault tree given in this example, the analyst was told by several organizations that the piece of equipment in question had been evaluated by several committees and organizations, and that the analyst was wasting his time. The fault tree/cutset analysis resulted in a joint-redesign of the control system by the tool engineering group and the fault tree analyst, as well as bragging rights for the analyst. (That the fault tree found problems where other engineering reviews had failed was not lost on the other engineering groups.) Even more interesting was that this was the analyst's first fault tree which further demonstrates how effective fault tree analysis can be in guiding (i.e., forcing) the analyst to take a methodical approach in evaluating complex systems.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1992-01-01
FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature, simplifying description of tree and reducing execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
Reset Tree-Based Optical Fault Detection
Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon
2013-01-01
In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267
Langenheim, Victoria E.; Rymer, Michael J.; Catchings, Rufus D.; Goldman, Mark R.; Watt, Janet T.; Powell, Robert E.; Matti, Jonathan C.
2016-03-02
We describe high-resolution gravity and seismic refraction surveys acquired to determine the thickness of valley-fill deposits and to delineate geologic structures that might influence groundwater flow beneath the Smoke Tree Wash area in Joshua Tree National Park. These surveys identified a sedimentary basin that is fault-controlled. A profile across the Smoke Tree Wash fault zone reveals low gravity values and seismic velocities that coincide with a mapped strand of the Smoke Tree Wash fault. Modeling of the gravity data reveals a basin about 2–2.5 km long and 1 km wide that is roughly centered on this mapped strand, and bounded by inferred faults. According to the gravity model the deepest part of the basin is about 270 m, but this area coincides with low velocities that are not characteristic of typical basement complex rocks. Most likely, the density contrast assumed in the inversion is too high or the uncharacteristically low velocities represent highly fractured or weathered basement rocks, or both. A longer seismic profile extending onto basement outcrops would help differentiate which scenario is more accurate. The seismic velocities also determine the depth to water table along the profile to be about 40–60 m, consistent with water levels measured in water wells near the northern end of the profile.
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
Fault tree analysis for urban flooding.
ten Veldhuis, J A E; Clemens, F H L R; van Gelder, P H A J M
2009-01-01
Traditional methods to evaluate flood risk generally focus on heavy storm events as the principal cause of flooding. Conversely, fault tree analysis is a technique that aims at modelling all potential causes of flooding. It quantifies both the overall flood probability and the relative contributions of individual causes of flooding. This paper presents a fault tree model for urban flooding and an application to the case of Haarlem, a city of 147,000 inhabitants. Data from a complaint register, rainfall gauges and hydrodynamic model calculations are used to quantify probabilities of basic events in the fault tree. This results in a flood probability of 0.78/week for Haarlem. It is shown that gully pot blockages contribute to 79% of flood incidents, whereas storm events contribute only 5%. This implies that, for this case, more efficient gully pot cleaning is a more effective strategy to reduce flood probability than enlarging drainage system capacity. Whether this is also the most cost-effective strategy can only be decided after the risk assessment has been complemented with a quantification of the consequences of both types of events. This will be the next step in this study.
A diagnosis system using object-oriented fault tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in common LISP using Flavors.
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at 475 years return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of the several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped fault to develop magnitude-frequency estimates for characteristic earthquakes. Variability of the selected fault parameter is given with a truncated normal random variable distribution presented by standard deviation about a mean value. A Monte Carlo approach, based on the random balanced sampling by logic tree, is used in order to capture the uncertainty in seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in frequency-magnitude distribution of modeled faults as well as the different maps: the overall uncertainty maps provide a confidence interval for the PGA values and the parameter uncertainty maps determine the sensitivity of hazard assessment to variability of every logic tree branch. These branches of logic tree, analyzed through the Monte Carlo approach, are maximum magnitudes, fault length, fault width, fault dip and slip rates. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations while the sensitivity of each parameter to overall variability is determined varying each of the fault parameters while fixing others. However, in this study we do not investigate the sensitivity of mean hazard results to the consideration of different GMPEs. Distribution of possible seismic hazard results is illustrated by 95% confidence factor map, which indicates the dispersion about mean value, and coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters to probabilistic seismic hazard maps.
Decision tree and PCA-based fault diagnosis of rotating machinery
NASA Astrophysics Data System (ADS)
Sun, Weixiang; Chen, Jin; Li, Jiaqing
2007-04-01
After analysing the flaws of conventional fault diagnosis methods, data mining technology is introduced to the fault diagnosis field, and a new method based on the C4.5 decision tree and principal component analysis (PCA) is proposed. In this method, PCA is used to reduce features after data collection, preprocessing and feature extraction. Then, C4.5 is trained on the samples to generate a decision tree model with diagnosis knowledge. Finally, the tree model is used to perform diagnosis analysis. To validate the proposed method, six kinds of running states (normal or without any defect, unbalance, rotor radial rub, oil whirl, shaft crack, and a simultaneous state of unbalance and radial rub) are simulated on a Bently Rotor Kit RK4 to test the C4.5 and PCA-based method and a back-propagation neural network (BPNN). The result shows that the C4.5 and PCA-based diagnosis method has higher accuracy and needs less training time than BPNN.
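A short sketch of a comparable pipeline is given below using scikit-learn, whose decision trees implement CART rather than C4.5 (a substitution worth noting). The synthetic "vibration feature" data and the six class labels are stand-ins for the Bently Rotor Kit measurements, not the paper's data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_per_class, n_features, n_classes = 200, 20, 6
# Separable synthetic features: each running state shifts the feature means.
X = np.vstack([rng.normal(loc=k, scale=1.0, size=(n_per_class, n_features))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)    # 6 machine running states

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Reduce the feature space with PCA, then learn threshold rules with a tree.
model = make_pipeline(PCA(n_components=5), DecisionTreeClassifier(max_depth=6))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```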
NASA Astrophysics Data System (ADS)
Guns, K. A.; Bennett, R. A.; Blisniuk, K.
2017-12-01
To better evaluate the distribution and transfer of strain and slip along the Southern San Andreas Fault (SSAF) zone in the northern Coachella valley in southern California, we integrate geological and geodetic observations to test whether strain is being transferred away from the SSAF system towards the Eastern California Shear Zone through microblock rotation of the Eastern Transverse Ranges (ETR). The faults of the ETR consist of five east-west trending left lateral strike slip faults that have measured cumulative offsets of up to 20 km and as low as 1 km. Present kinematic and block models present a variety of slip rate estimates, from as low as zero to as high as 7 mm/yr, suggesting a gap in our understanding of what role these faults play in the larger system. To determine whether present-day block rotation along these faults is contributing to strain transfer in the region, we are applying 10Be surface exposure dating methods to observed offset channel and alluvial fan deposits in order to estimate fault slip rates along two faults in the ETR. We present observations of offset geomorphic landforms using field mapping and LiDAR data at three sites along the Blue Cut Fault and one site along the Smoke Tree Wash Fault in Joshua Tree National Park which indicate recent Quaternary fault activity. Initial results of site mapping and clast count analyses reveal at least three stages of offset, including potential Holocene offsets, for one site along the Blue Cut Fault, while preliminary 10Be geochronology is in progress. This geologic slip rate data, combined with our new geodetic surface velocity field derived from updated campaign-based GPS measurements within Joshua Tree National Park will allow us to construct a suite of elastic fault block models to elucidate rates of strain transfer away from the SSAF and how that strain transfer may be affecting the length of the interseismic period along the SSAF.
A graphical language for reliability model generation
NASA Technical Reports Server (NTRS)
Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.
1990-01-01
A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L
The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.
Seera, Manjeevan; Lim, Chee Peng; Ishak, Dahaman; Singh, Harapajan
2012-01-01
In this paper, a novel approach to detect and classify comprehensive fault conditions of induction motors using a hybrid fuzzy min-max (FMM) neural network and classification and regression tree (CART) is proposed. The hybrid model, known as FMM-CART, exploits the advantages of both FMM and CART for undertaking data classification and rule extraction problems. A series of real experiments is conducted, whereby the motor current signature analysis method is applied to form a database comprising stator current signatures under different motor conditions. The signal harmonics from the power spectral density are extracted as discriminative input features for fault detection and classification with FMM-CART. A comprehensive list of induction motor fault conditions, viz., broken rotor bars, unbalanced voltages, stator winding faults, and eccentricity problems, has been successfully classified using FMM-CART with good accuracy rates. The results are comparable, if not better, than those reported in the literature. Useful explanatory rules in the form of a decision tree are also elicited from FMM-CART to analyze and understand different fault conditions of induction motors.
A fuzzy decision tree for fault classification.
Zio, Enrico; Baraldi, Piero; Popescu, Irina C
2008-02-01
In plant accident management, the control room operators are required to identify the causes of the accident based on the different patterns of evolution of the monitored process variables. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
Reliability computation using fault tree analysis
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
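The point about the same basic failure appearing in more than one fault path is worth making concrete. The minimal sketch below (illustrative event names and probabilities, not Chelson's actual derivation) shows why a naive gate-by-gate multiplication misestimates the top-event probability when a basic event is shared, and how conditioning on the shared event recovers the exact value.

```python
# Minimal sketch (not Chelson's algorithm): why repeated basic events need
# conditioning. Top event T = (A AND C) OR (B AND C); C appears in both paths.
pA, pB, pC = 0.01, 0.02, 0.05

# Naive bottom-up evaluation that treats the two gate outputs as independent:
p_and1 = pA * pC
p_and2 = pB * pC
p_naive = 1 - (1 - p_and1) * (1 - p_and2)

# Exact value by conditioning on the shared event C (total probability):
#   P(T) = P(T | C) * P(C) + P(T | not C) * P(not C)
p_given_C = 1 - (1 - pA) * (1 - pB)   # with C true, T reduces to A OR B
p_given_notC = 0.0                    # with C false, both paths are blocked
p_exact = p_given_C * pC + p_given_notC * (1 - pC)

print(f"naive: {p_naive:.6e}")
print(f"exact: {p_exact:.6e}")
```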
Object-oriented fault tree evaluation program for quantitative analyses
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1988-01-01
Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
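As a rough illustration of the object representation described above, the sketch below uses hypothetical Python classes (the original work used the Flavors extension to LISP) to show how each event object can carry its gate type, children, reliability data, and a cached intermediate result from the tree reduction. Independence of basic events is assumed for the gate formulas; repeated events would need cut-set treatment.

```python
# Minimal Python sketch of the object-oriented idea described above
# (hypothetical names; the original used LISP Flavors on a TI Explorer).
class Event:
    """A fault-tree event storing structure, reliability data,
    and intermediate results from the tree reduction process."""
    def __init__(self, name, gate=None, children=None, probability=None):
        self.name = name
        self.gate = gate                  # "AND", "OR", or None for basic events
        self.children = children or []    # sub-events feeding this gate
        self.probability = probability    # basic-event reliability data
        self._cached = None               # intermediate result kept on the object

    def evaluate(self):
        # Reuse a previously computed value instead of re-walking the subtree.
        if self._cached is not None:
            return self._cached
        if self.gate is None:
            result = self.probability
        elif self.gate == "AND":
            result = 1.0
            for child in self.children:
                result *= child.evaluate()
        else:  # "OR", assuming independent inputs
            result = 1.0
            for child in self.children:
                result *= 1.0 - child.evaluate()
            result = 1.0 - result
        self._cached = result
        return result

pump = Event("pump fails", probability=1e-3)
valve = Event("valve fails", probability=5e-4)
top = Event("loss of coolant flow", gate="OR", children=[pump, valve])
print(top.evaluate())
```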
NASA Technical Reports Server (NTRS)
Martensen, Anna L.; Butler, Ricky W.
1987-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double-precision floating-point arithmetic) to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX with the VMS operating system.
The Fault Tree Compiler (FTC): Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1989-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
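For readers unfamiliar with the five gate types named above, the following sketch shows one plausible set of probability formulas for them under the assumption of statistically independent inputs; it is not the FTC's FORTRAN solution technique, which additionally bounds the answer to the requested number of digits.

```python
from itertools import combinations
from math import prod

# Illustrative gate formulas for independent inputs (not the FTC's own method).
def and_gate(ps):
    return prod(ps)

def or_gate(ps):
    return 1.0 - prod(1.0 - p for p in ps)

def xor_gate(p1, p2):
    # exactly one of the two inputs occurs
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)

def invert_gate(p):
    return 1.0 - p

def m_of_n_gate(m, ps):
    # probability that at least m of the n independent inputs occur
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for idx in combinations(range(n), k):
            occur = prod(ps[i] for i in idx)
            not_occur = prod(1.0 - ps[i] for i in range(n) if i not in idx)
            total += occur * not_occur
    return total

print(m_of_n_gate(2, [0.01, 0.02, 0.03]))   # 2-of-3 voting example
```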
Systems Theoretic Process Analysis Applied to an Offshore Supply Vessel Dynamic Positioning System
2016-06-01
additional safety issues that were either not identified or inadequately mitigated through the use of Fault Tree Analysis and Failure Modes and... Techniques... Fault Tree Analysis... Fault Tree Analysis Comparison
NASA Astrophysics Data System (ADS)
Koji, Yusuke; Kitamura, Yoshinobu; Kato, Yoshikiyo; Tsutsui, Yoshio; Mizoguchi, Riichiro
In conceptual design, it is important to develop functional structures which reflect the rich experience embodied in the knowledge from previous design failures. Especially, if a designer learns possible abnormal behaviors from a previous design failure, he or she can add an additional function which prevents such abnormal behaviors and faults. To do this, it is crucial to share such knowledge about possible faulty phenomena and how to cope with them. In fact, a part of such knowledge is described in FMEA (Failure Mode and Effect Analysis) sheets, function structure models for systematic design, and fault trees for FTA (Fault Tree Analysis).
Model authoring system for fail safe analysis
NASA Technical Reports Server (NTRS)
Sikora, Scott E.
1990-01-01
The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, the circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object-oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which are then searched to generate either a graphical fault tree analysis or failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
Fault tree analysis for system modeling in case of intentional EMI
NASA Astrophysics Data System (ADS)
Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.
2011-08-01
The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other hand increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is the fault tree analysis (FTA). The FTA is used to determine the system failure probability and also the main contributors to its failure. In this paper the fault tree analysis is introduced and a possible application of that method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
An overview of the phase-modular fault tree approach to phased mission system analysis
NASA Technical Reports Server (NTRS)
Meshkat, L.; Xing, L.; Donohue, S. K.; Ou, Y.
2003-01-01
In this paper we look at how fault tree analysis (FTA), a primary means of performing reliability analysis of phased mission systems (PMS), can meet this challenge by presenting an overview of the modular approach to solving fault trees that represent PMS.
Try Fault Tree Analysis, a Step-by-Step Way to Improve Organization Development.
ERIC Educational Resources Information Center
Spitzer, Dean
1980-01-01
Fault Tree Analysis, a systems safety engineering technology used to analyze organizational systems, is described. Explains the use of logic gates to represent the relationship between failure events, qualitative analysis, quantitative analysis, and effective use of Fault Tree Analysis. (CT)
Fault Tree Analysis: A Research Tool for Educational Planning. Technical Report No. 1.
ERIC Educational Resources Information Center
Alameda County School Dept., Hayward, CA. PACE Center.
This ESEA Title III report describes fault tree analysis and assesses its applicability to education. Fault tree analysis is an operations research tool which is designed to increase the probability of success in any system by analyzing the most likely modes of failure that could occur. A graphic portrayal, which has the form of a tree, is…
Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.
Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C
2015-06-01
An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and are aiming at more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, that is, ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The fault tree developed here is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers to develop more refined and adequate control activities against FMD outbreaks and to find optimum strategies for rapid control. Further application using the current tree will be one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach of intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component that affects the system reliability. Here weakest t-norm based intuitionistic fuzzy fault tree analysis is presented to calculate the fault interval of system components by integrating experts' knowledge and experience in terms of providing the possibility of failure of bottom events. It applies fault-tree analysis, α-cut of intuitionistic fuzzy set and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The result of the proposed method is compared with those of existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Reliability analysis of the solar array based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Jianing, Wu; Shaoze, Yan
2011-07-01
The solar array is an important device used in the spacecraft, which influences the quality of in-orbit operation of the spacecraft and even the launches. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structural importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of the fault, so limiting damage is significant to prevent faults. Furthermore, recommendations for improving reliability associated with damage limitation are discussed, which can be used for the redesigning of the solar array and the reliability growth planning.
EDNA: Expert fault digraph analysis using CLIPS
NASA Technical Reports Server (NTRS)
Dixit, Vishweshwar V.
1990-01-01
Traditionally fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate the real system dependencies and hence are easy to develop, validate and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find. Available algorithms tend to be complicated and slow. On the other hand, the tree analysis (VGRH, Tayl) is well understood and rooted in vast research effort and analytical techniques. The tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a digraph (cyclic) into trees (CLP, LP) is a viable approach to blend the advantages of the representations. Neither the digraphs nor the trees provide the ability to handle heuristic knowledge. An expert system, to capture the engineering knowledge, is essential. We propose an approach here, namely, expert network analysis. We combine the digraph representation and tree algorithms. The models are augmented by probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge. Mixed analysis, with some nodes carrying probabilities, is possible. The tool provides a graphical interface for input, query, and update. With the combined approach it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.
Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua
2015-01-01
The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
Product Support Manager Guidebook
2011-04-01
package is being developed using supportability analysis concepts such as Failure Mode, Effects and Criticality Analysis (FMECA), Fault Tree Analysis (FTA)... Analysis (LORA), Condition Based Maintenance + (CBM+), Fault Tree Analysis (FTA), Failure Mode, Effects, and Criticality Analysis (FMECA), Maintenance Task... Reporting and Corrective Action System (FRACAS), Fault Tree Analysis (FTA), Level of Repair Analysis (LORA), Maintenance Task Analysis (MTA
MIRAP, microcomputer reliability analysis program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jehee, J.N.T.
1989-01-01
A program for a microcomputer is outlined that can determine minimal cut sets from a specified fault tree logic. The speed and memory limitations of the microcomputers on which the program is implemented (Atari ST and IBM) are addressed by reducing the fault tree's size and by storing the cut set data on disk. Extensive well proven fault tree restructuring techniques, such as the identification of sibling events and of independent gate events, reduce the fault tree's size but do not alter its logic. New methods are used for the Boolean reduction of the fault tree logic. Special criteria for combining events in the 'AND' and 'OR' logic avoid the creation of many subsuming cut sets which would all cancel out due to existing cut sets. Figures and tables illustrate these methods. 4 refs., 5 tabs.
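A compact way to see what "determining minimal cut sets from a specified fault tree logic" involves is the top-down expansion sketched below (hypothetical tree encoding, not the MIRAP implementation): OR gates union the cut sets of their children, AND gates cross-combine them, and subsumed cut sets are discarded in a final Boolean reduction step.

```python
# Minimal top-down cut-set sketch (hypothetical tree encoding; not the MIRAP
# code). Gates are tuples ("AND", [...]) or ("OR", [...]); basic events are
# plain strings.
def cut_sets(node):
    if isinstance(node, str):
        return [{node}]
    gate, children = node
    if gate == "OR":
        sets = []
        for child in children:
            sets.extend(cut_sets(child))
    else:  # AND: cross-combine the children's cut sets
        sets = [set()]
        for child in children:
            sets = [s | c for s in sets for c in cut_sets(child)]
    # Boolean reduction: drop any cut set subsumed by a smaller one.
    minimal = []
    for s in sorted(sets, key=len):
        if not any(m <= s for m in minimal):
            minimal.append(s)
    return minimal

top = ("OR", [("AND", ["pump-A", "pump-B"]),
              ("AND", ["pump-A", "valve-C"]),
              "power-supply"])
for cs in cut_sets(top):
    print(sorted(cs))
```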
The FTA Method And A Possibility Of Its Application In The Area Of Road Freight Transport
NASA Astrophysics Data System (ADS)
Poliaková, Adela
2015-06-01
The Fault Tree process utilizes logic diagrams to portray and analyse potentially hazardous events. Three basic symbols (logic gates) are adequate for diagramming any fault tree. However, additional recently developed symbols can be used to reduce the time and effort required for analysis. A fault tree is a graphical representation of the relationship between certain specific events and the ultimate undesired event (2). This paper presents a basic description of the Fault Tree Analysis method and provides a practical view of its possible application to quality improvement in a road freight transport company.
Development and validation of techniques for improving software dependability
NASA Technical Reports Server (NTRS)
Knight, John C.
1992-01-01
A collection of document abstracts are presented on the topic of improving software dependability through NASA grant NAG-1-1123. Specific topics include: modeling of error detection; software inspection; test cases; Magnetic Stereotaxis System safety specifications and fault trees; and injection of synthetic faults into software.
Fault Tree Analysis: Its Implications for Use in Education.
ERIC Educational Resources Information Center
Barker, Bruce O.
This study introduces the concept of Fault Tree Analysis as a systems tool and examines the implications of Fault Tree Analysis (FTA) as a technique for isolating failure modes in educational systems. A definition of FTA and discussion of its history, as it relates to education, are provided. The step by step process for implementation and use of…
Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie
2006-01-01
A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exists. The approach is scalable, allowing inclusion of additional information as detailed data becomes available. The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
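The Monte Carlo estimate mentioned above can be illustrated with a generic sketch like the one below; the subsystem names, failure rates, and mission duration are invented for illustration and have nothing to do with the SAFE tool's actual models.

```python
import random

# Generic Monte Carlo sketch of the kind of statistical estimate described
# above (illustrative subsystem names and rates; not the SAFE C++ tool).
SUBSYSTEM_RATES = {          # assumed constant failure rates, per hour
    "engine": 1.0e-4,
    "avionics": 2.0e-5,
    "separation": 5.0e-5,
}
MISSION_HOURS = 8.0
TRIALS = 200_000

def mission_fails(rng):
    # Mission is lost if any subsystem fails before the mission ends.
    for rate in SUBSYSTEM_RATES.values():
        if rng.expovariate(rate) < MISSION_HOURS:
            return True
    return False

rng = random.Random(42)
losses = sum(mission_fails(rng) for _ in range(TRIALS))
print(f"estimated loss probability: {losses / TRIALS:.4e}")
```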
Preventing medical errors by designing benign failures.
Grout, John R
2003-07-01
One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
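The scald-valve example can be put in rough numerical terms. In the sketch below the probabilities are purely illustrative assumptions; the point is only that the forcing function redirects the dominant consequence of "water too hot" from the harmful event to the benign "no water" event.

```python
# Illustrative numbers only, mirroring the scald-valve example above:
# converting the cause of a harmful failure into the cause of a benign one.
p_water_too_hot = 1e-3        # assumed probability the water is dangerously hot

# Without the forcing function, hot water directly yields the harmful event.
p_scalded_before = p_water_too_hot

# With a scald valve that shuts off flow when the water is too hot, the harmful
# event also requires the valve to fail on demand; otherwise the outcome is the
# benign "no water" event.
p_valve_fails = 1e-2          # assumed failure-on-demand probability
p_scalded_after = p_water_too_hot * p_valve_fails
p_no_water = p_water_too_hot * (1 - p_valve_fails)

print(p_scalded_before, p_scalded_after, p_no_water)
```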
Fault Tree Analysis Application for Safety and Reliability
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies a root cause in system components, but when software is identified as a root cause, it does not build trees into the software component. No commercial software tools have been built specifically for development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
ERIC Educational Resources Information Center
Barker, Bruce O.; Petersen, Paul D.
This paper explores the fault-tree analysis approach to isolating failure modes within a system. Fault tree analysis investigates potentially undesirable events and then looks for failures in sequence that would lead to their occurrence. Relationships among these events are symbolized by AND or OR logic gates, AND used when single events must coexist to…
Evidential Networks for Fault Tree Analysis with Imprecise Knowledge
NASA Astrophysics Data System (ADS)
Yang, Jianping; Huang, Hong-Zhong; Liu, Yu; Li, Yan-Feng
2012-06-01
Fault tree analysis (FTA), as one of the powerful tools in reliability engineering, has been widely used to enhance system quality attributes. In most fault tree analyses, precise values are adopted to represent the probabilities of occurrence of those events. Due to the lack of sufficient data or imprecision of existing data at the early stage of product design, it is often difficult to accurately estimate the failure rates of individual events or the probabilities of occurrence of the events. Therefore, such imprecision and uncertainty need to be taken into account in reliability analysis. In this paper, the evidential networks (EN) are employed to quantify and propagate the aforementioned uncertainty and imprecision in fault tree analysis. The detailed processes for converting some fault tree (FT) logic gates to EN are described. The figures of the logic gates and the converted equivalent EN, together with the associated truth tables and the conditional belief mass tables, are also presented in this work. A new epistemic importance measure is proposed to describe the effect of an event's degree of ignorance. The fault tree of an aircraft engine damaged by oil filter plugs is presented to demonstrate the proposed method.
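A much-simplified stand-in for the belief-mass propagation described above is interval arithmetic on imprecise gate inputs. The sketch below (assumed intervals and independence; not the paper's conditional belief mass tables) propagates lower and upper probability bounds through AND and OR gates.

```python
from math import prod

# Simplified interval propagation through AND/OR gates, used here only as a
# stand-in for the evidential-network machinery described above. Each basic
# event carries an assumed probability interval (lower, upper), and the
# events are treated as independent.
def and_interval(intervals):
    lo = prod(l for l, _ in intervals)
    hi = prod(u for _, u in intervals)
    return lo, hi

def or_interval(intervals):
    lo = 1.0 - prod(1.0 - l for l, _ in intervals)
    hi = 1.0 - prod(1.0 - u for _, u in intervals)
    return lo, hi

# Imprecise basic-event probabilities, e.g. elicited early in design:
pump = (0.001, 0.005)
valve = (0.0005, 0.002)
sensor = (0.01, 0.03)

branch = and_interval([pump, valve])      # pump AND valve
top = or_interval([branch, sensor])       # (pump AND valve) OR sensor
print(f"top-event probability bounds: [{top[0]:.5f}, {top[1]:.5f}]")
```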
Analysis of a hardware and software fault tolerant processor for critical applications
NASA Technical Reports Server (NTRS)
Dugan, Joanne B.
1993-01-01
Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
Probabilistic fault tree analysis of a radiation treatment system.
Ekaette, Edidiong; Lee, Robert C; Cooke, David L; Iftody, Sandra; Craighead, Peter
2007-12-01
Inappropriate administration of radiation for cancer treatment can result in severe consequences such as premature death or appreciably impaired quality of life. There has been little study of vulnerable treatment process components and their contribution to the risk of radiation treatment (RT). In this article, we describe the application of probabilistic fault tree methods to assess the probability of radiation misadministration to patients at a large cancer treatment center. We conducted a systematic analysis of the RT process that identified four process domains: Assessment, Preparation, Treatment, and Follow-up. For the Preparation domain, we analyzed possible incident scenarios via fault trees. For each task, we also identified existing quality control measures. To populate the fault trees we used subjective probabilities from experts and compared results with incident report data. Both the fault tree and the incident report analysis revealed simulation tasks to be most prone to incidents, and the treatment prescription task to be least prone to incidents. The probability of a Preparation domain incident was estimated to be in the range of 0.1-0.7% based on incident reports, which is comparable to the mean value of 0.4% from the fault tree analysis using probabilities from the expert elicitation exercise. In conclusion, an analysis of part of the RT system using a fault tree populated with subjective probabilities from experts was useful in identifying vulnerable components of the system, and provided quantitative data for risk management.
NASA Astrophysics Data System (ADS)
Wu, Jianing; Yan, Shaoze; Xie, Liyang
2011-12-01
To address the impact of solar array anomalies, it is important to perform analysis of the solar array reliability. This paper establishes the fault tree analysis (FTA) and fuzzy reasoning Petri net (FRPN) models of a solar array mechanical system and analyzes reliability to find mechanisms of the solar array fault. The indices final truth degree (FTD) and cosine matching function (CMF) are employed to resolve the issue of how to evaluate the importance and influence of different faults. Thus an improved reliability analysis method is developed by means of the sorting of FTD and CMF. An example is analyzed using the proposed method. The analysis results show that harsh thermal environment and impact caused by particles in space are the most vital causes of the solar array fault. Furthermore, other fault modes and the corresponding improvement methods are discussed. The results reported in this paper could be useful for spacecraft designers, particularly in the process of redesigning the solar array and scheduling its reliability growth plan.
Reconfigurable tree architectures using subtree oriented fault tolerance
NASA Technical Reports Server (NTRS)
Lowrie, Matthew B.
1987-01-01
An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures for both single chip and multichip tree implementations while reducing redundancy in terms of both spare processors and links. VLSI layout is O(n) for binary trees and is directly extensible to N-ary trees and fault tolerance through performance degradation.
Secure Embedded System Design Methodologies for Military Cryptographic Systems
2016-03-31
Fault-Tree Analysis (FTA); Built-In Self-Test (BIST). Introduction: Secure access-control systems restrict operations to authorized users via methods... failures in the individual software/processor elements, the question of exactly how unlikely is difficult to answer. Fault-Tree Analysis (FTA) has a... Collins of Sandia National Laboratories for years of sharing his extensive knowledge of Fail-Safe Design Assurance and Fault-Tree Analysis
Logic flowgraph methodology - A tool for modeling embedded systems
NASA Technical Reports Server (NTRS)
Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.
1991-01-01
The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of the use of such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.
Rymer, M.J.
2000-01-01
The Coachella Valley area was strongly shaken by the 1992 Joshua Tree (23 April) and Landers (28 June) earthquakes, and both events caused triggered slip on active faults within the area. Triggered slip associated with the Joshua Tree earthquake was on a newly recognized fault, the East Wide Canyon fault, near the southwestern edge of the Little San Bernardino Mountains. Slip associated with the Landers earthquake formed along the San Andreas fault in the southeastern Coachella Valley. Surface fractures formed along the East Wide Canyon fault in association with the Joshua Tree earthquake. The fractures extended discontinuously over a 1.5-km stretch of the fault, near its southern end. Sense of slip was consistently right-oblique, west side down, similar to the long-term style of faulting. Measured offset values were small, with right-lateral and vertical components of slip ranging from 1 to 6 mm and 1 to 4 mm, respectively. This is the first documented historic slip on the East Wide Canyon fault, which was first mapped only months before the Joshua Tree earthquake. Surface slip associated with the Joshua Tree earthquake most likely developed as triggered slip given its 5 km distance from the Joshua Tree epicenter and aftershocks. As revealed in a trench investigation, slip formed in an area with only a thin (<3 m thick) veneer of alluvium in contrast to earlier documented triggered slip events in this region, all in the deep basins of the Salton Trough. A paleoseismic trench study in an area of 1992 surface slip revealed evidence of two and possibly three surface faulting events on the East Wide Canyon fault during the late Quaternary, probably latest Pleistocene (first event) and mid- to late Holocene (second two events). About two months after the Joshua Tree earthquake, the Landers earthquake then triggered slip on many faults, including the San Andreas fault in the southeastern Coachella Valley. Surface fractures associated with this event formed discontinuous breaks over a 54-km-long stretch of the fault, from the Indio Hills southeastward to Durmid Hill. Sense of slip was right-lateral; only locally was there a minor (~1 mm) vertical component of slip. Measured dextral displacement values ranged from 1 to 20 mm, with the largest amounts found in the Mecca Hills where large slip values have been measured following past triggered-slip events.
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Bolster, Diogo; Sanchez-Vila, Xavier; Nowak, Wolfgang
2011-05-01
Assessing health risk in hydrological systems is an interdisciplinary field. It relies on the expertise in the fields of hydrology and public health and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties and variabilities present in hydrological, physiological, and human behavioral parameters. Despite significant theoretical advancements in stochastic hydrology, there is still a dire need to further propagate these concepts to practical problems and to society in general. Following a recent line of work, we use fault trees to address the task of probabilistic risk analysis and to support related decision and management problems. Fault trees allow us to decompose the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural divide and conquer approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance, and stage of analysis. Three differences are highlighted in this paper when compared to previous works: (1) The fault tree proposed here accounts for the uncertainty in both hydrological and health components, (2) system failure within the fault tree is defined in terms of risk being above a threshold value, whereas previous studies that used fault trees used auxiliary events such as exceedance of critical concentration levels, and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question. The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values.
None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The RRR for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event. Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The RIR for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
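The risk reduction and risk increase ideas described in this record can be illustrated with a toy fault tree: re-evaluate the top event with each basic event first set to perfect performance and then to certain failure, and compare against the baseline. The event names, tree structure, and probabilities below are invented for illustration and are unrelated to the actual MSET/SAPHIRE models; differences rather than ratios are reported for simplicity.

```python
# Toy illustration of the risk-reduction / risk-increase evaluations described
# above (not the MSET fault tree or the SAPHIRE code).
def top_probability(p):
    # Top event = (records_gap AND inventory_missed) OR access_control_failure
    branch = p["records_gap"] * p["inventory_missed"]
    return 1.0 - (1.0 - branch) * (1.0 - p["access_control_failure"])

baseline = {
    "records_gap": 0.10,
    "inventory_missed": 0.05,
    "access_control_failure": 0.02,
}
base_risk = top_probability(baseline)
print(f"baseline top-event probability: {base_risk:.4f}")

for event in baseline:
    perfect = dict(baseline, **{event: 0.0})   # event performed perfectly
    failed = dict(baseline, **{event: 1.0})    # event fails completely
    reduction = base_risk - top_probability(perfect)
    increase = top_probability(failed) - base_risk
    print(f"{event:24s} risk reduction {reduction:.4f}  risk increase {increase:.4f}")
```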
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is exclusively based on active faults and geodynamic finite element input models, whereas seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slow deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters together with estimations of its slip rate. By default in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is, besides along the active faults, released also in the volumetric continuum elements. From both models we calculated their corresponding activity rates, earthquake rates and final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters through constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results and to what degree. The results of this comparison show that the deformation model, with its internal variability, together with the choice of the ground motion prediction equations (GMPEs), are the most influential parameters. Both have a significant effect on the hazard results. Thus good knowledge of the existence of active faults and of their geometric and activity characteristics is of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
Planning effectiveness may grow on fault trees.
Chow, C W; Haddad, K; Mannino, B
1991-10-01
The first step of a strategic planning process--identifying and analyzing threats and opportunities--requires subjective judgments. By using an analytical tool known as a fault tree, healthcare administrators can reduce the unreliability of subjective decision making by creating a logical structure for problem solving and decision making. A case study of 11 healthcare administrators showed that an analysis technique called prospective hindsight can add to a fault tree's ability to improve a strategic planning process.
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Fault Tree Analysis (FTA) can be used for technology transfer when the relevant problem (called the 'top event' in FTA) is solved in a technology centre and the results are diffused to interested parties (usually Small and Medium Enterprises - SMEs) that do not have the proper equipment and the required know-how to solve the problem on their own. Nevertheless, there is a significant drawback in this procedure: the information usually provided by the SMEs to the technology centre, about production conditions and corresponding quality characteristics of the product, and (sometimes) the relevant expertise in the Knowledge Base of this centre may be inadequate to form a complete fault tree. Since such cases are quite frequent in practice, we have developed a methodology for transforming an incomplete fault tree into an Ishikawa diagram, which is more flexible and less strict in establishing causal chains, because it uses a surface phenomenological level with a limited number of categories of faults. On the other hand, such an Ishikawa diagram can be extended to simulate a fault tree as relevant knowledge increases. An implementation of this transformation, referring to anodization of aluminium, is presented.
A-Priori Rupture Models for Northern California Type-A Faults
Wills, Chris J.; Weldon, Ray J.; Field, Edward H.
2008-01-01
This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balance model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003), referred to here as WGCEP-2002, created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event, or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon, and Concord-Green Valley) be modeled as Type B faults to be consistent with similarly poorly-known faults statewide. As a result, the modified segmented models discussed here only concern the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive level of effort by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-priori models. We have modified the WGCEP-2002 models where necessary to match data that were not available or not used by that WGCEP and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This Appendix presents the minor changes needed to accommodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters and earthquake probabilities in the WG-2002 report.
A systematic risk management approach employed on the CloudSat project
NASA Technical Reports Server (NTRS)
Basilio, R. R.; Plourde, K. S.; Lam, T.
2000-01-01
The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.
Fault Tree Analysis: A Bibliography
NASA Technical Reports Server (NTRS)
2000-01-01
Fault tree analysis is a top-down approach to the identification of process hazards. It is one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis, risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
Graphical fault tree analysis for fatal falls in the construction industry.
Chi, Chia-Fen; Lin, Syuan-Zih; Dewi, Ratna Sari
2014-11-01
The current study applied a fault tree analysis to represent the causal relationships among events and causes that contributed to fatal falls in the construction industry. Four hundred and eleven work-related fatalities in the Taiwanese construction industry were analyzed in terms of age, gender, experience, falling site, falling height, company size, and the causes for each fatality. Given that most fatal accidents involve multiple events, the current study coded up to a maximum of three causes for each fall fatality. After the Boolean algebra and minimal cut set analyses, accident causes associated with each falling site can be presented as a fault tree to provide an overview of the basic causes, which could trigger fall fatalities in the construction industry. Graphical icons were designed for each falling site along with the associated accident causes to illustrate the fault tree in a graphical manner. A graphical fault tree can improve inter-disciplinary discussion of risk management and the communication of accident causation to first line supervisors. Copyright © 2014 Elsevier Ltd. All rights reserved.
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system and computer readable medium extends the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, and supports all common PRA analysis functions and cut sets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
Fault Tree Analysis for an Inspection Robot in a Nuclear Power Plant
NASA Astrophysics Data System (ADS)
Ferguson, Thomas A.; Lu, Lixuan
2017-09-01
The life extension of current nuclear reactors has led to an increasing demand for inspection and maintenance of critical reactor components that are too expensive to replace. To reduce the exposure dosage to workers, robotics has become an attractive alternative as a preventative safety tool in nuclear power plants. It is crucial to understand the reliability of these robots in order to increase the veracity and confidence of their results. This study applies Fault Tree (FT) analysis to a coolant outlet pipe snake-arm inspection robot in a nuclear power plant. Fault trees were constructed for a qualitative analysis to determine the reliability of the robot. Insight into the applicability of fault tree methods for inspection robotics in the nuclear industry is gained through this investigation.
Object-Oriented Algorithm For Evaluation Of Fault Trees
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1992-01-01
Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA method.
Fault-zone waves observed at the southern Joshua Tree earthquake rupture zone
Hough, S.E.; Ben-Zion, Y.; Leary, P.
1994-01-01
Waveform and spectral characteristics of several aftershocks of the M 6.1 22 April 1992 Joshua Tree earthquake recorded at stations just north of the Indio Hills in the Coachella Valley can be interpreted in terms of waves propagating within narrow, low-velocity, high-attenuation, vertical zones. Evidence for our interpretation consists of: (1) emergent P arrivals prior to and opposite in polarity to the impulsive direct phase; these arrivals can be modeled as headwaves indicative of a transfault velocity contrast; (2) spectral peaks in the S wave train that can be interpreted as internally reflected, low-velocity fault-zone wave energy; and (3) spatial selectivity of event-station pairs at which these data are observed, suggesting a long, narrow geologic structure. The observed waveforms are modeled using the analytical solution of Ben-Zion and Aki (1990) for a plane-parallel layered fault-zone structure. Synthetic waveform fits to the observed data indicate the presence of NS-trending vertical fault-zone layers characterized by a thickness of 50 to 100 m, a velocity decrease of 10 to 15% relative to the surrounding rock, and a P-wave quality factor in the range 25 to 50.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of the fault data. This is especially the case in the low to moderate seismicity regions of Europe, where slow slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow identifying the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and will also provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool will be illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. At the nodes of the logic tree, different options that could be considered at each step of the fault-related seismic hazard computation are included. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry/slip rate of faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e., minimum distance between faults) and a second one that relies on physically-based simulations. The following nodes represent, for each rupture scenario, different rupture forecast models (i.e., characteristic or Gutenberg-Richter) and, for a given rupture forecast, two probability models commonly used in seismic hazard assessment: Poissonian or time-dependent. The final node represents an exhaustive set of ground motion prediction equations chosen in order to be compatible with the region. Finally, the expected probability of exceeding a given ground motion level is computed at each site. Results will be discussed for a few specific localities of the West Corinth Gulf.
Locating hardware faults in a data communications network of a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-01-12
Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running the same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
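The link-isolation rule stated in this abstract can be sketched as a small traversal. The dictionary-based tree and the `run_test_suite` callback below are hypothetical placeholders used for illustration only; this is not the patented implementation.

```python
# Sketch of the parent/child test-tree rule: a parent is flagged when the
# test suite fails on its test tree but passes on every child test tree.

def find_suspect_parents(children, run_test_suite, root):
    """children: dict mapping each node to a list of its child nodes.
    run_test_suite(node): True if the suite passes on the subtree rooted
    at node (that node's 'test tree')."""
    suspects = []
    stack = [root]
    while stack:
        parent = stack.pop()
        kids = children.get(parent, [])
        stack.extend(kids)
        if not kids:
            continue
        if not run_test_suite(parent) and all(run_test_suite(c) for c in kids):
            suspects.append(parent)   # defective link below this parent
    return suspects
```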
Khan, F I; Abbasi, S A
2000-07-10
Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has traditionally been a powerful technique for identifying hazards in nuclear installations and the power industries. As the systematic articulation of the fault tree is associated with assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named AS-II), which makes the application of FTA simpler, quicker, and cheaper, thus opening up the possibility of its wider use in risk assessment in the chemical process industries. Based on this methodology we have developed a computer-automated tool. The details are presented in this paper.
Automated Generation of Fault Management Artifacts from a Simple System Model
NASA Technical Reports Server (NTRS)
Kennedy, Andrew K.; Day, John C.
2013-01-01
Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured into an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
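As a rough illustration of the artifact-generation idea, the sketch below walks a toy dictionary-based system representation (a stand-in for the SMAP SysML model, with invented component names and a hypothetical "controls" relationship) and emits FMEA rows to a CSV file.

```python
import csv

# Illustrative system representation: components, their failure modes, and
# the downstream element each one controls (placeholder, not a SysML model).
model = {
    "battery": {"controls": "power_bus",
                "failure_modes": ["open_circuit", "cell_short"]},
    "power_bus": {"controls": "flight_computer",
                  "failure_modes": ["undervoltage"]},
}

def effects_of(component):
    """Follow 'controls' relationships to list downstream effects."""
    effects, node = [], model[component].get("controls")
    while node:
        effects.append(node)
        node = model.get(node, {}).get("controls")
    return effects

with open("fmea.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Component", "Failure mode", "Downstream effects"])
    for comp, info in model.items():
        for mode in info["failure_modes"]:
            writer.writerow([comp, mode, " -> ".join(effects_of(comp))])
```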
Naghibi, Seyed Amir; Pourghasemi, Hamid Reza; Dixon, Barnali
2016-01-01
Groundwater is considered one of the most valuable fresh water resources. The main objective of this study was to produce groundwater spring potential maps in the Koohrang Watershed, Chaharmahal-e-Bakhtiari Province, Iran, using three machine learning models: boosted regression tree (BRT), classification and regression tree (CART), and random forest (RF). Thirteen hydrological-geological-physiographical (HGP) factors that influence locations of springs were considered in this research. These factors include slope degree, slope aspect, altitude, topographic wetness index (TWI), slope length (LS), plan curvature, profile curvature, distance to rivers, distance to faults, lithology, land use, drainage density, and fault density. Subsequently, groundwater spring potential was modeled and mapped using CART, RF, and BRT algorithms. The predicted results from the three models were validated using the receiver operating characteristics curve (ROC). From 864 springs identified, 605 (≈70 %) locations were used for the spring potential mapping, while the remaining 259 (≈30 %) springs were used for the model validation. The area under the curve (AUC) for the BRT model was calculated as 0.8103 and for CART and RF the AUC were 0.7870 and 0.7119, respectively. Therefore, it was concluded that the BRT model produced the best prediction results while predicting locations of springs followed by CART and RF models, respectively. Geospatially integrated BRT, CART, and RF methods proved to be useful in generating the spring potential map (SPM) with reasonable accuracy.
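A minimal sketch of this modeling workflow, assuming scikit-learn implementations of the three tree-based learners, randomly generated placeholder data, and an illustrative 70/30 split rather than the study's dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder data: rows = locations, columns = the 13 HGP factors,
# y = 1 where a spring is present.
rng = np.random.default_rng(0)
X = rng.normal(size=(864, 13))
y = rng.integers(0, 2, size=864)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

models = {
    "BRT": GradientBoostingClassifier(),   # boosted regression trees
    "CART": DecisionTreeClassifier(),      # classification and regression tree
    "RF": RandomForestClassifier(),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```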
Accurate reliability analysis method for quantum-dot cellular automata circuits
NASA Astrophysics Data System (ADS)
Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo
2015-10-01
Probabilistic transfer matrix (PTM) is a widely used model in the reliability research of circuits. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not completely conform to the mechanism of the novel field-coupled nanoelectronic device called quantum-dot cellular automata (QCA). It is difficult to get accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices according to different input signals. After that, the binary decision diagram (BDD) is used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly and the crucial components of a circuit can be found precisely based on the importance values (IVs) of components. Thus, this method contributes to the construction of reliable QCA circuits.
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Mustafa Sacit; none,; Flanagan, George F.
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
Uncertainty analysis in fault tree models with dependent basic events.
Pedroni, Nicola; Zio, Enrico
2013-06-01
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects on the TE probability of objective and epistemic dependences. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present). © 2012 Society for Risk Analysis.
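For the simplest two-event case, the Fréchet bounds used to envelope unknown objective dependence can be written down directly. The probabilities below are illustrative and do not come from the article's six-BE example tree.

```python
# Fréchet bounds on P(A and B) and P(A or B) when the dependence between
# two basic events is unknown (illustrative two-event example).

def and_bounds(pa, pb):
    lower = max(0.0, pa + pb - 1.0)
    upper = min(pa, pb)
    return lower, upper

def or_bounds(pa, pb):
    lower = max(pa, pb)
    upper = min(1.0, pa + pb)
    return lower, upper

pa, pb = 1e-3, 2e-3
print("P(A AND B) in", and_bounds(pa, pb))   # (0.0, 0.001)
print("P(A OR B)  in", or_bounds(pa, pb))    # (0.002, 0.003)
```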
Fault diagnosis of power transformer based on fault-tree analysis (FTA)
NASA Astrophysics Data System (ADS)
Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu
2017-05-01
Power transformers are important equipment in power plants and substations, and as a key link in power distribution and transmission they form an important hub of the power system. Their performance directly affects the quality, reliability, and stability of the power system. This paper first classifies power transformer faults into five categories by fault type and then divides transformer faults into three stages along the time dimension. Routine dissolved gas analysis (DGA) and infrared diagnostic criteria are used to determine the transformer's running state. Finally, according to the needs of power transformer fault diagnosis, a fault tree for the power transformer is constructed by stepwise refinement from the general to the specific.
Mines Systems Safety Improvement Using an Integrated Event Tree and Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Kumar, Ranjan; Ghosh, Achyuta Krishna
2017-04-01
Mines systems such as ventilation system, strata support system, flame proof safety equipment, are exposed to dynamic operational conditions such as stress, humidity, dust, temperature, etc., and safety improvement of such systems can be done preferably during planning and design stage. However, the existing safety analysis methods do not handle the accident initiation and progression of mine systems explicitly. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine systems design. This approach includes ET and FT modeling coupled with redundancy allocation technique. In this method, a concept of top hazard probability is introduced for identifying system failure probability and redundancy is allocated to the system either at component or system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal the accident scenarios and improve the safety of complex mine systems simultaneously.
Monitoring of Microseismicity with Array Techniques in the Peach Tree Valley Region
NASA Astrophysics Data System (ADS)
Garcia-Reyes, J. L.; Clayton, R. W.
2016-12-01
This study is focused on the analysis of microseismicity along the San Andreas Fault in the Peach Tree Valley region. This zone is part of the transition zone between the locked portion to the south (Parkfield, CA) and the creeping section to the north (Jovilet, et al., JGR, 2014). The data for the study comes from a 2-week deployment of 116 Zland nodes in a cross-shaped configuration along (8.2 km) and across (9 km) the Fault. We analyze the distribution of microseismicity using a 3D backprojection technique, and we explore the use of Hidden Markov Models to identify different patterns of microseismicity (Hammer et al., GJI, 2013). The goal of the study is to relate the style of seismicity to the mechanical state of the Fault. The results show the evolution of seismic activity as well as at least two different patterns of seismic signals.
Toward a Model-Based Approach for Flight System Fault Protection
NASA Technical Reports Server (NTRS)
Day, John; Meakin, Peter; Murray, Alex
2012-01-01
Use SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. Use the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the FP domain; this extends the UML/SysML languages to contain our FP concepts. Use UML/SysML, along with our profile, to capture FP concepts and relationships in the model. Generate typical FP engineering products (the FMECA, Fault Tree, MRD, V&V Matrices).
Fire safety in transit systems fault tree analysis
DOT National Transportation Integrated Search
1981-09-01
Fire safety countermeasures applicable to transit vehicles are identified and evaluated. This document contains fault trees which illustrate the sequences of events which may lead to a transit-fire related casualty. A description of the basis for the...
CARE3MENU- A CARE III USER FRIENDLY INTERFACE
NASA Technical Reports Server (NTRS)
Pierce, J. L.
1994-01-01
CARE3MENU generates an input file for the CARE III program. CARE III is used for reliability prediction of complex, redundant, fault-tolerant systems including digital computers, aircraft, nuclear and chemical control systems. The CARE III input file often becomes complicated and is not easily formatted with a text editor. CARE3MENU provides an easy, interactive method of creating an input file by automatically formatting a set of user-supplied inputs for the CARE III system. CARE3MENU provides detailed on-line help for most of its screen formats. The reliability model input process is divided into sections using menu-driven screen displays. Each stage, or set of identical modules comprising the model, must be identified and described in terms of number of modules, minimum number of modules for stage operation, and critical fault threshold. The fault handling and fault occurrence models are detailed in several screens by parameters such as transition rates, propagation and detection densities, Weibull or exponential characteristics, and model accuracy. The system fault tree and critical pairs fault tree screens are used to define the governing logic and to identify modules affected by component failures. Additional CARE3MENU screens prompt the user for output options and run time control values such as mission time and truncation values. There are fourteen major screens, many with default values and HELP options. The documentation includes: 1) a user's guide with several examples of CARE III models, the dialog required to input them to CARE3MENU, and the output files created; and 2) a maintenance manual for assistance in changing the HELP files and modifying any of the menu formats or contents. CARE3MENU is written in FORTRAN 77 for interactive execution and has been implemented on a DEC VAX series computer operating under VMS. This program was developed in 1985.
System Analysis by Mapping a Fault-tree into a Bayesian-network
NASA Astrophysics Data System (ADS)
Sheng, B.; Deng, C.; Wang, Y. H.; Tang, L. H.
2018-05-01
In view of the limitations of fault tree analysis in reliability assessment, Bayesian Network (BN) has been studied as an alternative technology. After a brief introduction to the method for mapping a Fault Tree (FT) into an equivalent BN, equations used to calculate the structure importance degree, the probability importance degree, and the critical importance degree are presented. Furthermore, the correctness of these equations is proved mathematically. Combining with an aircraft landing gear's FT, an equivalent BN is developed and analysed. The results show that richer and more accurate information has been achieved through the BN method than the FT, which demonstrates that the BN is a superior technique in both reliability assessment and fault diagnosis.
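A minimal sketch of the standard FT-to-BN mapping, in which each gate becomes a BN node with a deterministic conditional probability table while basic events keep their prior failure probabilities. This illustrates the mapping idea only and is not the paper's aircraft landing gear model.

```python
import itertools

def gate_cpt(gate, n_inputs):
    """Deterministic CPT for a fault-tree gate mapped to a BN node:
    P(output = 1 | inputs) is 0 or 1 depending on the gate logic."""
    cpt = {}
    for inputs in itertools.product([0, 1], repeat=n_inputs):
        if gate == "OR":
            out = int(any(inputs))
        elif gate == "AND":
            out = int(all(inputs))
        else:
            raise ValueError(gate)
        cpt[inputs] = {1: float(out), 0: 1.0 - float(out)}
    return cpt

# Basic-event nodes keep their prior failure probabilities; gate nodes get
# deterministic CPTs like the one printed below.
print(gate_cpt("OR", 2)[(0, 1)])   # {1: 1.0, 0: 0.0}
```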
Fault tree applications within the safety program of Idaho Nuclear Corporation
NASA Technical Reports Server (NTRS)
Vesely, W. E.
1971-01-01
Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure or accident causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before finding the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system that performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base, and then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
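One standard criterion for ordering such investigations, sketched below, is to test hypotheses in decreasing probability-to-cost ratio, which minimizes expected isolation cost when checks are independent. The hypothesis names, probabilities, and costs are invented, and this is not claimed to be FTDOTS's exact ranking rule.

```python
# Rank fault hypotheses by failure probability divided by isolation cost.
hypotheses = [
    # (name, failure probability, isolation cost in minutes)
    ("power supply", 0.10, 15.0),
    ("sensor board", 0.05, 5.0),
    ("cable harness", 0.02, 30.0),
]

ranked = sorted(hypotheses, key=lambda h: h[1] / h[2], reverse=True)
for name, p, cost in ranked:
    print(f"check {name:14s} p={p:.2f} cost={cost:4.1f} p/cost={p / cost:.4f}")
```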
Fault Tree Analysis as a Planning and Management Tool: A Case Study
ERIC Educational Resources Information Center
Witkin, Belle Ruth
1977-01-01
Fault Tree Analysis is an operations research technique used to analyse the most probable modes of failure in a system so that the system can be redesigned or monitored more closely in order to increase its likelihood of success. (Author)
NASA Astrophysics Data System (ADS)
Rodak, C. M.; McHugh, R.; Wei, X.
2016-12-01
The development and combination of horizontal drilling and hydraulic fracturing has unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating potential risks posed by directional hydraulic fracturing activities. These studies, balanced with potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantification of the overall risk posed by hydraulic fracturing at the system level is rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is approached with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity- and quality-based impacts and subdivided based on the stage of the hydraulic fracturing process in which the data are relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The structure of the fault tree described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic. This project shows the utility of fault tree analysis for the visual communication of the potential risks of hydraulic fracturing activities to groundwater resources.
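A minimal sketch of how independent basic-event probabilities combine through OR and AND gates into a single probability-of-failure equation. The event names and numbers are illustrative assumptions, not values from the study.

```python
# Combine basic-event probabilities through fault-tree gates, assuming
# independent events (illustrative numbers only).

def p_or(*ps):   # at least one child event occurs
    prob = 1.0
    for p in ps:
        prob *= (1.0 - p)
    return 1.0 - prob

def p_and(*ps):  # all child events occur
    prob = 1.0
    for p in ps:
        prob *= p
    return prob

# Example top event: an on-site spill that reaches the aquifer, OR a
# well-integrity failure.
p_spill_impact = p_and(0.02, 0.10)     # spill occurs AND reaches aquifer
p_well_failure = 0.005
p_failure = p_or(p_spill_impact, p_well_failure)
print(f"annual probability of failure = {p_failure:.4f}")
```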
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario
2015-04-01
The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components, exploiting the physical knowledge of the SOFC system as a whole. This phase consists of the Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at the system component level. The main outcome of the FTA is an inferential isolation tool (Fault Signature Matrix - FSM), which univocally links the faults to the symptoms detected during system monitoring. In this work the FTA is considered as a starting point to develop an improved FSM. Making use of a model-based investigation, a fault-to-symptoms dependency study is performed. To this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated, one for the stack and four occurring at the balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
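A small sketch of how a binary fault signature matrix can support inferential isolation by matching an observed symptom vector against fault signature columns. The faults, symptoms, and matrix entries are placeholders rather than the SOFC system's actual FSM.

```python
import numpy as np

# Rows = symptoms, columns = faults; 1 means "this fault produces this
# symptom". Placeholder entries for illustration only.
fsm = np.array([
    #  F1 F2 F3 F4 F5
    [1, 0, 1, 0, 0],   # S1: stack temperature high
    [0, 1, 1, 0, 1],   # S2: air flow low
    [1, 0, 0, 1, 0],   # S3: stack voltage drop
])
faults = ["F1", "F2", "F3", "F4", "F5"]

observed = np.array([1, 0, 1])  # symptom vector from monitoring thresholds

# A fault is isolated when its signature column matches the observation.
candidates = [f for f, col in zip(faults, fsm.T) if np.array_equal(col, observed)]
print("isolated fault(s):", candidates)
```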
Fault Tree Analysis: An Emerging Methodology for Instructional Science.
ERIC Educational Resources Information Center
Wood, R. Kent; And Others
1979-01-01
Describes Fault Tree Analysis, a tool for systems analysis which attempts to identify possible modes of failure in systems to increase the probability of success. The article defines the technique and presents the steps of FTA construction, focusing on its application to education. (RAO)
A fault tree model to assess probability of contaminant discharge from shipwrecks.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I
2014-11-15
Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Program listing for fault tree analysis of JPL technical report 32-1542
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
The computer program listing for the MAIN program and those subroutines unique to the fault tree analysis is described. Some subroutines are used for analyzing the reliability block diagram. The program is written in FORTRAN 5 and runs on a UNIVAC 1108.
Chen, Gang; Song, Yongduan; Lewis, Frank L
2016-05-03
This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve the coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating for the actuator bias fault, the partial loss of effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance simultaneously. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot-arms are conducted and the results confirm the benefits and the effectiveness of the proposed distributed fault-tolerant control algorithms.
Direct evaluation of fault trees using object-oriented programming techniques
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1989-01-01
Object-oriented programming techniques are used in an algorithm for the direct evaluation of fault trees. The algorithm combines a simple bottom-up procedure for trees without repeated events with a top-down recursive procedure for trees with repeated events. The object-oriented approach results in a dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. An example is presented in which the results of the algorithm implemented with conventional techniques are compared to those of the object-oriented approach.
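A compact sketch of one way to evaluate a tree containing repeated events: condition (Shannon expansion) on the shared basic events and use simple bottom-up independence formulas elsewhere. This illustrates the problem the paper addresses; it is not the object-oriented algorithm itself.

```python
# tree nodes: ("AND"|"OR", [children]) or ("BASIC", name);
# probs maps basic-event names to failure probabilities.

def evaluate(tree, probs):
    def basics(node, acc):
        if node[0] == "BASIC":
            acc.append(node[1])
        else:
            for child in node[1]:
                basics(child, acc)
        return acc

    names = basics(tree, [])
    repeated = sorted({n for n in names if names.count(n) > 1})

    def bottom_up(node, fixed):
        kind, payload = node
        if kind == "BASIC":
            return fixed.get(payload, probs[payload])
        child_ps = [bottom_up(c, fixed) for c in payload]
        if kind == "AND":
            p = 1.0
            for cp in child_ps:
                p *= cp
            return p
        q = 1.0                      # OR gate
        for cp in child_ps:
            q *= (1.0 - cp)
        return 1.0 - q

    def condition(i, fixed):
        if i == len(repeated):
            return bottom_up(tree, fixed)
        name, p = repeated[i], probs[repeated[i]]
        return (p * condition(i + 1, {**fixed, name: 1.0})
                + (1.0 - p) * condition(i + 1, {**fixed, name: 0.0}))

    return condition(0, {})

# Basic event "b" is repeated in both branches of the OR gate.
tree = ("OR", [("AND", [("BASIC", "a"), ("BASIC", "b")]),
               ("AND", [("BASIC", "b"), ("BASIC", "c")])])
print(evaluate(tree, {"a": 0.1, "b": 0.2, "c": 0.3}))   # 0.074
```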
MacDonald III, Angus W; Zick, Jennifer L; Chafee, Matthew V; Netoff, Theoden I
2015-01-01
The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time, the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect of psychiatry's syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between-diagnosis comorbidity.
FAULT TREE ANALYSIS FOR EXPOSURE TO REFRIGERANTS USED FOR AUTOMOTIVE AIR CONDITIONING IN THE U.S.
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servic...
A Fault Tree Approach to Analysis of Organizational Communication Systems.
ERIC Educational Resources Information Center
Witkin, Belle Ruth; Stephens, Kent G.
Fault Tree Analysis (FTA) is a method of examining communication in an organization by focusing on: (1) the complex interrelationships in human systems, particularly in communication systems; (2) interactions across subsystems and system boundaries; and (3) the need to select and "prioritize" channels which will eliminate noise in the…
Applying fault tree analysis to the prevention of wrong-site surgery.
Abecassis, Zachary A; McElroy, Lisa M; Patel, Ronak M; Khorzad, Rebeca; Carroll, Charles; Mehrotra, Sanjay
2015-01-01
Wrong-site surgery (WSS) is a rare event that occurs to hundreds of patients each year. Despite national implementation of the Universal Protocol over the past decade, development of effective interventions remains a challenge. We performed a systematic review of the literature reporting root causes of WSS and used the results to perform a fault tree analysis to assess the reliability of the system in preventing WSS and identifying high-priority targets for interventions aimed at reducing WSS. Process components where a single error could result in WSS were labeled with OR gates; process aspects reinforced by verification were labeled with AND gates. The overall redundancy of the system was evaluated based on prevalence of AND gates and OR gates. In total, 37 studies described risk factors for WSS. The fault tree contains 35 faults, most of which fall into five main categories. Despite the Universal Protocol mandating patient verification, surgical site signing, and a brief time-out, a large proportion of the process relies on human transcription and verification. Fault tree analysis provides a standardized perspective of errors or faults within the system of surgical scheduling and site confirmation. It can be adapted by institutions or specialties to lead to more targeted interventions to increase redundancy and reliability within the preoperative process. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gülerce, Zeynep; Buğra Soyman, Kadir; Güner, Barış; Kaymakci, Nuretdin
2017-12-01
This contribution provides an updated planar seismic source characterization (SSC) model to be used in the probabilistic seismic hazard assessment (PSHA) for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ) that are critical for the PSHA of Istanbul: segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, central Marmara, and Ganos/Saros segments. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested based on the observed seismicity that is associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b value, maximum magnitude, slip rate, weights of the rupture scenarios) is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect of earthquakes that are not associated with the defined rupture systems on the hazard, a background zone is introduced and the seismicity rates in the background zone are calculated using smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.
Modeling Off-Nominal Behavior in SysML
NASA Technical Reports Server (NTRS)
Day, John C.; Donahue, Kenneth; Ingham, Michel; Kadesch, Alex; Kennedy, Andrew K.; Post, Ethan
2012-01-01
Specification and development of fault management functionality in systems is performed in an ad hoc way - more of an art than a science. Improvements to system reliability, availability, safety and resilience will be limited without infusion of additional formality into the practice of fault management. Key to the formalization of fault management is a precise representation of off-nominal behavior. Using the upcoming Soil Moisture Active-Passive (SMAP) mission for source material, we have modeled the off-nominal behavior of the SMAP system during its initial spin-up activity, using the System Modeling Language (SysML). In the course of developing these models, we have developed generic patterns for capturing off-nominal behavior in SysML. We show how these patterns provide useful ways of reasoning about the system (e.g., checking for completeness and effectiveness) and allow the automatic generation of typical artifacts (e.g., success trees and FMECAs) used in system analyses.
A Fault Tree Approach to Needs Assessment -- An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
A "failsafe" technology is presented based on a new unified theory of needs assessment. Basically the paper discusses fault tree analysis as a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur and then suggesting high priority avoidance strategies for those…
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu
2016-05-01
Recently, China has frequently experienced large-scale, severe, and persistent haze pollution due to surging urbanization and industrialization and rapid growth in the number of motor vehicles and in energy consumption. Vehicle emissions due to the consumption of large quantities of fossil fuels are undoubtedly a critical factor in the haze pollution. This work focuses on the causation mechanism of haze pollution related to vehicle emissions for Guangzhou city by employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system of "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are discussed and identified by using this deductive FTA method. The qualitative and quantitative assessments of the fault tree system are carried out based on the structure, probability, and critical importance degree analysis of the risk factors. The study may provide a new, simple, and effective tool/strategy for the causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.
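The probability importance and critical importance degree measures mentioned above have standard definitions, sketched below on an invented two-gate example rather than the paper's haze fault tree.

```python
# Standard importance measures for basic events of a small fault tree
# (illustrative OR-of-AND top event, invented event names and probabilities).

def top_probability(p):
    # Top event: (traffic_jam AND high_emission_fleet) OR stagnant_air
    return 1.0 - (1.0 - p["traffic"] * p["fleet"]) * (1.0 - p["stagnant"])

p = {"traffic": 0.3, "fleet": 0.4, "stagnant": 0.05}

def probability_importance(event):
    # Birnbaum measure: dP(top)/dp_i, via the difference of conditionals.
    hi = dict(p, **{event: 1.0})
    lo = dict(p, **{event: 0.0})
    return top_probability(hi) - top_probability(lo)

def critical_importance(event):
    # Fraction of the top-event probability attributable to event i.
    return probability_importance(event) * p[event] / top_probability(p)

for e in p:
    print(e, round(probability_importance(e), 4), round(critical_importance(e), 4))
```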
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
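As a rough illustration of the Markov-model approach that PAWS and STEM automate, the sketch below solves a tiny three-state fault/recovery model with SciPy's matrix exponential. The states and rates are invented, and this is not the PAWS or STEM solution method itself.

```python
import numpy as np
from scipy.linalg import expm

# Tiny 3-state Markov reliability model: 2 processors up, 1 up, system failed.
# Note the disparity between failure and recovery rates (source of stiffness).
lam, mu = 1e-4, 1e-1             # per hour: failure rate, recovery rate
Q = np.array([                   # generator matrix; rows sum to zero
    [-2 * lam,        2 * lam,   0.0],
    [      mu, -(mu + lam),      lam],
    [     0.0,          0.0,     0.0],   # failure state is absorbing
])

p0 = np.array([1.0, 0.0, 0.0])   # start with both processors operational
t = 10.0                         # mission time in hours
p_t = p0 @ expm(Q * t)           # state probabilities at time t
print(f"P(system failure by t={t} h) = {p_t[2]:.3e}")
```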
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
NASA Astrophysics Data System (ADS)
Sanchez-Vila, X.; de Barros, F.; Bolster, D.; Nowak, W.
2010-12-01
Assessing the potential risk of hydro(geo)logical supply systems to human population is an interdisciplinary field. It relies on expertise in fields as distant as hydrogeology, medicine, and anthropology, and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties in hydrological, physiological, and human behavioral parameters. We propose the use of fault trees to address the task of probabilistic risk analysis (PRA) and to support related management decisions. Fault trees allow decomposing the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural "Divide and Conquer" approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance, and stage of analysis. The separation into modules allows for a true inter- and multi-disciplinary approach. This presentation highlights the three novel features of our work: (1) we define failure in terms of risk being above a threshold value, whereas previous studies used auxiliary events such as exceedance of critical concentration levels; (2) we plot an integrated fault tree that handles uncertainty in both hydrological and health components in a unified way; and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
A method of real-time fault diagnosis for power transformers based on vibration analysis
NASA Astrophysics Data System (ADS)
Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie
2015-11-01
In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership to each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.
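A minimal sketch of the probabilistic classification step, assuming scikit-learn's SVC with probability calibration enabled and randomly generated placeholder features standing in for real vibration data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder vibration features (e.g. band energies extracted from a
# tank-wall accelerometer signal) and invented health-state labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 3, size=200)   # 0 = normal, 1 = winding fault, 2 = core fault

clf = make_pipeline(StandardScaler(), SVC(probability=True))
clf.fit(X, y)

sample = rng.normal(size=(1, 8))
posterior = clf.predict_proba(sample)[0]
print({f"class_{c}": round(p, 3) for c, p in zip(clf.classes_, posterior)})
```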
An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution
NASA Astrophysics Data System (ADS)
Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan
2013-04-01
The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies including smoothed seismicity approaches. Smoothed seismicity thus represents an alternative concept to express the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subductions. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: The first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density). The second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude assuming that (1) the occurrence of past seismicity is a good proxy to forecast occurrence of future seismicity and (2) future large-magnitude events occur more likely in the vicinity of known faults. Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts and provides rates of events in the magnitude range of 5 <= m <= 8.5 for the entire region of interest and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
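The truncated Gutenberg-Richter rate model referred to above can be sketched directly; the a-value, b-value, and magnitude bounds below are illustrative assumptions, not the SHARE estimates.

```python
import numpy as np

# Annual rate of events with magnitude >= m under a truncated
# Gutenberg-Richter distribution between m_min and m_max.
a, b = 4.0, 1.0
m_min, m_max = 5.0, 8.5
beta = b * np.log(10.0)

def rate_ge(m):
    n_min = 10 ** (a - b * m_min)          # annual rate of m >= m_min
    num = np.exp(-beta * (m - m_min)) - np.exp(-beta * (m_max - m_min))
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return n_min * np.clip(num / den, 0.0, 1.0)

for m in (5.0, 6.0, 7.0, 8.0):
    print(f"m >= {m}: {rate_ge(m):.4f} events/yr")
```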
NASA Technical Reports Server (NTRS)
Patterson, Jonathan D.; Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) being presented by Dr. Stephen B. Johnson, described in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper will describe the core framework used to implement the GFTbased systems engineering process using the Systems Modeling Language (SysML). These two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described within the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required in the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals and requirements, functions, and its linkage to design. As a growing standard for systems engineering, it is important to develop methods to implement GFT in SysML. Proposed Method of Solution: Many of the central concepts of the SysML language are needed to implement a GFT for large complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.
AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment
2014-10-01
Only table-of-contents and keyword fragments of this report are recoverable: mapping of AADL fault models to the OpenFTA format and to a generic XML format, AADL-to-FTA mapping rules, and the ARP4761 safety assessment activities of Preliminary System Safety Assessment (PSSA), System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs).
FTC - THE FAULT-TREE COMPILER (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) to a user-specified number of digits of accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree, such as a component failure rate or a specific event probability, by allowing the user to vary one failure rate or failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs comprises: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI-compliant C, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
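The five gate types listed in the abstract can be illustrated with a minimal sketch that assumes statistically independent basic events; FTC's actual solution technique and error-bound computation are not reproduced here.

```python
from itertools import combinations
from math import prod

def p_and(ps):           # all inputs fail
    return prod(ps)

def p_or(ps):            # at least one input fails
    return 1.0 - prod(1.0 - p for p in ps)

def p_xor(p1, p2):       # exactly one of two inputs fails
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)

def p_invert(p):         # complement of the input
    return 1.0 - p

def p_m_of_n(ps, m):     # at least m of the n inputs fail
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for idx in combinations(range(n), k):
            total += prod(ps[i] if i in idx else 1.0 - ps[i] for i in range(n))
    return total

# Example: OR of an AND gate and a 2-of-3 gate over hypothetical basic events.
top = p_or([p_and([1e-3, 2e-3]), p_m_of_n([1e-2, 5e-3, 2e-2], 2)])
print(f"top-event probability = {top:.3e}")
```

Note that real fault trees with repeated (shared) basic events violate the independence assumption used here, which is one reason dedicated tools such as FTC derive rigorous error bounds instead of multiplying probabilities naively.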
FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) to a user-specified number of digits of accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree, such as a component failure rate or a specific event probability, by allowing the user to vary one failure rate or failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs comprises: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI-compliant C, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…
The engine fuel system fault analysis
NASA Astrophysics Data System (ADS)
Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei
2017-05-01
To improve the reliability of the engine fuel system, the typical fault factors of the engine fuel system were analyzed from the point of view of structure and function. The fault characteristics were obtained by building the fuel system fault tree. By applying the failure mode and effects analysis (FMEA) method, several factors of the key component, the fuel regulator, were obtained, including the fault modes, the fault causes, and the fault influences. All of this lays the foundation for the subsequent development of a fault diagnosis system.
NASA Astrophysics Data System (ADS)
Guan, Yifeng; Zhao, Jie; Shi, Tengfei; Zhu, Peipei
2016-09-01
In recent years, China's increased interest in environmental protection has led to a promotion of energy-efficient dual fuel (diesel/natural gas) ships on Chinese inland rivers. Natural gas as a ship fuel poses dangers of fire and explosion if a gas leak occurs. If explosions or fires occur in the engine rooms of a ship, heavy damage and losses will be incurred. In this paper, a fault tree model is presented that considers both fires and explosions in a dual fuel ship; in this model, fire and explosion in the dual fuel engine rooms constitute the top events. All the basic events along with the minimal cut sets are obtained through the analysis. The primary factors that affect accidents involving fires and explosions are determined by calculating the degree of structural importance of the basic events. According to these results, corresponding measures are proposed to ensure and improve the safety and reliability of Chinese inland dual fuel ships.
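A minimal sketch of how the degree of structural importance of a basic event can be computed from minimal cut sets is shown below; the cut sets and event names are hypothetical, not those of the paper's engine-room fault tree.

```python
from itertools import product

# Hypothetical minimal cut sets for a small engine-room fault tree; each set
# lists the basic events whose joint occurrence triggers the top event.
cut_sets = [{"gas_leak", "ignition_source"},
            {"valve_failure", "ignition_source"},
            {"ventilation_failure", "gas_leak"}]
events = sorted(set().union(*cut_sets))

def top_occurs(state):
    """Structure function: top event occurs if any minimal cut set is fully failed."""
    return any(all(state[e] for e in cs) for cs in cut_sets)

def structural_importance(event):
    """Fraction of states of the other events in which `event` is critical."""
    others = [e for e in events if e != event]
    critical = 0
    for bits in product([False, True], repeat=len(others)):
        state = dict(zip(others, bits))
        state[event] = True
        with_e = top_occurs(state)
        state[event] = False
        without_e = top_occurs(state)
        critical += (with_e and not without_e)
    return critical / 2 ** len(others)

for e in events:
    print(f"{e:22s} structural importance = {structural_importance(e):.3f}")
```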
Fault tree analysis: NiH2 aerospace cells for LEO mission
NASA Technical Reports Server (NTRS)
Klein, Glenn C.; Rash, Donald E., Jr.
1992-01-01
The Fault Tree Analysis (FTA) is one of several reliability analyses or assessments applied to battery cells to be utilized in typical Electric Power Subsystems for spacecraft in low Earth orbit missions. FTA is generally the process of reviewing and analytically examining a system or equipment in such a way as to emphasize the lower level fault occurrences which directly or indirectly contribute to the major fault or top level event. This qualitative FTA addresses the potential of occurrence for five specific top level events: hydrogen leakage through either discrete leakage paths or through pressure vessel rupture; and four distinct modes of performance degradation - high charge voltage, suppressed discharge voltage, loss of capacity, and high pressure.
Towards generating ECSS-compliant fault tree analysis results via ConcertoFLA
NASA Astrophysics Data System (ADS)
Gallina, B.; Haider, Z.; Carlsson, A.
2018-05-01
Attitude Control Systems (ACSs) maintain the orientation of the satellite in three-dimensional space. ACSs need to be engineered in compliance with ECSS standards and need to ensure a certain degree of dependability. Thus, dependability analysis is conducted at various levels and by using ECSS-compliant techniques. Fault Tree Analysis (FTA) is one of these techniques. FTA is being automated within various Model Driven Engineering (MDE)-based methodologies. The tool-supported CHESS methodology is one of them. This methodology incorporates ConcertoFLA, a dependability analysis technique enabling failure behavior analysis and thus FTA-results generation. ConcertoFLA, however, similarly to other techniques, still belongs to the academic research niche. To promote this technique within the space industry, we apply it to an ACS and discuss its multi-faceted potential in the context of ECSS-compliant engineering.
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach intends to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system usage failure and to quantify the impact of critical component failures or critical risk events in the implementation process.
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
Cost-effectiveness analysis of risk-reduction measures to reach water safety targets.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof; Pettersson, Thomas J R
2011-01-01
Identifying the most suitable risk-reduction measures in drinking water systems requires a thorough analysis of possible alternatives. In addition to the effects on the risk level, the economic aspects of the risk-reduction alternatives are also commonly considered important. Drinking water supplies are complex systems, and to avoid sub-optimisation of risk-reduction measures, the entire system from source to tap needs to be considered. There is a lack of methods for quantification of water supply risk reduction in an economic context for entire drinking water systems. The aim of this paper is to present a novel approach for risk assessment in combination with economic analysis to evaluate risk-reduction measures based on a source-to-tap approach. The approach combines a probabilistic and dynamic fault tree method with cost-effectiveness analysis (CEA). The developed approach comprises the following main parts: (1) quantification of risk reduction of alternatives using a probabilistic fault tree model of the entire system; (2) combination of the modelling results with CEA; and (3) evaluation of the alternatives with respect to the risk reduction, the probability of not reaching water safety targets and the cost-effectiveness. The fault tree method and CEA enable comparison of risk-reduction measures in the same quantitative unit and consider costs and uncertainties. The approach provides a structured and thorough analysis of risk-reduction measures that facilitates transparency and long-term planning of drinking water systems in order to avoid sub-optimisation of available resources for risk reduction. Copyright © 2010 Elsevier Ltd. All rights reserved.
MacDonald III, Angus W.; Zick, Jennifer L.; Chafee, Matthew V.; Netoff, Theoden I.
2016-01-01
The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time, the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect of psychiatry's syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between-diagnosis co-morbidity. PMID:26779007
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as failure rate, unavailability and mean time to failure (MTTF). Based on the fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
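A small sketch of the kind of reliability parameters mentioned above (failure rate, unavailability, MTTF) under a constant-failure-rate model; the block names, rates, and repair time are hypothetical placeholders, not the measured Zynq-7010 values.

```python
import math

# Hypothetical per-hour soft-error (failure) rates for SoC blocks; not measured values.
failure_rates = {"on_chip_memory": 2.0e-6,
                 "processing_system": 5.0e-7,
                 "programmable_logic": 1.2e-6}

mission_time_h = 10_000.0
repair_rate = 1.0 / 24.0        # assumed mean repair time of 24 hours

for block, lam in failure_rates.items():
    mttf = 1.0 / lam                                   # mean time to failure
    unavailability = lam / (lam + repair_rate)         # steady-state, repairable model
    p_fail_mission = 1.0 - math.exp(-lam * mission_time_h)
    print(f"{block:20s} MTTF={mttf:.3g} h  U={unavailability:.3g}  "
          f"P(fail in {mission_time_h:.0f} h)={p_fail_mission:.3g}")

# Series combination: the system fails if any block fails.
lam_system = sum(failure_rates.values())
print(f"system failure rate = {lam_system:.3g} per hour, MTTF = {1 / lam_system:.3g} h")
```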
Learning from examples - Generation and evaluation of decision trees for software resource analysis
NASA Technical Reports Server (NTRS)
Selby, Richard W.; Porter, Adam A.
1988-01-01
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)
1995-01-01
A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched VQ. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.
Goal-Function Tree Modeling for Systems Engineering and Fault Management
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Breckenridge, Jonathan T.
2013-01-01
This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT and its ability to represent both nominal and off-nominal goals enable the GFT to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and the definition of system failure scenarios.
Qualitative Importance Measures of Systems Components - A New Approach and Its Applications
NASA Astrophysics Data System (ADS)
Chybowski, Leszek; Gawdzińska, Katarzyna; Wiśnicki, Bogusz
2016-12-01
The paper presents an improved methodology for analysing the qualitative importance of components in the functional and reliability structures of a system. We present basic importance measures, i.e. Birnbaum's structural measure, the order of the smallest minimal cut set, the repetition count of an i-th event in the fault tree, and the streams measure. A subsystem of circulation pumps and fuel heaters in the main engine fuel supply system of a container vessel illustrates the qualitative importance analysis. We constructed a functional model and a fault tree which we analysed using qualitative measures. Additionally, we compared the calculated measures and introduced corrected measures as a tool for improving the analysis. We proposed scaled measures and a common measure taking into account the location of the component in the reliability and functional structures. Finally, we proposed an area where the measures could be applied.
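Two of the four qualitative measures named above, the order of the smallest minimal cut set containing an event and its repetition count, can be computed directly from a cut-set list, as in the sketch below; the cut sets and component names are hypothetical, not those of the container-vessel fuel subsystem.

```python
# Hypothetical minimal cut sets for a fuel-supply subsystem; labels are illustrative only.
cut_sets = [{"pump_A", "pump_B"},
            {"heater_1", "heater_2"},
            {"pump_A", "heater_1", "filter"},
            {"control_power"}]
events = sorted(set().union(*cut_sets))

for e in events:
    containing = [cs for cs in cut_sets if e in cs]
    smallest_order = min(len(cs) for cs in containing)   # order of smallest cut set
    repetition = len(containing)                          # repetition count in the tree
    print(f"{e:14s} smallest-cut-set order = {smallest_order}, repetitions = {repetition}")
```

Lower cut-set order and higher repetition both point to components whose failure contributes to the top event through many, or short, paths, which is the intuition these qualitative measures formalize.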
NASA Technical Reports Server (NTRS)
Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
This paper describes the core framework used to implement a Goal-Function Tree (GFT) based systems engineering process using the Systems Modeling Language. It defines a set of principles built upon the theoretical approach described in the InfoTech 2013 ISHM paper titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management" presented by Dr. Stephen B. Johnson. Using the SysML language, the principles in this paper describe the expansion of the SysML language as a baseline in order to: hierarchically describe a system, describe that system functionally within success space, and allocate detection mechanisms to success functions for system protection.
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Haeussler, P. J.; Seitz, G. G.; Dawson, T. E.; Stenner, H. D.; Matmon, A.; Crone, A. J.; Personius, S.; Burns, P. B.; Cadena, A.; Thoms, E.
2005-12-01
Developing accurate rupture histories of long, high-slip-rate strike-slip faults is especially challenging where recurrence is relatively short (hundreds of years), adjacent segments may fail within decades of each other, and uncertainties in dating can be as large as, or larger than, the time between events. The Denali Fault system (DFS) is the major active structure of interior Alaska, but had received little study since pioneering fault investigations in the early 1970s. Until the summer of 2003 essentially no data existed on the timing or spatial distribution of past ruptures on the DFS. This changed with the occurrence of the M7.9 2002 Denali fault earthquake, which has been a catalyst for present paleoseismic investigations. It provided a well-constrained rupture length and slip distribution. Strike-slip faulting occurred along 290 km of the Denali and Totschunda faults, leaving unruptured ~140 km of the eastern Denali fault, ~180 km of the western Denali fault, and ~70 km of the eastern Totschunda fault. The DFS presents us with a blank canvas on which to fill a chronology of past earthquakes using modern paleoseismic techniques. Aware of correlation issues with potentially closely timed earthquakes, we have a) investigated 11 paleoseismic sites that allow a variety of dating techniques, b) measured paleo offsets, which provide insight into the magnitude and rupture length of past events, at 18 locations, and c) developed late Pleistocene and Holocene slip rates using exposure age dating to constrain long-term fault behavior models. We are in the process of: 1) radiocarbon-dating peats involved in faulting and liquefaction, and especially short-lived forest floor vegetation that includes outer rings of trees, spruce needles, and blueberry leaves killed and buried during paleoearthquakes; 2) supporting development of a 700-900 year tree-ring time series for precise dating of trees used in event timing; 3) employing Pb-210 for constraining the youngest ruptures in sag ponds on the eastern and western Denali fault; and 4) using volcanic ashes in trenches for dating and correlation. Initial results are: 1) Large earthquakes occurred along the 2002 rupture section 350-700 yrb02 (2-sigma, calendar-corrected, years before 2002) with offsets about the same as in 2002. The Denali penultimate rupture appears younger (350-570 yrb02) than the Totschunda (580-700 yrb02); 2) The western Denali fault is geomorphically fresh, its MRE likely occurred within the past 250 years, the penultimate event occurred 570-680 yrb02, and slip in each event was 4 m; 3) The eastern Denali MRE post-dates peat dated at 550-680 yrb02, is younger than the penultimate Totschunda event, and could be part of the penultimate Denali fault rupture or a separate earthquake; 4) A 120-km section of the Denali fault between the Nenana glacier and the Delta River may be a zone of overlap for large events and/or capable of producing smaller earthquakes; its western part has fresh scarps with small (1 m) offsets. 2004/2005 field observations show there are longer datable records, with 4-5 events recorded in trenches on the eastern Denali fault and the west end of the 2002 rupture, 2-3 events on the western part of the fault in Denali National Park, and 3-4 events on the Totschunda fault. These and extensive datable material provide the basis to define the paleoseismic history of DFS earthquake ruptures through multiple and complete earthquake cycles.
Li, Jia; Wang, Deming; Huang, Zonghou
2017-01-01
Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents. PMID:28793348
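A minimal sketch of fuzzy gate arithmetic with trapezoidal fuzzy probabilities, using the common component-wise product approximation; the event names, fuzzy numbers, and the simple defuzzification are illustrative assumptions, not the paper's data or exact procedure.

```python
from math import prod

# Trapezoidal fuzzy probability (a, b, c, d) with a <= b <= c <= d, values in [0, 1].
# The numbers below are hypothetical expert estimates, not data from the paper.
basic_events = {
    "dust_cloud":      (0.02, 0.04, 0.06, 0.08),
    "ignition_source": (0.01, 0.02, 0.03, 0.05),
    "confinement":     (0.30, 0.40, 0.50, 0.60),
}

def fuzzy_and(numbers):
    """Component-wise product approximation of the fuzzy AND gate."""
    return tuple(prod(n[i] for n in numbers) for i in range(4))

def fuzzy_or(numbers):
    """Component-wise 1 - prod(1 - p) approximation of the fuzzy OR gate."""
    return tuple(1.0 - prod(1.0 - n[i] for n in numbers) for i in range(4))

def defuzzify(t):
    """Simple mean-of-parameters defuzzification, used here only for illustration."""
    return sum(t) / 4.0

# Hypothetical logic: explosion requires dust cloud AND ignition AND confinement.
top = fuzzy_and(list(basic_events.values()))
print("fuzzy top-event probability:", tuple(round(x, 6) for x in top))
print("crisp (defuzzified) value  :", round(defuzzify(top), 6))
```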
Shi, Lei; Shuai, Jian; Xu, Kui
2014-08-15
Fire and explosion accidents of steel oil storage tanks (FEASOST) occur occasionally during petroleum and chemical industry production and storage processes and often have a devastating impact on lives, the environment and property. To contribute towards the development of a quantitative approach for assessing the occurrence probability of FEASOST, a fault tree of FEASOST is constructed that identifies various potential causes. Traditional fault tree analysis (FTA) can achieve quantitative evaluation if the failure data of all of the basic events (BEs) are available, which is almost impossible due to the lack of detailed data, as well as other uncertainties. This paper makes an attempt to perform FTA of FEASOST through a hybrid application of an expert-elicitation-based improved analytic hierarchy process (AHP) and fuzzy set theory, and the occurrence possibility of FEASOST is estimated for an oil depot in China. A comparison between statistical data and calculated data using fuzzy fault tree analysis (FFTA) based on traditional and improved AHP is also made. Sensitivity and importance analysis has been performed to identify the most crucial BEs leading to FEASOST, which will provide insights into how managers should focus effective mitigation. Copyright © 2014 Elsevier B.V. All rights reserved.
Wang, Hetang; Li, Jia; Wang, Deming; Huang, Zonghou
2017-01-01
Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.
NASA Technical Reports Server (NTRS)
Braden, W. B.
1992-01-01
This talk discusses the importance of providing a process operator with concise information about a process fault including a root cause diagnosis of the problem, a suggested best action for correcting the fault, and prioritization of the problem set. A decision tree approach is used to illustrate one type of approach for determining the root cause of a problem. Fault detection in several different types of scenarios is addressed, including pump malfunctions and pipeline leaks. The talk stresses the need for a good data rectification strategy and good process models along with a method for presenting the findings to the process operator in a focused and understandable way. A real time expert system is discussed as an effective tool to help provide operators with this type of information. The use of expert systems in the analysis of actual versus predicted results from neural networks and other types of process models is discussed.
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of the wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator's mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been used in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the developed FTA model in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof
2009-04-01
Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
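A minimal sketch of propagating uncertain basic-event probabilities through a small fault tree with Monte Carlo sampling; the events, probability ranges, and tree logic are hypothetical and far simpler than the source-to-tap model described above.

```python
import random

random.seed(1)

# Hypothetical uncertain annual failure probabilities, each given as a (low, high)
# range and sampled uniformly; a real study would use fitted distributions.
uncertain_events = {
    "source_contamination": (0.01, 0.05),
    "treatment_failure":    (0.02, 0.08),
    "distribution_break":   (0.05, 0.15),
    "pump_station_outage":  (0.01, 0.04),
}

def top_probability(p):
    """Quality failure = source OR treatment; quantity failure = break OR outage."""
    quality = 1 - (1 - p["source_contamination"]) * (1 - p["treatment_failure"])
    quantity = 1 - (1 - p["distribution_break"]) * (1 - p["pump_station_outage"])
    return 1 - (1 - quality) * (1 - quantity)          # either kind of failure

samples = []
for _ in range(10_000):
    p = {k: random.uniform(lo, hi) for k, (lo, hi) in uncertain_events.items()}
    samples.append(top_probability(p))

samples.sort()
print(f"mean top-event probability: {sum(samples) / len(samples):.4f}")
print(f"90% interval              : [{samples[500]:.4f}, {samples[9500]:.4f}]")
```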
A fast bottom-up algorithm for computing the cut sets of noncoherent fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corynen, G.C.
1987-11-01
An efficient procedure for finding the cut sets of large fault trees has been developed. Designed to address coherent or noncoherent systems, dependent events, and shared or common-cause events, the method - called SHORTCUT - is based on a fast algorithm for transforming a noncoherent tree into a quasi-coherent tree (COHERE), and on a new algorithm for reducing cut sets (SUBSET). To assure sufficient clarity and precision, the procedure is discussed in the language of simple sets, which is also developed in this report. Although the new method has not yet been fully implemented on the computer, we report theoretical worst-case estimates of its computational complexity. 12 refs., 10 figs.
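The core reduction step of such cut-set algorithms, removing non-minimal (absorbed) cut sets, can be sketched as follows; this is not the SHORTCUT/SUBSET algorithm itself, only the basic absorption rule it builds on.

```python
def minimize_cut_sets(cut_sets):
    """Remove any cut set that is a superset of another (absorption), keeping minimal ones."""
    sets = [frozenset(cs) for cs in cut_sets]
    sets.sort(key=len)                       # shorter sets can only absorb longer ones
    minimal = []
    for cs in sets:
        if not any(kept <= cs for kept in minimal):
            minimal.append(cs)
    return [set(cs) for cs in minimal]

# Hypothetical unreduced cut sets, e.g. as produced by a bottom-up gate expansion.
raw = [{"A", "B"}, {"A", "B", "C"}, {"B"}, {"C", "D"}, {"B", "D"}]
print(minimize_cut_sets(raw))                # e.g. [{'B'}, {'C', 'D'}]
```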
Electromagnetic Compatibility (EMC) in Microelectronics.
1983-02-01
Fault Tree Analysis", System Saftey Symposium, June 8-9, 1965, Seattle: The Boeing Company . 12. Fussell, J.B., "Fault Tree Analysis-Concepts and...procedure for assessing EMC in microelectronics and for applying DD, 1473 EOiTO OP I, NOV6 IS OESOL.ETE UNCLASSIFIED SECURITY CLASSIFICATION OF THIS...CRITERIA 2.1 Background 2 2.2 The Probabilistic Nature of EMC 2 2.3 The Probabilistic Approach 5 2.4 The Compatibility Factor 6 3 APPLYING PROBABILISTIC
Witter, Robert C.; Zhang, Yinglong J.; Wang, Kelin; Priest, George R.; Goldfinger, Chris; Stimely, Laura; English, John T.; Ferro, Paul A.
2013-01-01
Characterizations of tsunami hazards along the Cascadia subduction zone hinge on uncertainties in megathrust rupture models used for simulating tsunami inundation. To explore these uncertainties, we constructed 15 megathrust earthquake scenarios using rupture models that supply the initial conditions for tsunami simulations at Bandon, Oregon. Tsunami inundation varies with the amount and distribution of fault slip assigned to rupture models, including models where slip is partitioned to a splay fault in the accretionary wedge and models that vary the updip limit of slip on a buried fault. Constraints on fault slip come from onshore and offshore paleoseismological evidence. We rank each rupture model using a logic tree that evaluates a model’s consistency with geological and geophysical data. The scenarios provide inputs to a hydrodynamic model, SELFE, used to simulate tsunami generation, propagation, and inundation on unstructured grids with <5–15 m resolution in coastal areas. Tsunami simulations delineate the likelihood that Cascadia tsunamis will exceed mapped inundation lines. Maximum wave elevations at the shoreline varied from ∼4 m to 25 m for earthquakes with 9–44 m slip and Mw 8.7–9.2. Simulated tsunami inundation agrees with sparse deposits left by the A.D. 1700 and older tsunamis. Tsunami simulations for large (22–30 m slip) and medium (14–19 m slip) splay fault scenarios encompass 80%–95% of all inundation scenarios and provide reasonable guidelines for land-use planning and coastal development. The maximum tsunami inundation simulated for the greatest splay fault scenario (36–44 m slip) can help to guide development of local tsunami evacuation zones.
Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.
2009-01-01
The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
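A minimal sketch of the workflow described above, using an entropy-based decision tree from scikit-learn as a stand-in for C4.5; the sensor features, fault signatures, and noise levels are synthetic assumptions, not DRTM data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical sensor snapshots (chamber pressure, fuel flow, oxidizer flow,
# turbine temperature); labels name a leak location or nominal operation.
def make_samples(n, shift, label):
    base = np.array([1000.0, 60.0, 150.0, 800.0])
    x = base + shift + rng.normal(0.0, 5.0, size=(n, 4))     # additive sensor noise
    return x, [label] * n

X_nom, y_nom = make_samples(200, np.zeros(4), "nominal")
X_fuel, y_fuel = make_samples(200, np.array([-30.0, 8.0, 0.0, -15.0]), "fuel_line_leak")
X_ox, y_ox = make_samples(200, np.array([-25.0, 0.0, 10.0, 20.0]), "oxidizer_leak")

X = np.vstack([X_nom, X_fuel, X_ox])
y = y_nom + y_fuel + y_ox

# C4.5 itself is not in scikit-learn; an entropy-based CART tree is a close stand-in.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
clf.fit(X, y)

print(export_text(clf, feature_names=["pc", "fuel_flow", "ox_flow", "turbine_T"]))
print(clf.predict([[965.0, 69.0, 150.0, 783.0]]))     # resembles the fuel-line-leak signature
```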
Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are considered one of the most important kinds of devices in power engineering and have been widely used in power generation, airplanes, and naval ships, and also on oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas path of the gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce the information entropy based decision tree algorithm to extract rules from fault samples. The experiments on some real-world data show the effectiveness of the proposed algorithms.
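A minimal sketch, assuming a set of exhaust thermocouple readings, of how Shannon entropy can serve as a uniformity measure for the gas path; the temperature values are hypothetical, not data from the paper.

```python
import numpy as np

def temperature_entropy(temps_kelvin):
    """Shannon entropy of the normalized exhaust-temperature profile.
    Values near the maximum (log2 of the number of probes) indicate a uniform
    gas path; a drop in entropy flags one or more anomalous burner sections."""
    t = np.asarray(temps_kelvin, dtype=float)
    p = t / t.sum()                       # normalize readings to a distribution
    return float(-(p * np.log2(p)).sum())

# Hypothetical readings from 8 exhaust thermocouples.
healthy  = [790, 802, 795, 805, 798, 801, 796, 803]
degraded = [790, 802, 795, 640, 798, 801, 796, 803]   # one cold section

print(f"max possible entropy : {np.log2(8):.4f} bits")
print(f"healthy profile      : {temperature_entropy(healthy):.4f} bits")
print(f"degraded profile     : {temperature_entropy(degraded):.4f} bits")
```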
Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are considered one of the most important kinds of devices in power engineering and have been widely used in power generation, airplanes, and naval ships, and also on oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas path of the gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce the information entropy based decision tree algorithm to extract rules from fault samples. The experiments on some real-world data show the effectiveness of the proposed algorithms. PMID:25258726
Bayesian-network-based safety risk assessment for steel construction projects.
Leu, Sou-Sen; Chang, Ching-Miao
2013-05-01
There are four primary accident types at steel building construction (SC) projects: falls (tumbles), object falls, object collapse, and electrocution. Several systematic safety risk assessment approaches, such as fault tree analysis (FTA) and failure mode and effect criticality analysis (FMECA), have been used to evaluate safety risks at SC projects. However, these traditional methods ineffectively address dependencies among safety factors at various levels that fail to provide early warnings to prevent occupational accidents. To overcome the limitations of traditional approaches, this study addresses the development of a safety risk-assessment model for SC projects by establishing the Bayesian networks (BN) based on fault tree (FT) transformation. The BN-based safety risk-assessment model was validated against the safety inspection records of six SC building projects and nine projects in which site accidents occurred. The ranks of posterior probabilities from the BN model were highly consistent with the accidents that occurred at each project site. The model accurately provides site safety-management abilities by calculating the probabilities of safety risks and further analyzing the causes of accidents based on their relationships in BNs. In practice, based on the analysis of accident risks and significant safety factors, proper preventive safety management strategies can be established to reduce the occurrence of accidents on SC sites. Copyright © 2013 Elsevier Ltd. All rights reserved.
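A minimal sketch of the fault-tree-to-Bayesian-network idea: deterministic OR/AND gates become network nodes over basic events with prior probabilities, and observing the top event updates the basic-event probabilities. Event names, priors, and gate logic are hypothetical, and the paper's actual network structure is not reproduced.

```python
from itertools import product

# Hypothetical prior probabilities of basic events at a steel-construction site.
priors = {"guardrail_missing": 0.05, "harness_not_used": 0.10,
          "unsecured_load": 0.03, "crane_defect": 0.02}

def fall_accident(e):       # AND gate transformed into a deterministic BN node
    return e["guardrail_missing"] and e["harness_not_used"]

def object_fall(e):         # OR gate transformed into a deterministic BN node
    return e["unsecured_load"] or e["crane_defect"]

def accident(e):            # top event
    return fall_accident(e) or object_fall(e)

def posterior(query_event):
    """P(query_event = True | accident = True) by full enumeration of basic events."""
    names = list(priors)
    joint_acc = joint_query = 0.0
    for bits in product([True, False], repeat=len(names)):
        e = dict(zip(names, bits))
        p = 1.0
        for n in names:
            p *= priors[n] if e[n] else 1.0 - priors[n]
        if accident(e):
            joint_acc += p
            if e[query_event]:
                joint_query += p
    return joint_query / joint_acc

for ev in priors:
    print(f"P({ev} | accident) = {posterior(ev):.3f}")
```

In a full Bayesian network the conditional probability tables need not be deterministic, which is what lets the BN formulation capture dependencies that a plain fault tree cannot.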
Probabilistic Seismic Hazard Assessment of the Chiapas State (SE Mexico)
NASA Astrophysics Data System (ADS)
Rodríguez-Lomelí, Anabel Georgina; García-Mayordomo, Julián
2015-04-01
The Chiapas State, in southeastern Mexico, is a very active seismic region due to the interaction of three tectonic plates: North America, Cocos and Caribbean. We present a probabilistic seismic hazard assessment (PSHA) specifically performed to evaluate seismic hazard in the Chiapas state. The PSHA was based on a composite seismic catalogue homogenized to Mw and used a logic tree procedure for the consideration of different seismogenic source models and ground motion prediction equations (GMPEs). The results were obtained in terms of peak ground acceleration as well as spectral accelerations. The earthquake catalogue was compiled from the International Seismological Center and the Servicio Sismológico Nacional de México sources. Two different seismogenic source zone (SSZ) models were devised based on a revision of the tectonics of the region and the available geomorphological and geological maps. The SSZ were finally defined by the analysis of geophysical data, resulting in two main SSZ models. The Gutenberg-Richter parameters for each SSZ were calculated from the declustered and homogenized catalogue, while the maximum expected earthquake was assessed from both the catalogue and geological criteria. Several worldwide and regional GMPEs for subduction and crustal zones were revised. For each SSZ model we considered four possible combinations of GMPEs. Finally, hazard was calculated in terms of PGA and SA for 500-, 1000-, and 2500-year return periods for each branch of the logic tree using the CRISIS2007 software. The final hazard maps represent the mean values obtained from the two seismogenic and four attenuation models considered in the logic tree. For the three return periods analyzed, the maps locate the most hazardous areas in the Chiapas Central Pacific Zone, the Pacific Coastal Plain and the Motagua and Polochic Fault Zone; intermediate hazard values occur in the Chiapas Batholith Zone and in the Strike-Slip Faults Province. The hazard decreases towards the northeast across the Reverse Faults Province and up to the Yucatan Platform, where the lowest values are reached. We also produced uniform hazard spectra (UHS) for the three main cities of Chiapas. Tapachula city presents the highest spectral accelerations, while Tuxtla Gutierrez and San Cristobal de las Casas show similar values. We conclude that seismic hazard in Chiapas is chiefly controlled by the subduction of the Cocos plate beneath the North America and Caribbean plates, which makes the coastal areas the most hazardous. Additionally, the Motagua and Polochic Fault Zones are also important, increasing the hazard particularly in southeastern Chiapas.
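For a given logic-tree branch, annual exceedance rates translate into exceedance probabilities over an exposure time under the usual Poisson assumption, as in this short sketch; the rates shown are placeholders matching the return periods named above, not results of the study.

```python
import math

# Hypothetical annual exceedance rates (per year) for two PGA levels at one site.
annual_rate = {"PGA >= 0.2 g": 1.0 / 500, "PGA >= 0.4 g": 1.0 / 2500}

exposure_years = 50.0
for level, lam in annual_rate.items():
    p_exceed = 1.0 - math.exp(-lam * exposure_years)   # Poisson occurrence model
    print(f"{level}: return period {1 / lam:.0f} yr, "
          f"P(exceedance in {exposure_years:.0f} yr) = {p_exceed:.1%}")
```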
NASA Astrophysics Data System (ADS)
Yang, Wen-Xian
2006-05-01
Available machine fault diagnostic methods show unsatisfactory performance in both online and intelligent analyses because their operations involve intensive calculations and are labour intensive. Aiming at improving this situation, this paper describes the development of an intelligent approach using the Genetic Programming (GP) method. Owing to the simple calculation of the mathematical model being constructed, different kinds of machine faults may be diagnosed correctly and quickly. Moreover, human input is significantly reduced in the process of fault diagnosis. The effectiveness of the proposed strategy is validated by an illustrative example, in which three kinds of valve states inherent in a six-cylinder, four-stroke-cycle diesel engine, i.e. normal condition, valve-tappet clearance and gas leakage faults, are identified. In the example, 22 mathematical functions have been specially designed and 8 easily obtained signal features are used to construct the diagnostic model. Different from existing GPs, the diagnostic tree used in the algorithm is constructed in an intelligent way by applying a power-weight coefficient to each feature. The power-weight coefficients vary adaptively between 0 and 1 during the evolutionary process. Moreover, different evolutionary strategies are employed, respectively, for selecting the diagnostic features and functions, so that the mathematical functions are sufficiently utilized and, at the same time, repeated use of signal features may be fully avoided. The experimental results are illustrated diagrammatically in the following sections.
Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan
2014-03-01
Although reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in fault diagnosis of an induction motor when the appropriate pre-processing is not performed. Therefore, a new boundary-analysis-based feature extraction method in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bars and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset. © 2013 ISA. Published by ISA. All rights reserved.
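A minimal sketch of reconstructing a phase-space trajectory from a single phase-current signal by time-delay embedding; the synthetic current, embedding dimension, and delay are illustrative assumptions, and the image-conversion and boundary-extraction steps of the paper are not shown.

```python
import numpy as np

def delay_embed(signal, dim=2, tau=10):
    """Reconstruct a phase-space trajectory from a single time series using
    time-delay embedding: x(t), x(t + tau), ..., x(t + (dim - 1) * tau)."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Hypothetical stator current of an induction motor (fundamental plus a small
# fault-related sideband); parameters are illustrative only.
t = np.linspace(0.0, 1.0, 5000)
current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 44 * t)

trajectory = delay_embed(current, dim=2, tau=25)   # 2-D phase portrait
print(trajectory.shape)                            # (4975, 2)
```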
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien
2017-10-01
Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes into account this possibility by considering a system-level approach rather than an individual-fault-level approach, using the geological, seismological and geodetical information to invert the earthquake rates. In many places of the world, seismological and geodetical information along fault networks is often not well constrained. There is therefore a need to propose a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures, and single faults or FtF ruptures are considered as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed following two constraints: the magnitude frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the specific slip rate of each segment depending on the possible FtF ruptures. The modeled earthquake rates are then compared to the available independent data (geodetical, seismological and paleoseismological data) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advancements have been made in the understanding of the geological slip rates of the complex network of normal faults which are accommodating the ~15 mm yr-1 north-south extension. Modeling results show that geological, seismological and paleoseismological rates of earthquakes cannot be reconciled with only single-fault-rupture scenarios and require hypothesizing a large spectrum of possible FtF rupture sets. In order to fit the imposed regional Gutenberg-Richter (GR) MFD target, some of the slip along certain faults needs to be accommodated either with interseismic creep or as post-seismic processes. Furthermore, computed individual faults' MFDs differ depending on the position of each fault in the system and the possible FtF ruptures associated with the fault. Finally, a comparison of modeled earthquake rupture rates with those deduced from the regional and local earthquake catalog statistics and local paleoseismological data indicates a better fit with the FtF rupture set constructed with a distance criterion based on 5 km rather than 3 km, suggesting a high connectivity of faults in the WCR fault system.
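A minimal sketch of the slip-rate constraint: the moment rate implied by a fault's (or FtF rupture's) geometry and slip rate bounds the annual rate of earthquakes of a given magnitude. The fault dimensions, slip rate, and single-magnitude assumption are hypothetical simplifications of the methodology described above.

```python
# Moment-balance sketch: annual rate of earthquakes a fault can sustain if all of
# its seismic moment is released in events of a single magnitude.
MU = 3.0e10                      # assumed crustal shear modulus, Pa

def moment_rate(length_km, width_km, slip_rate_mm_yr):
    area = length_km * 1e3 * width_km * 1e3            # rupture area, m^2
    return MU * area * slip_rate_mm_yr * 1e-3          # N*m per year

def seismic_moment(mw):
    return 10 ** (1.5 * mw + 9.05)                     # Hanks & Kanamori, N*m

m0_dot = moment_rate(length_km=40.0, width_km=15.0, slip_rate_mm_yr=5.0)
for mw in (6.0, 6.5, 7.0):
    rate = m0_dot / seismic_moment(mw)
    print(f"Mw {mw}: rate = {rate:.4f}/yr, mean recurrence = {1 / rate:.0f} yr")
```

Distributing this moment budget across a target magnitude-frequency distribution, and across the single-fault and FtF rupture sets, is the inversion step that the methodology above performs.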
Bedrosian, Paul A.; Burgess, Matthew K.; Nishikawa, Tracy
2013-01-01
Within the south-western Mojave Desert, the Joshua Basin Water District is considering applying imported water to infiltration ponds in the Joshua Tree groundwater sub-basin in an attempt to artificially recharge the underlying aquifer. Scarce subsurface hydrogeological data are available near the proposed recharge site; therefore, time-domain electromagnetic (TDEM) data were collected and analysed to characterize the subsurface. TDEM soundings were acquired to estimate the depth to water on either side of the Pinto Mountain Fault, a major east-west trending strike-slip fault that transects the proposed recharge site. While TDEM is a standard technique for groundwater investigations, special care must be taken when acquiring and interpreting TDEM data in a two-dimensional (2D) faulted environment. A subset of the TDEM data consistent with a layered-earth interpretation was identified through a combination of three-dimensional (3D) forward modelling and diffusion time-distance estimates. Inverse modelling indicates an offset in water table elevation of nearly 40 m across the fault. These findings imply that the fault acts as a low-permeability barrier to groundwater flow in the vicinity of the proposed recharge site. Existing production wells on the south side of the fault, together with a thick unsaturated zone and permeable near-surface deposits, suggest the southern half of the study area is suitable for artificial recharge. These results illustrate the effectiveness of targeted TDEM in support of hydrological studies in a heavily faulted desert environment where data are scarce and the cost of obtaining these data by conventional drilling techniques is prohibitive.
A fault is born: The Landers-Mojave earthquake line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nur, A.; Ron, H.
1993-04-01
The epicenter and the southern portion of the 1992 Landers earthquake rupture fell on an approximately N-S earthquake line, defined by both epicentral locations and by the rupture directions of four previous M>5 earthquakes in the Mojave: the 1947 Manix, 1975 Galway Lake, 1979 Homestead Valley, and 1992 Joshua Tree events. Another M 5.2 earthquake epicenter in 1965 fell on this line where it intersects the Calico fault. In contrast, the northern part of the Landers rupture followed the NW-SE trending Camp Rock and parallel faults, exhibiting an apparently unusual rupture kink. The block tectonic model (Ron et al., 1984), combining fault kinematics and mechanics, explains both the alignment of the events and their ruptures (Nur et al., 1986, 1989), as well as the Landers kink (Nur et al., 1992). Accordingly, the now NW-oriented faults have rotated into their present direction away from the direction of maximum shortening, close to becoming locked, whereas a new fault set, optimally oriented relative to the direction of shortening, is developing to accommodate current crustal deformation. The Mojave-Landers line may thus be a new fault in formation. During the transition of faulting from the old, well developed and weak but poorly oriented faults to the strong but favorably oriented new ones, both can slip simultaneously, giving rise to kinks such as Landers.
The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters
NASA Technical Reports Server (NTRS)
Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)
1998-01-01
We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while tree uplinks remain expensive.
NASA Astrophysics Data System (ADS)
Okumura, K.
2011-12-01
Accurate location and geometry of seismic sources are critical to estimate strong ground motion. A complete and precise rupture history is also critical to estimate the probability of future events. In order to better forecast future earthquakes and to reduce seismic hazards, we should consider all options and choose the most likely parameters. Multiple options for logic trees are acceptable only after thorough examination of contradicting estimates and should not result from easy compromise or suspension of judgment. In the process of preparation and revision of the Japanese probabilistic and deterministic earthquake hazard maps by the Headquarters for Earthquake Research Promotion since 1996, many decisions were made to select plausible parameters, but many contradicting estimates have been left without thorough examination. There are several highly active faults in central Japan, such as the Itoigawa-Shizuoka Tectonic Line active fault system (ISTL), West Nagano Basin fault system (WNBF), Inadani fault system (INFS), and Atera fault system (ATFS). The highest slip rate and the shortest recurrence interval are respectively ~1 cm/yr and 500 to 800 years, and the estimated maximum magnitude is 7.5 to 8.5. Those faults are very hazardous because almost the entire population and industry are located above the faults within tectonic depressions. As to fault location, most uncertainties arise from the interpretation of geomorphic features. Geomorphological interpretation without geological and structural insight often leads to wrong mapping. Though a longer, non-existent fault may seem a safer estimate, incorrectness harms the reliability of the forecast. It also does not greatly affect strong-motion estimates, but it is misleading for surface-displacement issues. Fault geometry, on the other hand, is very important to estimate intensity distribution. For the middle portion of the ISTL, fast-moving left-lateral strike-slip up to 1 cm/yr is obvious. Recent seismicity, possibly induced by the 2011 Tohoku earthquake, shows pure strike-slip. However, thrusts are modeled from seismic profiles and gravity anomalies. Therefore, two contradicting models are presented for strong-motion estimates. There should be a unique solution for the geometry, which will be discussed. As to the rupture history, there is plenty of paleoseismological evidence that supports segmentation of the faults above. However, in most fault zones, the largest and sometimes possibly less frequent earthquakes are modeled. Segmentation and modeling of coming earthquakes should be more carefully examined without leaving them in contradiction.
LIDAR Helps Identify Source of 1872 Earthquake Near Chelan, Washington
NASA Astrophysics Data System (ADS)
Sherrod, B. L.; Blakely, R. J.; Weaver, C. S.
2015-12-01
One of the largest historic earthquakes in the Pacific Northwest occurred on 15 December 1872 (M6.5-7) near the south end of Lake Chelan in north-central Washington State. Lack of recognized surface deformation suggested that the earthquake occurred on a blind, perhaps deep, fault. New LiDAR data show landslides and a ~6 km long, NW-side-up scarp in Spencer Canyon, ~30 km south of Lake Chelan. Two landslides in Spencer Canyon impounded small ponds. An historical account indicated that dead trees were visible in one pond in AD1884. Wood from a snag in the pond yielded a calibrated age of AD1670-1940. Tree ring counts show that the oldest living trees on each landslide are 130 and 128 years old. The larger of the two landslides obliterated the scarp and thus, post-dates the last scarp-forming event. Two trenches across the scarp exposed a NW-dipping thrust fault. One trench exposed alluvial fan deposits, Mazama ash, and scarp colluvium cut by a single thrust fault. Three charcoal samples from a colluvium buried during the last fault displacement had calibrated ages between AD1680 and AD1940. The second trench exposed gneiss thrust over colluvium during at least two, and possibly three fault displacements. The younger of two charcoal samples collected from a colluvium below gneiss had a calibrated age of AD1665- AD1905. For an historical constraint, we assume that the lack of felt reports for large earthquakes in the period between 1872 and today indicates that no large earthquakes capable of rupturing the ground surface occurred in the region after the 1872 earthquake; thus the last displacement on the Spencer Canyon scarp cannot post-date the 1872 earthquake. Modeling of the age data suggests that the last displacement occurred between AD1840 and AD1890. These data, combined with the historical record, indicate that this fault is the source of the 1872 earthquake. Analyses of aeromagnetic data reveal lithologic contacts beneath the scarp that form an ENE-striking, curvilinear zone ~2.5 km wide and ~55 km long. This zone coincides with monoclines mapped in Mesozoic bedrock and Miocene flood basalts. This study ends uncertainty regarding the source of the 1872 earthquake and provides important information for seismic hazard analyses of major infrastructure projects in Washington and British Columbia.
The 1992 Landers earthquake sequence; seismological observations
Hauksson, Egill; Jones, Lucile M.; Hutton, Kate; Eberhart-Phillips, Donna
1993-01-01
The (MW6.1, 7.3, 6.2) 1992 Landers earthquakes began on April 23 with the MW6.1 1992 Joshua Tree preshock and form the most substantial earthquake sequence to occur in California in the last 40 years. This sequence ruptured almost 100 km of both surficial and concealed faults and caused aftershocks over an area 100 km wide by 180 km long. The faulting was predominantly strike slip and three main events in the sequence had unilateral rupture to the north away from the San Andreas fault. The MW6.1 Joshua Tree preshock at 33°N58′ and 116°W19′ on 0451 UT April 23 was preceded by a tightly clustered foreshock sequence (M≤4.6) beginning 2 hours before the mainshock and followed by a large aftershock sequence with more than 6000 aftershocks. The aftershocks extended along a northerly trend from about 10 km north of the San Andreas fault, northwest of Indio, to the east-striking Pinto Mountain fault. The Mw7.3 Landers mainshock occurred at 34°N13′ and 116°W26′ at 1158 UT, June 28, 1992, and was preceded for 12 hours by 25 small M≤3 earthquakes at the mainshock epicenter. The distribution of more than 20,000 aftershocks, analyzed in this study, and short-period focal mechanisms illuminate a complex sequence of faulting. The aftershocks extend 60 km to the north of the mainshock epicenter along a system of at least five different surficial faults, and 40 km to the south, crossing the Pinto Mountain fault through the Joshua Tree aftershock zone towards the San Andreas fault near Indio. The rupture initiated in the depth range of 3–6 km, similar to previous M∼5 earthquakes in the region, although the maximum depth of aftershocks is about 15 km. The mainshock focal mechanism showed right-lateral strike-slip faulting with a strike of N10°W on an almost vertical fault. The rupture formed an arclike zone well defined by both surficial faulting and aftershocks, with more westerly faulting to the north. This change in strike is accomplished by jumping across dilational jogs connecting surficial faults with strikes rotated progressively to the west. A 20-km-long linear cluster of aftershocks occurred 10–20 km north of Barstow, or 30–40 km north of the end of the mainshock rupture. The most prominent off-fault aftershock cluster occurred 30 km to the west of the Landers mainshock. The largest aftershock was within this cluster, the Mw6.2 Big Bear aftershock occurring at 34°N10′ and 116°W49′ at 1505 UT June 28. It exhibited left-lateral strike-slip faulting on a northeast striking and steeply dipping plane. The Big Bear aftershocks form a linear trend extending 20 km to the northeast with a scattered distribution to the north. The Landers mainshock occurred near the southernmost extent of the Eastern California Shear Zone, an 80-km-wide, more than 400-km-long zone of deformation. This zone extends into the Death Valley region and accommodates about 10 to 20% of the plate motion between the Pacific and North American plates. The Joshua Tree preshock, its aftershocks, and Landers aftershocks form a previously missing link that connects the Eastern California Shear Zone to the southern San Andreas fault.
NASA Astrophysics Data System (ADS)
Newman, W. I.; Turcotte, D. L.
2002-12-01
We have studied a hybrid model combining the forest-fire model with the site-percolation model in order to better understand the earthquake cycle. We consider a square array of sites. At each time step, a "tree" is dropped on a randomly chosen site and is planted if the site is unoccupied. When a cluster of "trees" spans the array (a percolating cluster), all the trees in the cluster are removed ("burned") in a "fire." The removal of the cluster is analogous to a characteristic earthquake and planting "trees" is analogous to increasing the regional stress. The clusters are analogous to the metastable regions of a fault over which an earthquake rupture can propagate once triggered. We find that the frequency-area statistics of the metastable regions are power-law with a negative exponent of two (as in the forest-fire model). This is analogous to the Gutenberg-Richter distribution of seismicity. This "self-organized critical behavior" can be explained in terms of an inverse cascade of clusters. Individual trees move from small to larger clusters until they are destroyed. This inverse cascade of clusters is self-similar and the power-law distribution of cluster sizes has been shown to have an exponent of two. We have quantified the forecasting of the spanning fires using error diagrams. The assumption that "fires" (earthquakes) are quasi-periodic has moderate predictability. The density of trees gives an improved degree of predictability, while the size of the largest cluster of trees provides a substantial improvement in forecasting a "fire."
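The rules of the hybrid model are simple enough to sketch directly. The following is a minimal illustration, not the authors' implementation; the grid size, number of steps and the left-to-right spanning rule are assumptions.

```python
# Minimal sketch of the hybrid forest-fire / site-percolation model described above
# (illustrative only; grid size and spanning rule are assumptions).
import numpy as np
from scipy.ndimage import label

L = 64
grid = np.zeros((L, L), dtype=bool)
rng = np.random.default_rng(0)
fire_sizes = []

for step in range(50_000):
    i, j = rng.integers(0, L, size=2)
    grid[i, j] = True                      # plant a tree (analogue of loading stress)
    clusters, n = label(grid)              # 4-connected clusters of occupied sites
    # a cluster "spans" if it touches both the left and right edges
    spanning = (set(clusters[:, 0]) & set(clusters[:, -1])) - {0}
    for c in spanning:
        mask = clusters == c
        fire_sizes.append(mask.sum())      # size of the "characteristic earthquake"
        grid[mask] = False                 # burn (remove) the spanning cluster

# The histogram of fire_sizes (metastable-cluster areas) is expected to be
# approximately power-law, analogous to a Gutenberg-Richter distribution.
```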
Experimental evaluation of the certification-trail method
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. A comprehensive attempt to assess experimentally the performance and overall value of the method is reported. The method is applied to algorithms for the following problems: Huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which an approach using certification trails allows for significantly faster overall program execution time than a basic time-redundancy approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed. In both cases, the algorithm which performs answer-validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis which enable comparison between the certification-trail method and the time-redundancy approach were presented. The analysis reveals some substantial and sometimes surprising advantages for the certification-trail method. Finally, the work our group performed on the design and implementation of fault injection testbeds for experimental analysis of the certification trail technique is discussed. This work employs two distinct methodologies, software fault injection (modification of instruction, data, and stack segments of programs on a Sun Sparcstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed at logical zero/one values). Our results indicate the viability of the certification trail technique. It is also believed that the tools developed provide a solid base for additional exploration.
Spatial distribution of block falls using volumetric GIS-decision-tree models
NASA Astrophysics Data System (ADS)
Abdallah, C.
2010-10-01
Block falls are considered a significant aspect of surficial instability contributing to losses in land and socio-economic aspects through their damaging effects on natural and human environments. This paper predicts and maps the geographic distribution and volumes of block falls in central Lebanon using remote sensing, geographic information systems (GIS) and decision-tree modeling (un-pruned and pruned trees). Eleven terrain parameters (lithology, proximity to fault line, karst type, soil type, distance to drainage line, elevation, slope gradient, slope aspect, slope curvature, land cover/use, and proximity to roads) were generated to statistically explain the occurrence of block falls. The latter were discriminated using SPOT4 satellite imagery, and their dimensions were determined during field surveys. The un-pruned tree model based on all considered parameters explained 86% of the variability in field block fall measurements. Once pruned, it explains 50% of the variability in block fall volumes using just four parameters (lithology, slope gradient, soil type, and land cover/use). Both tree models (un-pruned and pruned) were converted to quantitative 1:50,000 block fall maps with different classes, ranging from nil (no block falls) to more than 4000 m³. These maps match fairly well, with a coincidence value of 45%; however, both can be used to prioritize the choice of specific zones for further measurement and modeling, as well as for land-use management. The proposed tree models are relatively simple, and may also be applied to other areas (i.e. the choice of the un-pruned or pruned model depends on the availability of terrain parameters in a given area).
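A minimal sketch of the un-pruned versus pruned regression-tree comparison, assuming a hypothetical field-survey table and scikit-learn's cost-complexity pruning (the study's actual software and pruning settings are not specified here):

```python
# Illustrative sketch (not the study's code) of the un-pruned vs. pruned tree idea,
# using scikit-learn; the feature names, file name and pruning strength are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

features = ["lithology", "fault_distance", "karst_type", "soil_type", "drainage_distance",
            "elevation", "slope_gradient", "slope_aspect", "slope_curvature",
            "land_cover", "road_distance"]
df = pd.read_csv("block_fall_inventory.csv")      # hypothetical field-survey table
X, y = pd.get_dummies(df[features]), df["block_fall_volume_m3"]

unpruned = DecisionTreeRegressor(random_state=0).fit(X, y)
pruned = DecisionTreeRegressor(random_state=0, ccp_alpha=0.01).fit(X, y)  # cost-complexity pruning

print(unpruned.score(X, y), pruned.score(X, y))   # R^2 on training data (cf. 86% vs 50%)
print(pruned.feature_importances_)                # pruning retains only the dominant predictors
```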
An integrated approach to system design, reliability, and diagnosis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1990-01-01
The requirement for ultradependability of computer systems in future avionics and space applications necessitates a top-down, integrated systems engineering approach for design, implementation, testing, and operation. The functional analyses of hardware and software systems must be combined by models that are flexible enough to represent their interactions and behavior. The information contained in these models must be accessible throughout all phases of the system life cycle in order to maintain consistency and accuracy in design and operational decisions. One approach being taken by researchers at Ames Research Center is the creation of an object-oriented environment that integrates information about system components required in the reliability evaluation with behavioral information useful for diagnostic algorithms. Procedures have been developed at Ames that perform reliability evaluations during design and failure diagnoses during system operation. These procedures utilize information from a central source, structured as object-oriented fault trees. Fault trees were selected because they are a flexible model widely used in aerospace applications and because they give a concise, structured representation of system behavior. The utility of this integrated environment for aerospace applications in light of our experiences during its development and use is described. The techniques for reliability evaluation and failure diagnosis are discussed, and current extensions of the environment and areas requiring further development are summarized.
Models for evaluating the performability of degradable computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.
1982-01-01
Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.
Generating Scenarios When Data Are Missing
NASA Technical Reports Server (NTRS)
Mackey, Ryan
2007-01-01
The Hypothetical Scenario Generator (HSG) is being developed in conjunction with other components of artificial-intelligence systems for automated diagnosis and prognosis of faults in spacecraft, aircraft, and other complex engineering systems. The HSG accepts, as input, possibly incomplete data on the current state of a system. The HSG models a potential fault scenario as an ordered disjunctive tree of conjunctive consequences, wherein the ordering is based upon the likelihood that a particular conjunctive path will be taken for the given set of inputs. The computation of likelihood is based partly on a numerical ranking of the degree of completeness of data with respect to satisfaction of the antecedent conditions of prognostic rules. The results from the HSG are then used by a model-based artificial-intelligence subsystem to predict realistic scenarios and states.
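One way to picture the structure described above is a node whose disjunctive branches each carry a conjunction of consequences and are re-ranked as observations arrive. The sketch below is purely hypothetical; the class names, events and scores are invented for illustration and are not the HSG's actual representation.

```python
# Hypothetical sketch of an ordered disjunctive tree of conjunctive consequences,
# with branch ordering driven partly by data completeness (all names are invented).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Branch:
    consequences: List[str]          # conjunctive consequences along this path
    rule_antecedents: List[str]      # antecedent conditions of the prognostic rule
    base_likelihood: float           # likelihood if all antecedents were observed

@dataclass
class ScenarioNode:
    state: str
    branches: List[Branch] = field(default_factory=list)

    def ranked(self, observed: set) -> List[Branch]:
        """Order disjunctive branches by likelihood weighted by data completeness."""
        def score(b: Branch) -> float:
            completeness = sum(a in observed for a in b.rule_antecedents) / len(b.rule_antecedents)
            return b.base_likelihood * completeness
        return sorted(self.branches, key=score, reverse=True)

node = ScenarioNode("valve_stuck", [
    Branch(["pressure_rise", "relief_valve_opens"], ["sensor_p1_high", "sensor_t3_high"], 0.6),
    Branch(["flow_loss", "pump_cavitation"], ["sensor_f2_low"], 0.3),
])
print(node.ranked(observed={"sensor_p1_high"}))
```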
Fault tree safety analysis of a large Li/SOCl2 spacecraft battery
NASA Technical Reports Server (NTRS)
Uy, O. Manuel; Maurer, R. H.
1987-01-01
The results of the safety fault tree analysis on the eight-module, 576 F-cell Li/SOCl2 battery on the spacecraft and in the integration and test environment prior to launch on the ground are presented. The analysis showed that with the right combination of blocking diodes, electrical fuses, thermal fuses, thermal switches, cell balance, cell vents, and battery module vents, the probability of a single cell or a 72-cell module exploding can be reduced to 0.000001, essentially the residual probability of explosion for unexplained reasons.
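The quoted figure is the kind of number an AND gate over independent protective features produces; a toy calculation with assumed failure probabilities (not the study's values) illustrates the arithmetic.

```python
# Illustrative arithmetic only (probabilities are hypothetical, not the study's values):
# a cell explosion requires simultaneous failure of independent protective features,
# so the fault-tree AND gate multiplies their failure probabilities.
p_blocking_diode = 1e-2
p_electrical_fuse = 1e-2
p_thermal_switch = 1e-2
p_top = p_blocking_diode * p_electrical_fuse * p_thermal_switch
print(p_top)  # 1e-06, the order of magnitude quoted in the abstract
```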
Analyses of rear-end crashes based on classification tree models.
Yan, Xuedong; Radwan, Essam
2006-09-01
Signalized intersections are accident-prone areas, especially for rear-end crashes, due to the fact that the diversity of drivers' braking behaviors increases during the signal change. The objective of this article is to improve knowledge of the relationship between rear-end crashes occurring at signalized intersections and a series of potential traffic risk factors classified by driver characteristics, environments, and vehicle types. Based on the 2001 Florida crash database, the classification tree method and the quasi-induced exposure concept were used to perform the statistical analysis. Two binary classification tree models were developed in this study. One was used for the comparison between rear-end and non-rear-end crashes to identify specific trends of the rear-end crashes. The other was constructed for the comparison between striking vehicles/drivers (at-fault) and struck vehicles/drivers (not-at-fault) to find more complex crash patterns associated with the traffic attributes of driver, vehicle, and environment. The modeling results showed that rear-end crashes are over-represented at higher speed limits (45-55 mph); the rear-end crash propensity for daytime is apparently larger than for nighttime; and the reduction of braking capacity due to wet and slippery road surface conditions definitely contributes to rear-end crashes, especially at intersections with higher speed limits. The tree model segmented drivers into four homogeneous age groups: < 21 years, 21-31 years, 32-75 years, and > 75 years. The youngest driver group shows the largest crash propensity; in the 21-31 age group, male drivers are over-involved in rear-end crashes under adverse weather conditions, and the 32-75 years drivers driving large-size vehicles have a larger crash propensity compared to those driving passenger vehicles. Combined with the quasi-induced exposure concept, the classification tree method is a proper statistical tool for traffic-safety analysis to investigate crash propensity. Compared to logistic regression models, tree models have advantages in handling continuous independent variables and easily explaining complex interaction effects with more than two independent variables. This research recommends that at signalized intersections with higher speed limits, reducing the speed limit to 40 mph would effectively contribute to a lower accident rate. Drivers involved in alcohol use may increase not only rear-end crash risk but also driver injury severity. Education and enforcement countermeasures should focus on the driver group younger than 21 years. Further studies are suggested to compare crash risk distributions of driver age for other main crash types to seek corresponding traffic countermeasures.
Probabilistic Seismic Hazard Maps for Ecuador
NASA Astrophysics Data System (ADS)
Mariniere, J.; Beauval, C.; Yepes, H. A.; Laurence, A.; Nocquet, J. M.; Alvarado, A. P.; Baize, S.; Aguilar, J.; Singaucho, J. C.; Jomard, H.
2017-12-01
A probabilistic seismic hazard study is conducted for Ecuador, a country facing high seismic hazard, both from megathrust subduction earthquakes and from shallow crustal moderate to large earthquakes. Building on the knowledge produced in the last years in historical seismicity, earthquake catalogs, active tectonics, geodynamics, and geodesy, several alternative earthquake recurrence models are developed. An area source model is first proposed, based on the seismogenic crustal and inslab sources defined in Yepes et al. (2016). A slightly different segmentation is proposed for the subduction interface with respect to Yepes et al. (2016). Three earthquake catalogs are used to account for the numerous uncertainties in the modeling of frequency-magnitude distributions. The hazard maps obtained highlight several source zones enclosing fault systems that exhibit low seismic activity, not representative of the geological and/or geodetic slip rates. Consequently, a fault model is derived, including faults with an earthquake recurrence model inferred from geological and/or geodetic slip rate estimates. The geodetic slip rates on the set of simplified faults are estimated from a GPS horizontal velocity field (Nocquet et al. 2014). Assumptions on the aseismic component of the deformation are required. Combining these alternative earthquake models in a logic tree, and using a set of selected ground-motion prediction equations adapted to Ecuador's different tectonic contexts, a mean hazard map is obtained. Hazard maps corresponding to the 16th and 84th percentiles are also derived, highlighting the zones where uncertainties on the hazard are highest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattison, M.B.; Schroeder, J.A.; Russell, K.D.
The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattison, M.B.; Schroeder, J.A.; Russell, K.D.
The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.
Probabilistic Seismic Hazard Assessment for a NPP in the Upper Rhine Graben, France
NASA Astrophysics Data System (ADS)
Clément, Christophe; Chartier, Thomas; Jomard, Hervé; Baize, Stéphane; Scotti, Oona; Cushing, Edward
2015-04-01
The southern part of the Upper Rhine Graben (URG), straddling the border between eastern France and western Germany, exhibits relatively high seismic activity for an intraplate area. A magnitude 5 or greater earthquake shakes the URG every 25 years, and in 1356 an earthquake of magnitude greater than 6.5 struck the city of Basel. Several potentially active faults have been identified in the area and documented in the French Active Fault Database (website under construction). These faults are located along the graben boundaries and also inside the graben itself, beneath heavily populated areas and critical facilities (including the Fessenheim Nuclear Power Plant). These faults are prone to produce earthquakes with magnitude 6 and above. Published regional models and preliminary geomorphological investigations provided a provisional assessment of slip rates for the individual faults (0.1-0.001 mm/a), resulting in recurrence times of 10,000 years or greater for magnitude 6+ earthquakes. Using a fault model, ground-motion response spectra are calculated for annual frequencies of exceedance (AFE) ranging from 10⁻⁴ to 10⁻⁸ per year, typical for design basis and probabilistic safety analyses of NPPs. A logic tree is implemented to evaluate uncertainties in the seismic hazard assessment. The choice of ground-motion prediction equations (GMPEs) and the range of slip rate uncertainty are the main sources of seismic hazard variability at the NPP site. In fact, the hazard for AFE lower than 10⁻⁴ is mostly controlled by the potentially active nearby Rhine River fault. Compared with areal source zone models, a fault model localizes the hazard around the active faults and changes the shape of the Uniform Hazard Spectrum at the site. Seismic hazard deaggregations are performed to identify the earthquake scenarios (including magnitude, distance and the number of standard deviations from the median ground motion as predicted by GMPEs) that contribute to the exceedance of spectral acceleration for the different AFE levels. These scenarios are finally examined with respect to the seismicity data available in paleoseismic, historic and instrumental catalogues.
Failed oceanic transform models: experience of shaking the tree
NASA Astrophysics Data System (ADS)
Gerya, Taras
2017-04-01
In geodynamics, numerical modeling is often used as a trial-and-error tool, which does not necessarily require full understanding or even a correct concept of the modeled phenomenon. Paradoxically, in order to understand an enigmatic process one should simply try to model it based on some initial assumptions, which need not even be correct… The reason is that our intuition is not always well "calibrated" for understanding geodynamic phenomena, which develop on space- and timescales that are very different from our everyday experience. We often have much better ideas about the physical laws governing geodynamic processes than about how these laws should interact on geological space- and timescales. From this perspective, numerical models, in which these physical laws are self-consistently implemented, can gradually calibrate our intuition by exploring which scenarios are physically sensible and which are not. I personally went through this painful learning path many times, and one noteworthy example was my 3D numerical modeling of oceanic transform faults. As I understand in retrospect, my initial literature-inspired concept of how and why transform faults form and evolve was thermomechanically inconsistent and based on two main assumptions (both, it turned out, incorrect!): (1) oceanic transforms are directly inherited from the continental rifting and breakup stages, and (2) they represent plate fragmentation structures having peculiar extension-parallel orientation due to the stress rotation caused by thermal contraction of the oceanic lithosphere. During one year (!) of high-resolution thermomechanical numerical experiments exploring various physics (including very computationally demanding thermal contraction), I systematically observed how my initially prescribed extension-parallel weak transform faults connecting ridge segments rotated away from their original orientation and were converted into oblique ridge sections… This was truly an epic failure! However, at the very same time, some pseudo-2D "side models" with an initially straight ridge and an ad hoc strain-weakened rheology, which were run out of curiosity, suddenly showed spontaneous development of ridge curvature… A fraction of these models showed spontaneous development of orthogonal ridge-transform patterns by rotation of oblique ridge sections toward the extension-parallel direction to accommodate asymmetric plate accretion. The latter was controlled by detachment faults stabilized by strain weakening. Further exploration of these "side models" resulted in a complete change of my concept of oceanic transforms: they are not plate fragmentation but rather plate growth structures stabilized by continuous plate accretion and rheological weakening of deforming rocks (Gerya, 2010, 2013). The conclusion is: keep shaking the tree and the banana will fall… Gerya, T. (2010) Dynamical instability produces transform faults at mid-ocean ridges. Science, 329, 1047-1050. Gerya, T.V. (2013) Three-dimensional thermomechanical modeling of oceanic spreading initiation and evolution. Phys. Earth Planet. Interiors, 214, 35-52.
Monotone Boolean approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
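The best possible monotone bounds mentioned above admit a simple generic construction (not necessarily the report's algorithm): the tightest monotone upper bound takes the OR of f over all smaller points, and the tightest monotone lower bound takes the AND of f over all larger points. A brute-force sketch for small truth tables:

```python
# A generic construction consistent with the report's idea (not necessarily Hulme's
# algorithm): the tightest monotone-increasing upper and lower bounds of a Boolean
# function, computed by brute force over the truth table.
from itertools import product

def monotone_bounds(f, n):
    """Return (lower, upper) monotone bounds of f over {0,1}^n as dictionaries."""
    points = list(product((0, 1), repeat=n))
    leq = lambda x, y: all(a <= b for a, b in zip(x, y))
    upper = {x: any(f(y) for y in points if leq(y, x)) for x in points}  # smallest monotone g >= f
    lower = {x: all(f(y) for y in points if leq(x, y)) for x in points}  # largest monotone h <= f
    return lower, upper

# Example: a noncoherent structure function f = x1 AND (NOT x2)
f = lambda x: x[0] and not x[1]
low, up = monotone_bounds(f, 2)
print(low, up)   # h <= f <= g everywhere, with h and g free of complemented variables
```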
A Fuzzy Reasoning Design for Fault Detection and Diagnosis of a Computer-Controlled System
Ting, Y.; Lu, W.B.; Chen, C.H.; Wang, G.K.
2008-01-01
A Fuzzy Reasoning and Verification Petri Nets (FRVPNs) model is established for an error detection and diagnosis mechanism (EDDM) applied to a complex fault-tolerant PC-controlled system. The inference accuracy can be improved through the hierarchical design of a two-level fuzzy rule decision tree (FRDT) and a Petri nets (PNs) technique to transform the fuzzy rules into the FRVPNs model. Several simulation examples of the assumed failure events were carried out using the FRVPNs and the Mamdani fuzzy method with MATLAB tools. The reasoning performance of the developed FRVPNs was verified by comparing the inference outcome to that of the Mamdani method. Both methods result in the same conclusions. Thus, the present study demonstrates that the proposed FRVPNs model is able to achieve the purpose of reasoning and, furthermore, of determining the failure event of the monitored application program.
NASA Astrophysics Data System (ADS)
LI, Y.; Yang, S. H.
2017-05-01
The Antarctic astronomical telescopes operate year-round at the unattended South Pole, with only one maintenance opportunity each year. Due to the complexity of the optical, mechanical, and electrical systems, the telescopes are hard to maintain and require multi-skilled expedition teams, which means heightened attention to reliability is essential for the Antarctic telescopes. Based on the fault mechanism and fault modes of the main-axis control system of the equatorial Antarctic astronomical telescope AST3-3 (Antarctic Schmidt Telescopes 3-3), the method of fault tree analysis is introduced in this article, and the importance degree of the top event is obtained from the structural importance of the bottom events. From these results, hidden problems and weak links can be effectively identified, indicating directions for promoting the stability of the system and optimizing its design.
Health Management Applications for International Space Station
NASA Technical Reports Server (NTRS)
Alena, Richard; Duncavage, Dan
2005-01-01
Traditional mission and vehicle management involves teams of highly trained specialists monitoring vehicle status and crew activities, responding rapidly to any anomalies encountered during operations. These teams work from the Mission Control Center and have access to engineering support teams with specialized expertise in International Space Station (ISS) subsystems. Integrated System Health Management (ISHM) applications can significantly augment these capabilities by providing enhanced monitoring, prognostic and diagnostic tools for critical decision support and mission management. The Intelligent Systems Division of NASA Ames Research Center is developing many prototype applications using model-based reasoning, data mining and simulation, working with Mission Control through the ISHM Testbed and Prototypes Project. This paper will briefly describe information technology that supports current mission management practice, and will extend this to a vision for future mission control workflow incorporating new ISHM applications. It will describe ISHM applications currently under development at NASA and will define technical approaches for implementing our vision of future human exploration mission management incorporating artificial intelligence and distributed web service architectures using specific examples. Several prototypes are under development, each highlighting a different computational approach. The ISStrider application allows in-depth analysis of Caution and Warning (C&W) events by correlating real-time telemetry with the logical fault trees used to define off-nominal events. The application uses live telemetry data and the Livingstone diagnostic inference engine to display the specific parameters and fault trees that generated the C&W event, allowing a flight controller to identify the root cause of the event from thousands of possibilities by simply navigating animated fault tree models on their workstation. SimStation models the functional power flow for the ISS Electrical Power System and can predict power balance for nominal and off-nominal conditions. SimStation uses real-time telemetry data to keep detailed computational physics models synchronized with actual ISS power system state. In the event of failure, the application can then rapidly diagnose root cause, predict future resource levels and even correlate technical documents relevant to the specific failure. These advanced computational models will allow better insight and more precise control of ISS subsystems, increasing safety margins by speeding up anomaly resolution and reducing engineering team effort and cost. This technology will make operating ISS more efficient and is directly applicable to next-generation exploration missions and Crew Exploration Vehicles.
Development of a methodology for assessing the safety of embedded software systems
NASA Technical Reports Server (NTRS)
Garrett, C. J.; Guarro, S. B.; Apostolakis, G. E.
1993-01-01
A Dynamic Flowgraph Methodology (DFM) based on an integrated approach to modeling and analyzing the behavior of software-driven embedded systems for assessing and verifying reliability and safety is discussed. DFM is based on an extension of the Logic Flowgraph Methodology to incorporate state transition models. System models which express the logic of the system in terms of causal relationships between physical variables and temporal characteristics of software modules are analyzed to determine how a certain state can be reached. This is done by developing timed fault trees which take the form of logical combinations of static trees relating the system parameters at different points in time. The resulting information concerning the hardware and software states can be used to eliminate unsafe execution paths and identify testing criteria for safety-critical software functions.
Fault tree analysis of most common rolling bearing tribological failures
NASA Astrophysics Data System (ADS)
Vencl, Aleksandar; Gašić, Vlada; Stojanović, Blaža
2017-02-01
Wear as a tribological process has a major influence on the reliability and life of rolling bearings. Field examinations of bearing failures due to wear indicate possible causes and point to the necessary measures for wear reduction or elimination. Wear itself is a very complex process initiated by the action of different mechanisms, and can be manifested by different wear types which are often related. However, the dominant type of wear can be approximately determined. The paper presents the classification of the most common bearing damages according to the dominant wear type, i.e. abrasive wear, adhesive wear, surface fatigue wear, erosive wear, fretting wear and corrosive wear. The wear types are correlated with the terms used in the ISO 15243 standard. Each wear type is illustrated with an appropriate photograph, and for each wear type an appropriate description of causes and manifestations is presented. Possible causes of rolling bearing failure are used for the fault tree analysis (FTA), which was performed to determine the root causes of bearing failures. The constructed fault tree diagram for rolling bearing failure can be a useful tool for maintenance engineers.
Renjith, V R; Madhu, G; Nayagam, V Lakshmana Gomathi; Bhasi, A B
2010-11-15
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards related to chemical industries. Fault tree analysis (FTA) is an established technique in hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from the storage and filling facility of a chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and a proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
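The gate arithmetic commonly used in fuzzy FTA can be sketched with triangular fuzzy numbers; the events and probability ranges below are invented for illustration and are not the study's elicited values.

```python
# A sketch of the fuzzy gate arithmetic typically used in fuzzy FTA (values and
# events are hypothetical, not the chlorine-release study's data). Basic-event
# probabilities elicited from experts are triangular fuzzy numbers (low, mode, high).
import numpy as np

def fuzzy_and(*events):
    """AND gate: component-wise product of triangular numbers."""
    return tuple(np.prod([e[i] for e in events]) for i in range(3))

def fuzzy_or(*events):
    """OR gate: 1 - product of (1 - p), component-wise."""
    return tuple(1 - np.prod([1 - e[i] for e in events]) for i in range(3))

valve_leak   = (1e-3, 5e-3, 1e-2)
gasket_fail  = (5e-4, 2e-3, 8e-3)
scrubber_off = (1e-2, 3e-2, 6e-2)

release = fuzzy_and(fuzzy_or(valve_leak, gasket_fail), scrubber_off)
print(release)   # fuzzy probability of the top event (chlorine release)
```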
An Application of the Geo-Semantic Micro-services in Seamless Data-Model Integration
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Liu, R.; Hu, Y.; Marini, L.; Peckham, S. D.; Hsu, L.
2016-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprising fault blocks surrounding fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick-slip and slow-slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017). https://arxiv.org/abs/1702.05774. Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field. Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
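The tree-ensemble step described here can be sketched as follows; the windowing, feature choices and file names are assumptions, not the experiment's actual pipeline.

```python
# Illustrative sketch of the decision-tree-ensemble step described above (feature
# choices, window length and file names are assumptions, not the real pipeline).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def window_features(ae, win=4096):
    """Statistical features of consecutive windows of the continuous AE signal."""
    windows = ae[: len(ae) // win * win].reshape(-1, win)
    return np.column_stack([windows.var(axis=1),
                            np.abs(windows).max(axis=1),
                            ((windows[:, :-1] * windows[:, 1:]) < 0).mean(axis=1)])  # zero-crossing rate

ae = np.load("continuous_ae.npy")           # hypothetical continuous acoustic record
shear_stress = np.load("shear_stress.npy")  # one value per window, same hypothetical dataset

X = window_features(ae)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[:4000], shear_stress[:4000])
print(model.score(X[4000:], shear_stress[4000:]))   # tested on data the model has never seen
print(model.feature_importances_)                   # which AE features track fault friction
```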
NASA Astrophysics Data System (ADS)
Krechowicz, Maria
2017-10-01
Nowadays, one of the characteristic features of the construction industry is the increased complexity of a growing number of projects. Almost every construction project is unique, having its project-specific purpose, its own structural complexity, owner's expectations, ground conditions unique to a certain location, and its own dynamics. Failure costs and costs resulting from unforeseen problems in complex construction projects are very high. Project complexity drivers pose many vulnerabilities to the successful completion of a number of projects. This paper discusses the process of effective risk management in complex construction projects in which renewable energy sources were used, on the example of the realization phase of the ENERGIS teaching-laboratory building, from the point of view of DORBUD S.A., its general contractor. This paper suggests a new approach to risk management for complex construction projects in which renewable energy sources were applied. The risk management process was divided into six stages: gathering information, identification of the top critical project risks resulting from the project complexity, construction of the fault tree for each top critical risk, logical analysis of the fault tree, quantitative risk assessment applying fuzzy logic, and development of a risk response strategy. A new methodology for the qualitative and quantitative risk assessment of top critical risks in complex construction projects was developed. Risk assessment was carried out applying fuzzy fault tree analysis on the example of one top critical risk. Application of fuzzy set theory to the proposed model allowed uncertainty to be decreased and eliminated problems with obtaining crisp values of basic event probabilities, which are common in expert risk assessment aimed at giving an exact score for each unwanted event's probability.
A Flexible Hierarchical Bayesian Modeling Technique for Risk Analysis of Major Accidents.
Yu, Hongyang; Khan, Faisal; Veitch, Brian
2017-09-01
Safety analysis of rare events with potentially catastrophic consequences is challenged by data scarcity and uncertainty. Traditional causation-based approaches, such as fault trees and event trees (used to model rare events), suffer from a number of weaknesses. These include the static structure of the event causation, lack of event occurrence data, and the need for reliable prior information. In this study, a new hierarchical Bayesian modeling based technique is proposed to overcome these drawbacks. The proposed technique can be used as a flexible technique for risk analysis of major accidents. It enables both forward and backward analysis in quantitative reasoning and the treatment of interdependence among the model parameters. Source-to-source variability in data sources is also taken into account through a robust probabilistic safety analysis. The applicability of the proposed technique has been demonstrated through a case study in the marine and offshore industry.
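A minimal hierarchical sketch in this spirit, using a Gamma-Poisson hierarchy over several data sources (the counts are hypothetical, and PyMC is one possible tool, not necessarily the authors'):

```python
# Minimal hierarchical Bayesian sketch (hypothetical data; not the paper's model).
# Each data source reports a count of precursor events over an exposure time; the
# hierarchy lets sources share strength while keeping source-to-source variability.
import numpy as np
import pymc as pm

counts = np.array([0, 1, 0, 3, 1])            # rare-event counts from five sources
exposure = np.array([12., 8., 20., 15., 9.])  # observation years per source

with pm.Model() as model:
    alpha = pm.Exponential("alpha", 1.0)      # hyperpriors for the population of sources
    beta = pm.Exponential("beta", 1.0)
    rate = pm.Gamma("rate", alpha=alpha, beta=beta, shape=len(counts))  # per-source rates
    pm.Poisson("obs", mu=rate * exposure, observed=counts)
    trace = pm.sample(2000, tune=1000, random_seed=0)

# Forward analysis: posterior predictive event rates; backward analysis: the posterior
# of one source's rate given the pooled evidence from all sources.
```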
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate model to serve for long-term forecasting on timescales of years to decades for the European region.
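The weighted combination of the two spatial densities can be sketched as below; the kernel bandwidths, weighting function, activity-rate values and input files are placeholders, not the calibrated SHARE model.

```python
# Sketch of the weighted two-density idea (illustrative; bandwidths, the weighting
# function and file names are assumptions, not the model's optimized values).
import numpy as np
from scipy.stats import gaussian_kde

epicenters = np.load("catalogue_lonlat.npy").T      # 2 x N past epicentres (hypothetical file)
fault_pts = np.load("fault_moment_points.npy").T    # 2 x M points sampling fault moment release

d_seis = gaussian_kde(epicenters)                   # smoothed-seismicity density
d_fault = gaussian_kde(fault_pts)                   # fault-moment density

def forecast_density(xy, m, a=4.0, b=1.0):
    """Annual rate density at locations xy for magnitude >= m: a Gutenberg-Richter rate
    times a magnitude-dependent mix of the two spatial densities (w(m) is assumed)."""
    w = np.clip((m - 4.5) / 2.5, 0.0, 1.0)          # weight shifts toward faults for large m
    spatial = (1 - w) * d_seis(xy) + w * d_fault(xy)
    return 10 ** (a - b * m) * spatial / spatial.sum()

grid = np.mgrid[-10:35:0.5, 35:72:0.5].reshape(2, -1)   # illustrative lon/lat grid
rates_m6 = forecast_density(grid, 6.0)
```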
Survey of critical failure events in on-chip interconnect by fault tree analysis
NASA Astrophysics Data System (ADS)
Yokogawa, Shinji; Kunii, Kyousuke
2018-07-01
In this paper, a framework based on reliability physics is proposed for applying fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the possibilities of failure of basic events, critical issues of on-chip interconnect reliability are evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The moisture impact is evaluated as an external event.
Dynamic rupture simulations on a fault network in the Corinth Rift
NASA Astrophysics Data System (ADS)
Durand, V.; Hok, S.; Boiselet, A.; Bernard, P.; Scotti, O.
2017-03-01
The Corinth rift (Greece) is made of a complex network of fault segments, typically 10-20 km long, separated by stepovers. Assessing the maximum magnitude possible in this region requires accounting for multisegment rupture. Here we apply numerical models of dynamic rupture to quantify the probability of a multisegment rupture in the rift, based on the knowledge of the fault geometry and on the magnitude of the historical and palaeoearthquakes. We restrict our application to dynamic rupture on the most recent and active fault network of the western rift, located on the southern coast. We first define several models, varying the main physical parameters that control the rupture propagation. We keep the regional stress field and stress drop constant, and we test several fault geometries, several positions of the faults in their seismic cycle, several values of the critical distance (and so several fracture energies) and two different hypocentres (thus testing two directivity hypotheses). We obtain different scenarios in terms of the number of ruptured segments and the final magnitude (from M = 5.8 for a single-segment rupture to M = 6.4 for a whole-network rupture), and find that the main parameter controlling the variability of the scenarios is the fracture energy. We then use a probabilistic approach to quantify the probability of each generated scenario. To do that, we implement a logic tree associating a weight with each model input hypothesis. Combining these weights, we compute the probability of occurrence of each scenario, and show that the multisegment scenarios are very likely (52 per cent), but that the whole-network rupture scenario is unlikely (14 per cent).
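The logic-tree weighting of scenarios reduces to multiplying branch weights along each combination of hypotheses and summing the weights of the combinations that yield multisegment ruptures. The sketch below uses invented branches, weights and a placeholder rupture simulator, so it illustrates the bookkeeping rather than the study's calibrated tree.

```python
# Sketch of the logic-tree weighting described above (branch names, weights and the
# rupture "simulator" are hypothetical placeholders).
from itertools import product

branches = {
    "geometry":       {"planar": 0.5, "listric": 0.5},
    "cycle_position": {"early": 0.3, "late": 0.7},
    "Dc":             {"low": 0.4, "high": 0.6},      # critical slip distance -> fracture energy
    "hypocentre":     {"west": 0.5, "east": 0.5},
}

def simulate(combo):
    """Placeholder for one dynamic-rupture run: returns the number of ruptured segments."""
    return 3 if combo["Dc"] == "low" and combo["cycle_position"] == "late" else 1

p_multisegment = 0.0
for combo in product(*[d.items() for d in branches.values()]):
    names = dict(zip(branches, [k for k, _ in combo]))
    weight = 1.0
    for _, w in combo:
        weight *= w
    if simulate(names) > 1:
        p_multisegment += weight
print(p_multisegment)   # total probability mass of multisegment scenarios
```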
An approach to solving large reliability models
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.
1988-01-01
This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-07-12
As a typical example of large and complex mechanical systems, rotating machinery is prone to diverse sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction originates in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering content. Therefore, identifying the critical fault's characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise fault features from the multiscale signal features. The experimental results of the recognition of gear faults show the feasibility and effectiveness of the proposed method, especially for the gear's weak fault features.
NASA Astrophysics Data System (ADS)
Gulen, L.; EMME WP2 Team*
2011-12-01
The Earthquake Model of the Middle East (EMME) Project is a regional project of the GEM (Global Earthquake Model) project (http://www.emme-gem.org/). The EMME project covers Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. Both EMME and SHARE projects overlap and Turkey becomes a bridge connecting the two projects. The Middle East region is tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years causing casualties in the millions. The EMME project consists of three main modules: hazard, risk, and socio-economic modules. The EMME project uses PSHA approach for earthquake hazard and the existing source models have been revised or modified by the incorporation of newly acquired data. The most distinguishing aspect of the EMME project from the previous ones is its dynamic character. This very important characteristic is accomplished by the design of a flexible and scalable database that permits continuous update, refinement, and analysis. An up-to-date earthquake catalog of the Middle East region has been prepared and declustered by the WP1 team. EMME WP2 team has prepared a digital active fault map of the Middle East region in ArcGIS format. We have constructed a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw≥5.5. The EMME project database includes information on the geometry and rates of movement of faults in a "Fault Section Database", which contains 36 entries for each fault section. The "Fault Section" concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far 6,991 Fault Sections have been defined and 83,402 km of faults are fully parameterized in the Middle East region. A separate "Paleo-Sites Database" includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library, that includes the pdf files of relevant papers, reports and maps, is also prepared. A logic tree approach is utilized to encompass different interpretations for the areas where there is no consensus. Finally seismic source zones in the Middle East region have been delineated using all available data. *EMME Project WP2 Team: Levent Gülen, Murat Utkucu, M. Dinçer Köksal, Hilal Yalçin, Yigit Ince, Mine Demircioglu, Shota Adamia, Nino Sadradze, Aleksandre Gvencadze, Arkadi Karakhanyan, Mher Avanesyan, Tahir Mammadli, Gurban Yetirmishli, Arif Axundov, Khaled Hessami, M. Asif Khan, M. Sayab.
Earthquake Model of the Middle East (EMME) Project: Active Fault Database for the Middle East Region
NASA Astrophysics Data System (ADS)
Gülen, L.; Wp2 Team
2010-12-01
The Earthquake Model of the Middle East (EMME) Project is a regional project of the umbrella GEM (Global Earthquake Model) project (http://www.emme-gem.org/). The EMME project region includes Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. Both EMME and SHARE projects overlap and Turkey becomes a bridge connecting the two projects. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years, causing casualties in the millions. The EMME project will use the PSHA approach, and the existing source models will be revised or modified by the incorporation of newly acquired data. More importantly, the most distinguishing aspect of the EMME project from the previous ones will be its dynamic character. This very important characteristic is accomplished by the design of a flexible and scalable database that will permit continuous update, refinement, and analysis. A digital active fault map of the Middle East region is under construction in ArcGIS format. We are developing a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw≥5.5. Similar to the WGCEP-2007 and UCERF-2 projects, the EMME project database includes information on the geometry and rates of movement of faults in a “Fault Section Database”. The “Fault Section” concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far over 3,000 Fault Sections have been defined and parameterized for the Middle East region. A separate “Paleo-Sites Database” includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library that includes the PDF files of the relevant papers and reports is also being prepared. Another task of WP-2 of the EMME project is to prepare a strain and slip rate map of the Middle East region, basically by compiling already published data. The third task is to calculate b-values and Mmax and to determine the activity rates. New data and evidence will be interpreted to revise or modify the existing source models. A logic tree approach will be utilized to encompass different interpretations for areas where there is no consensus. Finally, seismic source zones in the Middle East region will be delineated using all available data. EMME Project WP2 Team: Levent Gülen, Murat Utkucu, M. Dinçer Köksal, Hilal Domaç, Yigit Ince, Mine Demircioglu, Shota Adamia, Nino Sandradze, Aleksandre Gvencadze, Arkadi Karakhanyan, Mher Avanesyan, Tahir Mammadli, Gurban Yetirmishli, Arif Axundov, Khaled Hessami, M. Asif Khan, M. Sayab.
NASA Astrophysics Data System (ADS)
Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen
2014-05-01
We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity trying to explain the small scales, our method uses a bottom-up approach, by an initial sampling of the small scales and then reducing the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical cluster (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Modeling (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. The kernels that become ill defined (less than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging of any pair will lessen the likelihood (calculated over the pdf of the kernels) but in turn will reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, where the matrix element d_ij gives the distance between points i and j, we can construct an MDL gain/loss matrix where m_ij gives the information gain/loss resulting from the merging of kernels i and j. Based on this matrix, merging events resulting in MDL gain are performed in descending order until no gainful merging is possible anymore. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system, and that the acquired insights could eventually be used for earthquake forecasting.
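A minimal sketch of the core loop described above, assuming a 2-D array of epicentres X: scikit-learn's Ward agglomerative clustering seeds a Gaussian mixture, EM is run to convergence, and kernels supported by fewer than 4 points are discarded. The MDL merging step and the treatment of location uncertainty are omitted, and all names and data are illustrative rather than from the study.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.mixture import GaussianMixture

def atomize(X, n_init_clusters, min_points=4):
    """Seed a GMM from a Ward AHC cut and keep only well-defined kernels."""
    # Cut the agglomerative (Ward) tree at the requested number of clusters.
    labels = AgglomerativeClustering(n_clusters=n_init_clusters,
                                     linkage="ward").fit_predict(X)
    means = np.array([X[labels == k].mean(axis=0)
                      for k in range(n_init_clusters)])
    # Let EM converge from these seeds.
    gmm = GaussianMixture(n_components=n_init_clusters,
                          means_init=means, covariance_type="full").fit(X)
    hard = gmm.predict(X)
    # Discard kernels that end up ill defined (fewer than min_points events).
    keep = [k for k in range(n_init_clusters) if np.sum(hard == k) >= min_points]
    return gmm, keep

# Example: estimate how many kernels the data can hold by atomizing at
# increasingly populated levels of the AHC tree.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.1, (200, 2)),
               rng.normal([1, 1], 0.1, (200, 2))])
best_k = max(range(2, 15), key=lambda k: len(atomize(X, k)[1]))
print("protocluster count:", len(atomize(X, best_k)[1]))
```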
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas
2016-02-01
In the current energy scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmentally friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gas emissions. However, the main limitations to their diffusion into the mass market are high maintenance and production costs and short lifetime. To improve these aspects, current research activity focuses on the development of robust and generalizable diagnostic techniques, aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with a consequent increase in lifetime and reduction in maintenance costs. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and validated by means of experimental induction of faulty states under controlled conditions.
Earthquake Rupture Forecast of M>= 6 for the Corinth Rift System
NASA Astrophysics Data System (ADS)
Scotti, O.; Boiselet, A.; Lyon-Caen, H.; Albini, P.; Bernard, P.; Briole, P.; Ford, M.; Lambotte, S.; Matrullo, E.; Rovida, A.; Satriano, C.
2014-12-01
Fourteen years of multidisciplinary observations and data collection in the Western Corinth Rift (WCR) near-fault observatory have recently been synthesized (Boiselet, Ph.D. 2014) for the purpose of providing earthquake rupture forecasts (ERF) of M>=6 in the WCR. The main contribution of this work was to pave the way toward the development of a "community-based" fault model reflecting the level of knowledge gathered thus far by the WCR working group. The most relevant available data used for this exercise are: onshore/offshore fault traces, based on geological and high-resolution seismics, revealing a complex network of E-W striking, ~10 km long fault segments; microseismicity recorded by a dense network ( > 60000 events; 1.5
Determining preventability of pediatric readmissions using fault tree analysis.
Jonas, Jennifer A; Devon, Erin Pete; Ronan, Jeanine C; Ng, Sonia C; Owusu-McKenzie, Jacqueline Y; Strausbaugh, Janet T; Fieldston, Evan S; Hart, Jessica K
2016-05-01
Previous studies attempting to distinguish preventable from nonpreventable readmissions reported challenges in completing reviews efficiently and consistently. The objectives were to (1) examine the efficiency and reliability of a Web-based fault tree tool designed to guide physicians through chart reviews to a determination about preventability, and (2) investigate root causes of general pediatrics readmissions and identify the percent that are preventable. General pediatricians from The Children's Hospital of Philadelphia used a Web-based fault tree tool to classify root causes of all general pediatrics 15-day readmissions in 2014. The tool guided reviewers through a logical progression of questions, which resulted in 1 of 18 root causes of readmission, 8 of which were considered potentially preventable. Twenty percent of cases were cross-checked to measure inter-rater reliability. Of the 7252 discharges, 248 were readmitted, for an all-cause general pediatrics 15-day readmission rate of 3.4%. Of those readmissions, 15 (6.0%) were deemed potentially preventable, corresponding to 0.2% of total discharges. The most common cause of potentially preventable readmissions was premature discharge. For the 50 cross-checked cases, both reviews resulted in the same root cause for 44 (86%) of files (κ = 0.79; 95% confidence interval: 0.60-0.98). Completing 1 review using the tool took approximately 20 minutes. The Web-based fault tree tool helped physicians to identify root causes of hospital readmissions and classify them as either preventable or not preventable in an efficient and consistent way. It also confirmed that only a small percentage of general pediatrics 15-day readmissions are potentially preventable. Journal of Hospital Medicine 2016;11:329-335. © 2016 Society of Hospital Medicine.
Risk Analysis of Return Support Material on Gas Compressor Platform Project
NASA Astrophysics Data System (ADS)
Silvianita; Aulia, B. U.; Khakim, M. L. N.; Rosyid, Daniel M.
2017-07-01
Fixed platform projects are carried out not by a single contractor but by two or more contractors. Cooperation in the construction of fixed platforms often does not go according to plan, owing to several factors, and good synergy among the contractors is needed to avoid miscommunication that may cause problems on the project. One example concerns the support material (sea fastening, skid shoes, and shipping supports) used in transporting a jacket structure to its operating location, which often is not returned to the contractor. A systematic method is needed to address this support material problem. This paper analyses the causes and effects of unreturned support material on the Gas Compressor Platform project, using Fault Tree Analysis (FTA) and Event Tree Analysis (ETA). From the fault tree analysis, the probability of the top event is 0.7783. From the event tree analysis diagram, the contractors lose Rp 350,000,000 to Rp 10,000,000,000.
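The abstract reports only the resulting top-event probability (0.7783); the underlying arithmetic is standard fault-tree gate algebra. The sketch below shows how such a top-event probability is assembled from independent basic events with AND/OR gates; the event names and probabilities are invented for illustration and are not the study's data.

```python
# Minimal fault-tree gate algebra for independent basic events.
def p_and(*ps):          # all inputs must occur
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):           # at least one input occurs: 1 - prod(1 - p_i)
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical basic events behind "support material not returned".
p_no_return_clause  = 0.40   # contract lacks a return clause
p_poor_coordination = 0.55   # miscommunication between contractors
p_material_damaged  = 0.20   # sea fastening / skid shoes damaged in transit

top = p_or(p_and(p_no_return_clause, p_poor_coordination), p_material_damaged)
print(f"top-event probability: {top:.4f}")
```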
NASA Astrophysics Data System (ADS)
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
The fault diagnosis of planetary gearboxes is crucial to reduce the maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is firstly adopted to remove the fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, Laplacian score (LS) approach is employed to refine the fault features. In the end, the obtained features are fed into the binary tree support vector machine (BT-SVM) to accomplish the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
NASA Astrophysics Data System (ADS)
Pham, Binh Thai; Prakash, Indra; Tien Bui, Dieu
2018-02-01
A hybrid machine learning approach of Random Subspace (RSS) and Classification And Regression Trees (CART) is proposed to develop a model named RSSCART for spatial prediction of landslides. This model is a combination of the RSS method, which is known as an efficient ensemble technique, and CART, which is a state-of-the-art classifier. The Luc Yen district of Yen Bai province, a prominent landslide prone area of Viet Nam, was selected for the model development. Performance of the RSSCART model was evaluated through the Receiver Operating Characteristic (ROC) curve, statistical analysis methods, and the Chi Square test. Results were compared with other benchmark landslide models, namely Support Vector Machines (SVM), single CART, Naïve Bayes Trees (NBT), and Logistic Regression (LR). In the development of the model, ten important landslide-affecting factors related to geomorphology, geology, and geo-environment were considered, namely slope angles, elevation, slope aspect, curvature, lithology, distance to faults, distance to rivers, distance to roads, and rainfall. Performance of the RSSCART model (AUC = 0.841) is the best compared with the other popular landslide models, namely SVM (0.835), single CART (0.822), NBT (0.821), and LR (0.723). These results indicate that the RSSCART model is a promising method for spatial landslide prediction.
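A sketch of the RSS + CART combination using scikit-learn, where the Random Subspace ensemble is approximated with a BaggingClassifier that resamples features but not instances; the synthetic data stand in for the conditioning factors and are not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the landslide conditioning factors (10 features).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 7] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Random Subspace: each CART tree sees a random subset of features,
# with no bootstrap resampling of the instances themselves.
rsscart = BaggingClassifier(DecisionTreeClassifier(),
                            n_estimators=100,
                            max_features=0.6,
                            bootstrap=False,
                            bootstrap_features=True,
                            random_state=0).fit(X_tr, y_tr)

auc = roc_auc_score(y_te, rsscart.predict_proba(X_te)[:, 1])
print(f"ROC AUC: {auc:.3f}")
```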
2013-05-01
specifics of the correlation will be explored, followed by discussion of new paradigms, the ordered event list (OEL) and the decision tree, that result from... [table-of-contents and figure-caption fragments: 4.2.1 Brief Overview of the Decision Tree Paradigm; 4.2.2 OEL Explained; Figure 3, a depiction of a notional fault/activation tree]
Hybrid automated reliability predictor integrated work station (HiREL)
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1991-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.
[Impact of water pollution risk in water transfer project based on fault tree analysis].
Liu, Jian-Chang; Zhang, Wei; Wang, Li-Min; Li, Dai-Qing; Fan, Xiu-Ying; Deng, Hong-Bing
2009-09-15
Methods to assess water pollution risk for medium water transfer are gradually being explored. The event-nature-proportion method was developed to evaluate the probability of a single event. Fault tree analysis, built on the single-event calculations, was employed to evaluate the overall water pollution risk for the channel water body. The result indicates that the risk of pollutants from towns and villages along the line of the water transfer project reaching the channel water body is at a high level, with a probability of 0.373, which would increase pollution of the channel water body by 64.53 mg/L COD, 4.57 mg/L NH4+-N, and 0.066 mg/L volatile hydroxybenzene, respectively. The measurement of fault probability on the basis of the proportion method proves to be useful in assessing water pollution risk under much uncertainty.
Viewpoint on ISA TR84.0.02--simplified methods and fault tree analysis.
Summers, A E
2000-01-01
ANSI/ISA-S84.01-1996 and IEC 61508 require the establishment of a safety integrity level for any safety instrumented system or safety related system used to mitigate risk. Each stage of design, operation, maintenance, and testing is judged against this safety integrity level. Quantitative techniques can be used to verify whether the safety integrity level is met. ISA-dTR84.0.02 is a technical report under development by ISA, which discusses how to apply quantitative analysis techniques to safety instrumented systems. This paper discusses two of those techniques: (1) Simplified equations and (2) Fault tree analysis.
TH-EF-BRC-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomadsen, B.
2016-06-15
This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis (process mapping, failure modes and effects analysis, and fault tree analysis) will each be introduced with a 5-minute refresher presentation, and each presentation will be followed by a 30-minute small-group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis, and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
Estimating earthquake-induced failure probability and downtime of critical facilities.
Porter, Keith; Ramer, Kyle
2012-01-01
Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
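A toy Monte Carlo illustration of the two calculations mentioned above, assuming independent, hypothetical failure probabilities and repair-time parameters for a primary and backup facility; none of the numbers come from the component library described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                       # Monte Carlo samples (one earthquake scenario)

# Hypothetical per-event failure probabilities of the two facilities
# (in a real study these would come from component fragility data).
p_fail_primary, p_fail_backup = 0.08, 0.05

fail_primary = rng.random(n) < p_fail_primary
fail_backup  = rng.random(n) < p_fail_backup

# 1) Probability both facilities are rendered inoperative in the same event.
p_both = np.mean(fail_primary & fail_backup)

# 2) Probabilistic downtime of the primary facility, using a hypothetical
#    lognormal repair-time model (median 10 days, logarithmic std 0.6).
repair_days = np.where(fail_primary,
                       rng.lognormal(mean=np.log(10.0), sigma=0.6, size=n),
                       0.0)
p_down_30 = np.mean(repair_days > 30.0)

print(f"P(primary and backup both fail): {p_both:.4f}")
print(f"P(downtime exceeds 30 days)    : {p_down_30:.4f}")
```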
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Qian, Yu
2016-02-15
Haze weather has become a serious environmental pollution problem in many Chinese cities. One of the most critical factors in the formation of haze weather is the exhaust of coal combustion, so it is meaningful to work out the causation mechanism linking urban haze and coal combustion exhausts. Based on the above considerations, the fault tree analysis (FTA) approach was employed for the first time to study the causation mechanism of urban haze in Beijing by considering the risk events related to coal combustion exhausts. Using this approach, the fault tree of the urban haze causation system connected with coal combustion exhausts was first established; the risk events were then discussed and identified; next, the minimal cut sets were determined using Boolean algebra; finally, the structure, probability, and critical importance degree analyses of the risk events were completed for the qualitative and quantitative assessment. The study results proved that FTA is an effective and simple tool for the causation mechanism analysis and risk management of urban haze in China. Copyright © 2015 Elsevier B.V. All rights reserved.
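Minimal cut sets can be obtained by the same Boolean reduction the authors describe. The sketch below uses SymPy to reduce a small, hypothetical coal-combustion fault tree to disjunctive normal form, whose terms are the minimal cut sets; the event names and gate structure are illustrative only.

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, to_dnf

# Hypothetical basic events behind a "severe haze episode" top event.
coal_boiler, vehicle_exhaust, stagnant_air, inversion = symbols(
    "coal_boiler vehicle_exhaust stagnant_air inversion")

emissions  = Or(coal_boiler, vehicle_exhaust)          # OR gate
dispersion = And(stagnant_air, inversion)              # AND gate
top_event  = And(emissions, dispersion)                # haze requires both

# Each term of the simplified DNF is a minimal cut set:
# {coal_boiler, stagnant_air, inversion} and {vehicle_exhaust, stagnant_air, inversion}.
print(to_dnf(top_event, simplify=True))
```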
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that impact human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the regasification unit of LNG. In this case, the failure is caused by Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank component. The failure probability can be determined by using Fault Tree Analysis (FTA). In addition, the impact of the generated heat radiation is calculated. Fault trees for BLEVE and jet fire on the storage tank component have been determined, giving a failure probability of 5.63 × 10^-19 for BLEVE and 9.57 × 10^-3 for jet fire. The failure probability for jet fire is high enough that it needs to be reduced, by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After customization, the failure probability obtained is 4.22 × 10^-6.
NASA Astrophysics Data System (ADS)
Li, Shuanghong; Cao, Hongliang; Yang, Yupu
2018-02-01
Fault diagnosis is a key process for the reliability and safety of solid oxide fuel cell (SOFC) systems. However, it is difficult to rapidly and accurately identify faults in complicated SOFC systems, especially when simultaneous faults appear. In this research, a data-driven Multi-Label (ML) pattern identification approach is proposed to address the simultaneous fault diagnosis of SOFC systems. The framework of the simultaneous-fault diagnosis primarily includes two components: feature extraction and an ML-SVM classifier. The simultaneous-fault diagnosis approach can be trained to diagnose simultaneous SOFC faults, such as fuel leakage and air leakage in different positions in the SOFC system, using simple training data sets consisting only of single faults, without demanding simultaneous-fault data. The experimental results show that the proposed framework can diagnose simultaneous SOFC system faults with high accuracy while requiring a small amount of training data and a low computational burden. In addition, Fault Inference Tree Analysis (FITA) is employed to identify the correlations among possible faults and their corresponding symptoms at the system component level.
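A minimal rendering of the multi-label idea with scikit-learn: one binary SVM per fault label is trained on single-fault examples only, and a simultaneous fault simply activates several labels at prediction time. Feature extraction and the FITA step are omitted; the data and fault names are synthetic assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(3)

def sample(fuel_leak, air_leak, n=200):
    """Synthetic stand-in for extracted SOFC features under given fault labels."""
    base = rng.normal(size=(n, 4))
    base[:, 0] += 2.0 * fuel_leak      # each fault shifts one feature
    base[:, 1] += 2.0 * air_leak
    labels = np.tile([fuel_leak, air_leak], (n, 1))
    return base, labels

# Training set contains healthy and SINGLE-fault conditions only.
X_parts, Y_parts = zip(sample(0, 0), sample(1, 0), sample(0, 1))
X_train, Y_train = np.vstack(X_parts), np.vstack(Y_parts)

clf = MultiOutputClassifier(SVC(kernel="rbf", C=10.0)).fit(X_train, Y_train)

# Test on a simultaneous fault (both leaks), never seen during training.
X_test, _ = sample(1, 1)
print("predicted labels (fuel, air):", clf.predict(X_test[:3]))
```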
Support vector machines-based fault diagnosis for turbo-pump rotor
NASA Astrophysics Data System (ADS)
Yuan, Sheng-Fa; Chu, Fu-Lei
2006-05-01
Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and have poor generalisation when fault samples are few. Support vector machines (SVM) is a new general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is originally designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification algorithm named 'one to others' is presented to solve the multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, involves little repeated training, and expedites both training and recognition. The effectiveness of the method is verified by its application to fault diagnosis for a turbo-pump rotor.
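A compact sketch of the 'one to others' binary-tree idea as described: classes are ordered by fault priority, and at each node a two-class SVM separates the highest-priority remaining class from all the others. The synthetic data, class names, and priority order are assumptions, not from the paper.

```python
import numpy as np
from sklearn.svm import SVC

class OneToOthersTreeSVM:
    """Binary-tree multi-class SVM: peel classes off in priority order."""

    def __init__(self, priority):
        self.priority = list(priority)      # e.g. most critical fault first
        self.nodes = []

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        for cls in self.priority[:-1]:
            clf = SVC(kernel="rbf").fit(X, (y == cls).astype(int))
            self.nodes.append((cls, clf))
            keep = y != cls                 # remaining classes go one level down
            X, y = X[keep], y[keep]
        return self

    def predict(self, X):
        X = np.asarray(X)
        out = np.full(len(X), self.priority[-1], dtype=object)
        undecided = np.ones(len(X), dtype=bool)
        for cls, clf in self.nodes:
            hit = undecided & (clf.predict(X) == 1)
            out[hit] = cls
            undecided &= ~hit
        return out

# Toy usage with three synthetic fault classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in ([0, 0], [2, 0], [0, 2])])
y = np.repeat(["rub", "misalign", "unbalance"], 50)
model = OneToOthersTreeSVM(priority=["rub", "misalign", "unbalance"]).fit(X, y)
print(model.predict(X[[0, 60, 120]]))
```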
Preliminary Isostatic Gravity Map of Joshua Tree National Park and Vicinity, Southern California
Langenheim, V.E.; Biehler, Shawn; McPhee, D.K.; McCabe, C.A.; Watt, J.T.; Anderson, M.L.; Chuchel, B.A.; Stoffer, P.
2007-01-01
This isostatic residual gravity map is part of an effort to map the three-dimensional distribution of rocks in Joshua Tree National Park, southern California. This map will serve as a basis for modeling the shape of basins beneath the Park and in adjacent valleys and also for determining the location and geometry of faults within the area. Local spatial variations in the Earth's gravity field, after accounting for variations caused by elevation, terrain, and deep crustal structure, reflect the distribution of densities in the mid- to upper crust. Densities often can be related to rock type, and abrupt spatial changes in density commonly mark lithologic or structural boundaries. High-density basement rocks exposed within the Eastern Transverse Ranges include crystalline rocks that range in age from Proterozoic to Mesozoic and these rocks are generally present in the mountainous areas of the quadrangle. Alluvial sediments, usually located in the valleys, and Tertiary sedimentary rocks are characterized by low densities. However, with increasing depth of burial and age, the densities of these rocks may become indistinguishable from those of basement rocks. Tertiary volcanic rocks are characterized by a wide range of densities, but, on average, are less dense than the pre-Cenozoic basement rocks. Basalt within the Park is as dense as crystalline basement, but is generally thin (less than 100 m thick; e.g., Powell, 2003). Isostatic residual gravity values within the map area range from about 44 mGal over Coachella Valley to about 8 mGal between the Mecca Hills and the Orocopia Mountains. Steep linear gravity gradients are coincident with the traces of several Quaternary strike-slip faults, most notably along the San Andreas Fault bounding the east side of Coachella Valley and east-west-striking, left-lateral faults, such as the Pinto Mountain, Blue Cut, and Chiriaco Faults (Fig. 1). Gravity gradients also define concealed basin-bounding faults, such as those beneath the Chuckwalla Valley (e.g. Rotstein and others, 1976). These gradients result from juxtaposing dense basement rocks against thick Cenozoic sedimentary rocks.
NASA Astrophysics Data System (ADS)
Hu, Bingbing; Li, Bing
2016-02-01
It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method originally based on discrete wavelet transform (DWT) has disadvantages such as shift variance and the aliasing effects in engineering application. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis by the merits of DTCWT (nearly shift invariant and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching the single wavelet basis function to the signal being analyzed, which may speed up the signal processing and be employed in on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer ring and shaft coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post spectral analysis, and EMD-based spectral analysis methods.
NASA Technical Reports Server (NTRS)
Mengshoel, Ole Jakob; Poll, Scott; Kurtoglu, Tolga
2009-01-01
This CD contains files that support the talk (see CASI ID 20100021404). There are 24 models that relate to the ADAPT system and 1 Excel worksheet. In the paper an investigation into the use of Bayesian networks to construct large-scale diagnostic systems is described. The high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems are described in the talk. The data in the CD are the models of the 24 different power systems.
Adaptive Sampling using Support Vector Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. Mandelli; C. Smith
2012-11-01
Reliability/safety analysis of stochastic dynamic systems (e.g., nuclear power plants, airplanes, chemical plants) is currently performed through a combination of Event-Trees and Fault-Trees. However, these conventional methods suffer from certain drawbacks: timing of events is not explicitly modeled; ordering of events is preset by the analyst; and the modeling of complex accident scenarios is driven by expert judgment. For these reasons, there is currently an increasing interest in the development of dynamic PRA methodologies, since they can be used to address the deficiencies of the conventional methods listed above.
Locating hardware faults in a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-04-13
Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
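A simplified sketch of the partitioning idea in the claim: a complete binary tree of compute nodes is split into non-overlapping test levels of two adjacent tiers each, and every node in a level's top tier roots a test cell covering its descendants within that level. The link-test logic itself is not reproduced, and the node numbering scheme is a hypothetical convenience.

```python
# Nodes of a complete binary tree are numbered 1..n (node k's children are 2k, 2k+1).
import math

def tiers(n):
    """Group node ids by tree tier (depth)."""
    depth = int(math.log2(n)) + 1
    return [[k for k in range(1, n + 1) if int(math.log2(k)) == d]
            for d in range(depth)]

def test_levels(n, tiers_per_level=2):
    """Non-overlapping test levels: consecutive groups of adjacent tiers."""
    t = tiers(n)
    return [t[i:i + tiers_per_level] for i in range(0, len(t), tiers_per_level)]

def test_cells(level):
    """Each node of the level's top tier roots one cell (its descendants in the level)."""
    members = {k for tier in level for k in tier}
    cells = []
    for root in level[0]:
        cell, frontier = [root], [root]
        while frontier:
            nxt = [c for k in frontier for c in (2 * k, 2 * k + 1) if c in members]
            cell.extend(nxt)
            frontier = nxt
        cells.append(cell)
    return cells

n = 15                                   # 4-tier complete binary tree
for i, level in enumerate(test_levels(n)):
    print(f"level {i}: cells = {test_cells(level)}")
# An uplink test, then a separate downlink test, would be run on each cell.
```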
Expert systems for fault diagnosis in nuclear reactor control
NASA Astrophysics Data System (ADS)
Jalel, N. A.; Nicholson, H.
1990-11-01
An expert system for accident analysis and fault diagnosis for the Loss Of Fluid Test (LOFT) reactor, a small-scale pressurized water reactor, was developed for a personal computer. The knowledge of the system is represented using a production rule approach with a backward chaining inference engine. The database of the system includes simulated dependent state variables of the LOFT reactor model. Another system is designed to assist the operator in choosing the appropriate cooling mode and to diagnose faults in the selected cooling system. Its knowledge base is built on the response tree, which links a list of very specific accident sequences to a set of generic emergency procedures that help the operator monitor system status, differentiate between accident sequences, and select the correct procedures. Both systems are written in the TURBO PROLOG language and can be run on an IBM PC compatible with 640k RAM, a 40 Mbyte hard disk, and color graphics.
Rizal, Datu; Tani, Shinichi; Nishiyama, Kimitoshi; Suzuki, Kazuhiko
2006-10-11
In this paper, a novel methodology for batch plant safety and reliability analysis is proposed using a dynamic simulator. A batch process involving several safety objects (e.g. sensors, controllers, valves, etc.) is activated during the operational stage. The performance of the safety objects is evaluated by the dynamic simulation, and a fault propagation model is generated. Using the fault propagation model, an improved fault tree analysis (FTA) method using switching signal mode (SSM) is developed for estimating the probability of failures. The time-dependent failures can be treated as unavailability of safety objects that can cause accidents in a plant. Finally, the rank of each safety object is formulated as a performance index (PI) that can be estimated using importance measures. The PI gives the prioritization of safety objects that should be investigated in the plant's safety improvement program. The output of this method can be used to set optimal policy for safety object improvement and maintenance. The dynamic simulator was constructed using Visual Modeler (VM, the plant simulator developed by Omega Simulation Corp., Japan). A case study focuses on a loss of containment (LOC) incident in a polyvinyl chloride (PVC) batch process that consumes the hazardous material vinyl chloride monomer (VCM).
Cognitive Support During High-Consequence Episodes of Care in Cardiovascular Surgery.
Conboy, Heather M; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Christov, Stefan C; Goldman, Julian M; Yule, Steven J; Zenati, Marco A
2017-03-01
Despite significant efforts to reduce preventable adverse events in medical processes, such events continue to occur at unacceptable rates. This paper describes a computer science approach that uses formal process modeling to provide situationally aware monitoring and management support to medical professionals performing complex processes. These process models represent both normative and non-normative situations, and are validated by rigorous automated techniques such as model checking and fault tree analysis, in addition to careful review by experts. Context-aware Smart Checklists are then generated from the models, providing cognitive support during high-consequence surgical episodes. The approach is illustrated with a case study in cardiovascular surgery.
Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Ren, C. X.; Bolton, D. C.; Marone, C.; Johnson, P. A.
2017-12-01
Fault friction controls nearly all aspects of fault rupture, yet it is only possible to measure in the laboratory. Here we describe laboratory experiments where acoustic emissions are recorded from the fault. We find that by applying a machine learning approach known as "extreme gradient boosting trees" to the continuous acoustical signal, the fault friction can be directly inferred, showing that instantaneous characteristics of the acoustic signal are a fingerprint of the frictional state. This machine learning-based inference leads to a simple law that links the acoustic signal to the friction state, and holds for every stress cycle the laboratory fault goes through. The approach does not use any other measured parameter than instantaneous statistics of the acoustic signal. This finding may have importance for inferring frictional characteristics from seismic waves in Earth where fault friction cannot be measured.
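A sketch of the regression setup described above, assuming windowed statistics of the continuous acoustic signal are the features and the frictional state (shear stress) is the target. Scikit-learn's GradientBoostingRegressor stands in for the extreme gradient boosting trees used in the study, and the synthetic signal is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic "continuous acoustic emission" whose variance tracks a slow
# stick-slip-like friction cycle (illustrative physics only).
t = np.arange(200_000)
friction = 0.5 + 0.2 * np.sin(2 * np.pi * t / 40_000)
signal = rng.normal(scale=friction + 0.05)

def window_features(x, width=1_000):
    """Instantaneous statistics of each signal window: variance, mean |x|, max, zero-crossing rate."""
    w = x[: len(x) // width * width].reshape(-1, width)
    return np.column_stack([w.var(axis=1), np.abs(w).mean(axis=1),
                            w.max(axis=1), ((w[:, 1:] * w[:, :-1]) < 0).mean(axis=1)])

X = window_features(signal)
y = friction[: len(friction) // 1_000 * 1_000].reshape(-1, 1_000).mean(axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X_tr, y_tr)
print(f"R^2 on held-out windows: {model.score(X_te, y_te):.3f}")
```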
The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications
NASA Technical Reports Server (NTRS)
Chau, Savio N.; Alkalai, Leon; Tai, Ann T.
2000-01-01
The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.
Zeng, Hongcheng; Lu, Tao; Jenkins, Hillary; ...
2016-03-17
Earthquakes can produce significant tree mortality, and consequently affect regional carbon dynamics. Unfortunately, detailed studies quantifying the influence of earthquakes on forest mortality are currently rare. The committed forest biomass carbon loss associated with the 2008 Wenchuan earthquake in China is assessed by a synthetic approach in this study that integrated field investigation, remote sensing analysis, empirical models and Monte Carlo simulation. The newly developed approach significantly improved the forest disturbance evaluation by quantitatively defining the earthquake impact boundary and detailed field survey to validate the mortality models. Based on our approach, a total biomass carbon of 10.9 Tg·C was lost in the Wenchuan earthquake, which offset 0.23% of the living biomass carbon stock in Chinese forests. Tree mortality was highly clustered at the epicenter, and declined rapidly with distance away from the fault zone. It is suggested that earthquakes represent a significant driver of forest carbon dynamics, and the earthquake-induced biomass carbon loss should be included in estimating forest carbon budgets.
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattison, M.B.
The Idaho National Engineering Laboratory (INEL) has over the past three years created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) a user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented.
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
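The information-theoretic flavour of these algorithms can be illustrated with a greedy, entropy-based test selector for the simpler single-fault case; the fault-test detection matrix, priors, and test indices below are invented, and the multiple-fault and Lagrangian-relaxation machinery of the paper is not reproduced.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_test(prior, D, remaining):
    """Pick the remaining test with the largest expected entropy reduction.

    D[i, j] = 1 if test j detects (fails under) fault i; tests assumed noise-free."""
    gains = {}
    for j in remaining:
        p_fail = prior @ D[:, j]
        gain = entropy(prior)
        for outcome, p_o in ((1, p_fail), (0, 1.0 - p_fail)):
            if p_o > 0:
                post = prior * (D[:, j] == outcome)
                gain -= p_o * entropy(post / post.sum())
        gains[j] = gain
    return max(gains, key=gains.get)

# Hypothetical system: 4 candidate faults, 3 tests.
prior = np.array([0.4, 0.3, 0.2, 0.1])
D = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [0, 0, 1]])
print("first test to apply:", best_test(prior, D, remaining=[0, 1, 2]))
```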
Optical fiber-fault surveillance for passive optical networks in S-band operation window
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Chi, Sien
2005-07-01
An S-band (1470 to 1520 nm) fiber laser scheme, which uses multiple fiber Bragg grating (FBG) elements as feedback elements on each passive branch, is proposed and described for in-service fault identification in passive optical networks (PONs). By tuning a wavelength selective filter located within the laser cavity over a gain bandwidth, the fiber-fault of each branch can be monitored without affecting the in-service channels. In our experiment, an S-band four-branch monitoring tree-structured PON system is demonstrated and investigated experimentally.
Sun, Weifang; Yao, Bin; Zeng, Nianyin; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-01-01
As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction is generated in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering contents. Therefore, identifying the critical fault’s characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault’s characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal’s features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experiment results of the recognition for gear faults show the feasibility and effectiveness of the proposed method, especially in the gear’s weak fault features. PMID:28773148
Probabilistic seismic hazard analysis for a nuclear power plant site in southeast Brazil
NASA Astrophysics Data System (ADS)
de Almeida, Andréia Abreu Diniz; Assumpção, Marcelo; Bommer, Julian J.; Drouet, Stéphane; Riccomini, Claudio; Prates, Carlos L. M.
2018-05-01
A site-specific probabilistic seismic hazard analysis (PSHA) has been performed for the only nuclear power plant site in Brazil, located 130 km southwest of Rio de Janeiro at Angra dos Reis. Logic trees were developed for both the seismic source characterisation and ground-motion characterisation models, in both cases seeking to capture the appreciable ranges of epistemic uncertainty with relatively few branches. This logic-tree structure allowed the hazard calculations to be performed efficiently while obtaining results that reflect the inevitable uncertainty in long-term seismic hazard assessment in this tectonically stable region. An innovative feature of the study is an additional seismic source zone added to capture the potential contributions of characteristic earthquakes associated with geological faults in the region surrounding the coastal site.
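The role of the logic tree can be shown with a minimal weighted combination of hazard curves; the branch names, weights, and annual-exceedance values below are placeholders, not results from the Angra dos Reis study.

```python
import numpy as np

# Peak ground acceleration levels (g) at which hazard is evaluated.
pga = np.array([0.05, 0.10, 0.20, 0.40])

# Hypothetical annual frequencies of exceedance from three logic-tree branches
# (e.g. alternative source-zone and ground-motion model combinations).
branches = {
    "branch_A": (0.5, np.array([2e-3, 6e-4, 1e-4, 1e-5])),
    "branch_B": (0.3, np.array([3e-3, 8e-4, 2e-4, 2e-5])),
    "branch_C": (0.2, np.array([1e-3, 3e-4, 6e-5, 5e-6])),
}

weights = np.array([w for w, _ in branches.values()])
curves = np.vstack([c for _, c in branches.values()])

mean_hazard = weights @ curves          # weighted-mean hazard curve
for a, h in zip(pga, mean_hazard):
    print(f"PGA {a:.2f} g : mean annual exceedance rate {h:.2e}")
```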
Chen, Yikai; Wang, Kai; Xu, Chengcheng; Shi, Qin; He, Jie; Li, Peiqing; Shi, Ting
2018-05-19
To overcome the limitations of previous highway alignment safety evaluation methods, this article presents a highway alignment safety evaluation method based on fault tree analysis (FTA) and the characteristics of vehicle safety boundaries, within the framework of dynamic modeling of the driver-vehicle-road system. Approaches for categorizing the vehicle failure modes while driving on highways and the corresponding safety boundaries were comprehensively investigated based on vehicle system dynamics theory. Then, an overall crash probability model was formulated based on FTA considering the risks of 3 failure modes: losing steering capability, losing track-holding capability, and rear-end collision. The proposed method was implemented on a highway segment between Bengbu and Nanjing in China. A driver-vehicle-road multibody dynamics model was developed based on the 3D alignments of the Bengbu to Nanjing section of the Ning-Luo expressway using Carsim, and the dynamics indices, such as sideslip angle and yaw rate, were obtained. Then, the average crash probability of each road section was calculated with a fixed-length method. Finally, the average crash probability was validated against the crash frequency per kilometer to demonstrate the accuracy of the proposed method. The results of the regression analysis and correlation analysis indicated good consistency between the results of the safety evaluation and the crash data, and that the method outperformed the safety evaluation methods used in previous studies. The proposed method has the potential to be used in practical engineering applications to identify crash-prone locations and alignment deficiencies on highways in the planning and design phases, as well as those in service.
NASA Technical Reports Server (NTRS)
Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.
2006-01-01
System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.
Bayesian updating in a fault tree model for shipwreck risk assessment.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M
2017-07-15
Shipwrecks containing oil and other hazardous substances have been deteriorating on the seabeds of the world for many years and are threatening to pollute the marine environment. The status of the wrecks and the potential volume of harmful substances present in the wrecks are affected by a multitude of uncertainties. Each shipwreck poses a unique threat, the nature of which is determined by the structural status of the wreck and possible damage resulting from hazardous activities that could potentially cause a discharge. Decision support is required to ensure the efficiency of the prioritisation process and the allocation of resources required to carry out risk mitigation measures. Whilst risk assessments can provide the requisite decision support, comprehensive methods that take into account key uncertainties related to shipwrecks are limited. The aim of this paper was to develop a method for estimating the probability of discharge of hazardous substances from shipwrecks. The method is based on Bayesian updating of generic information on the hazards posed by different activities in the surroundings of the wreck, with information on site-specific and wreck-specific conditions in a fault tree model. Bayesian updating is performed using Monte Carlo simulations for estimating the probability of a discharge of hazardous substances and formal handling of intrinsic uncertainties. An example application involving two wrecks located off the Swedish coast is presented. Results show the estimated probability of opening, discharge and volume of the discharge for the two wrecks and illustrate the capability of the model to provide decision support. Together with consequence estimations of a discharge of hazardous substances, the suggested model enables comprehensive and probabilistic risk assessments of shipwrecks to be made. Copyright © 2017 Elsevier B.V. All rights reserved.
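A minimal Monte Carlo sketch of the idea: a generic Beta prior on a basic-event probability (e.g. hull opening given a hazardous activity) is updated with hypothetical wreck-specific observations, and the posterior samples are propagated through a small fault tree to a discharge probability. All priors, observation counts, and the gate structure are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50_000

# Generic prior on P(hull opening | trawling activity): Beta(2, 8).
# Hypothetical wreck-specific evidence: 1 opening in 12 comparable incidents.
alpha, beta = 2 + 1, 8 + 11
p_open_given_trawl = rng.beta(alpha, beta, size=n)

# Other basic events with epistemically uncertain probabilities (all assumed).
p_trawl      = rng.beta(5, 15, size=n)   # hazardous activity (trawling) occurs at the site
p_corrode    = rng.beta(3, 20, size=n)   # structural failure driven by corrosion
p_substances = rng.beta(20, 5, size=n)   # hazardous substances still present in the wreck

# Fault tree: discharge requires substances present AND
# (trawling-induced opening OR corrosion failure).
p_discharge = p_substances * (1 - (1 - p_trawl * p_open_given_trawl) * (1 - p_corrode))

print(f"mean P(discharge): {p_discharge.mean():.3f}")
print(f"90% credible interval: {np.percentile(p_discharge, [5, 95]).round(3)}")
```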
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stacey M. L. Hendrickson; April M. Whaley; Ronald L. Boring
The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Song-Hua; Chang, James Y. H.; Boring, Ronald L.
2010-03-01
The Office of Nuclear Regulatory Research (RES) at the US Nuclear Regulatory Commission (USNRC) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.
Failure mode effect analysis and fault tree analysis as a combined methodology in risk management
NASA Astrophysics Data System (ADS)
Wessiani, N. A.; Yoshio, F.
2018-04-01
Many studies have reported the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most studies choose only one of these two methods in their risk management methodology, whereas combining them reduces the drawbacks of each method when implemented separately. This paper aims to combine the methodologies of FMEA and FTA in assessing risk. A case study in a metal company illustrates how the combined methodology can be implemented; it is used to assess the internal risks that occur in the production process, and those internal risks are then mitigated according to their risk levels.
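As a rough illustration of how the two methods can be chained, the sketch below ranks hypothetical failure modes by FMEA risk priority number and then feeds their occurrence probabilities into a small fault tree. All failure modes, ratings, and probabilities are invented for illustration and do not come from the case study.

```python
# Illustrative sketch (hypothetical data, not the case study's): rank failure
# modes by FMEA risk priority number (RPN), then feed their occurrence
# probabilities into a small fault tree for a top-event estimate.

failure_modes = {
    # name: (severity 1-10, occurrence 1-10, detection 1-10, probability)
    "crusher jam":        (7, 5, 4, 0.02),
    "conveyor belt tear": (8, 3, 6, 0.01),
    "mixer motor burn":   (9, 2, 7, 0.005),
}

def rpn(sev, occ, det):
    return sev * occ * det

ranked = sorted(failure_modes.items(),
                key=lambda kv: rpn(*kv[1][:3]), reverse=True)
for name, (sev, occ, det, _) in ranked:
    print(f"{name:20s} RPN = {rpn(sev, occ, det)}")

# Fault tree: production stops (OR) if the crusher jams OR both the conveyor
# tears AND the mixer motor burns (independence assumed).
p = {name: vals[3] for name, vals in failure_modes.items()}
p_and = p["conveyor belt tear"] * p["mixer motor burn"]
p_top = 1 - (1 - p["crusher jam"]) * (1 - p_and)
print(f"P(top event) ~= {p_top:.4f}")
```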
Geology of Joshua Tree National Park geodatabase
Powell, Robert E.; Matti, Jonathan C.; Cossette, Pamela M.
2015-09-16
The database in this Open-File Report describes the geology of Joshua Tree National Park and was completed in support of the National Cooperative Geologic Mapping Program of the U.S. Geological Survey (USGS) and in cooperation with the National Park Service (NPS). The geologic observations and interpretations represented in the database are relevant to both the ongoing scientific interests of the USGS in southern California and the management requirements of NPS, specifically of Joshua Tree National Park (JOTR). Joshua Tree National Park is situated within the eastern part of California’s Transverse Ranges province and straddles the transition between the Mojave and Sonoran deserts. The geologically diverse terrain that underlies JOTR reveals a rich and varied geologic evolution, one that spans nearly two billion years of Earth history. The Park’s landscape is the current expression of this evolution, its varied landforms reflecting the differing origins of underlying rock types and their differing responses to subsequent geologic events. Crystalline basement in the Park consists of Proterozoic plutonic and metamorphic rocks intruded by a composite Mesozoic batholith of Triassic through Late Cretaceous plutons arrayed in northwest-trending lithodemic belts. The basement was exhumed during the Cenozoic and underwent differential deep weathering beneath a low-relief erosion surface, with the deepest weathering profiles forming on quartz-rich, biotite-bearing granitoid rocks. Disruption of the basement terrain by faults of the San Andreas system began ca. 20 Ma and the JOTR sinistral domain, preceded by basalt eruptions, began perhaps as early as ca. 7 Ma, but no later than 5 Ma. Uplift of the mountain blocks during this interval led to erosional stripping of the thick zones of weathered quartz-rich granitoid rocks to form etchplains dotted by bouldery tors—the iconic landscape of the Park. The stripped debris filled basins along the fault zones. Mountain ranges and basins in the Park exhibit an east-west physiographic grain controlled by left-lateral fault zones that form a sinistral domain within the broad zone of dextral shear along the transform boundary between the North American and Pacific plates. Geologic and geophysical evidence reveals that movement on the sinistral fault zones has produced left steps along the zones, resulting in the development of sub-basins beneath Pinto Basin and Shavers and Chuckwalla Valleys. The sinistral fault zones connect the Mojave Desert dextral faults of the Eastern California Shear Zone to the north and east with the Coachella Valley strands of the southern San Andreas Fault Zone to the west. Quaternary surficial deposits accumulated in alluvial washes and playas and lakes along the valley floors; in alluvial fans, washes, and sheet wash aprons along piedmonts flanking the mountain ranges; and in eolian dunes and sand sheets that span the transition from valley floor to piedmont slope. Sequences of Quaternary pediments are planed into piedmonts flanking valley-floor and upland basins, each pediment in turn overlain by successively younger residual and alluvial surficial deposits.
Improved FTA methodology and application to subsea pipeline reliability design.
Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan
2014-01-01
An innovative logic tree, Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different thinking approach for risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. Resulting improvements are summarized in comparison table form.
Improved FTA Methodology and Application to Subsea Pipeline Reliability Design
Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan
2014-01-01
An innovative logic tree, Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different thinking approach for risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. Resulting improvements are summarized in comparison table form. PMID:24667681
A Seismic Source Model for Central Europe and Italy
NASA Astrophysics Data System (ADS)
Nyst, M.; Williams, C.; Onur, T.
2006-12-01
We present a seismic source model for Central Europe (Belgium, Germany, Switzerland, and Austria) and Italy, as part of an overall seismic risk and loss modeling project for this region. A separate presentation at this conference discusses the probabilistic seismic hazard and risk assessment (Williams et al., 2006). Where available, we adopt regional consensus models and adjust them to fit our format; otherwise, we develop our own model. Our seismic source model covers the whole region under consideration and consists of the following components: 1. A subduction zone environment in Calabria, SE Italy, with interface events between the Eurasian and African plates and intraslab events within the subducting slab. The subduction zone interface is parameterized as a set of dipping area sources that follow the geometry of the surface of the subducting plate, whereas intraslab events are modeled as plane sources at depth; 2. The main normal faults in the upper crust along the Apennines mountain range, in Calabria and Central Italy. Dipping faults and (sub-) vertical faults are parameterized as dipping plane and line sources, respectively; 3. The Upper and Lower Rhine Graben regime that runs from northern Italy into eastern Belgium, parameterized as a combination of dipping plane and line sources, and finally 4. Background seismicity, parameterized as area sources. The fault model is based on slip rates using characteristic recurrence. The modeling of background and subduction zone seismicity is based on a compilation of several national and regional historic seismic catalogs using a Gutenberg-Richter recurrence model. Merging the catalogs involves deleting duplicate, spurious, and very old events and applying a declustering algorithm (Reasenberg, 2000). The resulting catalog contains a little over 6000 events, has an average b-value of -0.9, is complete for moment magnitudes 4.5 and larger, and is used to compute a gridded a-value model (smoothed historical seismicity) for the region. The logic tree weights various completeness intervals and minimum magnitudes. Using a weighted scheme of European and global ground motion models together with a detailed site classification map for Europe based on Eurocode 8, we generate hazard maps for recurrence periods of 200, 475, 1000 and 2500 yrs.
USGS National Seismic Hazard Maps
Frankel, A.D.; Mueller, C.S.; Barnhard, T.P.; Leyendecker, E.V.; Wesson, R.L.; Harmsen, S.C.; Klein, F.W.; Perkins, D.M.; Dickman, N.C.; Hanson, S.L.; Hopper, M.G.
2000-01-01
The U.S. Geological Survey (USGS) recently completed new probabilistic seismic hazard maps for the United States, including Alaska and Hawaii. These hazard maps form the basis of the probabilistic component of the design maps used in the 1997 edition of the NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, prepared by the Building Seismic Safety Council and published by FEMA. The hazard maps depict peak horizontal ground acceleration and spectral response at 0.2, 0.3, and 1.0 sec periods, with 10%, 5%, and 2% probabilities of exceedance in 50 years, corresponding to return times of about 500, 1000, and 2500 years, respectively. In this paper we outline the methodology used to construct the hazard maps. There are three basic components to the maps. First, we use spatially smoothed historic seismicity as one portion of the hazard calculation. In this model, we apply the general observation that moderate and large earthquakes tend to occur near areas of previous small or moderate events, with some notable exceptions. Second, we consider large background source zones based on broad geologic criteria to quantify hazard in areas with little or no historic seismicity, but with the potential for generating large events. Third, we include the hazard from specific fault sources. We use about 450 faults in the western United States (WUS) and derive recurrence times from either geologic slip rates or the dating of pre-historic earthquakes from trenching of faults or other paleoseismic methods. Recurrence estimates for large earthquakes in New Madrid and Charleston, South Carolina, were taken from recent paleoliquefaction studies. We used logic trees to incorporate different seismicity models, fault recurrence models, Cascadia great earthquake scenarios, and ground-motion attenuation relations. We present disaggregation plots showing the contribution to hazard at four cities from potential earthquakes with various magnitudes and distances.
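To make the recurrence arithmetic concrete, the sketch below uses made-up parameter values (not the USGS inputs): a Gutenberg-Richter annual rate, the Poisson probability of exceedance over a 50-year window, and a weighted mean over hypothetical logic-tree branches.

```python
# Minimal sketch with made-up parameter values: Gutenberg-Richter annual
# rates, Poisson probability of exceedance in a 50-year window, and a
# logic-tree weighted mean over alternative b-values.
import math

def gr_annual_rate(m, a, b):
    """Annual rate of earthquakes with magnitude >= m (Gutenberg-Richter)."""
    return 10 ** (a - b * m)

def poisson_poe(rate, years):
    """Probability of at least one event in `years` for a Poisson process."""
    return 1 - math.exp(-rate * years)

branches = [  # (weight, a-value, b-value) -- hypothetical logic-tree branches
    (0.5, 3.2, 0.9),
    (0.3, 3.2, 1.0),
    (0.2, 3.2, 0.8),
]
m, years = 6.5, 50
poe = sum(w * poisson_poe(gr_annual_rate(m, a, b), years) for w, a, b in branches)
print(f"Weighted P(M>={m} in {years} yr) = {poe:.3f}")
```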
Uniform California earthquake rupture forecast, version 2 (UCERF 2)
Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.
2009-01-01
The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
NASA Technical Reports Server (NTRS)
Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.
1995-01-01
The application of the probabilistic risk assessment methodology to a Space Shuttle environment, particularly to the potential of losing the Shuttle during nominal operation, is addressed. The different related concerns are identified and combined to determine overall program risks. A fault tree model is used to allocate system probabilities to the subsystem level. The loss of the vehicle due to failure to contain energetic gas and debris or to maintain proper propulsion and configuration is analyzed, along with losses due to Orbiter failure, external tank failure, and landing failure or error.
NASA Astrophysics Data System (ADS)
Mayo, Michael; Pfeifer, Peter; Gheorghiu, Stefan
2008-03-01
The acinar airways lie at the periphery of the human lung and are responsible for the transfer of oxygen from air to the blood during respiration. This transfer occurs by the diffusion-reaction of oxygen over the irregular surface of the alveolar membranes lining the acinar airways. We present an exactly solvable diffusion-reaction model on a hierarchically branched tree, allowing a quantitative prediction of the oxygen current over the entire system of acinar airways responsible for the gas exchange. We discuss the effect of diffusional screening, which is strongly coupled to oxygen transport in the human lung. We show that the oxygen current is insensitive to a loss of permeability of the alveolar membranes over a wide range of permeabilities, similar to a ``constant-current source'' in an electric network. Such fault tolerance has been observed in other treatments of the gas exchange in the lung and is obtained here as a fully analytical result.
Time-dependent seismic hazard analysis for the Greater Tehran and surrounding areas
NASA Astrophysics Data System (ADS)
Jalalalhosseini, Seyed Mostafa; Zafarani, Hamid; Zare, Mehdi
2018-01-01
This study presents a time-dependent approach to seismic hazard in Tehran and surrounding areas. Hazard is evaluated by combining background seismic activity with larger earthquakes that may emanate from fault segments. Using available historical and paleoseismological data or empirical relations, the recurrence times and maximum magnitudes of characteristic earthquakes on the major faults have been explored. The Brownian passage time (BPT) distribution has been used to calculate an equivalent fictitious seismicity rate for the major faults in the region. To include ground motion uncertainty, a logic tree and five ground motion prediction equations have been selected based on their applicability in the region. Finally, hazard maps are presented.
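A minimal sketch of the BPT calculation, under assumed parameters rather than the study's values, is shown below: the conditional probability of a characteristic earthquake in the next time window given the elapsed time since the last event.

```python
# Sketch (illustrative parameters, not the study's values): conditional
# probability of a characteristic earthquake in the next dt years, given the
# elapsed time since the last event, under a Brownian passage time (inverse
# Gaussian) renewal model with mean recurrence mu and aleatory variability alpha.
import math

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    """CDF of the BPT / inverse Gaussian distribution (mean mu, CoV alpha)."""
    u1 = (math.sqrt(t / mu) - math.sqrt(mu / t)) / alpha
    u2 = (math.sqrt(t / mu) + math.sqrt(mu / t)) / alpha
    return std_normal_cdf(u1) + math.exp(2.0 / alpha ** 2) * std_normal_cdf(-u2)

def conditional_probability(elapsed, dt, mu, alpha):
    """P(event in (elapsed, elapsed+dt] | no event up to elapsed)."""
    f0 = bpt_cdf(elapsed, mu, alpha)
    f1 = bpt_cdf(elapsed + dt, mu, alpha)
    return (f1 - f0) / (1.0 - f0)

# Example: mean recurrence 500 yr, alpha 0.5, 300 yr elapsed, 50-yr window.
print(conditional_probability(elapsed=300, dt=50, mu=500, alpha=0.5))
```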
Fault detection and fault tolerance in robotics
NASA Technical Reports Server (NTRS)
Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.
1992-01-01
Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
NASA Astrophysics Data System (ADS)
Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang
2017-10-01
Based on long-term vibration monitoring of the No. 2 oil-immersed flat wave reactor in the ±500 kV converter station in East Mongolia, vibration signals in the normal state and in the core-loose fault state were recorded. Through time-frequency analysis of the signals, the vibration characteristics of the core-loose fault were obtained, and a fault diagnosis method based on the dual-tree complex wavelet transform (DT-CWT) and the support vector machine (SVM) was proposed. The vibration signals were analyzed by DT-CWT, and the energy entropies of the vibration signals were taken as the feature vector; the support vector machine was used to train and test on the feature vectors, realizing accurate identification of the core-loose fault of the flat wave reactor. In the identification of many groups of normal and core-loose fault state vibration signals, the diagnostic accuracy reached 97.36%. The effectiveness and accuracy of the method for fault diagnosis of the flat wave reactor core are thus verified.
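The sketch below illustrates only the general feature-plus-classifier pipeline; it substitutes a PyWavelets wavelet-packet decomposition for the paper's dual-tree complex wavelet transform and uses synthetic signals, so it is an analogue under stated assumptions rather than the authors' method.

```python
# Illustrative analogue only: the paper uses the dual-tree complex wavelet
# transform (DT-CWT); here a PyWavelets wavelet-packet decomposition is
# substituted as a stand-in, and the vibration signals are synthetic
# placeholders. Requires numpy, PyWavelets, scikit-learn.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def energy_entropy_features(signal, wavelet="db4", level=4):
    """Normalised band energies plus their Shannon entropy as a feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="freq")])
    p = energies / energies.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return np.append(p, entropy)

rng = np.random.default_rng(0)
fs, n = 5000, 2048
t = np.arange(n) / fs
signals, labels = [], []
for _ in range(40):
    # Synthetic stand-ins: "normal" ~ broadband noise, "core loose" ~ noise + 100 Hz tone.
    normal = rng.normal(size=n)
    faulty = rng.normal(size=n) + 2.0 * np.sin(2 * np.pi * 100 * t)
    signals += [normal, faulty]
    labels += [0, 1]

X = np.array([energy_entropy_features(s) for s in signals])
y = np.array(labels)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```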
Fault diagnosis of helical gearbox using acoustic signal and wavelets
NASA Astrophysics Data System (ADS)
Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.
2017-05-01
Efficient transmission of power in machines is needed, and gears are an appropriate choice. Faults in gears result in loss of energy and money. Monitoring and fault diagnosis are done by analysing the acoustic and vibrational signals, which are generally considered unwanted by-products. This study proposes the use of machine learning algorithms for condition monitoring of a helical gearbox using the sound signals produced by the gearbox. Artificial faults were created and the resulting signals were captured by a microphone. An extensive study using different wavelet transformations for feature extraction from the acoustic signals was carried out, followed by wavelet selection and feature selection using the J48 decision tree; feature classification was performed using the K-star algorithm. A classification accuracy of 100% was obtained in the study.
Inferring patterns in mitochondrial DNA sequences through hypercube independent spanning trees.
Silva, Eduardo Sant Ana da; Pedrini, Helio
2016-03-01
Given a graph G, a set of spanning trees rooted at a vertex r of G is said to be vertex/edge independent if, for each vertex v of G, v≠r, the paths from r to v in any pair of trees are vertex/edge disjoint. Independent spanning trees (ISTs) provide a number of advantages in data broadcasting due to their fault-tolerant properties. For this reason, some studies have addressed the issue by providing mechanisms for constructing independent spanning trees efficiently. In this work, we investigate how to construct independent spanning trees on hypercubes, generated from spanning binomial trees, and how to use them to predict mitochondrial DNA sequence parts through paths on the hypercube. The prediction works both for inferring mitochondrial DNA sequences composed of six bases and for inferring anomalies that probably do not belong to the mitochondrial DNA standard. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-01-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probabilities of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department are 73, 64, and 95 percent, respectively, with the conveyor belt subsystem found to be the most failure-prone. Finally, maintenance is proposed as a method to control and prevent the occurrence of failures. PMID:26779433
Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-04-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probabilities of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department are 73, 64, and 95 percent, respectively, with the conveyor belt subsystem found to be the most failure-prone. Finally, maintenance is proposed as a method to control and prevent the occurrence of failures.
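As a sketch of the underlying arithmetic, constant-failure-rate subsystems give interval failure probabilities via 1 - exp(-lambda*t), which a fault tree then combines. The rates below are assumed values chosen only to land near the reported percentages; they are not the study's data.

```python
# Minimal sketch with hypothetical failure rates (not the study's data):
# probability of at least one failure in a 200 h interval for subsystems with
# constant failure rates, combined in series (OR gate) as in a fault tree.
import math

def p_fail(rate_per_hour, hours):
    """P(failure within `hours`) for an exponential time-to-failure model."""
    return 1 - math.exp(-rate_per_hour * hours)

hours = 200
rates = {"crusher": 6.5e-3, "conveyor": 5.1e-3}   # assumed values
p = {name: p_fail(r, hours) for name, r in rates.items()}
# Series system / OR gate: the department fails if any subsystem fails.
p_department = 1 - math.prod(1 - v for v in p.values())
print(p, f"department: {p_department:.2f}")
```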
Risk assessment techniques with applicability in marine engineering
NASA Astrophysics Data System (ADS)
Rudenko, E.; Panaitescu, F. V.; Panaitescu, M.
2015-11-01
Nowadays risk management is a carefully planned process, and its task is organically woven into the general problem of increasing business efficiency. A passive attitude to risk and mere awareness of its existence are being replaced by active management techniques. Risk assessment is one of the most important stages of risk management, since risk must first be analyzed and evaluated before it can be managed. There are many definitions of this notion, but in general risk assessment refers to the systematic process of identifying the factors and types of risk and assessing them quantitatively; risk analysis methodology thus combines mutually complementary quantitative and qualitative approaches. Purpose of the work: In this paper we consider Fault Tree Analysis (FTA) as a risk assessment technique. The objectives are to understand the purpose of FTA, to understand and apply the rules of Boolean algebra, to analyse a simple system using FTA, and to weigh the advantages and disadvantages of FTA. Research and methodology: The main purpose is to help identify potential causes of system failures before the failures actually occur, so that the probability of the Top event can be evaluated. The steps of the analysis are: examination of the system from the top down, the use of symbols to represent events, the use of mathematical tools for critical areas, and the use of fault tree logic diagrams to identify the causes of the Top event. Results: The study yields the critical areas, the fault tree logic diagrams, and the probability of the Top event. These results can be used in risk assessment analyses.
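A minimal sketch of the quantitative step described above, assuming independent basic events and a hypothetical set of minimal cut sets: the Top event probability is obtained by inclusion-exclusion over the cut sets.

```python
# Minimal FTA sketch under stated assumptions: independent basic events and a
# small tree expressed as minimal cut sets; the top-event probability is
# computed exactly by inclusion-exclusion over the cut sets.
from itertools import combinations

basic = {"pump_fails": 0.01, "valve_stuck": 0.02, "sensor_drift": 0.05,
         "operator_error": 0.03}                       # hypothetical probabilities

cut_sets = [{"pump_fails"}, {"valve_stuck", "operator_error"},
            {"sensor_drift", "operator_error"}]         # hypothetical minimal cut sets

def p_union_of_cutsets(cut_sets, p):
    """Inclusion-exclusion over cut sets (events inside a cut set are ANDed)."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            events = set().union(*combo)
            term = 1.0
            for e in events:
                term *= p[e]
            total += (-1) ** (k + 1) * term
    return total

print(f"P(top event) = {p_union_of_cutsets(cut_sets, basic):.5f}")
```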
Using certification trails to achieve software fault tolerance
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, they are accepted as correct, otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized and it was illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach was compared to other approaches to fault tolerance. Because of space limitations we have omitted examples of our technique applied to the Huffman tree and convex hull problems. These can be found in the full version of this paper.
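The sketch below is a conceptual illustration of the two-phase idea, not the authors' exact construction: phase one solves a minimum spanning tree problem and emits the chosen edges as a trail, and a simpler phase-two checker accepts only if the trail is a spanning tree satisfying the cycle property, flagging an error otherwise.

```python
# Conceptual sketch of the certification-trail idea for minimum spanning trees
# (illustrative only). Phase 1 solves the problem and emits a trail; phase 2
# re-checks the answer with simpler logic and must flag an error if the trail is bad.
from collections import defaultdict

def kruskal_with_trail(n, edges):
    """Phase 1: compute an MST; the chosen edges are the certification trail."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    trail = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            trail.append((u, v, w))
    return trail

def verify_with_trail(n, edges, trail):
    """Phase 2: accept only if the trail is a spanning tree that satisfies the
    cycle property (no edge is lighter than a tree edge on the path between
    its endpoints)."""
    edge_set = {(min(u, v), max(u, v)): w for u, v, w in edges}
    if len(trail) != n - 1:
        return False
    adj = defaultdict(list)
    for u, v, w in trail:
        if edge_set.get((min(u, v), max(u, v))) != w:   # trail must use real edges
            return False
        adj[u].append((v, w))
        adj[v].append((u, w))

    def max_on_tree_path(src, dst):
        stack, seen = [(src, 0.0)], {src}
        while stack:
            node, best = stack.pop()
            if node == dst:
                return best
            for nxt, w in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, max(best, w)))
        return None                                      # trail not connected

    for u, v, w in edges:
        m = max_on_tree_path(u, v)
        if m is None or w < m:
            return False
    return True

edges = [(0, 1, 4), (1, 2, 2), (0, 2, 5), (2, 3, 1), (1, 3, 7)]
trail = kruskal_with_trail(4, edges)
print(trail, "accepted" if verify_with_trail(4, edges, trail) else "error")
```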
NASA Technical Reports Server (NTRS)
Vitali, Roberto; Lutomski, Michael G.
2004-01-01
The National Aeronautics and Space Administration's (NASA) International Space Station (ISS) Program uses Probabilistic Risk Assessment (PRA) as part of its Continuous Risk Management Process. It is used as a decision and management support tool not only to quantify risk for specific conditions but, more importantly, to compare different operational and management options, determine the lowest-risk option, and provide rationale for management decisions. This paper presents the derivation of the probability distributions used to quantify the failure rates and the failure probabilities of the basic events employed in the PRA model of the ISS. The paper shows how a Bayesian approach was used with different sources of data, including actual ISS on-orbit failures, to enhance confidence in the results of the PRA. As time progresses and more meaningful data are gathered from on-orbit failures, an increasingly accurate failure rate probability distribution for the basic events of the ISS PRA model can be obtained. The ISS PRA has been developed by mapping ISS critical systems, such as propulsion, thermal control, and power generation, into event sequence diagrams and fault trees. The lowest level of indenture of the fault trees was the orbital replacement unit (ORU). The ORU level was chosen consistent with the level of statistically meaningful data that could be obtained from the aerospace industry and from experts in the field. For example, data were gathered for the solenoid valves present in the propulsion system of the ISS. However, valves themselves are composed of parts, and the individual failures of these parts were not accounted for in the PRA model; in other words, the failure of a spring within a valve was considered a failure of the valve itself.
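A minimal sketch of the kind of conjugate Bayesian update involved, with hypothetical numbers rather than ISS data: a Gamma prior on a Poisson failure rate is updated with assumed on-orbit failure counts and exposure hours.

```python
# Sketch of a conjugate Bayesian update for an ORU failure rate, using
# hypothetical numbers (not ISS data): a Gamma prior on the rate lambda of a
# Poisson failure process, updated with assumed on-orbit failures.
from scipy import stats

# Prior from industry/expert data (assumed): mean 2e-5 per hour, broad spread.
alpha0, beta0 = 0.5, 25000.0          # Gamma(shape, rate-in-hours) prior

# Observed on-orbit evidence (assumed): 2 failures in 87,600 operating hours.
failures, hours = 2, 87_600

alpha_post = alpha0 + failures        # conjugate Gamma-Poisson update
beta_post = beta0 + hours

posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print("posterior mean rate [/h]:", posterior.mean())
print("90% credible interval:", posterior.interval(0.9))
```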
Bodin, Paul; Bilham, Roger; Behr, Jeff; Gomberg, Joan; Hudnut, Kenneth W.
1994-01-01
Five out of six functioning creepmeters on southern California faults recorded slip triggered at the time of some or all of the three largest events of the 1992 Landers earthquake sequence. Digital creep data indicate that dextral slip was triggered within 1 min of each mainshock and that maximum slip velocities occurred 2 to 3 min later. The duration of triggered slip events ranged from a few hours to several weeks. We note that triggered slip occurs commonly on faults that exhibit fault creep. To account for the observation that slip can be triggered repeatedly on a fault, we propose that the amplitude of triggered slip may be proportional to the depth of slip in the creep event and to the available near-surface tectonic strain that would otherwise eventually be released as fault creep. We advance the notion that seismic surface waves, perhaps amplified by sediments, generate transient local conditions that favor the release of tectonic strain to varying depths. Synthetic strain seismograms are presented that suggest increased pore pressure during periods of fault-normal contraction may be responsible for triggered slip, since maximum dextral shear strain transients correspond to times of maximum fault-normal contraction.
Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty
NASA Astrophysics Data System (ADS)
Woo, G.
2005-12-01
Insurance loss modeling for earthquakes utilizes available maps of active faulting produced by geoscientists. All such maps are subject to uncertainty, arising from lack of knowledge of fault geometry and rupture history. Field work to undertake geological fault investigations drains human and monetary resources, and this inevitably limits the resolution of fault parameters. Some areas are more accessible than others; some may be of greater social or economic importance than others; some areas may be investigated more rapidly or diligently than others; or funding restrictions may have curtailed the extent of the fault mapping program. In contrast with the aleatory uncertainty associated with the inherent variability in the dynamics of earthquake fault rupture, uncertainty associated with lack of knowledge of fault geometry and rupture history is epistemic. The extent of this epistemic uncertainty may vary substantially from one regional or national fault map to another. However aware the local cartographer may be, this uncertainty is generally not conveyed in detail to the international map user. For example, an area may be left blank for a variety of reasons, ranging from lack of sufficient investigation of a fault to lack of convincing evidence of activity. Epistemic uncertainty in fault parameters is of concern in any probabilistic assessment of seismic hazard, not least in insurance earthquake risk applications. A logic-tree framework is appropriate for incorporating epistemic uncertainty. Some insurance contracts cover specific high-value properties or transport infrastructure, and therefore are extremely sensitive to the geometry of active faulting. Alternative Risk Transfer (ART) to the capital markets may also be considered. In order for such insurance or ART contracts to be properly priced, uncertainty should be taken into account. Accordingly, an estimate is needed for the likelihood of surface rupture capable of causing severe damage. Especially where a high deductible is in force, this requires estimation of the epistemic uncertainty on fault geometry and activity. Transport infrastructure insurance is of practical interest in seismic countries. On the North Anatolian Fault in Turkey, there is uncertainty over an unbroken segment between the eastern end of the Düzce Fault and Bolu. This may have ruptured during the 1944 earthquake. Existing hazard maps may simply use a question mark to flag uncertainty. However, a far more informative type of hazard map might express spatial variations in the confidence level associated with a fault map. Through such visual guidance, an insurance risk analyst would be better placed to price earthquake cover, allowing for epistemic uncertainty.
Machine Learning of Fault Friction
NASA Astrophysics Data System (ADS)
Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.
2017-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus composed of fault blocks surrounding fault gouge consisting of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774; Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field; Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
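An illustrative sketch of the tree-ensemble regression idea on synthetic stand-in data (not the laboratory AE records): per-window statistics of a noisy signal are used by a Random Forest to predict a synthetic "shear stress". The features and data below are assumptions for illustration only.

```python
# Illustrative sketch (synthetic data, not the laboratory AE records): derive
# statistical features from windows of a continuous "acoustic emission" signal
# and use a Random Forest to regress the concurrent shear stress, mirroring
# the decision-tree-ensemble approach described above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_windows, window = 500, 256
# Synthetic stand-in: AE variance grows as the synthetic "shear stress" rises.
stress = np.abs(np.sin(np.linspace(0, 20, n_windows))) + 0.1
ae = [rng.normal(scale=s, size=window) for s in stress]

def window_features(x):
    """Simple per-window statistics of the AE signal."""
    return [x.var(), np.abs(x).mean(), x.max() - x.min(),
            np.percentile(np.abs(x), 90), ((x[:-1] * x[1:]) < 0).mean()]

X = np.array([window_features(x) for x in ae])
y = stress
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("R^2 on held-out windows:", round(model.score(Xte, yte), 3))
```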
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.
Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)
NASA Astrophysics Data System (ADS)
Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.
2016-06-01
We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used rather than more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating separately subduction and background (crustal) earthquakes allows for optimal use of available information and for avoiding significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, with different proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.
1983-04-01
[Abstract garbled in extraction. Recoverable fragments mention diagnostic/fault isolation devices, cannibalization of assets, a 1980 demonstration of diagnostic software based on a "fault tree" representation of the M65 system to bridge a gap in diagnostics capability, and IFF (identification friend or foe) equipment having much lower reliability than TSQ-73 peculiar hardware, so that reported readiness does not reflect actual capability.]
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used. These methods can detect rolling bearing failures effectively, but achieving good detection results often requires a large number of training samples. Accordingly, a novel clustering method is proposed in this paper. The method is able to find the correct number of clusters automatically, and its effectiveness is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect fault types from small samples, while also achieving relatively high accuracy for massive samples.
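The paper's own clustering method is not reproduced here; the sketch below shows one common, generic way to pick the number of clusters automatically (silhouette score over candidate k) on synthetic stand-in features.

```python
# Minimal sketch of one standard way to choose the number of clusters
# automatically (silhouette score over candidate k); the bearing features
# below are synthetic placeholders, not the paper's datasets or method.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.datasets import make_blobs

# Synthetic stand-in for bearing fault features (3 hidden fault types).
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=0)

best_k, best_score = None, -1.0
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score
print(f"selected k = {best_k} (silhouette = {best_score:.2f})")
```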
Fault Analysis on Bevel Gear Teeth Surface Damage of Aeroengine
NASA Astrophysics Data System (ADS)
Cheng, Li; Chen, Lishun; Li, Silu; Liang, Tao
2017-12-01
Aiming at the failure phenomenon of bevel gear tooth surface damage in an aero-engine, a fault tree for bevel gear tooth surface damage was drawn from the logical relations among possible causes, and those possible causes were analyzed. Scanning electron microscopy, energy spectrum analysis, metallographic examination, hardness measurement, and other analysis means were used to investigate the spalled gear tooth. The results showed that the material composition, metallographic structure, micro-hardness, and carburization depth of the faulty bevel gear met the technical requirements. Contact fatigue spalling caused the bevel gear tooth surface damage, and the main cause was the small interference fit between the accessory gearbox installation hole and the driving bevel gear bearing seat. Improvement measures were proposed; after verification, the thermoelement measures proved effective.
NASA Astrophysics Data System (ADS)
Riyadi, Eko H.
2014-09-01
An initiating event is defined as any event, either internal or external to the nuclear power plant (NPP), that perturbs the steady-state operation of the plant, if operating, thereby initiating an abnormal event such as a transient or a loss of coolant accident (LOCA) within the NPP. These initiating events trigger sequences of events that challenge plant control and safety systems whose failure could potentially lead to core damage or a large early release. Selection of initiating events consists of two steps: first, definition of possible events, for example by a comprehensive engineering evaluation and by constructing a top-level logic model; second, grouping of the identified initiating events by the safety function to be performed or by combinations of system responses. The purpose of this paper is therefore to discuss initiating event identification in the event tree development process and to review other probabilistic safety assessments (PSAs). The identification of initiating events also draws on past operating experience, review of other PSAs, failure mode and effects analysis (FMEA), feedback from system modeling, and the master logic diagram (a special type of fault tree). By studying the traditional US PSA categorization in detail, the important initiating events can be obtained and categorized into LOCA, transients, and external events.
Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model
Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,
2013-01-01
In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of M6.5–7 earthquake rates and also includes types of multifault ruptures seen in nature. Although UCERF3 fits the data better than UCERF2 overall, there may be areas that warrant further site-specific investigation. Supporting products may be of general interest, and we list key assumptions and avenues for future model improvements.
Risk management of PPP project in the preparation stage based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Xing, Yuanzhi; Guan, Qiuling
2017-03-01
Risk management of PPP (Public Private Partnership) projects can improve the level of risk control between government departments and private investors, enabling more beneficial decisions, reducing investment losses, and achieving mutual benefit. This paper therefore takes the risks of the PPP project preparation stage as its research object, identifying and confirming four types of risk. Fault tree analysis (FTA) is used to evaluate the risk factors belonging to different parts and, building on the risk identification, to quantify the degree of risk impact. In addition, the order of importance of the risk factors is determined by calculating the structural importance of each unit in the PPP project preparation stage. The results show that the accuracy of government decision-making, the rationality of private investors' fund allocation, and the instability of market returns are the main factors generating shared risk in the project.
Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1994-01-01
The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
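A small sketch contrasting the two flavours of uncertainty on the same hypothetical fault tree: a crisp probabilistic point estimate versus an interval (fuzzy-style) estimate obtained from endpoint bounds, exploiting the monotonicity of the gate formulas. The tree and numbers are invented and do not represent the IME or discrete engine models.

```python
# Sketch contrasting a crisp probabilistic estimate with a fuzzy-style
# interval estimate for the same small fault tree (hypothetical numbers).
# The gate formulas are monotone in each basic-event probability, so
# alpha-cut-style interval bounds follow from evaluating at the endpoints.

def or_gate(ps):
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def and_gate(ps):
    out = 1.0
    for p in ps:
        out *= p
    return out

def top(p_igniter, p_valve, p_pump):
    # Engine fails if the igniter fails OR both the valve and pump fail.
    return or_gate([p_igniter, and_gate([p_valve, p_pump])])

# Crisp (probabilistic) point estimate:
print("point estimate:", top(0.001, 0.01, 0.02))

# Fuzzy-style interval estimate: each probability known only as a range.
lo = top(0.0005, 0.005, 0.01)
hi = top(0.002, 0.02, 0.04)
print("interval estimate:", (lo, hi))
```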
Enterprise architecture availability analysis using fault trees and stakeholder interviews
NASA Astrophysics Data System (ADS)
Närman, Per; Franke, Ulrik; König, Johan; Buschle, Markus; Ekstedt, Mathias
2014-01-01
The availability of enterprise information systems is a key concern for many organisations. This article describes a method for availability analysis based on Fault Tree Analysis and constructs from the ArchiMate enterprise architecture (EA) language. To test the quality of the method, several case studies within the banking and electrical utility industries were performed. Input data were collected through stakeholder interviews. The results from the case studies were compared with availability log data to determine the accuracy of the method's predictions. In the five cases where accurate log data were available, the yearly downtime estimates were within eight hours of the actual downtimes. The cost of performing the analysis was low; no case study required more than 20 man-hours of work, making the method ideal for practitioners with an interest in obtaining rapid availability estimates of their enterprise information systems.
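A minimal sketch of the availability arithmetic under assumed MTTF/MTTR figures (not the case-study data): component availabilities are combined through OR (series) and AND (redundant) relationships as in a fault tree, and converted to expected yearly downtime.

```python
# Minimal sketch under assumed figures (not the case studies' data): steady-
# state availability of each architecture element from MTTF/MTTR, combined
# through series and redundant relationships as in the fault tree.
HOURS_PER_YEAR = 8760

components = {  # name: (MTTF hours, MTTR hours) -- hypothetical
    "app_server_a": (2000, 4),
    "app_server_b": (2000, 4),
    "database":     (4000, 8),
    "network":      (8000, 2),
}

def availability(mttf, mttr):
    return mttf / (mttf + mttr)

a = {k: availability(*v) for k, v in components.items()}
# The service is up if (app_server_a OR app_server_b) AND database AND network.
a_service = ((1 - (1 - a["app_server_a"]) * (1 - a["app_server_b"]))
             * a["database"] * a["network"])
downtime_hours = (1 - a_service) * HOURS_PER_YEAR
print(f"availability = {a_service:.5f}, expected downtime ~ {downtime_hours:.1f} h/yr")
```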
Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A
2001-10-12
Conventionally, strategies for incorporating accident-prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps, risk assessment and hazard reduction (or safety) measures, are not linked interactively in existing methodologies. This prevents a quantitative assessment of the impact of safety measures on risk control. We have attempted to develop a methodology in which the risk assessment steps are interactively linked with the implementation of safety measures. The resulting system indicates the extent of risk reduction achieved by each successive safety measure. It also indicates, based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA), whether a given unit can ever be made 'safe'. The application of the methodology is illustrated with a case study.
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
Schwartz, D.P.; Pantosti, D.; Okumura, K.; Powers, T.J.; Hamilton, J.C.
1998-01-01
Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that lower slip sections of large events such as 1906 must fill in on a periodic basis with smaller and more frequent earthquakes.
NASA Astrophysics Data System (ADS)
Griffin, J.; Clark, D.; Allen, T.; Ghasemi, H.; Leonard, M.
2017-12-01
Standard probabilistic seismic hazard assessment (PSHA) simulates earthquake occurrence as a time-independent process. However paleoseismic studies in slowly deforming regions such as Australia show compelling evidence that large earthquakes on individual faults cluster within active periods, followed by long periods of quiescence. Therefore the instrumental earthquake catalog, which forms the basis of PSHA earthquake recurrence calculations, may only capture the state of the system over the period of the catalog. Together this means that data informing our PSHA may not be truly time-independent. This poses challenges in developing PSHAs for typical design probabilities (such as 10% in 50 years probability of exceedance): Is the present state observed through the instrumental catalog useful for estimating the next 50 years of earthquake hazard? Can paleo-earthquake data, that shows variations in earthquake frequency over time-scales of 10,000s of years or more, be robustly included in such PSHA models? Can a single PSHA logic tree be useful over a range of different probabilities of exceedance? In developing an updated PSHA for Australia, decadal-scale data based on instrumental earthquake catalogs (i.e. alternative area based source models and smoothed seismicity models) is integrated with paleo-earthquake data through inclusion of a fault source model. Use of time-dependent non-homogeneous Poisson models allows earthquake clustering to be modeled on fault sources with sufficient paleo-earthquake data. This study assesses the performance of alternative models by extracting decade-long segments of the instrumental catalog, developing earthquake probability models based on the remaining catalog, and testing performance against the extracted component of the catalog. Although this provides insights into model performance over the short-term, for longer timescales it is recognised that model choice is subject to considerable epistemic uncertainty. Therefore a formal expert elicitation process has been used to assign weights to alternative models for the 2018 update to Australia's national PSHA.
Moran, Michael J.; Wilson, Jon W.; Beard, L. Sue
2015-11-03
Several major faults, including the Salt Cedar Fault and the Palm Tree Fault, play an important role in the movement of groundwater. Groundwater may move along these faults and discharge where faults intersect volcanic breccias or fractured rock. Vertical movement of groundwater along faults is suggested as a mechanism for the introduction of heat energy present in groundwater from many of the springs. Groundwater altitudes in the study area indicate a potential for flow from Eldorado Valley to Black Canyon although current interpretations of the geology of this area do not favor such flow. If groundwater from Eldorado Valley discharges at springs in Black Canyon then the development of groundwater resources in Eldorado Valley could result in a decrease in discharge from the springs. Geology and structure indicate that it is not likely that groundwater can move between Detrital Valley and Black Canyon. Thus, the development of groundwater resources in Detrital Valley may not result in a decrease in discharge from springs in Black Canyon.
NASA Astrophysics Data System (ADS)
Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali
2017-07-01
The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis. Assessing fault severity is a challenging task. The Wavelet Transform (WT), as a multiresolution analysis tool, is able to compromise between the time and frequency information in the signals and serves as a de-noising method. The CWT scaling function gives different resolutions to discretely sampled signals, such as very fine resolution at a lower scale but coarser resolution at a higher scale. However, the computational cost increases because different signal resolutions must be produced. The DWT has a lower computational cost because its dilation function allows the signal to be decomposed through a tree of low-pass and high-pass filters, with no further analysis of the high-frequency components. In this paper, a method for bearing fault identification is presented that combines the Continuous Wavelet Transform (CWT) and Discrete Wavelet Transform (DWT) with envelope analysis for bearing fault diagnosis. The experimental data were obtained from Case Western Reserve University. The analysis results showed that the proposed method is effective in detecting bearing faults, identifying the exact fault location, and assessing severity, especially for inner race and outer race faults.
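To make the combined workflow concrete, the following hedged Python sketch (using the PyWavelets and SciPy packages and a synthetic signal in place of the Case Western Reserve University bearing data; the sampling rate, wavelet choice, and fault frequency are illustrative assumptions, not values from the paper) shows DWT-based de-noising followed by Hilbert envelope analysis.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 12000                        # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic stand-in: a 3 kHz resonance amplitude-modulated at a
# hypothetical 100 Hz fault frequency, plus noise.
signal = (1 + 0.5 * np.cos(2 * np.pi * 100 * t)) * np.sin(2 * np.pi * 3000 * t)
signal += 0.3 * np.random.randn(len(t))

# DWT de-noising: decompose, soft-threshold detail coefficients, reconstruct.
coeffs = pywt.wavedec(signal, "db4", level=4)
threshold = 0.2 * np.max(np.abs(coeffs[-1]))
coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: len(signal)]

# Envelope analysis: Hilbert transform, then spectrum of the envelope.
envelope = np.abs(hilbert(denoised))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1.0 / fs)
print("dominant envelope frequency: %.1f Hz" % freqs[np.argmax(spectrum)])
```

In this sketch the dominant envelope frequency would be compared against the theoretical defect frequencies of the inner race, outer race, or rolling elements to localize the fault.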
Investigation of Fuel Oil/Lube Oil Spray Fires On Board Vessels. Volume 3.
1998-11-01
U.S. Coast Guard Research and Development Center 1082 Shennecossett Road, Groton, CT 06340-6096 Report No. CG-D-01-99, III Investigation of Fuel ...refinery). Developed the technical and mathematical specifications for BRAVO™2.0, a state-of-the-art Windows program for performing event tree and fault...tree analyses. Also managed the development of and prepared the technical specifications for QRA ROOTS™, a Windows program for storing, searching K-4
1992-01-01
boost plenum which houses the camshaft . The compressed mixture is metered by a throttle to intake valves of the engine. The engine is constructed from...difficulties associated with a time-tagged fault tree . In particular, recent work indicates that the multi-layer perception architecture can give good fdi...Abstract: In the past decade, wastepaper recycling has gained a wider acceptance. Depletion of tree stocks, waste water treatment demands and
Interim reliability evaluation program, Browns Ferry 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1981-01-01
Probabilistic risk analysis techniques, i.e., event tree and fault tree analysis, were utilized to provide a risk assessment of the Browns Ferry Nuclear Plant Unit 1. Browns Ferry 1 is a General Electric boiling water reactor of the BWR 4 product line with a Mark 1 (drywell and torus) containment. Within the guidelines of the IREP Procedure and Schedule Guide, dominant accident sequences that contribute to public health and safety risks were identified and grouped according to release categories.
TU-AB-BRD-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunscombe, P.
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: Learn how to design a process map for a radiotherapy process; learn how to perform failure modes and effects analysis for a given process; learn what fault trees are all about; learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
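As a small illustration of the risk-ranking step described above, the hedged Python sketch below computes risk priority numbers (RPN = occurrence x severity x detectability) for a few invented failure modes and sorts them; the process steps and scores are hypothetical and are not taken from Task Group 100.

```python
# Hypothetical failure modes with 1-10 scores for occurrence (O),
# severity (S), and lack of detectability (D); RPN = O * S * D.
failure_modes = [
    {"step": "plan transfer to delivery system", "O": 3, "S": 9, "D": 4},
    {"step": "patient setup verification",       "O": 5, "S": 7, "D": 3},
    {"step": "MLC calibration",                   "O": 2, "S": 8, "D": 6},
]

for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["S"] * fm["D"]

# Rank from highest to lowest risk to prioritize quality-management effort.
for fm in sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True):
    print(f'{fm["step"]:<35} RPN = {fm["RPN"]}')
```

The highest-RPN steps would then be the candidates for added controls, with fault tree analysis used to trace how those failures could propagate to the patient.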
NASA Technical Reports Server (NTRS)
Shooman, Martin L.
1991-01-01
Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer aided design programs. In response to this need, NASA has developed a suite of computer aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as: HARP, HARP-PC, Reliability Analysts Workbench (Combination of model solvers SURE, STEM, PAWS, and common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied and how well the user can model systems using this program is investigated. One of the important objectives will be to study how user friendly this program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course no answer can be any more accurate than the fidelity of the model, thus an Appendix is included which discusses modeling accuracy. A broad viewpoint is taken and all problems which occurred in the use of HARP are discussed. Such problems include: computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.
Signal processing and neural network toolbox and its application to failure diagnosis and prognosis
NASA Astrophysics Data System (ADS)
Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.
2001-07-01
Many systems are comprised of components equipped with self-testing capability; however, if the system is complex involving feedback and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs will be very helpful. The work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition to extract features for failure events from data collected by data sensors. Then we evaluated multiple learning paradigms for general classification, diagnosis and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Network, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), Linear Discriminant Rule (LDR), Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP) and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed for evaluating the robustness of network models. The trained networks are evaluated for their performance using test data on the basis of percent error rates obtained via cross-validation, time efficiency, generalization ability to unseen faults. Finally, the usage of neural networks for the prediction of residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
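The classifier-comparison loop described above can be illustrated with a short, hypothetical sketch. The following Python fragment (assuming scikit-learn and purely synthetic feature vectors) runs N-fold cross-validation over a few of the paradigm types mentioned in the abstract; it is not the toolbox itself, and the data and parameters are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(0)
# Synthetic feature vectors standing in for spectral/statistical features
# extracted from sensor data; three fault classes.
X = rng.normal(size=(300, 8))
y = rng.integers(0, 3, size=300)
X += y[:, None] * 0.8            # separate the classes somewhat

models = {
    "decision tree (C4.5-like)": DecisionTreeClassifier(),
    "MLP": MLPClassifier(max_iter=2000),
    "LDR": LinearDiscriminantAnalysis(),
    "QDR": QuadraticDiscriminantAnalysis(),
}

# N-fold cross-validation to estimate error rates, as in the toolbox evaluation.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:<28} error rate = {1 - scores.mean():.2%}")
```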
Dodge, D.A.; Beroza, G.C.; Ellsworth, W.L.
1996-01-01
We find that foreshocks provide clear evidence for an extended nucleation process before some earthquakes. In this study, we examine in detail the evolution of six California foreshock sequences: the 1986 Mount Lewis (ML = 5.5), the 1986 Chalfant (ML = 6.4), the 1986 Stone Canyon (ML = 4.7), the 1990 Upland (ML = 5.2), the 1992 Joshua Tree (MW = 6.1), and the 1992 Landers (MW = 7.3) sequences. Typically, uncertainties in hypocentral parameters are too large to establish the geometry of foreshock sequences and hence to understand their evolution. However, the similarity of location and focal mechanisms for the events in these sequences leads to similar foreshock waveforms that we cross correlate to obtain extremely accurate relative locations. We use these results to identify small-scale fault zone structures that could influence nucleation and to determine the stress evolution leading up to the mainshock. In general, these foreshock sequences are not compatible with a cascading failure nucleation model in which the foreshocks all occur on a single fault plane and trigger the mainshock by static stress transfer. Instead, the foreshocks seem to concentrate near structural discontinuities in the fault and may themselves be a product of an aseismic nucleation process. Fault zone heterogeneity may also be important in controlling the number of foreshocks, i.e., the stronger the heterogeneity, the greater the number of foreshocks. The size of the nucleation region, as measured by the extent of the foreshock sequence, appears to scale with mainshock moment in the same manner as determined independently by measurements of the seismic nucleation phase. We also find evidence for slip localization as predicted by some models of earthquake nucleation. Copyright 1996 by the American Geophysical Union.
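The waveform cross-correlation step used to obtain accurate relative locations can be sketched, under simplifying assumptions, as follows. The Python fragment below generates two synthetic, nearly identical waveforms with a small relative delay and recovers that delay from the peak of their cross-correlation; the sampling rate and waveform shape are invented for illustration and are not from the study.

```python
import numpy as np

fs = 100.0                              # samples per second, assumed
t = np.arange(0, 5.0, 1.0 / fs)

def synthetic_waveform(delay):
    """A simple damped wavelet arriving at time 1.0 s + delay."""
    arrival = t - (1.0 + delay)
    w = np.where(arrival > 0,
                 np.exp(-3 * arrival) * np.sin(2 * np.pi * 5 * arrival), 0.0)
    return w + 0.02 * np.random.randn(len(t))

a = synthetic_waveform(0.00)
b = synthetic_waveform(0.07)            # second event arrives 0.07 s later

# Full cross-correlation; the lag of the peak gives the relative arrival time,
# which feeds the relative relocation of one event with respect to the other.
xc = np.correlate(b - b.mean(), a - a.mean(), mode="full")
lags = np.arange(-len(a) + 1, len(a)) / fs
print("estimated relative delay: %.3f s" % lags[np.argmax(xc)])
```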
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. Authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
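A minimal, hypothetical sketch of how per-step failure estimates can be rolled up through a simplified fault tree is shown below; the step names, probabilities, and gate structure are illustrative and are not the study's actual tree.

```python
# Hypothetical per-step failure probability estimates, e.g. means of
# nurses' estimates for steps of a medication-ordering task.
step_failure = {"select patient": 0.02, "select drug": 0.01, "enter dose": 0.03}

def or_gate(probs):
    """Top event occurs if any input fails (independence assumed)."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(probs):
    """Top event occurs only if every input fails (independence assumed)."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

process_failure = or_gate(step_failure.values())
# A workaround path means overall failure requires both the process failure
# and the failure of the workaround, modeled here as an AND gate.
system_failure = and_gate([process_failure, 0.10])
print(f"process failure: {process_failure:.3f}, system failure: {system_failure:.4f}")
```

This mirrors the paper's observation that frequent step failures can coexist with rare overall system failure, while still costing time through workarounds.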
Seismic hazard in the Istanbul metropolitan area: A preliminary re-evaluation
Kalkan, E.; Gulkan, Polat; Ozturk, N.Y.; Celebi, M.
2008-01-01
In 1999, two destructive earthquakes (M7.4 Kocaeli and M7.2 Duzce) occurred in northwest Turkey and resulted in major stress-drops on the western segment of the North Anatolian Fault system where it continues under the Marmara Sea. These undersea fault segments were recently explored using bathymetric and reflection surveys. These recent findings helped to reshape the seismotectonic environment of the Marmara basin, which is a perplexing tectonic domain. Based on the newly collected information, the seismic hazard of the Marmara region, particularly the Istanbul Metropolitan Area and its vicinity, was re-examined using a probabilistic approach. Two seismic source and alternate recurrence models combined with various indigenous and foreign attenuation relationships were adapted within a logic tree formulation to quantify and project the regional exposure on a set of hazard maps. The hazard maps show the peak horizontal ground acceleration and spectral acceleration at 1.0 s. These acceleration levels were computed for 2% and 10% probabilities of exceedance in 50 years.
Risk Analysis Methods for Deepwater Port Oil Transfer Systems
DOT National Transportation Integrated Search
1976-06-01
This report deals with the risk analysis methodology for oil spills from the oil transfer systems in deepwater ports. Failure mode and effect analysis in combination with fault tree analysis are identified as the methods best suited for the assessment...
NASA Technical Reports Server (NTRS)
Prassinos, Peter G.; Stamatelatos, Michael G.; Young, Jonathan; Smith, Curtis
2010-01-01
Managed by NASA's Office of Safety and Mission Assurance, a pilot probabilistic risk analysis (PRA) of the NASA Crew Exploration Vehicle (CEV) was performed in early 2006. The PRA methods used follow the general guidance provided in the NASA PRA Procedures Guide for NASA Managers and Practitioners. Phased-mission based event trees and fault trees are used to model a lunar sortie mission of the CEV, involving the following phases: launch of a cargo vessel and a crew vessel; rendezvous of these two vessels in low Earth orbit; transit to the moon; lunar surface activities; ascension from the lunar surface; and return to Earth. The analysis is based upon assumptions, preliminary system diagrams, and failure data that may involve large uncertainties or may lack formal validation. Furthermore, some of the data used were based upon expert judgment or extrapolated from similar components/systems. This paper includes a discussion of the system-level models and provides an overview of the analysis results used to identify insights into CEV risk drivers, and trade and sensitivity studies. Lastly, the PRA model was used to determine changes in risk as the system configurations or key parameters are modified.
Reliability analysis and initial requirements for FC systems and stacks
NASA Astrophysics Data System (ADS)
Åström, K.; Fontell, E.; Virtanen, S.
In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
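The stack-level reliability reasoning can be illustrated with a hedged sketch. The Python fragment below assumes a Weibull failure model for a single stack and, purely for illustration, treats each of the five parallel sets as functional if at least one of its five stacks survives; the shape factor, characteristic life, and redundancy rule are assumptions and are not Wärtsilä's values.

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability that a single stack survives to time t (Weibull model)."""
    return math.exp(-(t / eta) ** beta)

def system_reliability(t, beta=1.5, eta=40000.0):
    """5 series sets of 5 parallel stacks; a set is assumed to work if at
    least one of its stacks works (illustrative redundancy assumption)."""
    r_stack = weibull_reliability(t, beta, eta)
    r_set = 1.0 - (1.0 - r_stack) ** 5      # parallel redundancy within a set
    return r_set ** 5                        # five sets in series

for hours in (10000, 20000, 40000):
    print(f"t = {hours:>6} h  R_system = {system_reliability(hours):.4f}")
```

Sweeping the Weibull shape factor in such a sketch gives a feel for how failure-rate dispersion and the chosen operating strategy drive the stack reliability targets discussed in the abstract.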
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially those that are small-scale or approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, the fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing the number of fault points in poorly sampled areas not only allows fault models to be constructed efficiently but also reduces manual intervention. By using fault-based interpolation and remeshing of the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Howard; Braun, James E.
2015-12-31
This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
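A hedged, generic illustration of one of the sensor-bias fault types is given below. It is plain Python rather than an OpenStudio measure, and the bias magnitude, set point, and thermostat rule are invented; the point is only to show how a biased reading can change equipment behavior and hence energy use.

```python
def biased_sensor(true_value, bias=1.5, drift_per_hour=0.0, hours=0.0):
    """Return a faulted sensor reading: constant bias plus optional drift.
    Parameter values are illustrative, not calibrated fault intensities."""
    return true_value + bias + drift_per_hour * hours

def thermostat_calls_for_cooling(reading_c, setpoint_c=24.0):
    """Simple on/off cooling decision driven by the (possibly faulted) reading."""
    return reading_c > setpoint_c

true_zone_temp = 23.2           # deg C, actually below the set point
faulted = biased_sensor(true_zone_temp, bias=1.5)
print("faulted reading:", faulted,
      "-> cooling on:", thermostat_calls_for_cooling(faulted))
```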
Methodology for Designing Fault-Protection Software
NASA Technical Reports Server (NTRS)
Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin
2006-01-01
A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery; and has been successfully implemented in the Deep Impact Spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notion of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, Monitor generates a RawOpinion, which graduates into Opinion, categorized into no-opinion, acceptable, or unacceptable opinion. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment and then mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized step-by-step fashion, relegating more system-level response to later tier(s). Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic, logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a fault-tree analysis "top-down" approach and a functional failure-modes-and-effects analysis via a "bottom-up" approach. Via this process, the mitigation and recovery strategies per Fault Containment Region scope the FP architecture in both width and depth.
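The Monitor, Symptom, Alarm, and tiered-Response notions can be sketched in code, under assumptions. The Python fragment below is a toy illustration with hypothetical class names, thresholds, and persistence rules; it is not the Deep Impact flight software and omits the timing and priority gates described above.

```python
from dataclasses import dataclass, field

@dataclass
class Monitor:
    """Turns a raw measurement into an opinion; the limit is illustrative."""
    name: str
    limit: float
    def opinion(self, value):
        return "unacceptable" if abs(value) > self.limit else "acceptable"

@dataclass
class FaultProtection:
    """Maps unacceptable opinions to symptoms, then alarms, then tiered responses."""
    symptom_counts: dict = field(default_factory=dict)
    persistence: int = 3     # opinions needed before a symptom becomes an alarm

    def raise_symptom(self, monitor_name):
        n = self.symptom_counts.get(monitor_name, 0) + 1
        self.symptom_counts[monitor_name] = n
        if n >= self.persistence:
            self.respond(monitor_name)

    def respond(self, alarm):
        # Tier 1 is a local, device-level response; later tiers escalate system-wide.
        for tier, action in enumerate(["reset device", "switch to redundant unit",
                                       "enter safe mode"], start=1):
            print(f"alarm {alarm}: tier {tier} response -> {action}")
            if tier == 1:      # assume the local response clears the fault here
                break

gyro_monitor = Monitor("gyro_rate", limit=5.0)
fp = FaultProtection()
for reading in [1.2, 7.9, 8.3, 9.1]:
    if gyro_monitor.opinion(reading) == "unacceptable":
        fp.raise_symptom(gyro_monitor.name)
```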
Risk-based maintenance of ethylene oxide production facilities.
Khan, Faisal I; Haddara, Mahmoud R
2004-05-20
This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Out of the many likely failure scenarios, the most probable ones are subjected to a detailed study. Detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and the probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Out of the five most hazardous units considered, the pipeline used for the transportation of ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal risk factors and individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis is also undertaken to study the impact of changing the distribution of the reliability model, as well as the error in the distribution parameters, on the maintenance interval.
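A hedged sketch of the risk-minimization idea, with entirely illustrative numbers (the failure rate, consequence cost, and acceptable-risk criterion are assumptions, not the case-study values), is given below: the maintenance interval is shortened until the estimated risk over one interval meets the criterion.

```python
import math

# Illustrative numbers only: exponential failure model for a pipeline segment,
# consequence expressed as a cost, and an acceptable risk criterion.
failure_rate = 1.0e-4          # failures per hour (assumed)
consequence = 5.0e6            # cost of the failure scenario (assumed units)
acceptable_risk = 2.0e4        # maximum tolerable risk per inspection cycle

def risk(interval_hours):
    """Risk accumulated over one maintenance interval = P(failure) x consequence."""
    p_fail = 1.0 - math.exp(-failure_rate * interval_hours)
    return p_fail * consequence

# Shorten the interval until the estimated risk meets the criterion.
interval = 8760.0              # start from one year
while risk(interval) > acceptable_risk and interval > 1.0:
    interval *= 0.9
print(f"maintenance interval ~ {interval:.0f} h, risk ~ {risk(interval):.0f}")
```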
Naive Bayes Bearing Fault Diagnosis Based on Enhanced Independence of Data
Zhang, Nannan; Wu, Lifeng; Yang, Jing; Guan, Yong
2018-01-01
The bearing is the key component of rotating machinery, and its performance directly determines the reliability and safety of the system. Data-based bearing fault diagnosis has become a research hotspot. Naive Bayes (NB), which is based on an independence assumption, is widely used in fault diagnosis. However, the bearing data are not completely independent, which reduces the performance of NB algorithms. In order to solve this problem, we propose an NB bearing fault diagnosis method based on enhanced independence of data. The method deals with the data vector from two aspects: the attribute feature and the sample dimension. After processing, the limitation imposed on NB classification by the independence hypothesis is reduced. First, we effectively extract the statistical characteristics of the original bearing signal. Then, the Decision Tree algorithm is used to select the important features of the time domain signal, and the low-correlation features are selected. Next, the Selective Support Vector Machine (SSVM) is used to prune the data along the sample dimension and remove redundant vectors. Finally, we use NB to diagnose the fault with the low-correlation data. The experimental results show that the independence enhancement of the data is effective for bearing fault diagnosis. PMID:29401730
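A simplified sketch of the feature-selection-plus-NB idea, assuming scikit-learn and synthetic features (and omitting the SSVM sample-pruning step), is shown below; the data, class structure, and number of retained features are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic statistical features (e.g. RMS, kurtosis, crest factor, ...)
# standing in for real bearing signals; four fault classes.
X = rng.normal(size=(400, 12))
y = rng.integers(0, 4, size=400)
X[:, :3] += y[:, None] * 1.0          # make the first three features informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: a decision tree ranks feature importance; keep the strongest features.
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
keep = np.argsort(tree.feature_importances_)[-3:]

# Step 2: Naive Bayes on the reduced, less correlated feature set.
# (The sample-pruning SSVM step of the paper is omitted in this sketch.)
nb = GaussianNB().fit(X_tr[:, keep], y_tr)
print("selected features:", keep, " accuracy:", nb.score(X_te[:, keep], y_te))
```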
NASA Astrophysics Data System (ADS)
Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu
2016-01-01
This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS) and improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of time series for a range of scales based on fuzzy entropy. In addition, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically perform fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize the different categories and severities of rolling bearing faults.
Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T
2018-03-05
Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
Jetter, J J; Forte, R; Rubenstein, R
2001-02-01
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servicing. The number of refrigerant exposures of service technicians was estimated to be 135,000 per year. Exposures of vehicle occupants can occur when refrigerant enters passenger compartments due to sudden leaks in air-conditioning systems, leaks following servicing, or leaks caused by collisions. The total number of exposures of vehicle occupants was estimated to be 3,600 per year. The largest number of exposures of vehicle occupants was estimated for leaks caused by collisions, and the second largest number of exposures was estimated for leaks following servicing. Estimates used in the fault tree analysis were based on a survey of automotive air-conditioning service shops, the best available data from the literature, and the engineering judgement of the authors and expert reviewers from the Society of Automotive Engineers Interior Climate Control Standards Committee. Exposure concentrations and durations were estimated and compared with toxicity data for refrigerants currently used in automotive air conditioners. Uncertainty was high for the estimated numbers of exposures, exposure concentrations, and exposure durations. Uncertainty could be reduced in the future by conducting more extensive surveys, measurements of refrigerant concentrations, and exposure monitoring. Nevertheless, the analysis indicated that the risk of exposure of service technicians and vehicle occupants is significant, and it is recommended that no refrigerant that is substantially more toxic than currently available substitutes be accepted for use in vehicle air-conditioning systems, absent a means of mitigating exposure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riyadi, Eko H., E-mail: e.riyadi@bapeten.go.id
2014-09-30
An initiating event is defined as any event, either internal or external to the nuclear power plant (NPP), that perturbs the steady-state operation of the plant, if operating, thereby initiating an abnormal event such as a transient or loss of coolant accident (LOCA) within the NPP. These initiating events trigger sequences of events that challenge plant control and safety systems whose failure could potentially lead to core damage or large early release. Selection of initiating events consists of two steps: first, definition of possible events, such as by performing a comprehensive engineering evaluation and by constructing a top-level logic model; second, grouping of the identified initiating events by the safety function to be performed or by combinations of system responses. The purpose of this paper is to discuss initiating event identification in the event tree development process and to review other probabilistic safety assessments (PSA). The identification of initiating events also involves past operating experience, review of other PSAs, failure mode and effect analysis (FMEA), feedback from system modeling, and the master logic diagram (a special type of fault tree). By studying the traditional US PSA categorization in detail, the important initiating events can be obtained and categorized into LOCA, transients, and external events.
NASA Astrophysics Data System (ADS)
Hong, H.; Zhu, A. X.
2017-12-01
Climate change is a widespread and serious phenomenon all over the world. The intensification of rainfall extremes with climate change is of key importance to society, and it may induce large impacts through landslides. This paper presents new GIS-based ensemble data mining techniques, namely weight-of-evidence, logistic model tree, and linear and quadratic discriminant analysis, for landslide spatial modelling. The research was applied in Anfu County, a landslide-prone area in Jiangxi Province, China. Based on a literature review and research on the study area, we selected the landslide influencing factors, and their maps were digitized in a GIS environment. These landslide influencing factors are altitude, plan curvature, profile curvature, slope degree, slope aspect, topographic wetness index (TWI), stream power index (SPI), distance to faults, distance to rivers, distance to roads, soil, lithology, normalized difference vegetation index, and land use. According to historical information on individual landslide events, interpretation of aerial photographs, and field surveys supported by the Jiangxi Meteorological Bureau of China, 367 landslides were identified in the study area. The landslide locations were divided into two subsets, namely training and validating (70/30), based on a random selection scheme. Pearson's correlation was used to evaluate the relationship between the landslides and the influencing factors. In the next step, the three data mining techniques, weight-of-evidence, logistic model tree, and linear and quadratic discriminant analysis, were used for landslide spatial modelling and zonation. Finally, the landslide susceptibility maps produced by these models were evaluated using the ROC curve. The results showed that the area under the curve (AUC) of all of the models was > 0.80. The highest AUC value was for the linear and quadratic discriminant model (0.864), followed by the logistic model tree (0.832) and weight-of-evidence (0.819). In general, the landslide maps can be applied for land use planning and management in the Anfu area.
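The model-comparison step can be illustrated with a hedged scikit-learn sketch on synthetic data (a plain logistic regression stands in for the logistic model tree, which scikit-learn does not provide); the conditioning factors, split ratio, and AUC values here are illustrative, not the study's results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Synthetic conditioning factors (altitude, slope, distance to faults, ...)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=500) > 0).astype(int)

# 70/30 random split into training and validation sets, as in the study.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "logistic (stand-in for logistic model tree)": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(f"{name:<45} AUC = {auc:.3f}")
```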
Recent Mega-Thrust Tsunamigenic Earthquakes and PTHA
NASA Astrophysics Data System (ADS)
Lorito, S.
2013-05-01
The occurrence of several mega-thrust tsunamigenic earthquakes in the last decade, including but not limited to the 2004 Sumatra-Andaman, the 2010 Maule, and 2011 Tohoku earthquakes, has been a dramatic reminder of the limitations in our capability of assessing earthquake and tsunami hazard and risk. However, increasingly high-quality geophysical observational networks have allowed the retrieval of more accurate models of the rupture process of mega-thrust earthquakes than ever before, thus paving the way for future improved hazard assessments. Probabilistic Tsunami Hazard Analysis (PTHA) methodology, in particular, is less mature than its seismic counterpart, PSHA. Recent research efforts of the worldwide tsunami science community have started to fill this gap and to define some best practices that are being progressively employed in PTHA for different regions and coasts under threat. In the first part of my talk, I will briefly review some rupture models of recent mega-thrust earthquakes and highlight some of their surprising features that likely result in larger error bars associated with PTHA results. More specifically, recent events of unexpected size at a given location, and with unexpected rupture process features, posed first-order open questions that prevent the definition of a heterogeneous rupture probability along a subduction zone, despite several recent promising results on the subduction zone seismic cycle. In the second part of the talk, I will dig a bit more into a specific ongoing effort for improving PTHA methods, in particular as regards the determination of epistemic and aleatory uncertainties and the computational feasibility of PTHA when considering the full assumed source variability. Only logic trees are usually made explicit in PTHA studies, accounting for different possible assumptions on source zone properties and behavior. The selection of the earthquakes to be actually modelled is then in general made on a qualitative basis or remains implicit, although different methods, such as event trees, have been used for different applications. I will define a quite general PTHA framework based on the mixed use of logic and event trees. I will first discuss a particular class of epistemic uncertainties, i.e. those related to the parametric fault characterization in terms of geometry, kinematics, and assessment of activity rates. A systematic classification into six justification levels of the epistemic uncertainty related to the existence and behaviour of fault sources will be presented. Then, a particular branch of the logic tree is chosen in order to discuss just the aleatory variability of earthquake parameters, represented with an event tree. Even so, PTHA based on numerical scenarios is too demanding a computational task, particularly when probabilistic inundation maps are needed. To reduce the computational burden without under-representing the source variability, the event tree is first constructed by densely (over-)sampling the earthquake parameter space, and then the earthquakes are filtered based on their associated tsunami impact offshore, before calculating inundation maps. I'll describe this approach by means of a case study in the Mediterranean Sea, namely the PTHA for some locations on the coasts of eastern Sicily and southern Crete due to potential subduction earthquakes occurring on the Hellenic Arc.
Risk management of key issues of FPSO
NASA Astrophysics Data System (ADS)
Sun, Liping; Sun, Hai
2012-12-01
Risk analysis of key systems has become a growing topic of late because of the development of offshore structures. Equipment failures of the offloading system and fire accidents were analyzed based on floating production, storage and offloading (FPSO) features. Fault tree analysis (FTA) and failure modes and effects analysis (FMEA) methods were examined based on information already researched on modules of Relex Reliability Studio (RRS). Equipment failures were also analyzed qualitatively by establishing a fault tree and Boolean structure function, given the shortage of failure cases and statistical data, and risk control measures were examined. Failure modes of fire accidents were classified according to the different areas of fire occurrence during the FMEA process, using risk priority number (RPN) methods to evaluate their severity rank. The qualitative FTA gave basic insight into how the failure modes of FPSO offloading form, and the fire FMEA gave priorities and suggested procedures. The research has practical importance for the safety analysis of FPSOs.
Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand
2018-05-09
This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), the probabilities of all basic events (BEs) should be available when the fault tree is drawn. When such data are lacking, expert judgment can be used as an alternative to failure data. The fuzzy analytical hierarchy process, as a standard technique, is used to give a specific weight to each expert, and fuzzy set theory is employed for aggregating expert opinion. In this way, the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost, and benefit), an importance measurement technique and modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
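A minimal sketch of the aggregation-and-propagation idea is given below, with invented expert weights, triangular fuzzy estimates, and a tiny illustrative tree; it is not the paper's case study, and centroid defuzzification is only one of several possible choices.

```python
# Each expert gives a triangular fuzzy estimate (low, mode, high) of a basic
# event probability; expert weights come from a separate AHP step (assumed here).
experts = {
    "expert A": {"weight": 0.5, "estimate": (0.01, 0.02, 0.04)},
    "expert B": {"weight": 0.3, "estimate": (0.02, 0.03, 0.05)},
    "expert C": {"weight": 0.2, "estimate": (0.005, 0.01, 0.02)},
}

# Weighted aggregation of the fuzzy numbers, component by component.
agg = [sum(e["weight"] * e["estimate"][i] for e in experts.values()) for i in range(3)]
# Centroid defuzzification of a triangular number: (l + m + u) / 3.
p_basic = sum(agg) / 3.0

# Top event probability for a small illustrative tree:
# TE = BE1 OR (BE2 AND BE3), all basic events given the aggregated estimate.
p1 = p2 = p3 = p_basic
p_top = 1.0 - (1.0 - p1) * (1.0 - p2 * p3)
print(f"aggregated basic-event probability: {p_basic:.4f}, top event: {p_top:.4f}")
```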
Kingman, D M; Field, W E
2005-11-01
Findings reported by researchers at Illinois State University and Purdue University indicated that since 1980, an average of eight individuals per year have become engulfed and died in farm grain bins in the U.S. and Canada and that all these deaths are significant because they are believed to be preventable. During a recent effort to develop intervention strategies and recommendations for an ASAE farm grain bin safety standard, fault tree analysis (FTA) was utilized to identify contributing factors to engulfments in grain stored in on-farm grain bins. FTA diagrams provided a spatial perspective of the circumstances that occurred prior to engulfment incidents, a perspective never before presented in other hazard analyses. The FTA also demonstrated relationships and interrelationships of the contributing factors. FTA is a useful tool that should be applied more often in agricultural incident investigations to assist in the more complete understanding of the problem studied.
Fault tree analysis for data-loss in long-term monitoring networks.
Dirksen, J; ten Veldhuis, J A E; Schilperoort, R P S
2009-01-01
Prevention of data-loss is an important aspect in the design as well as the operational phase of monitoring networks since data-loss can seriously limit intended information yield. In the literature limited attention has been paid to the origin of unreliable or doubtful data from monitoring networks. Better understanding of causes of data-loss points out effective solutions to increase data yield. This paper introduces FTA as a diagnostic tool to systematically deduce causes of data-loss in long-term monitoring networks in urban drainage systems. In order to illustrate the effectiveness of FTA, a fault tree is developed for a monitoring network and FTA is applied to analyze the data yield of a UV/VIS submersible spectrophotometer. Although some of the causes of data-loss cannot be recovered because the historical database of metadata has been updated infrequently, the example points out that FTA still is a powerful tool to analyze the causes of data-loss and provides useful information on effective data-loss prevention.
Information processing requirements for on-board monitoring of automatic landing
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Karmarkar, J. S.
1977-01-01
A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through use of appropriate statistical tests. The time-to-correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
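The matrix-exponential solution that PAWS and STEM implement can be illustrated, in greatly simplified form, with SciPy, whose expm routine uses a scaled Padé approximation. The generator matrix, rates, and mission time below are invented for a three-state toy model and are not representative of a real fault-tolerant architecture.

```python
import numpy as np
from scipy.linalg import expm

# Generator matrix Q for a tiny illustrative Markov model with states:
# 0 = both units good, 1 = one unit failed (reconfigured), 2 = system failed.
lam = 1.0e-4      # failure rate per hour (assumed)
Q = np.array([
    [-2 * lam,  2 * lam,  0.0],
    [ 0.0,     -lam,      lam],
    [ 0.0,      0.0,      0.0],   # absorbing death state
])

p0 = np.array([1.0, 0.0, 0.0])        # start with both units working
t = 10.0                              # mission time in hours
p_t = p0 @ expm(Q * t)                # expm uses a scaled Pade approximation
print("probability of system failure at t:", p_t[2])
```

Stiffness arises when the model mixes very fast recovery rates with very slow failure rates; that is exactly the regime where STEM's scaled Taylor approach and PAWS's Padé approximation trade off speed against precision, as noted above.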
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
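To illustrate the kind of computation PAWS/STEM performs (and SURE bounds), the sketch below solves a tiny continuous-time Markov reliability model by matrix exponentiation. This is a minimal illustration only, not part of the SARA package; the three-state model and the rates are hypothetical.

```python
# Minimal sketch: transient solution of a small Markov reliability model via the
# matrix exponential, the computation that PAWS/STEM performs for much larger models.
import numpy as np
from scipy.linalg import expm

lam = 1e-4   # component failure rate per hour (hypothetical)
mu = 1e-2    # reconfiguration rate per hour (hypothetical)

# States: 0 = both units good, 1 = one unit failed (reconfiguring), 2 = system failed (absorbing).
# Q[i, j] is the rate of moving from state i to state j; each row sums to zero.
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,          0.0,  0.0],
])

p0 = np.array([1.0, 0.0, 0.0])   # start fully operational
T = 10.0                          # mission time in hours (hypothetical)
pT = p0 @ expm(Q * T)             # transient solution: p(T) = p(0) exp(QT)
print("Probability of system failure by time T:", pT[2])
```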
NASA Astrophysics Data System (ADS)
Aydin, Orhun; Caers, Jef Karel
2017-08-01
Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods that address specific sources of fault network uncertainty and complexities of fault modeling exist, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov Chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, where faults are only partially visible and many are missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone tectonics similar to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show that the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.
Varzakas, Theodoros H; Arvanitoyannis, Ioannis S
2007-01-01
The Failure Mode and Effect Analysis (FMEA) model has been applied to the risk assessment of corn curl manufacturing. A tentative application of FMEA to the snacks industry was attempted in an effort to exclude the presence of GMOs in the final product. This is of crucial importance from both the ethical and the legislative (Regulations EC 1829/2003; EC 1830/2003; Directive EC 18/2001) points of view. The Preliminary Hazard Analysis and the Fault Tree Analysis were used to analyze and predict the failure modes occurring in a food chain system (corn curls processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes upon which the system depends. Critical Control Points have been identified and implemented in the cause-and-effect diagram (also known as the Ishikawa, tree, or fishbone diagram). Finally, Pareto diagrams were employed toward optimizing the GMO-detection potential of FMEA.
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2012 CFR
2012-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2014 CFR
2014-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix D to Part 236 - Independent Review of Verification and Validation
Code of Federal Regulations, 2010 CFR
2010-10-01
... standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure Mode and Effects... for each product vulnerability cited by the reviewer; (4) Identification of any documentation or... not properly followed; (6) Identification of the software verification and validation procedures, as...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
Conversion of Questionnaire Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
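To make the conversion step concrete, the following sketch maps adjectival performance ratings to basic-event failure probabilities on a coarse log scale and combines them under an AND gate. The categories and numeric values are hypothetical placeholders, not the NUREG/CR-1278 values or the survey's actual scale.

```python
# Minimal sketch, assuming a simple log-scale mapping from adjectival ratings
# to basic-event failure probabilities; values are illustrative only.
ADJECTIVAL_TO_FAILURE_PROB = {
    "perfect":            1e-4,  # near-zero risk of failure
    "well":               1e-3,
    "adequate":           1e-2,
    "needs improvement":  1e-1,
    "not performed":      1.0,   # task in a state of failure
}

def basic_event_probability(rating: str) -> float:
    """Convert a qualitative MC&A performance rating to a basic-event failure probability."""
    return ADJECTIVAL_TO_FAILURE_PROB[rating.strip().lower()]

# Example: independent basic events feeding an AND gate (all must fail for the gate to fail).
ratings = ["well", "adequate", "needs improvement"]
p_and = 1.0
for r in ratings:
    p_and *= basic_event_probability(r)
print("AND-gate failure probability:", p_and)
```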
3D Modelling of Seismically Active Parts of Underground Faults via Seismic Data Mining
NASA Astrophysics Data System (ADS)
Frantzeskakis, Theofanis; Konstantaras, Anthony
2015-04-01
During the last few years, rapid steps have been taken towards drilling for oil in the western Mediterranean Sea. Since most of the countries in the region benefit mainly from tourism, and considering that the Mediterranean is a closed sea that replenishes its water only once every ninety years, careful measures are being taken to ensure safe drilling. In that context, this research work attempts to derive a three-dimensional model of the seismically active parts of the underlying underground faults in areas of petroleum interest. For that purpose, seismic spatio-temporal clustering has been applied to seismic data to identify potentially distinct seismic regions in the area of interest. The results have been coalesced with two-dimensional maps of underground faults from past surveys, and seismic epicentres, after careful relocation processing, have been used to provide information on the vertical extent of multiple underground faults in the region of interest. The end product is a three-dimensional map of the possible underground location and extent of the seismically active parts of underground faults. Indexing terms: underground fault modelling, seismic data mining, 3D visualisation, active seismic source mapping, seismic hazard evaluation, dangerous phenomena modelling. Acknowledgment: This research work is supported by the ESPA Operational Programme, Education and Life Long Learning, Students Practical Placement Initiative.
The Fault Block Model: A novel approach for faulted gas reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ursin, J.R.; Moerkeseth, P.O.
1994-12-31
The Fault Block Model was designed for the development of gas production from Sleipner Vest. The reservoir consists of marginal marine sandstone of the Hugin Formation. Modeling of highly faulted and compartmentalized reservoirs is severely impeded by the nature and extent of known and undetected faults and, in particular, their effectiveness as flow barriers. For highly faulted reservoirs, the model presented is efficient and superior to other models, i.e. grid-based simulators, because it minimizes the effect of major undetected faults and geological uncertainties. In this article the authors present the Fault Block Model as a new tool to better understand the implications of geological uncertainty in faulted gas reservoirs with good productivity, with respect to uncertainty in well coverage and optimum gas recovery.
Quality-based Multimodal Classification Using Tree-Structured Sparsity
2014-03-08
Bahrampour, Soheil; Ray, Asok; Nasrabadi, Nasser M.
Assessing Institutional Ineffectiveness: A Strategy for Improvement.
ERIC Educational Resources Information Center
Cameron, Kim S.
1984-01-01
Based on the theory that institutional change and improvement are motivated more by knowledge of problems than by knowledge of successes, a fault tree analysis technique using Boolean logic for assessing institutional ineffectiveness by determining weaknesses in the system is presented. Advantages and disadvantages of focusing on weakness rather…
Using faults for PSHA in a volcanic context: the Etna case (Southern Italy)
NASA Astrophysics Data System (ADS)
Azzaro, Raffaele; D'Amico, Salvatore; Gee, Robin; Pace, Bruno; Peruzza, Laura
2016-04-01
At Mt. Etna volcano (Southern Italy), recurrent volcano-tectonic earthquakes affect the urbanised areas, with an overall population of about 400,000 and with important infrastructure and lifelines. For this reason, seismic hazard analyses have been undertaken in the last decade focusing on the capability of local faults to generate damaging earthquakes, especially in the short term (30-5 yrs); these results are intended to complement the regulatory seismic hazard maps and to help establish priorities for the seismic retrofitting of the exposed municipalities. Starting from past experience, in the framework of the V3 Project funded by the Italian Department of Civil Defense, we performed a fully probabilistic seismic hazard assessment using an original definition of seismic sources and ground-motion prediction equations specifically derived for this volcanic area; calculations are referred to a brand-new topographic surface (Mt. Etna reaches more than 3,000 m in elevation in less than 20 km from the coast), and to both Poissonian and time-dependent occurrence models. We first present the process of defining seismic sources, which includes individual faults, seismic zones and gridded seismicity; these are obtained by integrating geological field data with long-term (the historical macroseismic catalogue) and short-term earthquake data (the instrumental catalogue). The analysis of the Frequency Magnitude Distribution identifies areas in the volcanic complex with a- and b-values of the Gutenberg-Richter relationship representative of different dynamic processes. Then, we discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults estimated by using a purely geologic approach. This analysis has been carried out with the software code FiSH, a Matlab® tool developed to turn fault data representative of the seismogenic process into hazard models. The use of a magnitude-size scaling relationship specific to volcanic areas is a key element: the FiSH code may thus calculate the most probable values of characteristic expected magnitude (Mchar) with the associated standard deviation σ, the corresponding mean recurrence times (Tmean) and the aperiodicity factor for each fault. Finally, we show some results obtained with the OpenQuake-engine by considering a conceptual logic tree model organised in several branches (zone and zoneless, historical and geological rates, Poisson and time-dependent assumptions). Maps are referred to various exposure periods (10% probability of exceedance in 30-5 years) and different spectral accelerations. The volcanic region of Mt. Etna represents a perfect lab for fault-based PSHA; the large dataset of input parameters used in the calculations allows testing different methodological approaches and validating some conceptual procedures.
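For the Poissonian occurrence branch mentioned above, the probability of exceedance over an exposure window follows directly from the mean recurrence time. The short sketch below illustrates this relation; the recurrence time and exposure windows are illustrative values, not Etna-specific results.

```python
# Minimal sketch of the Poissonian occurrence model: probability that a fault with
# mean recurrence time Tmean produces at least one characteristic event in a window.
import math

def poisson_exceedance(t_exposure_yr: float, t_mean_yr: float) -> float:
    """P(at least one event in t_exposure) for a Poisson process with rate 1/Tmean."""
    return 1.0 - math.exp(-t_exposure_yr / t_mean_yr)

for window in (5, 30, 50):
    print(window, "yr:", round(poisson_exceedance(window, t_mean_yr=120.0), 3))
```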
Fault compaction and overpressured faults: results from a 3-D model of a ductile fault zone
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Miller, S. A.
2003-10-01
A model of a ductile fault zone is incorporated into a forward 3-D earthquake model to better constrain fault-zone hydraulics. The conceptual framework of the model fault zone was chosen such that two distinct parts are recognized. The fault core, characterized by a relatively low permeability, is composed of a coseismic fault surface embedded in a visco-elastic volume that can creep and compact. The fault core is surrounded by, and mostly sealed from, a high permeability damaged zone. The model fault properties correspond explicitly to those of the coseismic fault core. Porosity and pore pressure evolve to account for the viscous compaction of the fault core, while stresses evolve in response to the applied tectonic loading and to shear creep of the fault itself. A small diffusive leakage is allowed in and out of the fault zone. Coseismically, porosity is created to account for frictional dilatancy. We show in the case of a 3-D fault model with no in-plane flow and constant fluid compressibility, pore pressures do not drop to hydrostatic levels after a seismic rupture, leading to an overpressured weak fault. Since pore pressure plays a key role in the fault behaviour, we investigate coseismic hydraulic property changes. In the full 3-D model, pore pressures vary instantaneously by the poroelastic effect during the propagation of the rupture. Once the stress state stabilizes, pore pressures are incrementally redistributed in the failed patch. We show that the significant effect of pressure-dependent fluid compressibility in the no in-plane flow case becomes a secondary effect when the other spatial dimensions are considered because in-plane flow with a near-lithostatically pressured neighbourhood equilibrates at a pressure much higher than hydrostatic levels, forming persistent high-pressure fluid compartments. If the observed faults are not all overpressured and weak, other mechanisms, not included in this model, must be at work in nature, which need to be investigated. Significant leakage perpendicular to the fault strike (in the case of a young fault), or cracks hydraulically linking the fault core to the damaged zone (for a mature fault) are probable mechanisms for keeping the faults strong and might play a significant role in modulating fault pore pressures. Therefore, fault-normal hydraulic properties of fault zones should be a future focus of field and numerical experiments.
Managing Risk to Ensure a Successful Cassini/Huygens Saturn Orbit Insertion (SOI)
NASA Technical Reports Server (NTRS)
Witkowski, Mona M.; Huh, Shin M.; Burt, John B.; Webster, Julie L.
2004-01-01
I. Design: a) S/C designed to be largely single fault tolerant; b) Operate in flight demonstrated envelope, with margin; c) Strict compliance with requirements & flight rules.
II. Test: a) Baseline, fault & stress testing using flight system testbeds (H/W & S/W); b) In-flight checkout & demos to remove first time events.
III. Failure Analysis: a) Critical event driven fault tree analysis; b) Risk mitigation & development of contingencies.
IV. Residual Risks: a) Accepted pre-launch waivers to Single Point Failures; b) Unavoidable risks (e.g. natural disaster).
V. Mission Assurance: a) Strict process for characterization of variances (ISAs, PFRs & Waivers); b) Full time Mission Assurance Manager reports to Program Manager: 1) Independent assessment of compliance with institutional standards; 2) Oversight & risk assessment of ISAs, PFRs & Waivers etc.; 3) Risk Management Process facilitator.
Water-Tree Modelling and Detection for Underground Cables
NASA Astrophysics Data System (ADS)
Chen, Qi
In recent years, aging infrastructure has become a major concern for the power industry. Since its inception in the early 20th century, the electrical system has been the cornerstone of an industrial society. Stable and uninterrupted delivery of electrical power is now a base necessity for the modern world. As time marches on, however, the electrical infrastructure ages and there is the inevitable need to renew and replace the existing system. Unfortunately, due to time and financial constraints, many electrical systems today are forced to operate beyond their original design and power utilities must find ways to prolong the lifespan of older equipment. Thus, the concept of preventative maintenance arises. Preventative maintenance allows old equipment to operate longer and at better efficiency, but in order to implement preventative maintenance, the operators must know minute details of the electrical system, especially some of the harder to assess issues such as water-trees. Water-tree induced insulation degradation is a problem typically associated with older cable systems. It is a very high impedance phenomenon and it is difficult to detect using traditional methods such as Tan-Delta or Partial Discharge. The proposed dissertation studies water-tree development in underground cables, potential methods to detect water-tree location, and water-tree severity estimation. The dissertation begins by developing mathematical models of water-trees using finite element analysis. The method focuses on surface-originated vented trees, the most prominent type of water-tree fault in the field. Using the standard operation parameters of North American electrical systems, the water-tree boundary conditions are defined. By applying the finite element analysis technique, the complex water-tree structure is broken down into homogeneous components. The result is a generalized representation of water-tree capacitance at different stages of development. The result from the finite element analysis is used to model water-trees in large systems. Both empirical measurements and the mathematical model show that the impedance of an early-stage water-tree is extremely large. As a result, traditional detection methods such as Tan-Delta or Partial Discharge are not effective due to the excessively high accuracy requirement. A high-frequency pulse detection method is developed instead. The water-tree impedance is capacitive in nature and it can be reduced to a manageable level by high-frequency inputs. The method is able to determine the location of early-stage water-trees in long-distance cables using economically feasible equipment. A pattern recognition method is developed to estimate the severity of a water-tree using its pulse response from the high-frequency test method. The early-warning system for water-tree appearance is a tool developed to assist the practical implementation of the high-frequency pulse detection method. Although the equipment used by the detection method is economically feasible, it is still a specialized test and not designed for constant monitoring of the system. The test also places heavy stress on the cable and it is most effective when the cable is taken offline. As a result, utilities need a method to estimate the likelihood of water-tree presence before subjecting the cable to the specialized test. The early-warning system takes advantage of naturally occurring high-frequency events in the system and uses a deviation-comparison method to estimate the probability of water-tree presence on the cable.
If the likelihood is high, the utility can then use the high-frequency pulse detection method to obtain accurate results. Specific pulse response patterns can be used to calculate the capacitance of the water-tree. The calculated result, however, is subject to margins of error due to limitations of the real system. There are both long-term and short-term methods to improve the accuracy. Improving the computation algorithm yields an immediate improvement in the accuracy of the capacitance estimation. The probability distribution of the calculation solution shows that improvements in waveform time-step measurement allow fundamental improvements to the overall result.
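The reason high-frequency excitation makes early-stage water-trees tractable can be seen from the impedance of an ideal capacitor, which falls inversely with frequency. The sketch below illustrates this; the 10 pF value is a hypothetical placeholder, not a result from the dissertation.

```python
# Minimal sketch: magnitude of a purely capacitive impedance, |Z| = 1 / (2*pi*f*C),
# showing why a high-frequency pulse sees a much lower water-tree impedance.
import math

def capacitive_impedance(freq_hz: float, capacitance_f: float) -> float:
    """|Z| of an ideal capacitor at a given frequency."""
    return 1.0 / (2.0 * math.pi * freq_hz * capacitance_f)

C_tree = 10e-12  # 10 pF, hypothetical early-stage water-tree capacitance
for f in (60.0, 1e3, 1e6, 10e6):
    print(f"{f:>10.0f} Hz -> {capacitive_impedance(f, C_tree):.3e} ohms")
```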
Analysis of typical fault-tolerant architectures using HARP
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Bechta Dugan, Joanne; Trivedi, Kishor S.; Rothmann, Elizabeth M.; Smith, W. Earl
1987-01-01
Difficulties encountered in the modeling of fault-tolerant systems are discussed. The Hybrid Automated Reliability Predictor (HARP) approach to modeling fault-tolerant systems is described. The HARP is written in FORTRAN, consists of nearly 30,000 lines of code and comments, and is based on behavioral decomposition. Using the behavioral decomposition, the dependability model is divided into fault-occurrence/repair and fault/error-handling models; the characteristics and combination of these two models are examined. Examples in which the HARP is applied to the modeling of some typical fault-tolerant systems, including a local-area network, two fault-tolerant computer systems, and a flight control system, are presented.
Columbia Accident Investigation Board Report. Volume Two
NASA Technical Reports Server (NTRS)
Barry, J. R.; Jenkins, D. R.; White, D. J.; Goodman, P. A.; Reingold, L. A.
2003-01-01
Volume II of the Report contains appendices that were cited in Volume I. The Columbia Accident Investigation Board produced many of these appendices as working papers during the investigation into the February 1, 2003 destruction of the Space Shuttle Columbia. Other appendices were produced by other organizations (mainly NASA) in support of the Board investigation. In the case of documents that have been published by others, they are included here in the interest of establishing a complete record, but often at less than full page size. Contents include: CAIB Technical Documents Cited in the Report: Reader's Guide to Volume II; Appendix D. a Supplement to the Report; Appendix D.b Corrections to Volume I of the Report; Appendix D.1 STS-107 Training Investigation; Appendix D.2 Payload Operations Checklist 3; Appendix D.3 Fault Tree Closure Summary; Appendix D.4 Fault Tree Elements - Not Closed; Appendix D.5 Space Weather Conditions; Appendix D.6 Payload and Payload Integration; Appendix D.7 Working Scenario; Appendix D.8 Debris Transport Analysis; Appendix D.9 Data Review and Timeline Reconstruction Report; Appendix D.10 Debris Recovery; Appendix D.11 STS-107 Columbia Reconstruction Report; Appendix D.12 Impact Modeling; Appendix D.13 STS-107 In-Flight Options Assessment; Appendix D.14 Orbiter Major Modification (OMM) Review; Appendix D.15 Maintenance, Material, and Management Inputs; Appendix D.16 Public Safety Analysis; Appendix D.17 MER Manager's Tiger Team Checklist; Appendix D.18 Past Reports Review; Appendix D.19 Qualification and Interpretation of Sensor Data from STS-107; Appendix D.20 Bolt Catcher Debris Analysis.
A distributed fault-detection and diagnosis system using on-line parameter estimation
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1991-01-01
The development of a model-based fault-detection and diagnosis system (FDD) is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system from comparison of the measurements of the system with a priori information represented by the model of the system. The method of modeling a complex system is described and a description of diagnosis models which include process faults is presented. There are three distinct classes of fault modes covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting the fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step is composed of a group of hypothesis testing modules (HTM) operating in parallel to test each class of faults. The second step is the fault diagnosis module, which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements.
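The flavor of the fault-tree arithmetic behind such an evaluation can be illustrated with a toy series/parallel availability calculation; the topology and device availabilities below are hypothetical, and the sketch is not the authors' automatic fault-tree generator.

```python
# Minimal sketch: availability of a sensor whose data must reach the sink through
# either of two redundant routers, assuming independent device failures.
def series(*avail):          # all devices needed (any failure fails the path)
    p = 1.0
    for a in avail:
        p *= a
    return p

def parallel(*avail):        # any one device suffices (all must fail to fail the path)
    q = 1.0
    for a in avail:
        q *= (1.0 - a)
    return 1.0 - q

a_sensor, a_router1, a_router2, a_sink = 0.995, 0.99, 0.98, 0.999  # hypothetical
path = series(a_sensor, parallel(a_router1, a_router2), a_sink)
print("End-to-end availability:", round(path, 6))
```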
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes; 2) as a result of imperfect debugging, fault removal is associated with a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model aiming to incorporate the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
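For readers unfamiliar with NHPP growth models, the classical Goel-Okumoto mean value function gives the flavor of this model class; the sketch below is a generic illustration with hypothetical parameters, not the testing-coverage model proposed in the paper.

```python
# Minimal sketch of a classical NHPP software reliability growth model (Goel-Okumoto):
# the expected cumulative number of faults detected by time t is m(t) = a * (1 - e^{-bt}).
import math

def mean_failures(t: float, a: float, b: float) -> float:
    """Expected cumulative faults detected by time t under the Goel-Okumoto model."""
    return a * (1.0 - math.exp(-b * t))

a, b = 120.0, 0.05        # a: total expected faults, b: detection rate (hypothetical)
for t in (10, 50, 100):
    print(t, round(mean_failures(t, a, b), 1))
```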
Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model
Lu, Feng; Huang, Jinquan; Xing, Yaodong
2012-01-01
Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for turbo-shaft engine key sensors is mainly based on a dual-redundancy technique, which in some cases is insufficient to reach a judgment. The simplified on-board model provides the analytical third channel against which the dual-channel measurements are compared, whereas additional hardware redundancy would increase structural complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads; it is built using the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645
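A minimal sketch of the triplex comparison logic described above follows; the tolerance, readings and voting rules are hypothetical simplifications standing in for the paper's FDD logic, not its actual implementation.

```python
# Minimal sketch: compare two sensor channels against a model-based analytical channel
# and isolate the channel that disagrees with the other two beyond a tolerance.
def diagnose(sensor_a: float, sensor_b: float, model_est: float, tol: float) -> str:
    """Return which channel (if any) disagrees with the other two beyond tol."""
    ab = abs(sensor_a - sensor_b) <= tol
    am = abs(sensor_a - model_est) <= tol
    bm = abs(sensor_b - model_est) <= tol
    if ab and am and bm:
        return "all channels consistent"
    if bm and not ab and not am:
        return "sensor A faulty; use sensor B"
    if am and not ab and not bm:
        return "sensor B faulty; use sensor A"
    return "model/channel mismatch; further isolation needed"

print(diagnose(101.0, 100.5, 100.8, tol=1.0))   # consistent readings
print(diagnose(110.0, 100.5, 100.8, tol=1.0))   # step fault on sensor A
```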
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
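The within-model/between-model split referred to above corresponds to the standard Bayesian model averaging variance decomposition, written below in generic notation as an illustration (this is not the hierarchical BMA tree notation of the cited papers):

```latex
% Standard BMA posterior mean and variance decomposition; \Delta is the predicted
% quantity, M_k the candidate models, D the data (notation illustrative).
\begin{align}
\operatorname{E}[\Delta \mid D] &= \sum_k \operatorname{E}[\Delta \mid M_k, D]\, p(M_k \mid D), \\
\operatorname{Var}[\Delta \mid D] &=
\underbrace{\sum_k \operatorname{Var}[\Delta \mid M_k, D]\, p(M_k \mid D)}_{\text{within-model variance}}
+ \underbrace{\sum_k \bigl(\operatorname{E}[\Delta \mid M_k, D] - \operatorname{E}[\Delta \mid D]\bigr)^{2}\, p(M_k \mid D)}_{\text{between-model variance}}.
\end{align}
```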
NASA Technical Reports Server (NTRS)
Guarro, Sergio B.
2010-01-01
This report validates and documents the detailed features and practical application of the framework for software-intensive digital systems risk assessment and risk-informed safety assurance presented in the NASA PRA Procedures Guide for Managers and Practitioners. This framework, called herein the "Context-based Software Risk Model" (CSRM), enables the assessment of the contribution of software and software-intensive digital systems to overall system risk, in a manner which is entirely compatible and integrated with the format of a "standard" Probabilistic Risk Assessment (PRA), as currently documented and applied for NASA missions and applications. The CSRM also provides a risk-informed path and criteria for conducting organized and systematic digital system and software testing so that, within this risk-informed paradigm, the achievement of a quantitatively defined level of safety and mission success assurance may be targeted and demonstrated. The framework is based on the concept of context-dependent software risk scenarios and on the modeling of such scenarios via the use of traditional PRA techniques - i.e., event trees and fault trees - in combination with more advanced modeling devices such as the Dynamic Flowgraph Methodology (DFM) or other dynamic logic-modeling representations. The scenarios can be synthesized and quantified in a conditional logic and probabilistic formulation. The application of the CSRM method documented in this report refers to the MiniAERCam system designed and developed by the NASA Johnson Space Center.
Communications and tracking expert systems study
NASA Technical Reports Server (NTRS)
Leibfried, T. F.; Feagin, Terry; Overland, David
1987-01-01
The original objectives of the study consisted of five broad areas of investigation: criteria and issues for explanation of communication and tracking system anomaly detection, isolation, and recovery; data storage simplification issues for fault detection expert systems; data selection procedures for decision tree pruning and optimization to enhance the abstraction of pertinent information for clear explanation; criteria for establishing levels of explanation suited to needs; and analysis of expert system interaction and modularization. Progress was made in all areas, but to a lesser extent in the criteria for establishing levels of explanation suited to needs. Among the types of expert systems studied were those related to anomaly or fault detection, isolation, and recovery.
[Medical Equipment Maintenance Methods].
Liu, Hongbin
2015-09-01
The advanced technology and complexity of medical equipment, together with its safety and effectiveness requirements, place high demands on medical equipment maintenance work. This paper introduces some basic methods of medical instrument maintenance, including fault tree analysis, the node method and the exclusive method, three important methods in medical equipment maintenance; by using these three methods on instruments for which circuit drawings are available, hardware breakdown maintenance can be carried out easily. The paper also introduces approaches for handling some special fault conditions, in order to avoid detours when the same problems are encountered again. Such learning is very important for staff newly engaged in this area.
A Simplified Model for Multiphase Leakage through Faults with Applications for CO2 Storage
NASA Astrophysics Data System (ADS)
Watson, F. E.; Doster, F.
2017-12-01
In the context of geological CO2 storage, faults in the subsurface could affect storage security by acting as high-permeability pathways which allow CO2 to flow upwards and away from the storage formation. To assess the likelihood of leakage through faults and the impacts faults might have on storage security, numerical models are required. However, faults are complex geological features, usually consisting of a fault core surrounded by a highly fractured damage zone. A direct representation of these in a numerical model would require very fine grid resolution and would be computationally expensive. Here, we present the development of a reduced-complexity model for fault flow using the vertically integrated formulation. This model captures the main features of the flow but does not require us to resolve the vertical dimension, nor the fault in the horizontal dimension, explicitly. It is thus less computationally expensive than full-resolution models. Consequently, we can quickly model many realisations for parameter uncertainty studies of CO2 injection into faulted reservoirs. We develop the model based on explicitly simulating local 3D representations of faults for characteristic scenarios using the Matlab Reservoir Simulation Toolbox (MRST). We have assessed the impact of variables such as fault geometry, porosity and permeability on multiphase leakage rates.
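As a rough order-of-magnitude check on the leakage rates such a model produces, a single-phase Darcy estimate is sometimes useful; the sketch below is such a back-of-the-envelope calculation with hypothetical parameters, not the vertically integrated multiphase model itself.

```python
# Minimal sketch: single-phase Darcy flux through a conductive fault pathway,
# q = k * dP / (mu * L); all parameter values are hypothetical.
def darcy_flux(k_m2: float, dp_pa: float, mu_pa_s: float, length_m: float) -> float:
    """Volumetric flux (m/s) through a pathway of permeability k over length L."""
    return k_m2 * dp_pa / (mu_pa_s * length_m)

q = darcy_flux(k_m2=1e-13, dp_pa=2e6, mu_pa_s=5e-5, length_m=500.0)
print("Darcy flux through fault:", q, "m/s")
```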
Mori, J.
1996-01-01
Details of the M 4.3 foreshock to the Joshua Tree earthquake were studied using P waves recorded on the Southern California Seismic Network and the Anza network. Deconvolution, using an M 2.4 event as an empirical Green's function, corrected for complicated path and site effects in the seismograms and produced simple far-field displacement pulses that were inverted for a slip distribution. Both possible fault planes, north-south and east-west, for the focal mechanism were tested by a least-squares inversion procedure with a range of rupture velocities. The results showed that the foreshock ruptured the north-south plane, similar to the mainshock. The foreshock initiated a few hundred meters south of the mainshock and ruptured to the north, toward the mainshock hypocenter. The mainshock (M 6.1) initiated near the northern edge of the foreshock rupture 2 hr later. The foreshock had a high stress drop (320 to 800 bars) and broke a small portion of the fault adjacent to the mainshock but was not able to immediately initiate the mainshock rupture.
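The empirical Green's function step described above amounts to spectral division with regularization; the sketch below shows the general technique with synthetic traces standing in for recorded waveforms, and the water-level fraction is a hypothetical choice, not the processing used in the study.

```python
# Minimal sketch: empirical Green's function deconvolution by water-level spectral
# division, returning the relative source-time function of the larger event.
import numpy as np

def egf_deconvolve(mainshock: np.ndarray, egf: np.ndarray, water_level: float = 0.01) -> np.ndarray:
    """Deconvolve the small-event (EGF) record from the mainshock record."""
    n = max(len(mainshock), len(egf))
    M = np.fft.rfft(mainshock, n)
    G = np.fft.rfft(egf, n)
    # Regularize small spectral amplitudes of the Green's function.
    floor = water_level * np.max(np.abs(G))
    Gs = np.where(np.abs(G) < floor, floor * np.exp(1j * np.angle(G)), G)
    return np.fft.irfft(M / Gs, n)

# Example with synthetic traces (random noise stands in for recorded waveforms).
rng = np.random.default_rng(0)
rstf = egf_deconvolve(rng.standard_normal(512), rng.standard_normal(512))
print(rstf.shape)
```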
Risk-informed Maintenance for Non-coherent Systems
NASA Astrophysics Data System (ADS)
Tao, Ye
Probabilistic Safety Assessment (PSA) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity. The information provided by PSA has been increasingly implemented for regulatory purposes but rarely used in providing information for operation and maintenance activities. As one of the key parts in PSA, Fault Tree Analysis (FTA) attempts to model and analyze failure processes of engineering and biological systems. The fault trees are composed of logic diagrams that display the state of the system and are constructed using graphical design techniques. Risk Importance Measures (RIMs) are information that can be obtained from both qualitative and quantitative aspects of FTA. Components within a system can be ranked with respect to each specific criterion defined by each RIM. Through a RIM, a ranking of the components or basic events can be obtained and provide valuable information for risk-informed decision making. Various RIMs have been applied in various applications. In order to provide a thorough understanding of RIMs and interpret the results, they are categorized with respect to risk significance (RS) and safety significance (SS) in this thesis. This has also tied them into different maintenance activities. When RIMs are used for maintenance purposes, it is called risk-informed maintenance. On the other hand, the majority of work produced on the FTA method has been concentrated on failure logic diagrams restricted to the direct or implied use of AND and OR operators. Such systems are considered as coherent systems. However, the NOT logic can also contribute to the information produced by PSA. The importance analysis of non-coherent systems is rather limited, even though the field has received more and more attention over the years. The non-coherent systems introduce difficulties in both qualitative and quantitative assessment of the fault tree compared with the coherent systems. In this thesis, a set of RIMs is analyzed and investigated. The 8 commonly used RIMs (Birnbaum's Measure, Criticality Importance Factor, Fussell-Vesely Measure, Improvement Potential, Conditional Probability, Risk Achievement, Risk Achievement Worth, and Risk Reduction Worth) are extended to non-coherent forms. Both coherent and non-coherent forms are classified into different categories in order to assist different types of maintenance activities. The real systems such as the Steam Generator Level Control System in CANDU Nuclear Power Plant (NPP), a Gas Detection System, and the Automatic Power Control System of the experimental nuclear reactor are presented to demonstrate the application of the results as case studies.
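Two of the coherent importance measures named above can be illustrated on a toy fault tree; the structure and probabilities below are hypothetical, and the non-coherent extensions developed in the thesis are not shown.

```python
# Minimal sketch: Birnbaum and Fussell-Vesely importance of basic event A for the
# hypothetical coherent fault tree TOP = A OR (B AND C), with independent events.
def top_probability(pA: float, pB: float, pC: float) -> float:
    """P(TOP) for TOP = A OR (B AND C), assuming independent basic events."""
    p_bc = pB * pC
    return pA + p_bc - pA * p_bc

p = {"A": 1e-3, "B": 1e-2, "C": 5e-2}
p_top = top_probability(p["A"], p["B"], p["C"])

# Birnbaum importance of A: P(TOP | A occurs) - P(TOP | A does not occur).
birnbaum_A = top_probability(1.0, p["B"], p["C"]) - top_probability(0.0, p["B"], p["C"])

# Fussell-Vesely importance of A: fraction of top-event probability due to cut sets
# containing A (here the minimal cut set {A} itself).
fv_A = p["A"] / p_top

print("P(TOP) =", p_top, " Birnbaum(A) =", birnbaum_A, " FV(A) =", fv_A)
```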
Langridge, R.M.; Stenner, Heidi D.; Fumal, T.E.; Christofferson, S.A.; Rockwell, T.K.; Hartleb, R.D.; Bachhuber, J.; Barka, A.A.
2002-01-01
The Mw 7.4 17 August 1999 İzmit earthquake ruptured five major fault segments of the dextral North Anatolian Fault Zone. The 26-km-long, N86°W-trending Sakarya fault segment (SFS) extends from the Sapanca releasing step-over in the west to near the town of Akyazi in the east. The SFS emerges from Lake Sapanca as two distinct fault traces that rejoin to traverse the Adapazari Plain to Akyazi. Offsets were measured across 88 cultural and natural features that cross the fault, such as roads, cornfield rows, rows of trees, walls, rails, field margins, ditches, vehicle ruts, a dike, and ground cracks. The maximum displacement observed for the İzmit earthquake (∼5.1 m) was encountered on this segment. Dextral displacement for the SFS rises from less than 1 m at Lake Sapanca to greater than 5 m near Arifiye, only 3 km away. Average slip decreases uniformly to the east from Arifiye until the fault steps left from Sagir to Kazanci to the N75°W, 6-km-long Akyazi strand, where slip drops to less than 1 m. The Akyazi strand passes eastward into the Akyazi Bend, which consists of a high-angle bend (18°-29°) between the Sakarya and Karadere fault segments, a 6-km gap in surface rupture, and high aftershock energy release. Complex structural geometries exist between the İzmit, Düzce, and 1967 Mudurnu fault segments that have arrested surface ruptures on timescales ranging from 30 sec to 88 days to 32 yr. The largest of these step-overs may have acted as a rupture segmentation boundary in previous earthquake cycles.
Fitzenz, D.D.; Miller, S.A.
2004-01-01
Understanding the stress field surrounding and driving active fault systems is an important component of mechanistic seismic hazard assessment. We develop and present results from a time-forward three-dimensional (3-D) model of the San Andreas fault system near its Big Bend in southern California. The model boundary conditions are assessed by comparing model and observed tectonic regimes. The model of earthquake generation along two fault segments is used to target measurable properties (e.g., stress orientations, heat flow) that may allow inferences on the stress state on the faults. It is a quasi-static model, where GPS-constrained tectonic loading drives faults modeled as mostly sealed viscoelastic bodies embedded in an elastic half-space subjected to compaction and shear creep. A transpressive tectonic regime develops southwest of the model bend as a result of the tectonic loading and migrates toward the bend because of fault slip. The strength of the model faults is assessed on the basis of stress orientations, stress drop, and overpressures, showing a departure in the behavior of 3-D finite faults compared to models of 1-D or homogeneous infinite faults. At a smaller scale, stress transfers from fault slip transiently induce significant perturbations in the local stress tensors (where the slip profile is very heterogeneous). These stress rotations disappear when subsequent model earthquakes smooth the slip profile. Maps of maximum absolute shear stress emphasize both that (1) future models should include a more continuous representation of the faults and (2) that hydrostatically pressured intact rock is very difficult to break when no material weakness is considered. Copyright 2004 by the American Geophysical Union.
Using Remote Sensing Data to Constrain Models of Fault Interactions and Plate Boundary Deformation
NASA Astrophysics Data System (ADS)
Glasscoe, M. T.; Donnellan, A.; Lyzenga, G. A.; Parker, J. W.; Milliner, C. W. D.
2016-12-01
Determining the distribution of slip and behavior of fault interactions at plate boundaries is a complex problem. Field and remotely sensed data often lack the necessary coverage to fully resolve fault behavior. However, realistic physical models may be used to more accurately characterize the complex behavior of faults constrained with observed data, such as GPS, InSAR, and SfM. These results will improve the utility of using combined models and data to estimate earthquake potential and characterize plate boundary behavior. Plate boundary faults exhibit complex behavior, with partitioned slip and distributed deformation. To investigate what fraction of slip becomes distributed deformation off major faults, we examine a model fault embedded within a damage zone of reduced elastic rigidity that narrows with depth and forward model the slip and resulting surface deformation. The fault segments and slip distributions are modeled using the JPL GeoFEST software. GeoFEST (Geophysical Finite Element Simulation Tool) is a two- and three-dimensional finite element software package for modeling solid stress and strain in geophysical and other continuum domain applications [Lyzenga, et al., 2000; Glasscoe, et al., 2004; Parker, et al., 2008, 2010]. New methods to advance geohazards research using computer simulations and remotely sensed observations for model validation are required to understand fault slip, the complex nature of fault interaction and plate boundary deformation. These models help enhance our understanding of the underlying processes, such as transient deformation and fault creep, and can aid in developing observation strategies for sUAV, airborne, and upcoming satellite missions seeking to determine how faults behave and interact and assess their associated hazard. Models will also help to characterize this behavior, which will enable improvements in hazard estimation. Validating the model results against remotely sensed observations will allow us to better constrain fault zone rheology and physical properties, having implications for the overall understanding of earthquake physics, fault interactions, plate boundary deformation and earthquake hazard, preparedness and risk reduction.
Three-dimensional models of deformation near strike-slip faults
ten Brink, Uri S.; Katzman, Rafael; Lin, J.
1996-01-01
We use three-dimensional elastic models to help guide the kinematic interpretation of crustal deformation associated with strike-slip faults. Deformation of the brittle upper crust in the vicinity of strike-slip fault systems is modeled with the assumption that upper crustal deformation is driven by the relative plate motion in the upper mantle. The driving motion is represented by displacement that is specified on the bottom of a 15-km-thick elastic upper crust everywhere except in a zone of finite width in the vicinity of the faults, which we term the "shear zone." Stress-free basal boundary conditions are specified within the shear zone. The basal driving displacement is either pure strike slip or strike slip with a small oblique component, and the geometry of the fault system includes a single fault, several parallel faults, and overlapping en echelon faults. We examine the variations in deformation due to changes in the width of the shear zone and due to changes in the shear strength of the faults. In models with weak faults the width of the shear zone has a considerable effect on the surficial extent and amplitude of the vertical and horizontal deformation and on the amount of rotation around horizontal and vertical axes. Strong fault models have more localized deformation at the tip of the faults, and the deformation is partly distributed outside the fault zone. The dimensions of large basins along strike-slip faults, such as the Rukwa and Dead Sea basins, and the absence of uplift around pull-apart basins fit models with weak faults better than models with strong faults. Our models also suggest that the length-to-width ratio of pull-apart basins depends on the width of the shear zone and the shear strength of the faults and is not constant as previously suggested. We show that pure strike-slip motion can produce tectonic features, such as elongate half grabens along a single fault, rotated blocks at the ends of parallel faults, or extension perpendicular to overlapping en echelon faults, which can be misinterpreted to indicate a regional component of extension. Zones of subsidence or uplift can become wider than expected for transform plate boundaries when a minor component of oblique motion is added to a system of parallel strike-slip faults.
Certification trails for data structures
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. The applicability of the certification trail technique is significantly generalized. Previously, certification trails had to be customized to each algorithm application; here, trails appropriate to wide classes of algorithms were developed. These certification trails are based on common data-structure operations such as those carried out using balanced binary trees and heaps. Any algorithm using these sets of operations can therefore employ the certification trail method to achieve software fault tolerance. To exemplify the scope of this generalization, constructions of trails for abstract data types such as priority queues and union-find structures are given. These trails are applicable to any data-structure implementation of the abstract data type. It is also shown that these ideas lead naturally to monitors for data-structure operations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappa, F.; Rutqvist, J.
2010-06-01
The interaction between mechanical deformation and fluid flow in fault zones gives rise to a host of coupled hydromechanical processes fundamental to fault instability, induced seismicity, and associated fluid migration. In this paper, we discuss these coupled processes in general and describe three modeling approaches that have been considered to analyze fluid flow and stress coupling in fault-instability processes. First, fault hydromechanical models were tested to investigate fault behavior using different mechanical modeling approaches, including slip interface and finite-thickness elements with isotropic or anisotropic elasto-plastic constitutive models. The results of this investigation showed that fault hydromechanical behavior can be appropriately represented with the least complex alternative, using a finite-thickness element and isotropic plasticity. We utilized this pragmatic approach coupled with a strain-permeability model to study hydromechanical effects on fault instability during deep underground injection of CO2. We demonstrated how such a modeling approach can be applied to determine the likelihood of fault reactivation and to estimate the associated loss of CO2 from the injection zone. It is shown that shear-enhanced permeability initiated where the fault intersects the injection zone plays an important role in propagating fault instability and permeability enhancement through the overlying caprock.
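As a rough, stand-alone illustration of the fault-reactivation question addressed above (this is not the authors' coupled hydromechanical code), the sketch below evaluates a Mohr-Coulomb reactivation criterion on a fault plane as injection raises pore pressure; every number is a placeholder.

```python
import numpy as np

# Assumed principal stresses (compression positive, MPa) and fault geometry.
sigma1, sigma3 = 60.0, 35.0        # maximum and minimum principal stresses
theta = np.radians(60.0)           # angle between the fault normal and sigma1
mu_f, cohesion = 0.6, 0.0          # assumed fault friction coefficient and cohesion

# Resolve normal and shear stress on the fault plane (2-D Mohr-circle relations).
sn = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * np.cos(2 * theta)
tau = 0.5 * (sigma1 - sigma3) * np.sin(2 * theta)

for p in np.arange(0.0, 30.0, 5.0):            # pore pressure rising during injection (MPa)
    cff = tau - (cohesion + mu_f * (sn - p))   # Coulomb failure function
    status = "reactivation likely" if cff >= 0 else "stable"
    print(f"p = {p:4.1f} MPa   CFF = {cff:6.2f} MPa   -> {status}")
```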
Formal Validation of Fault Management Design Solutions
NASA Technical Reports Server (NTRS)
Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John
2013-01-01
The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.
Block rotations, fault domains and crustal deformation in the western US
NASA Technical Reports Server (NTRS)
Nur, Amos
1990-01-01
The aim of the project was to develop a 3D model of crustal deformation by distributed fault sets and to test the model results in the field. In the first part of the project, Nur's 2D model (1986) was generalized to 3D. In Nur's model the frictional strength of rocks and faults of a domain provides a tight constraint on the amount of rotation that a fault set can undergo during block rotation. Domains of fault sets are commonly found in regions where the deformation is distributed across a region. The interaction of each fault set causes the fault bounded blocks to rotate. The work that has been done towards quantifying the rotation of fault sets in a 3D stress field is briefly summarized. In the second part of the project, field studies were carried out in Israel, Nevada and China. These studies combined both paleomagnetic and structural information necessary to test the block rotation model results. In accordance with the model, field studies demonstrate that faults and attending fault bounded blocks slip and rotate away from the direction of maximum compression when deformation is distributed across fault sets. Slip and rotation of fault sets may continue as long as the earth's crustal strength is not exceeded. More optimally oriented faults must form, for subsequent deformation to occur. Eventually the block rotation mechanism may create a complex pattern of intersecting generations of faults.
Chip level modeling of LSI devices
NASA Technical Reports Server (NTRS)
Armstrong, J. R.
1984-01-01
The advent of Very Large Scale Integration (VLSI) technology has rendered the gate level model impractical for many simulation activities critical to the design automation process. As an alternative, an approach to the modeling of VLSI devices at the chip level is described, including the specification of modeling language constructs important to the modeling process. A model structure is presented in which models of the LSI devices are constructed as single entities. The modeling structure is two layered. The functional layer in this structure is used to model the input/output response of the LSI chip. A second layer, the fault mapping layer, is added, if fault simulations are required, in order to map the effects of hardware faults onto the functional layer. Modeling examples for each layer are presented. Fault modeling at the chip level is described. Approaches to realistic functional fault selection and defining fault coverage for functional faults are given. Application of the modeling techniques to single chip and bit slice microprocessors is discussed.
Post-seismic and interseismic fault creep I: model description
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Simons, M.; Dunham, E. M.
2010-04-01
We present a model of localized, aseismic fault creep during the full interseismic period, including both transient and steady fault creep, in response to a sequence of imposed coseismic slip events and tectonic loading. We consider the behaviour of models with linear viscous, non-linear viscous, rate-dependent friction, and rate- and state-dependent friction fault rheologies. Both the transient post-seismic creep and the pattern of steady interseismic creep rates surrounding asperities depend on recent coseismic slip and fault rheologies. In these models, post-seismic fault creep is manifest as pulses of elevated creep rates that propagate from the coseismic slip; these pulses feature sharper fronts and are longer lived in models with rate-state friction compared to other models. With small characteristic slip distances in rate-state friction models, interseismic creep is similar to that in models with rate-dependent friction faults, except for the earliest periods of post-seismic creep. Our model can be used to constrain fault rheologies from geodetic observations in cases where the coseismic slip history is relatively well known. When only considering surface deformation over a short period of time, there are strong trade-offs between fault rheology and the details of the imposed coseismic slip. Geodetic observations over longer times following an earthquake will reduce these trade-offs, while simultaneous modelling of interseismic and post-seismic observations provides the strongest constraints on fault rheologies.
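A toy spring-slider calculation can illustrate the basic ingredient of such models: a rate-dependent friction patch, perturbed by an imposed coseismic stress step, relaxes by accelerated creep that decays toward the background loading rate. This sketch is illustrative only and is not the authors' model; all parameter values are assumed.

```python
import numpy as np

# Assumed parameters (illustrative only).
a_sigma = 0.5e6        # a * effective normal stress [Pa]: strength of rate dependence
k = 1.0e6              # elastic loading stiffness of the creeping patch [Pa/m]
v_plate = 1e-9         # long-term loading rate [m/s]
v0, tau0 = v_plate, 0.0
dtau_step = 1.0e6      # stress step imposed on the patch by nearby coseismic slip [Pa]

def creep_rate(tau):
    # Rate-dependent friction: slip rate responds exponentially to stress changes.
    return v0 * np.exp((tau - tau0) / a_sigma)

# Forward-Euler integration of the spring-slider stress balance dtau/dt = k*(v_plate - v).
t_end, n = 5 * 365.25 * 86400.0, 200000
dt = t_end / n
tau, slip = tau0 + dtau_step, 0.0
for _ in range(n):
    v = creep_rate(tau)
    tau += dt * k * (v_plate - v)
    slip += dt * v

print(f"cumulative afterslip over 5 yr: {slip:.2f} m "
      f"(vs {v_plate * t_end:.2f} m of steady creep)")
```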
NASA Astrophysics Data System (ADS)
Dygert, Nick; Liang, Yan
2015-06-01
Mantle peridotites from ophiolites are commonly interpreted as having mid-ocean ridge (MOR) or supra-subduction zone (SSZ) affinity. Recently, an REE-in-two-pyroxene thermometer was developed (Liang et al., 2013) that has higher closure temperatures (designated as TREE) than major element based two-pyroxene thermometers for mafic and ultramafic rocks that experienced cooling. The REE-in-two-pyroxene thermometer has the potential to extract meaningful cooling rates from ophiolitic peridotites and thus shed new light on the thermal history of the different tectonic regimes. We calculated TREE for available literature data from abyssal peridotites, subcontinental (SC) peridotites, and ophiolites around the world (Alps, Coast Range, Corsica, New Caledonia, Oman, Othris, Puerto Rico, Russia, and Turkey), and augmented the data with new measurements for peridotites from the Trinity and Josephine ophiolites and the Mariana trench. TREE are compared to major element based thermometers, including the two-pyroxene thermometer of Brey and Köhler (1990) (TBKN). Samples with SC affinity have TREE and TBKN in good agreement. Samples with MOR and SSZ affinity have near-solidus TREE but TBKN hundreds of degrees lower. Closure temperatures for REE and Fe-Mg in pyroxenes were calculated to compare cooling rates among abyssal peridotites, MOR ophiolites, and SSZ ophiolites. Abyssal peridotites appear to cool more rapidly than peridotites from most ophiolites. On average, SSZ ophiolites have lower closure temperatures than abyssal peridotites and many ophiolites with MOR affinity. We propose that these lower temperatures can be attributed to the residence time in the cooling oceanic lithosphere prior to obduction. MOR ophiolites define a continuum spanning cooling rates from SSZ ophiolites to abyssal peridotites. Consistent high closure temperatures for abyssal peridotites and the Oman and Corsica ophiolites suggest hydrothermal circulation and/or rapid cooling events (e.g., normal faulting, unroofing) control the late thermal histories of peridotites from transform faults and slow and fast spreading centers with or without a crustal section.
NASA Technical Reports Server (NTRS)
Bennett, Richard A.; Reilinger, Robert E.; Rodi, William; Li, Yingping; Toksoz, M. Nafi; Hudnut, Ken
1995-01-01
Coseismic surface deformation associated with the M(sub w) 6.1, April 23, 1992, Joshua Tree earthquake is well represented by estimates of geodetic monument displacements at 20 locations independently derived from Global Positioning System and trilateration measurements. The rms signal to noise ratio for these inferred displacements is 1.8 with near-fault displacement estimates exceeding 40 mm. In order to determine the long-wavelength distribution of slip over the plane of rupture, a Tikhonov regularization operator is applied to these estimates which minimizes stress variability subject to purely right-lateral slip and zero surface slip constraints. The resulting slip distribution yields a geodetic moment estimate of 1.7 x 10(exp 18) N m with corresponding maximum slip around 0.8 m and compares well with independent and complementary information including seismic moment and source time function estimates and main shock and aftershock locations. From empirical Green's functions analyses, a rupture duration of 5 s is obtained which implies a rupture radius of 6-8 km. Most of the inferred slip lies to the north of the hypocenter, consistent with northward rupture propagation. Stress drop estimates are in the range of 2-4 MPa. In addition, predicted Coulomb stress increases correlate remarkably well with the distribution of aftershock hypocenters; most of the aftershocks occur in areas for which the mainshock rupture produced stress increases larger than about 0.1 MPa. In contrast, predicted stress changes are near zero at the hypocenter of the M(sub w) 7.3, June 28, 1992, Landers earthquake which nucleated about 20 km beyond the northernmost edge of the Joshua Tree rupture. Based on aftershock migrations and the predicted static stress field, we speculate that redistribution of Joshua Tree-induced stress perturbations played a role in the spatio-temporal development of the earthquake sequence culminating in the Landers event.
NASA Astrophysics Data System (ADS)
Cooke, M. L.; Fattaruso, L.; Dorsey, R. J.; Housen, B. A.
2015-12-01
Between ~1.5 and 1.1 Ma, the southern San Andreas fault system underwent a major reorganization that included initiation of the San Jacinto fault and termination of slip on the extensional West Salton detachment fault. The southern San Andreas fault itself has also evolved since this time, with several shifts in activity among fault strands within San Gorgonio Pass. We use three-dimensional mechanical Boundary Element Method models to investigate the impact of these changes to the fault network on deformation patterns. A series of snapshot models of the succession of active fault geometries explore the role of fault interaction and tectonic loading in abandonment of the West Salton detachment fault, initiation of the San Jacinto fault, and shifts in activity of the San Andreas fault. Interpreted changes to uplift patterns are well matched by model results. These results support the idea that growth of the San Jacinto fault led to increased uplift rates in the San Gabriel Mountains and decreased uplift rates in the San Bernardino Mountains. Comparison of model results for vertical axis rotation to data from paleomagnetic studies reveals a good match to local rotation patterns in the Mecca Hills and Borrego Badlands. We explore the mechanical efficiency at each step in the evolution, and find an overall trend toward increased efficiency through time. Strain energy density patterns are used to identify regions of off-fault deformation and potential incipient faulting. These patterns support the notion of north-to-south propagation of the San Jacinto fault during its initiation. The results of the present-day model are compared with microseismicity focal mechanisms to provide additional insight into the patterns of off-fault deformation within the southern San Andreas fault system.
The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan
NASA Astrophysics Data System (ADS)
Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.
2011-12-01
Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~80 mm/yr in a ~N118E direction. The plate motion is so active that earthquakes are very frequent. In the Taiwan area, disaster-inducing earthquakes often result from active faults. For this reason, it is important to understand the activity and hazard of active faults. The active faults in Taiwan are mainly located in the Western Foothills and the Eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows that there are 31 active faults on the island of Taiwan, some of which are related to earthquakes. Many researchers have investigated these active faults and continuously update new data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and fieldwork results and integrate these data into an active fault parameter table for time-dependent earthquake hazard assessment. We gather seismic profiles or relocated earthquakes for each fault and combine them with the fault trace on land to establish a 3D fault geometry model in a GIS system. We collect research on fault source scaling in Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate the active fault earthquake recurrence interval. For the other parameters, we collect previous studies or historical references to complete our parameter table of active faults in Taiwan. WG08 performed time-dependent earthquake hazard assessment of active faults in California, establishing fault models, deformation models, earthquake rate models, and probability models to compute rupture probabilities for California faults. Following these steps, we have preliminarily evaluated the probability of earthquake-related hazards on certain faults in Taiwan. By completing the active fault parameter table for Taiwan, we can apply it to time-dependent earthquake hazard assessment. The result can also give engineers a reference for design. Furthermore, it can be applied in seismic hazard maps to mitigate disasters.
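A sketch of the kind of time-dependent calculation described above (not the study's code): a Brownian Passage Time renewal model, implemented via scipy's inverse-Gaussian distribution, gives the conditional probability of rupture in the next 50 years for an assumed mean recurrence interval, aperiodicity, and elapsed time.

```python
import math
from scipy.stats import invgauss

# Assumed renewal-model parameters for one fault source (illustrative only).
T_mean = 250.0      # mean recurrence interval (years)
alpha = 0.5         # aperiodicity (coefficient of variation)
t_elapsed = 180.0   # years since the last surface-rupturing earthquake
dt = 50.0           # forecast window (years)

# BPT(T_mean, alpha) is an inverse-Gaussian with mu = alpha**2 and scale = T_mean/alpha**2.
dist = invgauss(mu=alpha**2, scale=T_mean / alpha**2)

# Conditional probability of rupture in (t_elapsed, t_elapsed + dt), given no event so far.
p_cond = (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)
p_poisson = 1.0 - math.exp(-dt / T_mean)   # time-independent comparison
print(f"BPT conditional 50-yr probability: {p_cond:.3f}")
print(f"Poisson 50-yr probability:         {p_poisson:.3f}")
```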
NASA Astrophysics Data System (ADS)
Zuza, A. V.; Yin, A.; Lin, J. C.
2015-12-01
Parallel evenly-spaced strike-slip faults are prominent in the southern San Andreas fault system, as well as other settings along plate boundaries (e.g., the Alpine fault) and within continental interiors (e.g., the North Anatolian, central Asian, and northern Tibetan faults). In southern California, the parallel San Jacinto, Elsinore, Rose Canyon, and San Clemente faults to the west of the San Andreas are regularly spaced at ~40 km. In the Eastern California Shear Zone, east of the San Andreas, faults are spaced at ~15 km. These characteristic spacings provide unique mechanical constraints on how the faults interact. Despite the common occurrence of parallel strike-slip faults, the fundamental questions of how and why these fault systems form remain unanswered. We address this issue by using the stress shadow concept of Lachenbruch (1961)—developed to explain extensional joints by using the stress-free condition on the crack surface—to present a mechanical analysis of the formation of parallel strike-slip faults that relates fault spacing and brittle-crust thickness to fault strength, crustal strength, and the crustal stress state. We discuss three independent models: (1) a fracture mechanics model, (2) an empirical stress-rise function model embedded in a plastic medium, and (3) an elastic-plate model. The assumptions and predictions of these models are quantitatively tested using scaled analogue sandbox experiments that show that strike-slip fault spacing is linearly related to the brittle-crust thickness. We derive constraints on the mechanical properties of the southern San Andreas strike-slip faults and fault-bounded crust (e.g., local fault strength and crustal/regional stress) given the observed fault spacing and brittle-crust thickness, which is obtained by defining the base of the seismogenic zone with high-resolution earthquake data. Our models allow direct comparison of the parallel faults in the southern San Andreas system with other similar strike-slip fault systems, both on Earth and throughout the solar system (e.g., the Tiger Stripe Fractures on Enceladus).
NASA Astrophysics Data System (ADS)
Inoue, N.; Kitada, N.; Kusumoto, S.; Itoh, Y.; Takemura, K.
2011-12-01
The Osaka basin, surrounded by the Rokko and Ikoma Ranges, is one of the typical Quaternary sedimentary basins in Japan. The Osaka basin has been filled by the Pleistocene Osaka Group and later sediments. Several large cities and metropolitan areas, such as Osaka and Kobe, are located in the Osaka basin. The basin is surrounded by E-W trending strike-slip faults and N-S trending reverse faults. The N-S trending, 42-km-long Uemachi faults traverse the central part of Osaka city. The Uemachi faults have been investigated for countermeasures against earthquake disaster. It is important to reveal detailed fault parameters, such as length, dip, and recurrence interval, for strong ground motion simulation and disaster prevention. For strong ground motion simulation, the fault model of the Uemachi faults consists of two parts, the north and south parts, because there is no basement displacement in the central part of the faults. The Ministry of Education, Culture, Sports, Science and Technology started a project to survey the Uemachi faults, and the Disaster Prevention Institute of Kyoto University carried out various surveys over 3 years, from 2009 to 2012. The results of the last year revealed higher activity on the branch fault than on the main faults in the central part (see poster "Subsurface Flexure of Uemachi Fault, Japan" by Kitada et al., this meeting). Kusumoto et al. (2001) reported, based on a dislocation model, that the surrounding faults can produce similar basement relief without the Uemachi faults. We performed various parameter studies of dislocation and gravity change based on a simplified fault model designed from the distribution of the real faults. The model consisted of 7 faults, including the Uemachi faults. The dislocation and gravity change were calculated following Okada et al. (1985) and Okubo et al. (1993), respectively. The results show a basement displacement pattern similar to that of Kusumoto et al. (2001) and no characteristic gravity change pattern. Quantitative estimation remains a subject for further work.
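The study computes dislocations and gravity changes with the 3-D solutions of Okada (1985) and Okubo (1993), which are too lengthy to reproduce here; the much simpler 2-D antiplane (screw) dislocation below conveys the basic idea of predicting surface displacement from uniform slip on a buried fault. Geometry and slip values are invented for illustration.

```python
import numpy as np

def surface_displacement(x_km, slip_m, d_top_km, d_bot_km):
    """Fault-parallel surface displacement (m) from uniform strike slip on a vertical
    fault extending from depth d_top to d_bot (2-D antiplane screw-dislocation solution)."""
    x = np.asarray(x_km, dtype=float)
    return (slip_m / np.pi) * (np.arctan2(d_bot_km, x) - np.arctan2(d_top_km, x))

x = np.linspace(-30.0, 30.0, 7)          # distance from the fault trace (km)
u = surface_displacement(x, slip_m=2.0, d_top_km=1.0, d_bot_km=15.0)
for xi, ui in zip(x, u):
    print(f"x = {xi:6.1f} km   u = {ui:+.3f} m")
```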
NASA Astrophysics Data System (ADS)
Brothers, Daniel Stephen
Five studies along the Pacific-North America (PA-NA) plate boundary offer new insights into continental margin processes, the development of the PA-NA tectonic margin and regional earthquake hazards. This research is based on the collection and analysis of several new marine geophysical and geological datasets. Two studies used seismic CHIRP surveys and sediment coring in Fallen Leaf Lake (FLL) and Lake Tahoe to constrain tectonic and geomorphic processes in the lakes, as well as the slip rate and earthquake history along the West Tahoe-Dollar Point Fault. CHIRP profiles image vertically offset and folded strata that record deformation associated with the most recent event (MRE). Radiocarbon dating of organic material extracted from piston cores constrains the age of the MRE to be between 4.1--4.5 k.y. B.P. Offset of Tioga-aged glacial deposits yields a slip rate of 0.4--0.8 mm/yr. An ancillary study in FLL determined that submerged, in situ pine trees that date to between 900-1250 AD are related to a medieval megadrought in the Lake Tahoe Basin. The timing and severity of this event match medieval megadroughts observed in the western United States and in Europe. CHIRP profiles acquired in the Salton Sea, California provide new insights into the processes that control pull-apart basin development and earthquake hazards along the southernmost San Andreas Fault. Differential subsidence (>10 mm/yr) in the southern sea suggests the existence of northwest-dipping basin-bounding faults near the southern shoreline. In contrast to previous models, the rapid subsidence and fault architecture observed in the southern part of the sea are consistent with experimental models for pull-apart basins. Geophysical surveys imaged more than 15 ~N15°E-oriented faults, some of which have produced up to 10 events in the last 2-3 kyr. Potentially 2 of the last 5 events on the southern San Andreas Fault (SAF) were synchronous with rupture on offshore faults, but it appears that ruptures on three offshore faults are synchronous with Colorado River diversions into the basin. The final study used coincident wide-angle seismic refraction and multichannel seismic reflection surveys that spanned the width of the southern Baja California (BC) Peninsula. The data provide insight into the spatial and temporal evolution of the BC microplate capture by the Pacific Plate. Seismic reflection profiles constrain the upper crustal structure and deformation history along fault zones on the western Baja margin and in the Gulf of California. Stratal divergence in two transtensional basins along the Magdalena Shelf records the onset of extension across the Tosco-Abreojos and Santa Margarita faults. We define an upper bound of 12 Ma on the age of the pre-rift sediments and an age of ~8 Ma for the onset of extension. Tomographic imaging reveals a very heterogeneous upper crust and a narrow, high velocity zone that extends ~40 km east of the paleotrench and is interpreted to be remnant oceanic crust.
NASA Astrophysics Data System (ADS)
Jackson, C. A. L.; Bell, R. E.; Rotevatn, A.; Tvedt, A. B. M.
2015-12-01
Normal faulting accommodates stretching of the Earth's crust and is one of the fundamental controls on landscape evolution and sediment dispersal in rift basins. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, thus assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain fault length and thus tip behaviour through time. We conclude that rapid length establishment prior to displacement accumulation may be more common than is typically assumed, thus challenging the well-established, widely cited and perhaps overused, isolated fault model.
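To make the backstripping idea concrete, the sketch below applies the simple subtraction at the heart of the 'original' method to an invented throw-length dataset: the throw on each successively younger horizon is subtracted from older horizons to approximate incremental growth. This is only a schematic of the technique the authors critique, with made-up numbers.

```python
import numpy as np

# Throw (m) sampled along fault strike for three horizons, oldest (H1) to youngest (H3).
distance_km = np.array([0, 2, 4, 6, 8, 10, 12])
throw = {
    "H1 (oldest)":   np.array([0, 40, 90, 120, 85, 35, 0], float),
    "H2":            np.array([0, 20, 55,  75, 50, 15, 0], float),
    "H3 (youngest)": np.array([0,  5, 20,  30, 15,  0, 0], float),
}

# 'Original'-style backstripping: subtract the younger horizon's throw from each older
# horizon to estimate the throw accumulated before the younger horizon was deposited.
names = list(throw)
for older, younger in zip(names, names[1:]):
    increment = throw[older] - throw[younger]
    active = distance_km[increment > 0]
    length = active.max() - active.min() if active.size else 0.0
    print(f"{older} -> {younger}: max incremental throw {increment.max():.0f} m "
          f"over ~{length:.0f} km of active fault length")
```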
3D Model of the Tuscarora Geothermal Area
Faulds, James E.
2013-12-31
The Tuscarora geothermal system sits within a ~15 km wide left-step in a major west-dipping range-bounding normal fault system. The step-over is defined by the Independence Mountains fault zone and the Bull Runs Mountains fault zone, which overlap along strike. Strain is transferred between these major fault segments via an array of northerly striking normal faults with offsets of 10s to 100s of meters and strike lengths of less than 5 km. These faults within the step-over are one to two orders of magnitude smaller than the range-bounding fault zones between which they reside. Faults within the broad step define an anticlinal accommodation zone wherein east-dipping faults mainly occupy the western half of the accommodation zone and west-dipping faults lie in the eastern half of the accommodation zone. The 3D model of Tuscarora encompasses 70 small-offset normal faults that define the accommodation zone and a portion of the Independence Mountains fault zone, which dips beneath the geothermal field. The geothermal system resides in the axial part of the accommodation zone, straddling the two fault dip domains. The Tuscarora 3D geologic model consists of 10 stratigraphic units. Unconsolidated Quaternary alluvium has eroded down into bedrock units; the youngest and stratigraphically highest bedrock units are middle Miocene rhyolite and dacite flows regionally correlated with the Jarbidge Rhyolite and modeled with a uniform cumulative thickness of ~350 m. Underlying these lava flows are Eocene volcanic rocks of the Big Cottonwood Canyon caldera. These units are modeled as intracaldera deposits, including domes, flows, and thick ash deposits that change in thickness and locally pinch out. The Paleozoic basement consists of metasedimentary and metavolcanic rocks, dominated by argillite, siltstone, limestone, quartzite, and metabasalt of the Schoonover and Snow Canyon Formations. Paleozoic formations are lumped into a single basement unit in the model. Fault blocks in the eastern portion of the model are tilted 5-30 degrees toward the Independence Mountains fault zone. Fault blocks in the western portion of the model are tilted toward steeply east-dipping normal faults. These opposing fault block dips define a shallow extensional anticline. Geothermal production is from 4 closely spaced wells that exploit a west-dipping, NNE-striking fault zone near the axial part of the accommodation zone.
NASA Astrophysics Data System (ADS)
Fattaruso, Laura A.; Cooke, Michele L.; Dorsey, Rebecca J.; Housen, Bernard A.
2016-12-01
Between 1.5 and 1.1 Ma, the southern San Andreas fault system underwent a major reorganization that included initiation of the San Jacinto fault zone and termination of slip on the extensional West Salton detachment fault. The southern San Andreas fault itself has also evolved since this time, with several shifts in activity among fault strands within San Gorgonio Pass. We use three-dimensional mechanical Boundary Element Method models to investigate the impact of these changes to the fault network on deformation patterns. A series of snapshot models of the succession of active fault geometries explore the role of fault interaction and tectonic loading in abandonment of the West Salton detachment fault, initiation of the San Jacinto fault zone, and shifts in activity of the San Andreas fault. Interpreted changes to uplift patterns are well matched by model results. These results support the idea that initiation and growth of the San Jacinto fault zone led to increased uplift rates in the San Gabriel Mountains and decreased uplift rates in the San Bernardino Mountains. Comparison of model results for vertical-axis rotation to data from paleomagnetic studies reveals a good match to local rotation patterns in the Mecca Hills and Borrego Badlands. We explore the mechanical efficiency at each step in the modeled fault evolution, and find an overall trend toward increased efficiency through time. Strain energy density patterns are used to identify regions of incipient faulting, and support the notion of north-to-south propagation of the San Jacinto fault during its initiation.
Nearly frictionless faulting by unclamping in long-term interaction models
Parsons, T.
2002-01-01
In defiance of direct rock-friction observations, some transform faults appear to slide with little resistance. In this paper finite element models are used to show how strain energy is minimized by interacting faults that can cause long-term reduction in fault-normal stresses (unclamping). A model fault contained within a sheared elastic medium concentrates stress at its end points with increasing slip. If accommodating structures free up the ends, then the fault responds by rotating, lengthening, and unclamping. This concept is illustrated by a comparison between simple strike-slip faulting and a mid-ocean-ridge model with the same total transform length; calculations show that the more complex system unclamps the transforms and operates at lower energy. In another example, the overlapping San Andreas fault system in the San Francisco Bay region is modeled; this system is complicated by junctions and stepovers. A finite element model indicates that the normal stress along parts of the faults could be reduced to hydrostatic levels after ~60-100 k.y. of system-wide slip. If this process occurs in the earth, then parts of major transform fault zones could appear nearly frictionless.
A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Finelli, George B.
1987-01-01
Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10(exp -9) for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pin-level of and internal to a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level stuck-at fault model is found to provide a good modeling capability.
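A minimal illustration of the kind of comparison described above (not the NASA technique itself): inject stuck-at faults either on an internal net or on an output pin of a tiny gate-level circuit and compare how often each produces an observable output error over random input vectors. The circuit and fault sites are invented.

```python
import random

def circuit(a, b, c, fault_site=None, stuck_val=None):
    """Tiny combinational circuit, out = (a AND b) XOR c, with an optional stuck-at fault."""
    def net(name, val):
        return stuck_val if fault_site == name else val
    n1 = net("n1", a & b)          # internal net
    return net("out", n1 ^ c)      # output pin

random.seed(0)
trials = 10000
for site in ("n1", "out"):
    for stuck in (0, 1):
        errors = 0
        for _ in range(trials):
            a, b, c = (random.randint(0, 1) for _ in range(3))
            if circuit(a, b, c) != circuit(a, b, c, fault_site=site, stuck_val=stuck):
                errors += 1
        print(f"stuck-at-{stuck} on {site}: observable error rate {errors / trials:.2f}")
```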
Fault Modeling of Extreme Scale Applications Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.
Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware, resulting in an error. Here, this paper attempts to answer an important question: Given a multi-bit fault in main memory, will it result in an application error — and hence a recovery algorithm should be invoked — or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multibit permanent/transient main memory fault will likely result in an error. We present the design elements such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications — NWChem, LULESH and SVM — as examples for demonstrating the effectiveness of the proposed fault modeling methodology.
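The sketch below mimics the proposed methodology on synthetic data rather than the paper's NWChem, LULESH, and SVM experiments: generate fault signatures, label whether an injected multi-bit fault led to an application error, and train a classifier to predict that outcome. The feature set and the labeling rule are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 5000

# Synthetic fault signatures: bit position within the word, seconds since the page was
# last touched, whether the fault hits live application data, number of corrupted bits.
X = np.column_stack([
    rng.integers(0, 64, n),
    rng.exponential(10.0, n),
    rng.integers(0, 2, n),
    rng.integers(2, 9, n),
])

# Invented labeling rule standing in for ground truth from fault-injection campaigns:
# faults in live data on recently touched pages tend to produce application errors.
y = ((X[:, 2] == 1) & (X[:, 1] < 15.0) & (X[:, 0] < 52)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
```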
Dynamic modeling of gearbox faults: A review
NASA Astrophysics Data System (ADS)
Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng
2018-01-01
Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions, or inevitable fatigue, faults may develop in gears. If gear faults cannot be detected early, the health of the gearbox will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure and consequently results in safer operation and higher cost reduction. Recently, many studies have been done to develop gearbox dynamic models with faults, aiming to understand the gear fault generation mechanism and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and challenges are reviewed and discussed. This detailed literature review limits research results to the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.
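As a small illustration of the fault signatures that dynamics-based gearbox models aim to explain (assumed frequencies and amplitudes, not an example from the paper), the snippet below synthesizes gear-mesh vibration with once-per-revolution impulses from a single faulty tooth and uses envelope analysis to expose the fault frequency.

```python
import numpy as np
from scipy.signal import hilbert

fs = 20000.0                       # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
f_mesh, f_shaft = 600.0, 20.0      # assumed gear-mesh and faulty-gear shaft frequencies

# Healthy mesh vibration, amplitude-modulated by short impulses once per revolution of
# the faulty gear (a localized tooth fault), plus measurement noise.
rng = np.random.default_rng(0)
impulses = 1.0 + 0.8 * (np.cos(2 * np.pi * f_shaft * t) > 0.98)
signal = impulses * np.sin(2 * np.pi * f_mesh * t) + 0.2 * rng.standard_normal(t.size)

# Envelope spectrum: the localized fault appears at multiples of the shaft frequency.
envelope = np.abs(hilbert(signal))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = freqs < 200.0
peak = freqs[band][np.argmax(spectrum[band])]
print(f"dominant envelope frequency: {peak:.1f} Hz (fault frequency is {f_shaft} Hz)")
```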
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory, a reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard analysis, worst-case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is included. This approach is defined and detailed using the same example case study as the REBST discussion. In the end it is concluded that an approach combining the two theories works best to reduce safety risk.
Improving online risk assessment with equipment prognostics and health monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coble, Jamie B.; Liu, Xiaotong; Briere, Chris
The current approach to evaluating the risk of nuclear power plant (NPP) operation relies on static probabilities of component failure, which are based on industry experience with the existing fleet of nominally similar light water reactors (LWRs). As the nuclear industry looks to advanced reactor designs that feature non-light water coolants (e.g., liquid metal, high temperature gas, molten salt), this operating history is not available. Many advanced reactor designs use advanced components, such as electromagnetic pumps, that have not been used in the US commercial nuclear fleet. Given the lack of rich operating experience, we cannot accurately estimate the evolving probability of failure for basic components to populate the fault trees and event trees that typically comprise probabilistic risk assessment (PRA) models. Online equipment prognostics and health management (PHM) technologies can bridge this gap to estimate the failure probabilities for components under operation. The enhanced risk monitor (ERM) incorporates equipment condition assessment into the existing PRA and risk monitor framework to provide accurate and timely estimates of operational risk.
A computational framework for prime implicants identification in noncoherent dynamic systems.
Di Maio, Francesco; Baronchelli, Samuele; Zio, Enrico
2015-01-01
Dynamic reliability methods aim at complementing the capability of traditional static approaches (e.g., event trees [ETs] and fault trees [FTs]) by accounting for the system dynamic behavior and its interactions with the system state transition process. For this, the system dynamics is here described by a time-dependent model that includes the dependencies with the stochastic transition events. In this article, we present a novel computational framework for dynamic reliability analysis whose objectives are i) accounting for discrete stochastic transition events and ii) identifying the prime implicants (PIs) of the dynamic system. The framework entails adopting a multiple-valued logic (MVL) to consider stochastic transitions at discretized times. Then, PIs are originally identified by a differential evolution (DE) algorithm that looks for the optimal MVL solution of a covering problem formulated for MVL accident scenarios. For testing the feasibility of the framework, a dynamic noncoherent system composed of five components that can fail at discretized times has been analyzed, showing the applicability of the framework to practical cases. © 2014 Society for Risk Analysis.
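For intuition about prime implicants of a noncoherent (NOT-containing) structure function, here is a brute-force sketch on a tiny static example; the paper's contribution, handling dynamic, multiple-valued-logic scenarios with a differential evolution search, is not reproduced here. The example structure function is invented.

```python
from itertools import combinations, product

variables = ["A", "B", "C"]

def top(s):
    # Noncoherent example: TOP occurs if A fails while B works, or if B and C both fail.
    return (s["A"] and not s["B"]) or (s["B"] and s["C"])

def is_implicant(term):
    """term maps some variables to 0/1; an implicant forces TOP=1 for every completion."""
    free = [v for v in variables if v not in term]
    for bits in product([0, 1], repeat=len(free)):
        state = dict(term, **dict(zip(free, bits)))
        if not top(state):
            return False
    return True

implicants = [dict(zip(subset, bits))
              for r in range(1, len(variables) + 1)
              for subset in combinations(variables, r)
              for bits in product([0, 1], repeat=r)
              if is_implicant(dict(zip(subset, bits)))]

# Prime implicants: no other implicant is a proper subset of their literals.
primes = [t for t in implicants
          if not any(set(o.items()) < set(t.items()) for o in implicants)]
for t in primes:
    print(" AND ".join(v if val else f"NOT {v}" for v, val in sorted(t.items())))
```

Note that the output includes a term combining a failed and a working component, which is exactly the kind of implicant that cannot arise in a coherent (AND/OR-only) fault tree.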
NASA Technical Reports Server (NTRS)
Joshi, Suresh M.
2012-01-01
This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
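The sketch below is a discrete-time, scalar analogue of the multiple-model idea described above (the paper works with continuous-time Kalman-Bucy filters and general linear systems): one filter per fault hypothesis, with the hypothesis whose innovations are most likely selected as the diagnosis. System matrices, noise levels, and the injected fault are all assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
N, a, q, r = 400, 0.95, 0.01, 0.04   # steps, dynamics, process and measurement noise variances
u, true_bias = 0.5, 0.8              # known input and injected actuator bias (the fault)

# Simulate the true system with an actuator bias fault present throughout the record.
x, ys = 0.0, []
for _ in range(N):
    x = a * x + u + true_bias + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r)))

# One Kalman filter per fault hypothesis, each over the augmented state z = [x, bias].
def matrices(hyp):
    if hyp == "actuator bias":   # bias drives the state equation
        return np.array([[a, 1.0], [0.0, 1.0]]), np.array([[1.0, 0.0]])
    if hyp == "sensor bias":     # bias offsets the measurement
        return np.array([[a, 0.0], [0.0, 1.0]]), np.array([[1.0, 1.0]])
    return np.array([[a, 0.0], [0.0, 1.0]]), np.array([[1.0, 0.0]])   # "no fault"

loglik = {}
for hyp in ("no fault", "actuator bias", "sensor bias"):
    A, H = matrices(hyp)
    B, Q = np.array([[1.0], [0.0]]), np.diag([q, 1e-6])
    z, P, ll = np.zeros((2, 1)), np.eye(2), 0.0
    for k in range(N):
        z, P = A @ z + B * u, A @ P @ A.T + Q          # predict
        S = (H @ P @ H.T).item() + r                   # innovation variance
        nu = ys[k] - (H @ z).item()                    # innovation
        ll += -0.5 * (np.log(2 * np.pi * S) + nu**2 / S)
        K = P @ H.T / S                                # Kalman gain
        z, P = z + K * nu, (np.eye(2) - K @ H) @ P     # update
    loglik[hyp] = ll

print({h: round(v, 1) for h, v in loglik.items()})
print("diagnosed fault pattern:", max(loglik, key=loglik.get))
```

Consistent with the paper's identifiability conditions, hypotheses can be hard to separate when the data carry little excitation; in this contrived run the actuator-bias hypothesis should dominate because it matches the simulated truth.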
Improving Multiple Fault Diagnosability using Possible Conflicts
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2012-01-01
Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
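A toy version of the diagnosability comparison described above, with an invented fault-signature structure rather than the paper's tank system: with one global residual, no double faults can be told apart; splitting the model into smaller residual generators, in the spirit of Possible Conflicts, separates most of them.

```python
from itertools import combinations

faults = ["f1", "f2", "f3", "f4"]

# Which faults each residual is sensitive to (invented sensitivities).
global_model = {"r_global": {"f1", "f2", "f3", "f4"}}            # one residual sees all faults
decomposed = {"r_a": {"f1", "f2"}, "r_b": {"f2", "f3"}, "r_c": {"f3", "f4"}}

def signature(active_faults, residuals):
    """Binary signature: which residuals deviate when the given faults are present."""
    return tuple(int(bool(active_faults & sens)) for sens in residuals.values())

def count_distinguishable(residuals):
    doubles = [frozenset(c) for c in combinations(faults, 2)]
    sigs = {d: signature(d, residuals) for d in doubles}
    pairs = list(combinations(doubles, 2))
    return sum(sigs[d1] != sigs[d2] for d1, d2 in pairs), len(pairs)

for name, residuals in (("single global model", global_model),
                        ("possible-conflict-style decomposition", decomposed)):
    ok, total = count_distinguishable(residuals)
    print(f"{name}: {ok}/{total} double-fault pairs distinguishable")
```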
Safety Study of TCAS II for Logic Version 6.04
1992-07-01
used in the fault tree of the 198 study. The figures given for Logic and Altimetry effects represent the site averages, and were based upon TCAS RAs always being...comparison with the results of Monte Carlo simulations. Five million iterations were carried out for each of the four cases (eqs. 3, 4, 6 and 7)
Code of Federal Regulations, 2010 CFR
2010-10-01
..., national, or international standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure... cited by the reviewer; (4) Identification of any documentation or information sought by the reviewer...) Identification of the hardware and software verification and validation procedures for the PTC system's safety...
The Two-By-Two Array: An Aid in Conceptualization and Problem Solving
ERIC Educational Resources Information Center
Eberhart, James
2004-01-01
The fields of mathematics, science, and engineering are replete with diagrams of many varieties. They range in nature from the Venn diagrams of symbolic logic to the Periodic Chart of the Elements; and from the fault trees of risk assessment to the flow charts used to describe laboratory procedures, industrial processes, and computer programs. All…
Stability of faults with heterogeneous friction properties and effective normal stress
NASA Astrophysics Data System (ADS)
Luo, Yingdi; Ampuero, Jean-Paul
2018-05-01
Abundant geological, seismological and experimental evidence of the heterogeneous structure of natural faults motivates the theoretical and computational study of the mechanical behavior of heterogeneous frictional fault interfaces. Fault zones are composed of a mixture of materials with contrasting strength, which may affect the spatial variability of seismic coupling, the location of high-frequency radiation and the diversity of slip behavior observed in natural faults. To develop a quantitative understanding of the effect of strength heterogeneity on the mechanical behavior of faults, here we investigate a fault model with spatially variable frictional properties and pore pressure. Conceptually, this model may correspond to two rough surfaces in contact along discrete asperities, the space in between being filled by compressed gouge. The asperities have different permeability than the gouge matrix and may be hydraulically sealed, resulting in different pore pressure. We consider faults governed by rate-and-state friction, with mixtures of velocity-weakening and velocity-strengthening materials and contrasts of effective normal stress. We systematically study the diversity of slip behaviors generated by this model through multi-cycle simulations and linear stability analysis. The fault can be either stable without spontaneous slip transients, or unstable with spontaneous rupture. When the fault is unstable, slip can rupture either part or the entire fault. In some cases the fault alternates between these behaviors throughout multiple cycles. We determine how the fault behavior is controlled by the proportion of velocity-weakening and velocity-strengthening materials, their relative strength and other frictional properties. We also develop, through heuristic approximations, closed-form equations to predict the stability of slip on heterogeneous faults. Our study shows that a fault model with heterogeneous materials and pore pressure contrasts is a viable framework to reproduce the full spectrum of fault behaviors observed in natural faults: from fast earthquakes, to slow transients, to stable sliding. In particular, this model constitutes a building block for models of episodic tremor and slow slip events.
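A back-of-the-envelope version of the stability question studied above (not the authors' multi-cycle simulations or linear stability analysis): compare each patch's approximate loading stiffness with the rate-and-state critical stiffness k_c = (b - a) * sigma_eff / D_c for a mixture of velocity-weakening asperities and velocity-strengthening gouge with contrasting pore pressures. All values are assumed.

```python
# Rule-of-thumb linear stability for rate-and-state patches (all values assumed).
G = 30e9        # shear modulus (Pa)

#            name,                a,     b,     Dc (m), sigma_n (Pa), pore p (Pa), size L (m)
patches = [("VW asperity",        0.010, 0.015, 1e-4,   100e6,        30e6,        500.0),
           ("sealed VW asperity", 0.010, 0.015, 1e-4,   100e6,        99e6,        500.0),
           ("VS gouge matrix",    0.015, 0.010, 1e-3,   100e6,        30e6,        2000.0)]

for name, a, b, dc, sn, p, L in patches:
    sigma_eff = sn - p                     # effective normal stress
    k_patch = G / L                        # crude stiffness estimate for a patch of size L
    if b <= a:
        verdict = "velocity strengthening: stable, creeps"
    else:
        k_crit = (b - a) * sigma_eff / dc  # critical stiffness from rate-and-state theory
        verdict = "unstable (seismic)" if k_patch < k_crit else "conditionally stable"
    print(f"{name:20s} k_patch = {k_patch:.1e} Pa/m -> {verdict}")
```

The near-lithostatic pore pressure assigned to the sealed asperity is what pushes it toward conditional stability, echoing the role that pore-pressure contrasts play in the fault model described above.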
Goal-Function Tree Modeling for Systems Engineering and Fault Management
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Breckenridge, Jonathan T.
2013-01-01
The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASA-HDBK-1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function." (Johnson 2011, 605) Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of relationships between SE, SHM, and FM provides hints toward a modeling approach to provide formal connectivity between the nominal (SE) and off-nominal (SHM and FM) aspects of functions and designs. This paper describes a formal modeling approach to the initial phases of the development process that integrates the nominal and off-nominal perspectives in a model that unites SE goals and functions with the failure to achieve those goals and functions (SHM/FM).
NASA Astrophysics Data System (ADS)
Smith, D. E.; Aagaard, B. T.; Heaton, T. H.
2001-12-01
It has been hypothesized (Brune, 1996) that teleseismic inversions may underestimate the moment of shallow thrust fault earthquakes if energy becomes trapped in the hanging wall of the fault, i.e. if the fault boundary becomes opaque. We address this by creating and analyzing synthetic P and SH seismograms for a variety of friction models. There are a total of five models: (1) a crack model (slip weakening) with instantaneous healing, (2) a crack model without healing, (3) a crack model with zero sliding friction, (4) a pulse model (slip and rate weakening), and (5) a prescribed model (a Haskell-like rupture with the same final slip and peak slip-rate as model 4). Models 1-4 are all dynamic models in which fault friction laws determine the rupture history. This allows feedback between the ongoing rupture and waves from the beginning of the rupture that hit the surface and reflect downwards. Hence, models 1-4 can exhibit opaque fault characteristics. Model 5, a prescribed rupture, allows for no interaction between the rupture and reflected waves; it is therefore a transparent fault. We first produce source time functions for the different friction models by rupturing shallow thrust faults in 3-D dynamic finite-element simulations. The source time functions are used as point dislocations in a teleseismic body-wave code. We examine the P and SH waves for different azimuths and epicentral distances. The peak P and S first-arrival displacement amplitudes for the crack, crack-with-healing and pulse models are all very similar. These dynamic models with opaque faults produce smaller peak P and S first arrivals than the prescribed, transparent fault. For example, a fault with strike = 90 degrees and azimuth = 45 degrees has P arrivals smaller by about 30% and S arrivals smaller by about 15%. The only dynamic model that does not fit this pattern is the crack model with zero sliding friction: it oscillates around its equilibrium position, overshoots, and yields an excessively large peak first arrival. In general, it appears that the dynamic, opaque faults have smaller peak teleseismic displacements, which would lead to moment estimates that are lower by a modest amount.
NASA Astrophysics Data System (ADS)
Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette
2016-04-01
Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, so assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; it may thus be argued that it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. In this talk, however, we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because applying one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain fault length and thus tip behaviour through time. We conclude that rapid length establishment prior to displacement accumulation may be more common than is typically assumed, thus challenging the well-established, widely cited and perhaps overused isolated fault model.
NASA Astrophysics Data System (ADS)
Zhang, Yanhua; Clennell, Michael B.; Delle Piane, Claudio; Ahmed, Shakil; Sarout, Joel
2016-12-01
This generic 2D elastic-plastic modelling study investigated the reactivation of a small, isolated and critically stressed fault in carbonate rocks at reservoir depth under fluid depletion and normal-faulting stress conditions. The model properties and boundary conditions are based on field and laboratory experimental data from a carbonate reservoir. The results show that a pore pressure perturbation of -25 MPa by depletion can lead to the reactivation of the fault and parts of the surrounding damage zones, producing normal-faulting downthrows and strain localization. The mechanism triggering fault reactivation in a carbonate field is the increase of shear stresses with pore-pressure reduction, due to the decrease of the absolute horizontal stress, which leads to an expanded Mohr's circle and mechanical failure, consistent with the predictions of previous poroelastic models. Two scenarios for fault and damage-zone permeability development are explored: (1) large permeability enhancement of a sealing fault upon reactivation, and (2) fault and damage zone permeability development governed by effective mean stress. In the first scenario, the fault becomes highly permeable to across- and along-fault fluid transport, removing local pore pressure highs/lows arising from the presence of the initially sealing fault. In the second scenario, reactivation induces small permeability enhancement in the fault and parts of the damage zones, followed by small post-reactivation permeability reduction. Such permeability changes do not appear to change the original flow capacity of the fault or modify the fluid flow velocity fields dramatically.
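To make the reactivation mechanism concrete, the following sketch applies a simple 2D Coulomb criterion to a cohesionless fault in a normal-faulting regime: total stresses are resolved on the fault plane from the principal stresses, pore pressure is subtracted, and slip is predicted when the shear stress exceeds the frictional resistance. The stress values, depletion-induced changes, and friction coefficient are invented for illustration and are not the calibrated reservoir properties used in the study.

```python
# Minimal sketch of a 2D Coulomb reactivation check on a cohesionless normal
# fault under depletion; stresses in MPa, all values illustrative.
import math

def fault_stresses(sigma1, sigma3, dip_deg):
    """Resolve total normal and shear stress on a plane dipping `dip_deg`
    in a regime where sigma1 is vertical (normal faulting)."""
    two_beta = math.radians(2.0 * dip_deg)   # plane normal is dip_deg from sigma1
    mean = 0.5 * (sigma1 + sigma3)
    dev = 0.5 * (sigma1 - sigma3)
    return mean + dev * math.cos(two_beta), dev * math.sin(two_beta)

def will_slip(sigma1, sigma3, pore_pressure, dip_deg, mu=0.6):
    """Coulomb criterion tau > mu * (sigma_n - p) for a cohesionless fault."""
    sigma_n, tau = fault_stresses(sigma1, sigma3, dip_deg)
    return tau > mu * (sigma_n - pore_pressure)

if __name__ == "__main__":
    # Hypothetical pre- and post-depletion states: depletion lowers the pore
    # pressure and, via an assumed stress path, also the horizontal stress.
    print(will_slip(sigma1=70.0, sigma3=45.0, pore_pressure=30.0, dip_deg=60.0))
    print(will_slip(sigma1=70.0, sigma3=25.0, pore_pressure=5.0, dip_deg=60.0))
```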
Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models
NASA Astrophysics Data System (ADS)
Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio
2017-04-01
The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may not be entirely appropriate for explaining shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, covering the range of the most frequent dip angles for active reverse faults that occur in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly-spaced precuts. To test the repeatability of the processes and to obtain a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation (DIC) method in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly-formed faults and whether, and at what stage, the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation. We then compared precut models with isotropic models to evaluate the trends of variability. Our results indicate that the discontinuities are reactivated especially when the tip of the newly-formed fault is either below or connected to them. During the stage of maximum activity along the precut, the faults slow down or even stop their propagation. The fault propagation systematically resumes when the angle between the fault and the precut is about 90° (critical angle); only during this stage does the fault cross the precut. The reactivation of the discontinuities increases the apical angle of the fault-related fold and produces wider limbs compared to the isotropic reference experiments.
NASA Astrophysics Data System (ADS)
Madden, E. H.; McBeck, J.; Cooke, M. L.
2013-12-01
Over multiple earthquake cycles, strike-slip faults link to form through-going structures, as demonstrated by the continuous nature of the mature San Andreas fault system in California relative to the younger and more segmented San Jacinto fault system nearby. Despite its immaturity, the San Jacinto system accommodates between one third and one half of the slip along the boundary between the North American and Pacific plates. It therefore poses a significant seismic threat to southern California. Better understanding of how the San Jacinto system has evolved over geologic time and of current interactions between faults within the system is critical to assessing this seismic hazard accurately. Numerical models are well suited to simulating kilometer-scale processes, but models of fault system development are challenged by the multiple physical mechanisms involved. For example, laboratory experiments on brittle materials show that faults propagate and eventually join (hard linkage) by both opening-mode and shear failure. In addition, faults interact prior to linkage through stress transfer (soft linkage). The new algorithm GROW (GRowth by Optimization of Work) accounts for this complex array of behaviors by taking a global approach to fault propagation while adhering to the principles of linear elastic fracture mechanics. This makes GROW a powerful tool for studying fault interactions and fault system development over geologic time. In GROW, faults evolve to minimize the work (or energy) expended during deformation, thereby maximizing the mechanical efficiency of the entire system. Furthermore, the incorporation of both static and dynamic friction allows GROW models to capture fault slip and fault propagation in single earthquakes as well as over consecutive earthquake cycles. GROW models with idealized faults reveal that the initial fault spacing and the applied stress orientation control fault linkage propensity and linkage patterns. These models allow the gains in efficiency provided by both hard linkage and soft linkage to be quantified and compared. Specialized models of interactions over the past 1 Ma between the Clark and Coyote Creek faults within the San Jacinto system reveal increasing mechanical efficiency as these fault structures change over time. Alongside this increasing efficiency is an increasing likelihood of single, larger earthquakes that rupture multiple fault segments. These models reinforce the sensitivity of mechanical efficiency to both fault structure and the regional tectonic stress orientation controlled by plate motions, and provide insight into how slip may have been partitioned between the San Andreas and San Jacinto systems over the past 1 Ma.
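GROW itself is a specialized fracture-mechanics code, so the fragment below is only a schematic of the growth-by-work-optimization idea it is built on: at each increment, trial tip extensions are scored by a work functional and the most mechanically efficient one is kept. The work functional here is a hypothetical placeholder; in GROW it comes from a full mechanical solution obeying linear elastic fracture mechanics.

```python
# Schematic sketch of growth-by-work-optimization (not the GROW code itself):
# at each step, trial tip extensions are scored by a user-supplied work
# functional and the one minimizing work (maximizing efficiency) is kept.
import math

def grow_fault(tip, work_of, n_steps=5, step_len=1.0, angles_deg=range(-60, 61, 15)):
    """Propagate a fault tip greedily, choosing the trial orientation that
    minimizes the supplied work functional `work_of(candidate_tip)`."""
    path = [tip]
    for _ in range(n_steps):
        candidates = [(tip[0] + step_len * math.cos(math.radians(a)),
                       tip[1] + step_len * math.sin(math.radians(a)))
                      for a in angles_deg]
        tip = min(candidates, key=work_of)   # most mechanically efficient choice
        path.append(tip)
    return path

if __name__ == "__main__":
    # Hypothetical placeholder: work grows quadratically with distance from a
    # "target" linkage point; GROW would instead evaluate true external work.
    target = (10.0, 2.0)
    work = lambda p: (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
    print(grow_fault((0.0, 0.0), work))
```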
NASA Astrophysics Data System (ADS)
Shi, J. T.; Han, X. T.; Xie, J. F.; Yao, L.; Huang, L. T.; Li, L.
2013-03-01
A Pulsed High Magnetic Field Facility (PHMFF) has been established at the Wuhan National High Magnetic Field Center (WHMFC), and various protection measures are applied in its control system. In order to improve the reliability and robustness of the control system, a safety analysis of the PHMFF is carried out based on the Fault Tree Analysis (FTA) technique. The function and realization of five protection systems are given: a sequence experiment operation system, a safety assistant system, an emergency stop system, a fault detecting and processing system, and an accident isolating protection system. Tests and operation indicate that these measures improve the safety of the facility and ensure the safety of personnel.
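The abstract does not reproduce the PHMFF fault tree, so the snippet below only illustrates, with made-up gates and basic-event probabilities, how a small static fault tree with independent basic events is evaluated: an OR gate combines inputs as 1 - Π(1 - p_i) and an AND gate as Π p_i.

```python
# Minimal static fault-tree evaluation with independent basic events
# (hypothetical structure and probabilities, not the PHMFF tree).

def p_or(*probs):
    """Event fed by an OR gate: 1 - prod(1 - p_i)."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*probs):
    """Event fed by an AND gate: prod(p_i)."""
    q = 1.0
    for p in probs:
        q *= p
    return q

if __name__ == "__main__":
    interlock_fails = p_and(1e-3, 2e-3)        # redundant interlocks both fail
    sensor_chain_fails = p_or(5e-4, 1e-4)      # either sensor or wiring fails
    top_event = p_or(interlock_fails, sensor_chain_fails)
    print(f"P(top event) = {top_event:.2e}")
```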
Wali, Behram; Khattak, Asad J; Xu, Jingjing
2018-01-01
The main objective of this study is to simultaneously investigate the degree of injury severity sustained by drivers involved in head-on collisions with respect to fault status designation. This is complicated to answer due to many issues, one of which is the potential presence of correlation between the injury outcomes of drivers involved in the same head-on collision. To address this concern, we present seemingly unrelated bivariate ordered response models by analyzing the joint injury severity probability distribution of at-fault and not-at-fault drivers. Moreover, the assumption of bivariate normality of residuals and the linear form of stochastic dependence implied by such models may be unduly restrictive. To test this, Archimedean copula structures and normal mixture marginals are integrated into the joint estimation framework, which can characterize complex forms of stochastic dependence and non-normality in residual terms. The models are estimated using 2013 Virginia police-reported two-vehicle head-on collision data, where exactly one driver is at fault. The results suggest that both at-fault and not-at-fault drivers sustained serious/fatal injuries in 8% of crashes, whereas in 4% of the cases the not-at-fault driver sustained a serious/fatal injury with no injury to the at-fault driver at all. Furthermore, if the at-fault driver is fatigued, apparently asleep, or has been drinking, the not-at-fault driver is more likely to sustain a severe/fatal injury, controlling for other factors and potential correlations between the injury outcomes. While not-at-fault vehicle speed affects the injury severity of the at-fault driver, the effect is smaller than the effect of at-fault vehicle speed on the at-fault injury outcome. By contrast, and importantly, the effect of at-fault vehicle speed on the injury severity of the not-at-fault driver is almost equal to the effect of not-at-fault vehicle speed on the injury outcome of the not-at-fault driver. Compared to traditional ordered probability models, the study provides evidence that copula-based bivariate models can provide more reliable estimates and richer insights. Practical implications of the results are discussed. Published by Elsevier Ltd.
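To illustrate the joint-probability idea in its simplest form, the sketch below computes the probability that both drivers fall in given injury-severity categories under a bivariate ordered-probit (Gaussian latent) structure, i.e. a rectangle probability of the bivariate normal between category thresholds. The thresholds, correlation, and categories are hypothetical, and the paper's seemingly unrelated specification with Archimedean copulas and normal-mixture marginals is considerably richer than this.

```python
# Joint category probability for two ordered outcomes linked by a bivariate
# normal latent structure (hypothetical thresholds and correlation).
import numpy as np
from scipy.stats import multivariate_normal

def joint_prob(i, j, cuts1, cuts2, rho):
    """P(Y1 = i, Y2 = j) as a rectangle probability of the bivariate
    standard normal between the category thresholds."""
    c1 = np.concatenate(([-10.0], cuts1, [10.0]))   # +/-10 stands in for +/-inf
    c2 = np.concatenate(([-10.0], cuts2, [10.0]))
    cov = [[1.0, rho], [rho, 1.0]]
    F = lambda a, b: multivariate_normal.cdf([a, b], mean=[0.0, 0.0], cov=cov)
    return (F(c1[i + 1], c2[j + 1]) - F(c1[i], c2[j + 1])
            - F(c1[i + 1], c2[j]) + F(c1[i], c2[j]))

if __name__ == "__main__":
    cuts = np.array([0.0, 1.2, 2.1])   # no injury | minor | serious | fatal
    # Probability both drivers sustain a serious (category 2) injury,
    # with a hypothetical residual correlation of 0.3.
    print(joint_prob(2, 2, cuts, cuts, rho=0.3))
```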
Siontorou, Christina G; Batzias, Fragiskos A
2014-03-01
Biosensor technology began in the 1960s to revolutionize instrumentation and measurement. Despite the market success of the glucose sensor, which revolutionized medical diagnostics, and the promise of the artificial pancreas, currently at the approval stage, the industry is reluctant to capitalize on other relevant university-produced knowledge and innovation. On the other hand, the scientific literature is extensive and persisting, while the number of university-hosted biosensor groups is growing. Considering the limited marketability of biosensors compared to the available research output, the biosensor field has been used by the present authors as a suitable paradigm for developing a combined methodological framework for "roadmapping" university research output in this discipline. This framework adopts the basic principles of the Analytic Hierarchy Process (AHP), replacing the lower level of technology alternatives with internal barriers (drawbacks, limitations, disadvantages), modeled through fault tree analysis (FTA) relying on fuzzy reasoning to account for uncertainty. The proposed methodology is validated retrospectively using ion-selective field effect transistor (ISFET) based biosensors as a case example, and then implemented prospectively for membrane biosensors, with an emphasis on manufacturability issues. The analysis traced the trajectory of membrane platforms differently from the available market roadmaps, which, considering the vast industrial experience in tailoring and handling crystalline forms, suggest the technology path of biomimetic and synthetic materials. The results presented herein indicate that future trajectories lie with nanotechnology, and especially nanofabrication and nano-bioinformatics, and focus more on the science path, that is, on controlling the natural process of self-assembly and the thermodynamics of bioelement-lipid interaction. This retains the nature-derived sensitivity of the biosensor platform, pointing out the differences between the scope of academic research and the market viewpoint.
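As a reminder of the AHP mechanics underlying the framework, the sketch below derives priority weights from a pairwise-comparison matrix via its principal eigenvector and reports Saaty's consistency index. The 3x3 matrix is invented for illustration, and the fuzzy fault-tree layer of the paper is not reproduced.

```python
# Priority weights and consistency index for an AHP pairwise-comparison
# matrix (illustrative matrix; the paper's fuzzy FTA layer is not shown).
import numpy as np

def ahp_weights(A):
    """Return (weights, consistency_index) from a positive reciprocal matrix."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                    # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                # normalized priority vector
    lam_max = vals.real[k]
    ci = (lam_max - len(A)) / (len(A) - 1)      # Saaty consistency index
    return w, ci

if __name__ == "__main__":
    # Hypothetical comparison of three barriers to commercialization
    A = np.array([[1.0,   3.0, 5.0],
                  [1/3.0, 1.0, 2.0],
                  [1/5.0, 1/2.0, 1.0]])
    w, ci = ahp_weights(A)
    print("weights:", np.round(w, 3), "CI:", round(ci, 3))
```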
NASA Astrophysics Data System (ADS)
Pei, Yangwen; Paton, Douglas A.; Wu, Kongyou; Xie, Liujuan
2017-08-01
The trishear algorithm, in which deformation occurs in a triangular zone in front of a propagating fault tip, is often used to understand fault-related folding. In comparison to kink-band methods, a key characteristic of the trishear algorithm is that the non-uniform deformation within the triangular zone allows the layer thickness and horizon length to change during deformation, which is commonly observed in natural structures. An example from the Lenghu5 fold-and-thrust belt (Qaidam Basin, Northern Tibetan Plateau) is interpreted to help understand how trishear forward modelling can be employed to improve the accuracy of seismic interpretation. High-resolution fieldwork data, including high-angle dips, 'dragging structures', a thinning hanging wall and a thickening footwall, are used to determine the best-fit trishear model explaining the deformation of the Lenghu5 fold-and-thrust belt. We also consider factors that increase the complexity of trishear models, including: (a) fault-dip changes and (b) pre-existing faults. We integrate fault-dip changes and pre-existing faults to predict subsurface structures that are below seismic resolution. The trishear analogue analysis indicates that the Lenghu5 fold-and-thrust belt is controlled by an upward-steepening reverse fault above a pre-existing, oppositely thrusting fault in the deeper subsurface. The validity of the trishear model is confirmed by the close agreement between the model and the high-resolution fieldwork. The validated trishear forward model provides geometric constraints on the faults and horizons in the seismic section, e.g., fault cutoffs and fault tip position, the faults' intersecting relationships and horizon/fault cross-cutting relationships. Subsurface prediction using the trishear algorithm can significantly increase the accuracy of seismic interpretation, particularly in seismic sections with a low signal/noise ratio.
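For readers unfamiliar with trishear kinematics, the sketch below evaluates a commonly used linear-velocity trishear field in tip-centered coordinates: the hanging wall translates at v0 parallel to the fault, the footwall is fixed, deformation is confined to a triangular zone of half-apical angle phi, and the fault-normal velocity follows from incompressibility. This is a generic textbook form, not the calibrated Lenghu5 model, and the apical angle and slip rate are placeholders.

```python
# Linear-velocity trishear field in tip-centered coordinates (x ahead of the
# tip along the fault, y perpendicular to it); a generic form, not the
# calibrated Lenghu5 model.
import math

def trishear_velocity(x, y, v0=1.0, phi_deg=30.0):
    """Velocity (vx, vy) for a hanging wall moving at v0 parallel to the
    fault; deformation is confined to the zone |y| <= x * tan(phi)."""
    m = math.tan(math.radians(phi_deg))
    if x <= 0.0:                      # behind the tip: rigid blocks either side of the fault
        return (v0, 0.0) if y > 0.0 else (0.0, 0.0)
    if y >= m * x:                    # hanging wall: rigid translation at v0
        return v0, 0.0
    if y <= -m * x:                   # footwall: fixed
        return 0.0, 0.0
    r = y / (m * x)                   # -1 at footwall edge, +1 at hanging-wall edge
    vx = 0.5 * v0 * (r + 1.0)
    vy = 0.25 * v0 * m * (r * r - 1.0)   # from incompressibility
    return vx, vy

if __name__ == "__main__":
    for yy in (-1.0, -0.5, 0.0, 0.5, 1.0):
        print(yy, trishear_velocity(x=2.0, y=yy, v0=1.0, phi_deg=30.0))
```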
Towards "realistic" fault zones in a 3D structure model of the Thuringian Basin, Germany
NASA Astrophysics Data System (ADS)
Kley, J.; Malz, A.; Donndorf, S.; Fischer, T.; Zehner, B.
2012-04-01
3D computer models of geological architecture are evolving into a standard tool for visualization and analysis. Such models typically comprise the bounding surfaces of stratigraphic layers and faults. Faults affect the continuity of aquifers and can themselves act as fluid conduits or barriers. This is one reason why a "realistic" representation of faults in 3D models is desirable. Even so, many existing models treat faults in a simplistic fashion, e.g. as vertical downward projections of fault traces observed at the surface. Besides being geologically and mechanically unreasonable, this also causes technical difficulties in the modelling workflow. Most natural faults are inclined and may change dip according to rock type or flatten into mechanically weak layers. Boreholes located close to a fault can therefore cross it at depth, resulting in stratigraphic control points allocated to the wrong block. Also, faults tend to split up into several branches, forming fault zones. Obtaining a more accurate representation of faults and fault zones is therefore challenging. We present work in progress from the Thuringian Basin in central Germany. The fault zone geometries are never fully constrained by data and must be extrapolated to depth. We use balancing of serial, parallel cross-sections to constrain subsurface extrapolations. The structure sections are checked for consistency by restoring them to an undeformed state. If this is possible without producing gaps or overlaps, the interpretation is considered valid (but not unique) for a single cross-section. Additional constraints are provided by comparison of adjacent cross-sections: structures should change continuously from one section to another. Also, from the deformed and restored cross-sections we can measure the strain incurred during deformation. Strain should be compatible among the cross-sections: if it varies at all, it should vary smoothly and systematically along a given fault zone. The stratigraphic contacts and faults in the resulting grid of parallel balanced sections are then interpolated into a gOcad model containing stratigraphic boundaries and faults as triangulated surfaces. The interpolation is also controlled by borehole data located off the sections and by the surface traces of stratigraphic boundaries. We have written customized scripts to largely automate this step, with particular attention to a seamless fit between stratigraphic surfaces and fault planes, which share the same nodes and segments along their contacts. Additional attention was paid to the creation of a uniform triangulated grid with maximized angles. This ensures that uniform triangulated volumes can be created for further use in numerical flow modelling. An as yet unsolved problem is the implementation of the fault zones and their hydraulic properties in a large-scale model of the entire basin. Short-wavelength folds and subsidiary faults control which aquifers and seals are juxtaposed across the fault zones. It is impossible to include these structures in the regional model, but neglecting them would result in incorrect assessments of hydraulic links or barriers. We presently plan to test and calibrate the hydraulic properties of the fault zones in smaller, high-resolution models and then to implement geometrically simple "equivalent" fault zones with appropriate, variable transmissivities between specific aquifers.
Nelson, Alan R.; Personius, Stephen F.; Sherrod, Brian L.; Buck, Jason; Bradley, Lee-Ann; Henley, Gary; Liberty, Lee M.; Kelsey, Harvey M.; Witter, Robert C.; Koehler, R.D.; Schermer, Elizabeth R.; Nemser, Eliza S.; Cladouhos, Trenton T.
2008-01-01
As part of the effort to assess seismic hazard in the Puget Sound region, we map fault scarps on Airborne Laser Swath Mapping (ALSM, an application of LiDAR) imagery (with 2.5-m elevation contours on 1:4,000-scale maps) and show field and laboratory data from backhoe trenches across the scarps that are being used to develop a latest Pleistocene and Holocene history of large earthquakes on the Tacoma fault. We supplement previous Tacoma fault paleoseismic studies with data from five trenches on the hanging wall of the fault. In a new trench across the Catfish Lake scarp, broad folding of more tightly folded glacial sediment does not predate 4.3 ka because detrital charcoal of this age was found in stream-channel sand in the trench beneath the crest of the scarp. A post-4.3-ka age for scarp folding is consistent with previously identified uplift across the fault during AD 770-1160. In the trench across the younger of the two Stansberry Lake scarps, six maximum 14C ages on detrital charcoal in pre-faulting B and C soil horizons and three minimum ages on a tree root in post-faulting colluvium limit a single oblique-slip (right-lateral) surface faulting event to AD 410-990. Stratigraphy and sedimentary structures in the trench across the older scarp at the same site show eroded glacial sediments, probably cut by a meltwater channel, with no evidence of post-glacial deformation. At the northeast end of the Sunset Beach scarps, charcoal ages in two trenches across graben-forming scarps give a close maximum age of 1.3 ka for graben formation. The ages that best limit the time of faulting and folding in each of the trenches are consistent with the time of the large regional earthquake in southern Puget Sound about AD 900-930.
Rockwell, Thomas K.; Lindvall, Scott; Dawson, Tim; Langridge, Rob; Lettis, William; Klinger, Yann
2002-01-01
Surveys of multiple tree lines within groves of poplar trees, planted in straight lines across the fault prior to the earthquake, show surprisingly large lateral variations. In one grove, slip increases by nearly 1.8 m, or 35% of the maximum measured value, over a lateral distance of nearly 100 m. This and other observations along the 1999 ruptures suggest that the lateral variability of slip observed from displaced geomorphic features in many earthquakes of the past may represent a combination of (1) actual differences in slip at the surface and (2) the difficulty in recognizing distributed nonbrittle deformation.
Using minimal spanning trees to compare the reliability of network topologies
NASA Technical Reports Server (NTRS)
Leister, Karen J.; White, Allan L.; Hayhurst, Kelly J.
1990-01-01
Graph theoretic methods are applied to compute the reliability of several types of networks of moderate size. The graph theory methods used are minimal spanning trees for networks with bi-directional links and the related concept of strongly connected directed graphs for networks with uni-directional links. Ring networks and braided networks are compared. Both the case where only the links fail and the case where both links and nodes fail are covered. Two different failure modes for the links are considered: in one failure mode the link no longer carries messages; in the other, the link delivers incorrect messages. Link-redundancy and path-redundancy are described and compared as methods to achieve reliability. All the computations are carried out by means of a fault tree program.
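The paper computes reliability analytically through a fault tree program; as a rough numerical cross-check of the same comparison, the sketch below estimates the probability that a small ring network and a braided variant remain connected when links fail independently, using a Monte Carlo simulation with networkx. The topologies, failure probability, and sample size are illustrative.

```python
# Monte Carlo estimate of the probability that a network stays connected when
# links fail independently (illustrative; the paper uses fault-tree methods).
import random
import networkx as nx

def connected_probability(edges, p_link_fail, trials=20000, seed=0):
    rng = random.Random(seed)
    nodes = {n for e in edges for n in e}
    ok = 0
    for _ in range(trials):
        g = nx.Graph()
        g.add_nodes_from(nodes)
        g.add_edges_from(e for e in edges if rng.random() > p_link_fail)
        ok += nx.is_connected(g)
    return ok / trials

if __name__ == "__main__":
    ring = [(i, (i + 1) % 6) for i in range(6)]                # 6-node ring
    braided = ring + [(i, (i + 2) % 6) for i in range(6)]      # ring plus skip links
    print("ring:   ", connected_probability(ring, p_link_fail=0.05))
    print("braided:", connected_probability(braided, p_link_fail=0.05))
```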
NASA Astrophysics Data System (ADS)
Kissling, W. M.; Villamor, P.; Ellis, S. M.; Rae, A.
2018-05-01
Present-day geothermal activity on the margins of the Ngakuru graben and evidence of fossil hydrothermal activity in the central graben suggest that a graben-wide system of permeable intersecting faults acts as the principal conduit for fluid flow to the surface. We have developed numerical models of fluid and heat flow in a regional-scale 2-D cross-section of the Ngakuru Graben. The models incorporate simplified representations of two 'end-member' fault architectures (one symmetric at depth, the other highly asymmetric) which are consistent with the surface locations and dips of the Ngakuru graben faults. The models are used to explore controls on buoyancy-driven convective fluid flow which could explain the differences between the past and present hydrothermal systems associated with these faults. The models show that the surface flows from the faults are strongly controlled by the fault permeability, the fault system architecture and the location of the heat source with respect to the faults in the graben. In particular, fault intersections at depth allow exchange of fluid between faults, and the location of the heat source on the footwall of normal faults can facilitate upflow along those faults. These controls give rise to two distinct fluid flow regimes in the fault network. The first, a regular flow regime, is characterised by a nearly unchanging pattern of fluid flow vectors within the fault network as the fault permeability evolves. In the second, complex flow regime, the surface flows depend strongly on fault permeability, and can fluctuate in an erratic manner. The direction of flow within faults can reverse in both regimes as fault permeability changes. Both flow regimes provide insights into the differences between the present-day and fossil geothermal systems in the Ngakuru graben. Hydrothermal upflow along the Paeroa fault seems to have occurred, possibly continuously, for tens of thousands of years, while upflow in other faults in the graben has switched on and off during the same period. An asymmetric graben architecture with the Paeroa being the major boundary fault will facilitate the predominant upflow along this fault. Upflow on the axial faults is more difficult to explain with this modelling. It occurs most easily with an asymmetric graben architecture and heat sources close to the graben axis (which could be associated with remnant heat from recent eruptions from Okataina Volcanic Centre). Temporal changes in upflow can also be associated with acceleration and deceleration of fault activity if this is considered a proxy for fault permeability. Other explanations for temporal variations in hydrothermal activity not explored here are different permeability on different faults, and different permeability along fault strike.
Seismic Hazard Analysis on a Complex, Interconnected Fault Network
NASA Astrophysics Data System (ADS)
Page, M. T.; Field, E. H.; Milner, K. R.
2017-12-01
In California, seismic hazard models have evolved from simple, segmented prescriptive models to much more complex representations of multi-fault and multi-segment earthquakes on an interconnected fault network. During the development of the 3rd Uniform California Earthquake Rupture Forecast (UCERF3), the prevalence of multi-fault ruptures in the modeling was controversial. Yet recent earthquakes, for example the Kaikoura earthquake, as well as new research on the potential of multi-fault ruptures (e.g., Nissen et al., 2016; Sahakian et al., 2017), have validated this approach. For large crustal earthquakes, multi-fault ruptures may be the norm rather than the exception. As datasets improve and we can view the rupture process at a finer scale, the interconnected, fractal nature of faults is revealed even by individual earthquakes. What is the proper way to model earthquakes on a fractal fault network? We show multiple lines of evidence that connectivity even in modern models such as UCERF3 may be underestimated, although clustering in UCERF3 mitigates some modeling simplifications. We need a methodology that can be applied equally well where the fault network is well mapped and where it is not - an extendable methodology that allows us to "fill in" gaps in the fault network and in our knowledge.
Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong
2011-01-01
A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults, but not both, this method considers sensor faults, actuator faults, and component faults under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested in a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.
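The residual-evaluation step can be pictured with a much simpler stand-in than the paper's bank of adaptive estimators and Transferable Belief Model: compare the measurement against a nominal model prediction and flag a fault when the residual exceeds an adaptive threshold tied to the recent residual scatter. The signal, injected bias, and threshold constants below are hypothetical.

```python
# Toy residual-based fault detection with an adaptive threshold (a stand-in
# for the paper's adaptive estimators and TBM-based evaluation).
import numpy as np

def detect_faults(measured, predicted, window=20, k=4.0, floor=0.05):
    """Flag samples where |residual| exceeds floor + k * rolling std of the
    residual over the previous `window` samples."""
    residual = measured - predicted
    flags = np.zeros(residual.size, dtype=bool)
    for i in range(window, residual.size):
        threshold = floor + k * residual[i - window:i].std()
        flags[i] = abs(residual[i]) > threshold
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 500)
    nominal = np.sin(t)                          # hypothetical nominal sensor output
    measured = nominal + 0.02 * rng.standard_normal(t.size)
    measured[300:] += 0.5                        # injected sensor bias fault
    print("first flagged sample:", int(np.argmax(detect_faults(measured, nominal))))
```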
A footwall system of faults associated with a foreland thrust in Montana
NASA Astrophysics Data System (ADS)
Watkinson, A. J.
1993-05-01
Some recent structural geology models of faulting have promoted the idea of a rigid footwall behaviour or response under the main thrust fault, especially for fault ramps or fault-bend folds. However, a very well-exposed thrust fault in the Montana fold and thrust belt shows an intricate but well-ordered system of subsidiary minor faults in the footwall position with respect to the main thrust fault plane. Considerable shortening has occurred off the main fault in this footwall collapse zone and the distribution and style of the minor faults accord well with published patterns of aftershock foci associated with thrust faults. In detail, there appear to be geometrically self-similar fault systems from metre length down to a few centimetres. The smallest sets show both slip and dilation. The slickensides show essentially two-dimensional displacements, and three slip systems were operative—one parallel to the bedding, and two conjugate and symmetric about the bedding (acute angle of 45-50°). A reconstruction using physical analogue models suggests one possible model for the evolution and sequencing of slip of the thrust fault system.
Deformation associated with continental normal faults
NASA Astrophysics Data System (ADS)
Resor, Phillip G.
Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by interferometric synthetic aperture radar (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ~20 km from the fault surface trace, while the folds in the western Grand Canyon extend only 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates the advantages of mechanical models in exploring normal faulting processes, including the incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master normal fault illustrate how these secondary structures influence the deformation in ways that are similar to the fault/fold geometry mapped in the western Grand Canyon. Specifically, synthetic faults amplify hanging wall bedding dips, antithetic faults reduce dips, and joints act to localize deformation. The distribution of aftershocks in the hanging wall of the Kozani-Grevena earthquake suggests that secondary structures may accommodate strains associated with slip on a master fault during postseismic deformation.
NASA Astrophysics Data System (ADS)
Rainaud, Jean-François; Clochard, Vincent; Delépine, Nicolas; Crabié, Thomas; Poudret, Mathieu; Perrin, Michel; Klein, Emmanuel
2018-07-01
Accurate reservoir characterization is needed throughout the development of an oil and gas field. It helps in building 3D numerical reservoir simulation models for estimating the original oil and gas volumes in place and for simulating fluid flow behaviors. At a later stage of field development, reservoir characterization can also help decide which recovery techniques should be used for fluid extraction. In complex media, such as faulted reservoirs, predicting flow behavior within volumes close to faults can be a very challenging issue. During the development plan, it is necessary to determine which types of communication exist between faults or which potential barriers exist for fluid flow. Solving these issues rests on accurate fault characterization. In most cases, faults are not preserved along reservoir characterization workflows: the memory of the faults interpreted from seismic data is not kept during seismic inversion and further interpretation of the result. The goal of our study is, first, to integrate a 3D fault network as a priori information into a model-based stratigraphic inversion procedure. Secondly, we apply our methodology to a well-known oil and gas case study over a typical North Sea field (UK Northern North Sea) in order to demonstrate its added value for determining reservoir properties. More precisely, the a priori model is composed of several geological units populated by physical attributes extrapolated from well log data following the deposition mode; however, the usual a priori model building methods respect neither the 3D fault geometry nor the stratification dips on the fault sides. We address this difficulty by applying an efficient flattening method for each stratigraphic unit in our workflow. Even before seismic inversion, the obtained stratigraphic model has been used directly to model synthetic seismic data in our case study. The synthetic seismic data obtained from our 3D fault network model give much lower residuals than those from a "basic" stratigraphic model. Finally, we apply our model-based inversion considering both faulted and non-faulted a priori models. By comparing the rock impedance results obtained in the two cases, we see a better delineation of the Brent reservoir compartments when using the 3D faulted a priori model built with our method.
NASA Astrophysics Data System (ADS)
Lee, En-Jui; Chen, Po
2017-04-01
More precise spatial descriptions of fault systems play an essential role in tectonic interpretation, deformation modeling, and seismic hazard assessment. Recently developed full-3D waveform tomography techniques provide high-resolution images and are able to image material property differences across faults, assisting the understanding of fault systems. In the updated seismic velocity model for Southern California, CVM-S4.26, many velocity gradients are consistent with surface geology and with major faults defined in the Community Fault Model (CFM) (Plesch et al. 2007), which was constructed using various geological and geophysical observations. In addition to the faults in the CFM, CVM-S4.26 reveals a velocity reversal mainly beneath the San Gabriel Mountain and Western Mojave Desert regions, which is correlated with the detachment structure that has also been found in other independent studies. The high-resolution tomographic images of CVM-S4.26 could assist the understanding of fault systems in Southern California and therefore benefit the development of fault models as well as other applications, such as seismic hazard analysis, tectonic reconstruction, and crustal deformation modeling.
Shell Tectonics: A Mechanical Model for Strike-slip Displacement on Europa
NASA Technical Reports Server (NTRS)
Rhoden, Alyssa Rose; Wurman, Gilead; Huff, Eric M.; Manga, Michael; Hurford, Terry A.
2012-01-01
We introduce a new mechanical model for producing tidally-driven strike-slip displacement along preexisting faults on Europa, which we call shell tectonics. This model differs from previous models of strike-slip on icy satellites by incorporating a Coulomb failure criterion, approximating a viscoelastic rheology, determining the slip direction based on the gradient of the tidal shear stress rather than its sign, and quantitatively determining the net offset over many orbits. This model allows us to predict the direction of net displacement along faults and determine relative accumulation rate of displacement. To test the shell tectonics model, we generate global predictions of slip direction and compare them with the observed global pattern of strike-slip displacement on Europa in which left-lateral faults dominate far north of the equator, right-lateral faults dominate in the far south, and near-equatorial regions display a mixture of both types of faults. The shell tectonics model reproduces this global pattern. Incorporating a small obliquity into calculations of tidal stresses, which are used as inputs to the shell tectonics model, can also explain regional differences in strike-slip fault populations. We also discuss implications for fault azimuths, fault depth, and Europa's tectonic history.
NASA Astrophysics Data System (ADS)
Ye, Jiyang; Liu, Mian
2017-08-01
In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults (<2 Ma). The initiation of these young faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.
Curry, Magdalena A. E.; Barnes, Jason B.; Colgan, Joseph P.
2016-01-01
Common fault growth models diverge in predicting how faults accumulate displacement and lengthen through time. A paucity of field-based data documenting the lateral component of fault growth hinders our ability to test these models and fully understand how natural fault systems evolve. Here we outline a framework for using apatite (U-Th)/He thermochronology (AHe) to quantify the along-strike growth of faults. To test our framework, we first use a transect in the normal fault-bounded Jackson Mountains in the Nevada Basin and Range Province, then apply the new framework to the adjacent Pine Forest Range. We combine new and existing cross sections with 18 new and 16 existing AHe cooling ages to determine the spatiotemporal variability in footwall exhumation and evaluate models for fault growth. Three age-elevation transects in the Pine Forest Range show that rapid exhumation began along the range-front fault between approximately 15 and 11 Ma at rates of 0.2–0.4 km/Myr, ultimately exhuming approximately 1.5–5 km. The ages of rapid exhumation identified at each transect lie within data uncertainty, indicating concomitant onset of faulting along strike. We show that even in the case of growth by fault-segment linkage, the fault would achieve its modern length within 3–4 Myr of onset. Comparison with the Jackson Mountains highlights the inadequacies of spatially limited sampling. A constant fault-length growth model is the best explanation for our thermochronology results. We advocate that low-temperature thermochronology can be further utilized to better understand and quantify fault growth with broader implications for seismic hazard assessments and the coevolution of faulting and topography.
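The exhumation rates quoted above come from age-elevation transects. A minimal version of that calculation, shown below with invented sample elevations and AHe ages, fits elevation against cooling age; during rapid cooling, the slope approximates the apparent exhumation rate.

```python
# Exhumation rate from an age-elevation transect via linear regression
# (invented elevations and AHe ages, for illustration only).
import numpy as np

ages_ma = np.array([14.5, 13.2, 12.4, 11.6, 11.0])    # AHe cooling ages (Ma)
elev_km = np.array([2.40, 2.05, 1.80, 1.55, 1.30])    # sample elevations (km)

slope, intercept = np.polyfit(ages_ma, elev_km, 1)    # km per Myr
print(f"apparent exhumation rate ~ {abs(slope):.2f} km/Myr")
```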
Revised seismic hazard map for the Kyrgyz Republic
NASA Astrophysics Data System (ADS)
Fleming, Kevin; Ullah, Shahid; Parolai, Stefano; Walker, Richard; Pittore, Massimiliano; Free, Matthew; Fourniadis, Yannis; Villiani, Manuela; Sousa, Luis; Ormukov, Cholponbek; Moldobekov, Bolot; Takeuchi, Ko
2017-04-01
As part of a seismic risk study sponsored by the World Bank, a revised seismic hazard map for the Kyrgyz Republic has been produced, using the OpenQuake-engine developed by the Global Earthquake Model Foundation (GEM). In this project, an earthquake catalogue spanning a period from 250 BCE to 2014 was compiled and processed through spatial and temporal declustering tools. The territory of the Kyrgyz Republic was divided into 31 area sources defined on the basis of local seismicity, covering in total an area extending 200 km beyond the border. The results are presented in terms of Peak Ground Acceleration (PGA). In addition, macroseismic intensity estimates, making use of recent intensity prediction equations, were also provided, given that this measure is still widely used in Central Asia. In order to accommodate the associated epistemic uncertainty, three ground motion prediction equations were used in a logic tree structure. A set of representative earthquake scenarios was further identified based on historical data and the nature of the considered faults. The resulting hazard map, as expected, follows the country's seismicity, with the highest levels of hazard in the northeast, south and southwest of the country, with an elevated part around the centre. When considering PGA, the hazard is slightly greater for major urban centres than in previous works (e.g., Abdrakhmatov et al., 2003), although the macroseismic intensity estimates are lower than those of previous studies, e.g., Ulomov (1999). For the scenario assessments, the examples that most affect the assessed urban centres are the Issyk Ata fault (in particular for Bishkek), the Chilik and Kemin faults (in particular Balykchy and Karakol), the Ferghana Valley fault system (in particular Osh, Jalal-Abad and Uzgen), the Oinik Djar fault (Naryn) and the central and western Talas-Ferghana fault (Talas). Finally, while site effects (in particular, those dependent on the uppermost geological structure) have an obvious effect on the final hazard level, these are still not fully accounted for, even if a nationwide first-order Vs30 model (i.e., from the USGS) is available. Abdrakhmatov, K., Havenith, H.-B., Delvaux, D., Jongsmans, D. and Trefois, P. (2003) Probabilistic PGA and Arias Intensity maps of Kyrgyzstan (Central Asia), Journal of Seismology, 7, 203-220. Ulomov, V.I., The GSHAP Region 7 working group (1999) Seismic hazard of Northern Eurasia, Annali di Geofisica, 42, 1012-1038.
NASA Astrophysics Data System (ADS)
Bing, Xue; Yicai, Ji
2018-06-01
In order to directly understand and accurately analyze detected magnetotelluric (MT) data on anisotropic infinite faults, two-dimensional partial differential equations of the MT fields are used to establish a model of anisotropic infinite faults using the Fourier transform method. A multi-fault model is developed to extend the one-fault model. The transverse electric mode and transverse magnetic mode analytic solutions are derived using two-infinite-fault models. The infinite integral terms of the quasi-analytic solutions are discussed. The dual-fault model is computed using the finite element method to verify the correctness of the solutions. The MT responses of isotropic and anisotropic media are calculated to analyze the response functions for different anisotropic conductivity structures. The influence of the thickness and conductivity of the media on the MT responses is discussed, and the analytic principles are given. The analysis results are significant for how MT responses are understood and for the data interpretation of complex anisotropic infinite faults.
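For orientation on what an MT response function is, the short sketch below converts a surface impedance value into the standard apparent resistivity and phase, rho_a = |Z|^2/(omega*mu0) and phi = arg(Z). The impedance value and frequency are made up, and the paper's anisotropic analytic solutions are not reproduced.

```python
# Apparent resistivity and phase from a (hypothetical) MT impedance value.
import cmath
import math

MU0 = 4.0e-7 * math.pi          # vacuum permeability (H/m)

def mt_response(Z, freq_hz):
    """Return (apparent resistivity in ohm-m, phase in degrees) for an
    impedance Z = E/H in ohm at frequency freq_hz."""
    omega = 2.0 * math.pi * freq_hz
    rho_a = abs(Z) ** 2 / (omega * MU0)
    phase = math.degrees(cmath.phase(Z))
    return rho_a, phase

if __name__ == "__main__":
    print(mt_response(Z=0.02 + 0.02j, freq_hz=1.0))
```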
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2008-01-01
The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The models are used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when said faults, if present, would have an effect on the said actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.
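The patent mentions, without detailing here, "a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal." As a generic stand-in (not the patented filter), a smoothed derivative can be obtained with a Savitzky-Golay filter, sketched below on an invented signal.

```python
# Generic smoothed-derivative estimate for a noisy signal using a
# Savitzky-Golay filter (a stand-in, not the patented filtering method).
import numpy as np
from scipy.signal import savgol_filter

dt = 0.01
t = np.arange(0.0, 2.0, dt)
signal = np.sin(2.0 * np.pi * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)

# window_length must be odd; deriv=1 returns the first-derivative estimate.
dsignal = savgol_filter(signal, window_length=31, polyorder=3, deriv=1, delta=dt)

print("max abs error vs true derivative:",
      np.max(np.abs(dsignal - 2.0 * np.pi * np.cos(2.0 * np.pi * t))))
```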
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1979-01-01
A viscoelastic model for deformation and stress associated with earthquakes is reported. The model consists of a rectangular dislocation (strike-slip fault) in a viscoelastic layer (lithosphere) lying over a viscoelastic half space (asthenosphere). The time-dependent surface stresses are analyzed. The model predicts that near the fault a significant fraction of the stress that was reduced during the earthquake is recovered by viscoelastic softening of the lithosphere. By contrast, the strain shows very little change near the fault. The model also predicts that the stress changes associated with asthenospheric flow extend over a broader region than those associated with lithospheric relaxation, even though the peak value is less. The dependence of the displacements and stresses on fault parameters is also studied. Peak values of strain and stress drop increase with increasing fault height and decrease with fault depth. Under many circumstances postseismic strains and stresses increase with decreasing depth to the lithosphere-asthenosphere boundary. Values of the strain and stress at points distant from the fault increase with fault area but are relatively insensitive to fault depth.
NASA Astrophysics Data System (ADS)
Glesener, G. B.; Peltzer, G.; Stubailo, I.; Cochran, E. S.; Lawrence, J. F.
2009-12-01
The Modeling and Educational Demonstrations Laboratory (MEDL) at the University of California, Los Angeles has developed a fourth version of the Elastic Rebound Strike-slip (ERS) Fault Model to be used to educate students and the general public about the process and mechanics of earthquakes from strike-slip faults. The ERS Fault Model is an interactive hands-on teaching tool which produces failure on a predefined fault embedded in an elastic medium, with adjustable normal stress. With the addition of an accelerometer sensor, called the Joy Warrior, the user can experience what it is like for a field geophysicist to collect and observe ground shaking data from an earthquake without having to experience a real earthquake. Two knobs on the ERS Fault Model control the normal and shear stress on the fault. Adjusting the normal stress knob will increase or decrease the friction on the fault. The shear stress knob displaces one side of the elastic medium parallel to the strike of the fault, resulting in changing shear stress on the fault surface. When the shear stress exceeds the threshold defined by the static friction of the fault, an earthquake on the model occurs. The accelerometer sensor then sends the data to a computer where the shaking of the model due to the sudden slip on the fault can be displayed and analyzed by the student. The experiment clearly illustrates the relationship between earthquakes and seismic waves. One of the major benefits to using the ERS Fault Model in undergraduate courses is that it helps to connect non-science students with the work of scientists. When students that are not accustomed to scientific thought are able to experience the scientific process first hand, a connection is made between the scientists and students. Connections like this might inspire a student to become a scientist, or promote the advancement of scientific research through public policy.
Model-Based Diagnostics for Propellant Loading Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Foygel, Michael; Smelyanskiy, Vadim N.
2011-01-01
The loading of spacecraft propellants is a complex, risky operation. Therefore, diagnostic solutions are necessary to quickly identify when a fault occurs, so that recovery actions can be taken or an abort procedure can be initiated. Model-based diagnosis solutions, established using an in-depth analysis and understanding of the underlying physical processes, offer the advanced capability to quickly detect and isolate faults, identify their severity, and predict their effects on system performance. We develop a physics-based model of a cryogenic propellant loading system, which describes the complex dynamics of liquid hydrogen filling from a storage tank to an external vehicle tank, as well as the influence of different faults on this process. The model takes into account the main physical processes such as highly nonequilibrium condensation and evaporation of the hydrogen vapor, pressurization, and also the dynamics of liquid hydrogen and vapor flows inside the system in the presence of helium gas. Since the model incorporates multiple faults in the system, it provides a suitable framework for model-based diagnostics and prognostics algorithms. Using this model, we analyze the effects of faults on the system, derive symbolic fault signatures for the purposes of fault isolation, and perform fault identification using a particle filter approach. We demonstrate the detection, isolation, and identification of a number of faults using simulation-based experiments.
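The fault-identification step uses a particle filter. The sketch below shows that idea on a deliberately simplified problem: estimating a single leak coefficient from noisy flow measurements by weighting and resampling particles. The measurement model, noise levels, and the leak parameter are hypothetical and bear no relation to the cryogenic loading physics described in the paper.

```python
# Toy particle filter estimating a single fault parameter (a leak coefficient)
# from noisy flow measurements; the measurement model is hypothetical.
import numpy as np

rng = np.random.default_rng(2)
true_leak = 0.30
flow_nominal = 5.0                                   # arbitrary units
noise_std = 0.05

def measure(leak):                                   # hypothetical measurement model
    return flow_nominal * (1.0 - leak)

n = 2000
particles = rng.uniform(0.0, 1.0, n)                 # prior over the leak coefficient
for _ in range(50):                                  # 50 measurement updates
    z = measure(true_leak) + noise_std * rng.standard_normal()
    w = np.exp(-0.5 * ((z - measure(particles)) / noise_std) ** 2)
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)                 # resample by importance weight
    particles = particles[idx] + 0.005 * rng.standard_normal(n)  # roughening jitter

print("estimated leak coefficient:", round(float(particles.mean()), 3))
```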
Intelligent classifier for dynamic fault patterns based on hidden Markov model
NASA Astrophysics Data System (ADS)
Xu, Bo; Feng, Yuguang; Yu, Jinsong
2006-11-01
It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, after the dynamic observation vector in the measurement space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The use of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aero-engines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault pattern classifier is efficient and convenient for detecting and diagnosing new faults.
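The DTW preprocessing step can be illustrated with a compact dynamic-programming implementation that aligns a test sequence to a reference template and returns the accumulated warping cost. The sequences below are invented, and the SOFM and HMM stages of the classifier are not shown.

```python
# Classic dynamic-programming DTW distance between two 1-D sequences
# (invented data; the SOFM and HMM stages of the classifier are not shown).
import numpy as np

def dtw_distance(a, b):
    """Cumulative-cost DTW distance with the unit step pattern."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

if __name__ == "__main__":
    reference = np.sin(np.linspace(0.0, np.pi, 40))          # nominal template
    test = np.sin(np.linspace(0.0, np.pi, 55)) + 0.02        # time-warped, offset copy
    print("DTW distance:", round(float(dtw_distance(reference, test)), 3))
```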
NASA Astrophysics Data System (ADS)
Nicholson, C.; Plesch, A.; Sorlien, C. C.; Shaw, J. H.; Hauksson, E.
2014-12-01
Southern California represents an ideal natural laboratory to investigate oblique deformation in 3D owing to its comprehensive datasets, complex tectonic history, evolving components of oblique slip, and continued crustal rotations about horizontal and vertical axes. As the SCEC Community Fault Model (CFM) aims to accurately reflect this 3D deformation, we present the results of an extensive update to the model by using primarily detailed fault trace, seismic reflection, relocated hypocenter and focal mechanism nodal plane data to generate improved, more realistic digital 3D fault surfaces. The results document a wide variety of oblique strain accommodation, including various aspects of strain partitioning and fault-related folding, sets of both high-angle and low-angle faults that mutually interact, significant non-planar, multi-stranded faults with variable dip along strike and with depth, and active mid-crustal detachments. In places, closely-spaced fault strands or fault systems can remain surprisingly subparallel to seismogenic depths, while in other areas, major strike-slip to oblique-slip faults can merge, such as the S-dipping Arroyo Parida-Mission Ridge and Santa Ynez faults with the N-dipping North Channel-Pitas Point-Red Mountain fault system, or diverge with depth. Examples of the latter include the steep-to-west-dipping Laguna Salada-Indiviso faults with the steep-to-east-dipping Sierra Cucapah faults, and the steep southern San Andreas fault with the adjacent NE-dipping Mecca Hills-Hidden Springs fault system. In addition, overprinting by steep predominantly strike-slip faulting can segment which parts of intersecting inherited low-angle faults are reactivated, or result in mutual cross-cutting relationships. The updated CFM 3D fault surfaces thus help characterize a more complex pattern of fault interactions at depth between various fault sets and linked fault systems, and a more complex fault geometry than typically inferred or expected from projecting near-surface data down-dip, or modeled from surface strain and potential field data alone.
Nguyen, Ba Nghiep; Hou, Zhangshuan; Last, George V.; ...
2016-09-29
This work develops a three-dimensional multiscale model to analyze a complex faulted CO2 reservoir that includes some key geological features of the San Andreas and nearby faults southwest of the Kimberlina site. The model uses the STOMP-CO2 code for flow modeling coupled to the ABAQUS® finite element package for geomechanical analysis. A 3D ABAQUS® finite element model is developed that contains a large number of 3D solid elements with two nearly parallel faults whose damage zones and cores are discretized using the same continuum elements. Five zones with different mineral compositions are considered: shale, sandstone, fault-damaged sandstone, fault-damaged shale, and fault core. The elastic properties of the rocks that govern their poroelastic behavior are modeled by an Eshelby-Mori-Tanaka approach (EMTA), which can account for up to 15 mineral phases. The permeability of fault damage zones affected by crack density and orientation is also predicted by an EMTA formulation. A STOMP-CO2 grid that exactly maps the ABAQUS® finite element model is built for coupled hydro-mechanical analyses. Simulations of the reservoir assuming three different crack patterns (including crack volume fraction and orientation) for the fault damage zones are performed to predict the potential leakage of CO2 through cracks that enhance the permeability of the fault damage zones. The results illustrate the important effect of crack orientation on fault permeability, which can lead to substantial leakage along the fault as the CO2 plume expands. Potential hydraulic fracturing and the tendency of the faults to slip are also examined and discussed in terms of stress distributions and geomechanical properties.
Pet-Armacost, J J; Sepulveda, J; Sakude, M
1999-12-01
The US Department of Transportation was interested in the risks associated with transporting Hydrazine in tanks with and without relief devices. Hydrazine is both highly toxic and flammable, as well as corrosive. Consequently, there was a conflict as to whether a relief device should be used or not. Data were not available on the impact of relief devices on release probabilities or the impact of Hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of Hydrazine. To help determine whether or not relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport of Hydrazine. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were analyzed through a Monte Carlo sensitivity analysis and analyzed statistically through an analysis of variance. The analysis allowed the determination of which of the unknown parameters had a significant impact on the risks. It also provided the necessary support to a critical transportation decision even though the values of several key parameters were not known.
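The study above propagates uncertain parameters through event trees via Monte Carlo sampling and then asks which unknowns drive the risk. The sketch below illustrates that workflow on a toy event tree for a hydrazine release; the distributions, branch probabilities, and outcome categories are illustrative assumptions, not the paper's data.

```python
# Monte Carlo sensitivity sketch over a toy event tree for a hydrazine release.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Uncertain inputs (no data available -> wide uniform ranges, all assumed).
p_release = rng.uniform(1e-5, 1e-3, n_trials)     # release per shipment
p_ignition = rng.uniform(0.01, 0.5, n_trials)     # ignition given release
p_explosion = rng.uniform(0.05, 0.3, n_trials)    # explosion given ignition

# Event-tree sequence probabilities.
p_toxic_only = p_release * (1 - p_ignition)
p_fire = p_release * p_ignition * (1 - p_explosion)
p_expl = p_release * p_ignition * p_explosion

for name, p in [("toxic exposure", p_toxic_only), ("fire", p_fire), ("explosion", p_expl)]:
    print(f"{name:15s} mean={p.mean():.2e}  95th pct={np.percentile(p, 95):.2e}")

# Crude sensitivity measure: correlation of each uncertain input with total risk.
total = p_toxic_only + p_fire + p_expl
for name, x in [("p_release", p_release), ("p_ignition", p_ignition), ("p_explosion", p_explosion)]:
    print(name, "correlation with total risk:", round(np.corrcoef(x, total)[0, 1], 3))
```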
Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications
NASA Technical Reports Server (NTRS)
Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai F.; Curran, Simon
2009-01-01
Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults exist. It is equally difficult to seed particular sensor faults. Therefore, research is underway to better understand the different fault modes seen in sensors and to model the faults. The fault models can then be used in simulated sensor fault scenarios to ensure that algorithms can distinguish between sensor faults and system faults. The paper illustrates the work with data collected from an electro-mechanical actuator in an aerospace setting, equipped with temperature, vibration, current, and position sensors. The most common sensor faults, such as bias, drift, scaling, and dropout were simulated and injected into the experimental data, with the goal of making these simulations as realistic as feasible. A neural network based classifier was then created and tested on both experimental data and the more challenging randomized data sequences. Additional studies were also conducted to determine sensitivity of detection and disambiguation efficacy to severity of fault conditions.
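The common sensor fault modes named above (bias, drift, scaling, dropout) can be injected into clean data as simple transformations of the signal after a fault onset time. The sketch below shows one way to do that; the signal, onset, and fault magnitudes are illustrative and do not reproduce the actuator experiments.

```python
# Inject a simulated sensor fault (bias, drift, scaling, or dropout) into a signal.
import numpy as np

def inject_fault(signal, mode, onset, magnitude=1.0, rng=None):
    """Return a copy of `signal` with a fault starting at sample `onset`."""
    out = signal.astype(float).copy()
    n = len(out)
    if mode == "bias":            # constant offset after onset
        out[onset:] += magnitude
    elif mode == "drift":         # linearly growing offset
        out[onset:] += magnitude * np.arange(n - onset) / (n - onset)
    elif mode == "scaling":       # multiplicative gain error
        out[onset:] *= magnitude
    elif mode == "dropout":       # sensor sticks at its last good value
        out[onset:] = out[onset - 1]
    else:
        raise ValueError(f"unknown fault mode: {mode}")
    if rng is not None:           # optionally keep a sensor noise floor
        out += rng.normal(0.0, 0.01, n)
    return out

t = np.linspace(0, 10, 1000)
clean = np.sin(2 * np.pi * 0.5 * t)            # stand-in for a position sensor trace
faulty = inject_fault(clean, "drift", onset=600, magnitude=0.5)
```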
Analytic Study of Three-Dimensional Rupture Propagation in Strike-Slip Faulting with Analogue Models
NASA Astrophysics Data System (ADS)
Chan, Pei-Chen; Chu, Sheng-Shin; Lin, Ming-Lang
2014-05-01
Strike-slip faults are high-angle (nearly vertical) fractures along which the blocks move nearly horizontally, parallel to strike. Overburden soil profiles across the main traces of strike-slip faults reveal characteristic palm-tree and tulip structures. McCalpin (2005) traced rupture propagation at the surface of the overburden soil. In this study, we used sandbox model profiles at different amounts of slip to study the evolution of three-dimensional rupture propagation caused by strike-slip faulting. In the strike-slip fault model, the style of rupture propagation and the width of the shear zone (W) are controlled primarily by the depth of the overburden layer (H) and the fault slip distance (Sy). Because little research has traced three-dimensional rupture behavior and propagation, this simplified sandbox model investigates rupture propagation and the shear zone in profiles across the main fault as the overburden depth and fault slip vary. The quantities examined in the model include the width of the shear zone, the length of rupture (L), the angle of rupture (θ), and the spacing of ruptures. The surface results follow the literature: the failure envelope evolves in the sequence of R-faults, P-faults, and Y-faults, the last of which are parallel to the basement fault. Comparing the surface and profile structures, which form curved surfaces that cross one another, defines the 3-D rupture and the width of the shear zone. We found that an increase in fault slip produces a greater shear-zone width, and we propose a W/H versus Sy/H relationship. Deformation of the shear zone followed a trend similar to the literature in that W increased with increasing fault slip; however, the trend reversed after W reached a peak (when Sy/H was 1) at a value smaller than 1.5. The results show that in 3-D models of strike-slip faulting the shear-zone width W is limited to a roughly constant value. In conclusion, this study helps evaluate the extent of the regions influenced by the shear zone of strike-slip faults.
Architecture Analysis with AADL: The Speed Regulation Case-Study
2014-11-01
Overview: Functional Hazard Analysis (FHA) provides a failure inventory with descriptions and classifications; Fault-Tree Analysis (FTA) captures the dependencies between failures. Julien Delange, Pittsburgh, PA 15213.
Journal of Air Transportation, Volume 12, No. 2 (ATRS Special Edition)
NASA Technical Reports Server (NTRS)
Bowen, Brent D. (Editor); Kabashkin, Igor (Editor); Fink, Mary (Editor)
2007-01-01
Topics covered include: Competition and Change in the Long-Haul Markets from Europe; Insights into the Maintenance, Repair, and Overhaul Configurations of European Airlines; Validation of Fault Tree Analysis in Aviation Safety Management; An Investigation into Airline Service Quality Performance between U.S. Legacy Carriers and Their EU Competitors and Partners; and Climate Impact of Aircraft Technology and Design Changes.
TH-EF-BRC-04: Quality Management Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yorke, E.
2016-06-15
This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100’s risk analysis: Process Mapping, Failure Modes and Effects Analysis, and Fault Tree Analysis will be introduced with a 5-minute refresher presentation, and each presentation will be followed by a 30-minute small group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA.
Fuentes-Bargues, José Luis; González-Cruz, Mª Carmen; González-Gaya, Cristina; Baixauli-Pérez, Mª Piedad
2017-06-30
The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that an analysis and control of the risks involved is required. This paper presents a methodology for risk analysis in chemical and allied industries based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, i.e., fault tree analysis (FTA). The FTA results allow the preventive and corrective measures to be prioritized in order to minimize the probability of failure. A case study is analyzed; it consists of the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). The HAZOP analysis shows that the loading and unloading areas are the most sensitive areas of the plant and that the most significant danger is a fuel spill. The FTA indicates that the most likely event is a fuel spill in the tank truck loading area. A sensitivity analysis of the FTA results shows the importance of the human factor in all sequences of the possible accidents, so it should be mandatory to improve the training of the plant staff.
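The quantitative FTA step described above combines basic-event probabilities through minimal cut sets to estimate the top-event probability. The sketch below shows that calculation under the usual independence assumption; the basic events, cut sets, and probabilities are illustrative placeholders, not values from the Valencia case study.

```python
# Top-event probability from minimal cut sets (independent basic events).
from itertools import combinations

basic_events = {                 # per-operation probabilities (assumed)
    "hose_rupture": 1e-4,
    "valve_left_open": 5e-4,
    "operator_misses_alarm": 2e-3,
    "level_sensor_fails": 1e-3,
}

# Minimal cut sets for the top event "fuel spill in the loading area".
cut_sets = [
    {"hose_rupture"},
    {"valve_left_open", "operator_misses_alarm"},
    {"level_sensor_fails", "operator_misses_alarm"},
]

def cut_set_prob(events):
    p = 1.0
    for e in events:
        p *= basic_events[e]
    return p

# Rare-event (upper-bound) approximation: sum of cut-set probabilities.
upper_bound = sum(cut_set_prob(cs) for cs in cut_sets)

# Exact value by inclusion-exclusion over unions of cut sets.
exact = 0.0
for k in range(1, len(cut_sets) + 1):
    for combo in combinations(cut_sets, k):
        exact += (-1) ** (k + 1) * cut_set_prob(set().union(*combo))

print(f"top event, rare-event bound:    {upper_bound:.3e}")
print(f"top event, inclusion-exclusion: {exact:.3e}")
```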
Rath, Frank
2008-01-01
This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve throughput, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.
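Of the tools listed above, FMEA is the most directly quantitative: each failure mode is scored for severity, occurrence, and detectability, and the product (the risk priority number, RPN) ranks where to act first. The sketch below shows that scoring step only; the failure modes and 1-10 ratings are illustrative, not taken from the article.

```python
# Rank failure modes by risk priority number (RPN = severity x occurrence x detection).
failure_modes = [
    # (description, severity, occurrence, detection), each rated 1-10 (assumed values)
    ("wrong patient setup",                 9, 3, 4),
    ("incorrect beam energy selected",      8, 2, 2),
    ("treatment plan transcription error",  7, 4, 5),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN={s * o * d:4d}  {desc}")
```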
Liu, Xiao Yu; Xue, Kang Ning; Rong, Rong; Zhao, Chi Hong
2016-01-01
Epidemic hemorrhagic fever has been an ongoing threat to laboratory personnel involved in animal care and use. Laboratory transmissions and severe infections have occurred over the past twenty years, even though standards and regulations for laboratory biosafety have been issued, upgraded, and implemented in China. Therefore, there is an urgent need to identify risk factors and to seek effective preventive measures that can curb the incidence of epidemic hemorrhagic fever among laboratory personnel. In the present study, we reviewed literature relevant to laboratory-acquired hemorrhagic fever infections associated with animals reported from 1995 to 2015, and analyzed these incidents using fault tree analysis (FTA). The data analysis showed that purchasing qualified animals and guarding against wild rats, which together ensure that laboratory animals are free of hantaviruses, are the basic measures to prevent infection. In day-to-day management, awareness of personal protection and personal protective skills need to be further improved. Vaccination is undoubtedly the most direct and effective method, but it plays its role only after infection, so the prevention of infection cannot rely entirely on vaccination. Copyright © 2016 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
Fault tree analysis of the causes of waterborne outbreaks.
Risebro, Helen L; Doria, Miguel F; Andersson, Yvonne; Medema, Gertjan; Osborn, Keith; Schlosser, Olivier; Hunter, Paul R
2007-01-01
Prevention and containment of outbreaks requires examination of the contribution and interrelation of outbreak causative events. An outbreak fault tree was developed and applied to 61 enteric outbreaks related to public drinking water supplies in the EU. A mean of 3.25 causative events per outbreak was identified; each event was assigned a score based on its percentage contribution to the outbreak. Source and treatment system causative events often occurred concurrently (in 34 outbreaks). Distribution system causative events occurred less frequently (19 outbreaks) but were often solitary events contributing heavily towards the outbreak (a mean percentage score of 87.42). Livestock and rainfall in the catchment with no/inadequate filtration of water sources contributed concurrently to 11 of 31 Cryptosporidium outbreaks. Of the 23 protozoan outbreaks experiencing at least one treatment causative event, 90% of these events were filtration deficiencies; by contrast, for bacterial, viral, gastroenteritis and mixed pathogen outbreaks, 75% of treatment events were disinfection deficiencies. Roughly equal numbers of groundwater and surface water outbreaks experienced at least one treatment causative event (18 and 17 outbreaks, respectively). Retrospective analysis of multiple outbreaks of enteric disease can be used to inform outbreak investigations, facilitate corrective measures, and further develop multi-barrier approaches.
Fault and event tree analyses for process systems risk analysis: uncertainty handling formulations.
Ferdous, Refaul; Khan, Faisal; Sadiq, Rehan; Amyotte, Paul; Veitch, Brian
2011-01-01
Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event (ETA) and fault tree analyses (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed. © 2010 Society for Risk Analysis.
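The article above replaces crisp probabilities with fuzzy and evidential numbers and adds a dependency coefficient. As a much simpler illustration of the underlying idea, the sketch below propagates interval-valued likelihoods through AND/OR gates of independent basic events; the intervals are illustrative and the fuzzy/evidence-theory machinery and dependency handling of the paper are not reproduced.

```python
# Propagate interval probabilities through fault-tree AND/OR gates
# (independent basic events; a simplified stand-in for fuzzy/evidential numbers).
def and_gate(*intervals):
    lo, hi = 1.0, 1.0
    for l, h in intervals:
        lo *= l
        hi *= h
    return (lo, hi)

def or_gate(*intervals):
    not_lo, not_hi = 1.0, 1.0
    for l, h in intervals:
        not_lo *= (1.0 - l)
        not_hi *= (1.0 - h)
    return (1.0 - not_lo, 1.0 - not_hi)

# Interval likelihoods for basic events (assumed bounds, not data).
pump_fails   = (1e-3, 5e-3)
valve_sticks = (2e-3, 8e-3)
sensor_fails = (1e-4, 1e-3)

# Top event = (pump_fails AND valve_sticks) OR sensor_fails
top = or_gate(and_gate(pump_fails, valve_sticks), sensor_fails)
print("top-event probability interval:", top)
```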
NASA Astrophysics Data System (ADS)
Gable, C. W.; Fialko, Y.; Hager, B. H.; Plesch, A.; Williams, C. A.
2006-12-01
More realistic models of crustal deformation are possible due to advances in measurements and modeling capabilities. This study integrates various data to constrain a finite element model of stress and strain in the vicinity of the 1992 Landers earthquake and the 1999 Hector Mine earthquake. The geometry of the model is designed to incorporate the Southern California Earthquake Center (SCEC), Community Fault Model (CFM) to define fault geometry. The Hector Mine fault is represented by a single surface that follows the trace of the Hector Mine fault, is vertical and has variable depth. The fault associated with the Landers earthquake is a set of seven surfaces that capture the geometry of the splays and echelon offsets of the fault. A three dimensional finite element mesh of tetrahedral elements is built that closely maintains the geometry of these fault surfaces. The spatially variable coseismic slip on faults is prescribed based on an inversion of geodetic (Synthetic Aperture Radar and Global Positioning System) data. Time integration of stress and strain is modeled with the finite element code Pylith. As a first step the methodology of incorporating all these data is described. Results of the time history of the stress and strain transfer between 1992 and 1999 are analyzed as well as the time history of deformation from 1999 to the present.
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
Automated forward mechanical modeling of wrinkle ridges on Mars
NASA Astrophysics Data System (ADS)
Nahm, Amanda; Peterson, Samuel
2016-04-01
One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling of the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing for faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometry [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time consuming, requiring user inputs to change model parameters and to calculate the associated displacements for each model, which limits the number of models and parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites. For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B) were varied. A second set of input files was created (Suite 2) after the best-fit model from Suite 1 was determined, in which fault parameters were varied with a smaller range and incremental changes, resulting in a total of 28,080 input files. RMS values were calculated for each Coulomb model. RMS values for Suite 1 models were calculated over the entire profile and for a restricted x range; the latter reduces the RMS misfit by 1.2 m. The minimum RMS value for Suite 2 models decreases again by 0.2 m, resulting in an overall reduction of the RMS value of ~1.4 m (18%). Models with different fault lengths (15, 30, and 60 km) are visually indistinguishable. Values for δ, t, B, and RMS misfit are either the same or very similar for each best fit model. These results indicate that the subsurface structure can be reliably determined from forward mechanical modeling even with uncertainty in fault length. Future work will test this method with the more realistic WR fault geometry. References: [1] Banerdt et al. (2013), 44th LPSC, #1915. [2] Cohen (1999), Adv. Geophys., 41, 133-231. [3] Schultz and Lin (2001), JGR, 106, 16549-16566. [4] Lin and Stein (2004), JGR, 109, B02303, doi:10.1029/2003JB002607. [5] Toda et al. (2005), JGR, 103, 24543-24565. [6] Okubo and Schultz (2004), GSAB, 116, 597-605. [7] Watters (2004), Icarus, 171, 284-294. [8] Schultz (2000), JGR, 105, 12035-12052.
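The GUI-bypass workflow described above amounts to generating input files over predefined parameter ranges, running the forward model for each combination, and ranking models by RMS misfit. The sketch below illustrates that loop only; the forward model is a crude analytic stand-in for the Coulomb dislocation calculation, and all parameter ranges and the synthetic "observed" profile are illustrative assumptions, not the study's 3770/28,080 model suites.

```python
# Batch forward-modeling workflow: sweep fault parameters and rank by RMS misfit.
import itertools
import numpy as np

def forward_model(x, D, dip_deg, t, B):
    """Toy scarp profile whose amplitude scales with slip and whose width scales
    with the fault-tip depths; replace with the real dislocation calculation."""
    dip = np.radians(dip_deg)
    width = (t + B) / np.tan(dip)
    return (D * np.sin(dip) / np.pi) * (np.arctan(x / width) + np.pi / 2)

def rms(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Synthetic "observed" topography from known parameters plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-20e3, 20e3, 201)                      # profile coordinate (m)
observed = forward_model(x, 200.0, 35.0, 1e3, 5e3) + rng.normal(0.0, 2.0, x.size)

param_grid = itertools.product(
    np.arange(50.0, 401.0, 50.0),        # fault displacement D (m)
    np.arange(20.0, 51.0, 5.0),          # dip angle (degrees)
    np.arange(0.5e3, 3.1e3, 0.5e3),      # depth to upper fault tip t (m)
    np.arange(3e3, 8.1e3, 1e3),          # depth to lower fault tip B (m)
)

results = [(rms(forward_model(x, D, dip, t, B), observed), D, dip, t, B)
           for D, dip, t, B in param_grid if t < B]   # upper tip must be above lower tip

print("best fit (RMS misfit, D, dip, t, B):", min(results))
```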
NASA Astrophysics Data System (ADS)
Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.
2013-12-01
Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults to fluid flow patterns remains a challenge and requires a multidisciplinary research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the discipline of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.
NASA Astrophysics Data System (ADS)
Pinzuti, Paul; Mignan, Arnaud; King, Geoffrey C. P.
2010-10-01
Tectonic-stretching models have been previously proposed to explain the process of continental break-up through the example of the Asal Rift, Djibouti, one of the few places where the early stages of seafloor spreading can be observed. In these models, deformation is distributed starting at the base of a shallow seismogenic zone, in which sub-vertical normal faults are responsible for subsidence whereas cracks accommodate extension. Alternative models suggest that extension results from localised magma intrusion, with normal faults accommodating extension and subsidence only above the maximum reach of the magma column. In these magmatic rifting models, or so-called magmatic intrusion models, normal faults have dips of 45-55° and root into dikes. Vertical profiles of normal fault scarps from levelling campaign in the Asal Rift, where normal faults seem sub-vertical at surface level, have been analysed to discuss the creation and evolution of normal faults in massive fractured rocks (basalt lava flows), using mechanical and kinematics concepts. We show that the studied normal fault planes actually have an average dip ranging between 45° and 65° and are characterised by an irregular stepped form. We suggest that these normal fault scarps correspond to sub-vertical en echelon structures, and that, at greater depth, these scarps combine and give birth to dipping normal faults. The results of our analysis are compatible with the magmatic intrusion models instead of tectonic-stretching models. The geometry of faulting between the Fieale volcano and Lake Asal in the Asal Rift can be simply related to the depth of diking, which in turn can be related to magma supply. This new view supports the magmatic intrusion model of early stages of continental breaking.
Growth trishear model and its application to the Gilbertown graben system, southwest Alabama
Jin, G.; Groshong, R.H.; Pashin, J.C.
2009-01-01
Fault-propagation folding associated with an upward propagating fault in the Gilbertown graben system is revealed by well-based 3-D subsurface mapping and dipmeter analysis. The fold is developed in the Selma chalk, which is an oil reservoir along the southern margin of the graben. Area-depth-strain analysis suggests that the Cretaceous strata were growth units, the Jurassic strata were pregrowth units, and the graben system is detached in the Louann Salt. The growth trishear model has been applied in this paper to study the evolution and kinematics of extensional fault-propagation folding. Models indicate that the propagation to slip (p/s) ratio of the underlying fault plays an important role in governing the geometry of the resulting extensional fault-propagation fold. With a greater p/s ratio, the fold is more localized in the vicinity of the propagating fault. The extensional fault-propagation fold in the Gilbertown graben is modeled by both a compactional and a non-compactional growth trishear model. Both models predict a similar geometry of the extensional fault-propagation fold. The trishear model with compaction best predicts the fold geometry. © 2008 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Aprilia, Ayu Rizky; Santoso, Imam; Ekasari, Dhita Murita
2017-05-01
Yogurt is a milk-based product with beneficial effects on health. The yogurt production process is very susceptible to failure because it involves bacteria and fermentation. For industry, these risks may cause harm and have a negative impact, so a successful and profitable product requires an analysis of the risks that may occur during production. Risk analysis can identify the risks in detail, prevent them, and determine how they should be handled so that they can be minimized. Therefore, this study analyzes the risks of the production process in a case study at CV.XYZ. The methods used in this research are Fuzzy Failure Mode and Effect Analysis (fuzzy FMEA) and Fault Tree Analysis (FTA). The results show six risks arising from equipment, raw material, and process variables. These include the critical risk of an inadequate aseptic process, specifically damage to the yogurt starter through contamination by fungi or other bacteria, and a lack of sanitation of equipment. The quantitative FTA shows that the most probable cause is the lack of an aseptic process, with a probability of 3.902%. Recommendations for improvement include establishing SOPs (Standard Operating Procedures) covering the process, workers, and environment; controlling the yogurt starter; and improving production planning and equipment sanitation using hot water immersion.
NASA Astrophysics Data System (ADS)
Budach, Ingmar; Moeck, Inga; Lüschen, Ewald; Wolfgramm, Markus
2018-03-01
The structural evolution of faults in foreland basins is linked to a complex basin history ranging from extension to contraction and inversion tectonics. Faults in the Upper Jurassic of the German Molasse Basin, a Cenozoic Alpine foreland basin, play a significant role for geothermal exploration and are therefore imaged, interpreted and studied by 3D seismic reflection data. Beyond this applied aspect, the analysis of these seismic data help to better understand the temporal evolution of faults and respective stress fields. In 2009, a 27 km2 3D seismic reflection survey was conducted around the Unterhaching Gt 2 well, south of Munich. The main focus of this study is an in-depth analysis of a prominent v-shaped fault block structure located at the center of the 3D seismic survey. Two methods were used to study the periodic fault activity and its relative age of the detected faults: (1) horizon flattening and (2) analysis of incremental fault throws. Slip and dilation tendency analyses were conducted afterwards to determine the stresses resolved on the faults in the current stress field. Two possible kinematic models explain the structural evolution: One model assumes a left-lateral strike slip fault in a transpressional regime resulting in a positive flower structure. The other model incorporates crossing conjugate normal faults within a transtensional regime. The interpreted successive fault formation prefers the latter model. The episodic fault activity may enhance fault zone permeability hence reservoir productivity implying that the analysis of periodically active faults represents an important part in successfully targeting geothermal wells.
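The slip- and dilation-tendency analyses mentioned above resolve the present-day stress tensor onto each interpreted fault surface. The sketch below shows that calculation for a single plane defined by strike and dip, using the standard definitions Ts = τ/σn and Td = (σ1 − σn)/(σ1 − σ3); the stress magnitudes, orientation, and fault geometry are illustrative assumptions (effective stresses, with pore pressure already subtracted), not the Unterhaching survey values.

```python
# Slip and dilation tendency of a fault plane under a given stress tensor.
import numpy as np

def fault_normal(strike_deg, dip_deg):
    """Upward unit normal of a plane (x = east, y = north, z = up;
    dip direction is 90 degrees clockwise from strike)."""
    s, d = np.radians(strike_deg), np.radians(dip_deg)
    return np.array([np.cos(s) * np.sin(d), -np.sin(s) * np.sin(d), np.cos(d)])

def tendencies(stress, strike_deg, dip_deg):
    """Ts = tau / sigma_n and Td = (sigma1 - sigma_n) / (sigma1 - sigma3),
    with compression taken as positive."""
    n = fault_normal(strike_deg, dip_deg)
    traction = stress @ n
    sigma_n = float(n @ traction)
    tau = float(np.sqrt(max(traction @ traction - sigma_n ** 2, 0.0)))
    evals = np.linalg.eigvalsh(stress)          # sorted ascending
    s3, s1 = evals[0], evals[-1]
    return tau / sigma_n, (s1 - sigma_n) / (s1 - s3)

# Illustrative strike-slip stress state (MPa): SHmax = 60 at N020E, Sv = 40, Shmin = 25.
az = np.radians(20.0)
shmax = np.array([np.sin(az), np.cos(az), 0.0])
shmin = np.array([np.cos(az), -np.sin(az), 0.0])
up = np.array([0.0, 0.0, 1.0])
stress = (60.0 * np.outer(shmax, shmax)
          + 25.0 * np.outer(shmin, shmin)
          + 40.0 * np.outer(up, up))

ts, td = tendencies(stress, strike_deg=155.0, dip_deg=70.0)
print(f"slip tendency Ts = {ts:.2f}, dilation tendency Td = {td:.2f}")
```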
NASA Astrophysics Data System (ADS)
Wang, Lei; Bai, Bing; Li, Xiaochun; Liu, Mingze; Wu, Haiqing; Hu, Shaobin
2016-07-01
Induced seismicity and fault reactivation associated with fluid injection and depletion were reported in hydrocarbon, geothermal, and waste fluid injection fields worldwide. Here, we establish an analytical model to assess fault reactivation surrounding a reservoir during fluid injection and extraction that considers the stress concentrations at the fault tips and the effects of fault length. In this model, induced stress analysis in a full-space under the plane strain condition is implemented based on Eshelby's theory of inclusions in terms of a homogeneous, isotropic, and poroelastic medium. The stress intensity factor concept in linear elastic fracture mechanics is adopted as an instability criterion for pre-existing faults in surrounding rocks. To characterize the fault reactivation caused by fluid injection and extraction, we define a new index, the "fault reactivation factor" η, which can be interpreted as an index of fault stability in response to fluid pressure changes per unit within a reservoir resulting from injection or extraction. The critical fluid pressure change within a reservoir is also determined by the superposition principle using the in situ stress surrounding a fault. Our parameter sensitivity analyses show that the fault reactivation tendency is strongly sensitive to fault location, fault length, fault dip angle, and Poisson's ratio of the surrounding rock. Our case study demonstrates that the proposed model focuses on the mechanical behavior of the whole fault, unlike the conventional methodologies. The proposed method can be applied to engineering cases related to injection and depletion within a reservoir owing to its efficient computational codes implementation.
Zeng, Yuehua; Shen, Zheng-Kang
2017-01-01
We develop a crustal deformation model to determine fault-slip rates for the western United States (WUS) using the Zeng and Shen (2014) method that is based on a combined inversion of Global Positioning System (GPS) velocities and geological slip-rate constraints. The model consists of six blocks with boundaries aligned along major faults in California and the Cascadia subduction zone, which are represented as buried dislocations in the Earth. Faults distributed within blocks have their geometrical structure and locking depths specified by the Uniform California Earthquake Rupture Forecast, version 3 (UCERF3) and the 2008 U.S. Geological Survey National Seismic Hazard Map Project model. Faults slip beneath a predefined locking depth, except for a few segments where shallow creep is allowed. The slip rates are estimated using a least-squares inversion. The model resolution analysis shows that the resulting model is influenced heavily by geologic input, which fits the UCERF3 geologic bounds on California B faults and ±one-half of the geologic slip rates for most other WUS faults. The modeled slip rates for the WUS faults are consistent with the observed GPS velocity field. Our fit to these velocities is measured in terms of a normalized chi-square, which is 6.5. This updated model fits the data better than most other geodetic-based inversion models. Major discrepancies between well-resolved GPS inversion rates and geologic-consensus rates occur along some of the northern California A faults, the Mojave to San Bernardino segments of the San Andreas fault, the western Garlock fault, the southern segment of the Wasatch fault, and other faults. Off-fault strain-rate distributions are consistent with regional tectonics, with a total off-fault moment rate of 7.2×10^18 N·m/yr for California and 8.5×10^18 N·m/yr for the WUS outside California, respectively.
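The combined inversion described above can be viewed as one weighted least-squares system in which GPS velocity equations and geologic slip-rate constraints are stacked together. The sketch below shows only that idea; the Green's functions, data, and uncertainties are synthetic placeholders, not the actual WUS block model or its inputs.

```python
# Joint least-squares inversion sketch: GPS velocities plus geologic slip-rate priors.
import numpy as np

rng = np.random.default_rng(3)
n_faults, n_gps = 5, 40

G = rng.normal(0, 0.1, (n_gps, n_faults))              # toy elastic Green's functions
true_rates = np.array([30.0, 5.0, 2.0, 10.0, 1.0])     # mm/yr (synthetic truth)
sigma_gps = 1.0                                         # mm/yr
v_gps = G @ true_rates + rng.normal(0, sigma_gps, n_gps)

geologic_rates = np.array([28.0, 6.0, 2.5, 8.0, 1.5])  # mm/yr (assumed priors)
sigma_geo = np.array([3.0, 1.0, 0.5, 2.0, 0.5])

# Stack GPS equations and geologic-prior equations, each row weighted by 1/sigma.
A = np.vstack([G / sigma_gps, np.eye(n_faults) / sigma_geo[:, None]])
d = np.concatenate([v_gps / sigma_gps, geologic_rates / sigma_geo])

slip_rates, *_ = np.linalg.lstsq(A, d, rcond=None)

residuals = (v_gps - G @ slip_rates) / sigma_gps
chi2_norm = float(residuals @ residuals) / n_gps        # normalized chi-square
print("estimated slip rates (mm/yr):", np.round(slip_rates, 1))
print("normalized chi-square:", round(chi2_norm, 2))
```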
Spectral element modelling of fault-plane reflections arising from fluid pressure distributions
Haney, M.; Snieder, R.; Ampuero, J.-P.; Hofmann, R.
2007-01-01
The presence of fault-plane reflections in seismic images, besides indicating the locations of faults, offers a possible source of information on the properties of these poorly understood zones. To better understand the physical mechanism giving rise to fault-plane reflections in compacting sedimentary basins, we numerically model the full elastic wavefield via the spectral element method (SEM) for several different fault models. Using well log data from the South Eugene Island field, offshore Louisiana, we derive empirical relationships between the elastic parameters (e.g. P-wave velocity and density) and the effective-stress along both normal compaction and unloading paths. These empirical relationships guide the numerical modelling and allow the investigation of how differences in fluid pressure modify the elastic wavefield. We choose to simulate the elastic wave equation via SEM since irregular model geometries can be accommodated and slip boundary conditions at an interface, such as a fault or fracture, are implemented naturally. The method we employ for including a slip interface retains the desirable qualities of SEM in that it is explicit in time and, therefore, does not require the inversion of a large matrix. We perform a complete numerical study by forward modelling seismic shot gathers over a faulted earth model using SEM followed by seismic processing of the simulated data. With this procedure, we construct post-stack time-migrated images of the kind that are routinely interpreted in the seismic exploration industry. We dip filter the seismic images to highlight the fault-plane reflections prior to making amplitude maps along the fault plane. With these amplitude maps, we compare the reflectivity from the different fault models to diagnose which physical mechanism contributes most to observed fault reflectivity. To lend physical meaning to the properties of a locally weak fault zone characterized as a slip interface, we propose an equivalent-layer model under the assumption of weak scattering. This allows us to use the empirical relationships between density, velocity and effective stress from the South Eugene Island field to relate a slip interface to an amount of excess pore-pressure in a fault zone. © 2007 The Authors. Journal compilation © 2007 RAS.
Modeling and Fault Simulation of Propellant Filling System
NASA Astrophysics Data System (ADS)
Jiang, Yunchun; Liu, Weidong; Hou, Xiaobo
2012-05-01
The propellant filling system is one of the key ground facilities at launch sites for rockets that use liquid propellant. There is an urgent need to ensure and improve its reliability and safety, and Failure Mode and Effects Analysis (FMEA) is a good approach to meeting that need. Driven by the demand for more fault information for FMEA, and because actual propellant filling is expensive, this paper studies the behavior of the propellant filling system under fault conditions through simulation in AMESim. First, based on an analysis of its structure and function, the filling system was decomposed into modules, the mathematical model of each module was formulated, and the whole system was then modeled in AMESim. Second, a general method for injecting faults into a dynamic system model was proposed; as an example, two typical faults, leakage and blockage, were injected into the filling system model, yielding two fault models in AMESim. Fault simulations were then carried out and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model can effectively simulate the two faults and can be used to guide maintenance and improvement of the filling system.
Timing of late Holocene surface rupture of the Wairau Fault, Marlborough, New Zealand
Zachariasen, J.; Berryman, K.; Langridge, Rob; Prentice, C.; Rymer, M.; Stirling, M.; Villamor, P.
2006-01-01
Three trenches excavated across the central portion of the right-lateral strike-slip Wairau Fault in South Island, New Zealand, exposed a complex set of fault strands that have displaced a sequence of late Holocene alluvial and colluvial deposits. Abundant charcoal fragments provide age control for various stratigraphic horizons dating back to c. 5610 yr ago. Faulting relations from the Wadsworth trench show that the most recent surface rupture event occurred at least 1290 yr and at most 2740 yr ago. Drowned trees in landslide-dammed Lake Chalice, in combination with charcoal from the base of an unfaulted colluvial wedge at Wadsworth trench, suggest a narrower time bracket for this event of 1811-2301 cal. yr BP. The penultimate faulting event occurred between c. 2370 and 3380 yr, and possibly near 2680 ± 60 cal. yr BP, when data from both the Wadsworth and Dillon trenches are combined. Two older events have been recognised from Dillon trench but remain poorly dated. A probable elapsed time of at least 1811 yr since the last surface rupture, and an average slip rate estimate for the Wairau Fault of 3-5 mm/yr, suggests that at least 5.4 m and up to 11.5 m of elastic shear strain has accumulated since the last rupture. This is near to or greater than the single-event displacement estimates of 5-7 m. The average recurrence interval for surface rupture of the fault determined from the trench data is 1150-1400 yr. Although the uncertainties in the timing of faulting events and variability in inter-event times remain high, the time elapsed since the last event is in the order of 1-2 times the average recurrence interval, implying that the Wairau Fault is near the end of its interseismic period. © The Royal Society of New Zealand 2006.
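The slip-deficit range quoted above follows directly from multiplying the bounding slip-rate and elapsed-time estimates:

```latex
\begin{align*}
d_{\min} &= \dot{s}_{\min}\,\Delta t_{\min} = 3~\mathrm{mm/yr} \times 1811~\mathrm{yr} \approx 5.4~\mathrm{m},\\
d_{\max} &= \dot{s}_{\max}\,\Delta t_{\max} = 5~\mathrm{mm/yr} \times 2301~\mathrm{yr} \approx 11.5~\mathrm{m}.
\end{align*}
```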
NASA Astrophysics Data System (ADS)
Dura-Gomez, I.; Addison, A.; Knapp, C. C.; Talwani, P.; Chapman, A.
2005-12-01
During the 1886 Charleston earthquake, two parallel tabby walls of Fort Dorchester broke left-laterally, and a strike of ~N25°W was inferred for the causative Sawmill Branch fault. To better define this fault, which does not have any surface expression, we planned to cut trenches across it. However, as Fort Dorchester is a protected archeological site, we were required to locate the fault accurately away from the fort, before permission could be obtained to cut short trenches. The present GPR investigations were planned as a preliminary step to determine locations for trenching. A pulseEKKO 100 GPR was used to collect data along eight profiles (varying in length from 10 m to 30 m) that were run across the projected strike of the fault, and one 50 m long profile that was run parallel to it. The locations of the profiles were obtained using a total station. To capture the signature of the fault, sixteen common-offset (COS) lines were acquired by using different antennas (50, 100 and 200 MHz) and stacking 64 times to increase the signal-to-noise ratio. The locations of trees and stumps were recorded. In addition, two common-midpoint (CMP) tests were carried out, and gave an average velocity of about 0.097 m/ns. Processing included the subtraction of the low frequency "wow" on the trace (dewow), automatic gain control (AGC) and the application of bandpass filters. The signals using the 50 MHz, 100 MHz and 200 MHz antennas were found to penetrate up to about 30 meters, 20 meters and 12 meters respectively. Vertically offset reflectors and disruptions of the electrical signal were used to infer the location of the fault(s). Comparisons of the locations of these disruptions on various lines were used to infer the presence of a N30°W fault zone. We plan to confirm these locations by cutting shallow trenches.
Stollhofen, Harald; Stanistreet, Ian G
2012-08-01
Normal faults displacing Upper Bed I and Lower Bed II strata of the Plio-Pleistocene Lake Olduvai were studied on the basis of facies and thickness changes as well as diversion of transport directions across them in order to establish criteria for their synsedimentary activity. Decompacted differential thicknesses across faults were then used to calculate average fault slip rates of 0.05-0.47 mm/yr for the Tuff IE/IF interval (Upper Bed I) and 0.01-0.13 mm/yr for the Tuff IF/IIA section (Lower Bed II). Considering fault recurrence intervals of ~1000 years, fault scarp heights potentially achieved average values of 0.05-0.47 m and a maximum value of 5.4 m during Upper Bed I, which dropped to average values of 0.01-0.13 m and a localized maximum of 0.72 m during Lower Bed II deposition. Synsedimentary faults were of importance to the form and paleoecology of landscapes utilized by early hominins, most traceably and provably Homo habilis as illustrated by the recurrent density and compositional pattern of Oldowan stone artifact assemblage variation across them. Two potential relationship factors are: (1) fault scarp topographies controlled sediment distribution, surface, and subsurface hydrology, and thus vegetation, so that a resulting mosaic of microenvironments and paleoecologies provided a variety of opportunities for omnivorous hominins; and (2) they ensured that the most voluminous and violent pyroclastic flows from the Mt. Olmoti volcano were dammed and conduited away from the Olduvai Basin depocenter, when otherwise a single or set of ignimbrite flows might have filled and devastated the topography that contained the central lake body. In addition, hydraulically active faults may have conduited groundwater, supporting freshwater springs and wetlands and favoring growth of trees. Copyright © 2011 Elsevier Ltd. All rights reserved.
The use of automatic programming techniques for fault tolerant computing systems
NASA Technical Reports Server (NTRS)
Wild, C.
1985-01-01
It is conjectured that the production of software for ultra-reliable computing systems such as required by Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection as well as the automatic generation of assertions and test cases from abstract data type specifications is outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.
Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil
2010-01-01
We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on the provided example scenarios, and discuss the issues faced and lessons learned from implementing the approach.
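At its core, qualitative event-based fault isolation compares the signs of observed measurement deviations against the signs each fault candidate predicts, discarding inconsistent candidates as deviation events arrive. The sketch below illustrates that matching step only; the fault names and signature table are illustrative placeholders, not the competition testbed's models.

```python
# Qualitative fault isolation by matching observed deviation signs to fault signatures.
# "+" = measurement above prediction, "-" = below, "0" = no deviation expected.
fault_signatures = {
    "battery_degraded":    {"bus_voltage": "-", "load_current": "0", "temperature": "+"},
    "relay_stuck_open":    {"bus_voltage": "0", "load_current": "-", "temperature": "0"},
    "sensor_bias_voltage": {"bus_voltage": "+", "load_current": "0", "temperature": "0"},
}

def isolate(observed_deviations, signatures):
    """Keep only faults whose predicted signs match every observed deviation."""
    candidates = set(signatures)
    for measurement, sign in observed_deviations:
        candidates = {f for f in candidates
                      if signatures[f].get(measurement, "0") == sign}
    return candidates

# Deviations arrive as an ordered sequence of (measurement, sign) events.
events = [("bus_voltage", "-"), ("temperature", "+")]
print(isolate(events, fault_signatures))   # -> {'battery_degraded'}
```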
The mechanics of fault-bend folding and tear-fault systems in the Niger Delta
NASA Astrophysics Data System (ADS)
Benesh, Nathan Philip
This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. Using 3D seismic reflection data and new map-based structural restoration techniques, we find that the tear faults have distinct displacement patterns that distinguish them from conventional strike-slip faults and reflect their roles in accommodating displacement gradients within the fold-and-thrust belt.
Unraveling the dynamics of magmatic CO2 degassing at Mammoth Mountain, California
NASA Astrophysics Data System (ADS)
Peiffer, Loïc; Wanner, Christoph; Lewicki, Jennifer L.
2018-02-01
The accumulation of magmatic CO2 beneath low-permeability barriers may lead to the formation of CO2-rich gas reservoirs within volcanic systems. Such accumulation is often evidenced by high surface CO2 emissions that fluctuate over time. The temporal variability in surface degassing is believed in part to reflect a complex interplay between deep magmatic degassing and the permeability of degassing pathways. A better understanding of the dynamics of CO2 degassing is required to improve monitoring and hazards mitigation in these systems. Owing to the availability of long-term records of CO2 emission rates and seismicity, Mammoth Mountain in California constitutes an ideal site towards such predictive understanding. Mammoth Mountain is characterized by intense soil CO2 degassing (up to ∼1000 t d⁻¹) and tree kill areas that resulted from leakage of CO2 from a CO2-rich gas reservoir located in the upper ∼4 km. The release of CO2-rich fluids from deeper basaltic intrusions towards the reservoir induces seismicity and potentially reactivates faults connecting the reservoir to the surface. While this conceptual model is well-accepted, there is still a debate whether temporally variable surface CO2 fluxes directly reflect degassing of intrusions or variations in fault permeability. Here, we report the first large-scale numerical model of fluid and heat transport for Mammoth Mountain. We discuss processes (i) leading to the initial formation of the CO2-rich gas reservoir prior to the occurrence of high surface CO2 degassing rates and (ii) controlling current CO2 degassing at the surface. Although the modeling settings are site-specific, the key mechanisms discussed in this study are likely at play at other volcanic systems hosting CO2-rich gas reservoirs. In particular, our model results illustrate the role of convection in stripping a CO2-rich gas phase from a rising hydrothermal fluid and leading to an accumulation of a large mass of CO2 (∼10⁷-10⁸ t) in a shallow gas reservoir. Moreover, we show that both short-lived (months to years) and long-lived (hundreds of years) events of magmatic fluid injection can lead to critical pressures within the reservoir and potentially trigger fault reactivation. Our sensitivity analysis suggests that observed temporal fluctuations in surface degassing are only indirectly controlled by variations in magmatic degassing and are mainly the result of temporally variable fault permeability. Finally, we suggest that long-term CO2 emission monitoring, seismic tomography and coupled thermal-hydraulic-mechanical modeling are important for CO2-related hazard mitigation.
AGSM Functional Fault Models for Fault Isolation Project
NASA Technical Reports Server (NTRS)
Harp, Janicce Leshay
2014-01-01
This project implements functional fault models to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.
Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.; Perkins, D.
2009-01-01
We produce probabilistic seismic-hazard assessments for the central Apennines, Italy, using time-dependent models that are characterized using a Brownian passage time recurrence model. Using aperiodicity parameters (α) of 0.3, 0.5, and 0.7, we examine the sensitivity of the probabilistic ground motion and its deaggregation to these parameters. For the seismic source model we incorporate both smoothed historical seismicity over the area and geological information on faults. We use the maximum magnitude model for the fault sources together with a uniform probability of rupture along the fault (floating fault model) to model fictitious faults to account for earthquakes that cannot be correlated with known geologic structural segmentation.
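For readers unfamiliar with the Brownian passage time (BPT) renewal model referenced above, the sketch below evaluates its (inverse-Gaussian) density and a conditional rupture probability by crude numerical integration. The mean recurrence, elapsed time, and forecast window are illustrative assumptions; only the aperiodicity values 0.3, 0.5, and 0.7 come from the abstract.

```python
import math

def bpt_pdf(t, mu, alpha):
    """BPT (inverse-Gaussian) probability density of recurrence time t."""
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
           math.exp(-(t - mu)**2 / (2.0 * alpha**2 * mu * t))

def conditional_prob(t_elapsed, dt, mu, alpha, n=2000, m=4000):
    """P(event in [t_elapsed, t_elapsed+dt] | no event up to t_elapsed)."""
    step = dt / n
    num = sum(bpt_pdf(t_elapsed + (i + 0.5) * step, mu, alpha) for i in range(n)) * step
    s_step = t_elapsed / m   # coarse integration of the density from ~0 to t_elapsed
    lived = sum(bpt_pdf((i + 0.5) * s_step, mu, alpha) for i in range(m)) * s_step
    return num / max(1.0 - lived, 1e-12)

for a in (0.3, 0.5, 0.7):  # the aperiodicities explored in the study
    print(a, round(conditional_prob(600.0, 50.0, mu=700.0, alpha=a), 4))
```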
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Miller, S. A.
2001-12-01
We present preliminary results from a 3-dimensional fault interaction model, with the fault system specified by the geometry and tectonics of the San Andreas Fault (SAF) system. We use the forward model for earthquake generation on interacting faults of Fitzenz and Miller [2001], which incorporates the analytical solutions of Okada [85, 92], GPS-constrained tectonic loading, creep compaction and frictional dilatancy [Sleep and Blanpied, 1994; Sleep, 1995], and undrained poro-elasticity. The model fault system is centered at the Big Bend and includes three large strike-slip faults (each discretized into multiple subfaults): 1) a 300 km, right-lateral segment of the SAF to the north, 2) a 200 km-long left-lateral segment of the Garlock fault to the east, and 3) a 100 km-long right-lateral segment of the SAF to the south. In the initial configuration, three shallow-dipping faults are also included that correspond to the thrust belt sub-parallel to the SAF. Tectonic loading is decomposed into basal shear drag parallel to the plate boundary with a 35 mm yr⁻¹ plate velocity, and east-west compression approximated by a vertical dislocation surface applied at the far-field boundary, resulting in fault-normal compression rates in the model space of about 4 mm yr⁻¹. Our aim is to study the long-term seismicity characteristics, tectonic evolution, and fault interaction of this system. We find that faults overpressured through creep compaction are a necessary consequence of the tectonic loading, specifically where high normal stress acts on long straight fault segments. The optimal orientation of thrust faults is a function of the strike-slip behavior, and therefore results in a complex stress state in the elastic body. This stress state is then used to generate new fault surfaces, and preliminary results of dynamically generated faults will also be presented. Our long-term aim is to target measurable properties in or around fault zones (e.g., pore pressures, hydrofractures, seismicity catalogs, stress orientations, surface strain, triggering, etc.), which may allow inferences on the stress state of fault systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yu; Guo, Jianqiu; Goue, Ouloide
Recently, we reported on the formation of overlapping rhombus-shaped stacking faults from scratches left over by chemical mechanical polishing during high temperature annealing of a PVT-grown 4H–SiC wafer. These stacking faults are restricted to regions with high N-doped areas of the wafer. The type of these stacking faults was determined to be Shockley stacking faults by analyzing the behavior of their area contrast using synchrotron white beam X-ray topography studies. A model was proposed to explain the formation mechanism of the rhombus-shaped stacking faults based on double Shockley fault nucleation and propagation. In this paper, we have experimentally verified this model by characterizing the configuration of the bounding partials of the stacking faults on both surfaces using synchrotron topography in back reflection geometry. As predicted by the model, on both the Si and C faces, the leading partials bounding the rhombus-shaped stacking faults are 30° Si-core and the trailing partials are 30° C-core. Finally, using high resolution transmission electron microscopy, we have verified that the enclosed stacking fault is a double Shockley type.
NASA Astrophysics Data System (ADS)
Koyi, Hemin; Nilfouroushan, Faramarz; Hessami, Khaled
2015-04-01
A series of scaled analogue models are run to study the degree of coupling between basement block kinematics and cover deformation. In these models, rigid basal blocks were rotated about a vertical axis in a "bookshelf" fashion, which caused strike-slip faulting along the blocks and, to some degree, in the overlying cover units of loose sand. Three different combinations of cover and basement deformation are modeled: cover shortening prior to basement fault movement; basement fault movement prior to shortening of cover units; and simultaneous cover shortening and basement fault movement. Model results show that the effect of basement strike-slip faults depends on the timing of their reactivation during the orogenic process. Pre- and syn-orogen basement strike-slip faults have a significant impact on the structural pattern of the cover units, whereas post-orogenic basement strike-slip faults have less influence on the thickened hinterland of the overlying fold-and-thrust belt. The interaction of basement faulting and cover shortening results in the formation of rhomb features. In models with pre- and syn-orogen basement strike-slip faults, rhomb-shaped cover blocks develop as a result of shortening of the overlying cover during basement strike-slip faulting. These rhombic blocks, which resemble flower structures, differ in kinematics, genesis and structural extent. They are bounded by strike-slip faults on two opposite sides and thrusts on the other two sides. Such rhomb features are recognized in the Alborz and Zagros fold-and-thrust belts where cover units are shortened simultaneously with strike-slip faulting in the basement. Model results are also compared with geodetic results obtained from a combination of all available GPS velocities in the Zagros and Alborz FTBs. Geodetic results indicate domains of clockwise and anticlockwise rotation in these two FTBs. The typical pattern of structures and their spatial distributions are used to suggest clockwise rotation of basement blocks about vertical axes and their associated strike-slip faulting in both west-central Alborz and the southeastern part of the Zagros fold-and-thrust belt.
NASA Astrophysics Data System (ADS)
Adib, Ahmad; Afzal, Peyman; Mirzaei Ilani, Shapour; Aliyari, Farhang
2017-10-01
The aim of this study is to determine the relationship between zinc mineralization and a major fault in the Behabad area, central Iran, using the Concentration-Distance to Major Fault (C-DMF), Area of Mineralized Zone-Distance to Major Fault (AMZ-DMF), and Concentration-Area (C-A) fractal models to classify Zn deposits/mines according to their distance from the Behabad fault. Application of the C-DMF and AMZ-DMF models to Zn mineralization in the Behabad fault zone reveals that the main Zn deposits correlate well with the major fault in the area. Deposits/mines with Zn values higher than 29% and mineralized-zone areas of more than 900 m² lie within 1 km of the major fault, which shows a positive correlation between Zn mineralization and the structural zone. As a result, the AMZ-DMF and C-DMF fractal models can be utilized for the delineation and recognition of different mineralized zones in different types of magmatic and hydrothermal deposits.
NASA Astrophysics Data System (ADS)
Yang, H.; Moresi, L. N.
2017-12-01
The San Andreas fault forms a dominant component of the transform boundary between the Pacific and North American plates. The density and strength of the complex accretionary margin are very heterogeneous. Based on the density structure of the lithosphere in the SW United States, we utilize a 3D finite element thermomechanical, viscoplastic model (Underworld2) to simulate deformation in the San Andreas Fault system. The purpose of the model is to examine the role of the big bend in the existing geometry; in particular, the big bend of the fault is an initial condition in our model. We first test the strength of the fault by comparing the surface principal stresses from our numerical model with the in situ tectonic stress. The best-fit model requires an extremely weak fault (friction coefficient < 0.1). To first order, there is a significant density difference between the Great Valley block and the adjacent Mojave block. The Great Valley block is much colder and denser (by >200 kg/m3) than the surrounding blocks. In contrast, other geophysical surveys indicate that the Mojave block has lost its mafic lower crust. Our model shows strong strain localization at the boundary between the two blocks, which is an analogue for the Garlock fault. High-density lower crustal material of the Great Valley tends to underthrust beneath the Transverse Ranges near the big bend. This motion is likely to rotate the fault plane from its initial vertical orientation to dip to the southwest. For the straight section north of the big bend, the fault remains nearly vertical. The geometry of the fault plane is consistent with field observations.
NASA Technical Reports Server (NTRS)
Bird, P.; Baumgardner, J.
1984-01-01
To determine the correct fault rheology of the Transverse Ranges area of California, a new finite element to represent faults and a mantle drag element are introduced into a set of 63 simulation models of anelastic crustal strain. It is shown that a slip-rate-weakening rheology for faults is not valid in California. Assuming that mantle drag effects on the crust's base are minimal, the optimal coefficient of friction in the seismogenic portion of the fault zones is 0.4-0.6 (less than Byerlee's law, assumed to apply elsewhere). Depending on how the southern California upper mantle seismic velocity anomaly is interpreted, model results are improved or degraded. It is found that the location of the mantle plate boundary is the most important secondary parameter, and that the best model is either a low-stress model (fault friction = 0.3) or a high-stress model (fault friction = 0.85), each of which has strong mantle drag. It is concluded that at least the fastest moving faults in southern California have a low friction coefficient (approximately 0.3) because they contain low-strength hydrated clay gouges throughout the low-temperature seismogenic zone.
Model-Based Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Kumar, Aditya; Viassolo, Daniel
2008-01-01
The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. Identifying the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
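A hedged sketch of the residual-plus-multiple-hypothesis idea described above (not the MBFTC code): residuals between measured and model-predicted outputs are scored against a few hypothesized fault signatures, and the scores from two detectors are fused. Fault names, means, and noise levels are assumptions for demonstration only.

```python
# Toy multiple-hypothesis test on a single residual, plus a trivial fusion step.
HYPOTHESES = {"no_fault": 0.0, "sensor_bias": 0.8, "actuator_offset": -0.6}

def residual(measured, predicted):
    return measured - predicted

def hypothesis_scores(res, sigma=0.2):
    """Log-likelihood of the residual under each hypothesized fault mean."""
    return {h: -((res - mean) ** 2) / (2 * sigma ** 2) for h, mean in HYPOTHESES.items()}

def fuse(scores_a, scores_b):
    """Simple fusion: sum the scores from two detectors and pick the best."""
    combined = {h: scores_a[h] + scores_b[h] for h in HYPOTHESES}
    return max(combined, key=combined.get)

res = residual(measured=101.0, predicted=100.2)  # persistent +0.8 deviation
print(fuse(hypothesis_scores(res), hypothesis_scores(res)))  # -> 'sensor_bias'
```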
Modelling earthquake ruptures with dynamic off-fault damage
NASA Astrophysics Data System (ADS)
Okubo, Kurama; Bhat, Harsha S.; Klinger, Yann; Rougier, Esteban
2017-04-01
Earthquake rupture modelling has been developed for producing scenario earthquakes. This includes understanding the source mechanisms and estimating far-field ground motion given a priori constraints such as the fault geometry, the constitutive law of the medium, and the friction law operating on the fault. It is necessary to consider all of these complexities of a fault system to conduct realistic earthquake rupture modelling. In addition to the complexity of fault geometry in nature, coseismic off-fault damage, which is observed by a variety of geological and seismological methods, plays a considerable role in the resultant ground motion and its spectrum compared to a model with a simple planar fault surrounded by purely elastic media. Ideally all of these complexities should be considered in earthquake modelling. State of the art techniques developed so far, however, cannot treat all of them simultaneously due to a variety of computational restrictions. Therefore, we adopt the combined finite-discrete element method (FDEM), which can effectively deal with pre-existing complex fault geometry such as fault branches and kinks and can describe coseismic off-fault damage generated during the dynamic rupture. The advantage of FDEM is that it can handle a wide range of length scales, from metric to kilometric scale, corresponding to the off-fault damage and complex fault geometry respectively. We used the FDEM-based software tool called HOSSedu (Hybrid Optimization Software Suite - Educational Version) for the earthquake rupture modelling, which was developed by Los Alamos National Laboratory. We first conducted a cross-validation of this new methodology against other conventional numerical schemes such as the finite difference method (FDM), the spectral element method (SEM) and the boundary integral equation method (BIEM), to evaluate the accuracy with various element sizes and artificial viscous damping values. We demonstrate the capability of the FDEM tool for modelling earthquake ruptures. We then modelled earthquake ruptures allowing for coseismic off-fault damage with appropriate fracture nucleation and growth criteria. We studied the effect of different conditions such as rupture speed (sub-Rayleigh or supershear), the orientation of the initial maximum principal stress with respect to the fault, and the magnitude of the initial stress (to mimic depth). The comparison between the sub-Rayleigh and supershear cases shows that coseismic off-fault damage is enhanced in the supershear case. The orientation of the maximum principal stress also makes a significant difference: dynamic off-fault cracking is more likely to occur on the extensional side of the fault for high principal stress orientations. It is found that coseismic off-fault damage reduces the rupture speed due to the dissipation of energy by dynamic off-fault cracking generated in the vicinity of the rupture front. In terms of the ground motion amplitude spectra, it is shown that high-frequency radiation is enhanced by the coseismic off-fault damage, though it is quickly attenuated. This is caused by the intricate superposition of the radiation generated by the off-fault damage and the perturbation of the rupture speed on the main fault.
NASA Astrophysics Data System (ADS)
Zuza, Andrew V.; Yin, An
2016-05-01
Collision-induced continental deformation commonly involves complex interactions between strike-slip faulting and off-fault deformation, yet this relationship has rarely been quantified. In northern Tibet, Cenozoic deformation is expressed by the development of the > 1000-km-long east-striking left-slip Kunlun, Qinling, and Haiyuan faults. Each has a maximum slip in its central segment reaching tens to ~100 km but a much smaller slip magnitude (~<10% of the maximum slip) at its terminations. The along-strike variation of fault offsets and pervasive off-fault deformation create a strain pattern that departs from the expectations of the classic plate-like rigid-body motion and flow-like distributed deformation end-member models for continental tectonics. Here we propose a non-rigid bookshelf-fault model for the Cenozoic tectonic development of northern Tibet. Our model, quantitatively relating discrete left-slip faulting to distributed off-fault deformation during regional clockwise rotation, explains several puzzling features, including: (1) the clockwise rotation of east-striking left-slip faults against the northeast-striking left-slip Altyn Tagh fault along the northwestern margin of the Tibetan Plateau, (2) the alternating fault-parallel extension and shortening in the off-fault regions, and (3) the eastward-tapering map-view geometries of the Qimen Tagh, Qaidam, and Qilian Shan thrust belts that link with the three major left-slip faults in northern Tibet. We refer to this specific non-rigid bookshelf-fault system as a passive bookshelf-fault system because the rotating bookshelf panels are detached from the rigid bounding domains. As a consequence, the wallrock of the strike-slip faults deforms to accommodate both the clockwise rotation of the left-slip faults and the off-fault strain that arises at the fault ends. An important implication of our model is that the style and magnitude of Cenozoic deformation in northern Tibet vary considerably in the east-west direction. Thus, any single north-south cross section and its kinematic reconstruction through the region do not properly quantify the complex deformational processes of plateau formation.
Joshuva, A; Sugumaran, V
2017-03-01
Wind energy is one of the important renewable energy resources available in nature. It is one of the major resources for energy production because of its dependability, the maturity of the technology, and its relatively low cost. Wind energy is converted into electrical energy using rotating blades. Due to environmental conditions and their large structure, the blades are subjected to various vibration forces that may cause damage, leading to losses in energy production and to turbine shutdowns. The downtime can be reduced when the blades are monitored continuously using structural health condition monitoring. Fault diagnosis is treated as a pattern recognition problem consisting of three phases, namely feature extraction, feature selection, and feature classification. In this study, statistical features were extracted from vibration signals, feature selection was carried out using a J48 decision tree algorithm, and feature classification was performed using the best-first tree algorithm and the functional trees algorithm. The better-performing algorithm is suggested for fault diagnosis of wind turbine blades. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
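A minimal sketch of the three-phase pipeline above (feature extraction, selection, classification) on synthetic vibration signals. Note the assumptions: the signals are simulated, and scikit-learn's CART tree stands in for the paper's J48, best-first tree, and functional tree algorithms.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def statistical_features(signal):
    # simple statistical features extracted from a vibration signal
    return [signal.mean(), signal.std(), signal.min(), signal.max(),
            np.abs(signal).mean(), ((signal - signal.mean()) ** 3).mean()]

signals, labels = [], []
for label, bias in [("good_blade", 0.0), ("cracked_blade", 0.6)]:
    for _ in range(100):
        sig = rng.normal(bias, 1.0 + bias, size=512)  # crude synthetic fault signature
        signals.append(statistical_features(sig))
        labels.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(signals), np.array(labels), test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```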
A multiple fault rupture model of the November 13 2016, M 7.8 Kaikoura earthquake, New Zealand
NASA Astrophysics Data System (ADS)
Benites, R. A.; Francois-Holden, C.; Langridge, R. M.; Kaneko, Y.; Fry, B.; Kaiser, A. E.; Caldwell, T. G.
2017-12-01
The rupture history of the November 13 2016 MW 7.8 Kaikoura earthquake recorded by near- and intermediate-field strong-motion seismometers and two high-rate GPS stations reveals a complex cascade of rupture across multiple crustal faults. In spite of this complexity, we show that the rupture history of each fault is well approximated by a simple kinematic model with uniform slip and rupture velocity. Using 9 faults embedded in a crustal layer 19 km thick, each with a prescribed slip vector and rupture velocity, this model accurately reproduces the displacement waveforms recorded at the near-field strong-motion and GPS stations. The model includes the `Papatea Fault' with a mixed thrust and strike-slip mechanism based on in-situ geological observations, with up to 8 m of uplift observed. Although the kinematic model fits the ground motion at the nearest strong-motion station, it does not reproduce the one-sided nature of the static deformation field observed geodetically. This suggests a dislocation-based approach does not completely capture the mechanical response of the Papatea Fault. The fault system as a whole extends for approximately 150 km along the eastern side of the Marlborough fault system in the South Island of New Zealand. The total duration of the rupture was 74 seconds. The timing and location of each fault's rupture suggest fault interaction and triggering, resulting in a northward cascade of crustal ruptures. Our model does not require rupture of the underlying subduction interface to explain the data.
Three-dimensional curved grid finite-difference modelling for non-planar rupture dynamics
NASA Astrophysics Data System (ADS)
Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei
2014-11-01
In this study, we present a new method for simulating the 3-D dynamic rupture process occurring on a non-planar fault. The method is based on the curved-grid finite-difference method (CG-FDM) proposed by Zhang & Chen and Zhang et al. to simulate the propagation of seismic waves in media with arbitrarily irregular surface topography. While keeping the advantages of conventional FDM, namely computational efficiency and easy implementation, the CG-FDM is also flexible in modelling complex fault geometries by using general curvilinear grids, and is thus able to model the rupture dynamics of faults with complex geometry, such as obliquely dipping faults, non-planar faults, and faults with step-overs or branches, even where irregular topography exists. The accuracy and robustness of this new method have been validated by comparison with the previous results of Day et al. and with benchmarks for rupture dynamics simulations. Finally, two simulations of rupture dynamics with complex fault geometry, namely a non-planar fault and a fault rupturing a free surface with topography, are presented. A very interesting phenomenon was observed: topography can weaken the tendency for a supershear transition to occur when the rupture breaks out at the free surface. Undoubtedly, this new method provides an effective, or at least an alternative, tool to simulate the rupture dynamics of a complex non-planar fault, and can be applied to model the rupture dynamics of a real earthquake with complex geometry.
NASA Astrophysics Data System (ADS)
Ritz, E.; Pollard, D. D.
2011-12-01
Geological and geophysical investigations demonstrate that faults are geometrically complex structures, and that the nature and intensity of off-fault damage is spatially correlated with geometric irregularities of the slip surfaces. Geologic observations of exhumed meter-scale strike-slip faults in the Bear Creek drainage, central Sierra Nevada, CA, provide insight into the relationship between non-planar fault geometry and frictional slip at depth. We investigate natural fault geometries in an otherwise homogeneous and isotropic elastic material with a two-dimensional displacement discontinuity method (DDM). Although the DDM is a powerful tool, frictional contact problems are beyond the scope of the elementary implementation because it allows interpenetration of the crack surfaces. By incorporating a complementarity algorithm, we are able to enforce appropriate contact boundary conditions along the model faults and include variable friction and frictional strength. This tool allows us to model quasi-static slip on non-planar faults and the resulting deformation of the surrounding rock. Both field observations and numerical investigations indicate that sliding along geometrically discontinuous or irregular faults may lead to opening of the fault and the formation of new fractures, affecting permeability in the nearby rock mass and consequently impacting pore fluid pressure. Numerical simulations of natural fault geometries provide local stress fields that are correlated to the style and spatial distribution of off-fault damage. We also show how varying the friction and frictional strength along the model faults affects slip surface behavior and consequently influences the stress distributions in the adjacent material.
Advanced Ground Systems Maintenance Functional Fault Models For Fault Isolation Project
NASA Technical Reports Server (NTRS)
Perotti, Jose M. (Compiler)
2014-01-01
This project implements functional fault models (FFM) to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.
NASA Astrophysics Data System (ADS)
Pinzuti, P.; Mignan, A.; King, G. C.
2009-12-01
Mechanical stretching models have previously been proposed to explain the process of continental break-up through the example of the Asal Rift, Djibouti, one of the few places where the early stages of seafloor spreading can be observed. In these models, deformation is distributed starting at the base of a shallow seismogenic zone, in which sub-vertical normal faults are responsible for subsidence whereas cracks accommodate extension. Alternative models suggest that extension results from localized magma injection, with normal faults accommodating extension and subsidence above the maximum reach of the magma column. In these magmatic intrusion models, normal faults have dips of 45-55° and root into dikes. Using mechanical and kinematic concepts and vertical profiles of normal fault scarps from an Asal Rift campaign, where normal faults are sub-vertical at the surface, we discuss the creation and evolution of normal faults in massive fractured rocks (basalt). We suggest that the observed fault scarps correspond to sub-vertical en echelon structures and that at greater depth these scarps combine and give rise to dipping normal faults. Finally, the geometry of faulting between the Fieale volcano and Lake Asal in the Asal Rift can be simply related to the depth of diking, which in turn can be related to magma supply. This new view supports the magmatic intrusion model of the early stages of continental break-up.
NASA Astrophysics Data System (ADS)
Tsai, M. C.; Hu, J. C.; Yang, Y. H.; Hashimoto, M.; Aurelio, M.; Su, Z.; Escudero, J. A.
2017-12-01
Multi-sight and high-spatial-resolution interferometric SAR data enhance our ability to map detailed coseismic deformation, estimate fault rupture models, and infer the Coulomb stress change associated with a large earthquake. Here, we use multi-sight coseismic interferograms acquired by the ALOS-2 and Sentinel-1A satellites to estimate the fault geometry and slip distribution on the fault plane of the 2017 Mw 6.5 Ormoc earthquake in Leyte island of the Philippines. The best-fitting model predicts that the coseismic rupture occurred along a fault plane with a strike of 325.8º and a dip of 78.5ºE. This model infers that the rupture of the 2017 Ormoc earthquake was dominated by left-lateral slip with minor dip-slip motion, consistent with the left-lateral strike-slip Philippine fault system. The fault tip propagated to the ground surface, and the predicted coseismic slip at the surface is about 1 m, located 6.5 km northeast of Kananga city. Significant slip is concentrated on fault patches at depths of 0-8 km over an along-strike distance of 20 km, with slip magnitudes varying from 0.3 m to 2.3 m along the southwest segment of this seismogenic fault. Two minor coseismic fault patches are predicted underneath the Tongonan geothermal field and the creeping segment of the northwest portion of this seismogenic fault. This implies that the high geothermal gradient underneath the Tongonan geothermal field could prevent heated rock mass from coseismic failure. The seismic moment release of our preferred fault model is 7.78×10¹⁸ Nm, equivalent to an Mw 6.6 event. The Coulomb failure stress (CFS) calculated from the preferred fault model predicts a significant positive CFS change on the northwest segment of the Philippine fault in Leyte Island, which has a coseismic slip deficit and is devoid of aftershocks. Consequently, this segment should be considered to have an increased risk of future seismic hazard.
Distributed deformation and block rotation in 3D
NASA Technical Reports Server (NTRS)
Scotti, Oona; Nur, Amos; Estevez, Raul
1990-01-01
The authors address how block rotation and complex distributed deformation in the Earth's shallow crust may be explained within a stationary regional stress field. Distributed deformation is characterized by domains of sub-parallel fault-bounded blocks. In response to the contemporaneous activity of neighboring domains, some domains rotate, as suggested by both structural and paleomagnetic evidence. Rotations within domains are achieved through the contemporaneous slip and rotation of the faults and of the blocks they bound. Thus, in regions of distributed deformation, faults must remain active in spite of their poor orientation in the stress field. The authors developed a model that tracks the orientation of blocks and their bounding faults during rotation in a 3D stress field. In the model, the effective stress magnitudes of the principal stresses (σ1, σ2, and σ3) are controlled by the orientation of fault sets in each domain. Therefore, adjacent fault sets with differing orientations may be active and may display differing faulting styles, and a given set of faults may change its style of motion as it rotates within a stationary stress regime. The style of faulting predicted by the model depends on a dimensionless parameter φ = (σ2 - σ3)/(σ1 - σ3). Thus, the authors present a model for complex distributed deformation and complex offset history requiring neither geographical nor temporal changes in the stress regime. They apply the model to the Western Transverse Range domain of southern California. There, it is mechanically feasible for blocks and faults to have experienced up to 75 degrees of clockwise rotation in a φ = 0.1 strike-slip stress regime. The results of the model suggest that this domain may first have accommodated deformation along preexisting NNE-SSW faults, reactivated as normal faults. After rotation, these same faults became strike-slip in nature.
NASA Astrophysics Data System (ADS)
Abe, Steffen; Krieger, Lars; Deckert, Hagen
2017-04-01
The changes of fluid pressures related to the injection of fluids into the deep underground, for example during geothermal energy production, can potentially reactivate faults and thus cause induced seismic events. Therefore, an important aspect in the planning and operation of such projects, in particular in densely populated regions such as the Upper Rhine Graben in Germany, is the estimation and mitigation of the induced seismic risk. The occurrence of induced seismicity depends on a combination of hydraulic properties of the underground, mechanical and geometric parameters of the fault, and the fluid injection regime. In this study we are therefore employing a numerical model to investigate the impact of fluid pressure changes on the dynamics of the faults and the resulting seismicity. The approach combines a model of the fluid flow around a geothermal well based on a 3D finite difference discretisation of the Darcy-equation with a 2D block-slider model of a fault. The models are coupled so that the evolving pore pressure at the relevant locations of the hydraulic model is taken into account in the calculation of the stick-slip dynamics of the fault model. Our modelling approach uses two subsequent modelling steps. Initially, the fault model is run by applying a fixed deformation rate for a given duration and without the influence of the hydraulic model in order to generate the background event statistics. Initial tests have shown that the response of the fault to hydraulic loading depends on the timing of the fluid injection relative to the seismic cycle of the fault. Therefore, multiple snapshots of the fault's stress- and displacement state are generated from the fault model. In a second step, these snapshots are then used as initial conditions in a set of coupled hydro-mechanical model runs including the effects of the fluid injection. This set of models is then compared with the background event statistics to evaluate the change in the probability of seismic events. The event data such as location, magnitude, and source characteristics can be used as input for numerical wave propagation models. This allows the translation of seismic event statistics generated by the model into ground shaking probabilities.
China Report, Science and Technology, No. 197.
1983-05-13
...eucalyptus trees both make excellent lumber for ship building. Bark from the casuarina equisetifolia contains 13-18 percent tannic acid which can be... ...depression zone, Hainan Island-easterly extension of Hainan Island-Dongsha continental slope uplift zone, northern Xisha Islands faulted trough and Zhongsha...
1981-01-01
Inductive methods are applied to determine what system states (usually failed states) are possible; deductive methods are applied to determine how a given system state... Similar considerations apply to the single failures of CVA, BVB and CVB, and this important additional information has been displayed in the principal... way. The point "maximum tolerable failure" corresponds to the survival point of the company building the aircraft. Above that point, only intolerable...
Simultaneous Sensor and Process Fault Diagnostics for Propellant Feed System
NASA Technical Reports Server (NTRS)
Cao, J.; Kwan, C.; Figueroa, F.; Xu, R.
2006-01-01
The main objective of this research is to extract fault features from sensor faults and process faults by using advanced fault detection and isolation (FDI) algorithms. A tank system that shares some characteristics with a NASA testbed at Stennis Space Center was used to verify the proposed algorithms. First, a generic tank system was modeled. Second, a mathematical model suitable for FDI was derived for the tank system. Third, a new and general FDI procedure was designed to distinguish process faults from sensor faults. Extensive simulations clearly demonstrated the advantages of the new design.
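The sketch below illustrates, under assumed tank parameters, how two residuals can separate a process fault (a leak, which violates the mass balance) from a sensor fault (a bias, which produces a jump in the reading). It is a toy analogue of a generic tank system, not the NASA testbed model.

```python
A, DT = 1.5, 1.0          # tank cross-section (m^2) and time step (s), assumed

def simulate(steps, leak=0.0, sensor_bias=0.0, q_in=0.02, q_out=0.015):
    h_true, readings = 1.0, []
    for k in range(steps):
        h_true += DT * (q_in - q_out - leak) / A
        bias = sensor_bias if k >= steps // 2 else 0.0   # bias switches on mid-run
        readings.append(h_true + bias)
    return readings

def diagnose(readings, q_in=0.02, q_out=0.015, jump_tol=0.01, rate_tol=1e-3):
    rate_model = (q_in - q_out) / A
    # residual 1: largest step-to-step jump beyond what the dynamics allow
    r_sensor = max(abs(readings[i + 1] - readings[i] - rate_model * DT)
                   for i in range(len(readings) - 1))
    # residual 2: average measured fill rate vs. model-predicted fill rate
    r_process = (readings[-1] - readings[0]) / (DT * (len(readings) - 1)) - rate_model
    if r_sensor > jump_tol:
        return "sensor fault"
    if abs(r_process) > rate_tol:
        return "process fault (e.g. leak)"
    return "no fault"

print(diagnose(simulate(50)))                    # -> no fault
print(diagnose(simulate(50, leak=0.01)))         # -> process fault (e.g. leak)
print(diagnose(simulate(50, sensor_bias=0.05)))  # -> sensor fault
```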
Diagnosing a Strong-Fault Model by Conflict and Consistency
Zhou, Gan; Feng, Wenquan
2018-01-01
The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded by Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently propose the best candidates based on the reasoning results until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain—the heat control unit of a spacecraft—where the proposed methods perform significantly better than best-first and conflict-directed A* search methods. PMID:29596302
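To make the strong-fault-model idea concrete, the toy sketch below enumerates mode assignments (including explicit fault modes with their own behaviors) that are consistent with an observation for a two-inverter chain of our own invention. The paper's LTMS/CNF machinery with conflict- and consistency-guided search is far more efficient than this brute force.

```python
from itertools import product

# Each component has an 'ok' behaviour and explicit fault modes with behaviours.
MODES = {
    "inv1": {"ok": lambda x: 1 - x, "stuck_at_1": lambda x: 1, "stuck_at_0": lambda x: 0},
    "inv2": {"ok": lambda x: 1 - x, "stuck_at_1": lambda x: 1, "stuck_at_0": lambda x: 0},
}

def diagnoses(inp, observed_out):
    """All mode assignments whose predicted output matches the observation."""
    result = []
    for m1, m2 in product(MODES["inv1"], MODES["inv2"]):
        predicted = MODES["inv2"][m2](MODES["inv1"][m1](inp))
        if predicted == observed_out:
            result.append({"inv1": m1, "inv2": m2})
    return result

# An inverter chain on input 0 should output 0; observing 1 implicates fault modes.
for d in diagnoses(inp=0, observed_out=1):
    print(d)
```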
M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model
Parsons, Thomas E.
2006-01-01
Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.
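The renewal logic above reduces, to first order, to dividing a stress drop by a stressing rate. A back-of-the-envelope sketch with illustrative numbers (not values from the paper):

```python
# Proxy recurrence: time needed to recover a coseismic stress drop at the
# tectonic stressing rate. All numbers below are assumptions for illustration.
def recovery_time_yr(stress_drop_mpa, stressing_rate_mpa_per_yr):
    return stress_drop_mpa / stressing_rate_mpa_per_yr

# e.g. a 3 MPa stress drop recovered at 0.001-0.01 MPa/yr
for rate in (0.001, 0.003, 0.01):
    print(f"rate {rate} MPa/yr -> ~{recovery_time_yr(3.0, rate):.0f} yr")
```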
A mechanical model of the San Andreas fault and SAFOD Pilot Hole stress measurements
Chery, J.; Zoback, M.D.; Hickman, S.
2004-01-01
Stress measurements made in the SAFOD pilot hole provide an opportunity to study the relation between crustal stress outside the fault zone and the stress state within it using an integrated mechanical model of a transform fault loaded in transpression. The results of this modeling indicate that only a fault model in which the effective friction is very low (<0.1) through the seismogenic thickness of the crust is capable of matching stress measurements made in both the far field and in the SAFOD pilot hole. The stress rotation measured with depth in the SAFOD pilot hole (~28°) appears to be a typical feature of a weak fault embedded in a strong crust and a weak upper mantle with laterally variable heat flow, although our best model predicts less rotation (15°) than observed. Stress magnitudes predicted by our model within the fault zone indicate low shear stress on planes parallel to the fault but a very anomalous mean stress, approximately twice the lithostatic stress. Copyright 2004 by the American Geophysical Union.
Predeployment validation of fault-tolerant systems through software-implemented fault insertion
NASA Technical Reports Server (NTRS)
Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.
1989-01-01
The fault-injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within such methodologies is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestation of faults, to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving insertion of faults. There is a common system interface which allows ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
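A minimal sketch of software-implemented fault insertion in the FIAT spirit: a fault is seeded by flipping a bit in a memory image, and the error is the first read that returns corrupted data. The names, workload loop, and parameters are our own assumptions, not the FIAT implementation.

```python
import random

def seed_fault(memory, addr, bit):
    memory[addr] ^= (1 << bit)            # inject a single-bit fault

def run_workload(memory, golden, reads):
    """Return the read index at which the fault first manifests as an error."""
    for i in range(reads):
        addr = random.randrange(len(memory))
        if memory[addr] != golden[addr]:  # detection: compare against reference
            return i
    return None                           # fault stayed latent for this run

random.seed(1)
golden = bytes(range(256)) * 4
mem = bytearray(golden)
seed_fault(mem, addr=100, bit=3)
print("error detected at read:", run_workload(mem, golden, reads=10_000))
```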
NASA Technical Reports Server (NTRS)
Throop, David R.
1992-01-01
The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.
Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara
2010-01-01
The purpose of this paper is to present the model development process used to create a Functional Fault Model (FFM) of a liquid hydrogen (LH2) system that will be used for real-time fault isolation in a Fault Detection, Isolation and Recovery (FDIR) system. The paper explains the steps in the model development process and the data products required at each step, including examples of how the steps were performed for the LH2 system. It also shows the relationship between the FDIR requirements and the steps in the model development process. The paper concludes with a description of a demonstration of the LH2 model developed using the process and future steps for integrating the model in a live operational environment.
NASA Astrophysics Data System (ADS)
Marchandon, Mathilde; Vergnolle, Mathilde; Sudhaus, Henriette; Cavalié, Olivier
2018-02-01
In this study, we reestimate the source model of the 1997 Mw 7.2 Zirkuh earthquake (northeastern Iran) by jointly optimizing intermediate-field Interferometric Synthetic Aperture Radar (InSAR) data and near-field optical correlation data using a two-step fault modeling procedure. First, we estimate the geometry of the multisegmented Abiz fault using a genetic algorithm. Then, we discretize the fault segments into subfaults and invert the data to image the slip distribution on the fault. Our joint-data model, although similar to the InSAR-based model to first order, highlights differences in the fault dip and slip distribution. Our preferred model dips ˜80° west in the northern part of the fault and ˜75° east in the southern part, and shows three disconnected high-slip zones separated by low-slip zones. The low-slip zones are located where the Abiz fault shows geometric complexities and where the aftershocks are located. We interpret this rough slip distribution as three asperities separated by geometrical barriers that impede rupture propagation. Finally, no shallow slip deficit is found for the overall rupture, except on the central segment where it could be due to off-fault deformation in Quaternary deposits.
How does damage affect rupture propagation across a fault stepover?
NASA Astrophysics Data System (ADS)
Cooke, M. L.; Savage, H. M.
2011-12-01
We investigate the potential for fault damage to influence earthquake rupture at fault step-overs using a mechanical numerical model that explicitly includes the generation of cracks around faults. We compare the off-fault fracture patterns and slip profiles generated along faults with a variety of frictional slip-weakening distances and step-over geometries. Models with greater damage facilitate the transfer of slip to the second fault. Increasing the separation and decreasing the overlap distance reduce the transfer of slip across the step-over. This is consistent with observations of ruptures stopping at step-over separations greater than 4 km (Wesnousky, 2006). In cases of slip transfer, rupture is often passed to the second fault before the damage zone cracks of the first fault reach the second fault. This implies that stresses from the damage fracture tips are transmitted elastically to the second fault to trigger the onset of slip along it. Consequently, the growth of damage facilitates transfer of rupture from one fault to another across the step-over. In addition, the rupture propagates along the damage-producing fault faster than along a rougher fault that does not produce damage. While this result seems counter to our understanding that damage slows rupture propagation, which is documented in our models with pre-existing damage, these results suggest an additional process: slip along the newly created damage may unclamp portions of the fault ahead of the rupture and promote faster rupture. We simulate the M7.1 Hector Mine earthquake and compare the generated fracture patterns to maps of surface damage. Because, along with the detailed damage pattern, we also know the stress drop during the earthquake, we may begin to constrain parameters such as the slip-weakening distance along the portions of the faults that ruptured in the Hector Mine earthquake.
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
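A toy version of the Bayesian-network reasoning described above, with invented failure modes, symptoms, and probabilities (the paper's network is much richer). Posteriors after one and two pieces of evidence hint at the multistep evidence-selection idea.

```python
PRIOR = {"normal": 0.90, "overheating": 0.06, "discharge": 0.04}
# P(symptom present | failure mode); values are placeholders for illustration
P_HIGH_TEMP = {"normal": 0.05, "overheating": 0.90, "discharge": 0.20}
P_ACETYLENE = {"normal": 0.02, "overheating": 0.30, "discharge": 0.85}

def posterior(high_temp, acetylene):
    unnorm = {}
    for mode, p in PRIOR.items():
        l1 = P_HIGH_TEMP[mode] if high_temp else 1 - P_HIGH_TEMP[mode]
        l2 = P_ACETYLENE[mode] if acetylene else 1 - P_ACETYLENE[mode]
        unnorm[mode] = p * l1 * l2
    z = sum(unnorm.values())
    return {m: round(v / z, 3) for m, v in unnorm.items()}

# Multistep evidence in miniature: observe temperature first, then decide
# whether a dissolved-gas (acetylene) test is still worth performing.
print(posterior(high_temp=True, acetylene=False))
print(posterior(high_temp=True, acetylene=True))
```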
A Dynamic Integrated Fault Diagnosis Method for Power Transformers
Gao, Wensheng; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841
Morphologic dating of fault scarps using airborne laser swath mapping (ALSM) data
Hilley, G.E.; Delong, S.; Prentice, C.; Blisniuk, K.; Arrowsmith, J.R.
2010-01-01
Models of fault scarp morphology have previously been used to infer the relative ages of different fault scarps in a fault zone using labor-intensive ground surveying. We present a method for automatically extracting scarp morphologic ages from high-resolution digital topography. Scarp degradation is modeled as a diffusive mass transport process in the across-scarp direction. The second derivative of the modeled degraded fault scarp was normalized to yield the best-fitting (in a least-squares sense) scarp height at each point, and the signal-to-noise ratio identified those areas containing scarp-like topography. We applied this method to three areas along the San Andreas Fault and found correspondence between the mapped geometry of the fault and that extracted by our analysis. This suggests that the spatial distribution of scarp ages may be revealed by such an analysis, allowing the recent temporal development of a fault zone to be imaged along its length.
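The diffusion model behind morphologic scarp dating has a standard closed-form (error-function) profile for a single-event scarp; the sketch below evaluates it and inverts the diffusion age from the maximum scarp slope. Parameter values are illustrative and not from the paper.

```python
import math

def scarp_elevation(x, H, kt, far_slope=0.0):
    """Elevation of a diffusively degraded vertical scarp of height H (kt = kappa*t)."""
    return 0.5 * H * math.erf(x / (2.0 * math.sqrt(kt))) + far_slope * x

def morphologic_age(H, max_slope, far_slope=0.0):
    """Invert kt from the observed maximum (mid-scarp) slope."""
    return (H / (2.0 * math.sqrt(math.pi) * (max_slope - far_slope))) ** 2

kt = 40.0                                         # m^2, i.e. kappa*t (assumed)
H = 3.0                                           # scarp height, m (assumed)
mid_slope = H / (2.0 * math.sqrt(math.pi * kt))   # analytic slope at x = 0
print("recovered kt:", round(morphologic_age(H, mid_slope), 1))  # ~40.0
```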
A fault isolation method based on the incidence matrix of an augmented system
NASA Astrophysics Data System (ADS)
Chen, Changxiong; Chen, Liping; Ding, Jianwan; Wu, Yizhong
2018-03-01
In this paper, a new approach is proposed for isolating faults and quickly identifying the redundant sensors of a system. By introducing fault signals as additional state variables, an augmented system model is constructed from the original system model, the fault signals, and the sensor measurement equations. The structural properties of the augmented system model are provided in this paper. From the viewpoint of evaluating the fault variables, the calculating correlations of the fault variables in the system can be found, which imply the fault isolation properties of the system. Compared with previous isolation approaches, the highlights of the new approach are that it can quickly find the faults which can be isolated using exclusive residuals and, at the same time, identify the redundant sensors in the system, which is useful for the design of a diagnosis system. The simulation of a four-tank system is reported to validate the proposed method.
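A small sketch of incidence-matrix reasoning of the general kind described above: residual-to-fault sensitivities form a signature matrix, faults are isolable when their signatures differ, and a residual (sensor) is redundant for isolation if dropping it preserves distinct signatures. The matrix is an invented example, not the paper's four-tank model.

```python
import itertools

RESIDUALS = ["r_level1", "r_level2", "r_flow"]
FAULTS = ["f_pump", "f_leak", "f_sensor_level1"]
INCIDENCE = {            # 1 = residual is sensitive to the fault
    "r_level1": {"f_pump": 1, "f_leak": 1, "f_sensor_level1": 1},
    "r_level2": {"f_pump": 1, "f_leak": 1, "f_sensor_level1": 0},
    "r_flow":   {"f_pump": 1, "f_leak": 0, "f_sensor_level1": 0},
}

def signatures(rows):
    return {f: tuple(INCIDENCE[r][f] for r in rows) for f in FAULTS}

def isolable_pairs(rows):
    sig = signatures(rows)
    return [(a, b) for a, b in itertools.combinations(FAULTS, 2) if sig[a] != sig[b]]

print("isolable pairs:", isolable_pairs(RESIDUALS))
for dropped in RESIDUALS:   # which residuals (sensors) are redundant for isolation?
    rows = [r for r in RESIDUALS if r != dropped]
    ok = len(isolable_pairs(rows)) == len(isolable_pairs(RESIDUALS))
    print(f"drop {dropped}: isolation preserved = {ok}")
```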
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.
2017-01-01
Today's launch vehicles' complex electronic and avionics systems heavily utilize Field Programmable Gate Array (FPGA) integrated circuits (ICs) for their superb speed and reconfiguration capabilities. Consequently, FPGAs are prevalent ICs in communication protocols such as MIL-STD-1553B and in control signal commands such as solenoid valve actuations. This paper will identify reliability concerns and high-level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high-level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.
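As a purely illustrative roll-up of the three contributions named above (hardware, HDL/design, radiation), with placeholder failure rates in FITs rather than values from the paper or any qualification data:

```python
# Placeholder contributions in FITs (failures per 1e9 device-hours); assumptions only.
contributions_fit = {
    "hardware (physical IC failures)": 20.0,
    "HDL / design (after reliability growth)": 5.0,
    "radiation (single-event upsets, mission environment)": 40.0,
}
total_fit = sum(contributions_fit.values())
mission_hours = 10.0   # an assumed short launch-vehicle mission duration
print(f"total: {total_fit} FIT -> P(failure) over {mission_hours} h "
      f"~= {total_fit * 1e-9 * mission_hours:.2e}")
```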
Fault latency in the memory - An experimental study on VAX 11/780
NASA Technical Reports Server (NTRS)
Chillarege, Ram; Iyer, Ravishankar K.
1986-01-01
Fault latency is the time between the physical occurrence of a fault and its corruption of data, causing an error. This time is difficult to measure because neither the time of occurrence of a fault nor the exact moment an error is generated is known. This paper describes an experiment designed to accurately study fault latency in the memory subsystem. The experiment employs real memory data from a VAX 11/780 at the University of Illinois. Fault latency distributions are generated for stuck-at-0 (s-a-0) and stuck-at-1 (s-a-1) permanent fault models. Results show that the mean fault latency of an s-a-0 fault is nearly five times that of an s-a-1 fault. Large variations in fault latency are found for different regions of memory. An analysis-of-variance model that quantifies the relative influence of various workload measures on the evaluated latency is also given.
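A simplified sketch of how fault latency can be evaluated from a memory activity trace under a stuck-at fault model (the trace and fault parameters are invented, not the VAX data):

    def fault_latency(trace, fault_time, addr, bit, stuck_at):
        """trace: time-ordered iterable of (t, op, address, value) with op in {'R', 'W'}.
        Inject a stuck-at fault in one bit at fault_time, replay the trace, and return
        the elapsed time until the first read that returns corrupted data."""
        memory = {}
        for t, op, a, value in trace:
            if op == "W":
                memory[a] = value
            else:  # read
                stored = memory.get(a, 0)
                if t >= fault_time and a == addr:
                    corrupted = (stored & ~(1 << bit)) if stuck_at == 0 else (stored | (1 << bit))
                    if corrupted != stored:       # the fault actually flips the stored bit
                        return t - fault_time     # error generated: this is the latency
        return None  # fault never activated by the workload

    trace = [(0, "W", 0x10, 0b1011), (5, "R", 0x20, 0), (9, "R", 0x10, 0), (14, "R", 0x10, 0)]
    print(fault_latency(trace, fault_time=3, addr=0x10, bit=1, stuck_at=0))   # -> 6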
Preliminary Earthquake Hazard Map of Afghanistan
Boyd, Oliver S.; Mueller, Charles S.; Rukstales, Kenneth S.
2007-01-01
Earthquakes represent a serious threat to the people and institutions of Afghanistan. As part of a United States Agency for International Development (USAID) effort to assess the resource potential and seismic hazards of Afghanistan, the Seismic Hazard Mapping group of the United States Geological Survey (USGS) has prepared a series of probabilistic seismic hazard maps that help quantify the expected frequency and strength of ground shaking nationwide. To construct the maps, we do a complete hazard analysis for each of ~35,000 sites in the study area. We use a probabilistic methodology that accounts for all potential seismic sources and their rates of earthquake activity, and we incorporate modeling uncertainty by using logic trees for source and ground-motion parameters. See the Appendix for an explanation of probabilistic seismic hazard analysis and discussion of seismic risk. Afghanistan occupies a southward-projecting, relatively stable promontory of the Eurasian tectonic plate (Ambraseys and Bilham, 2003; Wheeler and others, 2005). Active plate boundaries, however, surround Afghanistan on the west, south, and east. To the west, the Arabian plate moves northward relative to Eurasia at about 3 cm/yr. The active plate boundary trends northwestward through the Zagros region of southwestern Iran. Deformation is accommodated throughout the territory of Iran; major structures include several north-south-trending, right-lateral strike-slip fault systems in the east and, farther to the north, a series of east-west-trending reverse- and strike-slip faults. This deformation apparently does not cross the border into relatively stable western Afghanistan. In the east, the Indian plate moves northward relative to Eurasia at a rate of about 4 cm/yr. A broad, transpressional plate-boundary zone extends into eastern Afghanistan, trending southwestward from the Hindu Kush in northeast Afghanistan, through Kabul, and along the Afghanistan-Pakistan border. Deformation here is expressed as a belt of major, north-northeast-trending, left-lateral strike-slip faults and abundant seismicity. The seismicity intensifies farther to the northeast and includes a prominent zone of deep earthquakes associated with northward subduction of the Indian plate beneath Eurasia that extends beneath the Hindu Kush and Pamirs Mountains. Production of the seismic hazard maps is challenging because the geological and seismological data required to produce a seismic hazard model are limited. The data that are available for this project include historical seismicity and poorly constrained slip rates on only a few of the many active faults in the country. Much of the hazard is derived from a new catalog of historical earthquakes: from 1964 to the present, with magnitude equal to or greater than about 4.5, and with depth between 0 and 250 kilometers. We also include four specific faults in the model: the Chaman fault with an assigned slip rate of 10 mm/yr, the Central Badakhshan fault with an assigned slip rate of 12 mm/yr, the Darvaz fault with an assigned slip rate of 7 mm/yr, and the Hari Rud fault with an assigned slip rate of 2 mm/yr. For these faults and for shallow seismicity less than 50 km deep, we incorporate published ground-motion estimates from tectonically active regions of western North America, Europe, and the Middle East. Ground-motion estimates for deeper seismicity are derived from data in subduction environments.
We apply estimates derived for tectonic regions where subduction is the main tectonic process for intermediate-depth seismicity between 50- and 250-km depth. Within the framework of these limitations, we have developed a preliminary probabilistic seismic-hazard assessment of Afghanistan, the type of analysis that underpins the seismic components of modern building codes in the United States. The assessment includes maps of estimated peak ground acceleration (PGA), 0.2-second spectral acceleration (SA), and 1.0-second SA.
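As an illustration of the probabilistic methodology (not the USGS implementation), the following sketch combines a source activity rate, a magnitude distribution, and a hypothetical lognormal ground-motion model into an annual rate of exceedance for one site; all coefficients and rates are placeholders.

    import math

    def gmpe_ln_pga(m, r_km):
        """Hypothetical ground-motion model: mean ln(PGA in g) and its standard deviation."""
        return -3.5 + 0.9 * m - 1.1 * math.log(r_km + 10.0), 0.6

    def annual_exceedance_rate(pga_g, rate_per_yr, mags, mag_probs, r_km):
        lam = 0.0
        for m, pm in zip(mags, mag_probs):
            mu, sigma = gmpe_ln_pga(m, r_km)
            z = (math.log(pga_g) - mu) / sigma
            p_exceed = 0.5 * math.erfc(z / math.sqrt(2.0))   # P(ln PGA > ln x), lognormal model
            lam += rate_per_yr * pm * p_exceed
        return lam

    mags = [5.0, 6.0, 7.0]
    mag_probs = [0.7, 0.25, 0.05]            # truncated Gutenberg-Richter-like weights
    lam = annual_exceedance_rate(0.2, rate_per_yr=0.05, mags=mags, mag_probs=mag_probs, r_km=30.0)
    print(f"annual rate of exceeding 0.2 g: {lam:.2e}; "
          f"P(exceedance in 50 yr) = {1 - math.exp(-lam * 50):.3f}")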
NASA Astrophysics Data System (ADS)
Hughes, A. N.; Benesh, N. P.; Alt, R. C., II; Shaw, J. H.
2011-12-01
Contractional fault-related folds form as stratigraphic layers of rock are deformed due to displacement on an underlying fault. Specifically, fault-bend folds form as rock strata are displaced over non-planar faults, and fault-propagation folds form at the tips of faults as they propagate upward through sedimentary layers. Both types of structures are commonly observed in fold and thrust belts and passive margin settings throughout the world. Fault-bend and fault-propagation folds are often seen in close proximity to each other, and kinematic analysis of some fault-related folds suggests that they have undergone a transition in structural style from fault-bend to fault-propagation folding during their deformational history. Because of the similarity in conditions in which both fault-bend and fault-propagation folds are found, the circumstances that promote the formation of one of these structural styles over the other are not immediately evident. In an effort to better understand this issue, we have investigated the role of mechanical and geometric factors in the transition between fault-bend folding and fault-propagation folding using a series of models developed with the discrete element method (DEM). The DEM models employ an aggregate of circular, frictional disks that incorporate bonding at particle contacts to represent the numerical stratigraphy. A vertical wall moving at a fixed velocity drives displacement of the hanging-wall section along a pre-defined fault ramp and detachment. We utilize this setup to study the transition between fault-bend and fault-propagation folding by varying the mechanical strength, stratigraphic layering, fault geometries, and boundary conditions of the model. In most circumstances, displacement of the hanging wall leads to the development of an emergent fold as the hanging-wall material passes across the fault bend. However, in other cases, an emergent fault propagates upward through the sedimentary section, associated with the development of a steep, narrow front-limb characteristic of fault-propagation folding. We find that the boundary conditions imposed on the far wall of the model have the strongest influence on structural style, but that other factors, such as fault dip and mechanical strengths, play secondary roles. By testing a range of values for each of the parameters, we are able to identify the range of values under which the transition occurs. Additionally, we find that the transition between fault-bend and fault-propagation folding is gradual, with structures in the transitional regime showing evidence of each structural style during a portion of their history. The primary role that boundary conditions play in determining fault-related folding style implies that the growth of natural structures may be affected by the emergence of adjacent structures or by distal variations in detachment strength. We explore these relationships using natural examples from various fold-and-thrust belts.
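The following heavily simplified 2-D DEM toy (not the authors' model) illustrates the basic ingredients named above: circular disks, breakable bonds at particle contacts, a rigid floor, and a wall moving at fixed velocity. Friction and the fault ramp geometry are omitted for brevity, and all parameters are arbitrary.

    import numpy as np

    nx, ny, radius = 8, 4, 0.5
    dx = 2.0 * radius
    pos = np.array([[i * dx + radius, j * dx + radius] for j in range(ny) for i in range(nx)], float)
    pos0 = pos.copy()
    vel = np.zeros_like(pos)
    n = len(pos)
    mass, k_contact, k_bond, damping, dt = 1.0, 1.0e4, 5.0e3, 5.0, 1.0e-4
    bond_break_strain = 0.05

    # bond every pair of initially touching particles
    bonds = [(i, j) for i in range(n) for j in range(i + 1, n)
             if np.linalg.norm(pos[i] - pos[j]) < 1.05 * dx]

    wall_x, wall_speed = 0.0, 2.0
    for step in range(2000):
        force = np.zeros_like(pos)
        # contact repulsion between overlapping disks
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[j] - pos[i]
                dist = np.linalg.norm(d)
                overlap = 2.0 * radius - dist
                if overlap > 0.0:
                    f = k_contact * overlap * d / dist
                    force[i] -= f
                    force[j] += f
        # elastic bonds, broken once stretched beyond a threshold strain
        for b in list(bonds):
            i, j = b
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            if (dist - dx) / dx > bond_break_strain:
                bonds.remove(b)
                continue
            f = k_bond * (dist - dx) * d / dist
            force[i] += f
            force[j] -= f
        # rigid floor and driving wall (penalty forces), plus viscous damping
        force[:, 1] += np.where(pos[:, 1] < radius, k_contact * (radius - pos[:, 1]), 0.0)
        force[:, 0] += np.where(pos[:, 0] < wall_x + radius, k_contact * (wall_x + radius - pos[:, 0]), 0.0)
        force -= damping * vel
        vel += dt * force / mass
        pos += dt * vel
        wall_x += wall_speed * dt

    print("remaining bonds:", len(bonds), "max displacement:", np.abs(pos - pos0).max())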
Achieving Agreement in Three Rounds With Bounded-Byzantine Faults
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2015-01-01
A three-round algorithm is presented that guarantees agreement in a system of K ≥ 3F + 1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Messages algorithm of Lamport et al., is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
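For context, a compact Python sketch of the classic Oral Messages recursion OM(m) on which the algorithm builds (Lamport's textbook formulation, not the three-round bounded-fault variant itself; the faulty node here simply inverts values):

    from collections import Counter

    def majority(values):
        # ties are broken arbitrarily; with K >= 3F + 1 good nodes never tie
        return Counter(values).most_common(1)[0][0]

    def send(sender, value, receiver, faulty):
        # a faulty sender may send arbitrary values; here it flips the bit
        return (1 - value) if sender in faulty else value

    def om(m, commander, value, lieutenants, faulty):
        """Return {lieutenant: decided value} for the OM(m) recursion."""
        received = {p: send(commander, value, p, faulty) for p in lieutenants}
        if m == 0:
            return received
        # each lieutenant relays the value it received, acting as commander in OM(m-1)
        sub = {}
        for i in lieutenants:
            others = [q for q in lieutenants if q != i]
            sub[i] = om(m - 1, i, received[i], others, faulty)
        decided = {}
        for j in lieutenants:
            votes = [received[j]] + [sub[i][j] for i in lieutenants if i != j]
            decided[j] = majority(votes)
        return decided

    # 4 nodes, 1 fault: commander 0 is good and sends 1; lieutenant 3 is Byzantine
    print(om(1, commander=0, value=1, lieutenants=[1, 2, 3], faulty={3}))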
Surveillance system and method having an operating mode partitioned fault classification model
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2005-01-01
A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.
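A minimal sketch of the partitioning idea (toy submodels and thresholds, not the patented system): the current operating mode selects which parameter-estimation, fault-detection, and fault-classification submodel handles each observation.

    # per-operating-mode submodels; all expected values, thresholds, and labels are invented
    submodels = {
        "startup": {"expected": 10.0, "threshold": 3.0,
                    "classify": lambda r: "sensor_drift" if r > 0 else "sensor_bias"},
        "steady":  {"expected": 50.0, "threshold": 1.0,
                    "classify": lambda r: "valve_leak" if r > 0 else "sensor_bias"},
    }

    def surveil(mode, measurement):
        m = submodels[mode]                       # pick the submodel for the current operating mode
        residual = measurement - m["expected"]    # parameter-estimation submodel
        if abs(residual) <= m["threshold"]:       # fault-detection submodel
            return "nominal"
        return m["classify"](residual)            # fault-classification submodel

    print(surveil("startup", 11.5), surveil("steady", 53.2))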
A Dynamic Finite Element Method for Simulating the Physics of Faults Systems
NASA Astrophysics Data System (ADS)
Saez, E.; Mora, P.; Gross, L.; Weatherley, D.
2004-12-01
We introduce a dynamic Finite Element method using a novel high-level scripting language to describe the physical equations, boundary conditions, and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behaviour. The stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using the Saint Verlat scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2-D model for simulating the dynamics of parallel fault systems, previously described for such lattice solid-like models, to the Finite Element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. In order to illustrate the new Finite Element model, single- and multi-fault simulation examples are presented.
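As a toy illustration of an explicit elastodynamic time step of the kind described (a velocity-Verlet-style update on a 1-D discretised elastic bar, not the escript/Finley implementation; all parameters are arbitrary):

    import numpy as np

    n, dx, dt = 200, 1.0, 1.0e-3
    rho, E = 1.0, 1.0e4                       # density and Young's modulus (wave speed 100, CFL ~ 0.1)
    u = np.zeros(n); v = np.zeros(n)
    u[n // 2] = 1.0                           # initial displacement pulse

    def accel(u):
        a = np.zeros_like(u)
        # acceleration = (E/rho) * d2u/dx2 in the interior; fixed (u = 0) ends
        a[1:-1] = (E / rho) * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        return a

    a = accel(u)
    for step in range(5000):
        u += dt * v + 0.5 * dt**2 * a          # position update
        a_new = accel(u)
        v += 0.5 * dt * (a + a_new)            # velocity update with averaged acceleration
        a = a_new

    print(f"max |u| after {step + 1} steps: {np.abs(u).max():.3f}")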
Use of multiple relocation techniques to better understand seismotectonic structure in Greece
NASA Astrophysics Data System (ADS)
Bozionelos, George; Ganas, Athanassios; Karastathis, Vassilios; Moshou, Alexandra
2015-04-01
The identification of the structure of seismicity associated with active faults is of great significance, particularly for the densely populated areas of Greece, such as the Corinth Gulf, the SW Peloponnese, and central Crete. Manual analysis of the seismicity recorded by the Hellenic Unified Seismological Network (HUSN) in recent years provides the opportunity to determine accurate hypocentral solutions using the weighted P- and S-wave arrival times for these regions. The purpose is to perform precise event location and relative relocation so as to obtain the spatial distribution of the recorded seismicity with the needed resolution. In order to investigate the influence of the velocity model on the seismicity distribution and to find the most reliable hypocentral locations, different velocity models (both 1-D and 3-D) and location schemes are adopted and thoroughly tested. Initially, to test the models, the hypocentral locations, including the determination of the location uncertainties, are obtained by applying the non-linear location tool NonLinLoc. To approximate the likelihood function, the Equal Differential Time (EDT) formulation, which is much more robust in the presence of outliers, is selected. The Oct-tree search is used to locate the earthquakes. Histograms of the RMS error, the spatial errors, and the maximum half-axis (LEN3) of the 68% confidence ellipsoid are created. Moreover, the form of the density scatterplots and the difference between the maximum-likelihood and expectation locations are taken into account. As an additional procedure, the travel-time residuals are examined separately for each station as a function of epicentral distance. Finally, several cross sections are constructed at various azimuths, and the spatial distribution of the earthquakes is evaluated and compared with the active fault structures. In order to highlight the activated faults, an additional relocation procedure is performed using the double-difference algorithm HYPODD and incorporating the travel-time data of the best-fitting velocity models. The accurate determination of seismicity will play a key role in revealing the mechanisms that contribute to crustal deformation and active tectonics. Note: this research was funded by the ASPIDA project.
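A schematic sketch of the EDT objective mentioned above (straight-ray travel times in a constant-velocity medium and an invented station geometry; not the NonLinLoc implementation):

    import itertools
    import math

    def travel_time(src, sta, v=6.0):
        return math.dist(src, sta) / v         # straight-ray time in a constant-velocity medium

    def edt_likelihood(src, stations, picks, sigma=0.1):
        """Sum of Gaussians of differential-time residuals over all station pairs;
        an outlier pick only degrades the pairs it takes part in."""
        value = 0.0
        for a, b in itertools.combinations(stations, 2):
            obs_diff = picks[a] - picks[b]
            calc_diff = travel_time(src, stations[a]) - travel_time(src, stations[b])
            r = obs_diff - calc_diff
            value += math.exp(-(r * r) / (2.0 * sigma * sigma))
        return value

    stations = {"ST01": (0.0, 0.0, 0.0), "ST02": (30.0, 0.0, 0.0), "ST03": (0.0, 40.0, 0.0)}
    true_src = (10.0, 15.0, 8.0)
    picks = {k: travel_time(true_src, s) for k, s in stations.items()}   # synthetic arrival times
    print(edt_likelihood(true_src, stations, picks),
          edt_likelihood((20.0, 25.0, 5.0), stations, picks))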
NASA Astrophysics Data System (ADS)
Mahya, M. J.; Sanny, T. A.
2017-04-01
The Lembang and Cimandiri faults are active faults in West Java that threaten people living near them with earthquake and surface-deformation hazards. To determine the deformation, GPS measurements around the Lembang and Cimandiri faults were conducted and the data were processed to obtain the horizontal velocity at each GPS station by the Graduate Research of Earthquake and Active Tectonics (GREAT), Department of Geodesy and Geomatics Engineering Study Program, ITB. The purpose of this study is to model the displacement distribution, as a deformation parameter, in the area along the Lembang and Cimandiri faults using a 2-dimensional boundary element method (BEM), with the horizontal velocities corrected for the effect of the horizontal movement of the Sunda plate as the input. The assumptions made at the modeling stage are that deformation occurs in a homogeneous and isotropic medium and that the stresses acting on the faults are in an elastostatic condition. The results of the modeling show that the Lembang fault has a left-lateral slip component and is divided into two segments. A lineament oriented in a southwest-northeast direction is observed near Tangkuban Perahu Mountain, separating the eastern and western segments of the Lembang fault. The displacement pattern of the Cimandiri fault shows that it is divided into an eastern segment with a right-lateral slip component and a western segment with a left-lateral slip component, separated by a northwest-southeast oriented lineament at the western part of Gede Pangrango Mountain. The displacement value between the Lembang and Cimandiri faults is nearly zero, indicating that the two faults are not connected to each other and that this area is relatively safe for infrastructure development.
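For orientation, a sketch of the classic antiplane screw-dislocation building block often used in 2-D elastic models of strike-slip faults; it is shown only to illustrate the type of elastostatic forward model a 2-D boundary element treatment discretises, and the slip rate and locking depth below are hypothetical, not values from the study.

    import numpy as np

    def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
        """Savage-Burford screw dislocation: fault-parallel surface velocity at distance x
        from a fault creeping at slip_rate below locking_depth."""
        return (slip_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

    x = np.linspace(-50.0, 50.0, 11)
    v = interseismic_velocity(x, slip_rate_mm_yr=6.0, locking_depth_km=15.0)
    for xi, vi in zip(x, v):
        print(f"x = {xi:6.1f} km   v = {vi:6.2f} mm/yr")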
Comparison of fault-related folding algorithms to restore a fold-and-thrust-belt
NASA Astrophysics Data System (ADS)
Brandes, Christian; Tanner, David
2017-04-01
Fault-related folding means the contemporaneous evolution of folds as a consequence of fault movement. It is a common deformation process in the upper crust that occurs worldwide in accretionary wedges, fold-and-thrust belts, and intra-plate settings, in either strike-slip, compressional, or extensional regimes. Over the last 30 years different algorithms have been developed to simulate the kinematic evolution of fault-related folds. All these models of fault-related folding include similar simplifications and limitations and use the same kinematic behaviour throughout the model (Brandes & Tanner, 2014). We used a natural example of fault-related folding from the Limón fold-and-thrust belt in eastern Costa Rica to test two different algorithms and to compare the resulting geometries. A thrust fault and its hanging-wall anticline were restored using both the trishear method (Allmendinger, 1998; Zehnder & Allmendinger, 2000) and the fault-parallel flow approach (Ziesch et al. 2014); both methods are widely used in academia and industry. The resulting hanging-wall folds above the thrust fault are restored in substantially different fashions. This is largely a function of the propagation-to-slip ratio of the thrust, which controls the geometry of the related anticline. Understanding the controlling factors for anticline evolution is important for the evaluation of potential hydrocarbon reservoirs and the characterization of fault processes. References: Allmendinger, R.W., 1998. Inverse and forward numerical modeling of trishear fault propagation folds. Tectonics, 17, 640-656. Brandes, C., Tanner, D.C. 2014. Fault-related folding: a review of kinematic models and their application. Earth Science Reviews, 138, 352-370. Zehnder, A.T., Allmendinger, R.W., 2000. Velocity field for the trishear model. Journal of Structural Geology, 22, 1009-1014. Ziesch, J., Tanner, D.C., Krawczyk, C.M. 2014. Strain associated with the fault-parallel flow algorithm during kinematic fault displacement. Mathematical Geosciences, 46(1), 59-73.
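As a conceptual toy only, the following sketch moves hanging-wall particles parallel to the fault segment beneath them, which is the core idea of fault-parallel flow; the published algorithm additionally handles dip-domain boundaries and back-shear, the trishear method is not shown, and the fault geometry and points below are invented.

    import numpy as np

    # flat-ramp-flat fault trace (x, z), z negative downwards (arbitrary geometry)
    fault = np.array([[-10.0, -5.0], [0.0, -5.0], [5.0, -2.0], [15.0, -2.0]])

    def segment_direction(x):
        """Unit vector of the fault segment directly below horizontal position x."""
        for (x0, z0), (x1, z1) in zip(fault[:-1], fault[1:]):
            if x0 <= x <= x1:
                d = np.array([x1 - x0, z1 - z0])
                return d / np.linalg.norm(d)
        return np.array([1.0, 0.0])            # beyond the trace: move along the last flat

    def fault_parallel_flow(points, slip, n_increments=50):
        # apply the slip in small increments so particles change direction at fault bends
        pts = points.astype(float)
        for _ in range(n_increments):
            for p in pts:
                p += (slip / n_increments) * segment_direction(p[0])
        return pts

    hanging_wall = np.array([[-5.0, -3.0], [-1.0, -1.0], [2.0, -0.5]])
    print(fault_parallel_flow(hanging_wall, slip=4.0))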
Deformation pattern during normal faulting: A sequential limit analysis
NASA Astrophysics Data System (ADS)
Yuan, X. P.; Maillot, B.; Leroy, Y. M.
2017-02-01
We model in 2-D the formation and development of half-graben faults above a low-angle normal detachment fault. The model, based on a "sequential limit analysis" accounting for mechanical equilibrium and energy dissipation, simulates the incremental deformation of a frictional, cohesive, and fluid-saturated rock wedge above the detachment. Two modes of deformation, gravitational collapse and tectonic collapse, are revealed, which compare well with the results of critical Coulomb wedge theory. We additionally show that the fault and the axial surface of the half-graben rotate as topographic subsidence increases. This progressive rotation causes some of the footwall material to be sheared and to enter the hanging wall, creating a specific region called the foot-to-hanging wall (FHW). The model allows additional effects to be introduced, such as weakening of the faults once they have slipped and sedimentation in their hanging wall. These processes are shown to control the size of the FHW region and the number of fault-bounded blocks it eventually contains. Fault weakening tends to make fault rotation more discontinuous, and this results in the FHW zone containing multiple blocks of intact material separated by faults. By compensating for the topographic subsidence of the half-graben, sedimentation tends to slow the fault rotation, which reduces the size of the FHW zone and its number of fault-bounded blocks. We apply the new approach to reproduce the faults observed along a seismic line in the Southern Jeanne d'Arc Basin, Grand Banks, offshore Newfoundland. There, a single block exists in the hanging wall of the principal fault. The model explains this situation well, provided that a slow sedimentation rate is assumed in the Lower Jurassic, followed by an increasing rate over time as the main detachment fault grew.