Integrated Approach To Design And Analysis Of Systems
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1993-01-01
Object-oriented fault-tree representation unifies evaluation of reliability and diagnosis of faults. Programming/fault tree described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). Augmented fault tree object contains more information than fault tree object used in quantitative analysis of reliability. Additional information needed to diagnose faults in system represented by fault tree.
Fault Tree in the Trenches, A Success Story
NASA Technical Reports Server (NTRS)
Long, R. Allen; Goodson, Amanda (Technical Monitor)
2000-01-01
Getting caught up in the explanation of Fault Tree Analysis (FTA) minutiae is easy. In fact, most FTA literature tends to address FTA concepts and methodology. Yet there seem to be few articles addressing actual design changes resulting from the successful application of fault tree analysis. This paper demonstrates how fault tree analysis was used to identify and solve a potentially catastrophic mechanical problem at a rocket motor manufacturer. While developing the fault tree given in this example, the analyst was told by several organizations that the piece of equipment in question had been evaluated by several committees and organizations, and that the analyst was wasting his time. The fault tree/cutset analysis resulted in a joint redesign of the control system by the tool engineering group and the fault tree analyst, as well as bragging rights for the analyst. (That the fault tree found problems where other engineering reviews had failed was not lost on the other engineering groups.) Even more interesting was that this was the analyst's first fault tree, which further demonstrates how effective fault tree analysis can be in guiding (i.e., forcing) the analyst to take a methodical approach in evaluating complex systems.
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
Object-oriented fault tree evaluation program for quantitative analyses
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1988-01-01
Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
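The object representation described above (events that carry tree structure, reliability data, and intermediate reduction results) can be sketched in Python; the class, attribute, and event names here are illustrative stand-ins, not those of the Flavors-based program:

```python
class FaultTreeEvent:
    """Fault tree event object: stores structure, reliability data, cached results."""

    def __init__(self, name, gate=None, failure_prob=None, children=None):
        self.name = name                  # event identifier
        self.gate = gate                  # 'AND', 'OR', or None for a basic event
        self.failure_prob = failure_prob  # reliability datum stored on a basic event
        self.children = children or []    # tree structural information
        self.cached_result = None         # intermediate result from tree reduction

    def probability(self):
        """Evaluate this (sub)tree, caching the intermediate result for reuse."""
        if self.cached_result is not None:
            return self.cached_result
        if self.gate is None:                       # basic event
            result = self.failure_prob
        elif self.gate == 'AND':                    # all inputs fail (independence assumed)
            result = 1.0
            for child in self.children:
                result *= child.probability()
        elif self.gate == 'OR':                     # any input failing causes the event
            survive = 1.0
            for child in self.children:
                survive *= 1.0 - child.probability()
            result = 1.0 - survive
        else:
            raise ValueError(f"unknown gate {self.gate}")
        self.cached_result = result
        return result

# Hypothetical tree: loss of flow = OR(pump fails, AND(valve A fails, valve B fails))
valves = FaultTreeEvent("valves", gate="AND",
                        children=[FaultTreeEvent("valve_a", failure_prob=1e-3),
                                  FaultTreeEvent("valve_b", failure_prob=1e-3)])
top = FaultTreeEvent("loss_of_flow", gate="OR",
                     children=[FaultTreeEvent("pump", failure_prob=5e-4), valves])
print(top.probability())
```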
McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L
The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.
Interim reliability evaluation program, Browns Ferry fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, M.E.
1981-01-01
An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation Program, simplifying the recording and displaying of events, yet maintaining the system of identifying faults. The level of investigation is not changed. The analytical thought process inherent in the conventional method is not compromised. But the abbreviated method takes less time, and the fault modes are much more visible.
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component that affects the system reliability. Here, weakest t-norm based intuitionistic fuzzy fault tree analysis is presented to calculate the fault intervals of system components by integrating experts' knowledge and experience in terms of providing the possibility of failure of bottom events. It applies fault-tree analysis, α-cut of intuitionistic fuzzy set and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The result of the proposed method is compared with those of existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
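The α-cut interval arithmetic behind such evaluations can be illustrated with a small sketch. Note that this sketch uses ordinary product interval arithmetic on triangular possibilities rather than the weakest t-norm operations of the paper, and the event names and numbers are invented:

```python
import math

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy possibility (a, b, c) at level alpha in [0, 1]."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def and_gate(cuts):
    """AND gate on intervals: endpoint-wise product (independence assumed)."""
    return (math.prod(l for l, _ in cuts), math.prod(u for _, u in cuts))

def or_gate(cuts):
    """OR gate on intervals: 1 minus product of complements, endpoint-wise."""
    return (1 - math.prod(1 - l for l, _ in cuts),
            1 - math.prod(1 - u for _, u in cuts))

# Hypothetical basic events with triangular possibilities of failure (invented values)
events = {"sensor": (0.02, 0.05, 0.09), "relay": (0.01, 0.03, 0.06)}
for alpha in (0.0, 0.5, 1.0):
    cuts = [alpha_cut(t, alpha) for t in events.values()]
    print(alpha, "AND:", and_gate(cuts), "OR:", or_gate(cuts))
```

At α = 1 the intervals collapse to the most-possible point values; lower α levels widen the resulting fault interval.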
Object-Oriented Algorithm For Evaluation Of Fault Trees
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1992-01-01
Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
Fire safety in transit systems fault tree analysis
DOT National Transportation Integrated Search
1981-09-01
Fire safety countermeasures applicable to transit vehicles are identified and evaluated. This document contains fault trees which illustrate the sequences of events which may lead to a transit-fire related casualty. A description of the basis for the...
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as failure rate, unavailability and mean time to failure (MTTF). Through fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
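The reliability figures named in the abstract (failure rate, unavailability, MTTF) follow from standard constant-failure-rate relations; a minimal sketch with invented rates, not the measured Zynq-7010 values:

```python
# Constant-failure-rate relations for the figures named in the abstract.
# The rates below are invented placeholders, not measured SoC values.
lambda_block = 2.0e-6    # failures per hour for one functional block
mu_repair = 0.1          # repairs (e.g., scrubbing/reconfiguration) per hour

mttf = 1.0 / lambda_block                                    # mean time to failure
unavailability = lambda_block / (lambda_block + mu_repair)   # steady-state value

# A series system of independent blocks fails when any block fails,
# so the system failure rate is the sum of the block rates.
block_rates = [2.0e-6, 5.0e-7, 1.2e-6]
system_mttf = 1.0 / sum(block_rates)

print(f"MTTF per block: {mttf:.3e} h, unavailability: {unavailability:.3e}")
print(f"series-system MTTF: {system_mttf:.3e} h")
```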
Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.
Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C
2015-06-01
An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and are aiming at more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, that is, ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The fault tree developed here is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers to develop more refined and adequate control activities against FMD outbreaks and to find optimum strategies for rapid control. Further application using the current tree will be one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.
Direct evaluation of fault trees using object-oriented programming techniques
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1989-01-01
Object-oriented programming techniques are used in an algorithm for the direct evaluation of fault trees. The algorithm combines a simple bottom-up procedure for trees without repeated events with a top-down recursive procedure for trees with repeated events. The object-oriented approach results in a dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. An example is presented in which the results of the algorithm implemented with conventional techniques are compared to those of the object-oriented approach.
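A condensed sketch of the two ideas the abstract combines (plain bottom-up gate evaluation where no event is shared, and recursive conditioning, or factoring, on a repeated event), using a hypothetical tree; the published algorithm's object-oriented modularization details are not reproduced here:

```python
from functools import reduce

# A node is ('basic', name) or (gate, [children]) with gate in {'AND', 'OR'}.

def basic_events(node):
    if node[0] == 'basic':
        return {node[1]}
    return set().union(*(basic_events(c) for c in node[1]))

def evaluate(node, probs, fixed=None):
    """Top-event probability; `fixed` pins repeated events to 0/1 while factoring."""
    fixed = fixed or {}
    kind = node[0]
    if kind == 'basic':
        return fixed.get(node[1], probs[node[1]])
    children = node[1]
    # Look for a basic event shared by two or more child subtrees and factor on it.
    seen, shared = set(), None
    for c in children:
        ev = basic_events(c) - set(fixed)
        if ev & seen:
            shared = next(iter(ev & seen))
            break
        seen |= ev
    if shared is not None:
        p = probs[shared]
        return (p * evaluate(node, probs, {**fixed, shared: 1.0}) +
                (1 - p) * evaluate(node, probs, {**fixed, shared: 0.0}))
    # No sharing below this gate: plain bottom-up combination.
    vals = [evaluate(c, probs, fixed) for c in children]
    if kind == 'AND':
        return reduce(lambda a, b: a * b, vals, 1.0)
    return 1.0 - reduce(lambda a, b: a * (1 - b), vals, 1.0)

# Hypothetical tree: the repeated event 'power' appears under both OR branches.
tree = ('OR', [('AND', [('basic', 'power'), ('basic', 'pump')]),
               ('AND', [('basic', 'power'), ('basic', 'valve')])])
probs = {'power': 0.01, 'pump': 0.1, 'valve': 0.2}
print(evaluate(tree, probs))   # exact value: 0.01 * (1 - 0.9 * 0.8) = 0.0028
```

Treating the shared event as independent in both branches would give about 0.0030 here, so the factoring step is what keeps the answer exact.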
Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
Learning from examples - Generation and evaluation of decision trees for software resource analysis
NASA Technical Reports Server (NTRS)
Selby, Richard W.; Porter, Adam A.
1988-01-01
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
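Decision-tree generation of the kind evaluated in the study can be sketched with scikit-learn; the module metrics, labels, and the new module below are invented stand-ins for the 74 attributes analyzed:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical module metrics: [source lines, cyclomatic complexity, changes]
X = np.array([[1200, 35, 14], [300, 6, 2], [4500, 80, 40], [800, 12, 5],
              [2200, 55, 21], [150, 3, 1], [3900, 60, 33], [600, 9, 4]])
# Invented label: 1 if the module needed "high" development effort, else 0
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["sloc", "complexity", "changes"]))
print(tree.predict([[2000, 40, 18]]))   # classify a new, unseen module
```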
SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis Smith; James Knudsen
As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation, or in this case the failure, of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude its use on real-world problems.
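For independent and identical assemblies, the N-out-of-M success logic mentioned above reduces to a binomial sum; a short sketch with placeholder numbers, not the benchmark's data:

```python
from math import comb

def k_out_of_n_failure(n, k_required, p_fail):
    """Probability that fewer than k_required of n independent units survive."""
    p_ok = 1.0 - p_fail
    # The phase fails if the number of working units is below k_required.
    return sum(comb(n, i) * p_ok**i * p_fail**(n - i) for i in range(k_required))

# Hypothetical phase: 5 thruster assemblies, at least 3 must work,
# each with an invented 2% chance of failing during the phase.
print(k_out_of_n_failure(n=5, k_required=3, p_fail=0.02))
```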
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
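The two sampling-based comparisons described (plain Monte Carlo percentiles and a Wilks one-sided bound) can be compressed into a small sketch; the example tree structure and lognormal parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def top_event(p):
    """Invented example fault tree: TOP = OR(p0, AND(p1, p2))."""
    return 1 - (1 - p[0]) * (1 - p[1] * p[2])

# Lognormal uncertainty on each basic event (medians and error factor invented).
medians = np.array([1e-4, 1e-3, 5e-3])
error_factor = 3.0                       # 95th/50th percentile ratio
sigma = np.log(error_factor) / 1.645     # lognormal shape parameter

def sample_top(n):
    p = medians * np.exp(sigma * rng.standard_normal((n, 3)))
    return np.array([top_event(row) for row in p])

# Full Monte Carlo estimate of the 95th percentile of the top-event probability
full = sample_top(10_000)
print("MC 95th percentile:", np.percentile(full, 95))

# Wilks bound: with 59 samples, the sample maximum is a 95%/95% one-sided
# upper tolerance bound, since 1 - 0.95**59 >= 0.95.
print("Wilks 95/95 bound :", sample_top(59).max())
```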
[The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].
Liu, Hongbin
2015-11-01
In this paper, the traditional fault tree analysis method is presented, and detailed instructions are given for its application in medical instrument maintenance. Significant changes are made when the traditional fault tree analysis method is introduced into medical instrument maintenance: the logic symbols, logic analysis, and calculation are abandoned, along with the method's complicated procedures, and only the intuitive and practical fault tree diagram is kept. The fault tree diagram itself also differs: the fault tree is no longer a logic tree but a thinking tree for troubleshooting, the definition of the fault tree's nodes is different, and the composition of the fault tree's branches is also different.
NASA Technical Reports Server (NTRS)
Lee, Charles; Alena, Richard L.; Robinson, Peter
2004-01-01
Starting from an ISS fault tree example, we present a method to convert fault trees to decision trees. The method shows that visualizing the root cause of a fault becomes easier and that manipulating the trees becomes more programmatic via available decision tree programs. The visualization of decision trees for diagnostics is straightforward and easy to understand. For ISS real-time fault diagnostics, the status of the systems can be shown by running the signals through the trees and observing where they stop. Another advantage of decision trees is that they can learn fault patterns and predict future faults from historical data. The learning is not limited to static data sets but can also be done online: by accumulating real-time data sets, the decision trees can acquire and store fault patterns and recognize them when they recur.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question. The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values.
None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The RRR for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event. Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The RIR for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
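Reading RRR and RIR as the usual risk reduction and risk increase ratios, the importance calculations described above can be expressed compactly; the two-gate tree and probabilities below are invented, not MSET data:

```python
def top_probability(p):
    """Invented example system: TOP = OR(p['a'], AND(p['b'], p['c']))."""
    return 1 - (1 - p['a']) * (1 - p['b'] * p['c'])

base = {'a': 0.02, 'b': 0.10, 'c': 0.30}   # invented basic event probabilities
r0 = top_probability(base)

for event in base:
    perfect = top_probability({**base, event: 0.0})   # event never fails
    failed = top_probability({**base, event: 1.0})    # event always fails
    rrr = r0 / perfect if perfect > 0 else float('inf')  # risk reduction ratio
    rir = failed / r0                                     # risk increase ratio
    print(f"{event}: RRR={rrr:.2f}  RIR={rir:.2f}")
```

Events with the largest RRR are the best candidates for improvement; events with the largest RIR are the ones whose degradation must be prevented first, which matches how the abstract describes the two measures being used.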
Automatic translation of digraph to fault-tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.
1992-01-01
The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
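The cut-set computation such translators rely on can be sketched as a MOCUS-style top-down expansion followed by Boolean absorption; the gate and event names below are invented:

```python
from itertools import product

# Invented gate definitions: name -> (type, inputs); names not in `gates` are basic events.
gates = {
    'TOP': ('OR',  ['G1', 'pump']),
    'G1':  ('AND', ['power', 'G2']),
    'G2':  ('OR',  ['valve_a', 'valve_b']),
}

def cut_sets(node):
    """Return the cut sets (frozensets of basic events) for a node, top-down."""
    if node not in gates:                       # basic event
        return [frozenset([node])]
    kind, inputs = gates[node]
    child_sets = [cut_sets(i) for i in inputs]
    if kind == 'OR':                            # union of the children's cut sets
        return [cs for sets in child_sets for cs in sets]
    # AND gate: take one cut set from each child and merge them
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize(sets):
    """Boolean absorption: drop any cut set that strictly contains another."""
    return [s for s in sets if not any(o < s for o in sets)]

for cs in minimize(cut_sets('TOP')):
    print(sorted(cs))
```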
Reset Tree-Based Optical Fault Detection
Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon
2013-01-01
In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
Fault-Tree Compiler (FTC) program is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance
NASA Astrophysics Data System (ADS)
Wang, Jian; Yang, Zhenwei; Kang, Mei
2018-01-01
This paper attempts to apply the fault tree analysis method to corrective maintenance of grid communication systems. A fault tree model of a typical system is established from engineering experience, and fault tree analysis theory is used to analyze the model, covering the structural function, probability importance, and so on. The results show that fault tree analysis enables fast fault localization and effective repair of the system. It also finds that the fault tree analysis method has guiding significance for reliability research and upgrading of the system.
Modular techniques for dynamic fault-tree analysis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Dugan, Joanne B.
1992-01-01
It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
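A minimal sketch of the modular idea: a dynamic module (here a two-unit cold spare with perfect switching and a constant failure rate, whose three-state Markov chain has the closed-form Erlang-2 solution) is solved separately, and its unreliability is then combined through a static gate with the rest of the tree. Names and rates are invented:

```python
import math

def cold_spare_unreliability(lam, t):
    """Two identical units, cold spare, perfect switching: Erlang-2 CDF.
    Closed-form solution of the three-state Markov chain for this module."""
    return 1.0 - math.exp(-lam * t) * (1.0 + lam * t)

def static_or(*probs):
    """Static (combinatorial) OR gate over independent module unreliabilities."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

t = 1000.0                                           # mission time, hours (invented)
spare_module = cold_spare_unreliability(1e-4, t)     # dynamic module, solved separately
sensor = 1.0 - math.exp(-2e-5 * t)                   # simple exponential basic event
print(static_or(spare_module, sensor))
```

Only the dynamic module needs a Markov-type solution; the rest of the tree stays combinatorial, which is the size reduction the abstract describes.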
Using Fault Trees to Advance Understanding of Diagnostic Errors.
Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep
2017-11-01
Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1992-01-01
FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
Tutorial: Advanced fault tree applications using HARP
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.
1993-01-01
Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
Technology transfer by means of fault tree synthesis
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Since Fault Tree Analysis (FTA) attempts to model and analyze failure processes in engineering, it forms a common technique for good industrial practice. In contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a Knowledge Base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the latter stages of an industrial procedure with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage, where the fault has been detected, (ii) the formation of an interface made of input faults that might occur upstream, (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance, and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus proving the functionality of the developed methodology.
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models for the systems either from mathematical formulations or from experimental models. Fault Tree Analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data feedback from the system, and decisions are made based on threshold values by using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
Survey of critical failure events in on-chip interconnect by fault tree analysis
NASA Astrophysics Data System (ADS)
Yokogawa, Shinji; Kunii, Kyousuke
2018-07-01
In this paper, a framework based on reliability physics is proposed for applying fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the possibilities of failure of basic events, critical issues of on-chip interconnect reliability will be evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting the on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The moisture impact is evaluated as an external event.
Applying fault tree analysis to the prevention of wrong-site surgery.
Abecassis, Zachary A; McElroy, Lisa M; Patel, Ronak M; Khorzad, Rebeca; Carroll, Charles; Mehrotra, Sanjay
2015-01-01
Wrong-site surgery (WSS) is a rare event that occurs to hundreds of patients each year. Despite national implementation of the Universal Protocol over the past decade, development of effective interventions remains a challenge. We performed a systematic review of the literature reporting root causes of WSS and used the results to perform a fault tree analysis to assess the reliability of the system in preventing WSS and identifying high-priority targets for interventions aimed at reducing WSS. Process components where a single error could result in WSS were labeled with OR gates; process aspects reinforced by verification were labeled with AND gates. The overall redundancy of the system was evaluated based on prevalence of AND gates and OR gates. In total, 37 studies described risk factors for WSS. The fault tree contains 35 faults, most of which fall into five main categories. Despite the Universal Protocol mandating patient verification, surgical site signing, and a brief time-out, a large proportion of the process relies on human transcription and verification. Fault tree analysis provides a standardized perspective of errors or faults within the system of surgical scheduling and site confirmation. It can be adapted by institutions or specialties to lead to more targeted interventions to increase redundancy and reliability within the preoperative process. Copyright © 2015 Elsevier Inc. All rights reserved.
Fault trees and sequence dependencies
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.
1990-01-01
One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
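One of the sequence-dependent behaviors such gates capture, a priority-AND that fires only when its inputs fail in a given order, can be checked with a small Monte Carlo sketch; the failure rates and mission time are invented:

```python
import random

def pand_probability(lam_a, lam_b, t, n=200_000, seed=1):
    """Monte Carlo estimate of P(A and B both fail by t, with A failing first)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        ta = rng.expovariate(lam_a)   # failure time of component A
        tb = rng.expovariate(lam_b)   # failure time of component B
        if ta < tb <= t:              # sequence dependency: A before B, both by t
            hits += 1
    return hits / n

# Invented failure rates (per hour) and mission time
print(pand_probability(lam_a=1e-3, lam_b=2e-3, t=1000.0))
```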
A dynamic fault tree model of a propulsion system
NASA Technical Reports Server (NTRS)
Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila
2006-01-01
We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing
2017-01-14
In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the staff generally lack professional knowledge and pay a low degree of attention in these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logic structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while one symptom-to-two faults patterns do not perform as well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.
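The symptom-to-fault mapping step can be illustrated with an ordinary multilayer perceptron from scikit-learn rather than the fuzzy neural network of the paper; the symptom indicators, fault labels, and patterns below are invented:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Invented symptom indicators: [no_data, low_voltage, sensor_drift, high_temp]
X = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 1],
              [0, 1, 0, 0], [1, 0, 0, 1]])
# Invented fault labels: 0=broken cable, 1=power fault, 2=sensor fault, 3=controller fault
y = np.array([0, 1, 2, 2, 1, 3])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
print(clf.predict([[1, 1, 0, 0]]))   # diagnose a new symptom pattern
```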
NASA Astrophysics Data System (ADS)
Guns, K. A.; Bennett, R. A.; Blisniuk, K.
2017-12-01
To better evaluate the distribution and transfer of strain and slip along the Southern San Andreas Fault (SSAF) zone in the northern Coachella valley in southern California, we integrate geological and geodetic observations to test whether strain is being transferred away from the SSAF system towards the Eastern California Shear Zone through microblock rotation of the Eastern Transverse Ranges (ETR). The faults of the ETR consist of five east-west trending left lateral strike slip faults that have measured cumulative offsets of up to 20 km and as low as 1 km. Present kinematic and block models present a variety of slip rate estimates, from as low as zero to as high as 7 mm/yr, suggesting a gap in our understanding of what role these faults play in the larger system. To determine whether present-day block rotation along these faults is contributing to strain transfer in the region, we are applying 10Be surface exposure dating methods to observed offset channel and alluvial fan deposits in order to estimate fault slip rates along two faults in the ETR. We present observations of offset geomorphic landforms using field mapping and LiDAR data at three sites along the Blue Cut Fault and one site along the Smoke Tree Wash Fault in Joshua Tree National Park which indicate recent Quaternary fault activity. Initial results of site mapping and clast count analyses reveal at least three stages of offset, including potential Holocene offsets, for one site along the Blue Cut Fault, while preliminary 10Be geochronology is in progress. This geologic slip rate data, combined with our new geodetic surface velocity field derived from updated campaign-based GPS measurements within Joshua Tree National Park will allow us to construct a suite of elastic fault block models to elucidate rates of strain transfer away from the SSAF and how that strain transfer may be affecting the length of the interseismic period along the SSAF.
Reliability computation using fault tree analysis
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
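The conditional-probability device mentioned for repeated basic failures is, as commonly formulated, the total-probability (factoring) identity: for a basic event X appearing in more than one fault path,

P(TOP) = P(TOP | X occurs) * P(X) + P(TOP | X does not occur) * (1 - P(X)),

so the two conditioned trees no longer share X and each can be evaluated with ordinary gate arithmetic.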
NASA Technical Reports Server (NTRS)
Martensen, Anna L.; Butler, Ricky W.
1987-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double-precision floating-point arithmetic) to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
The Fault Tree Compiler (FTC): Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1989-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
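The sensitivity feature described (varying one failure probability over a range and recording the top-event probability) can be sketched as follows; the tree and sweep range are invented, not the FTC input language:

```python
import numpy as np

def top_probability(p_varied, p_fixed=0.001):
    """Invented two-gate tree: TOP = OR(p_varied, AND(p_fixed, p_fixed))."""
    return 1 - (1 - p_varied) * (1 - p_fixed * p_fixed)

# Sweep one basic-event probability over a range, as a sensitivity analysis would.
for p in np.logspace(-5, -2, 7):
    print(f"{p:.1e} -> {top_probability(p):.6e}")
```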
Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie
2006-01-01
A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exists. The approach is scalable, allowing inclusion of additional information as detailed data becomes available. The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
Systems Theoretic Process Analysis Applied to an Offshore Supply Vessel Dynamic Positioning System
2016-06-01
additional safety issues that were either not identified or inadequately mitigated through the use of Fault Tree Analysis and Failure Modes and...
[Impact of water pollution risk in water transfer project based on fault tree analysis].
Liu, Jian-Chang; Zhang, Wei; Wang, Li-Min; Li, Dai-Qing; Fan, Xiu-Ying; Deng, Hong-Bing
2009-09-15
Methods to assess water pollution risk for medium water transfer are gradually being explored. The event-nature-proportion method was developed to evaluate the probability of a single event. Fault tree analysis, building on the single-event calculations, was employed to evaluate the overall water pollution risk for the channel water body. The result indicates that the risk of pollutants from towns and villages along the line of the water transfer project reaching the channel water body is at a high level, with a probability of 0.373, which would increase pollution of the channel water body by 64.53 mg/L COD, 4.57 mg/L NH4(+)-N, and 0.066 mg/L volatile hydroxybenzene, respectively. Measuring fault probability on the basis of the proportion method proves useful in assessing water pollution risk under much uncertainty.
Fault tree analysis for urban flooding.
ten Veldhuis, J A E; Clemens, F H L R; van Gelder, P H A J M
2009-01-01
Traditional methods to evaluate flood risk generally focus on heavy storm events as the principal cause of flooding. Conversely, fault tree analysis is a technique that aims at modelling all potential causes of flooding. It quantifies both overall flood probability and relative contributions of individual causes of flooding. This paper presents a fault model for urban flooding and an application to the case of Haarlem, a city of 147,000 inhabitants. Data from a complaint register, rainfall gauges and hydrodynamic model calculations are used to quantify probabilities of basic events in the fault tree. This results in a flood probability of 0.78/week for Haarlem. It is shown that gully pot blockages contribute to 79% of flood incidents, whereas storm events contribute only 5%. This implies that for this case more efficient gully pot cleaning is a more effective strategy to reduce flood probability than enlarging drainage system capacity. Whether this is also the most cost-effective strategy can only be decided after risk assessment has been complemented with a quantification of consequences of both types of events. To do this will be the next step in this study.
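The split between overall flood probability and per-cause contributions can be reproduced with a small sketch; the weekly cause probabilities below are invented placeholders, not the Haarlem data:

```python
# Invented weekly probabilities of flooding from each cause (not the Haarlem data).
causes = {"gully pot blockage": 0.60, "sewer pipe blockage": 0.10,
          "heavy storm": 0.04, "pump failure": 0.02}

# Overall probability that at least one cause produces flooding in a week (OR gate)
p_any = 1.0
for p in causes.values():
    p_any *= 1.0 - p
p_any = 1.0 - p_any

# Rough attribution: assumes causes are independent and each incident has one cause.
total = sum(causes.values())
for name, p in causes.items():
    print(f"{name}: {100 * p / total:.0f}% of incidents")
print(f"weekly flood probability: {p_any:.2f}")
```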
An overview of the phase-modular fault tree approach to phased mission system analysis
NASA Technical Reports Server (NTRS)
Meshkat, L.; Xing, L.; Donohue, S. K.; Ou, Y.
2003-01-01
In this paper, we look at how fault tree analysis (FTA), a primary means of performing reliability analysis of phased mission systems (PMS), can meet this challenge by presenting an overview of the modular approach to solving fault trees that represent PMS.
Try Fault Tree Analysis, a Step-by-Step Way to Improve Organization Development.
ERIC Educational Resources Information Center
Spitzer, Dean
1980-01-01
Fault Tree Analysis, a systems safety engineering technology used to analyze organizational systems, is described. Explains the use of logic gates to represent the relationship between failure events, qualitative analysis, quantitative analysis, and effective use of Fault Tree Analysis. (CT)
Fault Tree Analysis: A Research Tool for Educational Planning. Technical Report No. 1.
ERIC Educational Resources Information Center
Alameda County School Dept., Hayward, CA. PACE Center.
This ESEA Title III report describes fault tree analysis and assesses its applicability to education. Fault tree analysis is an operations research tool which is designed to increase the probability of success in any system by analyzing the most likely modes of failure that could occur. A graphic portrayal, which has the form of a tree, is…
Shi, Lei; Shuai, Jian; Xu, Kui
2014-08-15
Fire and explosion accidents of steel oil storage tanks (FEASOST) occur occasionally during the petroleum and chemical industry production and storage processes and often have a devastating impact on lives, the environment and property. To contribute towards the development of a quantitative approach for assessing the occurrence probability of FEASOST, a fault tree of FEASOST is constructed that identifies various potential causes. Traditional fault tree analysis (FTA) can achieve quantitative evaluation if the failure data of all of the basic events (BEs) are available, which is almost impossible due to the lack of detailed data, as well as other uncertainties. This paper makes an attempt to perform FTA of FEASOST by a hybrid application between an expert-elicitation-based improved analytic hierarchy process (AHP) and fuzzy set theory, and the occurrence possibility of FEASOST is estimated for an oil depot in China. A comparison between statistical data and calculated data using fuzzy fault tree analysis (FFTA) based on traditional and improved AHP is also made. Sensitivity and importance analysis has been performed to identify the most crucial BEs leading to FEASOST, which will provide insights into how managers should focus effective mitigation. Copyright © 2014 Elsevier B.V. All rights reserved.
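The expert-elicitation step rests on AHP priority weights derived from a pairwise comparison matrix; a minimal sketch of the standard eigenvector calculation with an invented comparison matrix (not the paper's improved AHP):

```python
import numpy as np

# Invented pairwise comparison matrix for three hypothetical basic events (Saaty scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # AHP priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
cr = ci / 0.58                                 # random index for n = 3 is 0.58
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```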
Software For Fault-Tree Diagnosis Of A System
NASA Technical Reports Server (NTRS)
Iverson, Dave; Patterson-Hine, Ann; Liao, Jack
1993-01-01
Fault Tree Diagnosis System (FTDS) computer program is automated-diagnostic-system program identifying likely causes of specified failure on basis of information represented in system-reliability mathematical models known as fault trees. Is modified implementation of failure-cause-identification phase of Narayanan's and Viswanadham's methodology for acquisition of knowledge and reasoning in analyzing failures of systems. Knowledge base of if/then rules replaced with object-oriented fault-tree representation. Enhancement yields more-efficient identification of causes of failures and enables dynamic updating of knowledge base. Written in C language, C++, and Common LISP.
Fault tree models for fault tolerant hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Tuazon, Jezus O.
1991-01-01
Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
NASA Astrophysics Data System (ADS)
Wu, Jianing; Yan, Shaoze; Xie, Liyang
2011-12-01
To address the impact of solar array anomalies, it is important to analyze solar array reliability. This paper establishes fault tree analysis (FTA) and fuzzy reasoning Petri net (FRPN) models of a solar array mechanical system and analyzes reliability to find the mechanisms of solar array faults. The indices final truth degree (FTD) and cosine matching function (CMF) are employed to resolve the issue of how to evaluate the importance and influence of different faults. Thus, an improved reliability analysis method is developed based on sorting by FTD and CMF. An example is analyzed using the proposed method. The analysis results show that the harsh thermal environment and impact caused by particles in space are the most vital causes of solar array faults. Furthermore, other fault modes and the corresponding improvement methods are discussed. The results reported in this paper could be useful for spacecraft designers, particularly in the process of redesigning the solar array and scheduling its reliability growth plan.
Product Support Manager Guidebook
2011-04-01
…package is being developed using supportability analysis concepts such as Failure Mode, Effects, and Criticality Analysis (FMECA), Fault Tree Analysis (FTA), Level of Repair Analysis (LORA), Condition Based Maintenance+ (CBM+), Maintenance Task Analysis (MTA), and Failure Reporting and Corrective Action System (FRACAS).
MIRAP, microcomputer reliability analysis program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jehee, J.N.T.
1989-01-01
A program for a microcomputer is outlined that can determine minimal cut sets from a specified fault tree logic. The speed and memory limitations of the microcomputers on which the program is implemented (Atari ST and IBM) are addressed by reducing the fault tree's size and by storing the cut set data on disk. Extensive, well-proven fault tree restructuring techniques, such as the identification of sibling events and of independent gate events, reduce the fault tree's size but do not alter its logic. New methods are used for the Boolean reduction of the fault tree logic. Special criteria for combining events in the 'AND' and 'OR' logic avoid the creation of many subsuming cut sets which would all cancel out due to existing cut sets. Figures and tables illustrate these methods. 4 refs., 5 tabs.
The FTA Method And A Possibility Of Its Application In The Area Of Road Freight Transport
NASA Astrophysics Data System (ADS)
Poliaková, Adela
2015-06-01
The Fault Tree process utilizes logic diagrams to portray and analyse potentially hazardous events. Three basic symbols (logic gates) are adequate for diagramming any fault tree. However, additional recently developed symbols can be used to reduce the time and effort required for analysis. A fault tree is a graphical representation of the relationship between certain specific events and the ultimate undesired event (2). This paper gives a basic description of the Fault Tree Analysis method and provides a practical view of its possible application to quality improvement in a road freight transport company.
Fault Tree Analysis: Its Implications for Use in Education.
ERIC Educational Resources Information Center
Barker, Bruce O.
This study introduces the concept of Fault Tree Analysis as a systems tool and examines the implications of Fault Tree Analysis (FTA) as a technique for isolating failure modes in educational systems. A definition of FTA and discussion of its history, as it relates to education, are provided. The step by step process for implementation and use of…
Preventing medical errors by designing benign failures.
Grout, John R
2003-07-01
One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
Fault Tree Analysis Application for Safety and Reliability
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies root causes in system components, but when software is identified as a root cause, the tools do not build trees into the software component. No commercial software tools have been built specifically for development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
ERIC Educational Resources Information Center
Barker, Bruce O.; Petersen, Paul D.
This paper explores the fault-tree analysis approach to isolating failure modes within a system. Fault tree analysis investigates potentially undesirable events and then looks for sequences of failures that would lead to their occurrence. Relationships among these events are symbolized by AND or OR logic gates, AND used when single events must coexist to…
Evidential Networks for Fault Tree Analysis with Imprecise Knowledge
NASA Astrophysics Data System (ADS)
Yang, Jianping; Huang, Hong-Zhong; Liu, Yu; Li, Yan-Feng
2012-06-01
Fault tree analysis (FTA), as one of the powerful tools in reliability engineering, has been widely used to enhance system quality attributes. In most fault tree analyses, precise values are adopted to represent the probabilities of occurrence of those events. Due to the lack of sufficient data or imprecision of existing data at the early stage of product design, it is often difficult to accurately estimate the failure rates of individual events or the probabilities of occurrence of the events. Therefore, such imprecision and uncertainty need to be taken into account in reliability analysis. In this paper, evidential networks (EN) are employed to quantify and propagate the aforementioned uncertainty and imprecision in fault tree analysis. The detailed processes for converting some fault tree (FT) logic gates to EN are described. The figures of the logic gates and the converted equivalent EN, together with the associated truth tables and the conditional belief mass tables, are also presented in this work. A new epistemic importance measure is proposed to describe the effect of the degree of ignorance about an event. The fault tree of an aircraft engine damaged by oil filter plugs is presented to demonstrate the proposed method.
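A full evidential network propagates belief and plausibility masses through conditional mass tables; as a simplified stand-in, the sketch below (assuming independent basic events, each described only by an interval of occurrence probability) shows how imprecision can be carried through AND and OR gates.

```python
from math import prod

def and_gate(intervals):
    """Bounds on an AND gate's output probability for independent events,
    each given as an interval (lo, hi) of occurrence probability."""
    return (prod(lo for lo, _ in intervals), prod(hi for _, hi in intervals))

def or_gate(intervals):
    """Bounds on an OR gate's output probability for independent events."""
    lo = 1.0 - prod(1.0 - l for l, _ in intervals)
    hi = 1.0 - prod(1.0 - h for _, h in intervals)
    return (lo, hi)

# Two basic events with imprecise probabilities (hypothetical values):
e1, e2 = (0.01, 0.03), (0.02, 0.05)
print(and_gate([e1, e2]))   # (0.0002, 0.0015)
print(or_gate([e1, e2]))    # (~0.0298, ~0.0785)
```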
Object-oriented fault tree models applied to system diagnosis
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.
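As a rough sketch of what an object-oriented fault tree representation can look like (the class names and the example tree are invented here, not taken from the paper), each event object stores its structural links and its observed status, so diagnosis-time queries touch the objects directly rather than scanning a rule base.

```python
class Event:
    """Fault-tree node storing structure plus per-event bookkeeping."""
    def __init__(self, name):
        self.name = name
        self.parents = []
        self.status = None              # True = failed, False = ok, None = unknown

class BasicEvent(Event):
    def failed(self):
        return self.status is True

class Gate(Event):
    def __init__(self, name, kind, children):
        super().__init__(name)
        self.kind = kind                # 'AND' or 'OR'
        self.children = children
        for child in children:
            child.parents.append(self)

    def failed(self):
        results = [child.failed() for child in self.children]
        return all(results) if self.kind == 'AND' else any(results)

# top = OR(pump fails, AND(valve A fails, valve B fails))
pump, va, vb = BasicEvent("pump"), BasicEvent("valve_a"), BasicEvent("valve_b")
top = Gate("loss_of_flow", "OR", [pump, Gate("both_valves", "AND", [va, vb])])
va.status, vb.status = True, True
print(top.failed())                     # True: the redundant valve pair has failed
```

Updating the knowledge base then amounts to creating, removing, or re-linking objects, which is what makes such a representation attractive in a changing diagnosis environment.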
Probabilistic fault tree analysis of a radiation treatment system.
Ekaette, Edidiong; Lee, Robert C; Cooke, David L; Iftody, Sandra; Craighead, Peter
2007-12-01
Inappropriate administration of radiation for cancer treatment can result in severe consequences such as premature death or appreciably impaired quality of life. There has been little study of vulnerable treatment process components and their contribution to the risk of radiation treatment (RT). In this article, we describe the application of probabilistic fault tree methods to assess the probability of radiation misadministration to patients at a large cancer treatment center. We conducted a systematic analysis of the RT process that identified four process domains: Assessment, Preparation, Treatment, and Follow-up. For the Preparation domain, we analyzed possible incident scenarios via fault trees. For each task, we also identified existing quality control measures. To populate the fault trees we used subjective probabilities from experts and compared results with incident report data. Both the fault tree and the incident report analysis revealed simulation tasks to be most prone to incidents, and the treatment prescription task to be least prone to incidents. The probability of a Preparation domain incident was estimated to be in the range of 0.1-0.7% based on incident reports, which is comparable to the mean value of 0.4% from the fault tree analysis using probabilities from the expert elicitation exercise. In conclusion, an analysis of part of the RT system using a fault tree populated with subjective probabilities from experts was useful in identifying vulnerable components of the system, and provided quantitative data for risk management.
Reconfigurable tree architectures using subtree oriented fault tolerance
NASA Technical Reports Server (NTRS)
Lowrie, Matthew B.
1987-01-01
An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures for both single chip and multichip tree implementations, while reducing redundancy in terms of both spare processors and links. VLSI layout is O(n) for binary trees and is directly extensible to N-ary trees and fault tolerance through performance degradation.
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. Authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
Secure Embedded System Design Methodologies for Military Cryptographic Systems
2016-03-31
Fault-Tree Analysis (FTA); Built-In Self-Test (BIST) Introduction Secure access-control systems restrict operations to authorized users via methods...failures in the individual software/processor elements, the question of exactly how unlikely is difficult to answer. Fault-Tree Analysis (FTA) has a...Collins of Sandia National Laboratories for years of sharing his extensive knowledge of Fail-Safe Design Assurance and Fault-Tree Analysis
Rymer, M.J.
2000-01-01
The Coachella Valley area was strongly shaken by the 1992 Joshua Tree (23 April) and Landers (28 June) earthquakes, and both events caused triggered slip on active faults within the area. Triggered slip associated with the Joshua Tree earthquake was on a newly recognized fault, the East Wide Canyon fault, near the southwestern edge of the Little San Bernardino Mountains. Slip associated with the Landers earthquake formed along the San Andreas fault in the southeastern Coachella Valley. Surface fractures formed along the East Wide Canyon fault in association with the Joshua Tree earthquake. The fractures extended discontinuously over a 1.5-km stretch of the fault, near its southern end. Sense of slip was consistently right-oblique, west side down, similar to the long-term style of faulting. Measured offset values were small, with right-lateral and vertical components of slip ranging from 1 to 6 mm and 1 to 4 mm, respectively. This is the first documented historic slip on the East Wide Canyon fault, which was first mapped only months before the Joshua Tree earthquake. Surface slip associated with the Joshua Tree earthquake most likely developed as triggered slip given its 5 km distance from the Joshua Tree epicenter and aftershocks. As revealed in a trench investigation, slip formed in an area with only a thin (<3 m thick) veneer of alluvium in contrast to earlier documented triggered slip events in this region, all in the deep basins of the Salton Trough. A paleoseismic trench study in an area of 1992 surface slip revealed evidence of two and possibly three surface faulting events on the East Wide Canyon fault during the late Quaternary, probably latest Pleistocene (first event) and mid- to late Holocene (second two events). About two months after the Joshua Tree earthquake, the Landers earthquake then triggered slip on many faults, including the San Andreas fault in the southeastern Coachella Valley. Surface fractures associated with this event formed discontinuous breaks over a 54-km-long stretch of the fault, from the Indio Hills southeastward to Durmid Hill. Sense of slip was right-lateral; only locally was there a minor (~1 mm) vertical component of slip. Measured dextral displacement values ranged from 1 to 20 mm, with the largest amounts found in the Mecca Hills where large slip values have been measured following past triggered-slip events.
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Bolster, Diogo; Sanchez-Vila, Xavier; Nowak, Wolfgang
2011-05-01
Assessing health risk in hydrological systems is an interdisciplinary field. It relies on the expertise in the fields of hydrology and public health and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties and variabilities present in hydrological, physiological, and human behavioral parameters. Despite significant theoretical advancements in stochastic hydrology, there is still a dire need to further propagate these concepts to practical problems and to society in general. Following a recent line of work, we use fault trees to address the task of probabilistic risk analysis and to support related decision and management problems. Fault trees allow us to decompose the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural divide and conquer approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance, and stage of analysis. Three differences are highlighted in this paper when compared to previous works: (1) The fault tree proposed here accounts for the uncertainty in both hydrological and health components, (2) system failure within the fault tree is defined in terms of risk being above a threshold value, whereas previous studies that used fault trees used auxiliary events such as exceedance of critical concentration levels, and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
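Defining system failure as "risk above a threshold" lends itself to a simple Monte Carlo treatment of one fault tree module. The sketch below is only illustrative: the risk equation, the probability distributions, and the threshold are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hydrological and health modules, each with its own (assumed) uncertainty model:
conc = rng.lognormal(mean=-2.0, sigma=1.0, size=n)          # concentration at the well [mg/L]
intake = rng.normal(2.0, 0.5, size=n).clip(min=0.0)         # daily water intake [L/day]
body_weight = rng.normal(70.0, 10.0, size=n)                # [kg]
slope_factor = rng.lognormal(mean=-4.0, sigma=0.5, size=n)  # dose-response slope

risk = conc * intake / body_weight * slope_factor            # simplified risk metric
threshold = 1e-4
print("P(risk > threshold) ~", (risk > threshold).mean())
```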
Renjith, V R; Madhu, G; Nayagam, V Lakshmana Gomathi; Bhasi, A B
2010-11-15
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of these hazards related to chemical industries. Fault tree analysis (FTA) is an established technique in hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from storage and filling facility of chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation. Copyright © 2010 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.
Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua
2015-01-01
The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.
Planning effectiveness may grow on fault trees.
Chow, C W; Haddad, K; Mannino, B
1991-10-01
The first step of a strategic planning process--identifying and analyzing threats and opportunities--requires subjective judgments. By using an analytical tool known as a fault tree, healthcare administrators can reduce the unreliability of subjective decision making by creating a logical structure for problem solving and decision making. A case study of 11 healthcare administrators showed that an analysis technique called prospective hindsight can add to a fault tree's ability to improve a strategic planning process.
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Fault Tree Analysis (FTA) can be used for technology transfer when the relevant problem (called the 'top event' in FTA) is solved in a technology centre and the results are diffused to interested parties (usually Small and Medium Enterprises - SMEs) that do not have the proper equipment and the required know-how to solve the problem on their own. Nevertheless, there is a significant drawback in this procedure: the information usually provided by the SMEs to the technology centre, about production conditions and corresponding quality characteristics of the product, and (sometimes) the relevant expertise in the Knowledge Base of this centre may be inadequate to form a complete fault tree. Since such cases are quite frequent in practice, we have developed a methodology for transforming an incomplete fault tree into an Ishikawa diagram, which is more flexible and less strict in establishing causal chains, because it uses a surface phenomenological level with a limited number of categories of faults. On the other hand, such an Ishikawa diagram can be extended to simulate a fault tree as relevant knowledge increases. An implementation of this transformation, referring to anodization of aluminium, is presented.
A systematic risk management approach employed on the CloudSat project
NASA Technical Reports Server (NTRS)
Basilio, R. R.; Plourde, K. S.; Lam, T.
2000-01-01
The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.
Fault Tree Analysis: A Bibliography
NASA Technical Reports Server (NTRS)
2000-01-01
Fault tree analysis is a top-down approach to the identification of process hazards. It is one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis and risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarrack, A.G.
The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support systems failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the "Facility Commitments" section. The purpose of the "Assumptions" section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).
Graphical fault tree analysis for fatal falls in the construction industry.
Chi, Chia-Fen; Lin, Syuan-Zih; Dewi, Ratna Sari
2014-11-01
The current study applied a fault tree analysis to represent the causal relationships among events and causes that contributed to fatal falls in the construction industry. Four hundred and eleven work-related fatalities in the Taiwanese construction industry were analyzed in terms of age, gender, experience, falling site, falling height, company size, and the causes for each fatality. Given that most fatal accidents involve multiple events, the current study coded up to a maximum of three causes for each fall fatality. After the Boolean algebra and minimal cut set analyses, accident causes associated with each falling site can be presented as a fault tree to provide an overview of the basic causes, which could trigger fall fatalities in the construction industry. Graphical icons were designed for each falling site along with the associated accident causes to illustrate the fault tree in a graphical manner. A graphical fault tree can improve inter-disciplinary discussion of risk management and the communication of accident causation to first line supervisors. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fault Tree Analysis for an Inspection Robot in a Nuclear Power Plant
NASA Astrophysics Data System (ADS)
Ferguson, Thomas A.; Lu, Lixuan
2017-09-01
The life extension of current nuclear reactors has led to an increasing demand for inspection and maintenance of critical reactor components that are too expensive to replace. To reduce the exposure dosage to workers, robotics has become an attractive alternative as a preventative safety tool in nuclear power plants. It is crucial to understand the reliability of these robots in order to increase the veracity of and confidence in their results. This study presents a Fault Tree (FT) analysis of a coolant outlet pipe snake-arm inspection robot in a nuclear power plant. Fault trees were constructed for a qualitative analysis to determine the reliability of the robot. Insight on the applicability of fault tree methods for inspection robotics in the nuclear industry is gained through this investigation.
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating time to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computer-intensive calculations. A computer program has been developed to implement the PFTA method.
Locating hardware faults in a data communications network of a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-01-12
Location of hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running the same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
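The parent/child test-tree comparison described above can be sketched as a recursive procedure. The function below is a loose reading of that idea, not the patented implementation: run_test_suite is a hypothetical callback that reports whether the suite passes on the subtree rooted at a node.

```python
def locate_defective_links(tree, parent, run_test_suite):
    """tree: dict mapping a node to its child compute nodes.
    Flags links out of `parent` when the test suite fails on the parent test tree
    but succeeds on every child test tree, then recurses into the children."""
    suspects = []
    children = tree.get(parent, [])
    if not children:
        return suspects
    parent_ok = run_test_suite(parent)
    children_ok = {child: run_test_suite(child) for child in children}
    if not parent_ok and all(children_ok.values()):
        # Failure only shows up when the parent and its children run together,
        # so suspect the parent's links to its children.
        suspects.extend((parent, child) for child in children)
    for child in children:
        suspects.extend(locate_defective_links(tree, child, run_test_suite))
    return suspects
```

In a fuller treatment, further tests would narrow the suspicion down to a single parent-to-child link; this sketch simply flags the candidate links.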
Khan, F I; Abbasi, S A
2000-07-10
Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has been a powerful technique used traditionally in identifying hazards in nuclear installations and power industries. As the systematic articulation of the fault tree is associated with assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named as AS-II), which makes the application of FTA simpler, quicker, and cheaper; thus opening up the possibility of its wider use in risk assessment in chemical process industries. Based on the methodology we have developed a computer-automated tool. The details are presented in this paper.
DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarity between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimal cut set for a node is any cut set with the property that, if any one of the failures in the set were removed, the occurrence of the other failures in the set would not cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failures. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal root node. A subtree is created for each of the inputs to the digraph terminal node and the roots of those subtrees are added as children of the top node of the fault tree. Every node in the digraph upstream of the terminal node will be visited and converted. During the conversion process, the algorithm keeps track of the path from the digraph terminal node to the current digraph node. If a node is visited twice, then the program has found a cycle in the digraph. This cycle is broken by finding the minimal cut sets of the twice-visited digraph node and forming those cut sets into subtrees. Another implementation of the algorithm resolves loops by building a subtree based on the digraph minimal cut set calculation. It does not reduce the subtree to minimal cut set form. This second implementation produces larger fault trees, but runs much faster than the version using minimal cut sets since it does not spend time reducing the subtrees to minimal cut sets. The fault trees produced by DG TO FT will contain OR gates, AND gates, Basic Event nodes, and NOP gates. The results of a translation can be output as a text object description of the fault tree similar to the text digraph input format. The translator can also output a LISP language formatted file and an augmented LISP file which can be used by the FTDS (ARC-13019) diagnosis system, available from COSMIC, which performs diagnostic reasoning using the fault tree as a knowledge base. DG TO FT is written in the C language to be machine independent.
It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. DG TO FT is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is provided on the distribution medium. DG TO FT was developed in 1992. Sun, and SunOS are trademarks of Sun Microsystems, Inc. DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc. System 7 is a trademark of Apple Computers Inc. Microsoft Word is a trademark of Microsoft Corporation.
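A minimal sketch of the translation idea follows; it is not the DG TO FT code. A digraph is assumed here to be a dict mapping each node to its input (upstream) nodes, and cycles are broken simply by terminating the branch at a revisited node, whereas the actual tool resolves loops via minimal cut set calculations.

```python
def digraph_to_fault_tree(digraph, node, path=frozenset()):
    """Return a nested fault tree: ('BASIC', name) or ('OR', [children]).
    Each digraph node becomes an OR of its own basic failure and the
    subtrees built from its inputs."""
    basic = ('BASIC', node)
    if node in path:                      # cycle detected: break it here
        return basic
    inputs = digraph.get(node, [])
    if not inputs:
        return basic
    subtrees = [digraph_to_fault_tree(digraph, i, path | {node}) for i in inputs]
    return ('OR', [basic] + subtrees)

# Tiny digraph with a feedback loop between B and C (hypothetical):
dg = {"TOP": ["A", "B"], "B": ["C"], "C": ["B"]}
print(digraph_to_fault_tree(dg, "TOP"))
```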
Fault diagnosis of power transformer based on fault-tree analysis (FTA)
NASA Astrophysics Data System (ADS)
Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu
2017-05-01
Power transformers are important equipment in power plants and substations; as hubs of power transmission and distribution, their performance directly affects the quality, reliability and stability of the power system. This paper first classifies power transformer faults into five categories according to fault type, and then divides power transformer faults into three stages along the time dimension. Routine dissolved gas analysis (DGA) and infrared diagnostic criteria are used to establish the running state of the power transformer. Finally, according to the needs of power transformer fault diagnosis, a power transformer fault tree is constructed by stepwise refinement from the general to the specific.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection between nodes in the graphs, including cycles or loops. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860), available from COSMIC, are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in the C language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc.
DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
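The recursive top-down combination of child cut sets described above can be sketched in a few lines. This is a simplified stand-in for CUTSETS, assuming the fault tree is given as nested tuples and ignoring the maximum-size and digraph options.

```python
from itertools import product

def minimize(sets):
    """Drop any cut set that is a superset of another (absorption)."""
    minimal = []
    for s in sorted(set(sets), key=len):
        if not any(m <= s for m in minimal):
            minimal.append(s)
    return minimal

def cut_sets(node):
    """node: ('BASIC', name) | ('OR', [children]) | ('AND', [children]).
    Returns the minimal cut sets as frozensets of basic-event names."""
    kind = node[0]
    if kind == 'BASIC':
        return [frozenset([node[1]])]
    child_sets = [cut_sets(child) for child in node[1]]
    if kind == 'OR':                      # union of the children's cut sets
        combined = [cs for group in child_sets for cs in group]
    else:                                 # AND: one cut set from each child
        combined = [frozenset().union(*combo) for combo in product(*child_sets)]
    return minimize(combined)

tree = ('OR', [('BASIC', 'A'),
               ('AND', [('BASIC', 'B'), ('OR', [('BASIC', 'A'), ('BASIC', 'C')])])])
print(cut_sets(tree))                     # minimal cut sets: {A} and {B, C}
```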
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the most optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
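Ranking basic events by importance, as the IMPORTANCE code does, can be approximated from the minimal cut sets alone. The sketch below computes a Fussell-Vesely-style importance under the rare-event approximation; the cut sets and probabilities are invented for illustration and this is not the IMPORTANCE code itself.

```python
def fussell_vesely(cut_sets, prob):
    """Fussell-Vesely importance of each basic event (rare-event approximation):
    the fraction of the top-event probability contributed by cut sets
    containing that event."""
    def p(cs):
        result = 1.0
        for event in cs:
            result *= prob[event]
        return result
    p_top = sum(p(cs) for cs in cut_sets)
    events = {e for cs in cut_sets for e in cs}
    return {e: sum(p(cs) for cs in cut_sets if e in cs) / p_top for e in events}

cut_sets = [frozenset({"A"}), frozenset({"B", "C"})]
print(fussell_vesely(cut_sets, {"A": 1e-3, "B": 1e-2, "C": 5e-2}))
```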
System Analysis by Mapping a Fault-tree into a Bayesian-network
NASA Astrophysics Data System (ADS)
Sheng, B.; Deng, C.; Wang, Y. H.; Tang, L. H.
2018-05-01
In view of the limitations of fault tree analysis in reliability assessment, Bayesian Network (BN) has been studied as an alternative technology. After a brief introduction to the method for mapping a Fault Tree (FT) into an equivalent BN, equations used to calculate the structure importance degree, the probability importance degree and the critical importance degree are presented. Furthermore, the correctness of these equations is proved mathematically. Combining with an aircraft landing gear’s FT, an equivalent BN is developed and analysed. The results show that richer and more accurate information have been achieved through the BN method than the FT, which demonstrates that the BN is a superior technique in both reliability assessment and fault diagnosis.
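The core of the FT-to-BN mapping is that each gate becomes a node with a deterministic conditional probability table (CPT), while basic events become root nodes whose priors equal their failure probabilities. A minimal sketch of building such CPTs (independent of any particular BN library, and not the authors' code) follows.

```python
from itertools import product

def gate_cpt(kind, n_parents):
    """Deterministic CPT for a gate node: maps each combination of parent
    states (0 = ok, 1 = failed) to P(gate node = failed)."""
    combine = all if kind == 'AND' else any       # otherwise treat as an OR gate
    return {states: 1.0 if combine(states) else 0.0
            for states in product((0, 1), repeat=n_parents)}

print(gate_cpt('OR', 2))
# {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 1.0}
```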
A diagnosis system using object-oriented fault tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common LISP using Flavors.
Risk assessment techniques with applicability in marine engineering
NASA Astrophysics Data System (ADS)
Rudenko, E.; Panaitescu, F. V.; Panaitescu, M.
2015-11-01
Nowadays risk management is a carefully planned process. The task of risk management is organically woven into the general problem of increasing the efficiency of a business. A passive attitude to risk and mere awareness of its existence are being replaced by active management techniques. Risk assessment is one of the most important stages of risk management, since to manage risk it is necessary first to analyze and evaluate it. There are many definitions of this notion, but in the general case risk assessment refers to the systematic process of identifying the factors and types of risk and their quantitative assessment, i.e. the risk analysis methodology combines mutually complementary quantitative and qualitative approaches. Purpose of the work: In this paper we consider Fault Tree Analysis (FTA) as a risk assessment technique. The objectives are: to understand the purpose of FTA, understand and apply the rules of Boolean algebra, analyse a simple system using FTA, and review the advantages and disadvantages of FTA. Research and methodology: The main purpose is to help identify potential causes of system failures before the failures actually occur, and to evaluate the probability of the Top event. The steps of this analysis are: examination of the system from the top down, the use of symbols to represent events, the use of mathematical tools for critical areas, and the use of fault tree logic diagrams to identify the cause of the Top event. Results: At the end of the study the following are obtained: critical areas, fault tree logic diagrams and the probability of the Top event. These results can be used for risk assessment analyses.
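For a simple system the Top event probability follows directly from the gate logic and the usual probability rules, assuming independent basic events. The two-gate example below is hypothetical and only illustrates the arithmetic referred to in the abstract.

```python
# Top = G1 OR G2, where G1 = A AND B (redundant pair) and G2 = C OR D.
p_a, p_b, p_c, p_d = 0.01, 0.02, 0.005, 0.003      # assumed basic-event probabilities

p_g1 = p_a * p_b                                   # AND gate: product rule
p_g2 = 1 - (1 - p_c) * (1 - p_d)                   # OR gate: complement rule
p_top = 1 - (1 - p_g1) * (1 - p_g2)                # top OR gate
print(f"P(Top event) = {p_top:.6f}")
```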
Fault tree applications within the safety program of Idaho Nuclear Corporation
NASA Technical Reports Server (NTRS)
Vesely, W. E.
1971-01-01
Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure or accident causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.
Time-dependent seismic hazard analysis for the Greater Tehran and surrounding areas
NASA Astrophysics Data System (ADS)
Jalalalhosseini, Seyed Mostafa; Zafarani, Hamid; Zare, Mehdi
2018-01-01
This study presents a time-dependent approach to seismic hazard in Tehran and surrounding areas. Hazard is evaluated by combining background seismic activity with larger earthquakes that may emanate from fault segments. Using available historical and paleoseismological data or empirical relations, the recurrence time and maximum magnitude of characteristic earthquakes for the major faults have been explored. The Brownian passage time (BPT) distribution has been used to calculate equivalent fictitious seismicity rates for major faults in the region. To include ground motion uncertainty, a logic tree and five ground motion prediction equations have been selected based on their applicability in the region. Finally, hazard maps have been presented.
Fault Tree Analysis as a Planning and Management Tool: A Case Study
ERIC Educational Resources Information Center
Witkin, Belle Ruth
1977-01-01
Fault Tree Analysis is an operations research technique used to analyse the most probable modes of failure in a system, so that the system can be redesigned or monitored more closely to increase its likelihood of success. (Author)
NASA Astrophysics Data System (ADS)
Rodak, C. M.; McHugh, R.; Wei, X.
2016-12-01
The development and combination of horizontal drilling and hydraulic fracturing has unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating potential risks posed by directional hydraulic fracturing activities. These studies, balanced with potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantification of the overall risk posed by hydraulic fracturing at the system level is rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is approached with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity- and quality-based impacts and subdivided based on the stage of the hydraulic fracturing process in which the data are relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The structure of the fault tree described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic. This project shows the utility of fault tree analysis for the visual communication of the potential risks of hydraulic fracturing activities on groundwater resources.
Fault Tree Analysis: An Emerging Methodology for Instructional Science.
ERIC Educational Resources Information Center
Wood, R. Kent; And Others
1979-01-01
Describes Fault Tree Analysis, a tool for systems analysis which attempts to identify possible modes of failure in systems to increase the probability of success. The article defines the technique and presents the steps of FTA construction, focusing on its application to education. (RAO)
Interim reliability evaluation program, Browns Ferry 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1981-01-01
Probabilistic risk analysis techniques, i.e., event tree and fault tree analysis, were utilized to provide a risk assessment of the Browns Ferry Nuclear Plant Unit 1. Browns Ferry 1 is a General Electric boiling water reactor of the BWR 4 product line with a Mark 1 (drywell and torus) containment. Within the guidelines of the IREP Procedure and Schedule Guide, dominant accident sequences that contribute to public health and safety risks were identified and grouped according to release categories.
Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that would minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements.
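One way to see how a topology-derived dependability model behaves is to sample permanent device faults and check whether the sink is still reachable: communication from a source fails exactly when every path to the sink is cut, which is what the underlying fault tree encodes. The topology and failure probabilities below are invented, and Monte Carlo sampling stands in for an analytic fault tree solution; this is not the authors' methodology.

```python
import random

# Hypothetical WSN topology: node -> neighbours it can forward packets to.
TOPOLOGY = {"s1": ["r1", "r2"], "r1": ["sink"], "r2": ["r3"], "r3": ["sink"]}
FAIL_PROB = {"s1": 0.02, "r1": 0.05, "r2": 0.05, "r3": 0.05}   # permanent-fault probabilities

def reaches_sink(node, alive, visited=frozenset()):
    if node == "sink":
        return True
    if node in visited or not alive.get(node, True):
        return False
    return any(reaches_sink(nxt, alive, visited | {node})
               for nxt in TOPOLOGY.get(node, []))

def unreliability(source, trials=100_000):
    """Monte Carlo estimate of P(source cannot deliver data to the sink)."""
    failures = sum(
        not reaches_sink(source, {n: random.random() > p for n, p in FAIL_PROB.items()})
        for _ in range(trials))
    return failures / trials

print(unreliability("s1"))
```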
Program listing for fault tree analysis of JPL technical report 32-1542
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
The computer program listing for the MAIN program and those subroutines unique to the fault tree analysis is described. Some subroutines are also used for analyzing the reliability block diagram. The program is written in FORTRAN V and runs on a UNIVAC 1108.
FAULT TREE ANALYSIS FOR EXPOSURE TO REFRIGERANTS USED FOR AUTOMOTIVE AIR CONDITIONING IN THE U.S.
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servic...
A Fault Tree Approach to Analysis of Organizational Communication Systems.
ERIC Educational Resources Information Center
Witkin, Belle Ruth; Stephens, Kent G.
Fault Tree Analysis (FTA) is a method of examing communication in an organization by focusing on: (1) the complex interrelationships in human systems, particularly in communication systems; (2) interactions across subsystems and system boundaries; and (3) the need to select and "prioritize" channels which will eliminate noise in the…
Cost-effectiveness analysis of risk-reduction measures to reach water safety targets.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof; Pettersson, Thomas J R
2011-01-01
Identifying the most suitable risk-reduction measures in drinking water systems requires a thorough analysis of possible alternatives. In addition to the effects on the risk level, the economic aspects of the risk-reduction alternatives are commonly considered important. Drinking water supplies are complex systems and, to avoid sub-optimisation of risk-reduction measures, the entire system from source to tap needs to be considered. There is a lack of methods for quantifying water supply risk reduction in an economic context for entire drinking water systems. The aim of this paper is to present a novel approach for risk assessment in combination with economic analysis to evaluate risk-reduction measures based on a source-to-tap approach. The approach combines a probabilistic and dynamic fault tree method with cost-effectiveness analysis (CEA). The developed approach comprises the following main parts: (1) quantification of the risk reduction of alternatives using a probabilistic fault tree model of the entire system; (2) combination of the modelling results with CEA; and (3) evaluation of the alternatives with respect to the risk reduction, the probability of not reaching water safety targets, and cost-effectiveness. The fault tree method and CEA enable comparison of risk-reduction measures in the same quantitative unit and consider costs and uncertainties. The approach provides a structured and thorough analysis of risk-reduction measures that facilitates transparency and long-term planning of drinking water systems in order to avoid sub-optimisation of available resources for risk reduction. Copyright © 2010 Elsevier Ltd. All rights reserved.
Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-01-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant by using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probability of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department is 73, 64, and 95 percent, respectively, and the conveyor belt subsystem was found to be the most failure-prone subsystem. Finally, maintenance is proposed as a method to control and prevent the occurrence of failures. PMID:26779433
Langenheim, Victoria E.; Rymer, Michael J.; Catchings, Rufus D.; Goldman, Mark R.; Watt, Janet T.; Powell, Robert E.; Matti, Jonathan C.
2016-03-02
We describe high-resolution gravity and seismic refraction surveys acquired to determine the thickness of valley-fill deposits and to delineate geologic structures that might influence groundwater flow beneath the Smoke Tree Wash area in Joshua Tree National Park. These surveys identified a sedimentary basin that is fault-controlled. A profile across the Smoke Tree Wash fault zone reveals low gravity values and seismic velocities that coincide with a mapped strand of the Smoke Tree Wash fault. Modeling of the gravity data reveals a basin about 2–2.5 km long and 1 km wide that is roughly centered on this mapped strand, and bounded by inferred faults. According to the gravity model the deepest part of the basin is about 270 m, but this area coincides with low velocities that are not characteristic of typical basement complex rocks. Most likely, the density contrast assumed in the inversion is too high or the uncharacteristically low velocities represent highly fractured or weathered basement rocks, or both. A longer seismic profile extending onto basement outcrops would help differentiate which scenario is more accurate. The seismic velocities also determine the depth to water table along the profile to be about 40–60 m, consistent with water levels measured in water wells near the northern end of the profile.
A Fault Tree Approach to Needs Assessment -- An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
A "failsafe" technology is presented based on a new unified theory of needs assessment. Basically the paper discusses fault tree analysis as a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur and then suggesting high priority avoidance strategies for those…
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu
2016-05-01
Recently, China has frequently experienced large-scale, severe, and persistent haze pollution due to surging urbanization and industrialization and rapid growth in the number of motor vehicles and in energy consumption. Vehicle emissions due to the consumption of large amounts of fossil fuels are no doubt a critical factor in the haze pollution. This work focuses on the causation mechanism of haze pollution related to vehicle emissions for Guangzhou city by employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are discussed and identified using this deductive FTA method. The qualitative and quantitative assessments of the fault tree system are carried out based on the structure, probability, and critical importance degree analysis of the risk factors. The study may provide a new, simple, and effective tool/strategy for the causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sanchez-Vila, X.; de Barros, F.; Bolster, D.; Nowak, W.
2010-12-01
Assessing the potential risk of hydro(geo)logical supply systems to the human population is an interdisciplinary field. It relies on expertise in fields as distant as hydrogeology, medicine, and anthropology, and needs powerful translation concepts to provide decision support and inform policy making. Reliable health risk estimates need to account for the uncertainties in hydrological, physiological, and human behavioral parameters. We propose the use of fault trees to address the task of probabilistic risk analysis (PRA) and to support related management decisions. Fault trees allow decomposing the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural “Divide and Conquer” approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance, and stage of analysis. The separation into modules allows for a truly inter- and multi-disciplinary approach. This presentation highlights the three novel features of our work: (1) we define failure in terms of risk being above a threshold value, whereas previous studies used auxiliary events such as exceedance of critical concentration levels; (2) we plot an integrated fault tree that handles uncertainty in both hydrological and health components in a unified way; and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
A fuzzy decision tree for fault classification.
Zio, Enrico; Baraldi, Piero; Popescu, Irina C
2008-02-01
In plant accident management, the control room operators are required to identify the causes of the accident based on the different patterns of evolution developed by the monitored process variables. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
FTC - THE FAULT-TREE COMPILER (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
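The gate arithmetic described above can be illustrated with a small, self-contained sketch. The following Python snippet is not FTC or its input language; it simply evaluates the top-event probability of a hypothetical fault tree built from the gate types FTC supports (AND, OR, EXCLUSIVE OR, INVERT, M OF N), assuming the basic events are independent and no event appears under more than one gate.

```python
from itertools import combinations

def p_and(ps):
    """AND gate: all independent input events occur."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(ps):
    """OR gate: at least one independent input event occurs."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def p_xor(p1, p2):
    """EXCLUSIVE OR gate: exactly one of two independent input events occurs."""
    return p1 * (1.0 - p2) + (1.0 - p1) * p2

def p_invert(p):
    """INVERT gate: complement of the input event."""
    return 1.0 - p

def p_m_of_n(ps, m):
    """M OF N gate: at least m of the n independent input events occur."""
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for chosen in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= ps[i] if i in chosen else (1.0 - ps[i])
            total += term
    return total

# Hypothetical basic-event probabilities (no event shared between branches)
valve, pump_a, pump_b = 1.0e-3, 5.0e-4, 5.0e-4
sensor_1, sensor_2, sensor_3 = 2.0e-3, 2.0e-3, 2.0e-3

# TOP = valve OR (pump_a AND pump_b) OR (at least 2 of the 3 sensors)
top = p_or([valve,
            p_and([pump_a, pump_b]),
            p_m_of_n([sensor_1, sensor_2, sensor_3], 2)])
print(f"top-event probability ~ {top:.3e}")
```

Tools such as FTC additionally bound the numerical error and handle repeated events correctly; this sketch does neither.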
FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…
Risk management of PPP project in the preparation stage based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Xing, Yuanzhi; Guan, Qiuling
2017-03-01
The risk management of PPP (Public Private Partnership) projects can improve the level of risk control between government departments and private investors, so as to support better decisions, reduce investment losses, and achieve mutual benefit. Therefore, this paper takes the risks of the PPP project preparation stage as the research object and identifies and confirms four types of risk. Fault tree analysis (FTA) is used to evaluate the risk factors belonging to different parts and to quantify their degree of influence on the basis of the risk identification. In addition, the importance order of the risk factors is determined by calculating the unit structure importance for the PPP project preparation stage. The results show that the accuracy of government decision-making, the rationality of private investors' fund allocation, and the instability of market returns are the main factors generating shared risk in the project.
The engine fuel system fault analysis
NASA Astrophysics Data System (ADS)
Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei
2017-05-01
To improve the reliability of the engine fuel system, the typical fault factors of the engine fuel system were analyzed from the structural and functional points of view. The fault characteristics were obtained by building the fuel system fault tree. By applying the failure mode and effects analysis (FMEA) method, several attributes of the key component, the fuel regulator, were obtained, including its fault modes, fault causes, and fault effects. This lays the foundation for the subsequent development of a fault diagnosis system.
Fault tree analysis: NiH2 aerospace cells for LEO mission
NASA Technical Reports Server (NTRS)
Klein, Glenn C.; Rash, Donald E., Jr.
1992-01-01
The Fault Tree Analysis (FTA) is one of several reliability analyses or assessments applied to battery cells to be utilized in typical Electric Power Subsystems for spacecraft in low Earth orbit missions. FTA is generally the process of reviewing and analytically examining a system or equipment in such a way as to emphasize the lower level fault occurrences which directly or indirectly contribute to the major fault or top level event. This qualitative FTA addresses the potential of occurrence for five specific top level events: hydrogen leakage through either discrete leakage paths or through pressure vessel rupture; and four distinct modes of performance degradation - high charge voltage, suppressed discharge voltage, loss of capacity, and high pressure.
Decision tree and PCA-based fault diagnosis of rotating machinery
NASA Astrophysics Data System (ADS)
Sun, Weixiang; Chen, Jin; Li, Jiaqing
2007-04-01
After analysing the flaws of conventional fault diagnosis methods, data mining technology is introduced to the fault diagnosis field, and a new method based on the C4.5 decision tree and principal component analysis (PCA) is proposed. In this method, PCA is used to reduce the number of features after data collection, preprocessing, and feature extraction. Then, C4.5 is trained on the samples to generate a decision tree model containing diagnosis knowledge. Finally, the tree model is used to perform the diagnosis. To validate the proposed method, six kinds of running states (normal or without any defect, unbalance, rotor radial rub, oil whirl, shaft crack, and a simultaneous state of unbalance and radial rub) are simulated on a Bently Rotor Kit RK4 to compare the C4.5 and PCA-based method with a back-propagation neural network (BPNN). The results show that the C4.5 and PCA-based diagnosis method has higher accuracy and needs less training time than the BPNN.
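For readers who want to see the shape of such a pipeline, here is a minimal sketch using scikit-learn. It is not the authors' implementation: scikit-learn's DecisionTreeClassifier is CART-based rather than C4.5 (entropy is used as the split criterion to approximate C4.5's information gain), and the feature matrix below is synthetic random data standing in for vibration features, so the reported accuracy is meaningful only as a demonstration of the workflow.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted rotor-vibration features:
# 200 samples, 12 features, 3 hypothetical machine states.
X = rng.normal(size=(200, 12))
y = rng.integers(0, 3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Reduce the feature space with PCA, then learn a decision tree on the scores.
model = make_pipeline(PCA(n_components=5),
                      DecisionTreeClassifier(criterion="entropy", max_depth=5))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```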
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)
1995-01-01
A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.
Fault tree analysis for system modeling in case of intentional EMI
NASA Astrophysics Data System (ADS)
Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.
2011-08-01
The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other hand increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically, since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is fault tree analysis (FTA). The FTA is used to determine the system failure probability and also the main contributors to its failure. In this paper the fault tree analysis is introduced and a possible application of the method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for the 10% in 50-year exceedance hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. The variability of each selected fault parameter is described by a truncated normal random variable distribution with a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of the logic tree, is used in order to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. The branches of the logic tree analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip, and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of the mean hazard results to the choice of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows the percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
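The sampling step described above can be sketched in a few lines. The snippet below is not the authors' code; it draws hypothetical fault length, width, and slip rate from truncated normal distributions (cut here at two standard deviations, an assumption) and propagates them through a placeholder magnitude-area scaling relation whose coefficients are illustrative, to show how 200 logic-tree samples yield a distribution of maximum magnitude.

```python
import numpy as np
from scipy.stats import truncnorm

def tnorm(mean, sd, nsd=2.0, size=1):
    """Truncated normal limited to +/- nsd standard deviations about the mean."""
    return truncnorm.rvs(-nsd, nsd, loc=mean, scale=sd, size=size)

n_samples = 200  # number of logic-tree samples, as in the study

# Hypothetical fault geometry and slip rate (mean, standard deviation)
length_km = tnorm(30.0, 3.0, size=n_samples)   # fault length
width_km  = tnorm(12.0, 1.5, size=n_samples)   # fault width
slip_mmyr = tnorm(1.0, 0.3, size=n_samples)    # slip rate

# Placeholder magnitude-area scaling: M = c0 + c1 * log10(rupture area)
c0, c1 = 4.0, 1.0
m_max = c0 + c1 * np.log10(length_km * width_km)

print("Mmax mean  :", m_max.mean())
print("Mmax 95% CI:", np.percentile(m_max, [2.5, 97.5]))
```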
Wang, Hetang; Li, Jia; Wang, Deming; Huang, Zonghou
2017-01-01
Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents.
An integrated approach to system design, reliability, and diagnosis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1990-01-01
The requirement for ultradependability of computer systems in future avionics and space applications necessitates a top-down, integrated systems engineering approach for design, implementation, testing, and operation. The functional analyses of hardware and software systems must be combined by models that are flexible enough to represent their interactions and behavior. The information contained in these models must be accessible throughout all phases of the system life cycle in order to maintain consistency and accuracy in design and operational decisions. One approach being taken by researchers at Ames Research Center is the creation of an object-oriented environment that integrates information about system components required in the reliability evaluation with behavioral information useful for diagnostic algorithms. Procedures have been developed at Ames that perform reliability evaluations during design and failure diagnoses during system operation. These procedures utilize information from a central source, structured as object-oriented fault trees. Fault trees were selected because they are a flexible model widely used in aerospace applications and because they give a concise, structured representation of system behavior. The utility of this integrated environment for aerospace applications in light of our experiences during its development and use is described. The techniques for reliability evaluation and failure diagnosis are discussed, and current extensions of the environment and areas requiring further development are summarized.
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
NASA Astrophysics Data System (ADS)
Koji, Yusuke; Kitamura, Yoshinobu; Kato, Yoshikiyo; Tsutsui, Yoshio; Mizoguchi, Riichiro
In conceptual design, it is important to develop functional structures that reflect the rich experience embodied in knowledge from previous design failures. In particular, if a designer learns possible abnormal behaviors from a previous design failure, he or she can add a function that prevents such abnormal behaviors and faults. To do this, it is crucial to share such knowledge about possible faulty phenomena and how to cope with them. In fact, part of such knowledge is described in FMEA (Failure Mode and Effect Analysis) sheets, in function structure models for systematic design, and in fault trees for FTA (Fault Tree Analysis).
Failure analysis of energy storage spring in automobile composite brake chamber
NASA Astrophysics Data System (ADS)
Luo, Zai; Wei, Qing; Hu, Xiaofeng
2015-02-01
This paper takes the energy storage spring of the parking brake cavity, part of the automobile composite brake chamber, as its research object. A fault tree model of the energy storage spring faults that cause parking brake failure was constructed based on the fault tree analysis method. Next, the parking brake failure model of the energy storage spring was established by analyzing the working principle of the composite brake chamber. Finally, the working load and push rod stroke data measured by a comprehensive test-bed valve were used to validate the failure model. The experimental results show that the failure model can distinguish whether the energy storage spring is faulty.
Models for evaluating the performability of degradable computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.
1982-01-01
Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.
A fast bottom-up algorithm for computing the cut sets of noncoherent fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corynen, G.C.
1987-11-01
An efficient procedure for finding the cut sets of large fault trees has been developed. Designed to address coherent or noncoherent systems, dependent events, shared or common-cause events, the method - called SHORTCUT - is based on a fast algorithm for transforming a noncoherent tree into a quasi-coherent tree (COHERE), and on a new algorithm for reducing cut sets (SUBSET). To assure sufficient clarity and precision, the procedure is discussed in the language of simple sets, which is also developed in this report. Although the new method has not yet been fully implemented on the computer, we report theoretical worst-case estimates of its computational complexity. 12 refs., 10 figs.
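To make the cut-set machinery concrete, here is a generic bottom-up sketch in Python for a small coherent fault tree. It is not the SHORTCUT/COHERE/SUBSET procedure described above (noncoherent trees and dependent events are not handled); the gate structure is hypothetical, and reduction to minimal cut sets is done by simple subset elimination.

```python
def minimize(sets):
    """Drop any cut set that contains another cut set (subset reduction)."""
    sets = sorted(set(sets), key=len)
    minimal = []
    for s in sets:
        if not any(m <= s for m in minimal):
            minimal.append(s)
    return minimal

def cut_sets(node, tree):
    """Bottom-up cut sets for a coherent fault tree.
    tree maps gate name -> (gate type, [children]); anything not in tree is a basic event."""
    if node not in tree:
        return [frozenset([node])]
    gate, children = tree[node]
    child_sets = [cut_sets(c, tree) for c in children]
    if gate == "OR":
        # OR gate: union of the children's cut sets
        out = [cs for sets in child_sets for cs in sets]
    elif gate == "AND":
        # AND gate: cross-product (merge) of the children's cut sets
        out = [frozenset()]
        for sets in child_sets:
            out = [a | b for a in out for b in sets]
    else:
        raise ValueError(f"unsupported gate type: {gate}")
    return minimize(out)

# Hypothetical tree: TOP = (A OR B) AND (A OR C)
tree = {"TOP": ("AND", ["G1", "G2"]),
        "G1": ("OR", ["A", "B"]),
        "G2": ("OR", ["A", "C"])}
print(cut_sets("TOP", tree))   # minimal cut sets: {A} and {B, C}
```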
Risk management of key issues of FPSO
NASA Astrophysics Data System (ADS)
Sun, Liping; Sun, Hai
2012-12-01
Risk analysis of key systems has become a growing topic of late because of the development of offshore structures. Equipment failures of the offloading system and fire accidents were analyzed based on the features of the floating production, storage and offloading (FPSO) unit. Fault tree analysis (FTA) and failure modes and effects analysis (FMEA) methods were examined based on information already available in the modules of Relex Reliability Studio (RRS). Given the shortage of failure cases and statistical data, equipment failures were analyzed qualitatively by establishing a fault tree and its Boolean structure function, and risk control measures were examined. Failure modes of fire accidents were classified according to the different areas of fire occurrence during the FMEA process, using the risk priority number (RPN) method to evaluate their severity rank. The qualitative FTA gave basic insight into how the failure modes of FPSO offloading arise, and the fire FMEA gave priorities and suggested processes. The research has practical importance for the security analysis problems of FPSOs.
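The risk priority number used in the FMEA step is a simple product of three ordinal scores. The sketch below is not from the paper; the failure modes and 1-10 scores are hypothetical, and it merely shows how RPN = severity x occurrence x detection ranks the modes.

```python
# Hypothetical failure modes for an offloading/fire analysis, each scored 1-10
# for severity (S), occurrence (O), and detection difficulty (D).
failure_modes = {
    "hose coupling leak":       (8, 4, 3),
    "export pump seal failure": (7, 3, 4),
    "engine-room fire":         (10, 2, 5),
}

# Risk Priority Number = S * O * D; a higher RPN means a higher priority.
ranked = sorted(((s * o * d, name) for name, (s, o, d) in failure_modes.items()),
                reverse=True)
for rpn, name in ranked:
    print(f"{name:25s} RPN = {rpn}")
```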
Electromagnetic Compatibility (EMC) in Microelectronics.
1983-02-01
Fault Tree Analysis", System Saftey Symposium, June 8-9, 1965, Seattle: The Boeing Company . 12. Fussell, J.B., "Fault Tree Analysis-Concepts and...procedure for assessing EMC in microelectronics and for applying DD, 1473 EOiTO OP I, NOV6 IS OESOL.ETE UNCLASSIFIED SECURITY CLASSIFICATION OF THIS...CRITERIA 2.1 Background 2 2.2 The Probabilistic Nature of EMC 2 2.3 The Probabilistic Approach 5 2.4 The Compatibility Factor 6 3 APPLYING PROBABILISTIC
A graphical language for reliability model generation
NASA Technical Reports Server (NTRS)
Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.
1990-01-01
A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
Experimental evaluation of the certification-trail method
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. A comprehensive attempt to assess experimentally the performance and overall value of the method is reported. The method is applied to algorithms for the following problems: Huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which an approach using certification trails allows for significantly faster overall program execution time than a basic time-redundancy approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed. In both cases, the algorithm which performs answer-validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis which enable comparison between the certification-trail method and the time-redundancy approach are presented. The analysis reveals some substantial and sometimes surprising advantages for the certification-trail method. Finally, the work our group performed on the design and implementation of fault injection testbeds for experimental analysis of the certification trail technique is discussed. This work employs two distinct methodologies: software fault injection (modification of instruction, data, and stack segments of programs on a Sun Sparcstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed at logical zero/one values). Our results indicate the viability of the certification trail technique. It is also believed that the tools developed provide a solid base for additional exploration.
Seera, Manjeevan; Lim, Chee Peng; Ishak, Dahaman; Singh, Harapajan
2012-01-01
In this paper, a novel approach to detect and classify comprehensive fault conditions of induction motors using a hybrid fuzzy min-max (FMM) neural network and classification and regression tree (CART) is proposed. The hybrid model, known as FMM-CART, exploits the advantages of both FMM and CART for undertaking data classification and rule extraction problems. A series of real experiments is conducted, whereby the motor current signature analysis method is applied to form a database comprising stator current signatures under different motor conditions. The signal harmonics from the power spectral density are extracted as discriminative input features for fault detection and classification with FMM-CART. A comprehensive list of induction motor fault conditions, viz., broken rotor bars, unbalanced voltages, stator winding faults, and eccentricity problems, has been successfully classified using FMM-CART with good accuracy rates. The results are comparable, if not better, than those reported in the literature. Useful explanatory rules in the form of a decision tree are also elicited from FMM-CART to analyze and understand different fault conditions of induction motors.
Reliability studies of Integrated Modular Engine system designs
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1993-01-01
A study was performed to evaluate the reliability of Integrated Modular Engine (IME) concepts. Comparisons were made between networked IME systems and non-networked discrete systems using expander cycle configurations. Both redundant and non-redundant systems were analyzed. Binomial approximation and Markov analysis techniques were employed to evaluate total system reliability. In addition, Failure Modes and Effects Analyses (FMEA), Preliminary Hazard Analyses (PHA), and Fault Tree Analysis (FTA) were performed to allow detailed evaluation of the IME concept. A discussion of these system reliability concepts is also presented.
Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan
2014-03-01
Although the reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in the fault diagnosis of an induction motor when the appropriate pre-processing is not performed. Therefore, a new boundary-analysis-based feature extraction method in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bar and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset.
The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters
NASA Technical Reports Server (NTRS)
Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)
1998-01-01
We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while the price of tree uplinks remains high.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattison, M.B.; Schroeder, J.A.; Russell, K.D.
The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattison, M.B.; Schroeder, J.A.; Russell, K.D.
The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.
The 1992 Landers earthquake sequence; seismological observations
Egill Hauksson,; Jones, Lucile M.; Hutton, Kate; Eberhart-Phillips, Donna
1993-01-01
The (MW6.1, 7.3, 6.2) 1992 Landers earthquakes began on April 23 with the MW6.1 1992 Joshua Tree preshock and form the most substantial earthquake sequence to occur in California in the last 40 years. This sequence ruptured almost 100 km of both surficial and concealed faults and caused aftershocks over an area 100 km wide by 180 km long. The faulting was predominantly strike slip and three main events in the sequence had unilateral rupture to the north away from the San Andreas fault. The MW6.1 Joshua Tree preshock at 33°N58′ and 116°W19′ on 0451 UT April 23 was preceded by a tightly clustered foreshock sequence (M≤4.6) beginning 2 hours before the mainshock and followed by a large aftershock sequence with more than 6000 aftershocks. The aftershocks extended along a northerly trend from about 10 km north of the San Andreas fault, northwest of Indio, to the east-striking Pinto Mountain fault. The Mw7.3 Landers mainshock occurred at 34°N13′ and 116°W26′ at 1158 UT, June 28, 1992, and was preceded for 12 hours by 25 small M≤3 earthquakes at the mainshock epicenter. The distribution of more than 20,000 aftershocks, analyzed in this study, and short-period focal mechanisms illuminate a complex sequence of faulting. The aftershocks extend 60 km to the north of the mainshock epicenter along a system of at least five different surficial faults, and 40 km to the south, crossing the Pinto Mountain fault through the Joshua Tree aftershock zone towards the San Andreas fault near Indio. The rupture initiated in the depth range of 3–6 km, similar to previous M∼5 earthquakes in the region, although the maximum depth of aftershocks is about 15 km. The mainshock focal mechanism showed right-lateral strike-slip faulting with a strike of N10°W on an almost vertical fault. The rupture formed an arclike zone well defined by both surficial faulting and aftershocks, with more westerly faulting to the north. This change in strike is accomplished by jumping across dilational jogs connecting surficial faults with strikes rotated progressively to the west. A 20-km-long linear cluster of aftershocks occurred 10–20 km north of Barstow, or 30–40 km north of the end of the mainshock rupture. The most prominent off-fault aftershock cluster occurred 30 km to the west of the Landers mainshock. The largest aftershock was within this cluster, the Mw6.2 Big Bear aftershock occurring at 34°N10′ and 116°W49′ at 1505 UT June 28. It exhibited left-lateral strike-slip faulting on a northeast striking and steeply dipping plane. The Big Bear aftershocks form a linear trend extending 20 km to the northeast with a scattered distribution to the north. The Landers mainshock occurred near the southernmost extent of the Eastern California Shear Zone, an 80-km-wide, more than 400-km-long zone of deformation. This zone extends into the Death Valley region and accommodates about 10 to 20% of the plate motion between the Pacific and North American plates. The Joshua Tree preshock, its aftershocks, and Landers aftershocks form a previously missing link that connects the Eastern California Shear Zone to the southern San Andreas fault.
Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof
2009-04-01
Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
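To illustrate the flavor of the Monte Carlo calculation (this is not the authors' model of the studied Swedish system), the sketch below combines a few hypothetical failure contributions, each with an uncertain failure rate, into a distribution of Customer Minutes Lost; all rates, downtimes, affected fractions, and customer counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000          # Monte Carlo samples
customers = 500_000  # hypothetical number of consumers served

# Uncertain yearly failure rates (lognormal) for three hypothetical sub-systems
source_rate  = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n)
treat_rate   = rng.lognormal(mean=np.log(0.2), sigma=0.4, size=n)
distrib_rate = rng.lognormal(mean=np.log(1.0), sigma=0.4, size=n)

# Assumed mean downtime per failure (minutes) and fraction of customers affected
source_down, treat_down, distrib_down = 120.0, 240.0, 60.0
source_frac, treat_frac, distrib_frac = 1.0, 1.0, 0.05

# Customer Minutes Lost per year: sum over sub-systems of
# failure rate * downtime * number of affected customers
cml = (source_rate  * source_down  * source_frac  * customers
     + treat_rate   * treat_down   * treat_frac   * customers
     + distrib_rate * distrib_down * distrib_frac * customers)

print("mean CML/year  :", cml.mean())
print("95th percentile:", np.percentile(cml, 95))
```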
Reliability analysis of the solar array based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Jianing, Wu; Shaoze, Yan
2011-07-01
The solar array is an important device used on spacecraft, which influences the quality of in-orbit operation of the spacecraft and even the success of launches. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structure importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is important for preventing faults. Furthermore, recommendations for improving reliability associated with damage limitation are discussed, which can be used for the redesign of the solar array and reliability growth planning.
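Structure importance, used above to rank the hinge faults, can be computed directly from the structure function of a small coherent system. The sketch below is not the paper's DFH-3 model; the four-component deployment structure function and component names are hypothetical, and the classical structural importance measure (fraction of other-component states in which the component is critical) stands in for the SI analysis described.

```python
from itertools import product

def structure(x):
    """Hypothetical solar-array deployment structure function:
    the system works only if hinge AND (spring OR motor) AND lock work."""
    hinge, spring, motor, lock = x
    return hinge and (spring or motor) and lock

def structural_importance(phi, n, i):
    """Fraction of the 2^(n-1) states of the other components
    in which component i is critical (flipping it changes the system state)."""
    others = list(product([0, 1], repeat=n - 1))
    critical = 0
    for rest in others:
        up   = rest[:i] + (1,) + rest[i:]
        down = rest[:i] + (0,) + rest[i:]
        if phi(up) != phi(down):
            critical += 1
    return critical / len(others)

names = ["hinge", "spring", "motor", "lock"]
for i, name in enumerate(names):
    print(f"{name:7s} structural importance = {structural_importance(structure, 4, i):.3f}")
```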
Fault tree safety analysis of a large Li/SOCl(sub)2 spacecraft battery
NASA Technical Reports Server (NTRS)
Uy, O. Manuel; Maurer, R. H.
1987-01-01
The results of the safety fault tree analysis on the eight module, 576 F cell Li/SOCl2 battery on the spacecraft and in the integration and test environment prior to launch on the ground are presented. The analysis showed that with the right combination of blocking diodes, electrical fuses, thermal fuses, thermal switches, cell balance, cell vents, and battery module vents the probability of a single cell or a 72 cell module exploding can be reduced to .000001, essentially the probability due to explosion for unexplained reasons.
Chen, Yikai; Wang, Kai; Xu, Chengcheng; Shi, Qin; He, Jie; Li, Peiqing; Shi, Ting
2018-05-19
To overcome the limitations of previous highway alignment safety evaluation methods, this article presents a highway alignment safety evaluation method based on fault tree analysis (FTA) and the characteristics of vehicle safety boundaries, within the framework of dynamic modeling of the driver-vehicle-road system. Approaches for categorizing the vehicle failure modes while driving on highways and the corresponding safety boundaries were comprehensively investigated based on vehicle system dynamics theory. Then, an overall crash probability model was formulated based on FTA considering the risks of 3 failure modes: losing steering capability, losing track-holding capability, and rear-end collision. The proposed method was implemented on a highway segment between Bengbu and Nanjing in China. A driver-vehicle-road multibody dynamics model was developed based on the 3D alignments of the Bengbu to Nanjing section of the Ning-Luo expressway using Carsim, and dynamics indices such as sideslip angle and yaw rate were obtained. Then, the average crash probability of each road section was calculated with a fixed-length method. Finally, the average crash probability was validated against the crash frequency per kilometer to demonstrate the accuracy of the proposed method. The results of the regression analysis and correlation analysis indicated good consistency between the safety evaluation results and the crash data, and showed that the proposed method outperformed the safety evaluation methods used in previous studies. The proposed method has the potential to be used in practical engineering applications to identify crash-prone locations and alignment deficiencies on highways in the planning and design phases, as well as those in service.
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
NASA Astrophysics Data System (ADS)
LI, Y.; Yang, S. H.
2017-05-01
The Antarctic astronomical telescopes operate year-round at the unattended South Pole and can be maintained only once a year. Because of the complexity of the optical, mechanical, and electrical systems, the telescopes are difficult to maintain and require multi-skilled expedition teams, so heightened attention to reliability is essential for the Antarctic telescopes. Based on the fault mechanisms and fault modes of the main-axis control system of the equatorial Antarctic astronomical telescope AST3-3 (Antarctic Schmidt Telescopes 3-3), the method of fault tree analysis is introduced in this article, and the importance degree of the top event is obtained from the structural importance degrees of the bottom events. From these results, hidden problems and weak links can be effectively identified, indicating the direction for improving the stability of the system and optimizing its design.
Fault tree analysis of most common rolling bearing tribological failures
NASA Astrophysics Data System (ADS)
Vencl, Aleksandar; Gašić, Vlada; Stojanović, Blaža
2017-02-01
Wear as a tribological process has a major influence on the reliability and life of rolling bearings. Field examinations of bearing failures due to wear indicate possible causes and point to the measures necessary for wear reduction or elimination. Wear itself is a very complex process initiated by the action of different mechanisms, and it can be manifested by different wear types which are often related. However, the dominant type of wear can be approximately determined. The paper presents a classification of the most common bearing damages according to the dominant wear type, i.e. abrasive wear, adhesive wear, surface fatigue wear, erosive wear, fretting wear and corrosive wear. The wear types are correlated with the terms used in the ISO 15243 standard. Each wear type is illustrated with an appropriate photograph, and for each wear type an appropriate description of causes and manifestations is presented. Possible causes of rolling bearing failure are used for the fault tree analysis (FTA), which was performed to determine the root causes of bearing failures. The constructed fault tree diagram for rolling bearing failure can be a useful tool for maintenance engineers.
Development and validation of techniques for improving software dependability
NASA Technical Reports Server (NTRS)
Knight, John C.
1992-01-01
A collection of document abstracts are presented on the topic of improving software dependability through NASA grant NAG-1-1123. Specific topics include: modeling of error detection; software inspection; test cases; Magnetic Stereotaxis System safety specifications and fault trees; and injection of synthetic faults into software.
Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-07-12
As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction is generated in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering contents. Therefore, identifying the critical fault's characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal's features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experiment results of the recognition for gear faults show the feasibility and effectiveness of the proposed method, especially in the gear's weak fault features.
RSRM Nozzle Anomalous Throat Erosion Investigation Overview
NASA Technical Reports Server (NTRS)
Clinton, R. G., Jr.; Wendel, Gary M.
1998-01-01
In September 1996, anomalous pocketing erosion was observed in the aft end of the throat ring of the nozzle of one of the reusable solid rocket motors (RSRM 56B) used on NASA's space transportation system (STS) mission 79. The RSRM throat ring is constructed of bias tape-wrapped carbon cloth/phenolic (CCP) ablative material. A comprehensive investigation revealed necessary and sufficient conditions for occurrence of the pocketing event and provided rationale that the solid rocket motors for the subsequent mission, STS-80, were safe to fly. The nozzles of both of these motors also exhibited anomalous erosion similar to, but less extensive than, that observed on STS-79. Subsequent to this flight, the investigation to identify both the specific causes and the corrective actions for elimination of the necessary and sufficient conditions for the pocketing erosion was intensified. A detailed fault tree approach was utilized to examine potential material and process contributors to the anomalous performance. The investigation involved extensive constituent and component material property testing, pedigree assessments, supplier audits, process audits, full scale processing test article fabrication and evaluation, thermal and thermostructural analyses, nondestructive evaluation, and material performance tests conducted using hot fire simulation in laboratory test beds and subscale and full scale solid rocket motor static test firings. This presentation will provide an overview of the observed anomalous nozzle erosion and the comprehensive, fault-tree based investigation conducted to resolve this issue.
Analysis of a hardware and software fault tolerant processor for critical applications
NASA Technical Reports Server (NTRS)
Dugan, Joanne B.
1993-01-01
Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
Experimental evaluation of certification trails using abstract data type validation
NASA Technical Reports Server (NTRS)
Wilson, Dwight S.; Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
Certification trails are a recently introduced and promising approach to fault-detection and fault-tolerance. Recent experimental work reveals many cases in which a certification-trail approach allows for significantly faster program execution time than a basic time-redundancy approach. Algorithms for answer-validation of abstract data types allow a certification trail approach to be used for a wide variety of problems. An attempt to assess the performance of algorithms utilizing certification trails on abstract data types is reported. Specifically, this method was applied to the following problems: heapsort, Huffman tree, shortest path, and skyline. Previous results used certification trails specific to a particular problem and implementation. The approach allows certification trails to be localized to 'data structure modules,' making the use of this technique transparent to the user of such modules.
Determining preventability of pediatric readmissions using fault tree analysis.
Jonas, Jennifer A; Devon, Erin Pete; Ronan, Jeanine C; Ng, Sonia C; Owusu-McKenzie, Jacqueline Y; Strausbaugh, Janet T; Fieldston, Evan S; Hart, Jessica K
2016-05-01
Previous studies attempting to distinguish preventable from nonpreventable readmissions reported challenges in completing reviews efficiently and consistently. (1) Examine the efficiency and reliability of a Web-based fault tree tool designed to guide physicians through chart reviews to a determination about preventability. (2) Investigate root causes of general pediatrics readmissions and identify the percent that are preventable. General pediatricians from The Children's Hospital of Philadelphia used a Web-based fault tree tool to classify root causes of all general pediatrics 15-day readmissions in 2014. The tool guided reviewers through a logical progression of questions, which resulted in 1 of 18 root causes of readmission, 8 of which were considered potentially preventable. Twenty percent of cases were cross-checked to measure inter-rater reliability. Of the 7252 discharges, 248 were readmitted, for an all-cause general pediatrics 15-day readmission rate of 3.4%. Of those readmissions, 15 (6.0%) were deemed potentially preventable, corresponding to 0.2% of total discharges. The most common cause of potentially preventable readmissions was premature discharge. For the 50 cross-checked cases, both reviews resulted in the same root cause for 44 (86%) of files (κ = 0.79; 95% confidence interval: 0.60-0.98). Completing 1 review using the tool took approximately 20 minutes. The Web-based fault tree tool helped physicians to identify root causes of hospital readmissions and classify them as either preventable or not preventable in an efficient and consistent way. It also confirmed that only a small percentage of general pediatrics 15-day readmissions are potentially preventable. Journal of Hospital Medicine 2016;11:329-335. © 2016 Society of Hospital Medicine.
Risk Analysis of Return Support Material on Gas Compressor Platform Project
NASA Astrophysics Data System (ADS)
Silvianita; Aulia, B. U.; Khakim, M. L. N.; Rosyid, Daniel M.
2017-07-01
A fixed platform project is carried out not by a single contractor but by two or more contractors. Cooperation in the construction of fixed platforms often does not go according to plan, owing to several factors, and good synergy between the contractors is needed to avoid miscommunication that may cause problems on the project. One example concerns the support material (sea fastening, skid shoes, and shipping supports) used when a jacket structure is shipped to its operating location, which often is not returned to the contractor. A systematic method is needed to overcome this support-material problem. This paper analyses the causes and consequences of support material not being returned on the Gas Compressor Platform project, using Fault Tree Analysis (FTA) and Event Tree Analysis (ETA). From the fault tree analysis, the probability of the top event is 0.7783. From the event tree analysis diagram, the contractors lose between Rp 350,000,000 and Rp 10,000,000,000.
Mines Systems Safety Improvement Using an Integrated Event Tree and Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Kumar, Ranjan; Ghosh, Achyuta Krishna
2017-04-01
Mine systems such as the ventilation system, strata support system, and flame-proof safety equipment are exposed to dynamic operational conditions such as stress, humidity, dust, and temperature, and safety improvement of such systems is preferably done during the planning and design stage. However, the existing safety analysis methods do not handle the accident initiation and progression of mine systems explicitly. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine systems design. This approach includes ET and FT modeling coupled with a redundancy allocation technique. In this method, a concept of top hazard probability is introduced for identifying system failure probability, and redundancy is allocated to the system either at the component or the system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal the accident scenarios and improve the safety of complex mine systems simultaneously.
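As an illustration of the event-tree half of such an approach, the sketch below propagates a single hypothetical initiating event through a few safety barriers and multiplies branch probabilities to obtain accident-sequence frequencies, with the all-barriers-failed path playing the role of the top hazard. The initiating frequency, barrier names, and failure-on-demand probabilities are assumptions for illustration, not values from the paper.

```python
# Minimal event-tree sketch under assumed numbers (not the paper's data): an
# initiating event propagates through safety barriers; each accident-sequence
# frequency is the initiating frequency times the branch probabilities along its path.

initiating_frequency = 0.01          # hypothetical methane ignition events per year
barriers = {                         # probability that each barrier FAILS on demand
    "gas_detection": 0.05,
    "ventilation_response": 0.10,
    "flame_proof_enclosure": 0.02,
}

def sequence_frequency(failed_barriers):
    """Frequency of the accident sequence in which exactly these barriers fail."""
    f = initiating_frequency
    for name, p_fail in barriers.items():
        f *= p_fail if name in failed_barriers else (1.0 - p_fail)
    return f

# Worst-case sequence: every barrier fails -> explosion (the "top hazard").
print(sequence_frequency(set(barriers)))   # 1e-06 per year with these numbers
```

In this picture, redundancy allocation amounts to deciding which barrier's failure probability to reduce (for example, by duplication) so that the top hazard probability meets a target.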
NASA Astrophysics Data System (ADS)
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
The fault diagnosis of planetary gearboxes is crucial to reduce the maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is firstly adopted to remove the fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, Laplacian score (LS) approach is employed to refine the fault features. In the end, the obtained features are fed into the binary tree support vector machine (BT-SVM) to accomplish the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
2013-05-01
Specifics of the correlation will be explored, followed by a discussion of the new paradigms that result from it: the ordered event list (OEL) and the decision tree. The record also references a brief overview of the decision tree paradigm and a figure depicting a notional fault/activation tree.
Conversion of Questionnaire Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
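As a concrete illustration of the conversion step, the sketch below maps adjectival MC&A performance ratings to basic-event failure probabilities spaced on a log scale, in the spirit of the NUREG/CR-1278 human-error data cited above. The rating categories and numeric values are assumptions for illustration; the calibrated conversion table used in the survey process may differ.

```python
# Hypothetical adjectival-to-numeric conversion for MC&A basic events.
# Values are spaced roughly a decade apart (a log-normal-style scale); they are
# illustrative assumptions, not the survey's calibrated numbers.
RATING_TO_FAILURE_PROB = {
    "perfect": 1e-5,            # task performed at the 'perfect' level: near-zero risk
    "well": 1e-4,
    "adequate": 1e-3,
    "needs improvement": 1e-2,
    "not performed": 1.0,       # task in a state of failure
}

def basic_event_probability(rating: str) -> float:
    """Convert one questionnaire response to a basic-event failure probability
    suitable for use in the PRA fault tree calculations."""
    return RATING_TO_FAILURE_PROB[rating.strip().lower()]

print(basic_event_probability("Needs improvement"))   # 0.01
```

The resulting basic-event values are then combined through the fault tree's gates exactly as in any other PRA quantification.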
Signal processing and neural network toolbox and its application to failure diagnosis and prognosis
NASA Astrophysics Data System (ADS)
Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.
2001-07-01
Many systems are comprised of components equipped with self-testing capability; however, if the system is complex involving feedback and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs will be very helpful. The work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition to extract features for failure events from data collected by data sensors. Then we evaluated multiple learning paradigms for general classification, diagnosis and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Network, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), Linear Discriminant Rule (LDR), Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP) and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed for evaluating the robustness of network models. The trained networks are evaluated for their performance using test data on the basis of percent error rates obtained via cross-validation, time efficiency, generalization ability to unseen faults. Finally, the usage of neural networks for the prediction of residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
Monitoring of Microseismicity with ArrayTechniques in the Peach Tree Valley Region
NASA Astrophysics Data System (ADS)
Garcia-Reyes, J. L.; Clayton, R. W.
2016-12-01
This study is focused on the analysis of microseismicity along the San Andreas Fault in the PeachTree Valley region. This zone is part of the transition zone between the locked portion to the south (Parkfield, CA) and the creeping section to the north (Jovilet, et al., JGR, 2014). The data for the study comes from a 2-week deployment of 116 Zland nodes in a cross-shaped configuration along (8.2 km) and across (9 km) the Fault. We analyze the distribution of microseismicity using a 3D backprojection technique, and we explore the use of Hidden Markov Models to identify different patterns of microseismicity (Hammer et al., GJI, 2013). The goal of the study is to relate the style of seismicity to the mechanical state of the Fault. The results show the evolution of seismic activity as well as at least two different patterns of seismic signals.
Viewpoint on ISA TR84.0.02--simplified methods and fault tree analysis.
Summers, A E
2000-01-01
ANSI/ISA-S84.01-1996 and IEC 61508 require the establishment of a safety integrity level for any safety instrumented system or safety related system used to mitigate risk. Each stage of design, operation, maintenance, and testing is judged against this safety integrity level. Quantitative techniques can be used to verify whether the safety integrity level is met. ISA-dTR84.0.02 is a technical report under development by ISA, which discusses how to apply quantitative analysis techniques to safety instrumented systems. This paper discusses two of those techniques: (1) Simplified equations and (2) Fault tree analysis.
TH-EF-BRC-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomadsen, B.
2016-06-15
This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis: Process mapping, Failure-Modes and Effects Analysis and fault-tree analysis will be introduced with a 5 minute refresher presentation and each presentation will be followed by a 30 minute small group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
Estimating earthquake-induced failure probability and downtime of critical facilities.
Porter, Keith; Ramer, Kyle
2012-01-01
Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
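As a minimal sketch of the kind of calculation involved, the snippet below propagates assumed annual basic-event probabilities through the AND/OR gates of a toy facility fault tree. The structure, event names, and numbers are hypothetical and basic events are assumed independent; it is not the utility's actual model or data.

```python
# Toy fault-tree evaluation: basic events are leaves; gates combine children.
# Independence of basic events is assumed; all numbers are illustrative.

def evaluate(node, probs):
    """Return the failure probability of a fault-tree node.
    A node is either a basic-event name (str) or a tuple (gate, [children])."""
    if isinstance(node, str):
        return probs[node]
    gate, children = node
    p_children = [evaluate(c, probs) for c in children]
    if gate == "AND":                          # all children must fail
        p = 1.0
        for q in p_children:
            p *= q
        return p
    if gate == "OR":                           # any child failing fails the gate
        p_ok = 1.0
        for q in p_children:
            p_ok *= 1.0 - q
        return 1.0 - p_ok
    raise ValueError(f"unknown gate {gate!r}")

# Top event: data centre inoperative = (utility power lost AND generator fails) OR cooling fails.
tree = ("OR", [("AND", ["utility_power", "generator"]), "cooling"])
annual_probs = {"utility_power": 0.05, "generator": 0.10, "cooling": 0.02}
print(evaluate(tree, annual_probs))            # ~0.0249 with these numbers
```

Downtime estimates follow a similar propagation, with component unavailabilities and repair times taking the place of the failure probabilities.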
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Qian, Yu
2016-02-15
Haze weather has become a serious environmental pollution problem which occurs in many Chinese cities. One of the most critical factors in the formation of haze weather is the exhaust of coal combustion, so it is meaningful to work out the causation mechanism between urban haze and coal combustion exhausts. Based on the above considerations, the fault tree analysis (FTA) approach was employed for the first time to study the causation mechanism of urban haze in Beijing by considering the risk events related to the exhausts of coal combustion. Using this approach, first the fault tree of the urban haze causation system connected with coal combustion exhausts was established; subsequently the risk events were discussed and identified; then, the minimal cut sets were successfully determined using Boolean algebra; finally, the structure, probability and critical importance degree analysis of the risk events was completed for the qualitative and quantitative assessment. The study results proved that FTA is an effective and simple tool for the causation mechanism analysis and risk management of urban haze in China. Copyright © 2015 Elsevier B.V. All rights reserved.
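To show what the Boolean-algebra step looks like in practice, the snippet below derives minimal cut sets for a small invented top-event expression using SymPy; the event names and gate structure are placeholders, not the risk events identified in the Beijing study.

```python
# Hedged illustration of minimal cut set derivation via Boolean algebra (SymPy),
# not the tool used in the study; events and structure are hypothetical.
from sympy import symbols
from sympy.logic.boolalg import to_dnf

high_emission, stagnant_air, scrubber_failure, inversion_layer = symbols(
    "high_emission stagnant_air scrubber_failure inversion_layer")

# Top event: severe haze if emissions are high AND scrubbers fail AND dispersion is poor.
top_event = high_emission & scrubber_failure & (stagnant_air | inversion_layer)

# Disjunctive normal form: each conjunction is a cut set; simplification removes
# any non-minimal terms.
print(to_dnf(top_event, simplify=True))
# e.g. (high_emission & scrubber_failure & stagnant_air) |
#      (high_emission & scrubber_failure & inversion_layer)
```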
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents with impacts on human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the LNG regasification unit. In this case, the failure is caused by Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank component. The failure probability can be determined by using Fault Tree Analysis (FTA). In addition, the impact of the heat radiation generated is calculated. The fault trees for BLEVE and jet fire on the storage tank component have been determined, giving a failure probability of 5.63 × 10^-19 for BLEVE and 9.57 × 10^-3 for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. The failure probability after customization is 4.22 × 10^-6.
NASA Astrophysics Data System (ADS)
Li, Shuanghong; Cao, Hongliang; Yang, Yupu
2018-02-01
Fault diagnosis is a key process for the reliability and safety of solid oxide fuel cell (SOFC) systems. However, it is difficult to rapidly and accurately identify faults in complicated SOFC systems, especially when simultaneous faults appear. In this research, a data-driven Multi-Label (ML) pattern identification approach is proposed to address the simultaneous fault diagnosis of SOFC systems. The framework of the simultaneous-fault diagnosis primarily includes two components: feature extraction and an ML-SVM classifier. The approach can be trained to diagnose simultaneous SOFC faults, such as fuel leakage and air leakage at different positions in the SOFC system, using simple training data sets consisting only of single-fault samples, without requiring simultaneous-fault data. The experimental results show that the proposed framework can diagnose simultaneous SOFC system faults with high accuracy while requiring a small amount of training data and a low computational burden. In addition, Fault Inference Tree Analysis (FITA) is employed to identify the correlations among possible faults and their corresponding symptoms at the system component level.
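A minimal sketch of the multi-label idea (train on single-fault examples only, then recognise combinations) is given below, using scikit-learn's one-vs-rest SVM as a stand-in for the paper's ML-SVM classifier; the features, fault labels, and the additivity of symptoms are synthetic assumptions.

```python
# Synthetic multi-label fault diagnosis: train on single faults, test on a simultaneous fault.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Training set: each sample carries exactly ONE fault label (no simultaneous faults).
X_train = np.vstack([rng.normal(loc=m, scale=0.3, size=(40, 3))
                     for m in ([2, 0, 0], [0, 2, 0], [0, 0, 2])])
y_train = [["fuel_leak"]] * 40 + [["air_leak_blower"]] * 40 + [["air_leak_stack"]] * 40

mlb = MultiLabelBinarizer()
Y_train = mlb.fit_transform(y_train)                  # binary indicator matrix

clf = OneVsRestClassifier(SVC(kernel="rbf", probability=True))
clf.fit(X_train, Y_train)

# A simultaneous fault is assumed to show the symptoms of two single faults at once.
x_simultaneous = np.array([[2.0, 2.0, 0.0]])
print(mlb.inverse_transform(clf.predict(x_simultaneous)))
# May recover both single-fault labels, e.g. [('air_leak_blower', 'fuel_leak')].
```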
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Haeussler, P. J.; Seitz, G. G.; Dawson, T. E.; Stenner, H. D.; Matmon, A.; Crone, A. J.; Personius, S.; Burns, P. B.; Cadena, A.; Thoms, E.
2005-12-01
Developing accurate rupture histories of long, high-slip-rate strike-slip faults is especially challenging where recurrence is relatively short (hundreds of years), adjacent segments may fail within decades of each other, and uncertainties in dating can be as large as, or larger than, the time between events. The Denali Fault system (DFS) is the major active structure of interior Alaska, but had received little study since pioneering fault investigations in the early 1970s. Until the summer of 2003 essentially no data existed on the timing or spatial distribution of past ruptures on the DFS. This changed with the occurrence of the M7.9 2002 Denali fault earthquake, which has been a catalyst for present paleoseismic investigations. It provided a well-constrained rupture length and slip distribution. Strike-slip faulting occurred along 290 km of the Denali and Totschunda faults, leaving unruptured ~140 km of the eastern Denali fault, ~180 km of the western Denali fault, and ~70 km of the eastern Totschunda fault. The DFS presents us with a blank canvas on which to fill a chronology of past earthquakes using modern paleoseismic techniques. Aware of correlation issues with potentially closely-timed earthquakes, we have a) investigated 11 paleoseismic sites that allow a variety of dating techniques, b) measured paleo offsets, which provide insight into magnitude and rupture length of past events, at 18 locations, and c) developed late Pleistocene and Holocene slip rates using exposure age dating to constrain long-term fault behavior models. We are in the process of: 1) radiocarbon-dating peats involved in faulting and liquefaction, and especially short-lived forest floor vegetation that includes outer rings of trees, spruce needles, and blueberry leaves killed and buried during paleoearthquakes; 2) supporting development of a 700-900 year tree-ring time-series for precise dating of trees used in event timing; 3) employing Pb 210 for constraining the youngest ruptures in sag ponds on the eastern and western Denali fault; and 4) using volcanic ashes in trenches for dating and correlation. Initial results are: 1) Large earthquakes occurred along the 2002 rupture section 350-700 yrb02 (2-sigma, calendar-corrected, years before 2002) with offsets about the same as 2002. The Denali penultimate rupture appears younger (350-570 yrb02) than the Totschunda (580-700 yrb02); 2) The western Denali fault is geomorphically fresh, its MRE likely occurred within the past 250 years, the penultimate event occurred 570-680 yrb02, and slip in each event was 4 m; 3) The eastern Denali MRE post-dates peat dated at 550-680 yrb02, is younger than the penultimate Totschunda event, and could be part of the penultimate Denali fault rupture or a separate earthquake; 4) A 120-km section of the Denali fault between the Nenana glacier and the Delta River may be a zone of overlap for large events and/or capable of producing smaller earthquakes; its western part has fresh scarps with small (1 m) offsets. 2004/2005 field observations show there are longer datable records, with 4-5 events recorded in trenches on the eastern Denali fault and the west end of the 2002 rupture, 2-3 events on the western part of the fault in Denali National Park, and 3-4 events on the Totschunda fault. These and extensive datable material provide the basis to define the paleoseismic history of DFS earthquake ruptures through multiple and complete earthquake cycles.
Support vector machines-based fault diagnosis for turbo-pump rotor
NASA Astrophysics Data System (ADS)
Yuan, Sheng-Fa; Chu, Fu-Lei
2006-05-01
Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and generalise poorly when fault samples are few. Support vector machines (SVM) are a general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is originally designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification algorithm named 'one to others' is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, involves little repeated training, and speeds up both training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
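One plausible reading of the 'one to others' structure is a chain of two-class SVMs taken in fault-priority order, sketched below with synthetic data; the priority ordering, class layout, and training details are assumptions, not the authors' implementation.

```python
# Hedged sketch of a "one to others" binary-tree multi-class SVM: classes are taken
# in an assumed fault-priority order, and at each node a two-class SVM separates the
# highest-priority remaining class from all others. Synthetic data; not the paper's code.
import numpy as np
from sklearn.svm import SVC

def train_one_to_others(X, y, priority):
    """Return a chain of (label, svm) nodes; the last class needs no classifier."""
    nodes = []
    for i, label in enumerate(priority[:-1]):
        remaining = priority[i:]                         # classes not yet separated out
        mask = np.isin(y, remaining)
        Xr, yr = X[mask], y[mask]
        svm = SVC(kernel="rbf").fit(Xr, (yr == label).astype(int))
        nodes.append((label, svm))
    return nodes

def predict_one_to_others(nodes, x, fallback):
    for label, svm in nodes:
        if svm.predict(x.reshape(1, -1))[0] == 1:
            return label
    return fallback                                      # lowest-priority class

# Three synthetic turbo-pump fault classes, listed in (assumed) priority order 0, 1, 2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (30, 2)) for m in ([0, 0], [2, 0], [0, 2])])
y = np.repeat([0, 1, 2], 30)
nodes = train_one_to_others(X, y, priority=[0, 1, 2])
print(predict_one_to_others(nodes, np.array([1.9, 0.1]), fallback=2))   # likely 1
```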
EDNA: Expert fault digraph analysis using CLIPS
NASA Technical Reports Server (NTRS)
Dixit, Vishweshwar V.
1990-01-01
Traditionally, fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate the real system dependencies and hence are easy to develop, validate and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find; the available algorithms tend to be complicated and slow. On the other hand, tree analysis (VGRH, Tayl) is well understood and rooted in a vast research effort and analytical techniques. The tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a (cyclic) digraph into trees (CLP, LP) is a viable approach to blend the advantages of the two representations. Neither the digraphs nor the trees provide the ability to handle heuristic knowledge. An expert system, to capture the engineering knowledge, is essential. We propose an approach here, namely, expert network analysis. We combine the digraph representation and tree algorithms. The models are augmented by probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge. Mixed analysis, with only some nodes assigned probabilities, is possible. The tool provides a graphics interface for input, query, and update. With the combined approach it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattison, M.B.
The Idaho National Engineering Laboratory (INEL) over the past three years has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response, with a unique set of event trees for each plant class; (2) plant-specific fault trees using supercomponents; (3) generation and retention of all system and sequence cutsets; (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results; and (5) a user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented.
NASA Astrophysics Data System (ADS)
Hu, Bingbing; Li, Bing
2016-02-01
It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method originally based on discrete wavelet transform (DWT) has disadvantages such as shift variance and the aliasing effects in engineering application. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis by the merits of DTCWT (nearly shift invariant and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching the single wavelet basis function to the signal being analyzed, which may speed up the signal processing and be employed in on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer ring and shaft coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post spectral analysis, and EMD-based spectral analysis methods.
Locating hardware faults in a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-04-13
Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
Model authoring system for fail safe analysis
NASA Technical Reports Server (NTRS)
Sikora, Scott E.
1990-01-01
The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, the circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which then are searched to generate either a graphical fault tree analysis or failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
Chen, Gang; Song, Yongduan; Lewis, Frank L
2016-05-03
This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve the coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating for the actuator bias fault, the partial loss of effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance simultaneously. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple-robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot arms are conducted and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.
Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Ren, C. X.; Bolton, D. C.; Marone, C.; Johnson, P. A.
2017-12-01
Fault friction controls nearly all aspects of fault rupture, yet it is only possible to measure in the laboratory. Here we describe laboratory experiments where acoustic emissions are recorded from the fault. We find that by applying a machine learning approach known as "extreme gradient boosting trees" to the continuous acoustical signal, the fault friction can be directly inferred, showing that instantaneous characteristics of the acoustic signal are a fingerprint of the frictional state. This machine learning-based inference leads to a simple law that links the acoustic signal to the friction state, and holds for every stress cycle the laboratory fault goes through. The approach does not use any other measured parameter than instantaneous statistics of the acoustic signal. This finding may have importance for inferring frictional characteristics from seismic waves in Earth where fault friction cannot be measured.
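As an illustration of the idea that instantaneous statistics of the continuous acoustic signal fingerprint the frictional state, the sketch below trains gradient-boosted trees (scikit-learn's implementation, as a stand-in for the exact boosting library used) on window statistics of a synthetic signal whose variance is tied to a synthetic "friction" time series; the data, features, and model settings are all assumptions, not the experiment's.

```python
# Synthetic demonstration: window statistics of an "acoustic" signal -> friction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, window_len = 2000, 256

# Invented friction history (slow stress-cycle-like oscillation) and a signal whose
# amplitude statistics depend on it.
friction = (0.6 + 0.05 * np.sin(np.linspace(0, 20 * np.pi, n_windows))
            + rng.normal(0, 0.005, n_windows))
windows = [rng.normal(0, 1 + 5 * (f - 0.5), window_len) for f in friction]

# "Instantaneous" statistics of each window: variance, 4th moment, peak amplitude.
X = np.array([[w.var(), np.mean(w ** 4), np.abs(w).max()] for w in windows])
y = friction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)   # keep time order
model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X_tr, y_tr)
print("R^2 on later windows:", model.score(X_te, y_te))
```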
The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications
NASA Technical Reports Server (NTRS)
Chau, Savio N.; Alkalai, Leon; Tai, Ann T.
2000-01-01
The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.
Fault-zone waves observed at the southern Joshua Tree earthquake rupture zone
Hough, S.E.; Ben-Zion, Y.; Leary, P.
1994-01-01
Waveform and spectral characteristics of several aftershocks of the M 6.1 22 April 1992 Joshua Tree earthquake recorded at stations just north of the Indio Hills in the Coachella Valley can be interpreted in terms of waves propagating within narrow, low-velocity, high-attenuation, vertical zones. Evidence for our interpretation consists of: (1) emergent P arrivals prior to and opposite in polarity to the impulsive direct phase; these arrivals can be modeled as headwaves indicative of a transfault velocity contrast; (2) spectral peaks in the S wave train that can be interpreted as internally reflected, low-velocity fault-zone wave energy; and (3) spatial selectivity of event-station pairs at which these data are observed, suggesting a long, narrow geologic structure. The observed waveforms are modeled using the analytical solution of Ben-Zion and Aki (1990) for a plane-parallel layered fault-zone structure. Synthetic waveform fits to the observed data indicate the presence of NS-trending vertical fault-zone layers characterized by a thickness of 50 to 100 m, a velocity decrease of 10 to 15% relative to the surrounding rock, and a P-wave quality factor in the range 25 to 50.
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
Bayesian-network-based safety risk assessment for steel construction projects.
Leu, Sou-Sen; Chang, Ching-Miao
2013-05-01
There are four primary accident types at steel building construction (SC) projects: falls (tumbles), object falls, object collapse, and electrocution. Several systematic safety risk assessment approaches, such as fault tree analysis (FTA) and failure mode and effect criticality analysis (FMECA), have been used to evaluate safety risks at SC projects. However, these traditional methods ineffectively address dependencies among safety factors at various levels that fail to provide early warnings to prevent occupational accidents. To overcome the limitations of traditional approaches, this study addresses the development of a safety risk-assessment model for SC projects by establishing the Bayesian networks (BN) based on fault tree (FT) transformation. The BN-based safety risk-assessment model was validated against the safety inspection records of six SC building projects and nine projects in which site accidents occurred. The ranks of posterior probabilities from the BN model were highly consistent with the accidents that occurred at each project site. The model accurately provides site safety-management abilities by calculating the probabilities of safety risks and further analyzing the causes of accidents based on their relationships in BNs. In practice, based on the analysis of accident risks and significant safety factors, proper preventive safety management strategies can be established to reduce the occurrence of accidents on SC sites. Copyright © 2013 Elsevier Ltd. All rights reserved.
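To make the fault-tree-to-Bayesian-network transformation concrete, the sketch below treats a two-cause fall-accident node as a (noisy) OR gate, builds the joint distribution in plain Python, and uses Bayes' rule to rank the causes once an accident is observed; the cause names and all probabilities are invented for illustration and do not come from the study's data.

```python
# Plain-Python illustration of FT-to-BN reasoning with invented numbers: priors on two
# unsafe conditions, a (noisy-)OR conditional table for the accident node, and posterior
# cause probabilities given that a fall accident occurred.
from itertools import product

priors = {"missing_guardrail": 0.10, "improper_harness_use": 0.25}

def p_fall(guardrail_missing, harness_misused):
    """Conditional probability of a fall accident given the two unsafe conditions."""
    if guardrail_missing and harness_misused:
        return 0.30
    if guardrail_missing or harness_misused:
        return 0.10
    return 0.01

# Joint probability of each cause combination AND a fall occurring.
joint_fall = {}
for g, h in product([0, 1], repeat=2):
    p = ((priors["missing_guardrail"] if g else 1 - priors["missing_guardrail"])
         * (priors["improper_harness_use"] if h else 1 - priors["improper_harness_use"])
         * p_fall(g, h))
    joint_fall[(g, h)] = p

p_fall_total = sum(joint_fall.values())                        # P(fall)
post_guardrail = sum(p for (g, _), p in joint_fall.items() if g) / p_fall_total
post_harness = sum(p for (_, h), p in joint_fall.items() if h) / p_fall_total
print(round(post_guardrail, 3), round(post_harness, 3))        # ranks the causes
```

Ranking posterior cause probabilities in this way is, in essence, what supports the targeted preventive safety strategies described above; a BN library performs the same enumeration (or an equivalent inference) internally.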
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before they find the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system that performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base, and then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
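The ranking step can be pictured with the small sketch below, which orders hypothesized failures by a probability-to-repair-time ratio, a common heuristic for sequencing independent checks; the failure names, probabilities, and times are invented, and the actual FTDOTS optimisation may differ.

```python
# Hypothetical diagnosis ranking: order candidate failures so expected repair time is low.
failures = [
    # (hypothesis, probability it explains the symptom, hours to isolate and repair)
    ("power_supply", 0.40, 1.0),
    ("sensor_board", 0.35, 0.5),
    ("wiring_harness", 0.15, 2.0),
    ("main_cpu", 0.10, 3.0),
]

# Check the "cheapest likely" hypotheses first: sort by probability per unit time.
sequence = sorted(failures, key=lambda f: f[1] / f[2], reverse=True)

expected_time, p_still_searching = 0.0, 1.0
for name, prob, hours in sequence:
    expected_time += p_still_searching * hours   # time spent only if not yet found
    p_still_searching -= prob
    print(name)

print("expected isolate-and-repair time (h):", round(expected_time, 2))
```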
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
Optical fiber-fault surveillance for passive optical networks in S-band operation window
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Chi, Sien
2005-07-01
An S-band (1470 to 1520 nm) fiber laser scheme, which uses multiple fiber Bragg grating (FBG) elements as feedback elements on each passive branch, is proposed and described for in-service fault identification in passive optical networks (PONs). By tuning a wavelength selective filter located within the laser cavity over a gain bandwidth, the fiber-fault of each branch can be monitored without affecting the in-service channels. In our experiment, an S-band four-branch monitoring tree-structured PON system is demonstrated and investigated experimentally.
Sun, Weifang; Yao, Bin; Zeng, Nianyin; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-01-01
As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction is generated in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering content. Therefore, identifying the critical fault's characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experimental results of the recognition of gear faults show the feasibility and effectiveness of the proposed method, especially for weak gear fault features. PMID:28773148
Development of a Software Safety Process and a Case Study of Its Use
NASA Technical Reports Server (NTRS)
Knight, J. C.
1996-01-01
Research in the year covered by this reporting period has been primarily directed toward: continued development of mock-ups of computer screens for operator of a digital reactor control system; development of a reactor simulation to permit testing of various elements of the control system; formal specification of user interfaces; fault-tree analysis including software; evaluation of formal verification techniques; and continued development of a software documentation system. Technical results relating to this grant and the remainder of the principal investigator's research program are contained in various reports and papers.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of the fault data. This is especially the case in the low to moderate seismicity regions of Europe, where slow-slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in the fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow us to identify the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and also provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool will be illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defensible interpretations. Each node of the logic tree presents different options that could be considered at each step of the fault-related seismic hazard computation. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry/slip rate of faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (e.g., minimum distance between faults) and a second one that relies on physically-based simulations. The following nodes represent, for each rupture scenario, different rupture forecast models (i.e., characteristic or Gutenberg-Richter) and, for a given rupture forecast, two probability models commonly used in seismic hazard assessment: Poissonian or time-dependent. The final node represents an exhaustive set of ground motion prediction equations chosen in order to be compatible with the region. Finally, the expected probability of exceeding a given ground motion level is computed at each site. Results will be discussed for a few specific localities of the West Corinth Gulf.
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
Failure mode effect analysis and fault tree analysis as a combined methodology in risk management
NASA Astrophysics Data System (ADS)
Wessiani, N. A.; Yoshio, F.
2018-04-01
There have been many studies reporting the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most of these studies choose only one of the two methods in their risk management methodology. On the other hand, combining the two methods reduces the drawbacks of each method when implemented separately. This paper aims to combine the methodologies of FMEA and FTA in assessing risk. A case study in a metal company illustrates how this combined methodology can be implemented. In the case study, the combined methodology is used to assess the internal risks that occur in the production process. Further, those internal risks should be mitigated based on their level of risk.
Zeng, Hongcheng; Lu, Tao; Jenkins, Hillary; ...
2016-03-17
Earthquakes can produce significant tree mortality, and consequently affect regional carbon dynamics. Unfortunately, detailed studies quantifying the influence of earthquakes on forest mortality are currently rare. The committed forest biomass carbon loss associated with the 2008 Wenchuan earthquake in China is assessed in this study by a synthetic approach that integrated field investigation, remote sensing analysis, empirical models and Monte Carlo simulation. The newly developed approach significantly improved the forest disturbance evaluation by quantitatively defining the earthquake impact boundary and using a detailed field survey to validate the mortality models. Based on our approach, a total biomass carbon of 10.9 Tg·C was lost in the Wenchuan earthquake, which offset 0.23% of the living biomass carbon stock in Chinese forests. Tree mortality was highly clustered at the epicenter and declined rapidly with distance away from the fault zone. It is suggested that earthquakes represent a significant driver of forest carbon dynamics, and the earthquake-induced biomass carbon loss should be included in estimating forest carbon budgets.
Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.
2009-01-01
The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
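A minimal sketch of the data-driven approach described above, using scikit-learn's CART decision tree as a stand-in for C4.5 and synthetic "sensor" data in place of the DRTM simulations; the feature shifts and leak labels below are invented for illustration only.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 4))                 # e.g. pressures/temperatures with sensor noise
y = rng.integers(0, 3, size=n)              # 0 = nominal, 1..2 = hypothetical leak locations
X[y == 1, 0] += 2.0                         # leak 1 shifts the first "sensor"
X[y == 2, 2] -= 2.0                         # leak 2 shifts the third "sensor"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))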
Witter, Robert C.; Zhang, Yinglong J.; Wang, Kelin; Priest, George R.; Goldfinger, Chris; Stimely, Laura; English, John T.; Ferro, Paul A.
2013-01-01
Characterizations of tsunami hazards along the Cascadia subduction zone hinge on uncertainties in megathrust rupture models used for simulating tsunami inundation. To explore these uncertainties, we constructed 15 megathrust earthquake scenarios using rupture models that supply the initial conditions for tsunami simulations at Bandon, Oregon. Tsunami inundation varies with the amount and distribution of fault slip assigned to rupture models, including models where slip is partitioned to a splay fault in the accretionary wedge and models that vary the updip limit of slip on a buried fault. Constraints on fault slip come from onshore and offshore paleoseismological evidence. We rank each rupture model using a logic tree that evaluates a model’s consistency with geological and geophysical data. The scenarios provide inputs to a hydrodynamic model, SELFE, used to simulate tsunami generation, propagation, and inundation on unstructured grids with <5–15 m resolution in coastal areas. Tsunami simulations delineate the likelihood that Cascadia tsunamis will exceed mapped inundation lines. Maximum wave elevations at the shoreline varied from ∼4 m to 25 m for earthquakes with 9–44 m slip and Mw 8.7–9.2. Simulated tsunami inundation agrees with sparse deposits left by the A.D. 1700 and older tsunamis. Tsunami simulations for large (22–30 m slip) and medium (14–19 m slip) splay fault scenarios encompass 80%–95% of all inundation scenarios and provide reasonable guidelines for land-use planning and coastal development. The maximum tsunami inundation simulated for the greatest splay fault scenario (36–44 m slip) can help to guide development of local tsunami evacuation zones.
MacDonald III, Angus W.; Zick, Jennifer L.; Chafee, Matthew V.; Netoff, Theoden I.
2016-01-01
The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry’s standard neo-Kraepelinian (DSM) perspective. At the same time the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect for psychiatry’s syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between diagnosis co-morbidity. PMID:26779007
Geology of Joshua Tree National Park geodatabase
Powell, Robert E.; Matti, Jonathan C.; Cossette, Pamela M.
2015-09-16
The database in this Open-File Report describes the geology of Joshua Tree National Park and was completed in support of the National Cooperative Geologic Mapping Program of the U.S. Geological Survey (USGS) and in cooperation with the National Park Service (NPS). The geologic observations and interpretations represented in the database are relevant to both the ongoing scientific interests of the USGS in southern California and the management requirements of NPS, specifically of Joshua Tree National Park (JOTR). Joshua Tree National Park is situated within the eastern part of California's Transverse Ranges province and straddles the transition between the Mojave and Sonoran deserts. The geologically diverse terrain that underlies JOTR reveals a rich and varied geologic evolution, one that spans nearly two billion years of Earth history. The Park's landscape is the current expression of this evolution, its varied landforms reflecting the differing origins of underlying rock types and their differing responses to subsequent geologic events. Crystalline basement in the Park consists of Proterozoic plutonic and metamorphic rocks intruded by a composite Mesozoic batholith of Triassic through Late Cretaceous plutons arrayed in northwest-trending lithodemic belts. The basement was exhumed during the Cenozoic and underwent differential deep weathering beneath a low-relief erosion surface, with the deepest weathering profiles forming on quartz-rich, biotite-bearing granitoid rocks. Disruption of the basement terrain by faults of the San Andreas system began ca. 20 Ma, and the JOTR sinistral domain, preceded by basalt eruptions, began perhaps as early as ca. 7 Ma, but no later than 5 Ma. Uplift of the mountain blocks during this interval led to erosional stripping of the thick zones of weathered quartz-rich granitoid rocks to form etchplains dotted by bouldery tors, the iconic landscape of the Park. The stripped debris filled basins along the fault zones. Mountain ranges and basins in the Park exhibit an east-west physiographic grain controlled by left-lateral fault zones that form a sinistral domain within the broad zone of dextral shear along the transform boundary between the North American and Pacific plates. Geologic and geophysical evidence reveals that movement on the sinistral fault zones has resulted in left steps along the zones, leading to the development of sub-basins beneath Pinto Basin and Shavers and Chuckwalla Valleys. The sinistral fault zones connect the Mojave Desert dextral faults of the Eastern California Shear Zone to the north and east with the Coachella Valley strands of the southern San Andreas Fault Zone to the west. Quaternary surficial deposits accumulated in alluvial washes and playas and lakes along the valley floors; in alluvial fans, washes, and sheet wash aprons along piedmonts flanking the mountain ranges; and in eolian dunes and sand sheets that span the transition from valley floor to piedmont slope. Sequences of Quaternary pediments are planed into piedmonts flanking valley-floor and upland basins, each pediment in turn overlain by successively younger residual and alluvial surficial deposits.
Improved FTA methodology and application to subsea pipeline reliability design.
Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan
2014-01-01
An innovative logic tree, Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different thinking approach for risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. Resulting improvements are summarized in comparison table form.
NASA Astrophysics Data System (ADS)
Rodríguez-Escales, Paula; Canelles, Arnau; Sanchez-Vila, Xavier; Folch, Albert; Kurtzman, Daniel; Rossetto, Rudy; Fernández-Escalante, Enrique; Lobo-Ferreira, João-Paulo; Sapiano, Manuel; San-Sebastián, Jon; Schüth, Christoph
2018-06-01
Managed aquifer recharge (MAR) can be affected by many risks. Those risks are related to different technical and non-technical aspects of recharge, such as water availability, water quality, legislation, social issues, etc. Many other works have acknowledged risks of this nature theoretically; however, their quantification and definition have not been developed. In this study, risk definition and quantification were performed by means of fault trees and probabilistic risk assessment (PRA). We defined a fault tree with 65 basic events applicable to the operation phase. After that, we applied this methodology to six different managed aquifer recharge sites located in the Mediterranean Basin (Portugal, Spain, Italy, Malta, and Israel). The probabilities of the basic events were defined by expert criteria, based on the knowledge of the different managers of the facilities. From that, we conclude that at all sites the expert perception of the non-technical aspects was as important as, or even more important than, that of the technical aspects. Regarding the risk results, we observe that the total risk at three of the six sites was equal to or above 0.90. That would mean that the MAR facilities have a risk of failure equal to or higher than 90 % in the period of 2-6 years. The other three sites presented lower risks (75, 29, and 18 % for Malta, Menashe, and Serchio, respectively).
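A hedged sketch of the aggregation step implied above: if a MAR facility fails when any one of its basic events occurs (a pure OR structure over independent events), the total risk over the operation period is 1 - prod(1 - p_i). The event names and probabilities below are illustrative only, not the 65 basic events of the cited fault tree.

from math import prod

basic_events = {                 # expert-judged probabilities over the operation period
    "clogging of infiltration basin": 0.30,
    "source water unavailable":       0.25,
    "permit or legal conflict":       0.20,
    "groundwater quality degradation": 0.15,
}

total_risk = 1.0 - prod(1.0 - p for p in basic_events.values())
print(f"probability that at least one basic event occurs: {total_risk:.2f}")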
Probabilistic Seismic Hazard Assessment of the Chiapas State (SE Mexico)
NASA Astrophysics Data System (ADS)
Rodríguez-Lomelí, Anabel Georgina; García-Mayordomo, Julián
2015-04-01
The Chiapas State, in southeastern Mexico, is a very active seismic region due to the interaction of three tectonic plates: North American, Cocos, and Caribbean. We present a probabilistic seismic hazard assessment (PSHA) specifically performed to evaluate seismic hazard in the Chiapas State. The PSHA was based on a composite seismic catalogue homogenized to Mw and used a logic tree procedure for the consideration of different seismogenic source models and ground motion prediction equations (GMPEs). The results were obtained in terms of peak ground acceleration as well as spectral accelerations. The earthquake catalogue was compiled from the International Seismological Centre and the Servicio Sismológico Nacional de México sources. Two different seismogenic source zone (SSZ) models were devised based on a review of the tectonics of the region and the available geomorphological and geological maps. The SSZs were finally defined by the analysis of geophysical data, resulting in two main SSZ models. The Gutenberg-Richter parameters for each SSZ were calculated from the declustered and homogenized catalogue, while the maximum expected earthquake was assessed from both the catalogue and geological criteria. Several worldwide and regional GMPEs for subduction and crustal zones were reviewed. For each SSZ model we considered four possible combinations of GMPEs. Finally, hazard was calculated in terms of PGA and SA for 500-, 1000-, and 2500-year return periods for each branch of the logic tree using the CRISIS2007 software. The final hazard maps represent the mean values obtained from the two seismogenic and four attenuation models considered in the logic tree. For the three return periods analyzed, the maps locate the most hazardous areas in the Chiapas Central Pacific Zone, the Pacific Coastal Plain, and the Motagua and Polochic Fault Zone; intermediate hazard values occur in the Chiapas Batholith Zone and in the Strike-Slip Faults Province. The hazard decreases towards the northeast across the Reverse Faults Province and up to the Yucatan Platform, where the lowest values are reached. We also produced uniform hazard spectra (UHS) for the three main cities of Chiapas. Tapachula city presents the highest spectral accelerations, while Tuxtla Gutierrez and San Cristobal de las Casas show similar values. We conclude that seismic hazard in Chiapas is chiefly controlled by the subduction of the Cocos plate beneath the North American and Caribbean plates, which makes the coastal areas the most hazardous. Additionally, the Motagua and Polochic Fault Zones are also important, increasing the hazard particularly in southeastern Chiapas.
The Application of a Residual Risk Evaluation Technique Used for Expendable Launch Vehicles
NASA Technical Reports Server (NTRS)
Latimer, John A.
2009-01-01
This presentation provides a Residual Risk Evaluation Technique (RRET) developed by the Kennedy Space Center (KSC) Safety and Mission Assurance (S&MA) Launch Services Division. This technique is one of many procedures used by S&MA at KSC to evaluate residual risks for each Expendable Launch Vehicle (ELV) mission. RRET is a straightforward technique that incorporates the proven methodology of risk management, fault tree analysis, and reliability prediction. RRET derives a system reliability impact indicator from the system baseline reliability and the system residual risk reliability values. The system reliability impact indicator provides a quantitative measure of the reduction in the system baseline reliability due to the identified residual risks associated with the designated ELV mission. An example is discussed to provide insight into the application of RRET.
LIDAR Helps Identify Source of 1872 Earthquake Near Chelan, Washington
NASA Astrophysics Data System (ADS)
Sherrod, B. L.; Blakely, R. J.; Weaver, C. S.
2015-12-01
One of the largest historic earthquakes in the Pacific Northwest occurred on 15 December 1872 (M6.5-7) near the south end of Lake Chelan in north-central Washington State. Lack of recognized surface deformation suggested that the earthquake occurred on a blind, perhaps deep, fault. New LiDAR data show landslides and a ~6 km long, NW-side-up scarp in Spencer Canyon, ~30 km south of Lake Chelan. Two landslides in Spencer Canyon impounded small ponds. An historical account indicated that dead trees were visible in one pond in AD1884. Wood from a snag in the pond yielded a calibrated age of AD1670-1940. Tree ring counts show that the oldest living trees on each landslide are 130 and 128 years old. The larger of the two landslides obliterated the scarp and thus, post-dates the last scarp-forming event. Two trenches across the scarp exposed a NW-dipping thrust fault. One trench exposed alluvial fan deposits, Mazama ash, and scarp colluvium cut by a single thrust fault. Three charcoal samples from a colluvium buried during the last fault displacement had calibrated ages between AD1680 and AD1940. The second trench exposed gneiss thrust over colluvium during at least two, and possibly three fault displacements. The younger of two charcoal samples collected from a colluvium below gneiss had a calibrated age of AD1665- AD1905. For an historical constraint, we assume that the lack of felt reports for large earthquakes in the period between 1872 and today indicates that no large earthquakes capable of rupturing the ground surface occurred in the region after the 1872 earthquake; thus the last displacement on the Spencer Canyon scarp cannot post-date the 1872 earthquake. Modeling of the age data suggests that the last displacement occurred between AD1840 and AD1890. These data, combined with the historical record, indicate that this fault is the source of the 1872 earthquake. Analyses of aeromagnetic data reveal lithologic contacts beneath the scarp that form an ENE-striking, curvilinear zone ~2.5 km wide and ~55 km long. This zone coincides with monoclines mapped in Mesozoic bedrock and Miocene flood basalts. This study ends uncertainty regarding the source of the 1872 earthquake and provides important information for seismic hazard analyses of major infrastructure projects in Washington and British Columbia.
Fault detection and fault tolerance in robotics
NASA Technical Reports Server (NTRS)
Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.
1992-01-01
Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
NASA Astrophysics Data System (ADS)
Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang
2017-10-01
Based on long-term vibration monitoring of the No. 2 oil-immersed flat wave reactor in the ±500 kV converter station in East Mongolia, the vibration signals in the normal state and in the core loose fault state were saved. Through time-frequency analysis of the signals, the vibration characteristics of the core loose fault were obtained, and a fault diagnosis method based on the dual-tree complex wavelet transform (DT-CWT) and support vector machine (SVM) was proposed. The vibration signals were analyzed by DT-CWT, and the energy entropies of the vibration signals were taken as the feature vector; the support vector machine was used to train and test the feature vector, and accurate identification of the core loose fault of the flat wave reactor was realized. Through the identification of many groups of normal and core loose fault state vibration signals, the diagnostic accuracy reached 97.36%. The effectiveness and accuracy of the method in fault diagnosis of the flat wave reactor core are verified.
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system and computer readable medium extends the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems, by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
Fault diagnosis of helical gearbox using acoustic signal and wavelets
NASA Astrophysics Data System (ADS)
Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.
2017-05-01
The efficient transmission of power in machines is needed, and gears are an appropriate choice. Faults in gears result in loss of energy and money. Monitoring and fault diagnosis are done by analysis of the acoustic and vibration signals, which are generally considered to be unwanted by-products. This study proposes the use of a machine learning algorithm for condition monitoring of a helical gearbox by using the sound signals produced by the gearbox. Artificial faults were created and subsequently signals were captured by a microphone. An extensive study using different wavelet transformations for feature extraction from the acoustic signals was done, followed by wavelet selection and feature selection using a J48 decision tree; feature classification was performed using the K-star algorithm. Classification accuracy of 100% was obtained in the study.
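A sketch of the feature pipeline described above, with PyWavelets' discrete wavelet decomposition standing in for the paper's wavelet variants and scikit-learn's decision tree standing in for J48/K-star; the synthetic "acoustic" signals and the fault signature are invented for illustration.

import numpy as np
import pywt
from sklearn.tree import DecisionTreeClassifier

def wavelet_energy_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])    # energy per sub-band

rng = np.random.default_rng(1)
features, labels = [], []
for fault in (0, 1):                        # 0 = healthy gear, 1 = faulty gear
    for _ in range(50):
        t = np.linspace(0, 1, 1024)
        s = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.normal(size=t.size)
        if fault:
            s += 0.5 * np.sin(2 * np.pi * 400 * t)        # fault adds a high-frequency band
        features.append(wavelet_energy_features(s))
        labels.append(fault)

clf = DecisionTreeClassifier().fit(features, labels)
print("training accuracy:", clf.score(features, labels))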
Probabilistic Seismic Hazard Assessment for a NPP in the Upper Rhine Graben, France
NASA Astrophysics Data System (ADS)
Clément, Christophe; Chartier, Thomas; Jomard, Hervé; Baize, Stéphane; Scotti, Oona; Cushing, Edward
2015-04-01
The southern part of the Upper Rhine Graben (URG), straddling the border between eastern France and western Germany, presents relatively high seismic activity for an intraplate area. An earthquake of magnitude 5 or greater shakes the URG every 25 years, and in 1356 an earthquake of magnitude greater than 6.5 struck the city of Basel. Several potentially active faults have been identified in the area and documented in the French Active Fault Database (website under construction). These faults are located along the Graben boundaries and also inside the Graben itself, beneath heavily populated areas and critical facilities (including the Fessenheim Nuclear Power Plant). These faults are prone to produce earthquakes of magnitude 6 and above. Published regional models and preliminary geomorphological investigations provided a provisional assessment of slip rates for the individual faults (0.1-0.001 mm/a), resulting in recurrence times of 10,000 years or greater for magnitude 6+ earthquakes. Using a fault model, ground motion response spectra are calculated for annual frequencies of exceedance (AFE) ranging from 10^-4 to 10^-8 per year, typical for design basis and probabilistic safety analyses of NPPs. A logic tree is implemented to evaluate uncertainties in the seismic hazard assessment. The choice of ground motion prediction equations (GMPEs) and the range of slip rate uncertainty are the main sources of seismic hazard variability at the NPP site. In fact, the hazard for AFE lower than 10^-4 is mostly controlled by the potentially active nearby Rhine River fault. Compared with areal source zone models, a fault model localizes the hazard around the active faults and changes the shape of the Uniform Hazard Spectrum at the site. Seismic hazard deaggregations are performed to identify the earthquake scenarios (including magnitude, distance, and the number of standard deviations from the median ground motion as predicted by GMPEs) that contribute to the exceedance of spectral acceleration for the different AFE levels. These scenarios are finally examined with respect to the seismicity data available in paleoseismic, historic, and instrumental catalogues.
Inferring patterns in mitochondrial DNA sequences through hypercube independent spanning trees.
Silva, Eduardo Sant Ana da; Pedrini, Helio
2016-03-01
Given a graph G, a set of spanning trees rooted at a vertex r of G is said to be vertex/edge independent if, for each vertex v of G, v≠r, the paths from r to v in any pair of trees are vertex/edge disjoint. Independent spanning trees (ISTs) provide a number of advantages in data broadcasting due to their fault-tolerant properties. For this reason, some studies have addressed the issue by providing mechanisms for constructing independent spanning trees efficiently. In this work, we investigate how to construct independent spanning trees on hypercubes, which are generated based upon spanning binomial trees, and how to use them to predict mitochondrial DNA sequence parts through paths on the hypercube. The prediction works both for inferring mitochondrial DNA sequences comprised of six bases and for inferring anomalies that probably should not belong to the mitochondrial DNA standard. Copyright © 2016 Elsevier Ltd. All rights reserved.
Towards generating ECSS-compliant fault tree analysis results via ConcertoFLA
NASA Astrophysics Data System (ADS)
Gallina, B.; Haider, Z.; Carlsson, A.
2018-05-01
Attitude Control Systems (ACSs) maintain the orientation of the satellite in three-dimensional space. ACSs need to be engineered in compliance with ECSS standards and need to ensure a certain degree of dependability. Thus, dependability analysis is conducted at various levels and by using ECSS-compliant techniques. Fault Tree Analysis (FTA) is one of these techniques. FTA is being automated within various Model Driven Engineering (MDE)-based methodologies. The tool-supported CHESS-methodology is one of them. This methodology incorporates ConcertoFLA, a dependability analysis technique enabling failure behavior analysis and thus FTA-results generation. ConcertoFLA, however, similarly to other techniques, still belongs to the academic research niche. To promote this technique within the space industry, we apply it on an ACS and discuss about its multi-faceted potentialities in the context of ECSS-compliant engineering.
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationships between ERP system components and specific risk factors. Unlike traditional risk management approaches that have mostly focused on meeting project budget and schedule objectives, the proposed approach addresses the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and to quantify the impact of critical component failures or critical risk events in the implementation process.
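An illustrative-only sketch of the kind of fault tree evaluation the paper describes: basic risk events roll up through AND/OR gates to a top event such as "ERP usage failure". The event names and probabilities below are hypothetical, not taken from the paper.

def or_gate(probs):                        # independent events
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    p = 1.0
    for q in probs:
        p *= q
    return p

p_data_migration_fails = or_gate([0.05, 0.08])       # e.g. bad legacy data, mapping errors
p_users_reject_system = or_gate([0.10, 0.04])        # e.g. poor training, workflow mismatch
p_top = or_gate([p_data_migration_fails,
                 and_gate([p_users_reject_system, 0.50])])   # rejection matters only if unmitigated
print(f"P(ERP usage failure) = {p_top:.3f}")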
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
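A sketch contrasting the two ingredients the abstract combines: an analytic OR-gate fault-tree probability and a plain Monte Carlo estimate of the same quantity. The failure probabilities are invented, and the acceleration scheme itself is not reproduced here.

import random

p_fail = {"transponder": 1e-3, "pilot_visual": 5e-3, "conflict_detection": 2e-3}

analytic = 1.0
for p in p_fail.values():
    analytic *= (1.0 - p)
analytic = 1.0 - analytic                   # probability that at least one fails

random.seed(0)
trials = 200_000
hits = sum(any(random.random() < p for p in p_fail.values()) for _ in range(trials))
print(f"fault-tree estimate {analytic:.4e}, Monte Carlo estimate {hits / trials:.4e}")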
Logic flowgraph methodology - A tool for modeling embedded systems
NASA Technical Reports Server (NTRS)
Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.
1991-01-01
The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of using such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.
Using certification trails to achieve software fault tolerance
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, they are accepted as correct; otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized and illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach was compared to other approaches to fault tolerance. Because of space limitations we have omitted examples of our technique applied to the Huffman tree and convex hull problems. These can be found in the full version of this paper.
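The paper's worked example is the minimum spanning tree problem; the much simpler sorting stand-in below is included only to show the shape of the certification-trail idea: the first phase solves the problem and leaves a trail, and the second phase re-derives the answer cheaply from the trail and flags an error if the trail is inconsistent.

def phase_one_sort(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])    # trail = claimed permutation
    return [xs[i] for i in order], order

def phase_two_check(xs, trail):
    if sorted(trail) != list(range(len(xs))):              # trail must be a permutation
        return None, "error: trail is not a permutation"
    result = [xs[i] for i in trail]
    if any(a > b for a, b in zip(result, result[1:])):     # cheap monotonicity check
        return None, "error: trail does not yield a sorted sequence"
    return result, "ok"

data = [5, 3, 8, 1]
result, trail = phase_one_sort(data)
print(phase_two_check(data, trail))        # agreement -> result accepted as correct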
Fuzzy risk analysis of a modern γ-ray industrial irradiator.
Castiglia, F; Giardina, M
2011-06-01
Fuzzy fault tree analyses were used to investigate accident scenarios that involve radiological exposure to operators working in industrial γ-ray irradiation facilities. The HEART method, a first generation human reliability analysis method, was used to evaluate the probability of adverse human error in these analyses. This technique was modified on the basis of fuzzy set theory to more directly take into account the uncertainties in the error-promoting factors on which the methodology is based. Moreover, with regard to some identified accident scenarios, fuzzy radiological exposure risk, expressed in terms of potential annual death, was evaluated. The calculated fuzzy risks for the examined plant were determined to be well below the reference risk suggested by International Commission on Radiological Protection.
Bodin, Paul; Bilham, Roger; Behr, Jeff; Gomberg, Joan; Hudnut, Kenneth W.
1994-01-01
Five out of six functioning creepmeters on southern California faults recorded slip triggered at the time of some or all of the three largest events of the 1992 Landers earthquake sequence. Digital creep data indicate that dextral slip was triggered within 1 min of each mainshock and that maximum slip velocities occurred 2 to 3 min later. The duration of triggered slip events ranged from a few hours to several weeks. We note that triggered slip occurs commonly on faults that exhibit fault creep. To account for the observation that slip can be triggered repeatedly on a fault, we propose that the amplitude of triggered slip may be proportional to the depth of slip in the creep event and to the available near-surface tectonic strain that would otherwise eventually be released as fault creep. We advance the notion that seismic surface waves, perhaps amplified by sediments, generate transient local conditions that favor the release of tectonic strain to varying depths. Synthetic strain seismograms are presented that suggest increased pore pressure during periods of fault-normal contraction may be responsible for triggered slip, since maximum dextral shear strain transients correspond to times of maximum fault-normal contraction.
Seismic hazard in the Istanbul metropolitan area: A preliminary re-evaluation
Kalkan, E.; Gulkan, Polat; Ozturk, N.Y.; Celebi, M.
2008-01-01
In 1999, two destructive earthquakes (M7.4 Kocaeli and M7.2 Duzce) occurred in the northwest of Turkey and resulted in major stress-drops on the western segment of the North Anatolian Fault system where it continues under the Marmara Sea. These undersea fault segments were recently explored using bathymetric and reflection surveys. These recent findings helped to reshape the seismotectonic environment of the Marmara basin, which is a perplexing tectonic domain. Based on the newly collected information, the seismic hazard of the Marmara region, particularly the Istanbul Metropolitan Area and its vicinity, was re-examined using a probabilistic approach. Two seismic source and alternate recurrence models combined with various indigenous and foreign attenuation relationships were adapted within a logic tree formulation to quantify and project the regional exposure on a set of hazard maps. The hazard maps show the peak horizontal ground acceleration and spectral acceleration at 1.0 s. These acceleration levels were computed for 2 and 10% probabilities of exceedance in 50 years.
Machine Learning of Fault Friction
NASA Astrophysics Data System (ADS)
Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.
2017-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprised of fault blocks surrounding fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774. Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field. Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
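A sketch of the regression setup described above: statistical features of windows of a continuous signal feed a tree ensemble that predicts a slowly varying target. Here a synthetic noise signal stands in for the acoustic emission and a synthetic sawtooth stands in for the shear stress cycle; nothing below reproduces the laboratory data or results.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_windows, window = 400, 256
stress = (np.arange(n_windows) % 100) / 100.0                 # sawtooth "stress cycle"
X = np.array([
    [np.var(w), np.mean(np.abs(w)), np.max(np.abs(w))]         # simple AE-like statistics
    for w in (rng.normal(scale=0.1 + s, size=window) for s in stress)
])

model = GradientBoostingRegressor().fit(X[:300], stress[:300])
print("R^2 on later windows:", round(model.score(X[300:], stress[300:]), 3))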
1983-04-01
[Scanned-report text garbled in extraction; recoverable fragments refer to diagnostic/fault isolation devices, cannibalization of assets, diagnostic software based on a "fault tree" representation of the M65 (a capability gap bridge demonstrated in 1980), identification friend or foe (IFF) equipment with much lower reliability than TSQ-73-peculiar hardware, and the observation that reported readiness does not reflect these factors.]
AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment
2014-10-01
[Report front matter and abstract garbled in extraction. Recoverable table-of-contents entries: 3.2.3 Mapping to OpenFTA Format File; 3.2.4 Mapping to Generic XML Format; 3.2.5 AADL and FTA Mapping Rules; 3.2.6 Issues. The safety assessment activities mentioned include Preliminary System Safety Assessment (PSSA), System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs).]
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used. These methods can detect the failure of rolling bearings effectively, but to achieve better detection results they often require a large number of training samples. Based on the above, a novel clustering method is proposed in this paper. This novel method is able to find the correct number of clusters automatically. The effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types of small samples. Meanwhile, the diagnosis results are also of relatively high accuracy even for massive samples.
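A hedged sketch of one common way to "find the correct number of clusters automatically" (silhouette-score model selection over k-means); the paper's own clustering rule is not specified here, and the bearing data are replaced by synthetic blobs.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)    # higher = better separated clusters

best_k = max(scores, key=scores.get)
print("selected number of clusters:", best_k)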
Fault Analysis on Bevel Gear Teeth Surface Damage of Aeroengine
NASA Astrophysics Data System (ADS)
Cheng, Li; Chen, Lishun; Li, Silu; Liang, Tao
2017-12-01
Aiming at the problem of bevel gear tooth surface damage in an aero-engine, a fault tree for bevel gear tooth surface damage was drawn according to the logical relations among events, and the possible causes of the failure were analyzed. Scanning electron microscopy, energy spectrum analysis, metallographic examination, hardness measurement, and other analysis means were adopted to investigate the spalled gear tooth. The results showed that the material composition, metallographic structure, micro-hardness, and carburization depth of the faulty bevel gear accord with the technical requirements. A contact fatigue spalling defect caused the bevel gear tooth surface damage; the main cause was the small magnitude of interference between the accessory gearbox installation hole and the driving bevel gear bearing seat. Improvement measures were proposed and, after verification, shown to be effective.
Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong
2016-01-01
Focusing on the diversity, complexity and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated based on the theory of the analytic hierarchy process and fuzzy mathematics. A fault tree of third-party damage containing 56 basic events was built by hazard identification of third-party damage. The fuzzy evaluation of basic event probabilities was conducted by the expert judgment method using membership functions of fuzzy sets. The determination of the weight of each expert and the modification of the evaluation opinions were accomplished using the improved analytic hierarchy process, and the failure probability of third-party damage to the urban gas pipeline was calculated. Taking the gas pipelines of a certain large provincial capital city as an example, the risk assessment results of the method were shown to conform to the actual situation, which provides a basis for safety risk prevention.
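An illustrative sketch of propagating triangular fuzzy basic-event probabilities through an OR gate, in the spirit of the fuzzy evaluation described above. The three events and their (low, mode, high) estimates are invented, not the paper's 56 basic events.

def fuzzy_or(events):
    """Each event is a (low, mode, high) triangular fuzzy probability; events assumed independent."""
    out = [1.0, 1.0, 1.0]
    for tri in events:
        out = [o * (1.0 - p) for o, p in zip(out, tri)]    # accumulate prod(1 - p) per bound
    return tuple(1.0 - o for o in out)

third_party_events = [
    (0.010, 0.030, 0.060),   # unauthorized excavation near the pipeline
    (0.005, 0.020, 0.040),   # missing or damaged warning markers
    (0.002, 0.010, 0.030),   # patrol / surveillance lapse
]
low, mode, high = fuzzy_or(third_party_events)
print(f"fuzzy failure probability (low, mode, high): ({low:.3f}, {mode:.3f}, {high:.3f})")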
Rizal, Datu; Tani, Shinichi; Nishiyama, Kimitoshi; Suzuki, Kazuhiko
2006-10-11
In this paper, a novel methodology for batch plant safety and reliability analysis is proposed using a dynamic simulator. A batch process involving several safety objects (e.g. sensors, controllers, valves, etc.) is activated during the operational stage. The performance of the safety objects is evaluated by dynamic simulation and a fault propagation model is generated. By using the fault propagation model, an improved fault tree analysis (FTA) method using switching signal mode (SSM) is developed for estimating the probability of failures. Time-dependent failures can be considered as unavailability of safety objects that can cause accidents in a plant. Finally, the rank of each safety object is formulated as a performance index (PI) and can be estimated using importance measures. PI shows the prioritization of safety objects that should be investigated in a safety improvement program in the plant. The output of this method can be used for optimal policy in safety object improvement and maintenance. The dynamic simulator was constructed using Visual Modeler (VM, the plant simulator developed by Omega Simulation Corp., Japan). A case study focuses on a loss of containment (LOC) incident in a polyvinyl chloride (PVC) batch process which consumes the hazardous material vinyl chloride monomer (VCM).
Goal-Function Tree Modeling for Systems Engineering and Fault Management
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Breckenridge, Jonathan T.
2013-01-01
This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT, and its ability to represent both nominal and off-nominal goals, enables the GFT to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and definition of system failure scenarios.
Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1994-01-01
The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
Enterprise architecture availability analysis using fault trees and stakeholder interviews
NASA Astrophysics Data System (ADS)
Närman, Per; Franke, Ulrik; König, Johan; Buschle, Markus; Ekstedt, Mathias
2014-01-01
The availability of enterprise information systems is a key concern for many organisations. This article describes a method for availability analysis based on Fault Tree Analysis and constructs from the ArchiMate enterprise architecture (EA) language. To test the quality of the method, several case studies within the banking and electrical utility industries were performed. Input data were collected through stakeholder interviews. The results from the case studies were compared with availability figures derived from log data to determine the accuracy of the method's predictions. In the five cases where accurate log data were available, the yearly downtime estimates were within eight hours of the actual downtimes. The cost of performing the analysis was low; no case study required more than 20 man-hours of work, making the method ideal for practitioners with an interest in obtaining rapid availability estimates of their enterprise information systems.
Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A
2001-10-12
As it is conventionally done, strategies for incorporating accident-prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps, risk assessment and hazard reduction (or safety) measures, are not linked interactively in the existing methodologies. This prevents a quantitative assessment of the impacts of safety measures on risk control. We have made an attempt to develop a methodology in which risk assessment steps are interactively linked with implementation of safety measures. The resultant system tells us the extent of reduction of risk by each successive safety measure. It also tells, based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA), whether a given unit can ever be made 'safe'. The application of the methodology has been illustrated with a case study.
Uncertainty analysis in fault tree models with dependent basic events.
Pedroni, Nicola; Zio, Enrico
2013-06-01
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects on the TE probability of objective and epistemic dependences. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present). © 2012 Society for Risk Analysis.
A fault tree model to assess probability of contaminant discharge from shipwrecks.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I
2014-11-15
Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Qualitative Importance Measures of Systems Components - A New Approach and Its Applications
NASA Astrophysics Data System (ADS)
Chybowski, Leszek; Gawdzińska, Katarzyna; Wiśnicki, Bogusz
2016-12-01
The paper presents an improved methodology for analysing the qualitative importance of components in the functional and reliability structures of the system. We present basic importance measures, i.e. Birnbaum's structural measure, the order of the smallest minimal cut set, the repetition count of the i-th event in the Fault Tree and the streams measure. A subsystem of circulation pumps and fuel heaters in the main engine fuel supply system of a container vessel illustrates the qualitative importance analysis. We constructed a functional model and a Fault Tree which we analysed using qualitative measures. Additionally, we compared the calculated measures and introduced corrected measures as a tool for improving the analysis. We proposed scaled measures and a common measure taking into account the location of the component in the reliability and functional structures. Finally, we proposed an area where the measures could be applied.
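The sketch below is an illustrative (not the authors') computation of three such qualitative measures from a list of minimal cut sets: Birnbaum's structural measure, the order of the smallest minimal cut set containing a component, and a repetition count approximated here as the number of cut sets containing the component; the cut sets and component names are made up.

```python
# Minimal sketch (illustrative): qualitative importance measures derived from
# minimal cut sets of a hypothetical fault tree.
from itertools import product

def system_fails(failed, cut_sets):
    """Structure function built from minimal cut sets: the system fails if
    every event of at least one cut set has failed."""
    return any(cs <= failed for cs in cut_sets)

def birnbaum_structural(component, components, cut_sets):
    """Fraction of the other components' states in which `component` is critical."""
    others = [c for c in components if c != component]
    critical = 0
    for states in product([False, True], repeat=len(others)):
        failed = {c for c, f in zip(others, states) if f}
        if system_fails(failed | {component}, cut_sets) and not system_fails(failed, cut_sets):
            critical += 1
    return critical / 2 ** len(others)

def smallest_cut_set_order(component, cut_sets):
    orders = [len(cs) for cs in cut_sets if component in cs]
    return min(orders) if orders else None

def repetition_count(component, cut_sets):
    return sum(component in cs for cs in cut_sets)

cut_sets = [{"pump_A", "pump_B"}, {"heater"}, {"pump_A", "valve"}]
components = sorted(set().union(*cut_sets))
for c in components:
    print(c, birnbaum_structural(c, components, cut_sets),
          smallest_cut_set_order(c, cut_sets), repetition_count(c, cut_sets))
```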
Schwartz, D.P.; Pantosti, D.; Okumura, K.; Powers, T.J.; Hamilton, J.C.
1998-01-01
Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that lower slip sections of large events such as 1906 must fill in on a periodic basis with smaller and more frequent earthquakes.
Information processing requirements for on-board monitoring of automatic landing
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Karmarkar, J. S.
1977-01-01
A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through use of appropriate statistical tests. The time-to-correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.
Analysis of LNG peakshaving-facility release-prevention systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelto, P.J.; Baker, E.G.; Powers, T.B.
1982-05-01
The purpose of this study is to provide an analysis of release prevention systems for a reference LNG peakshaving facility. An overview assessment of the reference peakshaving facility, which preceded this effort, identified 14 release scenarios which are typical of the potential hazards involved in the operation of LNG peakshaving facilities. These scenarios formed the basis for this more detailed study. Failure modes and effects analysis and fault tree analysis were used to estimate the expected frequency of each release scenario for the reference peakshaving facility. In addition, the effectiveness of release prevention, release detection, and release control systems were evaluated.
Moran, Michael J.; Wilson, Jon W.; Beard, L. Sue
2015-11-03
Several major faults, including the Salt Cedar Fault and the Palm Tree Fault, play an important role in the movement of groundwater. Groundwater may move along these faults and discharge where faults intersect volcanic breccias or fractured rock. Vertical movement of groundwater along faults is suggested as a mechanism for the introduction of heat energy present in groundwater from many of the springs. Groundwater altitudes in the study area indicate a potential for flow from Eldorado Valley to Black Canyon although current interpretations of the geology of this area do not favor such flow. If groundwater from Eldorado Valley discharges at springs in Black Canyon then the development of groundwater resources in Eldorado Valley could result in a decrease in discharge from the springs. Geology and structure indicate that it is not likely that groundwater can move between Detrital Valley and Black Canyon. Thus, the development of groundwater resources in Detrital Valley may not result in a decrease in discharge from springs in Black Canyon.
NASA Astrophysics Data System (ADS)
Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali
2017-07-01
The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis. Fault severity assessment is a challenging task. The Wavelet Transform (WT), as a multivariate analysis tool, is able to trade off between time and frequency information in the signals and serves as a de-noising method. The CWT scaling function gives different resolutions to the discrete signal, such as very fine resolution at lower scales but coarser resolution at higher scales. However, the computational cost increases because different signal resolutions must be produced. DWT has a lower computational cost, as the dilation function allows the signal to be decomposed through a tree of low- and high-pass filters without further analysing the high-frequency components. In this paper, a method for bearing fault identification is presented by combining the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT) with envelope analysis for bearing fault diagnosis. The experimental data were obtained from Case Western Reserve University. The analysis results showed that the proposed method is effective in bearing fault detection, identifying the exact fault location and assessing severity, especially for inner race and outer race faults.
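A minimal sketch of such a DWT-plus-envelope pipeline is given below, assuming NumPy, SciPy and PyWavelets and a synthetic stand-in for the measured vibration signal; the sampling rate, wavelet, decomposition level and chosen detail band are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (assumed pipeline, not the paper's code): DWT decomposition of
# a vibration signal followed by envelope analysis of one detail band.
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 12_000                                     # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)
# Synthetic stand-in for a faulty-bearing signal: a 4 kHz resonance amplitude-
# modulated at a made-up 100 Hz defect frequency, plus measurement noise.
signal = (1 + 0.8 * np.sin(2 * np.pi * 100 * t)) * np.sin(2 * np.pi * 4000 * t)
signal += 0.1 * np.random.randn(t.size)

# DWT: decompose through a tree of low/high-pass filters (cheap compared to CWT).
coeffs = pywt.wavedec(signal, wavelet="db4", level=4)

# Keep only the highest-frequency detail band (cD1, roughly 3-6 kHz here),
# reconstruct it, and compute its envelope spectrum; defect frequencies show up
# as peaks in that spectrum.
keep = len(coeffs) - 1
band = pywt.waverec([c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)],
                    wavelet="db4")[: signal.size]
envelope = np.abs(hilbert(band))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("dominant envelope frequency:", freqs[np.argmax(spectrum)], "Hz")
```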
Investigation of Fuel Oil/Lube Oil Spray Fires On Board Vessels. Volume 3.
1998-11-01
U.S. Coast Guard Research and Development Center 1082 Shennecossett Road, Groton, CT 06340-6096 Report No. CG-D-01-99, III Investigation of Fuel ...refinery). Developed the technical and mathematical specifications for BRAVO™2.0, a state-of-the-art Windows program for performing event tree and fault...tree analyses. Also managed the development of and prepared the technical specifications for QRA ROOTS™, a Windows program for storing, searching K-4
1992-01-01
boost plenum which houses the camshaft. The compressed mixture is metered by a throttle to intake valves of the engine. The engine is constructed from... difficulties associated with a time-tagged fault tree. In particular, recent work indicates that the multi-layer perceptron architecture can give good FDI... Abstract: In the past decade, wastepaper recycling has gained a wider acceptance. Depletion of tree stocks, waste water treatment demands and
CARE3MENU- A CARE III USER FRIENDLY INTERFACE
NASA Technical Reports Server (NTRS)
Pierce, J. L.
1994-01-01
CARE3MENU generates an input file for the CARE III program. CARE III is used for reliability prediction of complex, redundant, fault-tolerant systems including digital computers, aircraft, nuclear and chemical control systems. The CARE III input file often becomes complicated and is not easily formatted with a text editor. CARE3MENU provides an easy, interactive method of creating an input file by automatically formatting a set of user-supplied inputs for the CARE III system. CARE3MENU provides detailed on-line help for most of its screen formats. The reliability model input process is divided into sections using menu-driven screen displays. Each stage, or set of identical modules comprising the model, must be identified and described in terms of number of modules, minimum number of modules for stage operation, and critical fault threshold. The fault handling and fault occurrence models are detailed in several screens by parameters such as transition rates, propagation and detection densities, Weibull or exponential characteristics, and model accuracy. The system fault tree and critical pairs fault tree screens are used to define the governing logic and to identify modules affected by component failures. Additional CARE3MENU screens prompt the user for output options and run time control values such as mission time and truncation values. There are fourteen major screens, many with default values and HELP options. The documentation includes: 1) a user's guide with several examples of CARE III models, the dialog required to input them to CARE3MENU, and the output files created; and 2) a maintenance manual for assistance in changing the HELP files and modifying any of the menu formats or contents. CARE3MENU is written in FORTRAN 77 for interactive execution and has been implemented on a DEC VAX series computer operating under VMS. This program was developed in 1985.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario
2015-04-01
The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components, exploiting the physical knowledge of the SOFC system as a whole. This phase consists of the Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at the system component level. The main outcome of the FTA is an inferential isolation tool (Fault Signature Matrix - FSM), which unambiguously links the faults to the symptoms detected during system monitoring. In this work the FTA is considered as a starting point to develop an improved FSM. Making use of a model-based investigation, a fault-to-symptoms dependency study is performed. For this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated, one for the stack and four occurring at the balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
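A Fault Signature Matrix can be thought of as a binary fault-to-symptom table used for isolation; the sketch below is purely illustrative, with hypothetical fault and symptom names rather than those of the SOFC system studied in the paper.

```python
# Minimal sketch (illustrative): an FSM links each fault to the symptoms it is
# expected to trigger; isolation matches the observed symptom pattern against
# the fault signatures. All names and entries are hypothetical.
import numpy as np

faults = ["stack_degradation", "air_blower_fault", "fuel_leak", "heat_exch_fouling"]
symptoms = ["low_stack_voltage", "high_blower_speed", "low_fuel_pressure", "high_outlet_temp"]

# fsm[i, j] == 1 means fault i is expected to raise symptom j.
fsm = np.array([[1, 0, 0, 1],
                [1, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 0, 0, 1]])

def isolate(observed):
    """observed: binary vector of detected symptoms (e.g. from threshold checks)."""
    observed = np.asarray(observed)
    matches = [f for f, signature in zip(faults, fsm) if np.array_equal(signature, observed)]
    return matches or ["no unique match - refine thresholds or FSM"]

print(isolate([1, 0, 1, 0]))   # -> ['fuel_leak']
```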
TU-AB-BRD-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunscombe, P.
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are all about; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis and fault tree analysis. Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
Selected considerations of implementation of the GNSS
NASA Astrophysics Data System (ADS)
Cwiklak, Janusz; Fellner, Andrzej; Fellner, Radoslaw; Jafernik, Henryk; Sledzinski, Janusz
2014-05-01
The article describes an analysis of the safety and risk for the implementation of precise approach procedures (Localizer Performance with Vertical Guidance - LPV) with a GNSS sensor at airports in Warsaw and Katowice. Several techniques were used for the identification of threats (including controlled flight into terrain, landing accident, and mid-air collision) and for their evaluation, based on Fault Tree Analysis, risk probability, a safety risk evaluation matrix and Functional Hazard Assessment. Safety goals were also determined. The research determined the probabilities of the threats occurring and allowed them to be compared with those for the ILS. As a result of conducting the Preliminary System Safety Assessment (PSSA), requirements essential to reach the required level of safety were defined. It is worth underlining that the quantitative requirements were defined using FTA.
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is exclusively based on active faults and geodynamic finite element input models, whereas seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slow-deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters together with estimations of its slip rate. By default in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is, besides along the active faults, released also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and the final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters through constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results and to what extent. The results of this comparison show that the deformation model, with its internal variability, and the choice of the ground motion prediction equations (GMPEs) are the most influential parameters. Both of these parameters have a significant effect on the hazard results. Thus, good knowledge of the existence of active faults and their geometric and activity characteristics is of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
A fault is born: The Landers-Mojave earthquake line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nur, A.; Ron, H.
1993-04-01
The epicenter and the southern portion of the 1992 Landers earthquake fell on an approximately N-S earthquake line, defined by both epicentral locations and by the rupture directions of four previous M>5 earthquakes in the Mojave: the 1947 Manix, 1975 Galway Lake, 1979 Homestead Valley, and 1992 Joshua Tree events. Another M 5.2 earthquake epicenter in 1965 fell on this line where it intersects the Calico fault. In contrast, the northern part of the Landers rupture followed the NW-SE trending Camp Rock and parallel faults, exhibiting an apparently unusual rupture kink. The block tectonic model (Ron et al., 1984), combining fault kinematics and mechanics, explains both the alignment of the events and their ruptures (Nur et al., 1986, 1989), as well as the Landers kink (Nur et al., 1992). Accordingly, the now NW oriented faults have rotated into their present direction away from the direction of maximum shortening, close to becoming locked, whereas a new fault set, optimally oriented relative to the direction of shortening, is developing to accommodate current crustal deformation. The Mojave-Landers line may thus be a new fault in formation. During the transition of faulting from the old, well developed and weak but poorly oriented faults to the strong, but favorably oriented new ones, both can slip simultaneously, giving rise to kinks such as Landers.
An Application of the Geo-Semantic Micro-services in Seamless Data-Model Integration
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Liu, R.; Hu, Y.; Marini, L.; Peckham, S. D.; Hsu, L.
2016-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to the AE data in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprised of fault blocks surrounding fault gouge made of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774; Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field; Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
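A minimal sketch of the tree-ensemble regression workflow described above is given below, using scikit-learn's Random Forest and Gradient Boosted Trees on synthetic stand-ins for windowed AE features and the measured shear stress; the features and data are invented for illustration.

```python
# Minimal sketch (assumed workflow, not the authors' code): train tree-ensemble
# regressors on statistical features of acoustic-emission windows to predict a
# friction-related target, then inspect feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_windows = 2000
# Hypothetical per-window AE features (e.g. variance, higher-order statistics).
X = rng.normal(size=(n_windows, 5))
# Stand-in for the recorded shear stress / friction signal.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n_windows)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (RandomForestRegressor(n_estimators=300, random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "R2:", round(r2_score(y_te, model.predict(X_te)), 3))
    print("  feature importances:", np.round(model.feature_importances_, 2))
```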
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
A probabilistic risk assessment (PRA) was made of the Browns Ferry, Unit 1, nuclear plant as part of the Nuclear Regulatory Commission's Interim Reliability Evaluation Program (IREP). Specific goals of the study were to identify the dominant contributors to core melt, develop a foundation for more extensive use of PRA methods, expand the cadre of experienced PRA practitioners, and apply procedures for extension of IREP analyses to other domestic light water reactors. Event tree and fault tree analyses were used to estimate the frequency of accident sequences initiated by transients and loss of coolant accidents. External events such as floods, fires, earthquakes, and sabotage were beyond the scope of this study and were, therefore, excluded. From these sequences, the dominant contributors to probable core melt frequency were chosen. Uncertainty and sensitivity analyses were performed on these sequences to better understand the limitations associated with the estimated sequence frequencies. Dominant sequences were grouped according to common containment failure modes and corresponding release categories on the basis of comparison with analyses of similar designs rather than on the basis of detailed plant-specific calculations.
Risk Analysis Methods for Deepwater Port Oil Transfer Systems
DOT National Transportation Integrated Search
1976-06-01
This report deals with the risk analysis methodology for oil spills from the oil transfer systems in deepwater ports. Failure mode and effect analysis in combination with fault tree analysis are identified as the methods best suited for the assessment...
Jones, A Kyle; Heintz, Philip; Geiser, William; Goldman, Lee; Jerjian, Khachig; Martin, Melissa; Peck, Donald; Pfeiffer, Douglas; Ranger, Nicole; Yorkston, John
2015-11-01
Quality control (QC) in medical imaging is an ongoing process and not just a series of infrequent evaluations of medical imaging equipment. The QC process involves designing and implementing a QC program, collecting and analyzing data, investigating results that are outside the acceptance levels for the QC program, and taking corrective action to bring these results back to an acceptable level. The QC process involves key personnel in the imaging department, including the radiologist, radiologic technologist, and the qualified medical physicist (QMP). The QMP performs detailed equipment evaluations and helps with oversight of the QC program, while the radiologic technologist is responsible for the day-to-day operation of the QC program. The continued need for ongoing QC in digital radiography has been highlighted in the scientific literature. The charge of this task group was to recommend consistency tests designed to be performed by a medical physicist or a radiologic technologist under the direction of a medical physicist to identify problems with an imaging system that need further evaluation by a medical physicist, including a fault tree to define actions that need to be taken when certain fault conditions are identified. The focus of this final report is the ongoing QC process, including rejected image analysis, exposure analysis, and artifact identification. These QC tasks are vital for the optimal operation of a department performing digital radiography.
Study on the evaluation method for fault displacement based on characterized source model
NASA Astrophysics Data System (ADS)
Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.
2016-12-01
IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, using the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX to SSG-9 on realizing seismic hazard assessment for nuclear facilities, which shows the utility of the deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance safety in nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of the evaluation methods for fault displacement. After that, we show parts of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method which combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on the characterized source model. This research was part of the 2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.
A-Priori Rupture Models for Northern California Type-A Faults
Wills, Chris J.; Weldon, Ray J.; Field, Edward H.
2008-01-01
This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WCCEP, 2003, referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event, or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon and Concord-Green Valley) be modeled as Type B faults to be consistent with similarly poorly-known faults statewide. As a result, the modified segmented models discussed here only concern the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive level of effort by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-priori models. We have modified the WGCEP-2002 models where necessary to match data that were not available or not used by that WGCEP and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This appendix presents the minor changes needed to accommodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters and earthquake probabilities in the WG-2002 report.
Methodology for Designing Fault-Protection Software
NASA Technical Reports Server (NTRS)
Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin
2006-01-01
A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery; and has been successfully implemented in the Deep Impact Spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notions of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion, categorized as no-opinion, acceptable, or unacceptable. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of a Symptom and its mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized step-by-step fashion, relegating more system-level response to later tier(s). Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, a MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic, logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a fault-tree analysis "top-down" approach, and a functional failure-modes-and-effects analysis via a "bottom-up" approach. Via this process, the mitigation and recovery strategies for each Fault Containment Region define the scope (width versus depth) of the FP architecture.
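The sketch below is an illustrative, greatly simplified rendering of the Monitor, Opinion, Alarm and tiered Response chain described above; it is not the Deep Impact flight software, and all names, thresholds and responses are hypothetical.

```python
# Minimal sketch (illustrative only): a Monitor produces an Opinion from raw
# telemetry; an unacceptable opinion maps to an Alarm whose tiered responses
# are attempted, localized tiers first.
from dataclasses import dataclass, field

@dataclass
class Monitor:
    name: str
    threshold: float
    def opinion(self, raw: float) -> str:
        # RawOpinion graduates into an Opinion category.
        if raw != raw:                     # NaN -> no basis for an opinion
            return "no-opinion"
        return "unacceptable" if raw > self.threshold else "acceptable"

@dataclass
class Alarm:
    name: str
    responses: list = field(default_factory=list)   # ordered tiers: local first

def run_fp_cycle(monitors, alarms, telemetry):
    for mon in monitors:
        if mon.opinion(telemetry[mon.name]) == "unacceptable":
            alarm = alarms[mon.name]                 # 1-to-1 mapping for simplicity
            tier0 = alarm.responses[0]               # early tier: localized recovery
            print(f"{alarm.name}: executing tier-0 response '{tier0}'")

monitors = [Monitor("battery_temp", threshold=45.0)]
alarms = {"battery_temp": Alarm("BATTERY_OVERTEMP", ["cycle heater", "safe mode"])}
run_fp_cycle(monitors, alarms, {"battery_temp": 51.2})
```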
Automated Generation of Fault Management Artifacts from a Simple System Model
NASA Technical Reports Server (NTRS)
Kennedy, Andrew K.; Day, John C.
2013-01-01
Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
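As an illustration of the general idea (not the SMAP model or the actual tool), the sketch below walks a small hand-written component model and emits FMEA rows as CSV; the component names, failure modes and effects are hypothetical.

```python
# Minimal sketch (illustrative): generate FMEA rows by traversing a simple
# component model, in the spirit of querying a system representation.
import csv
import sys

model = {
    "radar_instrument": {
        "controls": ["antenna_spin"],
        "failure_modes": {"no_output": "loss of science data"},
    },
    "antenna_spin": {
        "controls": [],
        "failure_modes": {"stuck": "degraded swath coverage"},
    },
}

def fmea_rows(model):
    for component, spec in model.items():
        for mode, local_effect in spec["failure_modes"].items():
            # Propagation stand-in: a failure also affects what this component controls.
            downstream = ", ".join(spec["controls"]) or "-"
            yield {"component": component, "failure_mode": mode,
                   "local_effect": local_effect, "affects": downstream}

writer = csv.DictWriter(sys.stdout, fieldnames=["component", "failure_mode",
                                                "local_effect", "affects"])
writer.writeheader()
for row in fmea_rows(model):
    writer.writerow(row)
```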
Naive Bayes Bearing Fault Diagnosis Based on Enhanced Independence of Data
Zhang, Nannan; Wu, Lifeng; Yang, Jing; Guan, Yong
2018-01-01
The bearing is the key component of rotating machinery, and its performance directly determines the reliability and safety of the system. Data-based bearing fault diagnosis has become a research hotspot. Naive Bayes (NB), which is based on an independence presumption, is widely used in fault diagnosis. However, bearing data are not completely independent, which reduces the performance of NB algorithms. In order to solve this problem, we propose an NB bearing fault diagnosis method based on enhanced independence of data. The method deals with the data vector from two aspects: the attribute features and the sample dimension. After processing, the classification limitation imposed on NB by the independence hypothesis is reduced. First, we effectively extract the statistical characteristics of the original bearing signal. Then, the Decision Tree algorithm is used to select the important features of the time-domain signal, and low-correlation features are selected. Next, the Selective Support Vector Machine (SSVM) is used to prune the dimension data and remove redundant vectors. Finally, we use NB to diagnose the fault with the low-correlation data. The experimental results show that the independence enhancement of data is effective for bearing fault diagnosis. PMID:29401730
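A minimal sketch of tree-based feature selection followed by Naive Bayes classification is shown below on synthetic stand-in data; the SSVM sample-pruning step is omitted, and the feature count and labels are invented for illustration.

```python
# Minimal sketch (assumed pipeline, not the paper's code): decision-tree feature
# ranking to pick informative statistical features, then Gaussian Naive Bayes
# for bearing-state classification.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 12))                    # 12 time-domain statistics per sample
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # stand-in normal / faulty labels

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
top = np.argsort(tree.feature_importances_)[::-1][:4]   # keep the 4 most informative features
X_sel = X[:, top]

print("selected feature indices:", top)
print("NB accuracy (5-fold):", cross_val_score(GaussianNB(), X_sel, y, cv=5).mean().round(3))
```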
NASA Astrophysics Data System (ADS)
Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu
2016-01-01
This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS) and improved support vector machine based binary tree (ISVM-BT). When the fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE that is designed to avoid the inaccurate estimation of fuzzy entropy can be utilized to quantify the complexity and self-similarity of time series for a range of scales based on fuzzy entropy. Besides, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically fulfill the fault pattern identifications. The experimental results validate the effectiveness of the methodology and demonstrate that proposed algorithm can be applied to recognize the different categories and severities of rolling bearings.
Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T
2018-03-05
Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
NASA Astrophysics Data System (ADS)
Krechowicz, Maria
2017-10-01
Nowadays, one of the characteristic features of the construction industry is the increased complexity of a growing number of projects. Almost every construction project is unique: it has its project-specific purpose, its own structural complexity, owner's expectations, ground conditions unique to a certain location, and its own dynamics. Failure costs and costs resulting from unforeseen problems in complex construction projects are very high. Project complexity drivers pose many vulnerabilities to the successful completion of a number of projects. This paper discusses the process of effective risk management in complex construction projects in which renewable energy sources were used, on the example of the realization phase of the ENERGIS teaching-laboratory building, from the point of view of DORBUD S.A., its general contractor. This paper suggests a new approach to risk management for complex construction projects in which renewable energy sources are applied. The risk management process was divided into six stages: gathering information, identification of the top critical project risks resulting from the project complexity, construction of the fault tree for each top critical risk, logical analysis of the fault tree, quantitative risk assessment applying fuzzy logic, and development of a risk response strategy. A new methodology for the qualitative and quantitative assessment of top critical risks in complex construction projects was developed. Risk assessment was carried out applying Fuzzy Fault Tree analysis on the example of one top critical risk. Application of fuzzy set theory to the proposed model allowed uncertainty to be decreased and eliminated the problems with obtaining crisp values of basic event probabilities that are common during expert risk assessment, with the objective of giving an exact risk score for each unwanted event probability.
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the reliability problem was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator's mistakes, physical damage, and design problems. The analytical methods used were minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been used in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
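To illustrate the two evaluation routes mentioned above, the sketch below computes a top-event probability from a set of minimal cut sets both with a simple analytic bound and by Monte Carlo sampling of independent basic events; the events, probabilities and cut sets are hypothetical, not the Tehran West Town data.

```python
# Minimal sketch (illustrative): top-event probability for a fault tree given
# its minimal cut sets, analytically and by Monte Carlo simulation.
import numpy as np

p = {"operator_error": 0.02, "aeration_failure": 0.01,
     "clarifier_overload": 0.015, "sensor_fault": 0.03}
cut_sets = [{"operator_error"},
            {"aeration_failure", "clarifier_overload"},
            {"sensor_fault", "clarifier_overload"}]

# Upper bound: P(TE) <= 1 - prod(1 - P(cut set)); exact if cut sets were independent.
p_cs = [np.prod([p[e] for e in cs]) for cs in cut_sets]
print("analytic (MCS upper bound):", 1 - np.prod([1 - q for q in p_cs]))

# Monte Carlo: sample basic-event states, check whether any cut set is fully failed.
rng = np.random.default_rng(0)
n = 200_000
states = {e: rng.random(n) < pe for e, pe in p.items()}
top = np.zeros(n, dtype=bool)
for cs in cut_sets:
    top |= np.logical_and.reduce([states[e] for e in cs])
print("Monte Carlo estimate:", top.mean())
```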
Jetter, J J; Forte, R; Rubenstein, R
2001-02-01
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servicing. The number of refrigerant exposures of service technicians was estimated to be 135,000 per year. Exposures of vehicle occupants can occur when refrigerant enters passenger compartments due to sudden leaks in air-conditioning systems, leaks following servicing, or leaks caused by collisions. The total number of exposures of vehicle occupants was estimated to be 3,600 per year. The largest number of exposures of vehicle occupants was estimated for leaks caused by collisions, and the second largest number of exposures was estimated for leaks following servicing. Estimates used in the fault tree analysis were based on a survey of automotive air-conditioning service shops, the best available data from the literature, and the engineering judgement of the authors and expert reviewers from the Society of Automotive Engineers Interior Climate Control Standards Committee. Exposure concentrations and durations were estimated and compared with toxicity data for refrigerants currently used in automotive air conditioners. Uncertainty was high for the estimated numbers of exposures, exposure concentrations, and exposure durations. Uncertainty could be reduced in the future by conducting more extensive surveys, measurements of refrigerant concentrations, and exposure monitoring. Nevertheless, the analysis indicated that the risk of exposure of service technicians and vehicle occupants is significant, and it is recommended that no refrigerant that is substantially more toxic than currently available substitutes be accepted for use in vehicle air-conditioning systems, absent a means of mitigating exposure.
Naghibi, Seyed Amir; Pourghasemi, Hamid Reza; Dixon, Barnali
2016-01-01
Groundwater is considered one of the most valuable fresh water resources. The main objective of this study was to produce groundwater spring potential maps in the Koohrang Watershed, Chaharmahal-e-Bakhtiari Province, Iran, using three machine learning models: boosted regression tree (BRT), classification and regression tree (CART), and random forest (RF). Thirteen hydrological-geological-physiographical (HGP) factors that influence locations of springs were considered in this research. These factors include slope degree, slope aspect, altitude, topographic wetness index (TWI), slope length (LS), plan curvature, profile curvature, distance to rivers, distance to faults, lithology, land use, drainage density, and fault density. Subsequently, groundwater spring potential was modeled and mapped using CART, RF, and BRT algorithms. The predicted results from the three models were validated using the receiver operating characteristics curve (ROC). From 864 springs identified, 605 (≈70 %) locations were used for the spring potential mapping, while the remaining 259 (≈30 %) springs were used for the model validation. The area under the curve (AUC) for the BRT model was calculated as 0.8103 and for CART and RF the AUC were 0.7870 and 0.7119, respectively. Therefore, it was concluded that the BRT model produced the best prediction results while predicting locations of springs followed by CART and RF models, respectively. Geospatially integrated BRT, CART, and RF methods proved to be useful in generating the spring potential map (SPM) with reasonable accuracy.
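A minimal sketch of the compared workflow, assuming scikit-learn analogues (GradientBoostingClassifier for BRT, DecisionTreeClassifier for CART, RandomForestClassifier for RF) and synthetic stand-in data for the thirteen conditioning factors, is shown below; the 70/30 split and AUC evaluation mirror the study design, but the printed numbers are meaningless beyond illustration.

```python
# Minimal sketch (assumed workflow): fit three tree-based classifiers on
# spring-conditioning factors and compare AUC on a 30% hold-out set.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(864, 13))                    # 864 locations x 13 factors (synthetic)
y = (X[:, 0] - 0.8 * X[:, 5] + rng.normal(scale=0.8, size=864) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
models = {"BRT": GradientBoostingClassifier(random_state=0),
          "CART": DecisionTreeClassifier(max_depth=5, random_state=0),
          "RF": RandomForestClassifier(n_estimators=300, random_state=0)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, "AUC:", round(auc, 3))
```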
Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand
2018-05-09
This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), the probabilities of all the basic events (BEs) should be available when the FTA is drawn. In this case, expert judgment can be used as an alternative to failure data when such data are lacking. The fuzzy analytical hierarchy process is used as a standard technique to give a specific weight to each expert, and fuzzy set theory is employed for aggregating expert opinions. In this regard, the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost and benefit), the importance measurement technique and modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
NASA Astrophysics Data System (ADS)
Guan, Yifeng; Zhao, Jie; Shi, Tengfei; Zhu, Peipei
2016-09-01
In recent years, China's increased interest in environmental protection has led to a promotion of energy-efficient dual fuel (diesel/natural gas) ships on Chinese inland rivers. Natural gas as a ship fuel may pose dangers of fire and explosion if a gas leak occurs. If explosions or fires occur in the engine rooms of a ship, heavy damage and losses will be incurred. In this paper, a fault tree model is presented that considers both fires and explosions in a dual fuel ship; in this model, the dual fuel engine rooms are the top events. All the basic events along with the minimum cut sets are obtained through the analysis. The primary factors that affect accidents involving fires and explosions are determined by calculating the degree of structural importance of the basic events. According to these results, corresponding measures are proposed to ensure and improve the safety and reliability of Chinese inland dual fuel ships.
Kingman, D M; Field, W E
2005-11-01
Findings reported by researchers at Illinois State University and Purdue University indicated that since 1980, an average of eight individuals per year have become engulfed and died in farm grain bins in the U.S. and Canada and that all these deaths are significant because they are believed to be preventable. During a recent effort to develop intervention strategies and recommendations for an ASAE farm grain bin safety standard, fault tree analysis (FTA) was utilized to identify contributing factors to engulfments in grain stored in on-farm grain bins. FTA diagrams provided a spatial perspective of the circumstances that occurred prior to engulfment incidents, a perspective never before presented in other hazard analyses. The FTA also demonstrated relationships and interrelationships of the contributing factors. FTA is a useful tool that should be applied more often in agricultural incident investigations to assist in the more complete understanding of the problem studied.
Fault tree analysis for data-loss in long-term monitoring networks.
Dirksen, J; ten Veldhuis, J A E; Schilperoort, R P S
2009-01-01
Prevention of data-loss is an important aspect in the design as well as the operational phase of monitoring networks since data-loss can seriously limit intended information yield. In the literature limited attention has been paid to the origin of unreliable or doubtful data from monitoring networks. Better understanding of causes of data-loss points out effective solutions to increase data yield. This paper introduces FTA as a diagnostic tool to systematically deduce causes of data-loss in long-term monitoring networks in urban drainage systems. In order to illustrate the effectiveness of FTA, a fault tree is developed for a monitoring network and FTA is applied to analyze the data yield of a UV/VIS submersible spectrophotometer. Although some of the causes of data-loss cannot be recovered because the historical database of metadata has been updated infrequently, the example points out that FTA still is a powerful tool to analyze the causes of data-loss and provides useful information on effective data-loss prevention.
Accurate reliability analysis method for quantum-dot cellular automata circuits
NASA Astrophysics Data System (ADS)
Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo
2015-10-01
The probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of the novel field-coupled nanoelectronic device known as quantum-dot cellular automata (QCA). It is therefore difficult to obtain accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices according to different input signals. The binary decision diagram (BDD) is then used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly, and the crucial components of a circuit can be located precisely based on the importance values (IVs) of the components. This method thus contributes to the construction of reliable QCA circuits.
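The BDD-based quantification mentioned in the abstract ultimately rests on Shannon decomposition of the structure function over the basic-event variables. The sketch below shows that recursion for an assumed 2-out-of-3 structure function with placeholder fault probabilities; it illustrates the computation only and is not the paper's QCA gate model.

# Sketch: top-event probability by Shannon decomposition over the basic-event
# variables -- the recursion that a binary decision diagram (BDD) encodes.
# The structure function and probabilities are assumptions for illustration.

def shannon_prob(f, variables, probs, assignment=()):
    """P(f = 1) by Shannon decomposition over the remaining variables."""
    if len(assignment) == len(variables):
        return float(f(assignment))
    p = probs[variables[len(assignment)]]
    return (p * shannon_prob(f, variables, probs, assignment + (True,))
            + (1 - p) * shannon_prob(f, variables, probs, assignment + (False,)))

# Assumed 2-out-of-3 structure function: the top event occurs when at least
# two of the three basic faults a, b, c are present.
def top(assign):
    a, b, c = assign
    return (a and b) or (a and c) or (b and c)

probs = {"a": 0.02, "b": 0.03, "c": 0.01}
print(shannon_prob(top, ["a", "b", "c"], probs))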
NASA Astrophysics Data System (ADS)
Pham, Binh Thai; Prakash, Indra; Tien Bui, Dieu
2018-02-01
A hybrid machine learning approach of Random Subspace (RSS) and Classification And Regression Trees (CART) is proposed to develop a model named RSSCART for spatial prediction of landslides. This model is a combination of the RSS method, which is known as an efficient ensemble technique, and CART, which is a state-of-the-art classifier. The Luc Yen district of Yen Bai province, a prominent landslide prone area of Viet Nam, was selected for the model development. Performance of the RSSCART model was evaluated through the Receiver Operating Characteristic (ROC) curve, statistical analysis methods, and the Chi Square test. Results were compared with other benchmark landslide models, namely Support Vector Machines (SVM), single CART, Naïve Bayes Trees (NBT), and Logistic Regression (LR). In the development of the model, ten important landslide-affecting factors related to geomorphology, geology, and the geo-environment were considered, namely slope angles, elevation, slope aspect, curvature, lithology, distance to faults, distance to rivers, distance to roads, and rainfall. Performance of the RSSCART model (AUC = 0.841) is the best compared with the other popular landslide models, namely SVM (0.835), single CART (0.822), NBT (0.821), and LR (0.723). These results indicate that the RSSCART model is a promising method for spatial landslide prediction.
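A Random Subspace ensemble of CART trees, in the spirit of the RSSCART model described above, can be sketched with scikit-learn's BaggingClassifier configured to subsample features rather than samples (scikit-learn >= 1.2 is assumed for the estimator keyword); the synthetic data stand in for the conditioning factors.

# Sketch of a Random-Subspace ensemble of CART trees (an RSSCART-style model),
# evaluated with ROC/AUC on synthetic data standing in for landslide factors.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                      # 10 hypothetical conditioning factors
y = (X[:, 1] - 0.8 * X[:, 4] + rng.normal(size=500) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Random Subspace: each CART tree is trained on a random subset of the factors.
rss_cart = BaggingClassifier(
    estimator=DecisionTreeClassifier(),             # base CART classifier
    n_estimators=100,
    max_features=0.6,                               # feature subspace per tree
    bootstrap=False,                                # subsample features, not samples
    random_state=1,
)
rss_cart.fit(X_tr, y_tr)
print("AUC =", round(roc_auc_score(y_te, rss_cart.predict_proba(X_te)[:, 1]), 3))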
NASA Astrophysics Data System (ADS)
Riyadi, Eko H.
2014-09-01
An initiating event is defined as any event, either internal or external to the nuclear power plant (NPP), that perturbs the steady-state operation of the plant, if operating, thereby initiating an abnormal event such as a transient or a loss of coolant accident (LOCA) within the NPP. These initiating events trigger sequences of events that challenge plant control and safety systems whose failure could potentially lead to core damage or large early release. Selection of initiating events consists of two steps: first, definition of possible events, for example by carrying out a comprehensive engineering evaluation and by constructing a top-level logic model; second, grouping of the identified initiating events by the safety function to be performed or by combinations of system responses. Therefore, the purpose of this paper is to discuss initiating event identification in the event tree development process and to review other probabilistic safety assessments (PSA). The identification of initiating events also draws on past operating experience, review of other PSAs, failure mode and effect analysis (FMEA), feedback from system modeling, and the master logic diagram (a special type of fault tree). By applying this approach to the traditional US PSA categorization in detail, the important initiating events could be obtained and categorized into LOCA, transients, and external events.
Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong
2016-01-01
Focusing on the diversity, complexity, and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated using the theory of the analytic hierarchy process and fuzzy mathematics. The fault tree of third-party damage, containing 56 basic events, was built through hazard identification of third-party damage. Fuzzy evaluations of the basic event probabilities were conducted by the expert judgment method using membership functions of fuzzy sets. The weight of each expert was determined and the evaluation opinions were modified using the improved analytic hierarchy process, and the failure probability of third-party damage to the urban gas pipeline was calculated. Taking the gas pipelines of a large provincial capital city as an example, the risk assessment structure of the method was shown to conform to the actual situation, which provides a basis for safety risk prevention. PMID:27875545
Detection of CMOS bridging faults using minimal stuck-at fault test sets
NASA Technical Reports Server (NTRS)
Ijaz, Nabeel; Frenzel, James F.
1993-01-01
The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2012 CFR
2012-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2014 CFR
2014-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix D to Part 236 - Independent Review of Verification and Validation
Code of Federal Regulations, 2010 CFR
2010-10-01
... standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure Mode and Effects... for each product vulnerability cited by the reviewer; (4) Identification of any documentation or... not properly followed; (6) Identification of the software verification and validation procedures, as...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
Toward a Model-Based Approach for Flight System Fault Protection
NASA Technical Reports Server (NTRS)
Day, John; Meakin, Peter; Murray, Alex
2012-01-01
Use SysML/UML to describe the physical structure of the system. This part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. Use the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the FP domain; this extends the UML/SysML languages to contain our FP concepts. Use UML/SysML, along with our profile, to capture FP concepts and relationships in the model. Generate typical FP engineering products (the FMECA, Fault Tree, MRD, V&V Matrices).
Preliminary Isostatic Gravity Map of Joshua Tree National Park and Vicinity, Southern California
Langenheim, V.E.; Biehler, Shawn; McPhee, D.K.; McCabe, C.A.; Watt, J.T.; Anderson, M.L.; Chuchel, B.A.; Stoffer, P.
2007-01-01
This isostatic residual gravity map is part of an effort to map the three-dimensional distribution of rocks in Joshua Tree National Park, southern California. This map will serve as a basis for modeling the shape of basins beneath the Park and in adjacent valleys and also for determining the location and geometry of faults within the area. Local spatial variations in the Earth's gravity field, after accounting for variations caused by elevation, terrain, and deep crustal structure, reflect the distribution of densities in the mid- to upper crust. Densities often can be related to rock type, and abrupt spatial changes in density commonly mark lithologic or structural boundaries. High-density basement rocks exposed within the Eastern Transverse Ranges include crystalline rocks that range in age from Proterozoic to Mesozoic and these rocks are generally present in the mountainous areas of the quadrangle. Alluvial sediments, usually located in the valleys, and Tertiary sedimentary rocks are characterized by low densities. However, with increasing depth of burial and age, the densities of these rocks may become indistinguishable from those of basement rocks. Tertiary volcanic rocks are characterized by a wide range of densities, but, on average, are less dense than the pre-Cenozoic basement rocks. Basalt within the Park is as dense as crystalline basement, but is generally thin (less than 100 m thick; e.g., Powell, 2003). Isostatic residual gravity values within the map area range from about 44 mGal over Coachella Valley to about 8 mGal between the Mecca Hills and the Orocopia Mountains. Steep linear gravity gradients are coincident with the traces of several Quaternary strike-slip faults, most notably along the San Andreas Fault bounding the east side of Coachella Valley and east-west-striking, left-lateral faults, such as the Pinto Mountain, Blue Cut, and Chiriaco Faults (Fig. 1). Gravity gradients also define concealed basin-bounding faults, such as those beneath the Chuckwalla Valley (e.g. Rotstein and others, 1976). These gradients result from juxtaposing dense basement rocks against thick Cenozoic sedimentary rocks.
Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong
2011-01-01
A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor fault or component fault, but not both, this method considers sensor fault, actuator fault, and component fault under one systemic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need of linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/components faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred
Risk-informed Maintenance for Non-coherent Systems
NASA Astrophysics Data System (ADS)
Tao, Ye
Probabilistic Safety Assessment (PSA) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity. The information provided by PSA has been increasingly implemented for regulatory purposes but rarely used in providing information for operation and maintenance activities. As one of the key parts in PSA, Fault Tree Analysis (FTA) attempts to model and analyze failure processes of engineering and biological systems. The fault trees are composed of logic diagrams that display the state of the system and are constructed using graphical design techniques. Risk Importance Measures (RIMs) are information that can be obtained from both qualitative and quantitative aspects of FTA. Components within a system can be ranked with respect to each specific criterion defined by each RIM. Through a RIM, a ranking of the components or basic events can be obtained and provide valuable information for risk-informed decision making. Various RIMs have been applied in various applications. In order to provide a thorough understanding of RIMs and interpret the results, they are categorized with respect to risk significance (RS) and safety significance (SS) in this thesis. This has also tied them into different maintenance activities. When RIMs are used for maintenance purposes, it is called risk-informed maintenance. On the other hand, the majority of work produced on the FTA method has been concentrated on failure logic diagrams restricted to the direct or implied use of AND and OR operators. Such systems are considered as coherent systems. However, the NOT logic can also contribute to the information produced by PSA. The importance analysis of non-coherent systems is rather limited, even though the field has received more and more attention over the years. The non-coherent systems introduce difficulties in both qualitative and quantitative assessment of the fault tree compared with the coherent systems. In this thesis, a set of RIMs is analyzed and investigated. The 8 commonly used RIMs (Birnbaum's Measure, Criticality Importance Factor, Fussell-Vesely Measure, Improvement Potential, Conditional Probability, Risk Achievement, Risk Achievement Worth, and Risk Reduction Worth) are extended to non-coherent forms. Both coherent and non-coherent forms are classified into different categories in order to assist different types of maintenance activities. The real systems such as the Steam Generator Level Control System in CANDU Nuclear Power Plant (NPP), a Gas Detection System, and the Automatic Power Control System of the experimental nuclear reactor are presented to demonstrate the application of the results as case studies.
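To make the importance measures listed above concrete, the sketch below computes Birnbaum importance, a Fussell-Vesely-style fractional contribution, Risk Achievement Worth, and Risk Reduction Worth for an assumed small coherent tree TE = (A AND B) OR C with placeholder basic-event probabilities; the definitions follow common textbook forms and are not taken from the thesis.

# Sketch: four common risk importance measures for a small assumed coherent
# fault tree TE = (A AND B) OR C, evaluated with exact probabilities
# (independent basic events assumed).

def top_prob(p):
    p_ab = p["A"] * p["B"]                  # P(A AND B)
    return p_ab + p["C"] - p_ab * p["C"]    # P((A AND B) OR C)

base = {"A": 0.05, "B": 0.10, "C": 0.02}
q0 = top_prob(base)                         # baseline top-event probability

for e in base:
    q1 = top_prob({**base, e: 1.0})         # event certain to occur
    qm = top_prob({**base, e: 0.0})         # event made impossible
    birnbaum = q1 - qm
    fussell_vesely = (q0 - qm) / q0         # fractional contribution of e
    raw = q1 / q0                           # Risk Achievement Worth
    rrw = q0 / qm if qm > 0 else float("inf")  # Risk Reduction Worth
    print(e, round(birnbaum, 4), round(fussell_vesely, 3), round(raw, 2), round(rrw, 2))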
Quality-based Multimodal Classification Using Tree-Structured Sparsity
2014-03-08
Bahrampour, Soheil; Ray, Asok; Nasrabadi, Nasser M.
Assessing Institutional Ineffectiveness: A Strategy for Improvement.
ERIC Educational Resources Information Center
Cameron, Kim S.
1984-01-01
Based on the theory that institutional change and improvement are motivated more by knowledge of problems than by knowledge of successes, a fault tree analysis technique using Boolean logic for assessing institutional ineffectiveness by determining weaknesses in the system is presented. Advantages and disadvantages of focusing on weakness rather…
An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution
NASA Astrophysics Data System (ADS)
Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan
2013-04-01
The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies including smoothed seismicity approaches. Smoothed seismicity thus represents an alternative concept to express the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subductions. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: The first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density). The second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude assuming that (1) the occurrence of past seismicity is a good proxy to forecast occurrence of future seismicity and (2) future large-magnitude events occur more likely in the vicinity of known faults. Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts and provides rates of events in the magnitude range of 5 <= m <= 8.5 for the entire region of interest and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
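Two ingredients of the forecast described above, a kernel-smoothed spatial density of past epicentres and a truncated Gutenberg-Richter magnitude distribution, can be sketched as follows; the synthetic catalogue, fixed bandwidth, and a- and b-values are placeholders rather than SHARE parameters, and the fault-moment-rate density and magnitude-dependent weighting are omitted.

# Sketch: spatially smoothed seismicity density plus a truncated
# Gutenberg-Richter magnitude distribution, on a synthetic catalogue.
# Bandwidth, a- and b-values are placeholders, not SHARE parameters.
import numpy as np

rng = np.random.default_rng(2)
epicentres = rng.uniform(0.0, 100.0, size=(200, 2))      # past events (x, y) in km

def smoothed_density(grid_xy, events, bandwidth_km=10.0):
    """Isotropic Gaussian kernel density of past epicentres at grid points."""
    d2 = ((grid_xy[:, None, :] - events[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-d2 / (2.0 * bandwidth_km ** 2)) / (2.0 * np.pi * bandwidth_km ** 2)
    dens = k.sum(axis=1)
    return dens / dens.sum()                              # normalise to a probability map

def truncated_gr_rate(m, a=4.0, b=1.0, m_min=5.0, m_max=8.5):
    """Annual rate of events with magnitude >= m under a truncated G-R law."""
    if m < m_min or m > m_max:
        return 0.0
    return max(10 ** (a - b * m) - 10 ** (a - b * m_max), 0.0)

grid = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)], float)
density = smoothed_density(grid, epicentres)
rate_m6 = truncated_gr_rate(6.0) * density                # rate of m >= 6 per grid cell
print(round(rate_m6.sum(), 5), round(density.max(), 5))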
Improving online risk assessment with equipment prognostics and health monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coble, Jamie B.; Liu, Xiaotong; Briere, Chris
The current approach to evaluating the risk of nuclear power plant (NPP) operation relies on static probabilities of component failure, which are based on industry experience with the existing fleet of nominally similar light water reactors (LWRs). As the nuclear industry looks to advanced reactor designs that feature non-light water coolants (e.g., liquid metal, high temperature gas, molten salt), this operating history is not available. Many advanced reactor designs use advanced components, such as electromagnetic pumps, that have not been used in the US commercial nuclear fleet. Given the lack of rich operating experience, we cannot accurately estimate the evolving probability of failure for basic components to populate the fault trees and event trees that typically comprise probabilistic risk assessment (PRA) models. Online equipment prognostics and health management (PHM) technologies can bridge this gap to estimate the failure probabilities for components under operation. The enhanced risk monitor (ERM) incorporates equipment condition assessment into the existing PRA and risk monitor framework to provide accurate and timely estimates of operational risk.
Managing Risk to Ensure a Successful Cassini/Huygens Saturn Orbit Insertion (SOI)
NASA Technical Reports Server (NTRS)
Witkowski, Mona M.; Huh, Shin M.; Burt, John B.; Webster, Julie L.
2004-01-01
I. Design: a) S/C designed to be largely single fault tolerant; b) Operate in flight demonstrated envelope, with margin; and c) Strict compliance with requirements & flight rules. II. Test: a) Baseline, fault & stress testing using flight system testbeds (H/W & S/W); b) In-flight checkout & demos to remove first time events. III. Failure Analysis: a) Critical event driven fault tree analysis; b) Risk mitigation & development of contingencies. IV. Residual Risks: a) Accepted pre-launch waivers to Single Point Failures; b) Unavoidable risks (e.g. natural disaster). V. Mission Assurance: a) Strict process for characterization of variances (ISAs, PFRs & Waivers); b) Full time Mission Assurance Manager reports to Program Manager: 1) Independent assessment of compliance with institutional standards; 2) Oversight & risk assessment of ISAs, PFRs & Waivers etc.; and 3) Risk Management Process facilitator.
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Mustafa Sacit; none,; Flanagan, George F.
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breuker, M.S.; Braun, J.E.
This paper presents a detailed evaluation of the performance of a statistical, rule-based fault detection and diagnostic (FDD) technique presented by Rossi and Braun (1997). Steady-state and transient tests were performed on a simple rooftop air conditioner over a range of conditions and fault levels. The steady-state data without faults were used to train models that predict outputs for normal operation. The transient data with faults were used to evaluate FDD performance. The effect of a number of design variables on FDD sensitivity for different faults was evaluated and two prototype systems were specified for more complete evaluation. Good performance was achieved in detecting and diagnosing five faults using only six temperatures (two input and four output) and linear models. The performance improved by about a factor of two when ten measurements (three input and seven output) and higher order models were used. This approach for evaluating and optimizing the performance of the statistical, rule-based FDD technique could be used as a design and evaluation tool when applying this FDD method to other packaged air-conditioning systems. Furthermore, the approach could also be modified to evaluate the performance of other FDD methods.
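The core of the technique evaluated above is residual generation against a normal-operation model followed by statistical thresholding. The sketch below shows that pattern for a single predicted output temperature; the linear model, the driving variables, and the three-sigma threshold are illustrative assumptions, not the Rossi and Braun formulation.

# Sketch: residual-based fault detection in the spirit of a rule-based FDD
# scheme -- a normal-operation model predicts an output temperature from the
# driving conditions, and residuals beyond a statistical threshold flag a fault.
# The linear model coefficients and threshold multiplier are illustrative.
import numpy as np

rng = np.random.default_rng(3)

# "Training": fit a linear normal-operation model T_out ~ f(T_ambient, T_return)
X_train = rng.uniform([25.0, 22.0], [40.0, 27.0], size=(200, 2))
y_train = 10.0 + 0.15 * X_train[:, 0] + 0.20 * X_train[:, 1] + rng.normal(0, 0.2, 200)
A = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
sigma = np.std(y_train - A @ coef)

def detect(t_ambient, t_return, t_out_measured, k=3.0):
    """Flag a fault when the residual exceeds k standard deviations."""
    predicted = coef[0] + coef[1] * t_ambient + coef[2] * t_return
    residual = t_out_measured - predicted
    return abs(residual) > k * sigma, residual

print(detect(35.0, 25.0, 20.3))   # near-normal reading
print(detect(35.0, 25.0, 23.5))   # biased reading, likely flagged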
NASA Astrophysics Data System (ADS)
Cui, Lingli; Gong, Xiangyang; Zhang, Jianyu; Wang, Huaqing
2016-12-01
The quantitative diagnosis of rolling bearing fault severity is particularly crucial for making proper maintenance decisions. Aiming at the fault features of rolling bearings, a novel double-dictionary matching pursuit (DDMP) for fault extent evaluation of rolling bearings, based on the Lempel-Ziv complexity (LZC) index, is proposed in this paper. In order to match the features of rolling bearing faults, an impulse time-frequency dictionary and a modulation dictionary are constructed to form the double dictionary, using a parameterized function model. A novel matching pursuit method is then proposed based on this new double dictionary. Rolling bearing vibration signals with different fault sizes are decomposed and reconstructed by the DDMP. After noise reduction and signal reconstruction, the LZC index is introduced to realize the fault extent evaluation. Application of this method to experimental fault signals of the bearing outer race and inner race with different degrees of damage has shown that the proposed method can effectively realize fault extent evaluation.
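The fault-extent index used above is the Lempel-Ziv complexity of the processed vibration signal. A minimal sketch of an LZ76-style complexity count on a median-binarised signal is given below; the binarisation rule, the normalisation, and the synthetic healthy/faulty signals are assumptions for illustration, not the paper's DDMP-denoised signals.

# Sketch: Lempel-Ziv complexity (LZ76-style counting) of a vibration signal
# binarised about its median -- a common way to form an LZC index for grading
# bearing fault severity. Signal and threshold choice are illustrative.
import numpy as np

def lempel_ziv_complexity(bits):
    """Number of distinct phrases in an LZ76-style parsing of a binary sequence."""
    s = "".join("1" if b else "0" for b in bits)
    i, c = 0, 0
    while i < len(s):
        length = 1
        # extend the current phrase while it is still a substring of the preceding text
        while i + length <= len(s) and s[i:i + length] in s[:i + length - 1]:
            length += 1
        c += 1
        i += length
    return c

def lzc_index(signal):
    bits = signal > np.median(signal)                     # coarse binarisation
    n = len(bits)
    return lempel_ziv_complexity(bits) * np.log2(n) / n   # normalised complexity

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 2048)
healthy = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)
faulty = healthy + (rng.random(t.size) < 0.02) * rng.normal(0, 3, t.size)  # added impulses
print(round(lzc_index(healthy), 3), round(lzc_index(faulty), 3))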
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
Siontorou, Christina G; Batzias, Fragiskos A
2014-03-01
Biosensor technology emerged in the 1960s and went on to revolutionize instrumentation and measurement. Despite the market success of the glucose sensor, which revolutionized medical diagnostics, and the promise of the artificial pancreas, currently at the approval stage, the industry is reluctant to capitalize on other relevant university-produced knowledge and innovation. On the other hand, the scientific literature is extensive and persistent, and the number of university-hosted biosensor groups is growing. Considering the limited marketability of biosensors compared to the available research output, the biosensor field has been used by the present authors as a suitable paradigm for developing a combined methodological framework for "roadmapping" university research output in this discipline. This framework adopts the basic principles of the Analytic Hierarchy Process (AHP), replacing the lower level of technology alternatives with internal barriers (drawbacks, limitations, disadvantages) modeled through fault tree analysis (FTA) that relies on fuzzy reasoning to account for uncertainty. The proposed methodology is validated retrospectively using ion-selective field effect transistor (ISFET)-based biosensors as a case example, and then applied prospectively to membrane biosensors, with an emphasis on manufacturability issues. The analysis traced the trajectory of membrane platforms differently from the available market roadmaps, which, considering the vast industrial experience in tailoring and handling crystalline forms, suggest a technology path of biomimetic and synthetic materials. The results presented herein indicate that future trajectories lie with nanotechnology, especially nanofabrication and nano-bioinformatics, and are focused more on the science path, that is, on controlling the natural process of self-assembly and the thermodynamics of bioelement-lipid interaction. This retains the nature-derived sensitivity of the biosensor platform, pointing out the differences between the scope of academic research and the market viewpoint.
Communications and tracking expert systems study
NASA Technical Reports Server (NTRS)
Leibfried, T. F.; Feagin, Terry; Overland, David
1987-01-01
The original objectives of the study consisted of five broad areas of investigation: criteria and issues for explanation of communication and tracking system anomaly detection, isolation, and recovery; data storage simplification issues for fault detection expert systems; data selection procedures for decision tree pruning and optimization to enhance the abstraction of pertinent information for clear explanation; criteria for establishing levels of explanation suited to needs; and analysis of expert system interaction and modularization. Progress was made in all areas, but to a lesser extent in the criteria for establishing levels of explanation suited to needs. Among the types of expert systems studied were those related to anomaly or fault detection, isolation, and recovery.
[Medical Equipment Maintenance Methods].
Liu, Hongbin
2015-09-01
Because of the high technology content and complexity of medical equipment, as well as its safety and effectiveness requirements, medical equipment maintenance work is subject to high demands. This paper introduces some basic methods of medical instrument maintenance, including fault tree analysis, the node method, and the exclusion method, which are three important methods in medical equipment maintenance; by using these three methods on instruments for which circuit drawings are available, hardware breakdown maintenance can be carried out readily. The paper also introduces handling methods for some special fault conditions, in order to reduce detours when the same problems are encountered. Learning these methods is very important for staff newly engaged in this area.
NASA Astrophysics Data System (ADS)
Inoue, N.; Kitada, N.; Tonagi, M.
2016-12-01
Distributed fault displacements in Probabilistic Fault Displacement Hazard Analysis (PFDHA) play an important role in the evaluation of important facilities such as nuclear installations. In Japan, nuclear installations must be sited where there is no possibility that displacement on active faults will occur due to an earthquake. Youngs et al. (2003) defined distributed faulting as displacement on other faults, shears, or fractures in the vicinity of the principal rupture in response to the principal faulting. Other researchers have treated distributed-fault data around principal faults and modeled them according to their own definitions (e.g., Petersen et al., 2011; Takao et al., 2013). We compiled Japanese fault displacement data and constructed slip-distance relationships by fault type. In the case of reverse faults, the slip-distance relationship on the footwall indicated a different trend compared with that on the hanging wall. The process zone or damage zone has been studied as a weak structure around principal faults; its density or number decreases rapidly away from the principal fault. We contrasted the trend of these zones with that of the distributed slip-distance distributions. Subsurface FEM simulations were carried out to investigate the distribution of stress around principal faults. The results indicated a trend similar to the distribution of field observations. This research was part of the 2014-2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.
Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.
1992-01-01
The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
Mori, J.
1996-01-01
Details of the M 4.3 foreshock to the Joshua Tree earthquake were studied using P waves recorded on the Southern California Seismic Network and the Anza network. Deconvolution, using an M 2.4 event as an empirical Green's function, corrected for complicated path and site effects in the seismograms and produced simple far-field displacement pulses that were inverted for a slip distribution. Both possible fault planes, north-south and east-west, for the focal mechanism were tested by a least-squares inversion procedure with a range of rupture velocities. The results showed that the foreshock ruptured the north-south plane, similar to the mainshock. The foreshock initiated a few hundred meters south of the mainshock and ruptured to the north, toward the mainshock hypocenter. The mainshock (M 6.1) initiated near the northern edge of the foreshock rupture 2 hr later. The foreshock had a high stress drop (320 to 800 bars) and broke a small portion of the fault adjacent to the mainshock but was not able to immediately initiate the mainshock rupture.
NASA Astrophysics Data System (ADS)
Kitada, N.; Inoue, N.; Tonagi, M.
2016-12-01
The purpose of Probabilistic Fault Displacement Hazard Analysis (PFDHA) is to estimate fault displacement values and the extent of their impact. There are two types of fault displacement related to an earthquake fault: principal fault displacement and distributed fault displacement. Distributed fault displacement should be evaluated for important facilities, such as nuclear installations. PFDHA estimates both principal and distributed fault displacement. For estimation, PFDHA uses distance-displacement functions, which are constructed from field measurement data. We constructed slip-distance relations of principal fault displacement based on Japanese strike-slip and reverse-slip earthquakes in order to apply them to Japan, a subduction setting. However, observed displacement data are sparse, especially for reverse faults. Takao et al. (2013) estimated the relation using all fault types together (reverse and strike-slip faults). Since Takao et al. (2013), several inland earthquakes have occurred in Japan, so here we estimate distance-displacement functions separately for the strike-slip and reverse fault types, in particular adding the new fault displacement data. To normalize slip function data, several criteria have been proposed by researchers. We normalized the principal fault displacement data by several methods and compared the resulting slip-distance functions. When normalized by total fault length, the Japanese reverse fault data did not show a particular trend in the slip-distance relation. In the case of segmented data, the slip-distance relationship indicated a trend similar to that of strike-slip faults. We will also discuss the relation between principal fault displacement distributions and source fault character. According to the slip distribution function of Petersen et al. (2011), for strike-slip faults the ratio of normalized displacement decreases toward the edge of the fault. However, the Japanese strike-slip fault data do not decrease as much near the end of the fault. This result suggests that, in Japan, fault displacement does not necessarily diminish at the edge of the fault. This research was part of the 2014-2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (NRA), Japan.
Bedrosian, Paul A.; Burgess, Matthew K.; Nishikawa, Tracy
2013-01-01
Within the south-western Mojave Desert, the Joshua Basin Water District is considering applying imported water into infiltration ponds in the Joshua Tree groundwater sub-basin in an attempt to artificially recharge the underlying aquifer. Scarce subsurface hydrogeological data are available near the proposed recharge site; therefore, time-domain electromagnetic (TDEM) data were collected and analysed to characterize the subsurface. TDEM soundings were acquired to estimate the depth to water on either side of the Pinto Mountain Fault, a major east-west trending strike-slip fault that transects the proposed recharge site. While TDEM is a standard technique for groundwater investigations, special care must be taken when acquiring and interpreting TDEM data in a two-dimensional (2D) faulted environment. A subset of the TDEM data consistent with a layered-earth interpretation was identified through a combination of three-dimensional (3D) forward modelling and diffusion time-distance estimates. Inverse modelling indicates an offset in water table elevation of nearly 40 m across the fault. These findings imply that the fault acts as a low-permeability barrier to groundwater flow in the vicinity of the proposed recharge site. Existing production wells on the south side of the fault, together with a thick unsaturated zone and permeable near-surface deposits, suggest the southern half of the study area is suitable for artificial recharge. These results illustrate the effectiveness of targeted TDEM in support of hydrological studies in a heavily faulted desert environment where data are scarce and the cost of obtaining these data by conventional drilling techniques is prohibitive.
Langridge, R.M.; Stenner, Heidi D.; Fumal, T.E.; Christofferson, S.A.; Rockwell, T.K.; Hartleb, R.D.; Bachhuber, J.; Barka, A.A.
2002-01-01
The Mw 7.4 17 August 1999 İzmit earthquake ruptured five major fault segments of the dextral North Anatolian Fault Zone. The 26-km-long, N86°W-trending Sakarya fault segment (SFS) extends from the Sapanca releasing step-over in the west to near the town of Akyazi in the east. The SFS emerges from Lake Sapanca as two distinct fault traces that rejoin to traverse the Adapazari Plain to Akyazi. Offsets were measured across 88 cultural and natural features that cross the fault, such as roads, cornfield rows, rows of trees, walls, rails, field margins, ditches, vehicle ruts, a dike, and ground cracks. The maximum displacement observed for the İzmit earthquake (∼5.1 m) was encountered on this segment. Dextral displacement for the SFS rises from less than 1 m at Lake Sapanca to greater than 5 m near Arifiye, only 3 km away. Average slip decreases uniformly to the east from Arifiye until the fault steps left from Sagir to Kazanci to the N75°W, 6-km-long Akyazi strand, where slip drops to less than 1 m. The Akyazi strand passes eastward into the Akyazi Bend, which consists of a high-angle bend (18°-29°) between the Sakarya and Karadere fault segments, a 6-km gap in surface rupture, and high aftershock energy release. Complex structural geometries exist between the İzmit, Düzce, and 1967 Mudurnu fault segments that have arrested surface ruptures on timescales ranging from 30 sec to 88 days to 32 yr. The largest of these step-overs may have acted as a rupture segmentation boundary in previous earthquake cycles.
Certification trails for data structures
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. The applicability of the certification trail technique is significantly generalized. Previously, certification trails had to be customized to each algorithm application; trails appropriate to wide classes of algorithms were developed. These certification trails are based on common data-structure operations such as those carried out using balanced binary trees and heaps. Any algorithms using these sets of operations can therefore employ the certification trail method to achieve software fault tolerance. To exemplify the scope of the generalization of the certification trail technique provided, constructions of trails for abstract data types such as priority queues and union-find structures are given. These trails are applicable to any data-structure implementation of the abstract data type. It is also shown that these ideas lead naturally to monitors for data-structure operations.
NASA Technical Reports Server (NTRS)
Braden, W. B.
1992-01-01
This talk discusses the importance of providing a process operator with concise information about a process fault including a root cause diagnosis of the problem, a suggested best action for correcting the fault, and prioritization of the problem set. A decision tree approach is used to illustrate one type of approach for determining the root cause of a problem. Fault detection in several different types of scenarios is addressed, including pump malfunctions and pipeline leaks. The talk stresses the need for a good data rectification strategy and good process models along with a method for presenting the findings to the process operator in a focused and understandable way. A real time expert system is discussed as an effective tool to help provide operators with this type of information. The use of expert systems in the analysis of actual versus predicted results from neural networks and other types of process models is discussed.
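As a toy illustration of the decision-tree style of root-cause reasoning discussed above, the sketch below walks a low-flow pump alarm through a few branch conditions to a diagnosis, a suggested best action, and a priority; every threshold, condition name, and recommended action is a made-up placeholder, not content from the talk.

# Toy sketch of a decision-tree root-cause diagnosis for a low-flow alarm on a
# pump, returning a diagnosis, a suggested best action, and a priority.
# All branch conditions and actions are illustrative placeholders.

def diagnose_low_flow(suction_pressure_ok, discharge_pressure_high,
                      motor_current_low, downstream_valve_open):
    if not suction_pressure_ok:
        return ("suction starvation / possible upstream leak",
                "check upstream supply and isolate suspected leak", "high")
    if discharge_pressure_high and not downstream_valve_open:
        return ("downstream valve closed or blocked line",
                "open or inspect downstream valve", "medium")
    if motor_current_low:
        return ("pump not developing head (worn impeller or lost prime)",
                "re-prime pump and schedule impeller inspection", "medium")
    return ("sensor or instrumentation fault suspected",
            "validate flow measurement against a redundant sensor", "low")

print(diagnose_low_flow(True, True, False, False))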
Modeling Off-Nominal Behavior in SysML
NASA Technical Reports Server (NTRS)
Day, John C.; Donahue, Kenneth; Ingham, Michel; Kadesch, Alex; Kennedy, Andrew K.; Post, Ethan
2012-01-01
Specification and development of fault management functionality in systems is performed in an ad hoc way - more of an art than a science. Improvements to system reliability, availability, safety and resilience will be limited without infusion of additional formality into the practice of fault management. Key to the formalization of fault management is a precise representation of off-nominal behavior. Using the upcoming Soil Moisture Active-Passive (SMAP) mission for source material, we have modeled the off-nominal behavior of the SMAP system during its initial spin-up activity, using the System Modeling Language (SysML). In the course of developing these models, we have developed generic patterns for capturing off-nominal behavior in SysML. We show how these patterns provide useful ways of reasoning about the system (e.g., checking for completeness and effectiveness) and allow the automatic generation of typical artifacts (e.g., success trees and FMECAs) used in system analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riyadi, Eko H., E-mail: e.riyadi@bapeten.go.id
2014-09-30
An initiating event is defined as any event, either internal or external to the nuclear power plant (NPP), that perturbs the steady-state operation of the plant, if operating, thereby initiating an abnormal event such as a transient or a loss of coolant accident (LOCA) within the NPP. These initiating events trigger sequences of events that challenge plant control and safety systems whose failure could potentially lead to core damage or large early release. Selection of initiating events consists of two steps: first, definition of possible events, for example by carrying out a comprehensive engineering evaluation and by constructing a top-level logic model; second, grouping of the identified initiating events by the safety function to be performed or by combinations of system responses. Therefore, the purpose of this paper is to discuss initiating event identification in the event tree development process and to review other probabilistic safety assessments (PSA). The identification of initiating events also draws on past operating experience, review of other PSAs, failure mode and effect analysis (FMEA), feedback from system modeling, and the master logic diagram (a special type of fault tree). By applying this approach to the traditional US PSA categorization in detail, the important initiating events could be obtained and categorized into LOCA, transients, and external events.
NASA Astrophysics Data System (ADS)
Powell, R. E.; Matti, J. C.
2006-12-01
The Little San Bernardino Mountains (LSBM) constitute a pivotal yet poorly understood structural domain along the right-lateral San Andreas Fault (SAF) in southern California. The LSBM, forming a dramatic escarpment between the eastern Transverse Ranges (ETR) and the Salton Trough, contain an array of N- to NW-trending faults that occupy the zone of intersections between the SAF and the coevolving E-trending left-slip faults of the ETR. One of the N-trending faults within the LSBM domain, the West Deception Canyon Fault, previously has been identified as the locus of the Joshua Tree earthquake (Mw 6.1) of 23 April 1992. That earthquake was the initial shock in the ensuing Landers earthquake sequence. During the evolution of the plate-margin shearing associated with the opening of the Gulf of California since about 5 Ma, the left-lateral faults of the ETR have provided the kinematic transition between the S end of the broad Eastern California Shear Zone (ECSZ) which extends northward through the Mojave Desert and along Walker Lane and the SAF proper in southern California. The long-term geologic record of cumulative displacement on the sinistral ETR faults and the dextral SAF and Mojave Desert faults indicates that these conjugate fault sets have mutually accommodated one another rather than exhibit cross-cutting relations. In contrast, the linear array of earthquakes that make up the dextral 1992 Landers sequence extends across the sinistral Pinto Mountain Fault and has been cited by some as evidence that ECSZ is coalescing southward along the N-trending dextral faults of the northern LSBM to join the ECSZ directly to southern SAF. To gain a better understanding of the array of faults in the LSBM, we are combining mapping within the crystalline basement terrane of the LSBM with mapping both of uplifted remnants of erosional surfaces developed on basement rocks and of volcanic and sedimentary rocks deposited on those surfaces. Our preliminary findings indicate the presence of both easterly and westerly dipping normal faults along the LSBM. Some of these faults offset a prominent uplifted erosion plain and overlying late Miocene basalt as well as younger strata that contain clasts of rocks not found locally, including rounded to very well rounded clasts of indurated sandstone, silicic hypabyssal, volcanic, and volcaniclastic rocks, gray- and greenschist, and quartzite. This distinctive clast assemblage is consistent with a western source subsequently displaced along the SAF. Taken together, these observations suggest that the long-term kinematic role(s) played by NW- to N- trending faults in the LSBM is more complex than that suggested by the simple transecting linear trend defined by the Landers earthquake sequence. By evaluating our findings in the context of our previously published palinspastic reconstructions of the SAF system, we are attempting to distinguish between two scenarios - not necessarily mutually exclusive - for the kinematic role of the LSBM faults, each scenario involving right-oblique extensional slip: (1) They developed initially about 5 Ma as a system of faults subparallel to the then newly forming part of the SAF associated with the opening of the Gulf of California. (2) They accommodate extension in the domains of acute intersection between the mutually developing right-lateral SAF and left-lateral ETR faults. 
In either of these scenarios, the LSBM faults are related to the opening of the Gulf of California since about 5 Ma and display an important history that predates their hypothesized very recent incorporation into a throughgoing dextral ECSZ.
Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without an operator on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas paths of the gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.
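As a rough illustration of two of the computational ideas in this abstract, the sketch below computes a normalized Shannon-entropy uniformity measure over exhaust-temperature readings and fits a small decision tree to labeled fault samples. All data, feature names, and thresholds are hypothetical; this is not the authors' kernelized formulation, only the plain-entropy and decision-tree ingredients it builds on.

```python
# Sketch: a normalized Shannon-entropy measure of exhaust-temperature uniformity,
# plus a decision tree that extracts fault rules from labeled samples.
# Data, feature names, and labels are hypothetical, not from the paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def temperature_uniformity(temps):
    """Normalized Shannon entropy of exhaust temperatures (1.0 = perfectly uniform)."""
    p = np.asarray(temps, dtype=float)
    p = p / p.sum()                       # treat the probe readings as a distribution
    return float(-np.sum(p * np.log(p)) / np.log(len(p)))

probes_ok    = [651, 648, 652, 650, 649, 650]   # hypothetical exhaust probes (deg C)
probes_fault = [655, 610, 662, 648, 590, 665]
print(temperature_uniformity(probes_ok), temperature_uniformity(probes_fault))

# Feature vectors: [uniformity, mean temperature, spread]; labels are fault classes.
X = np.array([[0.9999, 650, 4], [0.9985, 638, 75], [0.9998, 655, 6], [0.9980, 630, 80]])
y = np.array(["normal", "gas_path_fault", "normal", "gas_path_fault"])
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["uniformity", "mean_T", "spread"]))
```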
NASA Astrophysics Data System (ADS)
Gülerce, Zeynep; Buğra Soyman, Kadir; Güner, Barış; Kaymakci, Nuretdin
2017-12-01
This contribution provides an updated planar seismic source characterization (SSC) model to be used in the probabilistic seismic hazard assessment (PSHA) for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ) that are critical for the PSHA of Istanbul: the segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, the central Marmara segment, and the Ganos/Saros segment. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested against the observed seismicity associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b value, maximum magnitude, slip rate, weights of the rupture scenarios) is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect on the hazard of earthquakes that are not associated with the defined rupture systems, a background zone is introduced and the seismicity rates in the background zone are calculated using a smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.
NASA Astrophysics Data System (ADS)
Dygert, Nick; Liang, Yan
2015-06-01
Mantle peridotites from ophiolites are commonly interpreted as having mid-ocean ridge (MOR) or supra-subduction zone (SSZ) affinity. Recently, an REE-in-two-pyroxene thermometer was developed (Liang et al., 2013) that has higher closure temperatures (designated as TREE) than major-element-based two-pyroxene thermometers for mafic and ultramafic rocks that experienced cooling. The REE-in-two-pyroxene thermometer has the potential to extract meaningful cooling rates from ophiolitic peridotites and thus shed new light on the thermal history of the different tectonic regimes. We calculated TREE for available literature data from abyssal peridotites, subcontinental (SC) peridotites, and ophiolites around the world (Alps, Coast Range, Corsica, New Caledonia, Oman, Othris, Puerto Rico, Russia, and Turkey), and augmented the data with new measurements for peridotites from the Trinity and Josephine ophiolites and the Mariana trench. TREE are compared to major-element-based thermometers, including the two-pyroxene thermometer of Brey and Köhler (1990) (TBKN). Samples with SC affinity have TREE and TBKN in good agreement. Samples with MOR and SSZ affinity have near-solidus TREE but TBKN hundreds of degrees lower. Closure temperatures for REE and Fe-Mg in pyroxenes were calculated to compare cooling rates among abyssal peridotites, MOR ophiolites, and SSZ ophiolites. Abyssal peridotites appear to cool more rapidly than peridotites from most ophiolites. On average, SSZ ophiolites have lower closure temperatures than abyssal peridotites and many ophiolites with MOR affinity. We propose that these lower temperatures can be attributed to residence time in the cooling oceanic lithosphere prior to obduction. MOR ophiolites define a continuum spanning cooling rates from SSZ ophiolites to abyssal peridotites. Consistently high closure temperatures for abyssal peridotites and the Oman and Corsica ophiolites suggest that hydrothermal circulation and/or rapid cooling events (e.g., normal faulting, unroofing) control the late thermal histories of peridotites from transform faults and slow and fast spreading centers with or without a crustal section.
NASA Technical Reports Server (NTRS)
Bennett, Richard A.; Reilinger, Robert E.; Rodi, William; Li, Yingping; Toksoz, M. Nafi; Hudnut, Ken
1995-01-01
Coseismic surface deformation associated with the M(sub w) 6.1, April 23, 1992, Joshua Tree earthquake is well represented by estimates of geodetic monument displacements at 20 locations independently derived from Global Positioning System and trilateration measurements. The rms signal to noise ratio for these inferred displacements is 1.8 with near-fault displacement estimates exceeding 40 mm. In order to determine the long-wavelength distribution of slip over the plane of rupture, a Tikhonov regularization operator is applied to these estimates, which minimizes stress variability subject to purely right-lateral slip and zero surface slip constraints. The resulting slip distribution yields a geodetic moment estimate of 1.7 x 10(exp 18) N m with corresponding maximum slip around 0.8 m and compares well with independent and complementary information including seismic moment and source time function estimates and main shock and aftershock locations. From empirical Green's functions analyses, a rupture duration of 5 s is obtained which implies a rupture radius of 6-8 km. Most of the inferred slip lies to the north of the hypocenter, consistent with northward rupture propagation. Stress drop estimates are in the range of 2-4 MPa. In addition, predicted Coulomb stress increases correlate remarkably well with the distribution of aftershock hypocenters; most of the aftershocks occur in areas for which the mainshock rupture produced stress increases larger than about 0.1 MPa. In contrast, predicted stress changes are near zero at the hypocenter of the M(sub w) 7.3, June 28, 1992, Landers earthquake which nucleated about 20 km beyond the northernmost edge of the Joshua Tree rupture. Based on aftershock migrations and the predicted static stress field, we speculate that redistribution of Joshua Tree-induced stress perturbations played a role in the spatio-temporal development of the earthquake sequence culminating in the Landers event.
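A minimal numerical sketch of the inversion idea described above follows: a Tikhonov-regularized least-squares slip estimate, m = argmin ||Gm - d||^2 + lambda^2 ||Lm||^2, followed by a geodetic moment sum M0 = mu * sum(A_i * s_i). The Green's function matrix, smoothing operator, patch areas, and noise level are all made up for illustration; the study's actual operator minimizes stress variability under right-lateral and zero-surface-slip constraints, which is not reproduced here.

```python
# Sketch: Tikhonov-regularized slip inversion solved as a stacked least-squares
# problem, followed by a geodetic moment estimate M0 = mu * sum(A_i * s_i).
# All matrices and numerical values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_patches = 40, 10
G = rng.normal(size=(n_data, n_patches))                    # elastic Green's functions (made up)
true_slip = np.linspace(0.1, 0.8, n_patches)                # meters
d = G @ true_slip + rng.normal(scale=0.005, size=n_data)    # displacements + noise

L = np.diff(np.eye(n_patches), n=2, axis=0)                 # second-difference smoothing operator
lam = 1.0

A = np.vstack([G, lam * L])
b = np.concatenate([d, np.zeros(L.shape[0])])
slip, *_ = np.linalg.lstsq(A, b, rcond=None)

mu = 3.0e10                                                 # shear modulus, Pa
patch_area = 2.0e6                                          # m^2 per patch (hypothetical)
M0 = mu * patch_area * np.sum(np.clip(slip, 0, None))
print(f"max slip ~ {slip.max():.2f} m, geodetic moment ~ {M0:.2e} N m")
```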
14 CFR Special Federal Aviation... - Fuel Tank System Fault Tolerance Evaluation Requirements
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Fuel Tank System Fault Tolerance Evaluation Requirements Federal Special Federal Aviation Regulation No. 88 Aeronautics and Space FEDERAL AVIATION..., SFAR No. 88 Special Federal Aviation Regulation No. 88—Fuel Tank System Fault Tolerance Evaluation...
14 CFR Special Federal Aviation... - Fuel Tank System Fault Tolerance Evaluation Requirements
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Fuel Tank System Fault Tolerance Evaluation Requirements Federal Special Federal Aviation Regulation No. 88 Aeronautics and Space FEDERAL AVIATION..., SFAR No. 88 Special Federal Aviation Regulation No. 88—Fuel Tank System Fault Tolerance Evaluation...
14 CFR Special Federal Aviation... - Fuel Tank System Fault Tolerance Evaluation Requirements
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Fuel Tank System Fault Tolerance Evaluation Requirements Federal Special Federal Aviation Regulation No. 88 Aeronautics and Space FEDERAL AVIATION..., SFAR No. 88 Special Federal Aviation Regulation No. 88—Fuel Tank System Fault Tolerance Evaluation...
14 CFR Special Federal Aviation... - Fuel Tank System Fault Tolerance Evaluation Requirements
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Fuel Tank System Fault Tolerance Evaluation Requirements Federal Special Federal Aviation Regulation No. 88 Aeronautics and Space FEDERAL AVIATION..., SFAR No. 88 Special Federal Aviation Regulation No. 88—Fuel Tank System Fault Tolerance Evaluation...
14 CFR Special Federal Aviation... - Fuel Tank System Fault Tolerance Evaluation Requirements
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Fuel Tank System Fault Tolerance Evaluation Requirements Federal Special Federal Aviation Regulation No. 88 Aeronautics and Space FEDERAL AVIATION..., SFAR No. 88 Special Federal Aviation Regulation No. 88—Fuel Tank System Fault Tolerance Evaluation...
Safety Study of TCAS II for Logic Version 6.04
1992-07-01
used in the fault tree of the 1988 study. The figures given for Logic and Altimetry effects represent the site averages, and were based upon TCAS RAs always being...comparison with the results of Monte Carlo simulations. Five million iterations were carried out for each of the four cases (eqs. 3, 4, 6 and 7).
Code of Federal Regulations, 2010 CFR
2010-10-01
..., national, or international standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure... cited by the reviewer; (4) Identification of any documentation or information sought by the reviewer...) Identification of the hardware and software verification and validation procedures for the PTC system's safety...
The Two-By-Two Array: An Aid in Conceptualization and Problem Solving
ERIC Educational Resources Information Center
Eberhart, James
2004-01-01
The fields of mathematics, science, and engineering are replete with diagrams of many varieties. They range in nature from the Venn diagrams of symbolic logic to the Periodic Chart of the Elements; and from the fault trees of risk assessment to the flow charts used to describe laboratory procedures, industrial processes, and computer programs. All…
SIGPI. Fault Tree Cut Set System Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patenaude, C.J.
1992-01-13
SIGPI computes the probabilistic performance of complex systems by combining cut set or other binary product data with probability information on each basic event. SIGPI is designed to work with either coherent systems, where the system fails when certain combinations of components fail, or noncoherent systems, where at least one cut set occurs only if at least one component of the system is operating properly. The program can handle conditionally independent components, dependent components, or a combination of component types and has been used to evaluate responses to environmental threats and seismic events. The three data types that can be input are cut set data in disjoint normal form, basic component probabilities for independent basic components, and mean and covariance data for statistically dependent basic components.
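The core calculation for independent basic components can be sketched in a few lines: because cut set data are supplied in disjoint normal form, the terms are mutually exclusive and the system probability is the sum of the term probabilities. The events and probabilities below are hypothetical, and the dependent-component and covariance handling of the real code is not shown.

```python
# Sketch: combining cut sets in disjoint normal form with independent basic-event
# probabilities.  Each term lists events that occur (True) or are explicitly
# negated (False); terms are mutually exclusive, so probabilities simply add.
from math import prod

basic_event_prob = {"A": 0.01, "B": 0.02, "C": 0.005, "D": 0.03}

disjoint_terms = [
    {"A": True, "B": True},                 # A and B fail
    {"A": True, "B": False, "C": True},     # A and C fail, B does not
    {"A": False, "D": True},                # D fails, A does not
]

def term_probability(term):
    return prod(basic_event_prob[e] if occurs else 1.0 - basic_event_prob[e]
                for e, occurs in term.items())

system_prob = sum(term_probability(t) for t in disjoint_terms)
print(f"system failure probability ~ {system_prob:.6f}")
```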
NASA Astrophysics Data System (ADS)
Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen
2014-05-01
We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity trying to explain the small scales, our method uses a bottom-up approach, by an initial sampling of the small scales and then reducing the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical cluster (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Modeling (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. The kernels that become ill defined (less than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging of any pair will lessen the likelihood (calculated over the pdf of the kernels) but in turn will reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, where the matrix element d_ij gives the distance between points i and j, we can construct an MDL gain/loss matrix where m_ij gives the information gain/loss resulting from the merging of kernels i and j. Based on this matrix, merging events resulting in MDL gain are performed in descending order until no gainful merging is possible anymore. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system and hopefully use the acquired insights for earthquake forecasting.
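A sketch of the first stages of this pipeline, assuming synthetic 2-D epicenter data, is given below: a Ward-linkage hierarchical tree is cut to seed a Gaussian mixture, EM is run, and ill-defined kernels (fewer than 4 points) are dropped. The MDL-based pairwise merging is only indicated in a comment, and the location-uncertainty weighting of the actual method is omitted.

```python
# Sketch of the early stages of the clustering pipeline described above:
# (1) build an agglomerative hierarchical cluster tree with Ward's linkage,
# (2) cut it at a chosen level to initialize a Gaussian Mixture Model,
# (3) let EM converge and drop ill-defined kernels.  Data are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([0, 0], 0.3, (60, 2)),      # synthetic "fault strands"
                 rng.normal([3, 1], 0.3, (60, 2))])

Z = linkage(pts, method="ward")                         # AHC tree
labels = fcluster(Z, t=8, criterion="maxclust")         # atomize at 8 proto-clusters

means = np.array([pts[labels == k].mean(axis=0) for k in np.unique(labels)])
gmm = GaussianMixture(n_components=len(means), means_init=means).fit(pts)

assignments = gmm.predict(pts)
sizes = np.bincount(assignments, minlength=gmm.n_components)
kept = np.where(sizes >= 4)[0]                          # discard kernels with < 4 points
print(f"{len(kept)} well-defined kernels survive EM")
# A full implementation would then evaluate the description-length gain of merging
# each kernel pair and merge greedily while the gain remains positive.
```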
NASA Astrophysics Data System (ADS)
Shi, J. T.; Han, X. T.; Xie, J. F.; Yao, L.; Huang, L. T.; Li, L.
2013-03-01
A Pulsed High Magnetic Field Facility (PHMFF) has been established at the Wuhan National High Magnetic Field Center (WHMFC), and various protection measures are applied in its control system. In order to improve the reliability and robustness of the control system, a safety analysis of the PHMFF is carried out based on the Fault Tree Analysis (FTA) technique. The function and realization of five protection systems are given: a sequence experiment operation system, a safety assistant system, an emergency stop system, a fault detecting and processing system, and an accident isolating protection system. Tests and operation indicate that these measures improve the safety of the facility and ensure the safety of personnel.
Evaluation of reliability modeling tools for advanced fault tolerant systems
NASA Technical Reports Server (NTRS)
Baker, Robert; Scheper, Charlotte
1986-01-01
The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault-tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault-tolerant aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems. The ARIES 82 program's primary use was to support university research and teaching. Neither CARE III nor ARIES 82 was suited for determining the reliability of complex nodal networks of the type used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES was not suitable for modeling advanced fault-tolerant systems. It was further concluded that, subject to some limitations (difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults beyond double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault-tolerant systems for air transport.
Nelson, Alan R.; Personius, Stephen F.; Sherrod, Brian L.; Buck, Jason; Bradley, Lee-Ann; Henley, Gary; Liberty, Lee M.; Kelsey, Harvey M.; Witter, Robert C.; Koehler, R.D.; Schermer, Elizabeth R.; Nemser, Eliza S.; Cladouhos, Trenton T.
2008-01-01
As part of the effort to assess seismic hazard in the Puget Sound region, we map fault scarps on Airborne Laser Swath Mapping (ALSM, an application of LiDAR) imagery (with 2.5-m elevation contours on 1:4,000-scale maps) and show field and laboratory data from backhoe trenches across the scarps that are being used to develop a latest Pleistocene and Holocene history of large earthquakes on the Tacoma fault. We supplement previous Tacoma fault paleoseismic studies with data from five trenches on the hanging wall of the fault. In a new trench across the Catfish Lake scarp, broad folding of more tightly folded glacial sediment does not predate 4.3 ka because detrital charcoal of this age was found in stream-channel sand in the trench beneath the crest of the scarp. A post-4.3-ka age for scarp folding is consistent with previously identified uplift across the fault during AD 770-1160. In the trench across the younger of the two Stansberry Lake scarps, six maximum 14C ages on detrital charcoal in pre-faulting B and C soil horizons and three minimum ages on a tree root in post-faulting colluvium, limit a single oblique-slip (right-lateral) surface faulting event to AD 410-990. Stratigraphy and sedimentary structures in the trench across the older scarp at the same site show eroded glacial sediments, probably cut by a meltwater channel, with no evidence of post-glacial deformation. At the northeast end of the Sunset Beach scarps, charcoal ages in two trenches across graben-forming scarps give a close maximum age of 1.3 ka for graben formation. The ages that best limit the time of faulting and folding in each of the trenches are consistent with the time of the large regional earthquake in southern Puget Sound about AD 900-930.
Monotone Boolean approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
Rockwell, Thomas K.; Lindvall, Scott; Dawson, Tim; Langridge, Rob; Lettis, William; Klinger, Yann
2002-01-01
Surveys of multiple tree lines within groves of poplar trees, planted in straight lines across the fault prior to the earthquake, show surprisingly large lateral variations. In one grove, slip increases by nearly 1.8 m, or 35% of the maximum measured value, over a lateral distance of nearly 100 m. This and other observations along the 1999 ruptures suggest that the lateral variability of slip observed from displaced geomorphic features in many earthquakes of the past may represent a combination of (1) actual differences in slip at the surface and (2) the difficulty in recognizing distributed nonbrittle deformation.
Using minimal spanning trees to compare the reliability of network topologies
NASA Technical Reports Server (NTRS)
Leister, Karen J.; White, Allan L.; Hayhurst, Kelly J.
1990-01-01
Graph theoretic methods are applied to compute the reliability of several types of networks of moderate size. The graph theory methods used are minimal spanning trees for networks with bi-directional links and the related concept of strongly connected directed graphs for networks with uni-directional links. Ring networks and braided networks are compared, both for the case where only the links fail and for the case where both links and nodes fail. Two different failure modes for the links are considered: in one, the link no longer carries messages; in the other, the link delivers incorrect messages. Link-redundancy and path-redundancy are described and compared as methods to achieve reliability. All the computations are carried out by means of a fault tree program.
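The comparison described above was computed with a fault tree program, but the same question (what is the probability that the surviving links still connect every node?) can be illustrated with a small Monte Carlo sketch. The six-node ring and "braided" topologies and the link failure probability below are illustrative stand-ins, not the networks analyzed in the paper.

```python
# Sketch: comparing the reliability of two small topologies when only links fail,
# using a Monte Carlo estimate of the probability that the surviving links still
# connect all nodes.  Topologies and failure probability are hypothetical.
import random

def connected(n_nodes, links):
    adj = {i: set() for i in range(n_nodes)}
    for a, b in links:
        adj[a].add(b); adj[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        for nbr in adj[stack.pop()]:
            if nbr not in seen:
                seen.add(nbr); stack.append(nbr)
    return len(seen) == n_nodes

def reliability(n_nodes, links, p_link_fail, trials=50_000):
    ok = sum(connected(n_nodes, [l for l in links if random.random() > p_link_fail])
             for _ in range(trials))
    return ok / trials

ring = [(i, (i + 1) % 6) for i in range(6)]
braided = ring + [(i, (i + 2) % 6) for i in range(6)]    # extra "braid" links
print("ring    :", reliability(6, ring, 0.05))
print("braided :", reliability(6, braided, 0.05))
```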
NASA Technical Reports Server (NTRS)
Patterson, Jonathan D.; Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) being presented by Dr. Stephen B. Johnson, described in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper will describe the core framework used to implement the GFTbased systems engineering process using the Systems Modeling Language (SysML). These two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described within the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required in the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals and requirements, functions, and its linkage to design. As a growing standard for systems engineering, it is important to develop methods to implement GFT in SysML. Proposed Method of Solution: Many of the central concepts of the SysML language are needed to implement a GFT for large complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.
FTAPE: A fault injection tool to measure fault tolerance
NASA Technical Reports Server (NTRS)
Tsai, Timothy K.; Iyer, Ravishankar K.
1995-01-01
The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.
NASA Technical Reports Server (NTRS)
Lala, J. H.; Smith, T. B., III
1983-01-01
The experimental test and evaluation of the Fault-Tolerant Multiprocessor (FTMP) is described. Major objectives of this exercise include expanding validation envelope, building confidence in the system, revealing any weaknesses in the architectural concepts and in their execution in hardware and software, and in general, stressing the hardware and software. To this end, pin-level faults were injected into one LRU of the FTMP and the FTMP response was measured in terms of fault detection, isolation, and recovery times. A total of 21,055 stuck-at-0, stuck-at-1 and invert-signal faults were injected in the CPU, memory, bus interface circuits, Bus Guardian Units, and voters and error latches. Of these, 17,418 were detected. At least 80 percent of undetected faults are estimated to be on unused pins. The multiprocessor identified all detected faults correctly and recovered successfully in each case. Total recovery time for all faults averaged a little over one second. This can be reduced to half a second by including appropriate self-tests.
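For reference, the raw detection coverage implied by the counts quoted above is the simple ratio below (before crediting the undetected faults estimated to lie on unused pins):

```latex
C_{\mathrm{det}} = \frac{17{,}418}{21{,}055} \approx 0.827 \quad (\approx 83\%)
```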
Architecture Analysis with AADL: The Speed Regulation Case-Study
2014-11-01
Overview: Functional Hazard Analysis (FHA), a failures inventory with description, classification, etc.; Fault-Tree Analysis (FTA), dependencies between... (Julien Delange, Pittsburgh, PA 15213).
Journal of Air Transportation, Volume 12, No. 2 (ATRS Special Edition)
NASA Technical Reports Server (NTRS)
Bowen, Brent D. (Editor); Kabashkin, Igor (Editor); Fink, Mary (Editor)
2007-01-01
Topics covered include: Competition and Change in the Long-Haul Markets from Europe; Insights into the Maintenance, Repair, and Overhaul Configurations of European Airlines; Validation of Fault Tree Analysis in Aviation Safety Management; An Investigation into Airline Service Quality Performance between U.S. Legacy Carriers and Their EU Competitors and Partners; and Climate Impact of Aircraft Technology and Design Changes.
Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA
Fuentes-Bargues, José Luis; González-Cruz, Mª Carmen; González-Gaya, Cristina; Baixauli-Pérez, Mª Piedad
2017-01-01
The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that analysis and control of the risks involved are required. This paper presents a methodology for risk analysis in the chemical and allied industries that is based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, fault tree analysis (FTA). Results from FTA allow prioritizing the preventive and corrective measures to minimize the probability of failure. A case study is analyzed; it consists of the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). The HAZOP analysis shows that the loading and unloading areas are the most sensitive areas of the plant and that the most significant danger is a fuel spill. The FTA indicates that the most likely event is a fuel spill in the tank truck loading area. A sensitivity analysis of the FTA results shows the importance of the human factor in all sequences of the possible accidents, so it should be mandatory to improve the training of the staff of the plants. PMID:28665325
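The quantitative step of an FTA of this kind reduces, for independent basic events, to multiplying probabilities through AND gates and complementing products of complements through OR gates. The sketch below shows that composition with hypothetical events loosely modeled on the spill scenario; none of the probabilities come from the case study.

```python
# Sketch: quantifying a small fault tree with independent basic events.
# An AND gate multiplies probabilities; an OR gate is 1 - product of (1 - p).
# Event names and probabilities are hypothetical, not taken from the case study.
from math import prod

def and_gate(*p):
    return prod(p)

def or_gate(*p):
    return 1.0 - prod(1.0 - x for x in p)

p_hose_rupture   = 1e-3
p_overfill       = 5e-4
p_operator_error = 2e-3      # human factor, dominant in the sensitivity analysis
p_valve_fails    = 1e-3

# Top event: fuel spill in the tank-truck loading area.
p_uncontained_release = and_gate(p_overfill, p_valve_fails)
p_spill = or_gate(p_hose_rupture, p_operator_error, p_uncontained_release)
print(f"P(spill) ~ {p_spill:.4e}")
```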
TH-EF-BRC-04: Quality Management Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yorke, E.; Huq, M.; Dunscombe, P.
2016-06-15
This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100’s risk analysis: Process mapping, Failure-Modes and Effects Analysis and fault-tree analysis will be introduced with a 5 minute refresher presentation and each presentation will be followed by a 30 minute small group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
Rath, Frank
2008-01-01
This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve through-put, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.
Liu, Xiao Yu; Xue, Kang Ning; Rong, Rong; Zhao, Chi Hong
2016-01-01
Epidemic hemorrhagic fever has been an ongoing threat to laboratory personnel involved in animal care and use. Laboratory transmissions and severe infections have occurred over the past twenty years, even though standards and regulations for laboratory biosafety have been issued, upgraded, and implemented in China. Therefore, there is an urgent need to identify risk factors and to seek effective preventive measures that can curb the incidence of epidemic hemorrhagic fever among laboratory personnel. In the present study, we reviewed literature relevant to animal-laboratory-acquired hemorrhagic fever infections reported from 1995 to 2015, and analyzed these incidents using fault tree analysis (FTA). The results of the analysis showed that purchasing qualified animals and guarding against wild rats, which together ensure that laboratory animals are free of hantaviruses, are the basic measures for preventing infections. In daily management, awareness of personal protection and the ability to apply it need to be further improved. Vaccination is undoubtedly the most direct and effective method, but it plays its role only after infection, so avoiding infection cannot rely entirely on vaccination. Copyright © 2016 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
Fault tree analysis of the causes of waterborne outbreaks.
Risebro, Helen L; Doria, Miguel F; Andersson, Yvonne; Medema, Gertjan; Osborn, Keith; Schlosser, Olivier; Hunter, Paul R
2007-01-01
Prevention and containment of outbreaks requires examination of the contribution and interrelation of outbreak causative events. An outbreak fault tree was developed and applied to 61 enteric outbreaks related to public drinking water supplies in the EU. A mean of 3.25 causative events per outbreak were identified; each event was assigned a score based on percentage contribution per outbreak. Source and treatment system causative events often occurred concurrently (in 34 outbreaks). Distribution system causative events occurred less frequently (19 outbreaks) but were often solitary events contributing heavily towards the outbreak (a mean % score of 87.42). Livestock and rainfall in the catchment with no/inadequate filtration of water sources contributed concurrently to 11 of 31 Cryptosporidium outbreaks. Of the 23 protozoan outbreaks experiencing at least one treatment causative event, 90% of these events were filtration deficiencies; by contrast, for bacterial, viral, gastroenteritis and mixed pathogen outbreaks, 75% of treatment events were disinfection deficiencies. Roughly equal numbers of groundwater and surface water outbreaks experienced at least one treatment causative event (18 and 17 outbreaks, respectively). Retrospective analysis of multiple outbreaks of enteric disease can be used to inform outbreak investigations, facilitate corrective measures, and further develop multi-barrier approaches.
Fault and event tree analyses for process systems risk analysis: uncertainty handling formulations.
Ferdous, Refaul; Khan, Faisal; Sadiq, Rehan; Amyotte, Paul; Veitch, Brian
2011-01-01
Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event (ETA) and fault tree analyses (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed. © 2010 Society for Risk Analysis.
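The simplest way to see how imprecise likelihoods propagate through FTA/ETA gates is with interval bounds (equivalently, a single alpha-cut of a triangular fuzzy number), as in the sketch below. The basic events and bounds are hypothetical, independence is assumed, and the article's evidence-theory treatment and dependency coefficient are not reproduced.

```python
# Sketch: propagating imprecise basic-event likelihoods through fault tree gates
# using interval arithmetic (one alpha-cut of a fuzzy number).  Events are
# assumed independent; all bounds are hypothetical.

def and_gate(intervals):
    prod_lo, prod_hi = 1.0, 1.0
    for lo, hi in intervals:
        prod_lo *= lo
        prod_hi *= hi
    return prod_lo, prod_hi

def or_gate(intervals):
    prod_lo, prod_hi = 1.0, 1.0
    for lo, hi in intervals:
        prod_lo *= (1.0 - lo)    # complements of lower bounds
        prod_hi *= (1.0 - hi)    # complements of upper bounds
    return 1.0 - prod_lo, 1.0 - prod_hi

# Hypothetical basic events with lower/upper probability bounds.
pump_fails   = (1e-3, 5e-3)
sensor_fails = (5e-4, 2e-3)
alarm_fails  = (1e-4, 1e-3)

subsystem = or_gate([pump_fails, sensor_fails])
top_event = and_gate([subsystem, alarm_fails])
print(f"top event probability in [{top_event[0]:.2e}, {top_event[1]:.2e}]")
```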
Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.
2012-01-01
The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.
NASA Astrophysics Data System (ADS)
Aprilia, Ayu Rizky; Santoso, Imam; Ekasari, Dhita Murita
2017-05-01
Yogurt is a milk-based product with beneficial effects on health. The yogurt production process is very susceptible to failure because it involves bacteria and fermentation. For an industry, such failures may cause harm and have a negative impact. For a product to be successful and profitable, the risks that may occur during the production process must be analyzed. Risk analysis can identify the risks in detail, prevent them, and determine how they are handled so that they can be minimized. Therefore, this study analyzes the risks of the production process with a case study in CV.XYZ. The methods used in this research are Fuzzy Failure Mode and Effect Analysis (fuzzy FMEA) and Fault Tree Analysis (FTA). The results showed six risks arising from equipment, raw material, and process variables. These include the critical risk of an inadequate aseptic process, more specifically damage to the yogurt starter due to contamination by fungus or other bacteria, and a lack of sanitation equipment. The quantitative FTA showed that the highest probability is that of an inadequate aseptic process, with a risk of 3.902%. Recommendations for improvement include establishing SOPs (Standard Operating Procedures) covering the process, workers, and environment; controlling the yogurt starter; and improving production planning and equipment sanitation using hot-water immersion.
Testing For EM Upsets In Aircraft Control Computers
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.
1994-01-01
Effects of transient electrical signals evaluated in laboratory tests. Method of evaluating nominally fault-tolerant, aircraft-type digital-computer-based control system devised. Provides for evaluation of susceptibility of system to upset and evaluation of integrity of control when system subjected to transient electrical signals like those induced by electromagnetic (EM) source, in this case lightning. Beyond aerospace applications, fault-tolerant control systems are becoming more widespread in industry, such as in automobiles. Method supports practical, systematic tests for evaluation of designs of fault-tolerant control systems.
Intermittent/transient fault phenomena in digital systems
NASA Technical Reports Server (NTRS)
Masson, G. M.
1977-01-01
An overview of the intermittent/transient (IT) fault study is presented. An interval survivability evaluation of digital systems for IT faults is discussed along with a method for detecting and diagnosing IT faults in digital systems.
Development of a Software Safety Process and a Case Study of Its Use
NASA Technical Reports Server (NTRS)
Knight, J. C.
1997-01-01
Research in the year covered by this reporting period has been primarily directed toward the following areas: (1) Formal specification of user interfaces; (2) Fault-tree analysis including software; (3) Evaluation of formal specification notations; (4) Evaluation of formal verification techniques; (5) Expanded analysis of the shell architecture concept; (6) Development of techniques to address the problem of information survivability; and (7) Development of a sophisticated tool for the manipulation of formal specifications written in Z. This report summarizes activities under the grant. The technical results relating to this grant and the remainder of the principal investigator's research program are contained in various reports and papers. The remainder of this report is organized as follows. In the next section, an overview of the project is given. This is followed by a summary of accomplishments during the reporting period and details of students funded. Seminars presented describing work under this grant are listed in the following section, and the final section lists publications resulting from this grant.
Timing of late Holocene surface rupture of the Wairau Fault, Marlborough, New Zealand
Zachariasen, J.; Berryman, K.; Langridge, Rob; Prentice, C.; Rymer, M.; Stirling, M.; Villamor, P.
2006-01-01
Three trenches excavated across the central portion of the right-lateral strike-slip Wairau Fault in South Island, New Zealand, exposed a complex set of fault strands that have displaced a sequence of late Holocene alluvial and colluvial deposits. Abundant charcoal fragments provide age control for various stratigraphic horizons dating back to c. 5610 yr ago. Faulting relations from the Wadsworth trench show that the most recent surface rupture event occurred at least 1290 yr and at most 2740 yr ago. Drowned trees in landslide-dammed Lake Chalice, in combination with charcoal from the base of an unfaulted colluvial wedge at Wadsworth trench, suggest a narrower time bracket for this event of 1811-2301 cal. yr BP. The penultimate faulting event occurred between c. 2370 and 3380 yr, and possibly near 2680 ± 60 cal. yr BP, when data from both the Wadsworth and Dillon trenches are combined. Two older events have been recognised from Dillon trench but remain poorly dated. A probable elapsed time of at least 1811 yr since the last surface rupture, and an average slip rate estimate for the Wairau Fault of 3-5 mm/yr, suggest that at least 5.4 m and up to 11.5 m of elastic shear strain has accumulated since the last rupture. This is near to or greater than the single-event displacement estimates of 5-7 m. The average recurrence interval for surface rupture of the fault determined from the trench data is 1150-1400 yr. Although the uncertainties in the timing of faulting events and variability in inter-event times remain high, the time elapsed since the last event is in the order of 1-2 times the average recurrence interval, implying that the Wairau Fault is near the end of its interseismic period. © The Royal Society of New Zealand 2006.
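As a quick check, the accumulated-strain bounds quoted above follow from multiplying the slip-rate bounds by the elapsed-time bounds (1811 yr minimum elapsed time; 2301 cal. yr BP at the older end of the event window):

```latex
3\,\mathrm{mm/yr} \times 1811\,\mathrm{yr} \approx 5.4\,\mathrm{m},
\qquad
5\,\mathrm{mm/yr} \times 2301\,\mathrm{yr} \approx 11.5\,\mathrm{m}
```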
NASA Astrophysics Data System (ADS)
Dura-Gomez, I.; Addison, A.; Knapp, C. C.; Talwani, P.; Chapman, A.
2005-12-01
During the 1886 Charleston earthquake, two parallel tabby walls of Fort Dorchester broke left-laterally, and a strike of ~N25°W was inferred for the causative Sawmill Branch fault. To better define this fault, which does not have any surface expression, we planned to cut trenches across it. However, as Fort Dorchester is a protected archeological site, we were required to locate the fault accurately away from the fort, before permission could be obtained to cut short trenches. The present GPR investigations were planned as a preliminary step to determine locations for trenching. A pulseEKKO 100 GPR was used to collect data along eight profiles (varying in length from 10 m to 30 m) that were run across the projected strike of the fault, and one 50 m long profile that was run parallel to it. The locations of the profiles were obtained using a total station. To capture the signature of the fault, sixteen common-offset (COS) lines were acquired by using different antennas (50, 100 and 200 MHz) and stacking 64 times to increase the signal-to-noise ratio. The locations of trees and stumps were recorded. In addition, two common-midpoint (CMP) tests were carried out, and gave an average velocity of about 0.097 m/ns. Processing included the subtraction of the low frequency "wow" on the trace (dewow), automatic gain control (AGC) and the application of bandpass filters. The signals using the 50 MHz, 100 MHz and 200 MHz antennas were found to penetrate up to about 30 meters, 20 meters and 12 meters respectively. Vertically offset reflectors and disruptions of the electrical signal were used to infer the location of the fault(s). Comparisons of the locations of these disruptions on various lines were used to infer the presence of a N30°W fault zone. We plan to confirm these locations by cutting shallow trenches.
Stollhofen, Harald; Stanistreet, Ian G
2012-08-01
Normal faults displacing Upper Bed I and Lower Bed II strata of the Plio-Pleistocene Lake Olduvai were studied on the basis of facies and thickness changes as well as diversion of transport directions across them in order to establish criteria for their synsedimentary activity. Decompacted differential thicknesses across faults were then used to calculate average fault slip rates of 0.05-0.47 mm/yr for the Tuff IE/IF interval (Upper Bed I) and 0.01-0.13 mm/yr for the Tuff IF/IIA section (Lower Bed II). Considering fault recurrence intervals of ~1000 years, fault scarp heights potentially achieved average values of 0.05-0.47 m and a maximum value of 5.4 m during Upper Bed I, which dropped to average values of 0.01-0.13 m and a localized maximum of 0.72 m during Lower Bed II deposition. Synsedimentary faults were of importance to the form and paleoecology of landscapes utilized by early hominins, most traceably and provably Homo habilis as illustrated by the recurrent density and compositional pattern of Oldowan stone artifact assemblage variation across them. Two potential relationship factors are: (1) fault scarp topographies controlled sediment distribution, surface, and subsurface hydrology, and thus vegetation, so that a resulting mosaic of microenvironments and paleoecologies provided a variety of opportunities for omnivorous hominins; and (2) they ensured that the most voluminous and violent pyroclastic flows from the Mt. Olmoti volcano were dammed and conduited away from the Olduvai Basin depocenter, when otherwise a single or set of ignimbrite flows might have filled and devastated the topography that contained the central lake body. In addition, hydraulically active faults may have conduited groundwater, supporting freshwater springs and wetlands and favoring growth of trees. Copyright © 2011 Elsevier Ltd. All rights reserved.
Expert systems for fault diagnosis in nuclear reactor control
NASA Astrophysics Data System (ADS)
Jalel, N. A.; Nicholson, H.
1990-11-01
An expert system for accident analysis and fault diagnosis for the Loss Of Fluid Test (LOFT) reactor, a small-scale pressurized water reactor, was developed for a personal computer. The system's knowledge is represented using a production-rule approach with a backward-chaining inference engine. The database of the system includes simulated dependent state variables of the LOFT reactor model. Another system is designed to assist the operator in choosing the appropriate cooling mode and to diagnose faults in the selected cooling system. Its knowledge base is built from the response tree, which links a list of very specific accident sequences to a set of generic emergency procedures; the response tree helps the operator monitor system status, differentiate between accident sequences, and select the correct procedures. Both systems are written in the TURBO PROLOG language and can be run on an IBM PC compatible with 640 KB RAM, a 40 MB hard disk and color graphics.
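For illustration only, a minimal backward-chaining step over production rules in Python (not the TURBO PROLOG implementation described above); the rule names, symptoms and facts are hypothetical:

```python
# Hypothetical production rules: conclusion -> list of antecedents that must all hold.
rules = {
    "loss_of_coolant":      ["primary_pressure_low", "containment_humidity_high"],
    "primary_pressure_low": ["pressurizer_level_dropping"],
}

# Facts supplied by the (simulated) plant state variables.
facts = {"pressurizer_level_dropping", "containment_humidity_high"}

def prove(goal, rules, facts):
    """Backward chaining: a goal holds if it is a fact or all antecedents of its rule hold."""
    if goal in facts:
        return True
    antecedents = rules.get(goal)
    if antecedents is None:
        return False
    return all(prove(a, rules, facts) for a in antecedents)

print(prove("loss_of_coolant", rules, facts))   # True for this hypothetical data set
```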
The use of automatic programming techniques for fault tolerant computing systems
NASA Technical Reports Server (NTRS)
Wild, C.
1985-01-01
It is conjectured that the production of software for ultra-reliable computing systems such as those required by the Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection, as well as the automatic generation of assertions and test cases from abstract data type specifications, are outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge-based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.
A method of real-time fault diagnosis for power transformers based on vibration analysis
NASA Astrophysics Data System (ADS)
Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie
2015-11-01
In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership to each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.
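A hedged sketch of the probability-producing SVM stage described above, using scikit-learn as a stand-in; the feature vectors, class labels and kernel choice are synthetic assumptions, not the paper's vibration data:

```python
# One node of a health-state classifier: an SVM that outputs posterior class probabilities.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_healthy = rng.normal(0.0, 1.0, size=(50, 8))   # 8 vibration features per sample (synthetic)
X_faulty  = rng.normal(1.5, 1.0, size=(50, 8))
X = np.vstack([X_healthy, X_faulty])
y = np.array([0] * 50 + [1] * 50)                # 0 = healthy, 1 = faulty

clf = make_pipeline(StandardScaler(), SVC(probability=True, kernel="rbf"))
clf.fit(X, y)

sample = rng.normal(1.2, 1.0, size=(1, 8))
print(clf.predict_proba(sample))   # posterior probabilities of membership to each class
```

In the paper's scheme, several such classifiers would sit at the nodes of a binary decision tree that routes a transformer toward its most likely health state.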
NASA Astrophysics Data System (ADS)
Chen, BinQiang; Zhang, ZhouSuo; Zi, YanYang; He, ZhengJia; Sun, Chuang
2013-10-01
Detecting transient vibration signatures is of vital importance for vibration-based condition monitoring and fault detection of rotating machinery. However, raw mechanical signals collected by vibration sensors are generally mixtures of the physical vibrations of the multiple mechanical components installed in the examined machinery. Fault-generated incipient vibration signatures masked by interfering content are difficult to identify. The fast kurtogram (FK) is a concise and efficient tool for characterizing such vibration features, but the multi-rate filter-bank (MRFB) and the spectral kurtosis (SK) indicator of the FK are less powerful when strong interfering vibration content exists, especially when the FK is applied to vibration signals of short duration. Impulsive interfering content not actually induced by mechanical faults complicates the analysis and leads to an incorrect choice of the optimal analysis subband, so the original FK may leave out the essential fault signatures. To enhance the performance of the FK for industrial applications, an improved version, named the "fast spatial-spectral ensemble kurtosis kurtogram", is presented. In the proposed technique, discrete quasi-analytic wavelet tight frame (QAWTF) expansion methods are incorporated as the detection filters. The QAWTF, constructed on the basis of the dual-tree complex wavelet transform, possesses better transient-signature extraction ability and enhanced time-frequency localizability compared with conventional wavelet packet transforms (WPTs). Moreover, in the constructed QAWTF, a non-dyadic ensemble wavelet subband generating strategy is put forward to produce extra wavelet subbands capable of identifying fault features located in the transition bands of the WPT. In addition, an enhanced indicator of signal impulsiveness, named "spatial-spectral ensemble kurtosis" (SSEK), is put forward and used as the quantitative measure for selecting the optimal analysis parameters. The SSEK indicator is more robust in evaluating the impulsiveness of vibration signals owing to its better suppression of Gaussian noise, harmonics and sporadic impulsive shocks. Numerical validations, an experimental test and two engineering applications were used to verify the effectiveness of the proposed technique. The results demonstrate that the proposed technique provides more robust detection of transient vibration content than the original FK and the WPT-based FK method, especially when applied to vibration signals of relatively limited duration.
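For orientation, a simplified sketch of the classical per-band spectral kurtosis that the kurtogram builds on (this illustrates the idea only; it is not the authors' QAWTF/SSEK construction, and the signal and STFT parameters are synthetic assumptions):

```python
# Spectral-kurtosis style indicator per frequency bin, computed from an STFT.
import numpy as np
from scipy.signal import stft

fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
signal[::400] += 3.0                      # repetitive impulses standing in for a fault

f, _, Z = stft(signal, fs=fs, nperseg=64)
mag2 = np.abs(Z) ** 2                     # energy of each time-frequency cell

# SK per bin: normalized fourth moment of the subband envelope energy over time, minus 2.
n_frames = mag2.shape[1]
sk = n_frames * np.sum(mag2 ** 2, axis=1) / np.sum(mag2, axis=1) ** 2 - 2.0

print(f[np.argmax(sk)], "Hz band shows the most impulsive content")
```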
A benchmark for fault tolerant flight control evaluation
NASA Astrophysics Data System (ADS)
Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.
2013-12-01
A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.
Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty
NASA Astrophysics Data System (ADS)
Woo, G.
2005-12-01
Insurance loss modeling for earthquakes utilizes available maps of active faulting produced by geoscientists. All such maps are subject to uncertainty, arising from lack of knowledge of fault geometry and rupture history. Field work to undertake geological fault investigations drains human and monetary resources, and this inevitably limits the resolution of fault parameters. Some areas are more accessible than others; some may be of greater social or economic importance than others; some areas may be investigated more rapidly or diligently than others; or funding restrictions may have curtailed the extent of the fault mapping program. In contrast with the aleatory uncertainty associated with the inherent variability in the dynamics of earthquake fault rupture, uncertainty associated with lack of knowledge of fault geometry and rupture history is epistemic. The extent of this epistemic uncertainty may vary substantially from one regional or national fault map to another. However aware the local cartographer may be, this uncertainty is generally not conveyed in detail to the international map user. For example, an area may be left blank for a variety of reasons, ranging from lack of sufficient investigation of a fault to lack of convincing evidence of activity. Epistemic uncertainty in fault parameters is of concern in any probabilistic assessment of seismic hazard, not least in insurance earthquake risk applications. A logic-tree framework is appropriate for incorporating epistemic uncertainty. Some insurance contracts cover specific high-value properties or transport infrastructure, and therefore are extremely sensitive to the geometry of active faulting. Alternative Risk Transfer (ART) to the capital markets may also be considered. In order for such insurance or ART contracts to be properly priced, uncertainty should be taken into account. Accordingly, an estimate is needed for the likelihood of surface rupture capable of causing severe damage. Especially where a high deductible is in force, this requires estimation of the epistemic uncertainty on fault geometry and activity. Transport infrastructure insurance is of practical interest in seismic countries. On the North Anatolian Fault in Turkey, there is uncertainty over an unbroken segment between the eastern end of the Düzce Fault and Bolu. This may have ruptured during the 1944 earthquake. Existing hazard maps may simply use a question mark to flag uncertainty. However, a far more informative type of hazard map might express spatial variations in the confidence level associated with a fault map. Through such visual guidance, an insurance risk analyst would be better placed to price earthquake cover, allowing for epistemic uncertainty.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien
2017-10-01
Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes into account this possibility by considering a system-level approach rather than an individual-fault-level approach using the geological, seismological and geodetical information to invert the earthquake rates. In many places of the world seismological and geodetical information along fault networks is often not well constrained. There is therefore a need to propose a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures and consider single faults or FtF ruptures as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed following two constraints: the magnitude frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape and the rate of earthquakes on each fault is determined by the specific slip rate of each segment depending on the possible FtF ruptures. The modeled earthquake rates are then compared to the available independent data (geodetical, seismological and paleoseismological data) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advancements have been made in the understanding of the geological slip rates of the complex network of normal faults which are accommodating the ˜ 15 mm yr-1 north-south extension. Modeling results show that geological, seismological and paleoseismological rates of earthquakes cannot be reconciled with only single-fault-rupture scenarios and require hypothesizing a large spectrum of possible FtF rupture sets. In order to fit the imposed regional Gutenberg-Richter (GR) MFD target, some of the slip along certain faults needs to be accommodated either with interseismic creep or as post-seismic processes. Furthermore, computed individual faults' MFDs differ depending on the position of each fault in the system and the possible FtF ruptures associated with the fault. Finally, a comparison of modeled earthquake rupture rates with those deduced from the regional and local earthquake catalog statistics and local paleoseismological data indicates a better fit with the FtF rupture set constructed with a distance criterion of 5 km rather than 3 km, suggesting a high connectivity of faults in the WCR fault system.
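A rough, hypothetical sketch of the kind of budget that underlies such methodologies: a fault's geologic slip rate is converted to a seismic moment rate, which is then spread over a truncated Gutenberg-Richter magnitude-frequency distribution. All parameter values below are illustrative assumptions, not those of the WCR study.

```python
# Slip-rate -> moment-rate -> Gutenberg-Richter earthquake rates (illustrative values).
import numpy as np

mu = 3.0e10              # shear modulus, Pa
area = 20e3 * 10e3       # fault area: 20 km x 10 km, in m^2
slip_rate = 3e-3         # 3 mm/yr expressed in m/yr
moment_rate = mu * area * slip_rate          # N*m per year available to earthquakes

b = 1.0                  # Gutenberg-Richter b-value
mags = np.arange(5.0, 7.05, 0.1)             # magnitude bins
rel_rate = 10.0 ** (-b * mags)               # relative GR occurrence rates
moments = 10.0 ** (1.5 * mags + 9.1)         # seismic moment of each bin (Hanks-Kanamori)

scale = moment_rate / np.sum(rel_rate * moments)   # fit the MFD shape to the moment budget
annual_rates = scale * rel_rate
print(dict(zip(np.round(mags, 1), annual_rates)))
```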
NASA Astrophysics Data System (ADS)
Yang, Wen-Xian
2006-05-01
Available machine fault diagnostic methods show unsatisfactory performance in both on-line and intelligent analyses because their operation involves intensive calculation and is labour intensive. Aiming to improve this situation, this paper describes the development of an intelligent approach using the Genetic Programming (GP) method. Owing to the simple calculation of the mathematical model being constructed, different kinds of machine faults may be diagnosed correctly and quickly. Moreover, human input is significantly reduced in the process of fault diagnosis. The effectiveness of the proposed strategy is validated by an illustrative example, in which three kinds of valve states inherent in a six-cylinder, four-stroke-cycle diesel engine, i.e. the normal condition, valve-tappet clearance and gas leakage faults, are identified. In the example, 22 mathematical functions have been specially designed and 8 easily obtained signal features are used to construct the diagnostic model. Differently from existing GPs, the diagnostic tree used in the algorithm is constructed in an intelligent way by applying a power-weight coefficient to each feature. The power-weight coefficients vary adaptively between 0 and 1 during the evolutionary process. Moreover, different evolutionary strategies are employed for selecting the diagnostic features and functions, respectively, so that the mathematical functions are sufficiently utilized and, at the same time, the repeated use of signal features is avoided. The experimental results are illustrated diagrammatically in the following sections.
NASA Astrophysics Data System (ADS)
Okumura, K.
2011-12-01
Accurate location and geometry of seismic sources are critical for estimating strong ground motion. A complete and precise rupture history is also critical for estimating the probability of future events. In order to better forecast future earthquakes and to reduce seismic hazards, we should consider all options and choose the most plausible parameters. Multiple options in logic trees are acceptable only after thorough examination of contradicting estimates and should not result from easy compromise or epoché. In the process of preparing and revising the Japanese probabilistic and deterministic earthquake hazard maps by the Headquarters for Earthquake Research Promotion since 1996, many decisions were made to select plausible parameters, but many contradicting estimates have been left without thorough examination. There are several highly active faults in central Japan, such as the Itoigawa-Shizuoka Tectonic Line active fault system (ISTL), the West Nagano Basin fault system (WNBF), the Inadani fault system (INFS), and the Atera fault system (ATFS). The highest slip rate and the shortest recurrence interval are ~1 cm/yr and 500 to 800 years, respectively, and the estimated maximum magnitude is 7.5 to 8.5. These faults are very hazardous because almost the entire population and industry are located above the faults within tectonic depressions. As to fault location, most uncertainties arise from the interpretation of geomorphic features. Geomorphological interpretation without geological and structural insight often leads to incorrect mapping. Though mapping a longer, possibly non-existent fault may seem the safer estimate, incorrectness harms the reliability of the forecast. It also does not greatly affect strong-motion estimates, but it is misleading for surface-displacement issues. Fault geometry, on the other hand, is very important for estimating the intensity distribution. For the middle portion of the ISTL, fast left-lateral strike slip of up to 1 cm/yr is evident. Recent seismicity, possibly induced by the 2011 Tohoku earthquake, shows pure strike-slip. However, thrusts are modeled from seismic profiles and gravity anomalies. Therefore, two contradicting models are presented for strong-motion estimates. There should be a unique solution for the geometry, which will be discussed. As to the rupture history, there is plenty of paleoseismological evidence that supports segmentation of the faults above. However, in most fault zones, the largest and sometimes possibly less frequent earthquakes are modeled. Segmentation and modeling of coming earthquakes should be examined more carefully, without leaving the contradictions unresolved.
Time-Tagged Risk/Reliability Assessment Program for Development and Operation of Space System
NASA Astrophysics Data System (ADS)
Kubota, Yuki; Takegahara, Haruki; Aoyagi, Junichiro
We have investigated a new method of risk/reliability assessment for the development and operation of space systems. It is difficult to evaluate the risk of spacecraft because of their long operation times, maintenance-free design, and the difficulty of testing under ground conditions. Conventional methods include FMECA, FTA, ETA and others; these are not sufficient to assess chronological anomalies, and sharing information during R&D is problematic. A new method of risk and reliability assessment, T-TRAP (Time-tagged Risk/Reliability Assessment Program), is proposed as a management tool for the development and operation of space systems. T-TRAP, consisting of time-resolved fault tree and criticality analyses, enables the responsible personnel, upon occurrence of an anomaly in the system, to quickly identify the failure cause and decide on corrective actions. This paper describes the T-TRAP method and its availability.
Reliability of excess-flow check-valves in turbine lubrication systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dundas, R.E.
1996-12-31
Reliability studies on excess-flow check valves installed in a gas turbine lubrication system for prevention of spray fires subsequent to fracture or separation of lube lines were conducted. Fault-tree analyses are presented for the case of failure of a valve to close when called upon by separation of a downstream line, as well as for the case of accidental closure during normal operation, leading to interruption of lubricating oil flow to a bearing. The probabilities of either of these occurrences are evaluated. The results of a statistical analysis of accidental closure of excess-flow check valves in commercial airplanes in the period 1986-91 are also given, as well as a summary of reliability studies on the use of these valves in residential gas installations, conducted under the sponsorship of the Gas Research Institute.
NASA Technical Reports Server (NTRS)
Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
This paper describes the core framework used to implement a Goal-Function Tree (GFT) based systems engineering process using the Systems Modeling Language. It defines a set of principles built upon by the theoretical approach described in the InfoTech 2013 ISHM paper titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management" presented by Dr. Stephen B. Johnson. Using the SysML language, the principles in this paper describe the expansion of the SysML language as a baseline in order to: hierarchically describe a system, describe that system functionally within success space, and allocate detection mechanisms to success functions for system protection.
Earthquake Rupture Forecast of M>= 6 for the Corinth Rift System
NASA Astrophysics Data System (ADS)
Scotti, O.; Boiselet, A.; Lyon-Caen, H.; Albini, P.; Bernard, P.; Briole, P.; Ford, M.; Lambotte, S.; Matrullo, E.; Rovida, A.; Satriano, C.
2014-12-01
Fourteen years of multidisciplinary observations and data collection in the Western Corinth Rift (WCR) near-fault observatory have been recently synthesized (Boiselet, Ph.D. 2014) for the purpose of providing earthquake rupture forecasts (ERF) of M>=6 in WCR. The main contribution of this work consisted in paving the road towards the development of a "community-based" fault model reflecting the level of knowledge gathered thus far by the WCR working group. The most relevant available data used for this exercise are: - onshore/offshore fault traces, based on geological and high-resolution seismics, revealing a complex network of E-W striking, ~10 km long fault segments; microseismicity recorded by a dense network ( > 60000 events; 1.5
Joshuva, A; Sugumaran, V
2017-03-01
Wind energy is one of the important renewable energy resources available in nature. It is one of the major resources for energy production because of its dependability, owing to the development of the technology, and its relatively low cost. Wind energy is converted into electrical energy using rotating blades. Due to environmental conditions and the large structure, the blades are subjected to various vibration forces that may cause damage to the blades. This leads to a loss in energy production and to turbine shutdown. The downtime can be reduced when the blades are diagnosed continuously using structural health condition monitoring. This is treated as a pattern recognition problem which consists of three phases, namely feature extraction, feature selection, and feature classification. In this study, statistical features were extracted from vibration signals, feature selection was carried out using a J48 decision tree algorithm, and feature classification was performed using the best-first tree algorithm and the functional trees algorithm. The better-performing algorithm is suggested for fault diagnosis of wind turbine blades. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
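A hedged sketch of the decision-tree stage: statistical features extracted from vibration records are classified with a tree learner. scikit-learn's CART tree is used here as a stand-in for the J48, best-first and functional tree algorithms of the study, and the "good blade" versus "damaged blade" signals are synthetic.

```python
# Statistical features from vibration signals -> decision-tree classification (illustrative).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

def statistical_features(sig):
    return [sig.mean(), sig.std(), sig.min(), sig.max(),
            ((sig - sig.mean()) ** 3).mean() / sig.std() ** 3,   # skewness
            ((sig - sig.mean()) ** 4).mean() / sig.std() ** 4]   # kurtosis

X, y = [], []
for label, scale in [(0, 1.0), (1, 2.5)]:       # 0 = good blade, 1 = damaged blade (synthetic)
    for _ in range(40):
        X.append(statistical_features(rng.normal(0.0, scale, 2000)))
        y.append(label)

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(tree.predict([statistical_features(rng.normal(0.0, 2.5, 2000))]))  # expect [1]
```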
Fulton, P.M.; Saffer, D.M.; Bekins, B.A.
2009-01-01
Many plate boundary faults, including the San Andreas Fault, appear to slip at unexpectedly low shear stress. One long-standing explanation for a "weak" San Andreas Fault is that fluid release by dehydration reactions during regional metamorphism generates elevated fluid pressures that are localized within the fault, reducing the effective normal stress. We evaluate this hypothesis by calculating realistic fluid production rates for the San Andreas Fault system, and incorporating them into 2-D fluid flow models. Our results show that for a wide range of permeability distributions, fluid sources from crustal dehydration are too small and short-lived to generate, sustain, or localize fluid pressures in the fault sufficient to explain its apparent mechanical weakness. This suggests that alternative mechanisms, possibly acting locally within the fault zone, such as shear compaction or thermal pressurization, may be necessary to explain a weak San Andreas Fault. More generally, our results demonstrate the difficulty of localizing large fluid pressures generated by regional processes within near-vertical fault zones. © 2009 Elsevier B.V.
China Report, Science and Technology, No. 197.
1983-05-13
eucalyptus trees both make excellent lumber for ship building. Bark from the casuarina equisetifolia contains 13-18 percent tannic acid which can be... depression zone, Hainan Island-easterly extension of Hainan Island-Dongsha continental slope uplift zone, northern Xisha Islands faulted trough and Zhongsha
1981-01-01
are applied to determine what system states (usually failed states) are possible; deductive methods are applied to determine how a given system state...Similar considerations apply to the single failures of CVA, BVB and CVB and this important additional information has been displayed in the principal...way. The point "maximum tolerable failure" corresponds to the survival point of the company building the aircraft. Above that point, only intolerable
Reconnaissance evaluation of the Lake Clark fault, Tyonek area
Alaska Division of Geological & Geophysical Surveys, PIR 2011-1
Koehler, R.D., and Reger, R.D.
2011
Keywords: Cook Inlet; Glacial Stratigraphy; Lake Clark Fault; Neotectonics; STATEMAP Project
Software-implemented fault insertion: An FTMP example
NASA Technical Reports Server (NTRS)
Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.
1987-01-01
This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of the time of insertion and the system workload. For fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, software-implemented fault insertion can be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification and recovery. Although software-inserted faults do not map directly to hardware-inserted faults, experiments show that software-implemented fault insertion is capable of emulating hardware fault insertion with greater ease and automation.
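A conceptual sketch of software-implemented fault insertion (not the FTMP implementation): a fault is injected by flipping a chosen bit of a simulated memory word, and a simple duplication check stands in for the system's detection logic.

```python
# Bit-flip fault injection into a simulated 32-bit word, with a trivial detection check.
import random

def inject_bit_flip(word, bit=None, width=32):
    """Flip one bit of `word`; if `bit` is None, pick it at random."""
    if bit is None:
        bit = random.randrange(width)
    return word ^ (1 << bit), bit

def detect_error(word, duplicate):
    """Duplication comparison standing in for the system's voting / error-detection logic."""
    return word != duplicate

original = 0x1234ABCD
corrupted, bit = inject_bit_flip(original)
print(f"flipped bit {bit}; error detected: {detect_error(corrupted, original)}")
```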
Common faults and their impacts for rooftop air conditioners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breuker, M.S.; Braun, J.E.
This paper identifies important faults and their performance impacts for rooftop air conditioners. The frequencies of occurrence and the relative costs of service for different faults were estimated through analysis of service records. Several of the important and difficult-to-diagnose refrigeration cycle faults were simulated in the laboratory. Also, the impacts on several performance indices were quantified through transient testing for a range of conditions and fault levels. The transient test results indicated that fault detection and diagnostics could be performed using methods that incorporate steady-state assumptions and models. Furthermore, the fault testing led to a set of generic rules for the impacts of faults on measurements that could be used for fault diagnoses. The average impacts of the faults on cooling capacity and coefficient of performance (COP) were also evaluated. Based upon the results, all of the faults are significant at the levels introduced, and should be detected and diagnosed by an FDD system. The data set obtained during this work was very comprehensive, and was used to design and evaluate the performance of an FDD method that will be reported in a future paper.
Fault-Sensitivity and Wear-Out Analysis of VLSI Systems.
1995-06-01
Recoverable fragments: mixed-mode hierarchical fault description and fault simulation; transient/stuck-at fault types specified by location and time; automatic fault injection and tracing. Reference fragment: J. Sosnowski, "Evaluation of transient hazards in microprocessor controllers," Digest, FTCS-16, The Sixteenth...
NASA Astrophysics Data System (ADS)
Naim, Nani Fadzlina; Ab-Rahman, Mohammad Syuhaimi; Kamaruddin, Nur Hasiba; Bakar, Ahmad Ashrif A.
2013-09-01
Nowadays, optical networks are becoming dense, and detecting faulty branches in tree-structured networks has become problematic. Conventional methods are inconvenient as they require an engineer to visit the failure site to check the optical fiber using an optical time-domain reflectometer. An innovative monitoring technique for tree-structured network topology in Ethernet passive optical networks (EPONs) is demonstrated, in which an erbium-doped fiber amplifier amplifies the traffic signal while the residual amplified spontaneous emission spectrum is used as the input signal to monitor the optical cable from the central office. Fiber Bragg gratings with distinct center wavelengths are employed to reflect the monitoring signals. Faulty branches of the tree-structured EPON can be identified using a simple and low-cost receiver. We show that this technique is capable of providing a monitoring range of up to 32 optical network units using a power meter with a sensitivity of -65 dBm while maintaining a bit error rate of 10^-13.
NASA Technical Reports Server (NTRS)
Smith, T. B., Jr.; Lala, J. H.
1983-01-01
The basic organization of the fault-tolerant multiprocessor (FTMP) is that of a general-purpose homogeneous multiprocessor. Three processors operate on a shared system (memory and I/O) bus. Replication and tight synchronization of all elements, together with hardware voting, are employed to detect and correct any single fault. Reconfiguration is then employed to repair a fault. Multiple faults may be tolerated as a sequence of single faults with repair between fault occurrences.
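A minimal sketch of the majority-voting idea mentioned above, shown in software for illustration only (the FTMP voter is implemented in hardware): three redundant channels are compared bit-wise and the 2-of-3 majority masks any single fault.

```python
# Bit-wise 2-of-3 majority vote over three redundant words.
def majority_vote(a, b, c):
    return (a & b) | (a & c) | (b & c)

good = 0b1011_0110
faulty = good ^ 0b0000_1000        # one channel corrupted by a single bit error
print(bin(majority_vote(good, good, faulty)))   # the fault is masked: equals `good`
```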
Analysis of landslide hazard area in Ludian earthquake based on Random Forests
NASA Astrophysics Data System (ADS)
Xie, J.-C.; Liu, R.; Li, H.-W.; Lai, Z.-L.
2015-04-01
With the development of machine learning theory, more and more algorithms are being evaluated for seismic landslide assessment. After the Ludian earthquake, the research team combined the special geological structure of the Ludian area with the results of the seismic field exploration, selecting slope (PODU), river distance (HL), fault distance (DC), seismic intensity (LD), the digital elevation model (DEM), and the normalized difference vegetation index (NDVI) derived from remote sensing images as evaluation factors. Because the relationships among these factors are fuzzy and the data contain heavy noise and high dimensionality, we introduce the random forest algorithm to tolerate these difficulties and obtain an evaluation of the Ludian landslide areas. To verify the accuracy of the result, the ROC curve was used as the evaluation standard; the AUC reaches 0.918, and the random forest's generalization error rate, estimated with out-of-bag (OOB) samples, decreases to 0.08 as the number of classification trees increases. Examining the final landslide inversion results, the paper reaches the statistical conclusion that nearly 80% of all landslides and collapses lie in areas of high and moderate susceptibility, showing that the forecast results are reasonable.
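A hedged sketch of the random-forest step: the six evaluation factors serve as inputs, and the out-of-bag (OOB) error tracks the generalization error as trees are added. The data below are synthetic placeholders for the Ludian factor rasters, and the factor relationship is invented for illustration.

```python
# Random forest with OOB error estimate on synthetic stand-ins for the six factors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
factors = ["PODU", "HL", "DC", "LD", "DEM", "NDVI"]
X = rng.normal(size=(500, len(factors)))
# Hypothetical ground truth: landslide occurrence driven mainly by slope and fault distance.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

for n_trees in (25, 100, 400):
    rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True, random_state=0).fit(X, y)
    print(n_trees, "trees -> OOB error:", round(1.0 - rf.oob_score_, 3))
```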
Active tectonics of the northern Mojave Desert: The 2017 Desert Symposium field trip road log
Miller, David; Reynolds, R.E.; Phelps, Geoffrey; Honke, Jeff; Cyr, Andrew J.; Buesch, David C.; Schmidt, Kevin M.; Losson, G.
2017-01-01
The 2017 Desert Symposium field trip will highlight recent work by U.S. Geological Survey geologists and geophysicists, who have been mapping young sediment and geomorphology associated with active tectonic features in the least well-known part of the eastern California Shear Zone (ECSZ). This area, stretching from Barstow eastward in a giant arc to end near the Granite Mountains on the south and the Avawatz Mountains on the north (Fig. 1-1), encompasses the two major structural components of the ECSZ—east-striking sinistral faults and northwest-striking dextral faults—as well as reverse-oblique and normal-oblique faults that are associated with topographic highs and sags, respectively. In addition, folds and stepovers (both restraining stepovers that form pop-up structures and releasing stepovers that create narrow basins) have been identified. The ECSZ is a segment in the ‘soft’ distributed deformation of the North American plate east of the San Andreas fault (Fig. 1-1), where it takes up approximately 20-25% of plate motion in a broad zone of right-lateral shear (Sauber et al., 1994). The ECSZ (sensu stricto) begins in the Joshua Tree area and passes north through the Mojave Desert, past the Owens Valley-to-Death Valley swath and northward, where it is termed the Walker Lane. It has been defined as the locus of active faulting (Dokka and Travis, 1990), but when the full history from about 10 Ma forward is considered, it lies in a broader zone of right shear that passes westward in the Mojave Desert to the San Andreas fault (Mojave strike-slip province of Miller and Yount, 2002) and passes eastward to the Nevada state line or beyond (Miller, this volume). We will visit several accessible highlights for newly studied faults, signs of young deformation, and packages of syntectonic sediments. These pieces of a complex active tectonic puzzle have yielded some answers to longstanding questions such as: How is fault slip transfer in this area accommodated between northwest-striking dextral faults and east-striking sinistral faults? How is active deformation on the Ludlow fault transferred northward, presumably to connect to the southern Death Valley fault zone? When were faults in this area of the central Mojave Desert initiated? Are faults in this area more or less active than faults in the ECSZ to the west? What is the role of NNW-striking faults and when did they form? How has fault slip changed over time? Locations and fault names are provided in figure 1-2. Important turns and locations are identified with locations in the projection: UTM, zone 11; datum NAD 83: (578530 3917335).
Neural networks and fault probability evaluation for diagnosis issues.
Kourd, Yahia; Lefebvre, Dimitri; Guersi, Noureddine
2014-01-01
This paper presents a new FDI technique for fault detection and isolation in unknown nonlinear systems. The objective of the research is to construct and analyze residuals by means of artificial intelligence and probabilistic methods. Artificial neural networks are first used for modeling issues. Neural network models are designed for learning the fault-free and the faulty behaviors of the considered systems. Once the residuals are generated, an evaluation using probabilistic criteria is applied to them to determine the most likely fault among a set of candidate faults. The study also includes a comparison between the contributions of these tools and their limitations, particularly through the establishment of quantitative indicators to assess their performance. Through the computation of a confidence factor, the proposed method can evaluate the reliability of the FDI decision. The approach is applied to detect and isolate 19 fault candidates in the DAMADICS benchmark. The results obtained with the proposed scheme are compared with those obtained using a usual thresholding method.
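For illustration, a minimal sketch of probabilistic residual evaluation under assumed Gaussian residual noise: residuals between a measurement and the outputs of candidate behavioral models are scored with a likelihood, and the normalized scores act as a confidence factor for each candidate. The model names, outputs and noise level are hypothetical, not those of the DAMADICS benchmark.

```python
# Gaussian likelihood scoring of residuals against candidate fault models (illustrative).
import numpy as np

def gaussian_likelihood(residual, sigma):
    return np.exp(-0.5 * np.sum((residual / sigma) ** 2))

measurement = np.array([0.95, 1.80, 0.40])
model_outputs = {                         # predicted outputs of each behavioral model
    "fault-free":   np.array([1.00, 2.00, 0.50]),
    "valve stuck":  np.array([0.90, 1.75, 0.42]),
    "sensor drift": np.array([1.30, 2.40, 0.80]),
}
sigma = 0.1                               # assumed residual noise level

scores = {k: gaussian_likelihood(measurement - out, sigma) for k, out in model_outputs.items()}
total = sum(scores.values())
posteriors = {k: v / total for k, v in scores.items()}   # confidence factor per candidate
print(max(posteriors, key=posteriors.get), posteriors)
```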
NASA Astrophysics Data System (ADS)
Hong, H.; Zhu, A. X.
2017-12-01
Climate change is a common phenomenon and it is very serious all over the world. The intensification of rainfall extremes with climate change is of key importance to society, and it may induce large impacts through landslides. This paper presents new GIS-based ensemble data mining techniques, namely weight-of-evidence, logistic model tree, and linear and quadratic discriminant analysis, for landslide spatial modelling. This research was applied in Anfu County, which is a landslide-prone area in Jiangxi Province, China. Based on a literature review and research in the study area, we selected the landslide influencing factors, and their maps were digitized in a GIS environment. These landslide influencing factors are the altitude, plan curvature, profile curvature, slope degree, slope aspect, topographic wetness index (TWI), stream power index (SPI), distance to faults, distance to rivers, distance to roads, soil, lithology, normalized difference vegetation index and land use. According to historical information on individual landslide events, interpretation of the aerial photographs, and field surveys supported by the Jiangxi Meteorological Bureau of China, 367 landslides were identified in the study area. The landslide locations were divided into two subsets, namely training and validating (70/30), based on a random selection scheme. In this research, Pearson's correlation was used for the evaluation of the relationship between the landslides and the influencing factors. In the next step, the three data mining techniques (weight-of-evidence, logistic model tree, and linear and quadratic discriminant analysis) were used for the landslide spatial modelling and its zonation. Finally, the landslide susceptibility maps produced by the mentioned models were evaluated by the ROC curve. The results showed that the area under the curve (AUC) of all of the models was > 0.80. At the same time, the highest AUC value was for the linear and quadratic discriminant model (0.864), followed by the logistic model tree (0.832) and weight-of-evidence (0.819). In general, the landslide maps can be applied for land use planning and management in the Anfu area.
A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Finelli, George B.
1987-01-01
Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10^-9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pin level of and internal to a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level stuck-at fault model is found to provide a good modeling capability.
NASA Astrophysics Data System (ADS)
Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.
2013-12-01
Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults to fluid flow patterns remains a challenge and requires a multidisciplinary research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the discipline of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.
Generating Scenarios When Data Are Missing
NASA Technical Reports Server (NTRS)
Mackey, Ryan
2007-01-01
The Hypothetical Scenario Generator (HSG) is being developed in conjunction with other components of artificial-intelligence systems for automated diagnosis and prognosis of faults in spacecraft, aircraft, and other complex engineering systems. The HSG accepts, as input, possibly incomplete data on the current state of a system (see figure). The HSG models a potential fault scenario as an ordered disjunctive tree of conjunctive consequences, wherein the ordering is based upon the likelihood that a particular conjunctive path will be taken for the given set of inputs. The computation of likelihood is based partly on a numerical ranking of the degree of completeness of data with respect to satisfaction of the antecedent conditions of prognostic rules. The results from the HSG are then used by a model-based artificial-intelligence subsystem to predict realistic scenarios and states.
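An illustrative data structure only (not the HSG code itself): a disjunctive node holds alternative conjunctive consequence paths, ordered by a likelihood that is discounted when the antecedent data a path needs are incomplete. All names and numbers are hypothetical.

```python
# Disjunctive node over conjunctive paths, ranked by likelihood x data completeness.
from dataclasses import dataclass, field

@dataclass
class ConjunctivePath:
    name: str
    antecedents: list            # conditions a prognostic rule needs
    base_likelihood: float

    def completeness(self, available):
        """Fraction of antecedent conditions satisfied by the (possibly incomplete) data."""
        return sum(a in available for a in self.antecedents) / len(self.antecedents)

@dataclass
class DisjunctiveNode:
    paths: list = field(default_factory=list)

    def ranked(self, available):
        return sorted(self.paths,
                      key=lambda p: p.base_likelihood * p.completeness(available),
                      reverse=True)

node = DisjunctiveNode([
    ConjunctivePath("pump cavitation", ["low_inlet_pressure", "vibration_high"], 0.6),
    ConjunctivePath("sensor failure",  ["reading_flatlined"], 0.3),
])
print([p.name for p in node.ranked({"vibration_high"})])
```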
Reliability Practice at NASA Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Pruessner, Paula S.; Li, Ming
2008-01-01
This paper describes in brief the Reliability and Maintainability (R&M) Programs performed directly by the reliability branch at Goddard Space Flight Center (GSFC). The mission assurance requirements flow down is explained. GSFC practices for PRA, reliability prediction/fault tree analysis/reliability block diagram, FMEA, part stress and derating analysis, worst case analysis, trend analysis, limit life items are presented. Lessons learned are summarized and recommendations on improvement are identified.
Online Performance-Improvement Algorithms
1994-08-01
fault rate as the request sequence length approaches infinity. Their algorithms are based on an innovative use of the classical Ziv-Lempel [85] data ... Report CS-TR-348-91. [85] J. Ziv and A. Lempel, "Compression of individual sequences via variable-rate coding," IEEE Trans. Inf. Theory, 24:530-536, 1978. ... Deferred Data Structuring: Recall that our incremental multi-trip algorithm spreads the building of the fence-tree over several trips in order to
ERIC Educational Resources Information Center
Torres, Edgardo E.; And Others
This comprehensive investigation into the reasons behind the crucial problem of the student dropout in foreign language programs focuses on seven interrelated areas. These are: (1) student, (2) teacher, (3) administration, (4) counselor, (5) parent, (6) community, and (7) teacher training. A fault-tree analysis of the dropout problem provides a…
2015-09-01
Turbine Exhaust System Maintenance Strategy for the CG-47 Ticonderoga Class Cruiser. Author: Sparks, Robert D. Keywords: condition-based maintenance, condition-directed, failure finding, fault tree analysis. 133 pages.
Doytchev, Doytchin E; Szwillus, Gerd
2009-11-01
Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand the human behaviour in incident occurrence we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform the tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the use of the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian Hydro power plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the afore-mentioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.
Emery, R J; Charlton, M A; Orders, A B; Hernandez, M
2001-02-01
An enhanced coding system for the characterization of notices of violation (NOV's) issued to radiation permit holders in the State of Texas was developed based on a series of fault tree analyses serving to identify a set of common causes. The coding system enhancement was retroactively applied to a representative sample (n = 185) of NOV's issued to specific licensees of radioactive materials in Texas during calendar year 1999. The results obtained were then compared to the currently available summary NOV information for the same year. In addition to identifying the most common NOV's, the enhanced coding system revealed that approximately 70% of the sampled NOV's were issued for non-compliance with a specific regulation as opposed to a permit condition. Furthermore, an underlying cause of 94% of the NOV's was the failure on the part of the licensee to execute a specific task. The findings suggest that opportunities exist to improve permit holder compliance through various means, including the creation of summaries which detail specific tasks to be completed, and revising training programs with more focus on the identification and scheduling of permit-related requirements. Broad application of these results is cautioned due to the bias associated with the restricted scope of the project.
Fault diagnostic instrumentation design for environmental control and life support systems
NASA Technical Reports Server (NTRS)
Yang, P. Y.; You, K. C.; Wynveen, R. A.; Powell, J. D., Jr.
1979-01-01
As a development phase moves toward flight hardware, system availability becomes an important design aspect which requires high reliability and maintainability. As part of continuous development efforts, a program to evaluate, design, and demonstrate advanced instrumentation fault diagnostics was successfully completed. Fault tolerance designs for reliability and other instrumentation capabilities to increase maintainability were evaluated and studied.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas
2016-02-01
In the current energetic scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmental-friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gases emissions. However, the main limitations to their diffusion into the mass market consist in high maintenance and production costs and short lifetime. To improve these aspects, the current research activity focuses on the development of robust and generalizable diagnostic techniques, aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with consequent lifetime increase and maintenance costs reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and it is validated by means of experimental induction of faulty states in controlled conditions.
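A minimal sketch of fault isolation with a fault signature matrix (FSM): each candidate fault carries a binary signature over the residual symptoms, and the observed symptom pattern is matched against the rows. The symptom names and signatures below are hypothetical, not those of the SOFC system in the paper.

```python
# Fault signature matrix lookup: observed symptom pattern -> isolated fault candidate(s).
fsm = {
    #  (stack voltage low, air flow low, fuel pressure low)
    "air blower degradation": (1, 1, 0),
    "fuel leakage":           (1, 0, 1),
    "reformer degradation":   (1, 0, 0),
}

observed = (1, 1, 0)   # symptom pattern produced by the residual generators

matches = [fault for fault, signature in fsm.items() if signature == observed]
print("isolated fault(s):", matches)
```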
Operational Performance Risk Assessment in Support of A Supervisory Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denning, Richard S.; Muhlheim, Michael David; Cetiner, Sacit M.
A supervisory control system (SCS) is developed for multi-unit advanced small modular reactors to minimize human intervention in both normal and abnormal operations. In the SCS, control action decisions are made based on a probabilistic risk assessment approach via event trees/fault trees. Although traditional PRA tools are implemented, their scope is extended to normal operations and their application is reversed: the success of non-safety-related systems is modeled instead of the failure of safety systems. This extended PRA approach is called operational performance risk assessment (OPRA). OPRA helps to identify success paths, i.e., combinations of control actions for transients, and to quantify these success paths in order to provide possible actions without activating the plant protection system. In this paper, a case study of OPRA in a supervisory control system is demonstrated within the context of the ALMR PRISM design, specifically the power conversion system. The scenario investigated involved a condition in which the feedwater control valve is observed to be drifting to the closed position. Alternative plant configurations were identified via OPRA that would allow the plant to continue to operate at full or reduced power. Dynamic analyses were performed with a thermal-hydraulic model of the ALMR PRISM system using Modelica to evaluate the remaining safety margins. Successful recovery paths for the selected scenario are identified and quantified via the SCS.
Distributed bearing fault diagnosis based on vibration analysis
NASA Astrophysics Data System (ADS)
Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani
2016-01-01
Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
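A hedged sketch of the envelope-spectrum computation commonly used in such bearing diagnostics: the analytic signal's envelope is obtained with a Hilbert transform and its spectrum inspected for fault-related lines. The signal, defect frequency and noise level below are synthetic assumptions, not the paper's measurements.

```python
# Envelope spectrum via Hilbert transform on a synthetic bearing-like signal.
import numpy as np
from scipy.signal import hilbert

fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 3000 * t)                          # structural resonance
impacts = (np.sin(2 * np.pi * 107 * t) > 0.99).astype(float)    # ~107 Hz defect impacts
signal = carrier * (1 + impacts) + 0.05 * np.random.randn(t.size)

envelope = np.abs(hilbert(signal))
env_spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

band = (freqs > 10) & (freqs < 500)                              # look below 500 Hz
print("dominant envelope line near", freqs[band][np.argmax(env_spectrum[band])], "Hz")
```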
Project delay analysis of HRSG
NASA Astrophysics Data System (ADS)
Silvianita; Novega, A. S.; Rosyid, D. M.; Suntoyo
2017-08-01
Completion of an HRSG (Heat Recovery Steam Generator) fabrication project is sometimes not achieved within the targeted time written in the contract. A delay in the fabrication process can cause several disadvantages for the fabricator, including forfeit payments, delay of the HRSG construction process, and ultimately delay of the HRSG trials. In this paper, the authors apply a semi-quantitative analysis to HRSG pressure-part fabrication delay, for a plant configuration of 1 GT (Gas Turbine) + 1 HRSG + 1 STG (Steam Turbine Generator), using the bow-tie analysis method. Bow-tie analysis is a combination of FTA (fault tree analysis) and ETA (event tree analysis) used to develop the risk matrix of the HRSG. The results of the FTA are used as threats for preventive measures, and the results of the ETA are used as the impacts of the fabrication delay.
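An illustrative sketch only of how a bow-tie couples the two sides: a fault tree (AND/OR gates) gives the probability of the top event, and an event tree distributes that probability over consequences. The basic events, gate structure and probabilities below are invented for illustration, not taken from the HRSG study.

```python
# Bow-tie toy example: fault tree for the top event, event tree for the consequences.
p_material_late   = 0.10
p_welder_shortage = 0.05
p_design_change   = 0.08

# Fault tree (hypothetical): delay occurs if design change OR (material late AND welder shortage).
p_top = 1 - (1 - p_design_change) * (1 - p_material_late * p_welder_shortage)

# Event tree (hypothetical): given the delay, does the schedule-recovery barrier work?
p_mitigation_works = 0.7
consequences = {
    "delay recovered":         p_top * p_mitigation_works,
    "forfeit payment applies": p_top * (1 - p_mitigation_works),
}
print(round(p_top, 4), consequences)
```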
2015-02-26
This image from NASA Terra spacecraft shows Prince Patrick Island, which is located in the Canadian Arctic Archipelago, and is the westernmost Elizabeth Island in the Northwest Territories of Canada. The island is underlain by sedimentary rocks, cut by still-active faults. The streams follow a dendritic drainage system: there are many contributing streams (analogous to the twigs of a tree), which are then joined together into the tributaries of the main river (the branches and the trunk of the tree, respectively). They develop where the river channel follows the slope of the terrain. The image covers an area of 22 by 27 km, was acquired July 2, 2011, and is located at 76.9 degrees north, 118.9 degrees west. http://photojournal.jpl.nasa.gov/catalog/PIA19222
NASA Technical Reports Server (NTRS)
Stoltzfus, Joel M. (Editor); Benz, Frank J. (Editor); Stradling, Jack S. (Editor)
1989-01-01
The present volume discusses the ignition of nonmetallic materials by the impact of high-pressure oxygen, the promoted combustion of nine structural metals in high-pressure gaseous oxygen, the oxygen sensitivity/compatibility ranking of several materials by different test methods, the ignition behavior of silicon greases in oxygen atmospheres, fire spread rates along cylindrical metal rods in high-pressure oxygen, and the design of an ignition-resistant, high pressure/temperature oxygen valve. Also discussed are the promoted ignition of oxygen regulators, the ignition of PTFE-lined flexible hoses by rapid pressurization with oxygen, evolving nonswelling elastomers for high-pressure oxygen environments, the evaluation of systems for oxygen service through the use of the quantitative fault-tree analysis, and oxygen-enriched fires during surgery of the head and neck.
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
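The solution step that PAWS/STEM automate can be illustrated with a small example. The sketch below is not part of the SARA distribution: it builds an assumed four-state Markov model of a triplex system that degrades through reconfiguration and computes the probability of reaching the death state at a mission time by exponentiating the generator matrix. All rates, the state structure, and the mission time are illustrative assumptions; the large ratio between the reconfiguration and failure rates is exactly the kind of numerical stiffness the abstract mentions.

```python
# Minimal sketch (not the SARA/PAWS/STEM code): solve a small Markov reliability
# model by exponentiating its generator matrix.  Model and rates are assumed.
import numpy as np
from scipy.linalg import expm

LAMBDA = 1.0e-4   # per-hour processor failure rate (assumed)
DELTA = 3.6e3     # per-hour reconfiguration rate (assumed, ~1 second mean)

# States: 0 = three good processors, 1 = fault detected and awaiting removal,
#         2 = degraded (two good processors), 3 = system failure (death state).
Q = np.zeros((4, 4))
Q[0, 1] = 3 * LAMBDA      # any of the three processors fails
Q[1, 2] = DELTA           # faulty unit reconfigured out in time
Q[1, 3] = 2 * LAMBDA      # near-coincident second failure defeats the voter
Q[2, 3] = 2 * LAMBDA      # a further failure in the degraded configuration
for i in range(4):
    Q[i, i] = -Q[i].sum() # generator matrix: each row sums to zero

T = 10.0                              # mission time in hours (assumed)
p0 = np.array([1.0, 0.0, 0.0, 0.0])   # start with all three processors good
pT = p0 @ expm(Q * T)                 # state probabilities at time T
print(f"P(system failure by T) = {pT[3]:.3e}")
```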
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
NASA Astrophysics Data System (ADS)
Webb, J.; Gardner, T.
2016-12-01
In northwest Tasmania well-preserved mid-Holocene beach ridges with maximum radiocarbon ages of 5.25 ka occur along the coast; inland are a parallel set of lower relief beach ridges of probable MIS 5e age. The latter are cut by northeast-striking faults clearly visible on LIDAR images, with a maximum vertical displacement (evident as difference in topographic elevation) of 3 m. Also distinct on the LIDAR images are large sand boils along the fault lines; they are up to 5 m in diameter and 2-3 m high and mostly occur on the hanging wall close to the fault traces. Without LIDAR it would have been almost impossible to distinguish either the fault scarps or the sand boils. Excavations through the sand boils show that they are massive, with no internal structure, suggesting that they formed in a single event. They are composed of well-sorted, very fine white sand, identical to the sand in the underlying beach ridges. The sand boils overlie a peaty paleosol; this formed in the tea-tree swamp that formerly covered the area, and has been offset along the faults. Radiocarbon dating of the buried organic-rich paleosol gave ages of 14.8-7.2 ka, suggesting that the faulting is latest Pleistocene to early Holocene in age; it occurred prior to deposition of the mid-Holocene beach ridges, which are not offset. The beach ridge sediments are up to 7 m thick and contain an iron-cemented hard pan 1-3 m below the surface. The water table is very shallow and close to the ground surface, so the sands of the beach ridges are mostly saturated. During faulting these sands experienced extensive liquefaction. The resulting sand boils rose to a substantial height of 2-3 m, probably reflecting the elevation of the potentiometric surface within the confined part of the beach ridge sediments below the iron-cemented hard pan. Motion on the faults was predominantly dip slip (shown by an absence of horizontal offset) and probably reverse, which is consistent with the present-day northwest-southeast compressive stress in this area.
NASA Astrophysics Data System (ADS)
Gulen, L.; EMME WP2 Team*
2011-12-01
The Earthquake Model of the Middle East (EMME) Project is a regional project of the GEM (Global Earthquake Model) project (http://www.emme-gem.org/). The EMME project covers Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE projects overlap, and Turkey serves as a bridge connecting the two projects. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years, causing casualties in the millions. The EMME project consists of three main modules: hazard, risk, and socio-economic modules. The EMME project uses the PSHA approach for earthquake hazard, and the existing source models have been revised or modified by the incorporation of newly acquired data. The most distinguishing aspect of the EMME project from previous ones is its dynamic character. This very important characteristic is accomplished by the design of a flexible and scalable database that permits continuous update, refinement, and analysis. An up-to-date earthquake catalog of the Middle East region has been prepared and declustered by the WP1 team. The EMME WP2 team has prepared a digital active fault map of the Middle East region in ArcGIS format. We have constructed a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw≥5.5. The EMME project database includes information on the geometry and rates of movement of faults in a "Fault Section Database", which contains 36 entries for each fault section. The "Fault Section" concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far 6,991 Fault Sections have been defined and 83,402 km of faults are fully parameterized in the Middle East region. A separate "Paleo-Sites Database" includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library, which includes the pdf files of relevant papers, reports and maps, has also been prepared. A logic tree approach is utilized to encompass different interpretations for the areas where there is no consensus. Finally, seismic source zones in the Middle East region have been delineated using all available data. *EMME Project WP2 Team: Levent Gülen, Murat Utkucu, M. Dinçer Köksal, Hilal Yalçin, Yigit Ince, Mine Demircioglu, Shota Adamia, Nino Sadradze, Aleksandre Gvencadze, Arkadi Karakhanyan, Mher Avanesyan, Tahir Mammadli, Gurban Yetirmishli, Arif Axundov, Khaled Hessami, M. Asif Khan, M. Sayab.
Agent Based Fault Tolerance for the Mobile Environment
NASA Astrophysics Data System (ADS)
Park, Taesoon
This paper presents a fault-tolerance scheme based on mobile agents for reliable mobile computing systems. The mobility of the agent makes it well suited to tracing mobile hosts, and the intelligence of the agent makes it efficient at supporting fault-tolerance services. The paper presents two approaches to implementing the mobile-agent-based fault-tolerant service; their performance is evaluated and compared with other fault-tolerant schemes.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2004-01-01
In this paper, an approach for in-flight fault detection and isolation (FDI) of aircraft engine sensors based on a bank of Kalman filters is developed. This approach utilizes multiple Kalman filters, each of which is designed based on a specific fault hypothesis. When the propulsion system experiences a fault, only one Kalman filter with the correct hypothesis is able to maintain the nominal estimation performance. Based on this knowledge, the isolation of faults is achieved. Since the propulsion system may experience component and actuator faults as well, a sensor FDI system must be robust in terms of avoiding misclassifications of any anomalies. The proposed approach utilizes a bank of (m+1) Kalman filters where m is the number of sensors being monitored. One Kalman filter is used for the detection of component and actuator faults while each of the other m filters detects a fault in a specific sensor. With this setup, the overall robustness of the sensor FDI system to anomalies is enhanced. Moreover, numerous component fault events can be accounted for by the FDI system. The sensor FDI system is applied to a commercial aircraft engine simulation, and its performance is evaluated at multiple power settings at a cruise operating point using various fault scenarios.
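The bank-of-filters idea can be illustrated with a toy linear system. The sketch below is not the engine simulation from the paper: the plant matrices, noise levels, and injected bias are assumptions, and only the m sensor-hypothesis filters are implemented (the additional filter for component and actuator faults is omitted). Each filter excludes one sensor, which encodes its fault hypothesis, so the filter whose hypothesis matches the actual fault keeps nominal innovations while the others degrade.

```python
# Hedged sketch of a bank of Kalman filters for sensor fault isolation.
# Plant, noise levels and the injected bias are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.95, 0.10], [0.0, 0.90]])            # assumed discrete-time plant
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # three sensors
Qw, Rv = 1e-4 * np.eye(2), 1e-3 * np.eye(3)

def run_filter(ys, use):
    """Kalman filter using only the sensor rows in `use`; returns mean normalized residual."""
    Cu, Ru = C[use], Rv[np.ix_(use, use)]
    x, P, resid = np.zeros(2), np.eye(2), []
    for y in ys:
        x, P = A @ x, A @ P @ A.T + Qw                       # predict
        S = Cu @ P @ Cu.T + Ru
        K = P @ Cu.T @ np.linalg.inv(S)
        r = y[use] - Cu @ x                                  # innovation
        resid.append(float(r @ np.linalg.solve(S, r)))       # Mahalanobis residual
        x, P = x + K @ r, (np.eye(2) - K @ Cu) @ P           # update
    return float(np.mean(resid))

# Simulate the plant with a bias fault on sensor 1 after step 100.
x, ys = np.zeros(2), []
for k in range(300):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Qw)
    y = C @ x + rng.multivariate_normal(np.zeros(3), Rv)
    if k >= 100:
        y[1] += 0.5                                          # injected sensor bias
    ys.append(y)

m = C.shape[0]
scores = {i: run_filter(ys, [j for j in range(m) if j != i]) for i in range(m)}
print(scores)   # the hypothesis that excludes the faulty sensor yields the smallest score
```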
Dynamic rupture simulations on a fault network in the Corinth Rift
NASA Astrophysics Data System (ADS)
Durand, V.; Hok, S.; Boiselet, A.; Bernard, P.; Scotti, O.
2017-03-01
The Corinth rift (Greece) is made of a complex network of fault segments, typically 10-20 km long, separated by stepovers. Assessing the maximum magnitude possible in this region requires accounting for multisegment rupture. Here we apply numerical models of dynamic rupture to quantify the probability of a multisegment rupture in the rift, based on the knowledge of the fault geometry and on the magnitude of the historical and palaeoearthquakes. We restrict our application to dynamic rupture on the most recent and active fault network of the western rift, located on the southern coast. We first define several models, varying the main physical parameters that control the rupture propagation. We keep the regional stress field and stress drop constant, and we test several fault geometries, several positions of the faults in their seismic cycle, several values of the critical distance (and so several fracture energies) and two different hypocentres (thus testing two directivity hypotheses). We obtain different scenarios in terms of the number of ruptured segments and the final magnitude (from M = 5.8 for a single-segment rupture to M = 6.4 for a whole-network rupture), and find that the main parameter controlling the variability of the scenarios is the fracture energy. We then use a probabilistic approach to quantify the probability of each generated scenario. To do that, we implement a logic tree associating a weight with each model input hypothesis. Combining these weights, we compute the probability of occurrence of each scenario, and show that the multisegment scenarios are very likely (52 per cent), but that the whole network rupture scenario is unlikely (14 per cent).
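A minimal sketch of the logic-tree weighting step is given below: each input hypothesis receives a weight, every combination of hypotheses yields one rupture scenario, and the scenario probability is the product of its branch weights. The branch names, weights, and the toy rule that maps a combination to a rupture extent are invented for illustration and do not reproduce the study's models.

```python
# Hedged sketch of combining logic-tree branch weights into scenario probabilities.
from itertools import product

branches = {                                    # hypothesis: weight (assumed values)
    "geometry":        {"geomA": 0.5, "geomB": 0.5},
    "cycle_position":  {"early": 0.3, "late": 0.7},
    "fracture_energy": {"low": 0.4, "high": 0.6},
    "hypocentre":      {"west": 0.5, "east": 0.5},
}

def rupture_extent(combo):
    """Toy stand-in for a dynamic rupture run: low fracture energy and a late
    cycle position let the rupture jump stepovers (illustrative rule only)."""
    if combo["fracture_energy"] == "low" and combo["cycle_position"] == "late":
        return "whole network" if combo["hypocentre"] == "west" else "multisegment"
    return "single segment"

totals = {}
keys = list(branches)
for values in product(*(branches[k].items() for k in keys)):
    combo = dict(zip(keys, (name for name, _ in values)))
    weight = 1.0
    for _, w in values:
        weight *= w                             # product of branch weights
    outcome = rupture_extent(combo)
    totals[outcome] = totals.get(outcome, 0.0) + weight

print(totals)   # probabilities of single-segment / multisegment / whole-network scenarios
```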
Development of a lane change risk index using vehicle trajectory data.
Park, Hyunjin; Oh, Cheol; Moon, Jaepil; Kim, Seongho
2018-01-01
Surrogate safety measures (SSMs) have been widely used to evaluate crash potential, which is fundamental for the development of effective safety countermeasures. Unlike existing SSMs, which are mainly focused on the evaluation of longitudinal vehicle maneuvering leading to rear-end crashes, this study proposes a new method for estimating crash risk while a subject vehicle changes lanes, referred to as the lane change risk index (LCRI). A novel feature of the proposed methodology is its incorporation of the amount of exposure time to potential crash and the expected crash severity level by applying a fault tree analysis (FTA) to the evaluation framework. Vehicle interactions between a subject vehicle and adjacent vehicles in the starting lane and the target lane are evaluated in terms of crash potential during lane change. Vehicle trajectory data obtained from a traffic stream, photographed using a drone flown over a freeway segment, is used to investigate the applicability of the proposed methodology. This study compares the characteristics of compulsory and discretionary lane changes observed in a work zone section and a general section of a freeway using the LCRI. It is expected that the outcome of this study will be valuable in evaluating the effectiveness of various traffic operations and control strategies in terms of lane change safety. Copyright © 2017 Elsevier Ltd. All rights reserved.
Software dependability in the Tandem GUARDIAN system
NASA Technical Reports Server (NTRS)
Lee, Inhwan; Iyer, Ravishankar K.
1995-01-01
Based on extensive field failure data for Tandem's GUARDIAN operating system this paper discusses evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling, based on the data, shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.
NASA Astrophysics Data System (ADS)
Chen, Chunfeng; Liu, Hua; Fan, Ge
2005-02-01
In this paper we consider the problem of designing a network of optical cross-connects (OXCs) to provide end-to-end lightpath services to label switched routers (LSRs). Like some previous work, we select the number of OXCs as our objective. Compared with previous studies, we take into account the fault-tolerant characteristics of the logical topology. First, we generate a tree from a randomly generated Prüfer sequence. By adding some edges to the tree, we obtain a physical topology consisting of a certain number of OXCs and the fiber links connecting them. Notably, we limit, for the first time, the number of layers of the tree produced by this method. We then design the logical topologies based on these physical topologies. In principle, we select the shortest path, with some consideration of link load balancing and of the limitations imposed by shared-risk link groups (SRLGs). Notably, the routing algorithm processes the nodes in increasing order of node degree. With regard to the wavelength assignment problem, we adopt a commonly used graph-coloring heuristic. Our problem is clearly computationally intractable, especially when the scale of the network is large, so we adopt the tabu search algorithm to find a near-optimal solution. We present numerical results for up to 1000 LSRs and for a wide range of system parameters, such as the traffic and the number of wavelengths supported by each fiber link. The results indicate that it is possible to build large-scale optical networks with rich connectivity in a cost-effective manner, using relatively few but properly dimensioned OXCs.
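The tree-construction step can be sketched as follows: decode a random Prüfer sequence into a spanning tree of the OXC nodes using the standard algorithm, then add a few extra fibre links for redundancy. The node count and the number of added edges are assumptions, and the layer limit and SRLG-aware routing described in the abstract are not implemented here.

```python
# Hedged sketch: random Prüfer sequence -> spanning tree -> extra redundant links.
import heapq
import random

def prufer_to_tree(seq, n):
    """Standard decoding of a Prüfer sequence of length n-2 into tree edges."""
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)     # smallest remaining leaf
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    u, w = (v for v in range(n) if degree[v] == 1)
    edges.append((u, w))                 # connect the last two remaining leaves
    return edges

random.seed(1)
n = 12                                             # number of OXCs (assumed)
seq = [random.randrange(n) for _ in range(n - 2)]  # random Prüfer sequence
edges = set(prufer_to_tree(seq, n))

# Add a few extra fibre links between nodes that are not yet adjacent.
while len(edges) < n + 3:
    a, b = random.sample(range(n), 2)
    if (a, b) not in edges and (b, a) not in edges:
        edges.add((a, b))

print(sorted(edges))   # physical topology: tree edges plus redundant links
```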
Gerlach, T.M.; Doukas, M.P.; McGee, K.A.; Kessler, R.
1998-01-01
We used the closed chamber method to measure soil CO2 efflux over a three-year period at the Horseshoe Lake tree kill (HLTK) - the largest tree kill on Mammoth Mountain in central eastern California. Efflux contour maps show a significant decline in the areas and rates of CO2 emission from 1995 to 1997. The emission rate fell from 350 t/d (metric tons per day) in 1995 to 130 t/d in 1997. The trend suggests a return to background soil CO2 efflux levels by early to mid 1999 and may reflect exhaustion of CO2 in a deep reservoir of accumulated gas and/or mechanical closure or sealing of fault conduits transmitting gas to the surface. However, emissions rose to 220 t/d on 23 September 1997 at the onset of a degassing event that lasted until 5 December 1997. Recent reservoir recharge and/or extension-enhanced gas flow may have caused the degassing event.
The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan
NASA Astrophysics Data System (ADS)
Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.
2011-12-01
Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~ 80 mm/yr in a ~N118E direction. The plate motion is so active that earthquakes are very frequent. In the Taiwan area, disaster-inducing earthquakes often result from active faults. For this reason, understanding the activity and hazard of active faults is an important subject. The active faults in Taiwan are mainly located in the Western Foothills and the eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows 31 active faults on the island of Taiwan, some of which are related to past earthquakes. Many researchers have investigated these active faults and continuously update their data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and field work results and integrate these data into an active fault parameter table for time-dependent earthquake hazard assessment. We gather the seismic profiles or relocated earthquakes for each fault and combine them with the fault trace on land to establish a 3D fault geometry model in a GIS system. We collect fault source scaling studies for Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate the active fault earthquake recurrence interval. For the other parameters, we collect previous studies or historical references to complete our parameter table of active faults in Taiwan. The WG08 study performed a time-dependent earthquake hazard assessment of active faults in California, establishing fault models, deformation models, earthquake rate models, and probability models and then computing rupture probabilities for faults in California. Following these steps, we have preliminarily evaluated the probability of earthquake-related hazards for certain faults in Taiwan. Once the active fault parameter table for Taiwan is complete, we will apply it to time-dependent earthquake hazard assessment. The results can also give engineers a reference for design and can be applied in seismic hazard maps to mitigate disasters.
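A minimal sketch of this kind of parameter derivation is shown below: the characteristic magnitude comes from the seismic moment of an assumed rupture area and slip (Hanks-Kanamori moment magnitude), and the recurrence interval is the characteristic slip divided by the long-term slip rate. The example fault dimensions, slip, and slip rate are illustrative, not entries from the Taiwan fault database.

```python
# Hedged sketch: characteristic magnitude and recurrence interval from fault geometry.
import math

MU = 3.0e10          # crustal rigidity, Pa (common assumption)

def characteristic_magnitude(length_km, width_km, slip_m):
    """Moment magnitude from rupture area and average slip:
    Mw = 2/3 * log10(M0) - 6.07, with M0 in N·m (Hanks & Kanamori)."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    m0 = MU * area_m2 * slip_m
    return (2.0 / 3.0) * math.log10(m0) - 6.07

def recurrence_interval_yr(slip_m, slip_rate_mm_yr):
    """Characteristic-earthquake recurrence: time to accumulate one event's slip."""
    return slip_m * 1e3 / slip_rate_mm_yr

length, width, slip, rate = 40.0, 15.0, 1.5, 10.0   # km, km, m, mm/yr (assumed)
print(f"Mw ~ {characteristic_magnitude(length, width, slip):.2f}")
print(f"T  ~ {recurrence_interval_yr(slip, rate):.0f} yr")
```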
A Case Study of a Combat Helicopter’s Single Unit Vulnerability.
1987-03-01
[Extract; listed figures: 2.6 Generic Fault Tree Diagram, 2.7 Example Kill Diagram, 2.8 Example EEA Summary] ... a susceptibility program is subdivided into three major tasks. First is an essential elements analysis (EEA) ... which leads to the final undesired event in much the same manner as a FTA. An example EEA is provided in Figure 2.8. [Ref. 1: p. 226]
Probabilistic Risk Assessment: A Bibliography
NASA Technical Reports Server (NTRS)
2000-01-01
Probabilistic risk analysis is an integration of failure modes and effects analysis (FMEA), fault tree analysis, and other techniques to assess the potential for failure and to find ways to reduce risk. This bibliography references 160 documents in the NASA STI Database that contain the major concepts, probabilistic risk assessment and risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
Gaining Insight Into Femtosecond-scale CMOS Effects using FPGAs
2015-03-24
... paths or detecting gross path delay faults, but for characterizing subtle aging effects, there is a need to isolate very short paths and detect very... data using COTS FPGAs and novel self-test. Hardware experiments using a 28 nm FPGA demonstrate isolation of small sets of transistors, detection of... hold the static configuration data specifying the LUT function. A set of inverters drives the SRAM contents into a pass-gate multiplexor tree; we
NASA Astrophysics Data System (ADS)
Hirono, Tetsuro; Asayama, Satoru; Kaneki, Shunya; Ito, Akihiro
2016-11-01
The criteria for designating an “Active Fault” not only are important for understanding regional tectonics, but also are a paramount issue for assessing the earthquake risk of faults that are near important structures such as nuclear power plants. Here we propose a proxy, based on the preservation of amorphous ultrafine particles, to assess fault activity within the last millennium. X-ray diffraction data and electron microscope observations of samples from an active fault demonstrated the preservation of large amounts of amorphous ultrafine particles in two slip zones that last ruptured in 1596 and 1999, respectively. A chemical kinetic evaluation of the dissolution process indicated that such particles could survive for centuries, which is consistent with the observations. Thus, preservation of amorphous ultrafine particles in a fault may be valuable for assessing the fault’s latest activity, aiding efforts to evaluate faults that may damage critical facilities in tectonically active zones.
Availability Performance Analysis of Thermal Power Plants
NASA Astrophysics Data System (ADS)
Bhangu, Navneet Singh; Singh, Rupinder; Pahuja, G. L.
2018-03-01
This case study presents an availability evaluation method for thermal power plants for conducting performance analysis in the Indian environment. A generic availability model is proposed for a maintained system (thermal plants) using reliability block diagrams and fault tree analysis. The availability indices are evaluated under a realistic working environment using the inclusion-exclusion principle. A four-year failure database is used to compute availability for different combinations of plant capacity states, that is, full working state, reduced capacity, or failure state. Availability is found to be very low even at full rated capacity (440 MW), which is not acceptable, especially in the prevailing energy scenario. One probable reason for this is the difference in the age and health of existing thermal power plants, which requires special attention to each unit on a case-by-case basis. The maintenance techniques in use are conventional (50 years old) and inadequate in the context of modern equipment, which further aggravates the problem of low availability. This study outlines a procedure for finding critical plants, units, and subsystems and helps in planning a preventive maintenance program.
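A minimal sketch of the inclusion-exclusion availability calculation is shown below, assuming a simple series block diagram with two redundant feed-water pumps; the component availabilities and the structure are illustrative, not the plant data used in the study.

```python
# Hedged sketch: system availability from minimal path sets via inclusion-exclusion.
from itertools import combinations

avail = {"boiler": 0.92, "turbine": 0.95, "gen": 0.97, "fw_pump_A": 0.90, "fw_pump_B": 0.90}

# Minimal path sets of an assumed series system with two redundant feed-water pumps.
path_sets = [
    {"boiler", "turbine", "gen", "fw_pump_A"},
    {"boiler", "turbine", "gen", "fw_pump_B"},
]

def prob_all_up(components):
    p = 1.0
    for c in components:
        p *= avail[c]           # components assumed independent
    return p

def system_availability(path_sets):
    """P(at least one path set fully up) by inclusion-exclusion."""
    a = 0.0
    for k in range(1, len(path_sets) + 1):
        for subset in combinations(path_sets, k):
            union = set().union(*subset)   # intersection of path-set events = union of components
            a += (-1) ** (k + 1) * prob_all_up(union)
    return a

print(f"system availability = {system_availability(path_sets):.4f}")
```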
Reliability Analysis of RSG-GAS Primary Cooling System to Support Aging Management Program
NASA Astrophysics Data System (ADS)
Deswandri; Subekti, M.; Sunaryo, Geni Rina
2018-02-01
The Multipurpose Research Reactor G.A. Siwabessy (RSG-GAS), which has been operating since 1987, is one of the main facilities supporting research, development, and application of nuclear energy programs in BATAN. Until now, the RSG-GAS research reactor has been operated safely and securely. However, because it has been operating for nearly 30 years, the structures, systems, and components (SSCs) of the reactor have started to age. Aging causes a decrease in the reliability and safety performance of the reactor, so an aging management program is needed to address these issues. One element of the aging management program is to evaluate the safety and reliability of the system and to screen the critical components to be managed. One method that can be used for this purpose is Fault Tree Analysis (FTA). In this paper the FTA method is used to screen the critical components of the RSG-GAS Primary Cooling System. The evaluation results show that the primary isolation valves are the dominant basic events contributing to system failure.
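The screening use of FTA can be sketched as follows: given minimal cut sets and basic-event probabilities, the top-event probability is approximated by the rare-event sum and components are ranked by Fussell-Vesely importance. The cut sets and probabilities below are invented for illustration and are not the RSG-GAS model.

```python
# Hedged sketch: top-event probability and Fussell-Vesely importance from cut sets.
cut_sets = [
    {"iso_valve_A"},                      # single-event cut set
    {"iso_valve_B"},
    {"pump_1", "pump_2"},                 # redundant pumps must both fail
    {"heat_exchanger", "pump_1"},
]
p = {"iso_valve_A": 2e-3, "iso_valve_B": 2e-3,
     "pump_1": 5e-2, "pump_2": 5e-2, "heat_exchanger": 1e-3}

def cut_set_prob(cs):
    q = 1.0
    for e in cs:
        q *= p[e]                          # basic events assumed independent
    return q

top = sum(cut_set_prob(cs) for cs in cut_sets)   # rare-event approximation

def fussell_vesely(event):
    """Fraction of the top-event probability carried by cut sets containing `event`."""
    return sum(cut_set_prob(cs) for cs in cut_sets if event in cs) / top

print(f"top-event probability ~ {top:.2e}")
for e in sorted(p, key=fussell_vesely, reverse=True):
    print(f"{e:15s} FV = {fussell_vesely(e):.3f}")
```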
Computer-Aided Reliability Estimation
NASA Technical Reports Server (NTRS)
Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.
1986-01-01
CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate the reliability of complex, redundant, fault-tolerant systems. The program is specifically designed for evaluation of fault-tolerant avionics systems; however, CARE III is general enough for use in evaluating other systems as well.
An evaluation of a real-time fault diagnosis expert system for aircraft applications
NASA Technical Reports Server (NTRS)
Schutte, Paul C.; Abbott, Kathy H.; Palmer, Michael T.; Ricks, Wendell R.
1987-01-01
A fault monitoring and diagnosis expert system called Faultfinder was conceived and developed to detect and diagnose in-flight failures in an aircraft. Faultfinder is an automated intelligent aid whose purpose is to assist the flight crew in fault monitoring, fault diagnosis, and recovery planning. The present implementation of this concept performs monitoring and diagnosis for a generic aircraft's propulsion and hydraulic subsystems. This implementation is capable of detecting and diagnosing failures of known and unknown (i.e., unforeseeable) type in a real-time environment. Faultfinder uses both rule-based and model-based reasoning strategies which operate on causal, temporal, and qualitative information. A preliminary evaluation is made of the diagnostic concepts implemented in Faultfinder. The evaluation used actual aircraft accident and incident cases which were simulated to assess the effectiveness of Faultfinder in detecting and diagnosing failures. Results of this evaluation, together with the description of the current Faultfinder implementation, are presented.
Analyses of rear-end crashes based on classification tree models.
Yan, Xuedong; Radwan, Essam
2006-09-01
Signalized intersections are accident-prone areas, especially for rear-end crashes, because the diversity of drivers' braking behaviors increases during the signal change. The objective of this article is to improve knowledge of the relationship between rear-end crashes occurring at signalized intersections and a series of potential traffic risk factors classified by driver characteristics, environments, and vehicle types. Based on the 2001 Florida crash database, the classification tree method and the quasi-induced exposure concept were used to perform the statistical analysis. Two binary classification tree models were developed in this study. One was used for the crash comparison between rear-end and non-rear-end crashes to identify specific trends of the rear-end crashes. The other was constructed for the comparison between striking vehicles/drivers (at-fault) and struck vehicles/drivers (not-at-fault) to find more complex crash patterns associated with the traffic attributes of driver, vehicle, and environment. The modeling results showed that rear-end crashes are over-represented at higher speed limits (45-55 mph); the rear-end crash propensity for daytime is apparently larger than for nighttime; and the reduction of braking capacity due to wet and slippery road surface conditions would definitely contribute to rear-end crashes, especially at intersections with higher speed limits. The tree model segmented drivers into four homogeneous age groups: < 21 years, 21-31 years, 32-75 years, and > 75 years. The youngest driver group shows the largest crash propensity; in the 21-31 age group, the male drivers are over-involved in rear-end crashes under adverse weather conditions, and drivers aged 32-75 years driving large-size vehicles have a larger crash propensity compared to those driving passenger vehicles. Combined with the quasi-induced exposure concept, the classification tree method is a proper statistical tool for traffic-safety analysis to investigate crash propensity. Compared to logistic regression models, tree models have advantages in handling continuous independent variables and in easily explaining complex interaction effects involving more than two independent variables. This research recommended that at signalized intersections with higher speed limits, reducing the speed limit to 40 mph would effectively contribute to a lower accident rate. Alcohol use by drivers may increase not only rear-end crash risk but also driver injury severity. Education and enforcement countermeasures should focus on the driver group younger than 21 years. Further studies are suggested to compare crash risk distributions by driver age for other main crash types to seek corresponding traffic countermeasures.
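A minimal sketch of a classification-tree analysis in this spirit is shown below; the synthetic records only mimic the kinds of variables used (driver age, speed limit, surface condition, light condition), so neither the data nor the fitted splits correspond to the Florida crash database results.

```python
# Hedged sketch: fit and print a small binary classification tree for at-fault vs
# not-at-fault drivers on synthetic records (illustrative data, not the study's).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 2000
age   = rng.integers(16, 90, n)
speed = rng.choice([30, 35, 40, 45, 50, 55], n)       # posted speed limit, mph
wet   = rng.integers(0, 2, n)                          # 1 = wet/slippery surface
night = rng.integers(0, 2, n)                          # 1 = nighttime

# Assumed propensity: young drivers, high speed limits and wet surfaces raise
# the chance of being the striking (at-fault) driver.
logit = -0.5 + 0.04 * (30 - np.minimum(age, 30)) + 0.03 * (speed - 40) + 0.5 * wet - 0.2 * night
at_fault = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, speed, wet, night])
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, at_fault)
print(export_text(tree, feature_names=["age", "speed_limit", "wet", "night"]))
```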
Strong ground motions generated by earthquakes on creeping faults
Harris, Ruth A.; Abrahamson, Norman A.
2014-01-01
A tenet of earthquake science is that faults are locked in position until they abruptly slip during the sudden strain-relieving events that are earthquakes. Whereas it is expected that locked faults when they finally do slip will produce noticeable ground shaking, what is uncertain is how the ground shakes during earthquakes on creeping faults. Creeping faults are rare throughout much of the Earth's continental crust, but there is a group of them in the San Andreas fault system. Here we evaluate the strongest ground motions from the largest well-recorded earthquakes on creeping faults. We find that the peak ground motions generated by the creeping fault earthquakes are similar to the peak ground motions generated by earthquakes on locked faults. Our findings imply that buildings near creeping faults need to be designed to withstand the same level of shaking as those constructed near locked faults.
Fault latency in the memory - An experimental study on VAX 11/780
NASA Technical Reports Server (NTRS)
Chillarege, Ram; Iyer, Ravishankar K.
1986-01-01
Fault latency is the time between the physical occurrence of a fault and its corruption of data, causing an error. The measure of this time is difficult to obtain because the time of occurrence of a fault and the exact moment of generation of an error are not known. This paper describes an experiment to accurately study the fault latency in the memory subsystem. The experiment employs real memory data from a VAX 11/780 at the University of Illinois. Fault latency distributions are generated for s-a-0 and s-a-1 permanent fault models. Results show that the mean fault latency of a s-a-0 fault is nearly 5 times that of the s-a-1 fault. Large variations in fault latency are found for different regions in memory. An analysis of a variance model to quantify the relative influence of various workload measures on the evaluated latency is also given.
NASA Astrophysics Data System (ADS)
Deng, Feiyue; Yang, Shaopu; Tang, Guiji; Hao, Rujiang; Zhang, Mingliang
2017-04-01
Wheel bearings are essential mechanical components of trains, and fault detection in wheel bearings is of great significance for effectively avoiding economic loss and casualties. However, under operating conditions, detecting and extracting the fault features hidden in the heavy noise of the vibration signal is a challenging task. Therefore, a novel method called the adaptive multi-scale AVG-Hat morphology filter (MF) is proposed to address it. The morphological AVG-Hat operator not only greatly suppresses the interference of strong background noise but also enhances the ability to extract fault features. The improved envelope spectrum sparsity (IESS) is proposed as a new evaluation index to select the optimal filtered signal processed by the multi-scale AVG-Hat MF; it provides a comprehensive evaluation of the intensity of the fault impulses relative to the background noise. The weighting coefficients of the different-scale structural elements (SEs) in the multi-scale MF are adaptively determined by the particle swarm optimization (PSO) algorithm. The effectiveness of the method is validated by analyzing real wheel bearing fault vibration signals (e.g. outer race fault, inner race fault and rolling element fault). The results show that the proposed method improves fault feature extraction compared with the multi-scale combined morphological filter (CMF) and multi-scale morphology gradient filter (MGF) methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Jie; Hou, Zhangshuan; Fang, Yilin
2015-06-01
A series of numerical test cases reflecting broad and realistic ranges of geological formation and preexisting fault properties was developed to systematically evaluate the impacts of preexisting faults on pressure buildup and ground surface uplift during CO₂ injection. Numerical test cases were conducted using a coupled hydro-geomechanical simulator, eSTOMP (extreme-scale Subsurface Transport over Multiple Phases). For efficient sensitivity analysis and reliable construction of a reduced-order model, a quasi-Monte Carlo sampling method was applied to effectively sample a high-dimensional input parameter space to explore uncertainties associated with hydrologic, geologic, and geomechanical properties. The uncertainty quantification results show that the impacts on geomechanical response from the pre-existing faults mainly depend on reservoir and fault permeability. When the fault permeability is two to three orders of magnitude smaller than the reservoir permeability, the fault can be considered as an impermeable block that resists fluid transport in the reservoir, which causes pressure increase near the fault. When the fault permeability is close to the reservoir permeability, or higher than 10⁻¹⁵ m² in this study, the fault can be considered as a conduit that penetrates the caprock, connecting the fluid flow between the reservoir and the upper rock.
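The quasi-Monte Carlo sampling step can be sketched as follows, using a scrambled Sobol design over a few hydrologic and geomechanical parameters with the permeabilities sampled in log space; the parameter names and ranges are illustrative assumptions rather than the ranges used with eSTOMP.

```python
# Hedged sketch: Sobol quasi-Monte Carlo design over an assumed parameter space.
import numpy as np
from scipy.stats import qmc

params = {                               # (low, high) bounds, assumed for illustration
    "log10_reservoir_perm_m2": (-14.0, -12.0),
    "log10_fault_perm_m2":     (-17.0, -13.0),
    "young_modulus_GPa":       (5.0, 50.0),
    "injection_rate_kg_s":     (10.0, 100.0),
}

sampler = qmc.Sobol(d=len(params), scramble=True, seed=7)
unit = sampler.random_base2(m=6)                     # 2**6 = 64 samples in [0, 1)^d
lows  = np.array([b[0] for b in params.values()])
highs = np.array([b[1] for b in params.values()])
design = qmc.scale(unit, lows, highs)                # scale to physical ranges

for name, column in zip(params, design.T):
    print(name, np.round(column[:3], 3))             # first few sampled values per parameter
```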
NASA Astrophysics Data System (ADS)
Molnár, László; Vásárhelyi, Balázs; Tóth, Tivadar M.; Schubert, Félix
2015-01-01
The integrated evaluation of borecores from the Mezősas-Furta fractured metamorphic hydrocarbon reservoir suggests significantly distinct microstructural and rock mechanical features within the analysed fault rock samples. The statistical evaluation of the clast geometries revealed the dominantly cataclastic nature of the samples. The damage zone of the fault is characterised by an extremely brittle nature and low uniaxial compressive strength, coupled with a predominantly coarse fault breccia composition. In contrast, the microstructural signature of increasing deformation, coupled with higher uniaxial compressive strength, a strain-hardening nature and low brittleness, indicates a transitional interval between the weakly fragmented damage zone and the strongly ground fault core. Moreover, these attributes suggest this unit is mechanically the strongest part of the fault zone. Gouge-rich cataclasites mark the core zone of the fault, with their widespread plastic nature and locally pseudo-ductile microstructure. Strain localization tends to be strongly linked with the existence of fault gouge ribbons. The fault zone, with ~15 m total thickness, can be defined as a significant migration pathway inside the fractured crystalline reservoir. Moreover, as a consequence of the distributed nature of the fault core, it may have a key role in compartmentalisation of the local hydraulic system.
Fault detection and accommodation testing on an F100 engine in an F-15 airplane
NASA Technical Reports Server (NTRS)
Myers, L. P.; Baer-Riedhart, J. L.; Maxwell, M. D.
1985-01-01
The fault detection and accommodation (FDA) methodology for digital engine-control systems may range from simple comparisons of redundant parameters to the more complex and sophisticated observer models of the entire engine system. Evaluations of the various FDA schemes are done using analytical methods, simulation, and limited-altitude-facility testing. Flight testing of the FDA logic has been minimal because of the difficulty of inducing realistic faults in flight. A flight program was conducted to evaluate the fault detection and accommodation capability of a digital electronic engine control in an F-15 aircraft. The objective of the flight program was to induce selected faults and evaluate the resulting actions of the digital engine controller. Comparisons were made between the flight results and predictions. Several anomalies were found in flight and during the ground test. Simulation results showed that the inducement of dual pressure failures was not feasible since the FDA logic was not designed to accommodate these types of failures.
Plafker, George
1967-01-01
Two reverse faults on southwestern Montague Island in Prince William Sound were reactivated during the earthquake of March 27, 1964. New fault scarps, fissures, cracks, and flexures appeared in bedrock and unconsolidated surficial deposits along or near the fault traces. Average strike of the faults is between N. 37° E. and N. 47° E.; they dip northwest at angles ranging from 50° to 85°. The dominant motion was dip slip; the blocks northwest of the reactivated faults were relatively upthrown, and both blocks were upthrown relative to sea level. No other earthquake faults have been found on land. The Patton Bay fault on land is a complex system of en echelon strands marked by a series of spectacular landslides along the scarp and (or) by a zone of fissures and flexures on the upthrown block that locally is as much as 3,000 feet wide. The fault can be traced on land for 22 miles, and it has been mapped on the sea floor to the southwest of Montague Island an additional 17 miles. The maximum measured vertical component of slip is 20 to 23 feet and the maximum indicated dip slip is about 26 feet. A left-lateral strike-slip component of less than 2 feet occurs near the southern end of the fault on land where its strike changes from northeast to north. Indirect evidence from the seismic sea waves and aftershocks associated with the earthquake, and from the distribution of submarine scarps, suggests that the faulting on and near Montague Island occurred at the northeastern end of a reactivated submarine fault system that may extend discontinuously for more than 300 miles from Montague Island to the area offshore of the southeast coast of Kodiak Island. The Hanning Bay fault is a minor rupture only 4 miles long that is marked by an exceptionally well defined almost continuous scarp. The maximum measured vertical component of slip is 16⅓ feet near the midpoint, and the indicated dip slip is about 20 feet. There is a maximum left-lateral strike-slip component of one-half foot near the southern end of the scarp. Warping and extension cracking occurred in bedrock near the midpoint on the upthrown block within about 1,000 feet of the fault scarp. The reverse faults on Montague Island and their postulated submarine extensions lie within a tectonically important narrow zone of crustal attenuation and maximum uplift associated with the earthquake. However, there are no significant lithologic differences in the rock sequences across these faults to suggest that they form major tectonic boundaries. Their spatial distribution relative to the regional uplift associated with the earthquake, the earthquake focal region, and the epicenter of the main shock suggest that they are probably subsidiary features rather than the causative faults along which the earthquake originated. Approximately 70 percent of the new breakage along the Patton Bay and the Hanning Bay faults on Montague Island was along obvious preexisting active fault traces. The estimated ages of undisturbed trees on and near the fault trace indicate that no major displacement had occurred on these faults for at least 150 to 300 years before the 1964 earthquake.
NASA Astrophysics Data System (ADS)
Takao, M.; Mizutani, H.
2009-05-01
At about 10:13 on July 16, 2007, a strong earthquake named the 'Niigata-ken Chuetsu-oki Earthquake', of Mj6.8 on the Japan Meteorological Agency's scale, occurred offshore Niigata prefecture in Japan. However, all of the nuclear reactors at Kashiwazaki-Kariwa Nuclear Power Station (KKNPS) in Niigata prefecture, operated by Tokyo Electric Power Company, shut down safely. In other words, the automatic safety functions composed of shutdown, cooling and containment worked as designed immediately after the earthquake. During the earthquake, the peak acceleration of the ground motion exceeded the design-basis ground motion (DBGM), but the force due to the earthquake applied to safety-significant facilities was about the same as or less than the design basis taken into account as static seismic force. In order to reassess the safety of the nuclear power plants, we have evaluated a new DBGM after conducting geomorphological, geological, geophysical and seismological surveys and analyses. [Geomorphological, Geological and Geophysical survey] In the land area, aerial photograph interpretation was performed within at least a 30 km radius to extract landforms that could possibly be tectonic reliefs as a geomorphological survey. After that, geological reconnaissance was conducted to confirm whether the extracted landforms are tectonic reliefs or not. In particular, we carefully investigated the Nagaoka Plain Western Boundary Fault Zone (NPWBFZ), which consists of the Kakuda-Yahiko fault, Kihinomiya fault and Katakai fault, because NPWBFZ is one of the active faults with the potential to generate Mj8-class earthquakes in Japan. In addition to the geological survey, seismic reflection prospecting of approximately 120 km in total length was completed to evaluate the geological structure of the faults and to assess the continuity of the component faults of NPWBFZ. As a result of the geomorphological, geological and geophysical surveys, we evaluated the three component faults of NPWBFZ as independent of each other from the viewpoint of geological structure; however, we decided to take into consideration in seismic design, as a case of uncertainty, simultaneous movement of the three faults over a total length of 91 km. In the sea area, we conducted seismic reflection prospecting with sonic waves in an area stretching about 140 km along the coastline and 50 km perpendicular to the coastline. When analyzing the seismic profiles, we evaluated the activity of faults and folds carefully on the basis of the concept of fault-related folding, because the sedimentary layers off Niigata prefecture are very thick and the geological structures are characterized by folding. As a result of the seismic reflection survey and analyses, we assessed that five active faults (folds) should be taken into consideration for seismic design in the sea area, and we evaluated that the F-B fault, 36 km long, will have the largest impact on the KKNPS. [Seismological survey] As a result of analyses of the geological survey, data from NCOE and data from the 2004 Chuetsu Earthquake, it became clear that there are factors that intensify seismic motions in this area. For each of the two selected earthquake sources, namely NPWBFZ and the F-B fault, we calculated seismic ground motions on the free surface of the base stratum as the design-basis ground motion (DBGM) Ss, using both empirical and numerical ground motion evaluation methods.
The PGA value of the DBGM is 2,300 Gal for units 1 to 4, located in the southern part of the KKNPS, and 1,050 Gal for units 5 to 7, in the northern part of the site.
NASA Astrophysics Data System (ADS)
Gvillo, D.; Ragheb, M.; Parker, M.; Swartz, S.
1987-05-01
A Production-Rule Analysis System is developed for Nuclear Plant Monitoring. The signals generated by the Zion-1 Plant are considered. A Situation-Assessment and Decision-Aid capability is provided for monitoring the integrity of the Plant Radiation, the Reactor Coolant, the Fuel Clad, and the Containment Systems. A total of 41 signals are currently fed as facts to an Inference Engine functioning in the backward-chaining mode and built along the same structure as the E-Mycin system. The Goal-Tree constituting the Knowledge Base was generated using a representation in the form of Fault Trees deduced from plant procedures information. The system is constructed in support of the Data Analysis and Emergency Preparedness tasks at the Illinois Radiological Emergency Assessment Center (REAC).
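A minimal sketch of backward-chaining over a goal tree of this kind is shown below; the rules and signal names are invented for illustration and are not the Zion-1 knowledge base or the E-Mycin implementation.

```python
# Hedged sketch: backward-chaining evaluation of a small goal tree of production rules.
RULES = [  # (goal, [conditions]); conditions are sub-goals or plant signals (facts)
    ("radiation_barrier_challenged", ["containment_rad_high", "fuel_clad_degraded"]),
    ("fuel_clad_degraded",           ["coolant_activity_high"]),
    ("fuel_clad_degraded",           ["clad_temp_high", "coolant_flow_low"]),
    ("containment_integrity_threat", ["containment_pressure_high"]),
]

FACTS = {"coolant_activity_high", "containment_rad_high"}   # signals currently asserted

def prove(goal, facts, depth=0):
    """Backward chaining: a goal holds if it is an asserted fact, or if some rule
    that concludes it has every condition provable in turn."""
    print("  " * depth + f"goal: {goal}")
    if goal in facts:
        return True
    for head, body in RULES:
        if head == goal and all(prove(cond, facts, depth + 1) for cond in body):
            return True
    return False

print(prove("radiation_barrier_challenged", FACTS))   # True via coolant_activity_high
```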
Probabilistic seismic hazard analysis for a nuclear power plant site in southeast Brazil
NASA Astrophysics Data System (ADS)
de Almeida, Andréia Abreu Diniz; Assumpção, Marcelo; Bommer, Julian J.; Drouet, Stéphane; Riccomini, Claudio; Prates, Carlos L. M.
2018-05-01
A site-specific probabilistic seismic hazard analysis (PSHA) has been performed for the only nuclear power plant site in Brazil, located 130 km southwest of Rio de Janeiro at Angra dos Reis. Logic trees were developed for both the seismic source characterisation and ground-motion characterisation models, in both cases seeking to capture the appreciable ranges of epistemic uncertainty with relatively few branches. This logic-tree structure allowed the hazard calculations to be performed efficiently while obtaining results that reflect the inevitable uncertainty in long-term seismic hazard assessment in this tectonically stable region. An innovative feature of the study is an additional seismic source zone added to capture the potential contributions of characteristic earthquakes associated with geological faults in the region surrounding the coastal site.
Geophysical expression of the Ghost Dance fault, Yucca Mountain, Nevada
Ponce, D.A.; Langenheim, V.E.; ,
1995-01-01
Gravity and ground magnetic data collected along surveyed traverses across Antler and Live Yucca Ridges, on the eastern flank of Yucca Mountain, Nevada, reveal small-scale faulting associated with the Ghost Dance and possibly other faults. These studies are part of an effort to evaluate faulting in the vicinity of a potential nuclear waste repository at Yucca Mountain.
Geophysical expression of the Ghost Dance Fault, Yucca Mountain, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponce, D.A.; Langenheim, V.E.
1995-12-01
Gravity and ground magnetic data collected along surveyed traverses across Antler and Live Yucca Ridges, on the eastern flank of Yucca Mountain, Nevada, reveal small-scale faulting associated with the Ghost Dance and possibly other faults. These studies are part of an effort to evaluate faulting in the vicinity of a potential nuclear waste repository at Yucca Mountain.
NASA Astrophysics Data System (ADS)
Newman, W. I.; Turcotte, D. L.
2002-12-01
We have studied a hybrid model combining the forest-fire model with the site-percolation model in order to better understand the earthquake cycle. We consider a square array of sites. At each time step, a "tree" is dropped on a randomly chosen site and is planted if the site is unoccupied. When a cluster of "trees" spans the array (a percolating cluster), all the trees in the cluster are removed ("burned") in a "fire." The removal of the cluster is analogous to a characteristic earthquake and planting "trees" is analogous to increasing the regional stress. The clusters are analogous to the metastable regions of a fault over which an earthquake rupture can propagate once triggered. We find that the frequency-area statistics of the metastable regions are power-law with a negative exponent of two (as in the forest-fire model). This is analogous to the Gutenberg-Richter distribution of seismicity. This "self-organized critical behavior" can be explained in terms of an inverse cascade of clusters. Individual trees move from small to larger clusters until they are destroyed. This inverse cascade of clusters is self-similar and the power-law distribution of cluster sizes has been shown to have an exponent of two. We have quantified the forecasting of the spanning fires using error diagrams. The assumption that "fires" (earthquakes) are quasi-periodic has moderate predictability. The density of trees gives an improved degree of predictability, while the size of the largest cluster of trees provides a substantial improvement in forecasting a "fire."
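A minimal sketch of the hybrid model is given below: trees are planted on random sites of a square array, and whenever the cluster containing the newly planted tree spans the array it is removed and its size recorded. The grid size, run length, and flood-fill bookkeeping are implementation choices for illustration, not the authors' code.

```python
# Hedged sketch: forest-fire / site-percolation hybrid with spanning-cluster "fires".
import random
from collections import deque

L, STEPS = 32, 50_000
random.seed(3)
grid = [[False] * L for _ in range(L)]
fire_sizes = []

def spanning_cluster(i, j):
    """Flood-fill the occupied cluster containing (i, j); return it if it touches
    both the left and right edges of the array, else None."""
    seen, queue, left, right = {(i, j)}, deque([(i, j)]), False, False
    while queue:
        r, c = queue.popleft()
        left, right = left or c == 0, right or c == L - 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < L and 0 <= nc < L and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen if (left and right) else None

for _ in range(STEPS):
    i, j = random.randrange(L), random.randrange(L)
    if grid[i][j]:
        continue                      # site already occupied; the "tree" is discarded
    grid[i][j] = True
    cluster = spanning_cluster(i, j)
    if cluster:                       # percolating cluster: the "characteristic earthquake"
        fire_sizes.append(len(cluster))
        for r, c in cluster:
            grid[r][c] = False
    # a spanning cluster can only appear through the newly planted tree,
    # so checking that one cluster per step is sufficient

print(f"{len(fire_sizes)} fires; mean size {sum(fire_sizes) / max(1, len(fire_sizes)):.0f}")
```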
Implanted component faults and their effects on gas turbine engine performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacLeod, J.D.; Taylor, V.; Laflamme, J.C.G.
Under the sponsorship of the Canadian Department of National Defence, the Engine Laboratory of the National Research Council of Canada (NRCC) has established a program for the evaluation of component deterioration on gas turbine engine performance. The effort is aimed at investigating the effects of typical in-service faults on the performance characteristics of each individual engine component. The objective of the program is the development of a generalized fault library, which will be used with fault identification techniques in the field, to reduce unscheduled maintenance. To evaluate the effects of implanted faults on the performance of a single spool engine, such as an Allison T56 turboprop engine, a series of faulted parts was installed. For this paper the following faults were analyzed: (a) first-stage turbine nozzle erosion damage; (b) first-stage turbine rotor blade untwist; (c) compressor seal wear; (d) first and second-stage compressor blade tip clearance increase. This paper describes the project objectives, the experimental installation, and the results of the fault implantation on engine performance. Discussed are performance variations on both engine and component characteristics. As the performance changes were significant, a rigorous measurement uncertainty analysis is included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) Learn how to perform failure modes and effects analysis for a given process; (3) Learn what fault trees are all about; (4) Learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palta, J.
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are all about; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
Health Management Applications for International Space Station
NASA Technical Reports Server (NTRS)
Alena, Richard; Duncavage, Dan
2005-01-01
Traditional mission and vehicle management involves teams of highly trained specialists monitoring vehicle status and crew activities, responding rapidly to any anomalies encountered during operations. These teams work from the Mission Control Center and have access to engineering support teams with specialized expertise in International Space Station (ISS) subsystems. Integrated System Health Management (ISHM) applications can significantly augment these capabilities by providing enhanced monitoring, prognostic and diagnostic tools for critical decision support and mission management. The Intelligent Systems Division of NASA Ames Research Center is developing many prototype applications using model-based reasoning, data mining and simulation, working with Mission Control through the ISHM Testbed and Prototypes Project. This paper will briefly describe information technology that supports current mission management practice, and will extend this to a vision for future mission control workflow incorporating new ISHM applications. It will describe ISHM applications currently under development at NASA and will define technical approaches for implementing our vision of future human exploration mission management incorporating artificial intelligence and distributed web service architectures using specific examples. Several prototypes are under development, each highlighting a different computational approach. The ISStrider application allows in-depth analysis of Caution and Warning (C&W) events by correlating real-time telemetry with the logical fault trees used to define off-nominal events. The application uses live telemetry data and the Livingstone diagnostic inference engine to display the specific parameters and fault trees that generated the C&W event, allowing a flight controller to identify the root cause of the event from thousands of possibilities by simply navigating animated fault tree models on their workstation. SimStation models the functional power flow for the ISS Electrical Power System and can predict power balance for nominal and off-nominal conditions. SimStation uses real-time telemetry data to keep detailed computational physics models synchronized with actual ISS power system state. In the event of failure, the application can then rapidly diagnose root cause, predict future resource levels and even correlate technical documents relevant to the specific failure. These advanced computational models will allow better insight and more precise control of ISS subsystems, increasing safety margins by speeding up anomaly resolution and reducing engineering team effort and cost. This technology will make operating ISS more efficient and is directly applicable to next-generation exploration missions and Crew Exploration Vehicles.
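The fault-tree correlation described above can be illustrated with a minimal sketch: a toy fault tree whose basic events are asserted by simple telemetry threshold tests, used to explain a triggered caution event. This is not the ISStrider or Livingstone implementation; the gate structure, parameter names, and thresholds below are hypothetical.

    # Illustrative sketch only: a toy fault-tree evaluation driven by telemetry,
    # not the ISStrider/Livingstone implementation described in the abstract.
    from dataclasses import dataclass, field
    from typing import Callable, Dict

    @dataclass
    class BasicEvent:
        name: str
        test: Callable[[Dict[str, float]], bool]   # telemetry -> is this event active?

    @dataclass
    class Gate:
        name: str
        kind: str                                  # "AND" or "OR"
        children: list = field(default_factory=list)

    def active(node, telemetry):
        """Return True if this node is asserted given the current telemetry."""
        if isinstance(node, BasicEvent):
            return node.test(telemetry)
        states = [active(c, telemetry) for c in node.children]
        return all(states) if node.kind == "AND" else any(states)

    def root_causes(node, telemetry):
        """Collect the basic events currently asserted beneath this node."""
        if isinstance(node, BasicEvent):
            return [node.name] if node.test(telemetry) else []
        causes = []
        for c in node.children:
            causes += root_causes(c, telemetry)
        return causes

    # Hypothetical mini-tree for a power channel caution event
    low_bus_v = BasicEvent("bus_voltage_low", lambda t: t["bus_v"] < 120.0)
    bcdu_off  = BasicEvent("bcdu_offline",    lambda t: t["bcdu_state"] == 0)
    top = Gate("channel_caution", "OR", [low_bus_v, bcdu_off])

    telemetry = {"bus_v": 118.7, "bcdu_state": 1}
    if active(top, telemetry):
        print("C&W event explained by:", root_causes(top, telemetry))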
TU-AB-BRD-04: Development of Quality Management Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomadsen, B.
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are all about; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
TU-AB-BRD-02: Failure Modes and Effects Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huq, M.
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are all about; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
Predeployment validation of fault-tolerant systems through software-implemented fault insertion
NASA Technical Reports Server (NTRS)
Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.
1989-01-01
The fault-injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion based on these methodologies is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestation of faults, to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving insertion of faults. A common system interface provides ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
Use of Terrestrial Laser Scanner for Rigid Airport Pavement Management.
Barbarella, Maurizio; D'Amico, Fabrizio; De Blasiis, Maria Rosaria; Di Benedetto, Alessandro; Fiani, Margherita
2017-12-26
The evaluation of the structural efficiency of airport infrastructures is a complex task. Faulting is one of the most important indicators of rigid pavement performance. The aim of our study is to provide a new method for faulting detection and computation on jointed concrete pavements. Nowadays, the assessment of faulting is performed with the use of laborious and time-consuming measurements that strongly hinder aircraft traffic. We proposed a field procedure for Terrestrial Laser Scanner data acquisition and a computation flow chart in order to identify and quantify the fault size at each joint of apron slabs. The total point cloud has been used to compute the least-squares plane fitting those points. The best-fit plane for each slab has been computed too. The attitude of each slab plane with respect to both the adjacent ones and the apron reference plane has been determined by the normal vectors to the surfaces. Faulting has been evaluated as the difference in elevation between the slab planes along chosen sections. For a more accurate evaluation of the faulting value, we have then considered a few strips of data covering rectangular areas of different sizes across the joints. The accuracy of the estimated quantities has been computed too.
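A minimal sketch of the plane-fitting step described above, assuming NumPy and synthetic point clouds rather than real Terrestrial Laser Scanner data: each slab's points are fit with a least-squares plane, and faulting is taken as the elevation difference between the two planes along the joint. This illustrates the computation only and is not the authors' processing chain.

    # Minimal sketch (not the authors' code): fit a least-squares plane to each
    # slab's point cloud and estimate faulting as the elevation step at the joint.
    import numpy as np

    def fit_plane(points):
        """Least-squares plane z = a*x + b*y + c through an (N, 3) point array."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        A = np.column_stack([x, y, np.ones_like(x)])
        (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
        return a, b, c

    def plane_z(coeffs, x, y):
        a, b, c = coeffs
        return a * x + b * y + c

    def faulting_at_joint(slab_a_pts, slab_b_pts, joint_xy):
        """Elevation difference of the two slab planes evaluated along the joint line."""
        pa, pb = fit_plane(slab_a_pts), fit_plane(slab_b_pts)
        dz = [plane_z(pa, x, y) - plane_z(pb, x, y) for x, y in joint_xy]
        return float(np.mean(dz)), float(np.std(dz))

    # Synthetic example: slab B sits 4 mm lower than slab A across the joint at x = 5 m
    rng = np.random.default_rng(0)
    xa, xb = rng.uniform(0, 5, 500), rng.uniform(5, 10, 500)
    ya, yb = rng.uniform(0, 4, 500), rng.uniform(0, 4, 500)
    za = 0.001 * xa + rng.normal(0, 0.0005, 500)
    zb = 0.001 * xb - 0.004 + rng.normal(0, 0.0005, 500)
    slab_a = np.column_stack([xa, ya, za])
    slab_b = np.column_stack([xb, yb, zb])
    joint = [(5.0, y) for y in np.linspace(0, 4, 20)]
    print("faulting (m): mean=%.4f std=%.4f" % faulting_at_joint(slab_a, slab_b, joint))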
Study of a phase-to-ground fault on a 400 kV overhead transmission line
NASA Astrophysics Data System (ADS)
Iagăr, A.; Popa, G. N.; Diniş, C. M.
2018-01-01
Power utilities need to supply their consumers at a high power quality level. Because the faults that occur on High-Voltage and Extra-High-Voltage transmission lines can cause serious damage in underlying transmission and distribution systems, it is important to examine each fault in detail. In this work we studied a phase-to-ground fault (on phase 1) of the 400 kV overhead transmission line Mintia-Arad. The Indactic® 650 fault-analyzing system was used to record the history of the fault. Signals (analog and digital) recorded by the Indactic® 650 were visualized and analyzed with the Focus program. The summary of the fault report allowed evaluation of the behavior of the control and protection equipment and determination of the cause and location of the fault.
NASA Astrophysics Data System (ADS)
Heinlein, S. N.; Pavlis, T. L.; Bruhn, R. L.; McCalpin, J. P.
2017-12-01
This study evaluates a surface structure using 3D visualization of LiDAR and aerial photography, then analyzes these datasets using structure mapping techniques. Results provide new insight into the role of tectonics versus gravitational deformation. The study area is located in southern Alaska at the western edge of the St. Elias Orogen, where the Yakutat microplate is colliding into Alaska. Computer applications were used to produce 3D terrain models to create a kinematic assessment of the Ragged Mountain fault, which trends along the length of the east flank of Ragged Mountain. The area contains geomorphic and structural features which are utilized to determine the type of displacement on the fault. Previous studies described the Ragged Mountain fault as a very shallow (8°), west-dipping thrust fault that reactivated in the Late Holocene by westward-directed gravity sliding and inferred at least 180 m of normal slip, in a direction opposite to the (relative) eastward thrust transport of the structure inferred from stratigraphic juxtaposition. More recently this gravity-sliding hypothesis has been questioned, and this study evaluates one of the alternative hypotheses: that uphill-facing normal fault scarps along the Ragged Mountain fault trace represent extension above a buried ramp in a thrust, evaluated here with a fault-parallel flow model of hanging-wall folding and extension. Profiles across the scarp trace were used to illustrate the curvature of the topographic surfaces adjacent to the scarp system and to evaluate their origin. This simple kinematic model tests the hypothesis that extensional fault scarps at the surface are produced by flexure above a deeper ramp in a largely blind thrust system. The data, in the context of this model, imply that the extensional scarp structures previously examined represent a combination of erosionally modified features overprinted by flexural extension above a thrust system. Analyses of scarp heights along the structure are combined with the model to suggest a decrease in Holocene slip from south to north along the Ragged Mountain fault, from 11.3 m to 0.2 m, respectively.
ERIC Educational Resources Information Center
Wood, Rulon Kent
This study was designed to arrive at a possible consensus of expert opinions as related to teacher use of library media centers in American public education, and to analyze the essential teacher skills and knowledge suggested by the experts through this systematic methodology. This provided a national needs assessment to serve as a basis for…
Efficient Probabilistic Diagnostics for Electrical Power Systems
NASA Technical Reports Server (NTRS)
Mengshoel, Ole J.; Chavira, Mark; Cascio, Keith; Poll, Scott; Darwiche, Adnan; Uckun, Serdar
2008-01-01
We consider in this work the probabilistic approach to model-based diagnosis when applied to electrical power systems (EPSs). Our probabilistic approach is formally well-founded, as it is based on Bayesian networks and arithmetic circuits. We investigate the diagnostic task known as fault isolation, and pay special attention to meeting two of the main challenges, model development and real-time reasoning, often associated with real-world application of model-based diagnosis technologies. To address the challenge of model development, we develop a systematic approach to representing electrical power systems as Bayesian networks, supported by an easy-to-use specification language. To address the real-time reasoning challenge, we compile Bayesian networks into arithmetic circuits. Arithmetic circuit evaluation supports real-time diagnosis by being predictable and fast. In essence, we introduce a high-level EPS specification language from which Bayesian networks that can diagnose multiple simultaneous failures are auto-generated, and we illustrate the feasibility of using arithmetic circuits, compiled from Bayesian networks, for real-time diagnosis on real-world EPSs of interest to NASA. The experimental system is a real-world EPS, namely the Advanced Diagnostic and Prognostic Testbed (ADAPT) located at the NASA Ames Research Center. In experiments with the ADAPT Bayesian network, which currently contains 503 discrete nodes and 579 edges, we find high diagnostic accuracy in scenarios where one to three faults, both in components and sensors, were inserted. The time taken to compute the most probable explanation using arithmetic circuits has a small mean of 0.2625 milliseconds and standard deviation of 0.2028 milliseconds. In experiments with data from ADAPT we also show that arithmetic circuit evaluation substantially outperforms joint tree propagation and variable elimination, two alternative algorithms for diagnosis using Bayesian network inference.
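As a toy illustration of probabilistic fault isolation (not the ADAPT Bayesian network or its arithmetic-circuit compilation), the sketch below brute-force enumerates health assignments for a three-component circuit and returns the most probable explanation of an off-nominal sensor reading. Component names, priors, and the sensor model are invented for the example.

    # Toy sketch of probabilistic fault isolation by exhaustive enumeration.
    # This is NOT the ADAPT Bayesian network or an arithmetic-circuit evaluation;
    # component names, priors, and sensor models below are illustrative only.
    from itertools import product

    # Health priors: probability each component is healthy
    priors = {"relay": 0.99, "inverter": 0.98, "voltage_sensor": 0.995}

    def sensor_likelihood(health, reading_nominal):
        """P(sensor reading | health assignment) for a single voltage sensor."""
        circuit_ok = health["relay"] and health["inverter"]
        if not health["voltage_sensor"]:
            return 0.5                      # faulty sensor: reading uninformative
        # healthy sensor reports the true circuit state with small error probability
        return 0.95 if (reading_nominal == circuit_ok) else 0.05

    def most_probable_explanation(reading_nominal):
        best, best_p = None, -1.0
        for bits in product([True, False], repeat=len(priors)):
            health = dict(zip(priors, bits))
            p = sensor_likelihood(health, reading_nominal)
            for comp, ok in health.items():
                p *= priors[comp] if ok else (1.0 - priors[comp])
            if p > best_p:
                best, best_p = health, p
        return best, best_p

    mpe, p = most_probable_explanation(reading_nominal=False)
    print("MPE given off-nominal reading:", mpe, "unnormalized prob:", round(p, 6))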
Earthquake Model of the Middle East (EMME) Project: Active Fault Database for the Middle East Region
NASA Astrophysics Data System (ADS)
Gülen, L.; Wp2 Team
2010-12-01
The Earthquake Model of the Middle East (EMME) Project is a regional project of the umbrella GEM (Global Earthquake Model) project (http://www.emme-gem.org/). The EMME project region includes Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE projects overlap, and Turkey forms a bridge connecting the two projects. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years, causing casualties in the millions. The EMME project will use the PSHA approach, and the existing source models will be revised or modified by the incorporation of newly acquired data. More importantly, the most distinguishing aspect of the EMME project relative to previous ones will be its dynamic character. This very important characteristic is accomplished by the design of a flexible and scalable database that will permit continuous update, refinement, and analysis. A digital active fault map of the Middle East region is under construction in ArcGIS format. We are developing a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw≥5.5. Similar to the WGCEP-2007 and UCERF-2 projects, the EMME project database includes information on the geometry and rates of movement of faults in a “Fault Section Database”. The “Fault Section” concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far over 3,000 Fault Sections have been defined and parameterized for the Middle East region. A separate “Paleo-Sites Database” includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library that includes the pdf files of the relevant papers and reports is also being prepared. Another task of WP-2 of the EMME project is to prepare a strain and slip rate map of the Middle East region, basically by compiling already published data. The third task is to calculate b-values and Mmax and to determine the activity rates. New data and evidence will be interpreted to revise or modify the existing source models. A logic tree approach will be utilized for areas where there is no consensus, to encompass different interpretations. Finally, seismic source zones in the Middle East region will be delineated using all available data. EMME Project WP2 Team: Levent Gülen, Murat Utkucu, M. Dinçer Köksal, Hilal Domaç, Yigit Ince, Mine Demircioglu, Shota Adamia, Nino Sandradze, Aleksandre Gvencadze, Arkadi Karakhanyan, Mher Avanesyan, Tahir Mammadli, Gurban Yetirmishli, Arif Axundov, Khaled Hessami, M. Asif Khan, M. Sayab.
NASA Technical Reports Server (NTRS)
Rogers, William H.; Schutte, Paul C.
1993-01-01
Advanced fault management aiding concepts for commercial pilots are being developed in a research program at NASA Langley Research Center. One aim of this program is to re-evaluate current design principles for display of fault information to the flight crew: (1) from a cognitive engineering perspective and (2) in light of the availability of new types of information generated by advanced fault management aids. The study described in this paper specifically addresses principles for organizing fault information for display to pilots based on their mental models of fault management.
NASA Astrophysics Data System (ADS)
DeLong, S.; Donnellan, A.; Pickering, A.
2017-12-01
Aseismic fault creep, coseismic fault displacement, distributed deformation, and the relative contribution of each have important bearing on infrastructure resilience, risk reduction, and the study of earthquake physics. Furthermore, the impact of interseismic fault creep on rupture propagation scenarios, and consequently on fault segmentation and maximum earthquake magnitudes, is poorly resolved in current rupture forecast models. The creeping section of the San Andreas Fault (SAF) in Central California is an outstanding area for establishing methodology for future scientific response to damaging earthquakes and for characterizing the fine details of crustal deformation. Here, we describe how data from airborne and terrestrial laser scanning, airborne interferometric radar (UAVSAR), and optical data from satellites and UAVs can be used to characterize rates and map patterns of deformation within fault zones of varying complexity and geomorphic expression. We are evaluating laser point cloud processing, photogrammetric structure from motion, radar interferometry, sub-pixel correlation, and other techniques to characterize the relative ability of each to measure crustal deformation in two and three dimensions through time. We are collecting new data and synthesizing existing data from the zone of highest interseismic creep rates along the SAF, where a transition from a single main fault trace to a 1-km-wide extensional stepover occurs. In the stepover region, creep measurements from alignment arrays 100 meters long across the main fault trace reveal lower rates than those in adjacent, geomorphically simpler parts of the fault. This indicates that deformation is distributed across the en echelon subsidiary faults, by creep and/or stick-slip behavior. Our objectives are to better understand how deformation is partitioned across a fault damage zone, how it is accommodated in the shallow subsurface, and to better characterize the relative amounts of fault creep and potential stick-slip fault behavior across the plate boundary at these sites in order to evaluate the potential for rupture propagation in large earthquakes.
NASA Technical Reports Server (NTRS)
Jammu, V. B.; Danai, K.; Lewicki, D. G.
1998-01-01
This paper presents the experimental evaluation of the Structure-Based Connectionist Network (SBCN) fault diagnostic system introduced in the preceding article. For this, vibration data from two different helicopter gearboxes, the OH-58A and the S-61, are used. A salient feature of SBCN is its reliance on knowledge of the gearbox structure and the type of features obtained from processed vibration signals as a substitute for training. To formulate this knowledge, approximate vibration transfer models are developed for the two gearboxes and utilized to derive the connection weights representing the influence of component faults on vibration features. The validity of the structural influences is evaluated by comparing them with those obtained from experimental RMS values. These influences are also evaluated by comparing them with the weights of a connectionist network trained through supervised learning. The results indicate general agreement between the modeled and experimentally obtained influences. The vibration data from the two gearboxes are also used to evaluate the performance of SBCN in fault diagnosis. The diagnostic results indicate that the SBCN is effective in detecting the presence of faults and isolating them within gearbox subsystems based on structural influences, but its performance is not as good in isolating faulty components, mainly due to a lack of appropriate vibration features.
DEPEND: A simulation-based environment for system level dependability analysis
NASA Technical Reports Server (NTRS)
Goswami, Kumar; Iyer, Ravishankar K.
1992-01-01
The design and evaluation of highly reliable computer systems is a complex issue. Designers mostly develop such systems based on prior knowledge and experience and occasionally from analytical evaluations of simplified designs. A simulation-based environment called DEPEND, which is especially geared toward the design and evaluation of fault-tolerant architectures, is presented. DEPEND is unique in that it exploits the properties of object-oriented programming to provide a flexible framework with which a user can rapidly model and evaluate various fault-tolerant systems. The key features of the DEPEND environment are described, and its capabilities are illustrated with a detailed analysis of a real design. In particular, DEPEND is used to simulate the Unix-based Tandem Integrity fault-tolerance strategy and evaluate how well it handles near-coincident errors caused by correlated and latent faults. Issues such as memory scrubbing, re-integration policies, and workload-dependent repair times, which affect how the system handles near-coincident errors, are also evaluated. The method used by DEPEND to simulate error latency and the time-acceleration technique that provides enormous simulation speed-up are also discussed. Unlike other simulation-based dependability studies, the use of these approaches and the accuracy of the simulation model are validated by comparing the results of the simulations with measurements obtained from fault injection experiments conducted on a production Tandem Integrity machine.
A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.
2010-01-01
A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
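The isolation logic can be sketched in simplified form: each fault hypothesis has its own estimator, and the hypothesis whose residuals all remain below their thresholds is retained. The sketch below uses fixed thresholds and precomputed residual values purely for illustration; the paper's nonlinear adaptive estimators and adaptive thresholds are not reproduced here.

    # Simplified illustration of the isolation logic only (fixed thresholds and
    # precomputed residuals); the paper's nonlinear adaptive estimators and
    # adaptive thresholds are not reproduced in this sketch.
    import numpy as np

    def isolate_fault(residuals, thresholds):
        """
        residuals:  dict mapping hypothesis name -> residual vector produced by
                    that hypothesis's estimator (one entry per monitored output)
        thresholds: dict mapping hypothesis name -> threshold vector
        Returns the hypotheses whose residuals stay entirely below threshold.
        """
        consistent = []
        for hyp, r in residuals.items():
            if np.all(np.abs(np.asarray(r)) <= np.asarray(thresholds[hyp])):
                consistent.append(hyp)
        return consistent

    # Hypothetical numbers: only the fan-speed sensor-fault estimator explains the data
    residuals = {
        "no_fault":               [0.8, 1.2, 0.3],
        "fan_speed_sensor_fault": [0.1, 0.2, 0.1],
        "fuel_actuator_fault":    [0.9, 0.1, 1.5],
    }
    thresholds = {h: [0.5, 0.5, 0.5] for h in residuals}
    print("consistent hypotheses:", isolate_fault(residuals, thresholds))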
The determination of the stacking fault energy in copper-nickel alloys
NASA Technical Reports Server (NTRS)
Leighly, H. P., Jr.
1982-01-01
Methods for determining the stacking fault energies of a series of nickel-copper alloys to gain an insight into the embrittling effect of hydrogen are evaluated. Plans for employing weak beam dark field electron microscopy to determine stacking fault energies are outlined.
Hirono, Tetsuro; Asayama, Satoru; Kaneki, Shunya; Ito, Akihiro
2016-01-01
The criteria for designating an “Active Fault” not only are important for understanding regional tectonics, but also are a paramount issue for assessing the earthquake risk of faults that are near important structures such as nuclear power plants. Here we propose a proxy, based on the preservation of amorphous ultrafine particles, to assess fault activity within the last millennium. X-ray diffraction data and electron microscope observations of samples from an active fault demonstrated the preservation of large amounts of amorphous ultrafine particles in two slip zones that last ruptured in 1596 and 1999, respectively. A chemical kinetic evaluation of the dissolution process indicated that such particles could survive for centuries, which is consistent with the observations. Thus, preservation of amorphous ultrafine particles in a fault may be valuable for assessing the fault’s latest activity, aiding efforts to evaluate faults that may damage critical facilities in tectonically active zones. PMID:27827413
Evaluation of fault-tolerant parallel-processor architectures over long space missions
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1989-01-01
The impact of a five-year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10^-7. The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP), is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.
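As a back-of-envelope check, and assuming a constant-failure-rate (exponential) model that the paper does not necessarily use, the two stated requirements imply maximum system failure rates of roughly the same order:

    # Back-of-envelope check (assuming a constant-failure-rate, exponential model,
    # which the paper does not necessarily use) of the stated requirements.
    import math

    five_years_h = 5 * 365 * 24            # mission length in hours
    # Requirement 1: P(survive 5 years) >= 0.99  ->  lambda <= -ln(0.99) / t
    lam_mission = -math.log(0.99) / five_years_h
    # Requirement 2: P(failure in 0.5 h) <= 1e-7  ->  lambda <= -ln(1 - 1e-7) / 0.5
    lam_halfhour = -math.log(1.0 - 1e-7) / 0.5

    print("max failure rate from 5-year requirement   : %.2e /h" % lam_mission)
    print("max failure rate from half-hour requirement: %.2e /h" % lam_halfhour)
    # The half-hour requirement is the slightly more stringent of the two.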
Swan, F.H.; Wesling, J.R.; Angell, M.M.; Thomas, A.P.; Whitney, J.W.; Gibson, J.D.
2001-01-01
Evaluation of surface faulting that may pose a hazard to prospective surface facilities is an important element of the tectonic studies for the potential Yucca Mountain high-level radioactive waste repository in southwestern Nevada. For this purpose, a program of detailed geologic mapping and trenching was done to obtain surface and near-surface geologic data that are essential for determining the location and recency of faults at a prospective surface-facilities site located east of Exile Hill in Midway Valley, near the eastern base of Yucca Mountain. The dominant tectonic features in the Midway Valley area are the north- to northeast-trending, west-dipping normal faults that bound the Midway Valley structural block: the Bow Ridge fault on the west side of Exile Hill and the Paintbrush Canyon fault on the east side of the valley. Trenching of Quaternary sediments has exposed evidence of displacements which demonstrate that these block-bounding faults repeatedly ruptured the surface during the middle to late Quaternary. Geologic mapping, subsurface borehole and geophysical data, and the results of trenching activities indicate the presence of north- to northeast-trending faults and northwest-trending faults in Tertiary volcanic rocks beneath alluvial and colluvial sediments near the prospective surface-facilities site. North- to northeast-trending faults include the Exile Hill fault along the eastern base of Exile Hill and faults to the east beneath the surficial deposits of Midway Valley. These faults have no geomorphic expression, but two north- to northeast-trending zones of fractures exposed in excavated profiles of middle to late Pleistocene deposits at the prospective surface-facilities site appear to be associated with these faults. Northwest-trending faults include the West Portal and East Portal faults, but no disruption of Quaternary deposits by these faults is evident. The western zone of fractures is associated with the Exile Hill fault. The eastern zone of fractures is within Quaternary alluvial sediments, but no bedrock was encountered in trenches and soil pits in this part of the prospective surface-facilities site; thus, the direct association of this zone with one or more bedrock faults is uncertain. No displacement of lithologic contacts or soil horizons could be detected in the fractured Quaternary deposits. The results of these investigations imply the absence of any appreciable late Quaternary faulting activity at the prospective surface-facilities site.
Investigation of fault modes in permanent magnet synchronous machines for traction applications
NASA Astrophysics Data System (ADS)
Choi, Gilsu
Over the past few decades, electric motor drives have been more widely adopted to power the transportation sector to reduce our dependence on foreign oil and carbon emissions. Permanent magnet synchronous machines (PMSMs) are popular in many applications in the aerospace and automotive industries that require high power density and high efficiency. However, the presence of magnets that cannot be turned off in the event of a fault has always been an issue that hinders adoption of PMSMs in these demanding applications. This work investigates the design and analysis of PMSMs for automotive traction applications with particular emphasis on fault-mode operation caused by faults appearing at the terminals of the machine. New models and analytical techniques are introduced for evaluating the steady-state and dynamic response of PMSM drives to various fault conditions. Attention is focused on modeling the PMSM drive including nonlinear magnetic behavior under several different fault conditions, evaluating the risks of irreversible demagnetization caused by the large fault currents, as well as developing fault mitigation techniques in terms of both the fault currents and demagnetization risks. Of the major classes of machine terminal faults that can occur in PMSMs, short-circuit (SC) faults produce much more dangerous fault currents than open-circuit faults. The impact of different PMSM topologies and parameters on their responses to symmetrical and asymmetrical short-circuit (SSC & ASC) faults has been investigated. A detailed investigation on both the SSC and ASC faults is presented including both closed-form and numerical analysis. The demagnetization characteristics caused by high fault-mode stator currents (i.e., armature reaction) for different types of PMSMs are investigated. A thorough analysis and comparison of the relative demagnetization vulnerability for different types of PMSMs is presented. This analysis includes design guidelines and recommendations for minimizing the demagnetization risks while examining corresponding trade-offs. Two PM machines have been tested to validate the predicted fault currents and braking torque as well as demagnetization risks in PMSM drives. The generality and scalability of key results have also been demonstrated by analyzing several PM machines with a variety of stator, rotor, and winding configurations for various power ratings.
Strain Accumulation and Release of the Gorkha, Nepal, Earthquake (Mw 7.8, 25 April 2015)
NASA Astrophysics Data System (ADS)
Morsut, Federico; Pivetta, Tommaso; Braitenberg, Carla; Poretti, Giorgio
2017-08-01
The near-fault GNSS records of strong ground movement are the most sensitive for defining the fault rupture. Here, two unpublished GNSS records are studied: a near-fault strong-motion station (NAGA) and a distant station in a poorly covered area (PYRA). The station NAGA, located above the Gorkha fault, sensed a southward displacement of almost 1.7 m. The PYRA station, which is positioned at a distance of about 150 km from the fault, near the Pyramid station in the Everest region, showed static displacements on the order of some millimeters. The observed displacements were compared with the calculated displacements of a finite fault model in an elastic halfspace. We evaluated two fault slip models derived from seismological and geodetic studies: the comparison of the observed and modelled fields reveals that our displacements are in better accordance with the geodetically derived fault model than the seismological one. Finally, we evaluate the yearly strain rate at four GNSS stations in the area that continuously recorded the deformation field for at least 5 years. The strain rate is then compared with the strain released by the Gorkha earthquake, leading to an interval of 235 years to store a comparable amount of elastic energy. The three near-fault GNSS stations require a slightly wider fault than published, in the case of an equivalent homogeneous rupture, with an average uniform slip of 3.5 m occurring on an area of 150 km × 60 km.
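The recurrence-style arithmetic can be illustrated with the simplest possible calculation: dividing an average coseismic slip by an assumed interseismic loading rate. The 15 mm/yr rate below is an assumed round number for illustration only; the paper's 235-year figure is derived from its own GNSS strain-rate data rather than from this shortcut.

    # Illustrative arithmetic only: one simple way to turn an average coseismic slip
    # and an assumed interseismic loading rate into a recharge interval. The 15 mm/yr
    # rate is an assumed round number, not the strain rate measured in the paper.
    average_coseismic_slip_m = 3.5          # average uniform slip quoted above
    assumed_loading_rate_m_per_yr = 0.015   # hypothetical value for illustration

    interval_years = average_coseismic_slip_m / assumed_loading_rate_m_per_yr
    print("time to reload an equivalent slip deficit: %.0f years" % interval_years)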
Design study of Software-Implemented Fault-Tolerance (SIFT) computer
NASA Technical Reports Server (NTRS)
Wensley, J. H.; Goldberg, J.; Green, M. W.; Kutz, W. H.; Levitt, K. N.; Mills, M. E.; Shostak, R. E.; Whiting-Okeefe, P. M.; Zeidler, H. M.
1982-01-01
Software-implemented fault tolerant (SIFT) computer design for commercial aviation is reported. A SIFT design concept is addressed. Alternate strategies for physical implementation are considered. Hardware and software design correctness is addressed. System modeling and effectiveness evaluation are considered from a fault-tolerant point of view.
NASA Technical Reports Server (NTRS)
Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily
2009-01-01
Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.
A Fuzzy Reasoning Design for Fault Detection and Diagnosis of a Computer-Controlled System
Ting, Y.; Lu, W.B.; Chen, C.H.; Wang, G.K.
2008-01-01
A Fuzzy Reasoning and Verification Petri Nets (FRVPNs) model is established for an error detection and diagnosis mechanism (EDDM) applied to a complex fault-tolerant PC-controlled system. The inference accuracy can be improved through the hierarchical design of a two-level fuzzy rule decision tree (FRDT) and a Petri nets (PNs) technique to transform the fuzzy rules into the FRVPNs model. Several simulation examples of the assumed failure events were carried out using the FRVPNs and the Mamdani fuzzy method with MATLAB tools. The reasoning performance of the developed FRVPNs was verified by comparing the inference outcome to that of the Mamdani method. Both methods result in the same conclusions. Thus, the present study demonstrates that the proposed FRVPNs model is able to achieve the purpose of reasoning and, furthermore, of determining the failure event of the monitored application program. PMID:19255619
The Landers earthquake; preliminary instrumental results
Jones, L.; Mori, J.; Hauksson, E.
1992-01-01
Early on the morning of June 28, 1992, millions of people in southern California were awakened by the largest earthquake to occur in the western United States in the past 40 years. At 4:58 a.m. PDT (local time), faulting associated with the magnitude 7.3 earthquake broke through to the earth's surface near the town of Landers, California. The surface rupture then propagated 70 km (45 mi) to the north and northwest along a band of faults passing through the middle of the Mojave Desert. Fortunately, the strongest shaking occurred in uninhabited regions of the Mojave Desert. Still, one child was killed in Yucca Valley, and about 400 people were injured in the surrounding area. The desert communities of Landers, Yucca Valley, and Joshua Tree in San Bernardino County suffered considerable damage to buildings and roads. Damage to water and power lines caused problems in many areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, M.
2000-04-01
This project is the first evaluation of model-based diagnostics applied to hydraulic robot systems. A greater understanding of fault detection for hydraulic robots has been gained, and a new theoretical fault detection model has been developed and evaluated.
Slip distribution, strain accumulation and aseismic slip on the Chaman Fault system
NASA Astrophysics Data System (ADS)
Amelug, F.
2015-12-01
The Chaman fault system is a transcurrent fault system developed due to the oblique convergence of the India and Eurasia plates along the western boundary of the India plate. To evaluate the contemporary rates of strain accumulation along and across the Chaman fault system, we use 2003-2011 Envisat SAR imagery and InSAR time-series methods to obtain a ground velocity field in the radar line-of-sight (LOS) direction. We correct the InSAR data for different sources of systematic bias, including phase unwrapping errors, local oscillator drift, topographic residuals, and stratified tropospheric delay, and evaluate the uncertainty due to the residual delay using time series of MODIS observations of precipitable water vapor. The InSAR velocity field and modeling demonstrate the distribution of deformation across the Chaman fault system. In the central Chaman fault system, the InSAR velocity shows clear strain localization on the Chaman and Ghazaband faults, and modeling suggests a total slip rate of ~24 mm/yr distributed on the two faults with rates of 8 and 16 mm/yr, respectively, corresponding to 80% of the total ~3 cm/yr plate motion between India and Eurasia at these latitudes and consistent with the kinematic models which have predicted a slip rate of ~17-24 mm/yr for the Chaman fault. In the northern Chaman fault system (north of 30.5 N), ~6 mm/yr of the relative plate motion is accommodated across the Chaman fault. North of 30.5 N, where the topographic expression of the Ghazaband fault vanishes, its slip does not transfer to the Chaman fault but rather distributes among different faults in the Kirthar range and Sulaiman lobe. Observed surface creep on the southern Chaman fault between Nushki and north of the City of Chaman indicates that the fault is partially locked, consistent with the recorded M<7 earthquakes in the last century on this segment. The Chaman fault between north of the City of Chaman and north of Kabul does not show an increase in the rate of strain accumulation; however, the lack of seismicity on this segment presents a significant hazard to Kabul. The high rate of strain accumulation on the Ghazaband fault and the lack of evidence for rupture of the fault during the 1935 Quetta earthquake present a growing earthquake hazard to Balochistan and populated areas such as the city of Quetta.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-12-09
This report summarizes the authors' review and evaluation of the existing seismic hazards program at Los Alamos National Laboratory (LANL). The report recommends that the original program be augmented with a probabilistic analysis of seismic hazards involving the assignment of weighted probabilities of occurrence to all potential sources. This approach yields a more realistic evaluation of the likelihood of large earthquake occurrence, particularly in regions where seismic sources may have recurrence intervals of several thousand years or more. The report reviews the locations and geomorphic expressions of identified fault lines along with the known displacements of these faults and the last known occurrence of seismic activity. Faults are mapped and categorized by their potential for actual movement. Based on geologic site characterization, recommendations are made for increased seismic monitoring; age-dating studies of faults and geomorphic features; increased use of remote sensing and aerial photography for surface mapping of faults; the development of a landslide susceptibility map; and the development of seismic design standards for all existing and proposed facilities at LANL.
Stepp, J.C.; Wong, I.; Whitney, J.; Quittmeyer, R.; Abrahamson, N.; Toro, G.; Young, S.R.; Coppersmith, K.; Savy, J.; Sullivan, T.
2001-01-01
Probabilistic seismic hazard analyses were conducted to estimate both ground motion and fault displacement hazards at the potential geologic repository for spent nuclear fuel and high-level radioactive waste at Yucca Mountain, Nevada. The study is believed to be the largest and most comprehensive analysis ever conducted for ground-shaking hazard and is a first-of-a-kind assessment of probabilistic fault displacement hazard. The major emphasis of the study was on the quantification of epistemic uncertainty. Six teams of three experts performed seismic source and fault displacement evaluations, and seven individual experts provided ground motion evaluations. State-of-the-practice expert elicitation processes involving structured workshops, consensus identification of parameters and issues to be evaluated, common sharing of data and information, and open exchanges about the basis for preliminary interpretations were implemented. Ground-shaking hazard was computed for a hypothetical rock outcrop at -300 m, the depth of the potential waste emplacement drifts, at the designated design annual exceedance probabilities of 10^-3 and 10^-4. The fault displacement hazard was calculated at the design annual exceedance probabilities of 10^-4 and 10^-5.
Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures
NASA Technical Reports Server (NTRS)
Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.
1990-01-01
A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.
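A toy illustration of the fault-injection idea, reduced to flipping a bit in one channel of a duplicated computation and checking whether comparison-based detection catches it. The mixed-mode electrical/logic simulation used in the paper is far more detailed; the control law and word sizes below are invented for the example.

    # Toy illustration of transient fault injection into one channel of a
    # duplicated computation; the mixed-mode electrical/logic simulation in the
    # paper is far more detailed than this sketch.
    import random

    def control_law(sensor):
        """Stand-in for one channel's control computation (illustrative only)."""
        return (3 * sensor + 7) & 0xFFFF

    def inject_bit_flip(value, bit):
        return value ^ (1 << bit)

    def run_dual_channel(sensor, flip_bit=None):
        ch_a = control_law(sensor)
        ch_b = control_law(sensor)
        if flip_bit is not None:                 # transient hits channel B's output
            ch_b = inject_bit_flip(ch_b, flip_bit)
        return ch_a != ch_b                      # simple comparison-based detection

    random.seed(1)
    trials = 10_000
    detected = sum(run_dual_channel(random.randrange(2**12), flip_bit=random.randrange(16))
                   for _ in range(trials))
    print("single-channel transients detected: %d / %d" % (detected, trials))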
Advanced cloud fault tolerance system
NASA Astrophysics Data System (ADS)
Sumangali, K.; Benny, Niketa
2017-11-01
Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism to abstract faults from users. We have proposed a fault tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.
Use of Terrestrial Laser Scanner for Rigid Airport Pavement Management
Di Benedetto, Alessandro; Fiani, Margherita
2017-01-01
The evaluation of the structural efficiency of airport infrastructures is a complex task. Faulting is one of the most important indicators of rigid pavement performance. The aim of our study is to provide a new method for faulting detection and computation on jointed concrete pavements. Nowadays, the assessment of faulting is performed with the use of laborious and time-consuming measurements that strongly hinder aircraft traffic. We proposed a field procedure for Terrestrial Laser Scanner data acquisition and a computation flow chart in order to identify and quantify the fault size at each joint of apron slabs. The total point cloud has been used to compute the least-squares plane fitting those points. The best-fit plane for each slab has been computed too. The attitude of each slab plane with respect to both the adjacent ones and the apron reference plane has been determined by the normal vectors to the surfaces. Faulting has been evaluated as the difference in elevation between the slab planes along chosen sections. For a more accurate evaluation of the faulting value, we have then considered a few strips of data covering rectangular areas of different sizes across the joints. The accuracy of the estimated quantities has been computed too. PMID:29278386
NASA Astrophysics Data System (ADS)
Bejar, M.; Alvarez Gomez, J. A.; Staller, A.; Luna, M. P.; Perez Lopez, R.; Monserrat, O.; Chunga, K.; Herrera, G.; Jordá, L.; Lima, A.; Martínez-Díaz, J. J.
2017-12-01
It has long been recognized that earthquakes change the stress in the upper crust around the fault rupture and can influence the short-term behaviour of neighbouring faults and volcanoes. Rapid estimates of these stress changes can provide the authorities managing the post-disaster situation with a useful tool to identify and monitor potential threats and to update the estimates of seismic and volcanic hazard in a region. Space geodesy is now routinely used following an earthquake to image the displacement of the ground and estimate the rupture geometry and the distribution of slip. Using the obtained source model, it is possible to evaluate the remaining moment deficit and to infer the stress changes on nearby faults and volcanoes produced by the earthquake, which can be used to identify which faults and volcanoes are brought closer to failure or activation. Although these procedures are commonly used today, the transference of these results to the authorities managing the post-disaster situation is not straightforward, and thus their usefulness is reduced in practice. Here we propose a methodology to evaluate the potential influence of an earthquake on nearby faults and volcanoes and to create easy-to-understand maps for decision-making support after an earthquake. We apply this methodology to the Mw 7.8, 2016 Ecuador earthquake. Using Sentinel-1 SAR and continuous GPS data, we measure the coseismic ground deformation and estimate the distribution of slip. We then use this model to evaluate the moment deficit on the subduction interface and the changes of stress on the surrounding faults and volcanoes. The results are compared with the seismic and volcanic events that have occurred after the earthquake. We discuss the potential and limits of the methodology and the lessons learnt from discussion with local authorities.
Introduction to Concurrent Engineering: Electronic Circuit Design and Production Applications
1992-09-01
STD-1629. Failure mode distribution data for many different types of parts may be found in RAC publication FMD-91. FMEA utilizes inductive logic in a...contrasts with a Fault Tree Analysis (FTA) which utilizes deductive logic in a "top down" approach. In FTA, a system failure is assumed and traced down...Analysis (FTA) is a graphical method of risk analysis used to identify critical failure modes within a system or equipment. Utilizing a pictorial approach
COMCAN: a computer program for common cause analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdick, G.R.; Marshall, N.H.; Wilson, J.R.
1976-05-01
The computer program, COMCAN, searches the fault tree minimal cut sets for shared susceptibility to various secondary events (common causes) and common links between components. In the case of common causes, a location check may also be performed by COMCAN to determine whether barriers to the common cause exist between components. The program can locate common manufacturers of components having events in the same minimal cut set. A relative ranking scheme for secondary event susceptibility is included in the program.
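A toy sketch of the common-cause scan, assuming hypothetical susceptibility and location attributes for basic events; it searches each minimal cut set for a shared secondary-event susceptibility and flags whether the components are co-located. This illustrates the idea only and is not the COMCAN program.

    # Toy sketch of a common-cause scan over minimal cut sets; attribute names
    # and data are illustrative and not taken from the COMCAN program.

    # Susceptibilities and locations of basic events (hypothetical)
    events = {
        "pump_A_fails":  {"susceptible_to": {"fire", "flood"}, "location": "room_1"},
        "pump_B_fails":  {"susceptible_to": {"fire"},          "location": "room_1"},
        "valve_C_fails": {"susceptible_to": {"flood"},         "location": "room_2"},
    }

    minimal_cut_sets = [
        {"pump_A_fails", "pump_B_fails"},
        {"pump_A_fails", "valve_C_fails"},
    ]

    def common_cause_candidates(cut_sets, events):
        """Cut sets whose members share a susceptibility, plus a co-location flag."""
        findings = []
        for cs in cut_sets:
            shared = set.intersection(*(events[e]["susceptible_to"] for e in cs))
            if shared:
                same_room = len({events[e]["location"] for e in cs}) == 1
                findings.append((cs, shared, same_room))
        return findings

    for cs, causes, co_located in common_cause_candidates(minimal_cut_sets, events):
        print(sorted(cs), "shared causes:", causes, "co-located:", co_located)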
Strength reduction factors for seismic analyses of buildings exposed to near-fault ground motions
NASA Astrophysics Data System (ADS)
Qu, Honglue; Zhang, Jianjing; Zhao, J. X.
2011-06-01
To estimate the near-fault inelastic response spectra, the accuracy of six existing strength reduction factors (R) proposed by different investigators was evaluated by using a suite of near-fault earthquake records with directivity-induced pulses. In the evaluation, the force-deformation relationship is modelled by elastic-perfectly-plastic, bilinear, and stiffness-degrading models, and two site conditions, rock and soil, are considered. The R-value ratio (the ratio of the R value obtained from the existing R-expressions (or the R-µ-T relationships) to that from inelastic analyses) is used as a measurement parameter. Results show that the R-expressions proposed by Ordaz & Perez-Rocha are the most suitable for near-fault ground motions, followed by the Newmark & Hall and the Berrill et al. relationships. Based on an analysis using the near-fault ground motion dataset, new expressions of R that consider the effects of site conditions are presented and verified.
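For illustration, a commonly cited simplified form of the Newmark & Hall style R-µ-T relationship (one of the expressions evaluated in the paper) can be coded directly; the period breakpoints below are approximate and chosen for the example, not taken from the paper.

    # Simplified Newmark & Hall style R-mu-T relationship, given for illustration.
    # Period breakpoints are approximate; consult the original expressions for design use.
    import math

    def newmark_hall_R(mu, T):
        """Strength reduction factor R for ductility mu and period T (seconds)."""
        if T < 0.03:            # very stiff systems: essentially no reduction
            return 1.0
        if T < 0.5:             # short periods: equal-energy rule
            return math.sqrt(2.0 * mu - 1.0)
        return float(mu)        # long periods: equal-displacement rule

    for T in (0.02, 0.2, 1.0):
        print("T = %.2f s, mu = 4 -> R = %.2f" % (T, newmark_hall_R(4.0, T)))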
Development of a methodology for assessing the safety of embedded software systems
NASA Technical Reports Server (NTRS)
Garrett, C. J.; Guarro, S. B.; Apostolakis, G. E.
1993-01-01
A Dynamic Flowgraph Methodology (DFM) based on an integrated approach to modeling and analyzing the behavior of software-driven embedded systems for assessing and verifying reliability and safety is discussed. DFM is based on an extension of the Logic Flowgraph Methodology to incorporate state transition models. System models which express the logic of the system in terms of causal relationships between physical variables and temporal characteristics of software modules are analyzed to determine how a certain state can be reached. This is done by developing timed fault trees, which take the form of logical combinations of static trees relating the system parameters at different points in time. The resulting information concerning the hardware and software states can be used to eliminate unsafe execution paths and identify testing criteria for safety-critical software functions.
NASA Astrophysics Data System (ADS)
Steel, E.; Simkins, L. M.; Reynolds, L.; Fidler, M. K.
2017-12-01
The Cenozoic Fish Creek - Vallecito Basin formed through extension and transtension associated with the localization of the Pacific-North American plate boundary in the Salton Trough region of Southern California. The exhumation of this basin along the hanging wall of the West Salton Detachment Fault since 1 Ma exposed a well-preserved sedimentary sequence that records an abrupt shift from the alluvial and fluvial deposits of the Elephant Trees Formation to the marine turbidites of the Latrania Formation. This transition marks the rapid marine incursion into the Gulf of California at 6.3 Ma (Dorsey et al., 2011). The Elephant Trees Formation is, therefore, a key transitional unit for understanding environmental change during the early stages of basin formation and the initial opening of the Gulf of California. Here, we present a detailed investigation of the characteristics of the Elephant Trees Formation, including bed thickness, clast size, paleoflow indicators, sedimentary structures, and sorting, to understand the changing depositional environments associated with the onset of relative plate motion in the Gulf of California - Salton Trough corridor. This study aims to answer key questions regarding both regional tectonics and the dynamics of alluvial fan progradation, including 1) does the Elephant Trees Formation record initiation of rapid basin subsidence and basinward progradation of alluvial fans? and 2) if so, what insights can the Elephant Trees Formation provide regarding the dynamics of debris flows and alluvial fan evolution? Our results improve understanding of proximal to distal facies variations within alluvial fan deposits and further refine the paleogeography during the time of deposition of the Elephant Trees Formation (6.3-8.0 Ma), leading up to the timing of rapid marine incursion.
McGee, K.A.; Gerlach, T.M.
1998-01-01
Time-series sensor data reveal significant short-term and seasonal variations of magmatic CO2 in soil over a 12 month period in 1995-1996 at the largest tree-kill site on Mammoth Mountain, central-eastern California. Short-term variations leading to ground-level soil CO2 concentrations hazardous and lethal to humans were triggered by shallow faulting in the absence of increased seismicity or intrusion, consistent with tapping a reservoir of accumulated CO2, rather than direct magma degassing. Hydrologic processes closely modulated seasonal variations in CO2 concentrations, which rose to 65%-100% in soil gas under winter snowpack and plunged more than 25% in just days as the CO2 dissolved in spring snowmelt. The high efflux of CO2 through the tree-kill soils acts as an open-system CO2 buffer, causing infiltration of waters with pH values commonly below 4.2, acid loading of up to 7 keq H+ ha-1 yr-1, mobilization of toxic Al3+, and long-term decline of soil fertility.
A Sampling Based Approach to Spacecraft Autonomous Maneuvering with Safety Specifications
NASA Technical Reports Server (NTRS)
Starek, Joseph A.; Barbee, Brent W.; Pavone, Marco
2015-01-01
This paper presents a method for safe spacecraft autonomous maneuvering that applies robotic motion-planning techniques to spacecraft control. Specifically, the scenario we consider is an in-plane rendezvous of a chaser spacecraft in proximity to a target spacecraft at the origin of the Clohessy-Wiltshire-Hill frame. The trajectory for the chaser spacecraft is generated in a receding-horizon fashion by executing a sampling-based robotic motion planning algorithm named Fast Marching Trees (FMT), which efficiently grows a tree of trajectories over a set of probabilistically drawn samples in the state space. To enforce safety, the tree is only grown over actively safe samples, for which there exists a one-burn collision avoidance maneuver that circularizes the spacecraft orbit along a collision-free coasting arc and that can be executed under potential thruster failures. The overall approach establishes a provably correct framework for the systematic encoding of safety specifications into the spacecraft trajectory generation process and appears amenable to real-time implementation on orbit. Simulation results are presented for a two-fault tolerant spacecraft during autonomous approach to a single client in Low Earth Orbit.
NASA Astrophysics Data System (ADS)
Shi, Xuhua; Wang, Yu; Sieh, Kerry; Weldon, Ray; Feng, Lujia; Chan, Chung-Han; Liu-Zeng, Jing
2018-03-01
Characterizing the 700 km wide system of active faults on the Shan Plateau, southeast of the eastern Himalayan syntaxis, is critical to understanding the geodynamics and seismic hazard of the large region that straddles neighboring China, Myanmar, Thailand, Laos, and Vietnam. Here we evaluate the fault styles and slip rates over multi-timescales, reanalyze previously published short-term Global Positioning System (GPS) velocities, and evaluate slip-rate gradients to interpret the regional kinematics and geodynamics that drive the crustal motion. Relative to the Sunda plate, GPS velocities across the Shan Plateau define a broad arcuate tongue-like crustal motion with a progressively northwestward increase in sinistral shear over a distance of 700 km followed by a decrease over the final 100 km to the syntaxis. The cumulative GPS slip rate across the entire sinistral-slip fault system on the Shan Plateau is 12 mm/year. Our observations of the fault geometry, slip rates, and arcuate southwesterly directed tongue-like patterns of GPS velocities across the region suggest that the fault kinematics is characterized by a regional southwestward distributed shear across the Shan Plateau, compared to more block-like rotation and indentation north of the Red River fault. The fault geometry, kinematics, and regional GPS velocities are difficult to reconcile with regional bookshelf faulting between the Red River and Sagaing faults or localized lower crustal channel flows beneath this region. The crustal motion and fault kinematics can be driven by a combination of basal traction of a clockwise, southwestward asthenospheric flow around the eastern Himalayan syntaxis and gravitation or shear-driven indentation from north of the Shan Plateau.
Re-Evaluation of Event Correlations in Virtual California Using Statistical Analysis
NASA Astrophysics Data System (ADS)
Glasscoe, M. T.; Heflin, M. B.; Granat, R. A.; Yikilmaz, M. B.; Heien, E.; Rundle, J.; Donnellan, A.
2010-12-01
Fusing the results of simulation tools with statistical analysis methods has contributed to our better understanding of the earthquake process. In a previous study, we used a statistical method to investigate emergent phenomena in data produced by the Virtual California earthquake simulator. The analysis indicated that there were some interesting fault interactions and possible triggering and quiescence relationships between events. We have converted the original code from Matlab to python/C++ and are now evaluating data from the most recent version of Virtual California in order to analyze and compare any new behavior exhibited by the model. The Virtual California earthquake simulator can be used to study fault and stress interaction scenarios for realistic California earthquakes. The simulation generates a synthetic earthquake catalog of events with a minimum size of ~M 5.8 that can be evaluated using statistical analysis methods. Virtual California utilizes realistic fault geometries and a simple Amontons-Coulomb stick-slip friction law in order to drive the earthquake process by means of a back-slip model, where loading of each segment occurs due to the accumulation of a slip deficit at the prescribed slip rate of the segment. Like any complex system, Virtual California may generate emergent phenomena unexpected even by its designers. In order to investigate this, we have developed a statistical method that analyzes the interaction between Virtual California fault elements and thereby determines whether events on any given fault elements show correlated behavior. Our method examines events on one fault element and then determines whether there is an associated event within a specified time window on a second fault element. Note that an event in our analysis is defined as any time an element slips, rather than any particular “earthquake” along the entire fault length. Results are tabulated and then differenced with an expected correlation, calculated by assuming a uniform distribution of events in time. We generate a correlation score matrix, which indicates how weakly or strongly correlated each fault element is to every other in the course of the VC simulation. We calculate correlation scores by summing the difference between the actual and expected correlations over all time window lengths and normalizing by the time window size. The correlation score matrix can focus attention on the most interesting areas for more in-depth analysis of event correlation vs. time. The previous study included 59 faults (639 elements) in the model, which included all the faults save the creeping section of the San Andreas. The analysis spanned 40,000 yrs of Virtual California-generated earthquake data. The newly revised VC model includes 70 faults, 8720 fault elements, and spans 110,000 years. Due to computational considerations, we will evaluate the elements comprising the southern California region, which our previous study indicated showed interesting fault interaction and event triggering/quiescence relationships.
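A minimal sketch of the correlation-score computation described above: count how often an event on element i is followed within a window by an event on element j, subtract the expectation for uniformly distributed events, sum over window lengths, and normalize by window size. The slip-event times, window lengths, and element count are hypothetical, not Virtual California output.

```python
import numpy as np

def correlation_scores(event_times, t_total, windows):
    """Correlation score matrix between fault elements (see description above)."""
    n = len(event_times)
    scores = np.zeros((n, n))
    for w in windows:
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                # actual count of i-events followed within w by a j-event
                actual = sum(
                    np.any((event_times[j] > t) & (event_times[j] <= t + w))
                    for t in event_times[i]
                )
                # expected count if the j-events were uniform in time
                p_hit = 1.0 - (1.0 - w / t_total) ** len(event_times[j])
                expected = len(event_times[i]) * p_hit
                scores[i, j] += (actual - expected) / w
    return scores

# Hypothetical slip-event times (years) on three fault elements over 1000 years.
rng = np.random.default_rng(0)
events = [np.sort(rng.uniform(0, 1000, size=k)) for k in (12, 9, 15)]
print(correlation_scores(events, t_total=1000.0, windows=[5.0, 10.0, 20.0]))
```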
Seattle - seeking balance between the Space Needle, Starbucks, the Seahawks, and subduction
NASA Astrophysics Data System (ADS)
Vidale, J. E.
2012-12-01
Seattle has rich natural hazards. Lahars from Mount Rainier flow from the south, volcanic ash drifts from the east, the South Whidbey Island fault lies north and east, the Cascadia subduction zone dives underfoot from the west, and the Seattle fault lies just below the surface. Past and future landslides are sprinkled democratically across the surface, and Lake Washington and Puget Sound are known to seiche. All are ultimately due to subduction tectonics. As in most tectonically exposed cities, the hazards are due mainly to (1) buildings predating the relatively recent revelation that faulting here is active, (2) transportation corridors built long ago that are aging without a good budget for renewal, and (3) the unknown unknowns. These hazards are hard to quantify. Only the largest earthquakes on the Cascadia megathrust have a 10,000-year history, and even for them the down-dip rupture limits, stress drop and attenuation have unacceptable uncertainty. For the threatening faults closer in, in the upper crust, written history is short, glacial erosion and blanketing preclude many geophysical investigations, and healthy forests frustrate InSAR. On the brighter side, the direct hazard of earthquake shaking is being addressed as well as it can be. The current seismic hazard estimate is derived by methods among the most sophisticated in the world. Logic trees informed by consensus forged from a series of workshops delineate the scenarios. Finite difference calculations that include the world-class deep and soggy basins project the shaking from fault to vulnerable city. One useful cartoon synthesizing the earthquake hazard, based on Art Frankel's report, is shown below. It illustrates that important areas will be strongly shaken, and issues remain to be addressed. Fortunately, with great coffee and good perspective, we are moving toward improved disaster preparedness and resilience.
Transpressional Rupture Cascade of the 2016 Mw 7.8 Kaikoura Earthquake, New Zealand
NASA Astrophysics Data System (ADS)
Xu, Wenbin; Feng, Guangcai; Meng, Lingsen; Zhang, Ailin; Ampuero, Jean Paul; Bürgmann, Roland; Fang, Lihua
2018-03-01
Large earthquakes often do not occur on a simple planar fault but involve rupture of multiple geometrically complex faults. The 2016 Mw 7.8 Kaikoura earthquake, New Zealand, involved the rupture of at least 21 faults, propagating from southwest to northeast for about 180 km. Here we combine space geodesy and seismology techniques to study subsurface fault geometry, slip distribution, and the kinematics of the rupture. Our finite-fault slip model indicates that the fault motion changes from predominantly right-lateral slip near the epicenter to transpressional slip in the northeast with a maximum coseismic surface displacement of about 10 m near the intersection between the Kekerengu and Papatea faults. Teleseismic back projection imaging shows that rupture speed was overall slow (1.4 km/s) but faster on individual fault segments (approximately 2 km/s) and that the conjugate, oblique-reverse, north striking faults released the largest high-frequency energy. We show that the linking Conway-Charwell faults aided in propagation of rupture across the step over from the Humps fault zone to the Hope fault. Fault slip cascaded along the Jordan Thrust, Kekerengu, and Needles faults, causing stress perturbations that activated two major conjugate faults, the Hundalee and Papatea faults. Our results shed important light on the study of earthquakes and seismic hazard evaluation in geometrically complex fault systems.
Robust Fault Detection Using Robust Z1 Estimation and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Curry, Tramone; Collins, Emmanuel G., Jr.; Selekwa, Majura; Guo, Ten-Huei (Technical Monitor)
2001-01-01
This research considers the application of robust Z(sub 1) estimation in conjunction with fuzzy logic to robust fault detection for an aircraft flight control system. It begins with the development of robust Z(sub 1) estimators based on multiplier theory and then develops a fixed threshold approach to fault detection (FD). It then considers the use of fuzzy logic for robust residual evaluation and FD. Due to modeling errors and unmeasurable disturbances, it is difficult to distinguish between the effects of an actual fault and those caused by uncertainty and disturbance. Hence, it is the aim of a robust FD system to be sensitive to faults while remaining insensitive to uncertainty and disturbances. While fixed thresholds only allow a decision on whether a fault has or has not occurred, it is more valuable to have the residual evaluation lead to a conclusion related to the degree of, or probability of, a fault. Fuzzy logic is a viable means of determining the degree of a fault and allows the introduction of human observations that may not be incorporated in the rigorous threshold theory. Hence, fuzzy logic can provide a more reliable and informative fault detection process. Using an aircraft flight control system, the results of FD using robust Z(sub 1) estimation with a fixed threshold are demonstrated. FD that combines robust Z(sub 1) estimation and fuzzy logic is also demonstrated. It is seen that combining the robust estimator with fuzzy logic proves to be advantageous in increasing the sensitivity to smaller faults while remaining insensitive to uncertainty and disturbances.
Spatial distribution of block falls using volumetric GIS-decision-tree models
NASA Astrophysics Data System (ADS)
Abdallah, C.
2010-10-01
Block falls are considered a significant aspect of surficial instability, contributing to losses in land and socio-economic aspects through their damaging effects on natural and human environments. This paper predicts and maps the geographic distribution and volumes of block falls in central Lebanon using remote sensing, geographic information systems (GIS) and decision-tree modeling (un-pruned and pruned trees). Eleven terrain parameters (lithology, proximity to fault line, karst type, soil type, distance to drainage line, elevation, slope gradient, slope aspect, slope curvature, land cover/use, and proximity to roads) were generated to statistically explain the occurrence of block falls. The latter were discriminated using SPOT4 satellite imagery, and their dimensions were determined during field surveys. The un-pruned tree model based on all considered parameters explained 86% of the variability in field block fall measurements. Once pruned, it explains 50% of the variability in block-fall volumes using just four parameters (lithology, slope gradient, soil type, and land cover/use). Both tree models (un-pruned and pruned) were converted to quantitative 1:50,000 block-fall maps with different classes, ranging from nil (no block falls) to more than 4000 m3. These maps match fairly well, with a coincidence value of 45%; however, both can be used to prioritize the choice of specific zones for further measurement and modeling, as well as for land-use management. The proposed tree models are relatively simple and may also be applied to other areas (i.e. the choice of the un-pruned or pruned model is related to the availability of terrain parameters in a given area).
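For readers unfamiliar with the modeling step, the following sketch contrasts an un-pruned and a pruned regression tree on synthetic "terrain parameters" using scikit-learn; the data, feature meanings, and pruning settings are stand-ins, not the authors' dataset or software.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 500
X = rng.uniform(size=(n, 4))                                 # e.g. lithology code, slope, soil type, land cover
y = 2000 * X[:, 1] + 500 * X[:, 0] + rng.normal(0, 100, n)   # block-fall volume (m^3), synthetic

# Un-pruned tree: grown to full depth, fits the training data closely.
unpruned = DecisionTreeRegressor(random_state=0).fit(X, y)
# "Pruned" tree: approximated here by limiting depth and applying cost-complexity pruning.
pruned = DecisionTreeRegressor(max_depth=3, ccp_alpha=5.0, random_state=0).fit(X, y)

print("un-pruned R^2:", round(unpruned.score(X, y), 3))
print("pruned   R^2:", round(pruned.score(X, y), 3))
```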
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1992-01-01
Accomplishments in the following research areas are summarized: structure based testing, reliability growth, and design testability with risk evaluation; reliability growth models and software risk management; and evaluation of consensus voting, consensus recovery block, and acceptance voting. Four papers generated during the reporting period are included as appendices.
USDA-ARS's Scientific Manuscript database
Nine years of periodic acoustical monitoring of 93 trees active with the Formosan subterranean termite, Coptotermes formosanus Shiraki, evaluated the effects of imidacloprid tree foam and noviflumuron bait on activity in trees. Long term, imidacloprid suppressed but did not eliminate termite activity in treated trees...
Inductive Learning Approaches for Improving Pilot Awareness of Aircraft Faults
NASA Technical Reports Server (NTRS)
Spikovska, Lilly; Iverson, David L.; Poll, Scott; Pryor, Anna
2005-01-01
Neural network flight controllers are able to accommodate a variety of aircraft control surface faults without detectable degradation of aircraft handling qualities. Under some faults, however, the effective flight envelope is reduced; this can lead to unexpected behavior if a pilot performs an action that exceeds the remaining control authority of the damaged aircraft. The goal of our work is to increase the pilot's situational awareness by informing him of the type of damage and the resulting reduction in flight envelope. Our methodology integrates two inductive learning systems with novel visualization techniques. One learning system, the Inductive Monitoring System (IMS), learns to detect when a simulation includes faulty controls, while two others, the Inductive Classification System (INCLASS) and a multiple binary decision tree system (utilizing C4.5), determine the type of fault. In off-line training using only non-failure data, IMS constructs a characterization of nominal flight control performance based on control signals issued by the neural net flight controller. This characterization can be used to determine the degree of control augmentation required in the pitch, roll, and yaw command channels to counteract control surface failures. This derived information is typically sufficient to distinguish between the various control surface failures and is used to train both INCLASS and C4.5. Using data from failed control surface flight simulations, INCLASS and C4.5 independently discover and amplify features in IMS results that can be used to differentiate each distinct control surface failure situation. In real-time flight simulations, distinguishing features learned during training are used to classify control surface failures. Knowledge about the type of failure can be used by an additional automated system to alter its approach for planning tactical and strategic maneuvers. The knowledge can also be used directly to increase the pilot's situational awareness and inform manual maneuver decisions. Our multi-modal display of this information provides speech output to issue control surface failure warnings on a lesser-used communication channel and provides graphical displays with pilot-selectable levels of detail to convey additional information about the failure. We also describe a potential presentation for flight envelope reduction that can be viewed separately or integrated with an existing attitude indicator instrument. Preliminary results suggest that the inductive approach is capable of detecting that a control surface has failed and determining the type of fault. Furthermore, preliminary evaluations suggest that the interface discloses a concise summary of this information to the pilot.
Probabilistic Seismic Hazard Maps for Ecuador
NASA Astrophysics Data System (ADS)
Mariniere, J.; Beauval, C.; Yepes, H. A.; Laurence, A.; Nocquet, J. M.; Alvarado, A. P.; Baize, S.; Aguilar, J.; Singaucho, J. C.; Jomard, H.
2017-12-01
A probabilistic seismic hazard study is conducted for Ecuador, a country facing high seismic hazard, both from megathrust subduction earthquakes and from shallow crustal moderate to large earthquakes. Building on the knowledge produced in recent years in historical seismicity, earthquake catalogs, active tectonics, geodynamics, and geodesy, several alternative earthquake recurrence models are developed. An area source model is first proposed, based on the seismogenic crustal and inslab sources defined in Yepes et al. (2016). A slightly different segmentation is proposed for the subduction interface with respect to Yepes et al. (2016). Three earthquake catalogs are used to account for the numerous uncertainties in the modeling of frequency-magnitude distributions. The hazard maps obtained highlight several source zones enclosing fault systems that exhibit low seismic activity, not representative of the geological and/or geodetic slip rates. Consequently, a fault model is derived, including faults with an earthquake recurrence model inferred from geological and/or geodetic slip rate estimates. The geodetic slip rates on the set of simplified faults are estimated from a GPS horizontal velocity field (Nocquet et al. 2014). Assumptions on the aseismic component of the deformation are required. Combining these alternative earthquake models in a logic tree, and using a set of selected ground-motion prediction equations adapted to Ecuador's different tectonic contexts, a mean hazard map is obtained. Hazard maps corresponding to the 16th and 84th percentiles are also derived, highlighting the zones where uncertainties on the hazard are highest.
Bad Actors Criticality Assessment for Pipeline system
NASA Astrophysics Data System (ADS)
Nasir, Meseret; Chong, Kit wee; Osman, Sabtuni; Siaw Khur, Wee
2015-04-01
Failure of a pipeline system could bring huge economic loss. In order to mitigate such catastrophic loss, it is required to evaluate and rank the impact of each bad actor of the pipeline system. In this study, bad actors are defined as the root causes or any potential factors leading to system downtime. Fault Tree Analysis (FTA) is used to analyze the probability of occurrence of each bad actor. Birnbaum's importance and criticality measure (BICM) is also employed to rank the impact of each bad actor on the pipeline system failure. The results demonstrate that internal corrosion, external corrosion and construction damage are critical and contribute strongly to pipeline system failure, with 48.0%, 12.4% and 6.0% respectively. Thus, a minor improvement in internal corrosion, external corrosion and construction damage would bring significant changes in pipeline system performance and reliability. These results could also be useful for developing an efficient maintenance strategy by identifying the critical bad actors.
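A minimal sketch of ranking basic events with a Birnbaum-style importance measure on a simple OR-gate top event; the basic events and failure probabilities below are hypothetical, and the paper's actual fault tree and BICM calculation are more detailed.

```python
# Birnbaum importance for a pipeline-failure top event modeled as an OR of
# independent basic events (probabilities are hypothetical).
basic_events = {
    "internal_corrosion": 0.05,
    "external_corrosion": 0.02,
    "construction_damage": 0.01,
}

def top_event_prob(probs):
    """System failure probability for an OR gate over independent events."""
    p_ok = 1.0
    for p in probs.values():
        p_ok *= (1.0 - p)
    return 1.0 - p_ok

def birnbaum_importance(probs, event):
    """I_B = P(top | event occurs) - P(top | event does not occur)."""
    hi = dict(probs, **{event: 1.0})
    lo = dict(probs, **{event: 0.0})
    return top_event_prob(hi) - top_event_prob(lo)

for event in sorted(basic_events, key=lambda e: -birnbaum_importance(basic_events, e)):
    print(event, round(birnbaum_importance(basic_events, event), 4))
```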
Facility Targeting, Protection and Mission Decision Making Using the VISAC Code
NASA Technical Reports Server (NTRS)
Morris, Robert H.; Sulfredge, C. David
2011-01-01
The Visual Interactive Site Analysis Code (VISAC) has been used by DTRA and several other agencies to aid in targeting facilities and to predict the associated collateral effects for the go/no-go mission decision-making process. VISAC integrates the three concepts of target geometric modeling, damage assessment capabilities, and an event/fault tree methodology for evaluating accident/incident consequences. It can analyze a variety of accidents/incidents at nuclear or industrial facilities, ranging from simple component sabotage to an attack with military or terrorist weapons. For nuclear facilities, VISAC predicts the facility damage, estimated downtime, and the amount and timing of any radionuclides released. Used in conjunction with DTRA's HPAC code, VISAC also can analyze transport and dispersion of the radionuclides, levels of contamination of the surrounding area, and the population at risk. VISAC has also been used by the NRC to aid in the development of protective measures for nuclear facilities that may be subjected to attacks by car/truck bombs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponce, D.A.; Langenheim, V.E.
1995-12-31
Ground magnetic and gravity data collected along traverses across the Ghost Dance and Solitario Canyon faults on the eastern and western flanks, respectively, of Yucca Mountain in southwest Nevada are interpreted. These data were collected as part of an effort to evaluate faulting in the vicinity of a potential nuclear waste repository at Yucca Mountain. Gravity and magnetic data and models along traverses across the Ghost Dance and Solitario Canyon faults show prominent anomalies associated with known faults and reveal a number of possible concealed faults beneath the eastern flank of Yucca Mountain. The central part of the eastern flank of Yucca Mountain is characterized by several small amplitude anomalies that probably reflect small scale faulting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Howard; Braun, James E.
2015-12-31
This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine
NASA Astrophysics Data System (ADS)
Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong
2015-08-01
Study of fault feature extraction and diagnostic techniques for reciprocating compressors is one of the hot research topics in the field of reciprocating machinery fault diagnosis at present. A large number of feature extraction and classification methods have been widely applied in related research, but practical fault alarm and diagnostic accuracy have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarm and automatic diagnosis in practical engineering is therefore an urgent task. The typical mechanical faults of reciprocating compressors are presented in the paper, and data from an existing online monitoring system are used to extract 15 types of fault feature parameters in total; the sensitive connections between faults and the feature parameters are clarified using the distance evaluation technique, and sensitive characteristic parameters for different faults are obtained. On this basis, a method based on fault feature parameters and support vector machines (SVM) is developed and applied to practical fault diagnosis. Improved early fault warning capability is demonstrated by experiments and practical fault cases. Automatic SVM classification of the fault alarm data achieves better diagnostic accuracy.
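A minimal sketch of the selection-then-classification idea, using a simple between-/within-class distance score as a stand-in for the distance evaluation technique and a scikit-learn SVM; the data, feature count, and fault labels are synthetic, not the monitoring-system data described above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_per_class, n_features = 60, 15
means = rng.uniform(-2, 2, size=(3, n_features))          # 3 synthetic fault types
X = np.vstack([rng.normal(m, 1.0, size=(n_per_class, n_features)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)

def distance_score(X, y):
    """Spread of the per-class feature means divided by the mean within-class std."""
    classes = np.unique(y)
    class_means = np.array([X[y == c].mean(axis=0) for c in classes])
    within = np.mean([X[y == c].std(axis=0) for c in classes], axis=0)
    return class_means.std(axis=0) / (within + 1e-12)

scores = distance_score(X, y)
sensitive = np.argsort(scores)[-5:]                        # keep the 5 most sensitive features
acc = cross_val_score(SVC(kernel="rbf", C=1.0), X[:, sensitive], y, cv=5).mean()
print("selected features:", sensitive, " CV accuracy:", round(acc, 3))
```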
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
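A minimal sketch of a two-level BMA tree and its variance decomposition (within-model, between-model) under hypothetical posterior weights, predictions, and variances; this is an illustration of the decomposition idea, not the HBMA implementation of Tsai and Elshall.

```python
import numpy as np

# Uncertain component A (e.g. geological architecture) with alternative propositions
# A1, A2; each holds models with (posterior weight, prediction mean, within-model variance).
tree = {
    "A1": {"weight": 0.6, "models": [(0.7, 10.0, 1.0), (0.3, 12.0, 1.5)]},
    "A2": {"weight": 0.4, "models": [(0.5, 9.0, 0.8), (0.5, 11.0, 1.2)]},
}

def bma_node(models):
    """Combine the models under one proposition: BMA mean, within- and between-model variance."""
    w = np.array([m[0] for m in models])
    mu = np.array([m[1] for m in models])
    var = np.array([m[2] for m in models])
    mean = np.sum(w * mu)
    within = np.sum(w * var)                 # average within-model variance
    between = np.sum(w * (mu - mean) ** 2)   # spread of the model means
    return mean, within, between

branch = {name: bma_node(node["models"]) for name, node in tree.items()}
w_a = np.array([tree[name]["weight"] for name in tree])
mu_a = np.array([branch[name][0] for name in tree])

mean_total = np.sum(w_a * mu_a)
carried = np.sum(w_a * np.array([branch[name][1] + branch[name][2] for name in tree]))  # variance carried up from each branch
between_a = np.sum(w_a * (mu_a - mean_total) ** 2)   # variance attributable to component A itself
print("prediction:", mean_total, " total variance:", carried + between_a,
      " of which due to component A:", between_a)
```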
Xiao, Xinqing; Fu, Zetian; Qi, Lin; Mira, Trebar; Zhang, Xiaoshuan
2015-10-01
The main export varieties in China are brand-name, high-quality bred aquatic products. Among them, tilapia has become the most important and fastest-growing species since extensive consumer markets in North America and Europe have evolved as a result of commodity prices, year-round availability and quality of fresh and frozen products. As the largest tilapia farming country, China has over one-third of its tilapia production devoted to further processing to meet foreign market demand. Using tilapia fillet processing as a case, this paper describes the development and evaluation of ITS-TF: an intelligent traceability system integrated with statistical process control (SPC) and fault tree analysis (FTA). Observations, literature review and expert questionnaires were used for system requirement and knowledge acquisition; scenario simulation was applied to evaluate and validate ITS-TF performance. The results show that the traceability requirement has evolved from a firefighting model to a proactive model for enhancing process management capacity for food safety; ITS-TF transforms itself into an intelligent system that provides early warning and process management functions through the integrated SPC and FTA. The evaluation also yielded the suggestion that automatic data acquisition and communication technology should be integrated into ITS-TF for further system optimization, refinement and performance improvement. © 2014 Society of Chemical Industry.
Application of a Bank of Kalman Filters for Aircraft Engine Fault Diagnostics
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2003-01-01
In this paper, a bank of Kalman filters is applied to aircraft gas turbine engine sensor and actuator fault detection and isolation (FDI) in conjunction with the detection of component faults. This approach uses multiple Kalman filters, each of which is designed for detecting a specific sensor or actuator fault. In the event that a fault does occur, all filters except the one using the correct hypothesis will produce large estimation errors, thereby isolating the specific fault. In the meantime, a set of parameters that indicate engine component performance is estimated for the detection of abrupt degradation. The proposed FDI approach is applied to a nonlinear engine simulation at nominal and aged conditions, and the evaluation results for various engine faults at cruise operating conditions are given. The ability of the proposed approach to reliably detect and isolate sensor and actuator faults is demonstrated.
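A minimal sketch of the bank-of-filters idea on a scalar system with three redundant sensors: each filter excludes one sensor, and the hypothesis with the smallest normalized residual isolates the biased sensor. The dynamics, noise levels, and injected bias are hypothetical and far simpler than the engine simulation described above.

```python
import numpy as np

rng = np.random.default_rng(3)
a, q, r = 0.95, 0.01, 0.04          # scalar dynamics x_k = a*x_{k-1} + w; sensors y_i = x + v_i
n_steps, n_sensors, faulty = 200, 3, 1

# Simulate the true state and three sensor readings, biasing sensor `faulty` from step 100.
x = 0.0
Y = np.zeros((n_steps, n_sensors))
for k in range(n_steps):
    x = a * x + rng.normal(0.0, np.sqrt(q))
    Y[k] = x + rng.normal(0.0, np.sqrt(r), n_sensors)
    if k >= 100:
        Y[k, faulty] += 1.5

def filter_excluding(Y, skip):
    """Kalman filter using every sensor except `skip`; returns the mean normalized squared residual."""
    xh, P = 0.0, 1.0
    sse, count = 0.0, 0
    use = [i for i in range(Y.shape[1]) if i != skip]
    for yk in Y:
        xh, P = a * xh, a * a * P + q        # time update
        for i in use:                        # sequential scalar measurement updates
            res = yk[i] - xh
            S = P + r
            K = P / S
            xh, P = xh + K * res, (1.0 - K) * P
            sse += res ** 2 / S
            count += 1
    return sse / count

scores = [filter_excluding(Y, i) for i in range(n_sensors)]
print("normalized residuals per hypothesis:", np.round(scores, 3))
print("isolated faulty sensor:", int(np.argmin(scores)))
```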
A fault isolation method based on the incidence matrix of an augmented system
NASA Astrophysics Data System (ADS)
Chen, Changxiong; Chen, Liping; Ding, Jianwan; Wu, Yizhong
2018-03-01
This paper proposes a new approach for isolating faults and quickly identifying the redundant sensors of a system. By introducing fault signals as additional state variables, an augmented system model is constructed from the original system model, the fault signals, and the sensor measurement equations. The structural properties of the augmented system model are provided in this paper. From the viewpoint of evaluating the fault variables, the calculating correlations of the fault variables in the system can be found, which imply the fault isolation properties of the system. Compared with previous isolation approaches, the highlights of the new approach are that it can quickly find the faults which can be isolated using exclusive residuals and, at the same time, can identify the redundant sensors in the system, which are useful for the design of a diagnosis system. The simulation of a four-tank system is reported to validate the proposed method.
A signal-based fault detection and classification method for heavy haul wagons
NASA Astrophysics Data System (ADS)
Li, Chunsheng; Luo, Shihui; Cole, Colin; Spiryagin, Maksym; Sun, Yanquan
2017-12-01
This paper proposes a signal-based fault detection and isolation (FDI) system for heavy haul wagons considering the special requirements of low cost and robustness. The sensor network of the proposed system consists of just two accelerometers mounted on the front left and rear right of the carbody. Seven fault indicators (FIs) are proposed based on the cross-correlation analyses of the sensor-collected acceleration signals. Bolster spring fault conditions are focused on in this paper, including two different levels (small faults and moderate faults) and two locations (faults in the left and right bolster springs of the first bogie). A fully detailed dynamic model of a typical 40t axle load heavy haul wagon is developed to evaluate the deterioration of dynamic behaviour under proposed fault conditions and demonstrate the detectability of the proposed FDI method. Even though the fault conditions considered in this paper did not deteriorate the wagon dynamic behaviour dramatically, the proposed FIs show great sensitivity to the bolster spring faults. The most effective and efficient FIs are chosen for fault detection and classification. Analysis results indicate that it is possible to detect changes in bolster stiffness of ±25% and identify the fault location.
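A minimal sketch of a cross-correlation-based fault indicator built from two acceleration signals; the signals and the way a degraded bolster spring is represented are synthetic stand-ins, not the paper's wagon model or its seven FIs.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 200
t = np.arange(0, 20, 1 / fs)                       # 200 Hz, 20 s record
common = np.sin(2 * np.pi * 1.2 * t)               # shared bounce excitation

def record(extra_roll):
    """Front-left and rear-right vertical accelerations; a degraded bolster spring is
    represented by extra, uncorrelated roll content on one end of the carbody."""
    front_left = common + 0.2 * rng.normal(size=t.size)
    rear_right = common + extra_roll * np.sin(2 * np.pi * 0.8 * t + 1.0) + 0.2 * rng.normal(size=t.size)
    return front_left, rear_right

def fault_indicator(a, b):
    """Normalized zero-lag cross-correlation; it drops as one signal gains
    fault-related content not shared by the other."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print("healthy FI:", round(fault_indicator(*record(0.0)), 3))
print("faulty  FI:", round(fault_indicator(*record(0.6)), 3))
```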
Model-Based Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Kumar, Aditya; Viassolo, Daniel
2008-01-01
The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted take offs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms were developed and evaluated. Based on the performance and maturity of the developed algorithms two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
Is the useful field of view a good predictor of at-fault crash risk in elderly Japanese drivers?
Sakai, Hiroyuki; Uchiyama, Yuji; Takahara, Miwa; Doi, Shun'ichi; Kubota, Fumiko; Yoshimura, Takayoshi; Tachibana, Atsumichi; Kurahashi, Tetsuo
2015-05-01
Although age-related decline in the useful field of view (UFOV) is well recognized as a risk factor for at-fault crash involvement in elderly drivers, there is still room to study its applicability to elderly Japanese drivers. In the current study, we thus examined the relationship between UFOV and at-fault crash history in an elderly Japanese population. We also explored whether potential factors that create awareness of reduced driving fitness could be a trigger for the self-regulation of driving in elderly drivers. We measured UFOV and at-fault crash history from 151 community-dwelling Japanese aged 60 years or older, and compared UFOV of at-fault crash-free and crash-involved drivers. We also measured self-evaluated driving style using a questionnaire. UFOV in crash-involved drivers was significantly lower than that in crash-free drivers. No significant difference was found in self-evaluated driving style between crash-free and crash-involved drivers. In addition, there was no significant association between UFOV and self-evaluated driving style. The present study showed that UFOV is a good predictor of at-fault crash risk in elderly Japanese drivers. Furthermore, our data imply that it might be difficult for elderly drivers to adopt appropriate driving strategies commensurate with their current driving competence. © 2014 Japan Geriatrics Society.
Risk-based maintenance of ethylene oxide production facilities.
Khan, Faisal I; Haddara, Mahmoud R
2004-05-20
This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Out of many likely failure scenarios, the ones which are most probable are subjected to a detailed study. Detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and the probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Out of the five most hazardous units considered, the pipeline used for the transportation of ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal risk factors and individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis is also undertaken to study the impact of changing the distribution of the reliability model as well as the error in the distribution parameters on the maintenance interval.
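A minimal sketch of choosing a maintenance interval so that the estimated risk stays below an acceptance criterion, with a lognormal time-to-failure model as in the study; the distribution parameters, consequence cost, and criterion are hypothetical, and the reverse fault tree step is not reproduced.

```python
import numpy as np
from scipy.stats import lognorm

consequence = 5.0e6        # cost of the failure scenario ($), hypothetical
acceptable_risk = 1.0e4    # acceptable risk per maintenance interval ($), hypothetical
shape, scale = 0.8, 12.0   # lognormal time-to-failure parameters (years), hypothetical

def risk(interval_years):
    """Risk = P(failure within the interval) x consequence."""
    p_fail = lognorm.cdf(interval_years, s=shape, scale=scale)
    return p_fail * consequence

# Search for the longest interval whose risk is still acceptable.
intervals = np.linspace(0.1, 10.0, 200)
ok = intervals[np.array([risk(t) for t in intervals]) <= acceptable_risk]
print("recommended maintenance interval (years):", round(ok.max(), 2) if ok.size else "none")
```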
NASA Astrophysics Data System (ADS)
Inoue, N.; Kitada, N.; Irikura, K.
2013-12-01
A probability of surface rupture is important for configuring the seismic source, such as area sources or fault models, for a seismic hazard evaluation. In Japan, Takemura (1998) estimated the probability based on historical earthquake data. Kagawa et al. (2004) evaluated the probability based on a numerical simulation of surface displacements. The estimated probability follows a sigmoid curve and increases between Mj (the local magnitude defined and calculated by the Japan Meteorological Agency) = 6.5 and Mj = 7.0. The probability of surface rupture is also used in probabilistic fault displacement hazard analysis (PFDHA). The probability is determined from a compiled earthquake catalog in which events are classified into two categories: with surface rupture or without surface rupture. Logistic regression is then performed on the classified earthquake data. Youngs et al. (2003), Ross and Moss (2011) and Petersen et al. (2011) give logistic curves for the probability of surface rupture for normal, reverse and strike-slip faults, respectively. Takao et al. (2013) show the logistic curve derived from Japanese earthquake data only. The Japanese probability curve increases sharply over a narrow magnitude range compared with the other curves. In this study, we estimated the probability of surface rupture by applying logistic analysis to surface displacements derived from a surface displacement calculation. A source fault was defined according to the procedure of Kagawa et al. (2004), which determines a seismic moment from a magnitude and estimates the area of the asperity and the amount of slip. Strike-slip and reverse faults were considered as source faults. We used the method of Wang et al. (2003) for the calculations. Surface displacements for the defined source faults were calculated by varying the depth of the fault. A threshold of 5 cm of surface displacement was used to judge whether a rupture does or does not reach the surface. We carried out logistic regression analysis on the calculated displacements, classified by this threshold. The estimated probability curve shows a trend similar to the result of Takao et al. (2013). The probability for reverse faults is larger than that for strike-slip faults. On the other hand, PFDHA results show different trends: the probability for reverse faults at higher magnitude is lower than that for strike-slip and normal faults. Ross and Moss (2011) suggested that sediment and/or rock above the fault compresses, so that the displacement does not fully reach the surface. The numerical theory applied in this study cannot deal with a complex initial situation such as topography.
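A minimal sketch of fitting a logistic curve P(surface rupture | magnitude) by maximum likelihood; the magnitudes and rupture labels are synthetic stand-ins for a classified catalog or calculated-displacement dataset, and the fitted numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
mags = rng.uniform(5.5, 7.5, 300)
# Synthetic "truth": the probability rises steeply between M6.5 and M7.0.
p_true = 1.0 / (1.0 + np.exp(-8.0 * (mags - 6.75)))
ruptured = (rng.uniform(size=mags.size) < p_true).astype(float)

def neg_log_likelihood(theta):
    a, b = theta
    p = 1.0 / (1.0 + np.exp(-(a + b * mags)))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(ruptured * np.log(p) + (1 - ruptured) * np.log(1 - p))

a_hat, b_hat = minimize(neg_log_likelihood, x0=[0.0, 1.0]).x
for m in (6.0, 6.5, 7.0, 7.5):
    print(f"M{m}: P(surface rupture) = {1.0 / (1.0 + np.exp(-(a_hat + b_hat * m))):.2f}")
```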
Dynamic modeling of gearbox faults: A review
NASA Astrophysics Data System (ADS)
Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng
2018-01-01
Gearbox is widely used in industrial and military applications. Due to high service load, harsh operating conditions or inevitable fatigue, faults may develop in gears. If gear faults cannot be detected early, the gear health will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure and consequently results in safer operation and higher cost reduction. Recently, many studies have been done to develop gearbox dynamic models with faults, aiming to understand the gear fault generation mechanism and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and remaining challenges are reviewed and discussed. This detailed literature review limits research results to the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.
Kim, MinJeong; Liu, Hongbin; Kim, Jeong Tai; Yoo, ChangKyoo
2014-08-15
Sensor faults in metro systems provide incorrect information to indoor air quality (IAQ) ventilation systems, resulting in the mis-operation of ventilation systems and adverse effects on passenger health. In this study, a new sensor validation method is proposed to (1) detect, identify and repair sensor faults and (2) evaluate the influence of sensor reliability on passenger health risk. To address the dynamic non-Gaussianity problem of IAQ data, dynamic independent component analysis (DICA) is used. To detect and identify sensor faults, the DICA-based squared prediction error and sensor validity index are used, respectively. To restore the faults to normal measurements, a DICA-based iterative reconstruction algorithm is proposed. The comprehensive indoor air-quality index (CIAI) that evaluates the influence of the current IAQ on passenger health is then compared using the faulty and reconstructed IAQ data sets. Experimental results from a metro station showed that the DICA-based method can produce an improved IAQ level in the metro station and reduce passenger health risk since it validates sensor faults more accurately than conventional methods. Copyright © 2014 Elsevier B.V. All rights reserved.
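A simplified stand-in for the squared prediction error (SPE) idea described above: a PCA model built on time-lagged data plays the role of the dynamic latent model, and a sensor bias is flagged when the SPE exceeds a training-based threshold. The IAQ data, lag, and injected bias are synthetic, and actual DICA replaces the PCA step with dynamic independent components.

```python
import numpy as np

rng = np.random.default_rng(6)

def lagged(X, lag=2):
    """Stack current and lagged samples to capture dynamics."""
    return np.hstack([X[lag - i: X.shape[0] - i] for i in range(lag + 1)])

# Training data: four correlated synthetic "IAQ" sensors.
t = np.arange(2000)
base = np.sin(2 * np.pi * t / 300.0)
X_train = np.column_stack([base + 0.05 * rng.normal(size=t.size) for _ in range(4)])

Xl = lagged(X_train)
mean, std = Xl.mean(0), Xl.std(0)
Z = (Xl - mean) / std
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:3].T                                   # retain 3 latent directions

def spe(X):
    """Squared prediction error of the lagged data against the latent model."""
    Zt = (lagged(X) - mean) / std
    resid = Zt - Zt @ P @ P.T
    return np.sum(resid ** 2, axis=1)

threshold = np.percentile(spe(X_train), 99)

# Test data with a bias on sensor 2 after sample 500.
X_test = np.column_stack([base[:1000] + 0.05 * rng.normal(size=1000) for _ in range(4)])
X_test[500:, 2] += 0.5
alarms = spe(X_test) > threshold
print("fraction of alarms before/after the bias:",
      round(alarms[:490].mean(), 3), round(alarms[500:].mean(), 3))
```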
Davatzes, N.C.; Aydin, A.
2005-01-01
We examined the distribution of fault rock and damage zone structures in sandstone and shale along the Moab fault, a basin-scale normal fault with nearly 1 km (0.62 mi) of throw, in southeast Utah. We find that fault rock and damage zone structures vary along strike and dip. Variations are related to changes in fault geometry, slip, lithology, and the mechanism of faulting. In sandstone, we differentiated two structural assemblages: (1) deformation bands, zones of deformation bands, and polished slip surfaces and (2) joints, sheared joints, and breccia. These structural assemblages result from the deformation band-based mechanism and the joint-based mechanism, respectively. Along the Moab fault, where both types of structures are present, joint-based deformation is always younger. Where shale is juxtaposed against the fault, a third faulting mechanism, smearing of shale by ductile deformation and associated shale fault rocks, occurs. Based on the knowledge of these three mechanisms, we projected the distribution of their structural products in three dimensions along idealized fault surfaces and evaluated the potential effect on fluid and hydrocarbon flow. We contend that these mechanisms could be used to facilitate predictions of fault and damage zone structures and their permeability from limited data sets. Copyright © 2005 by The American Association of Petroleum Geologists.
NASA Astrophysics Data System (ADS)
Heinlein, S. N.
2013-12-01
Remote sensing data sets are widely used for evaluation of surface manifestations of active tectonics. This study utilizes ASTER GDEM and Landsat ETM+ data sets with Google Earth images draped over terrain models. This study evaluates 1) the surrounding surface geomorphology of the study area with these data sets and 2) the morphology of the Kumroch Fault, using diffusion modeling to estimate a constant diffusivity (κ) and estimate slip rates from real ground data measured across fault scarps by Kozhurin et al. (2006). Models of the evolution of fault scarp morphology provide the time elapsed since slip initiated on a fault surface and may therefore provide more accurate estimates of slip rate than the rate calculated by dividing scarp offset by the age of the ruptured surface. Scarp profiles collected by Kozhurin et al. (2006), formed by several events distributed through time, were evaluated using a constant slip rate (CSR) solution, which yields a value A/κ (half the slip rate divided by diffusivity). The time elapsed since slip initiated on the fault is determined by establishing a value for κ and measuring the total scarp offset. CSR nonlinear modeling estimates of κ range from 8 m2/ka to 14 m2/ka on the Kumroch Fault, which indicates slip rates of 0.6 mm/yr to 1.0 mm/yr since 3.4-3.7 ka. This method provides a quick and inexpensive way to gather data for a regional tectonic study and establish estimated rates of tectonic activity. Analyses of the remote sensing data are providing new insight into the role of active tectonics within the region. Fault scarp diffusion rates were estimated using results from the fault scarp diffusion models of Mattson and Bruhn (2001) and DuRoss and Bruhn (2004), together with trench profiles of the KF from Kozhurin et al. (2006), Kozhurin (2007), Kozhurin et al. (2008) and Pinegina et al. (2012) as calibrated ages. A dash (-) indicates that no data could be determined.
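A minimal sketch of the single-event scarp diffusion relation often used to turn a measured profile into an elapsed time, u(x, t) = a·erf(x / (2·sqrt(κt))), whose maximum midpoint slope is a / sqrt(π·κ·t); the offset, slope, and κ values below are hypothetical, and the CSR (multi-event) solution used in the study above is more involved.

```python
from math import pi

def elapsed_time_ka(half_offset_m, max_slope, kappa_m2_per_ka):
    """Invert max slope = a / sqrt(pi * kappa * t) for the elapsed time t (ka)."""
    return (half_offset_m / max_slope) ** 2 / (pi * kappa_m2_per_ka)

half_offset = 1.5   # m, i.e. a total scarp offset of 3 m (hypothetical)
max_slope = 0.35    # dimensionless midpoint slope measured on the profile (hypothetical)
for kappa in (8.0, 14.0):   # m^2/ka, the range quoted above for the Kumroch Fault
    t = elapsed_time_ka(half_offset, max_slope, kappa)
    print(f"kappa = {kappa:4.1f} m^2/ka  ->  elapsed time ~ {t:.2f} ka")
```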
Hayward Fault rate constraints at Berkeley: Evaluation of the 335-meter Strawberry Creek offset
NASA Astrophysics Data System (ADS)
Williams, P. L.
2007-12-01
At UC Berkeley the active channel of Strawberry Creek is offset 335 meters by the Hayward fault, and two abandoned channels of Strawberry Creek are laterally offset 580 and 730 meters. These relationships record the displacement of the northern Hayward fault at Berkeley over a period of tens of millennia. The Strawberry Creek site has a similar geometry to the central San Andreas fault's Wallace Creek site, which arguably provides the best geological evidence of "millennial" fault kinematics in California (Sieh and Jahns, 1984). Slip rate determinations are an essential component of overall hazard evaluation for the Hayward fault, and this site is ripe to disclose a long-term form of this parameter, to contrast with geodetic and other geological rate evidence. Large offsets at the site may lower uncertainty in the rate equation relative to younger sites, as the effect of stream abandonment age, generally the greatest source of rate uncertainty, is greatly reduced. This is helpful here because it more than offsets uncertainties resulting from piercing projections to the fault. Strawberry Creek and its ancestral channels suggest west-side-up vertical deformation across the Hayward fault at this location. The development of the vertical deformation parameter will complement ongoing geodetic measurements, particularly InSAR, and motivate testing of other geological constraints. Up-to-the-west motion across the Hayward fault at Berkeley has important implications for the partitioning of strain and kinematics of the northern Hayward fault, and may explain anomalous up-on-the-west landforms elsewhere along the fault. For example, geological features of the western Berkeley Hills are consistent with rapid and recent uplift to the west of the fault. On the basis of a preliminary analysis of the offset channels of Strawberry Creek, up-to-the-west uplift is about 0.5 mm/yr across the Hayward fault at Berkeley. If this is in fact the long-term rate, the 150 m height of the Hills to the northwest of the Strawberry Creek site was produced during approximately the past 300,000 years by a significant dip-slip (thrust) component of Hayward fault motion. Rapid and recent uplift of some portions of the East Bay Hills has important implications for fault geometries and slope stability, and should strongly influence the investigation of fault hazards in areas that are more complexly deformed.
NASA Astrophysics Data System (ADS)
Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio
2016-04-01
Fault-related folding kinematic models are widely used to explain the accommodation of crustal shortening. These models, however, include simplifications, such as the assumption of a constant growth rate of faults. This rate is sometimes not constant even in isotropic materials, and is even more variable if one considers naturally anisotropic geological systems. This means that these simplifications could lead to incorrect interpretations of reality. In this study, we use analogue models to evaluate how thin mechanical discontinuities, such as bedding or thin weak layers, influence the propagation of reverse faults and related folds. The experiments are performed with two different settings to simulate initially blind master faults dipping at 30° and 45°. The 30° dip represents one of the Andersonian conjugate faults, and a 45° dip is very frequent in positive reactivation of normal faults. The experimental apparatus consists of a clay layer placed above two plates: one plate, the footwall, is fixed; the other one, the hanging wall, is mobile. Motor-controlled sliding of the hanging wall plate along an inclined plane reproduces the reverse fault movement. We ran thirty-six experiments: eighteen with a dip of 30° and eighteen with a dip of 45°. For each dip-angle setting, we initially ran isotropic experiments that serve as a reference. Then we ran the other experiments with one or two discontinuities (horizontal precuts made in the clay layer). We monitored the experiments by collecting side photographs every 1.0 mm of displacement of the master fault. These images have been analyzed with the PIVlab software, a tool based on the Digital Image Correlation method. With the "displacement field analysis" (one of the PIVlab tools) we evaluated the variation of the trishear zone shape and how the master-fault tip and newly formed faults propagate into the clay medium. With the "strain distribution analysis", we observed the amount of on-fault and off-fault deformation with respect to the faulting pattern and evolution. Secondly, using MOVE software, we extracted the positions of fault tips and folds every 5 mm of displacement on the master fault. Analyzing these positions in all of the experiments, we found that the growth rate of the faults and the related fold shape vary depending on the number of discontinuities in the clay medium. Other results can be summarized as follows: 1) the fault growth rate is not constant, but varies especially while the new faults interact with the precuts; 2) the new faults tend to crosscut the discontinuities when the angle between them is approximately 90°; 3) the trishear zone changes its shape during the experiments, especially when the main fault interacts with the discontinuities.