ERIC Educational Resources Information Center
Barker, Bruce O.; Petersen, Paul D.
This paper explores the fault-tree analysis approach to isolating failure modes within a system. Fault-tree analysis investigates potentially undesirable events and then looks for sequences of failures that would lead to their occurrence. Relationships among these events are symbolized by AND or OR logic gates, AND being used when single events must coexist to…
The FTA Method And A Possibility Of Its Application In The Area Of Road Freight Transport
NASA Astrophysics Data System (ADS)
Poliaková, Adela
2015-06-01
The Fault Tree process utilizes logic diagrams to portray and analyse potentially hazardous events. Three basic symbols (logic gates) are adequate for diagramming any fault tree. However, additional recently developed symbols can be used to reduce the time and effort required for analysis. A fault tree is a graphical representation of the relationship between certain specific events and the ultimate undesired event (2). This paper describes the basics of the Fault Tree Analysis method and provides a practical view of its possible application to quality improvement in a road freight transport company.
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
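The event tree/fault tree pairing described above is easy to illustrate in miniature: each branch probability would, in practice, come from a fault tree study of the corresponding system, and an end state's probability is the product of the branch probabilities along its path. The sketch below is a generic toy model with invented system names and numbers, not the NASA-JSC model or its semi-Markov extension.

```python
# Toy event tree walk: branch probabilities (here invented) stand in for
# the nodal values a fault tree study would supply; each end state's
# probability is the product of branch probabilities along its path.

branch_fail = {"power": 0.01, "cooling": 0.05, "shutdown": 0.02}  # hypothetical

def end_states(systems, path=(), prob=1.0):
    """Yield (path, probability) for every end state of a binary event tree."""
    if not systems:
        yield path, prob
        return
    name, rest = systems[0], systems[1:]
    p = branch_fail[name]
    yield from end_states(rest, path + ((name, "ok"),), prob * (1 - p))
    yield from end_states(rest, path + ((name, "fail"),), prob * p)

for path, prob in end_states(["power", "cooling", "shutdown"]):
    print(path, f"{prob:.6f}")  # the eight end-state probabilities sum to 1
```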
MIRAP, microcomputer reliability analysis program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jehee, J.N.T.
1989-01-01
A program for a microcomputer is outlined that can determine minimal cut sets from a specified fault tree logic. The speed and memory limitations of the microcomputers on which the program is implemented (Atari ST and IBM) are addressed by reducing the fault tree's size and by storing the cut set data on disk. Extensive, well-proven fault tree restructuring techniques, such as the identification of sibling events and of independent gate events, reduce the fault tree's size but do not alter its logic. New methods are used for the Boolean reduction of the fault tree logic. Special criteria for combining events in the 'AND' and 'OR' logic avoid the creation of many subsuming cut sets which would all be cancelled out by existing cut sets. Figures and tables illustrate these methods. 4 refs., 5 tabs.
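The "subsuming cut sets" that MIRAP avoids generating are simply supersets of cut sets already found; discarding them is the Boolean absorption step (A + A·B = A). A minimal sketch of that reduction step, not MIRAP's implementation:

```python
def minimize(cut_sets):
    """Drop any cut set that is a superset of another (i.e., subsumed),
    leaving only minimal cut sets. Boolean absorption: A + A*B = A."""
    candidates = sorted((frozenset(c) for c in cut_sets), key=len)
    minimal = []
    for cs in candidates:
        if not any(m <= cs for m in minimal):  # cs subsumed by a smaller set?
            minimal.append(cs)
    return minimal

print(minimize([{"A"}, {"A", "B"}, {"B", "C"}, {"B", "C", "D"}]))
# -> [frozenset({'A'}), frozenset({'B', 'C'})]
```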
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
Fault-Tree Compiler (FTC) program is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
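Under the usual assumption of independent basic events, the five gate types reduce to standard probability formulas. The sketch below shows those textbook formulas for illustration; it is not FTC's actual solution technique, which carries a precision guarantee these naive expressions do not.

```python
from itertools import combinations
from math import prod

# Standard independence-based gate formulas (illustrative; not FTC's code).
def p_and(ps):   return prod(ps)                      # all inputs occur
def p_or(ps):    return 1 - prod(1 - p for p in ps)   # at least one occurs
def p_invert(p): return 1 - p
def p_xor(p, q): return p * (1 - q) + q * (1 - p)     # exactly one of two

def p_m_of_n(ps, m):
    """P(at least m of the n independent, possibly non-identical inputs occur)."""
    n = len(ps)
    return sum(
        prod(ps[i] if i in idx else 1 - ps[i] for i in range(n))
        for k in range(m, n + 1)
        for idx in combinations(range(n), k))

print(p_or([0.01, 0.02]))            # 0.0298
print(p_m_of_n([0.1, 0.2, 0.3], 2))  # probability that at least 2 of 3 occur
```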
Evidential Networks for Fault Tree Analysis with Imprecise Knowledge
NASA Astrophysics Data System (ADS)
Yang, Jianping; Huang, Hong-Zhong; Liu, Yu; Li, Yan-Feng
2012-06-01
Fault tree analysis (FTA), as one of the powerful tools in reliability engineering, has been widely used to enhance system quality attributes. In most fault tree analyses, precise values are adopted to represent the probabilities of occurrence of the events. Due to the lack of sufficient data or the imprecision of existing data at the early stage of product design, it is often difficult to accurately estimate the failure rates of individual events or the probabilities of occurrence of the events. Therefore, such imprecision and uncertainty need to be taken into account in reliability analysis. In this paper, evidential networks (EN) are employed to quantify and propagate the aforementioned uncertainty and imprecision in fault tree analysis. The detailed processes of converting some fault tree (FT) logic gates to EN are described. The figures of the logic gates and the converted equivalent EN, together with the associated truth tables and the conditional belief mass tables, are also presented in this work. A new epistemic importance measure is proposed to describe the effect of the degree of ignorance about an event. The fault tree of an aircraft engine damaged by oil filter plugs is presented to demonstrate the proposed method.
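One way to picture the effect of imprecise basic-event data is interval propagation: each event carries a [belief, plausibility] interval rather than a point probability, and the AND/OR gate formulas map the interval endpoints monotonically. This is a simplified sketch under an independence assumption, with invented intervals; it is not the authors' evidential-network construction with conditional belief mass tables.

```python
# Propagate [belief, plausibility] bounds through AND / OR gates, assuming
# independent inputs. Because the gate formulas are monotone in each input,
# applying them to the interval endpoints bounds the gate output.

def and_gate(bounds_a, bounds_b):
    (la, ua), (lb, ub) = bounds_a, bounds_b
    return (la * lb, ua * ub)

def or_gate(bounds_a, bounds_b):
    (la, ua), (lb, ub) = bounds_a, bounds_b
    return (1 - (1 - la) * (1 - lb), 1 - (1 - ua) * (1 - ub))

# Imprecise basic events: occurrence probability known only as an interval.
e1 = (0.01, 0.03)
e2 = (0.02, 0.05)
print(and_gate(e1, e2))  # narrower, lower interval
print(or_gate(e1, e2))   # wider, higher interval
```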
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1992-01-01
FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature, simplifying tree description and reducing execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question. The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values.
None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The RRR (risk reduction ratio) for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event. Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The RIR (risk increase ratio) for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
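The RRR and RIR calculations can be reproduced on a toy fault tree by re-evaluating the top event with one basic event set to perfect performance (probability 0) or to failure (probability 1). The tree structure and numbers below are invented for illustration; this is not SAPHIRE.

```python
# RRR/RIR importance on a toy tree: top = (A AND B) OR C, independent events.
def top(p):
    return 1 - (1 - p["A"] * p["B"]) * (1 - p["C"])

base = {"A": 0.1, "B": 0.2, "C": 0.05}
r0 = top(base)
for e in base:
    r_perfect = top(dict(base, **{e: 0.0}))  # event performs perfectly
    r_failed  = top(dict(base, **{e: 1.0}))  # event fails outright
    rrr = r0 / r_perfect if r_perfect > 0 else float("inf")
    rir = r_failed / r0
    print(f"{e}: RRR = {rrr:.2f}, RIR = {rir:.2f}")
```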
Rymer, M.J.
2000-01-01
The Coachella Valley area was strongly shaken by the 1992 Joshua Tree (23 April) and Landers (28 June) earthquakes, and both events caused triggered slip on active faults within the area. Triggered slip associated with the Joshua Tree earthquake was on a newly recognized fault, the East Wide Canyon fault, near the southwestern edge of the Little San Bernardino Mountains. Slip associated with the Landers earthquake formed along the San Andreas fault in the southeastern Coachella Valley. Surface fractures formed along the East Wide Canyon fault in association with the Joshua Tree earthquake. The fractures extended discontinuously over a 1.5-km stretch of the fault, near its southern end. Sense of slip was consistently right-oblique, west side down, similar to the long-term style of faulting. Measured offset values were small, with right-lateral and vertical components of slip ranging from 1 to 6 mm and 1 to 4 mm, respectively. This is the first documented historic slip on the East Wide Canyon fault, which was first mapped only months before the Joshua Tree earthquake. Surface slip associated with the Joshua Tree earthquake most likely developed as triggered slip given its 5 km distance from the Joshua Tree epicenter and aftershocks. As revealed in a trench investigation, slip formed in an area with only a thin (<3 m thick) veneer of alluvium in contrast to earlier documented triggered slip events in this region, all in the deep basins of the Salton Trough. A paleoseismic trench study in an area of 1992 surface slip revealed evidence of two and possibly three surface faulting events on the East Wide Canyon fault during the late Quaternary, probably latest Pleistocene (first event) and mid- to late Holocene (second two events). About two months after the Joshua Tree earthquake, the Landers earthquake then triggered slip on many faults, including the San Andreas fault in the southeastern Coachella Valley. Surface fractures associated with this event formed discontinuous breaks over a 54-km-long stretch of the fault, from the Indio Hills southeastward to Durmid Hill. Sense of slip was right-lateral; only locally was there a minor (~1 mm) vertical component of slip. Measured dextral displacement values ranged from 1 to 20 mm, with the largest amounts found in the Mecca Hills where large slip values have been measured following past triggered-slip events.
A diagnosis system using object-oriented fault tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common Lisp using Flavors.
Preventing medical errors by designing benign failures.
Grout, John R
2003-07-01
One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
Khan, F I; Abbasi, S A
2000-07-10
Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has been a powerful technique used traditionally in identifying hazards in nuclear installations and power industries. As the systematic articulation of the fault tree is associated with assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named AS-II), which makes the application of FTA simpler, quicker, and cheaper, thus opening up the possibility of its wider use in risk assessment in chemical process industries. Based on this methodology we have developed a computer-automated tool. The details are presented in this paper.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
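The flavor of this comparison can be reproduced in a few lines: sample lognormal basic events, push them through a small tree, and compare a Monte Carlo percentile with a lognormal fitted to the sampled log-moments. The moment fit below is only a stand-in for the article's closed-form approximation, which is not reproduced here; the tree, medians, and error factors are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_lognormal(median, ef, n):
    """Lognormal samples parameterized by median and error factor EF = p95/p50."""
    sigma = np.log(ef) / 1.645           # 1.645 = z-score of the 95th percentile
    return np.exp(np.log(median) + sigma * rng.standard_normal(n))

n = 100_000
a = sample_lognormal(1e-3, 3, n)
b = sample_lognormal(2e-3, 5, n)
c = sample_lognormal(5e-4, 3, n)
top = 1 - (1 - a * b) * (1 - c)          # toy tree: (A AND B) OR C

mu, sigma = np.log(top).mean(), np.log(top).std()
print("Monte Carlo 95th percentile:", np.quantile(top, 0.95))
print("Fitted lognormal 95th pct.: ", np.exp(mu + 1.645 * sigma))
```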
Try Fault Tree Analysis, a Step-by-Step Way to Improve Organization Development.
ERIC Educational Resources Information Center
Spitzer, Dean
1980-01-01
Fault Tree Analysis, a systems safety engineering technology used to analyze organizational systems, is described. Explains the use of logic gates to represent the relationship between failure events, qualitative analysis, quantitative analysis, and effective use of Fault Tree Analysis. (CT)
Reliability computation using fault tree analysis
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
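Conditioning on a shared basic event (the total-probability, or Shannon, expansion) is the standard way to evaluate trees in which the same failure appears in more than one fault path. The sketch below uses invented numbers and is not Chelson's program:

```python
# Toy tree with a repeated event X: top = (X AND A) OR (X AND B).
# Condition on X: P(top) = P(top | X) P(X) + P(top | not X) P(not X).

pX, pA, pB = 0.1, 0.2, 0.3

p_top_given_X    = 1 - (1 - pA) * (1 - pB)  # given X, top reduces to A OR B
p_top_given_notX = 0.0                      # without X, both paths are blocked
p_top = p_top_given_X * pX + p_top_given_notX * (1 - pX)
print(p_top)                                # 0.0440 (exact)

# Treating the two occurrences of X as independent overstates the result:
print(1 - (1 - pX * pA) * (1 - pX * pB))    # 0.0494
```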
Object-oriented fault tree evaluation program for quantitative analyses
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1988-01-01
Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computation-intensive calculations. A computer program has been developed to implement the PFTA.
Direct evaluation of fault trees using object-oriented programming techniques
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1989-01-01
Object-oriented programming techniques are used in an algorithm for the direct evaluation of fault trees. The algorithm combines a simple bottom-up procedure for trees without repeated events with a top-down recursive procedure for trees with repeated events. The object-oriented approach results in a dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. An example is presented in which the results of the algorithm implemented with conventional techniques are compared to those of the object-oriented approach.
Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.
Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C
2015-06-01
An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and are aiming at more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, that is, ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that would most probably delay rapid and effective disease control during a real outbreak. The fault tree developed here is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers in developing more refined and adequate control activities against FMD outbreaks and in finding optimum strategies for rapid control. Further application of the current tree will be one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.
Object-Oriented Algorithm For Evaluation Of Fault Trees
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1992-01-01
Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarrack, A.G.
The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems, were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support system failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the "Facility Commitments" section. The purpose of the "Assumptions" section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).
Graphical fault tree analysis for fatal falls in the construction industry.
Chi, Chia-Fen; Lin, Syuan-Zih; Dewi, Ratna Sari
2014-11-01
The current study applied a fault tree analysis to represent the causal relationships among events and causes that contributed to fatal falls in the construction industry. Four hundred and eleven work-related fatalities in the Taiwanese construction industry were analyzed in terms of age, gender, experience, falling site, falling height, company size, and the causes for each fatality. Given that most fatal accidents involve multiple events, the current study coded up to a maximum of three causes for each fall fatality. After the Boolean algebra and minimal cut set analyses, accident causes associated with each falling site can be presented as a fault tree to provide an overview of the basic causes, which could trigger fall fatalities in the construction industry. Graphical icons were designed for each falling site along with the associated accident causes to illustrate the fault tree in a graphical manner. A graphical fault tree can improve inter-disciplinary discussion of risk management and the communication of accident causation to first line supervisors. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Martensen, Anna L.; Butler, Ricky W.
1987-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double-precision floating-point arithmetic) to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
The Fault Tree Compiler (FTC): Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1989-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m OF n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
Fire safety in transit systems fault tree analysis
DOT National Transportation Integrated Search
1981-09-01
Fire safety countermeasures applicable to transit vehicles are identified and evaluated. This document contains fault trees which illustrate the sequences of events which may lead to a transit-fire related casualty. A description of the basis for the...
Interim reliability evaluation program, Browns Ferry fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, M.E.
1981-01-01
An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation Program, simplifying the recording and displaying of events, yet maintaining the system of identifying faults. The level of investigation is not changed. The analytical thought process inherent in the conventional method is not compromised. But the abbreviated method takes less time, and the fault modes are much more visible.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection, including loops, between nodes in the graph. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860) available from COSMIC are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc.
DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
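The recursive top-down parse that CUTSETS performs can be sketched for plain AND/OR fault trees: each gate combines its children's cut sets (union for OR, cross-product for AND), and subsumed supersets are discarded. A minimal illustration of the technique, not the actual C implementation:

```python
# Top-down cut set expansion for an AND/OR fault tree.
# A node is either a basic-event string or a tuple ("AND"|"OR", [children]).

def cut_sets(node):
    if isinstance(node, str):                       # basic event
        return [frozenset([node])]
    kind, children = node
    child_sets = [cut_sets(c) for c in children]
    if kind == "OR":                                # union of child cut sets
        combined = [cs for sets in child_sets for cs in sets]
    else:                                           # AND: cross-product
        combined = [frozenset()]
        for sets in child_sets:
            combined = [acc | cs for acc in combined for cs in sets]
    minimal = []                                    # drop subsumed supersets
    for cs in sorted(set(combined), key=len):
        if not any(m <= cs for m in minimal):
            minimal.append(cs)
    return minimal

tree = ("OR", [("AND", ["A", "B"]),
               ("AND", ["A", ("OR", ["B", "C"])])])
print(cut_sets(tree))   # two minimal cut sets: {A, B} and {A, C} (order may vary)
```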
Fault tree analysis for urban flooding.
ten Veldhuis, J A E; Clemens, F H L R; van Gelder, P H A J M
2009-01-01
Traditional methods to evaluate flood risk generally focus on heavy storm events as the principal cause of flooding. Conversely, fault tree analysis is a technique that aims at modelling all potential causes of flooding. It quantifies both the overall flood probability and the relative contributions of individual causes of flooding. This paper presents a fault tree model for urban flooding and an application to the case of Haarlem, a city of 147,000 inhabitants. Data from a complaint register, rainfall gauges and hydrodynamic model calculations are used to quantify probabilities of basic events in the fault tree. This results in a flood probability of 0.78/week for Haarlem. It is shown that gully pot blockages contribute to 79% of flood incidents, whereas storm events contribute only 5%. This implies that for this case more efficient gully pot cleaning is a more effective strategy to reduce flood probability than enlarging drainage system capacity. Whether this is also the most cost-effective strategy can only be decided after risk assessment has been complemented with a quantification of the consequences of both types of events. This will be the next step in this study.
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component that affects the system reliability. Here, weakest t-norm based intuitionistic fuzzy fault tree analysis is presented to calculate the fault interval of system components by integrating experts' knowledge and experience in terms of the possibility of failure of bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets, and Tω (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The result of the proposed method is compared with existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
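The weakest t-norm Tω (the drastic t-norm) has a well-known consequence for triangular fuzzy numbers: spreads combine by max rather than by addition, so fault intervals stay tighter than under min- or product-norm arithmetic. The sketch below shows Tω and its addition rule for ordinary triangular fuzzy numbers with invented values; the paper's intuitionistic extension is not reproduced.

```python
# Weakest (drastic) t-norm and its addition rule for triangular fuzzy
# numbers (l, m, u): spreads combine by max instead of summing, so
# uncertainty accumulates more slowly than with min/product-norm arithmetic.

def t_omega(a, b):
    """Drastic t-norm: T(a, b) = a if b == 1, b if a == 1, else 0."""
    if b == 1.0: return a
    if a == 1.0: return b
    return 0.0

def add_tw(x, y):
    """Tw-based addition of triangular fuzzy numbers (l, m, u)."""
    (l1, m1, u1), (l2, m2, u2) = x, y
    m = m1 + m2
    return (m - max(m1 - l1, m2 - l2), m, m + max(u1 - m1, u2 - m2))

A = (0.02, 0.05, 0.09)   # hypothetical fuzzy failure possibilities
B = (0.01, 0.04, 0.06)
print(add_tw(A, B))      # (0.06, 0.09, 0.13): spreads are max'ed, not summed
```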
Fault tree analysis: NiH2 aerospace cells for LEO mission
NASA Technical Reports Server (NTRS)
Klein, Glenn C.; Rash, Donald E., Jr.
1992-01-01
The Fault Tree Analysis (FTA) is one of several reliability analyses or assessments applied to battery cells to be utilized in typical Electric Power Subsystems for spacecraft in low Earth orbit missions. FTA is generally the process of reviewing and analytically examining a system or equipment in such a way as to emphasize the lower level fault occurrences which directly or indirectly contribute to the major fault or top level event. This qualitative FTA addresses the potential of occurrence for five specific top level events: hydrogen leakage through either discrete leakage paths or through pressure vessel rupture; and four distinct modes of performance degradation - high charge voltage, suppressed discharge voltage, loss of capacity, and high pressure.
SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis Smith; James Knudsen
As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation; or in this case, the failure of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude their use on real-world problems.
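For identical, independent assemblies, the N-out-of-M thruster logic at the heart of the benchmark reduces to a binomial tail. The sketch below evaluates a per-phase failure probability and chains phases together; the numbers are invented, and the independence-across-phases simplification is ours, not the benchmark's.

```python
from math import comb

def phase_failure(p_fail, need, total=5):
    """P(fewer than `need` of `total` identical, independent assemblies survive)."""
    p_ok = 1 - p_fail
    return sum(comb(total, k) * p_ok**k * p_fail**(total - k)
               for k in range(need))      # k surviving assemblies, k < need

# Hypothetical (per-phase assembly failure prob., assemblies required) pairs:
phases = [(0.05, 3), (0.02, 4), (0.08, 2)]

survival = 1.0
for p, k in phases:
    survival *= 1 - phase_failure(p, k)   # phases treated as independent here
print("mission failure probability:", 1 - survival)
```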
Survey of critical failure events in on-chip interconnect by fault tree analysis
NASA Astrophysics Data System (ADS)
Yokogawa, Shinji; Kunii, Kyousuke
2018-07-01
In this paper, a framework based on reliability physics is proposed for applying fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the possibilities of failure of basic events, critical issues of on-chip interconnect reliability are evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The moisture impact is evaluated as an external event.
Wang, Hetang; Li, Jia; Wang, Deming; Huang, Zonghou
2017-01-01
Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents.
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the most optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
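One classic measure of the kind the IMPORTANCE code ranks is Birnbaum importance: the sensitivity of the top event probability to a basic event, obtained by evaluating the top event with that event forced to occur and forced not to occur. A toy 2-out-of-3 system with invented numbers, not Lambert's code:

```python
# Birnbaum importance I_B(i) = P(top | event i occurs) - P(top | event i absent).
# Toy 2-out-of-3 failure system: minimal cut sets {A,B}, {A,C}, {B,C};
# exact top probability for independent events is ab + ac + bc - 2abc.

def top(p):
    a, b, c = p["A"], p["B"], p["C"]
    return a * b + a * c + b * c - 2 * a * b * c

base = {"A": 0.1, "B": 0.2, "C": 0.05}
for e in base:
    hi = top(dict(base, **{e: 1.0}))
    lo = top(dict(base, **{e: 0.0}))
    print(f"I_B({e}) = {hi - lo:.4f}")   # larger value = more critical event
```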
A fast bottom-up algorithm for computing the cut sets of noncoherent fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corynen, G.C.
1987-11-01
An efficient procedure for finding the cut sets of large fault trees has been developed. Designed to address coherent or noncoherent systems, dependent events, shared or common-cause events, the method - called SHORTCUT - is based on a fast algorithm for transforming a noncoherent tree into a quasi-coherent tree (COHERE), and on a new algorithm for reducing cut sets (SUBSET). To assure sufficient clarity and precision, the procedure is discussed in the language of simple sets, which is also developed in this report. Although the new method has not yet been fully implemented on the computer, we report theoretical worst-case estimates of its computational complexity. 12 refs., 10 figs.
A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…
DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarities between digraphs and fault trees permit the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimum cut set for a node is any cut set such that, if any of the failures in the set were removed, the occurrence of the other failures in the set would not cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal root node. A subtree is created for each of the inputs to the digraph terminal node and the roots of those subtrees are added as children of the top node of the fault tree. Every node in the digraph upstream of the terminal node will be visited and converted. During the conversion process, the algorithm keeps track of the path from the digraph terminal node to the current digraph node. If a node is visited twice, then the program has found a cycle in the digraph. This cycle is broken by finding the minimal cut sets of the twice-visited digraph node and forming those cut sets into subtrees. Another implementation of the algorithm resolves loops by building a subtree based on the digraph minimal cut sets calculation. It does not reduce the subtree to minimal cut set form. This second implementation produces larger fault trees, but runs much faster than the version using minimal cut sets since it does not spend time reducing the subtrees to minimal cut sets. The fault trees produced by DG TO FT will contain OR gates, AND gates, Basic Event nodes, and NOP gates. The results of a translation can be output as a text object description of the fault tree similar to the text digraph input format. The translator can also output a LISP language formatted file and an augmented LISP file which can be used by the FTDS (ARC-13019) diagnosis system, available from COSMIC, which performs diagnostic reasoning using the fault tree as a knowledge base. DG TO FT is written in C-language to be machine independent.
It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. DG TO FT is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is provided on the distribution medium. DG TO FT was developed in 1992. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc. System 7 is a trademark of Apple Computer, Inc. Microsoft Word is a trademark of Microsoft Corporation.
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Haeussler, P. J.; Seitz, G. G.; Dawson, T. E.; Stenner, H. D.; Matmon, A.; Crone, A. J.; Personius, S.; Burns, P. B.; Cadena, A.; Thoms, E.
2005-12-01
Developing accurate rupture histories of long, high-slip-rate strike-slip faults is especially challenging where recurrence is relatively short (hundreds of years), adjacent segments may fail within decades of each other, and uncertainties in dating can be as large as, or larger than, the time between events. The Denali Fault system (DFS) is the major active structure of interior Alaska, but received little study since pioneering fault investigations in the early 1970s. Until the summer of 2003 essentially no data existed on the timing or spatial distribution of past ruptures on the DFS. This changed with the occurrence of the M7.9 2002 Denali fault earthquake, which has been a catalyst for present paleoseismic investigations. It provided a well-constrained rupture length and slip distribution. Strike-slip faulting occurred along 290 km of the Denali and Totschunda faults, leaving unruptured ~140 km of the eastern Denali fault, ~180 km of the western Denali fault, and ~70 km of the eastern Totschunda fault. The DFS presents us with a blank canvas on which to fill a chronology of past earthquakes using modern paleoseismic techniques. Aware of correlation issues with potentially closely timed earthquakes, we have a) investigated 11 paleoseismic sites that allow a variety of dating techniques, b) measured paleo offsets, which provide insight into magnitude and rupture length of past events, at 18 locations, and c) developed late Pleistocene and Holocene slip rates using exposure age dating to constrain long-term fault behavior models. We are in the process of: 1) radiocarbon-dating peats involved in faulting and liquefaction, and especially short-lived forest floor vegetation that includes outer rings of trees, spruce needles, and blueberry leaves killed and buried during paleoearthquakes; 2) supporting development of a 700-900 year tree-ring time-series for precise dating of trees used in event timing; 3) employing Pb-210 for constraining the youngest ruptures in sag ponds on the eastern and western Denali fault; and 4) using volcanic ashes in trenches for dating and correlation. Initial results are: 1) Large earthquakes occurred along the 2002 rupture section 350-700 yrb02 (2-sigma, calendar-corrected, years before 2002) with offsets about the same as 2002. The Denali penultimate rupture appears younger (350-570 yrb02) than the Totschunda (580-700 yrb02); 2) The western Denali fault is geomorphically fresh, its MRE (most recent event) likely occurred within the past 250 years, the penultimate event occurred 570-680 yrb02, and slip in each event was 4 m; 3) The eastern Denali MRE post-dates peat dated at 550-680 yrb02, is younger than the penultimate Totschunda event, and could be part of the penultimate Denali fault rupture or a separate earthquake; 4) A 120-km section of the Denali fault between the Nenana Glacier and the Delta River may be a zone of overlap for large events and/or capable of producing smaller earthquakes; its western part has fresh scarps with small (1 m) offsets. 2004/2005 field observations show there are longer datable records, with 4-5 events recorded in trenches on the eastern Denali fault and the west end of the 2002 rupture, 2-3 events on the western part of the fault in Denali National Park, and 3-4 events on the Totschunda fault. These and extensive datable material provide the basis to define the paleoseismic history of DFS earthquake ruptures through multiple and complete earthquake cycles.
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Qian, Yu
2016-02-15
Haze weather has become a serious environmental pollution problem in many Chinese cities. One of the most critical factors in the formation of haze weather is the exhaust from coal combustion, so it is worthwhile to clarify the causal mechanism linking urban haze to coal combustion exhausts. Based on the above considerations, the fault tree analysis (FTA) approach was employed for the first time to analyze the causation mechanism of urban haze in Beijing, considering the risk events related to coal combustion exhausts. Using this approach, the fault tree of the urban haze causation system connected with coal combustion exhausts was first established; the risk events were then identified and discussed; the minimal cut sets were determined using Boolean algebra; and finally, the structure, probability, and critical importance degree analyses of the risk events were completed for qualitative and quantitative assessment. The results showed that FTA is an effective and simple tool for the causation mechanism analysis and risk management of urban haze in China. Copyright © 2015 Elsevier B.V. All rights reserved.
[Impact of water pollution risk in water transfer project based on fault tree analysis].
Liu, Jian-Chang; Zhang, Wei; Wang, Li-Min; Li, Dai-Qing; Fan, Xiu-Ying; Deng, Hong-Bing
2009-09-15
Methods to assess water pollution risk for medium water transfer projects are gradually being explored. The event-nature-proportion method was developed to evaluate the probability of a single event. Fault tree analysis, built on the single-event calculations, was employed to evaluate the overall water pollution risk to the channel water body. The results indicate that the risk posed to the channel water body by pollutants from towns and villages along the water transfer line is high, with a probability of 0.373, which would increase pollution in the channel water body by 64.53 mg/L COD, 4.57 mg/L NH4(+)-N, and 0.066 mg/L volatile hydroxybenzene, respectively. Measuring fault probability on the basis of the proportion method proves useful for assessing water pollution risk under substantial uncertainty.
Risk Analysis of Return Support Material on Gas Compressor Platform Project
NASA Astrophysics Data System (ADS)
Silvianita; Aulia, B. U.; Khakim, M. L. N.; Rosyid, Daniel M.
2017-07-01
Fixed platform projects are carried out not by a single contractor but by two or more contractors. Cooperation in the construction of fixed platforms often does not go according to plan, owing to several factors, and good synergy among the contractors is needed to avoid miscommunication that can cause problems on the project. One example concerns the support material (sea fastening, skid shoes, and shipping supports) used in shipping a jacket structure to its operating site, which often is not returned to the contractor. A systematic method is needed to address this support material problem. This paper analyses the causes and effects of non-returned support material on the Gas Compressor Platform project using Fault Tree Analysis (FTA) and Event Tree Analysis (ETA). From the fault tree analysis, the probability of the top event is 0.7783. From the event tree analysis diagram, the contractors stand to lose between Rp 350,000,000 and Rp 10,000,000,000.
Mines Systems Safety Improvement Using an Integrated Event Tree and Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Kumar, Ranjan; Ghosh, Achyuta Krishna
2017-04-01
Mine systems such as the ventilation system, strata support system, and flame-proof safety equipment are exposed to dynamic operational conditions such as stress, humidity, dust, and temperature, and safety improvement of such systems is best done during the planning and design stage. However, the existing safety analysis methods do not explicitly handle accident initiation and progression in mine systems. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine systems design. This approach includes ET and FT modeling coupled with a redundancy allocation technique. In this method, a concept of top hazard probability is introduced for identifying system failure probability, and redundancy is allocated to the system either at the component or the system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal accident scenarios and improve the safety of complex mine systems simultaneously.
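A minimal sketch of the ET/FT coupling described here, with hypothetical barriers and probabilities rather than the paper's case study values: each safety barrier's failure probability comes from a small fault tree, and the event tree multiplies the initiating event probability through the failed barriers.

```python
# Fault tree gates for independent basic events (illustrative only).
def and_gate(*ps):
    out = 1.0
    for p in ps:
        out *= p          # all inputs must fail together
    return out

def or_gate(*ps):
    out = 1.0
    for p in ps:
        out *= (1.0 - p)  # complement of "no input fails"
    return 1.0 - out

# Hypothetical fault trees for two safety barriers.
p_ventilation_fails = or_gate(0.01, 0.005)     # fan fault OR duct blockage
p_ignition_control_fails = and_gate(0.1, 0.2)  # both interlocks fail

# Event tree: initiating event (methane accumulation) propagated
# through the failure of each barrier along the accident sequence.
p_initiating = 1e-3
top_hazard = p_initiating * p_ventilation_fails * p_ignition_control_fails
print(f"Top hazard probability: {top_hazard:.2e}")
```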
FTC - THE FAULT-TREE COMPILER (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits of accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree, such as a component failure rate or a specific event probability, by allowing the user to vary one failure rate or failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle; please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI-compliant C, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format; it is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
(Abstract identical to the Sun version entry above; the single description covers both the VMS and Sun distributions of FTC.)
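For a rough feel of what such a compiler evaluates, here is a minimal sketch, not FTC's algorithm or input language, of gate probability arithmetic for the five gate types, assuming independent basic events; FTC's exact XOR and M OF N semantics may differ from the common conventions used below.

```python
from itertools import combinations
from math import prod

def p_and(ps):                  # all inputs occur
    return prod(ps)

def p_or(ps):                   # at least one input occurs
    return 1.0 - prod(1.0 - p for p in ps)

def p_xor(p1, p2):              # exactly one of two inputs occurs
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)

def p_invert(p):                # input does not occur
    return 1.0 - p

def p_m_of_n(m, ps):            # at least m of the n inputs occur
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for occurred in combinations(range(n), k):
            total += prod(ps[i] if i in occurred else 1.0 - ps[i]
                          for i in range(n))
    return total

# Example: top = (A AND B) OR (2-of-3 of C, D, E)
top = p_or([p_and([0.01, 0.02]), p_m_of_n(2, [0.1, 0.1, 0.1])])
print(top)
```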
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Bolster, Diogo; Sanchez-Vila, Xavier; Nowak, Wolfgang
2011-05-01
Assessing health risk in hydrological systems is an interdisciplinary field. It relies on the expertise in the fields of hydrology and public health and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties and variabilities present in hydrological, physiological, and human behavioral parameters. Despite significant theoretical advancements in stochastic hydrology, there is still a dire need to further propagate these concepts to practical problems and to society in general. Following a recent line of work, we use fault trees to address the task of probabilistic risk analysis and to support related decision and management problems. Fault trees allow us to decompose the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural divide and conquer approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance, and stage of analysis. Three differences are highlighted in this paper when compared to previous works: (1) The fault tree proposed here accounts for the uncertainty in both hydrological and health components, (2) system failure within the fault tree is defined in terms of risk being above a threshold value, whereas previous studies that used fault trees used auxiliary events such as exceedance of critical concentration levels, and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
Using Fault Trees to Advance Understanding of Diagnostic Errors.
Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep
2017-11-01
Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
LI, Y.; Yang, S. H.
2017-05-01
The Antarctic astronomical telescopes operate year-round at the unattended South Pole, with only one maintenance opportunity each year. Due to the complexity of their optical, mechanical, and electrical systems, the telescopes are difficult to maintain and require multi-skilled expedition teams, so heightened attention to reliability is essential for Antarctic telescopes. Based on the fault mechanisms and fault modes of the main-axis control system of the equatorial Antarctic astronomical telescope AST3-3 (Antarctic Schmidt Telescopes 3-3), this article introduces the method of fault tree analysis, and we obtain the importance degree of the top event from the structural importance of the bottom events. From these results, hidden problems and weak links can be effectively identified, indicating directions for improving the stability of the system and optimizing its design.
Renjith, V R; Madhu, G; Nayagam, V Lakshmana Gomathi; Bhasi, A B
2010-11-15
The hazards associated with major accident hazard (MAH) industries are fire, explosion, and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards related to chemical industries. Fault tree analysis (FTA) is an established technique in hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from the storage and filling facility of a chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and a proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation. Copyright © 2010 Elsevier B.V. All rights reserved.
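A minimal sketch of the fuzzy arithmetic used in such studies, assuming triangular fuzzy probabilities written as (low, mode, high); because the AND/OR gate formulas are monotone increasing in each input, component-wise arithmetic is valid at the endpoints, and treating the result as triangular again is a common approximation. The TDFFTA hesitation factor is not modeled here, and all values are hypothetical.

```python
def fuzzy_and(x, y):
    # Component-wise product of (low, mode, high) triples.
    return tuple(a * b for a, b in zip(x, y))

def fuzzy_or(x, y):
    # Component-wise 1 - (1-a)(1-b).
    return tuple(1.0 - (1.0 - a) * (1.0 - b) for a, b in zip(x, y))

def centroid(x):
    # Simple centroid defuzzification of a triangular number.
    return sum(x) / 3.0

# Hypothetical expert-elicited basic events for a chlorine release.
valve_leak   = (0.010, 0.020, 0.050)
flange_leak  = (0.005, 0.010, 0.020)
no_isolation = (0.100, 0.200, 0.300)

release = fuzzy_and(fuzzy_or(valve_leak, flange_leak), no_isolation)
print(release, centroid(release))
```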
Fault tree analysis for system modeling in case of intentional EMI
NASA Astrophysics Data System (ADS)
Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.
2011-08-01
The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically, since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is fault tree analysis (FTA). The FTA is used to determine the system failure probability and also the main contributors to its failure. In this paper the fault tree analysis is introduced and a possible application of the method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
Logic flowgraph methodology - A tool for modeling embedded systems
NASA Technical Reports Server (NTRS)
Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.
1991-01-01
The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.
Integrated Approach To Design And Analysis Of Systems
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1993-01-01
Object-oriented fault-tree representation unifies evaluation of reliability and diagnosis of faults. Programming/fault tree described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). Augmented fault tree object contains more information than fault tree object used in quantitative analysis of reliability. Additional information needed to diagnose faults in system represented by fault tree.
Applying fault tree analysis to the prevention of wrong-site surgery.
Abecassis, Zachary A; McElroy, Lisa M; Patel, Ronak M; Khorzad, Rebeca; Carroll, Charles; Mehrotra, Sanjay
2015-01-01
Wrong-site surgery (WSS) is a rare event that occurs to hundreds of patients each year. Despite national implementation of the Universal Protocol over the past decade, development of effective interventions remains a challenge. We performed a systematic review of the literature reporting root causes of WSS and used the results to perform a fault tree analysis to assess the reliability of the system in preventing WSS and identifying high-priority targets for interventions aimed at reducing WSS. Process components where a single error could result in WSS were labeled with OR gates; process aspects reinforced by verification were labeled with AND gates. The overall redundancy of the system was evaluated based on prevalence of AND gates and OR gates. In total, 37 studies described risk factors for WSS. The fault tree contains 35 faults, most of which fall into five main categories. Despite the Universal Protocol mandating patient verification, surgical site signing, and a brief time-out, a large proportion of the process relies on human transcription and verification. Fault tree analysis provides a standardized perspective of errors or faults within the system of surgical scheduling and site confirmation. It can be adapted by institutions or specialties to lead to more targeted interventions to increase redundancy and reliability within the preoperative process. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rodak, C. M.; McHugh, R.; Wei, X.
2016-12-01
The development and combination of horizontal drilling and hydraulic fracturing has unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating potential risks posed by directional hydraulic fracturing activities. These studies, balanced with the potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantification of the overall risk posed by hydraulic fracturing at the system level is rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is approached with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity- and quality-based impacts and sub-divided based on the stage of the hydraulic fracturing process in which the data are relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The structure of the fault tree described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic, as sketched below. This project shows the utility of fault tree analysis for the visual communication of the potential risks of hydraulic fracturing activities on groundwater resources.
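The "straightforward equation" referred to above is typically the minimal cut set expansion. A sketch, assuming independent basic events with probabilities p_i and minimal cut sets C_1, ..., C_m; the product form is exact only when cut sets share no events, and the sum is Boole's upper bound:

```latex
P(C_j) = \prod_{i \in C_j} p_i, \qquad
P(\text{failure}) = 1 - \prod_{j=1}^{m} \bigl(1 - P(C_j)\bigr)
                 \le \sum_{j=1}^{m} P(C_j).
```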
Risk assessment techniques with applicability in marine engineering
NASA Astrophysics Data System (ADS)
Rudenko, E.; Panaitescu, F. V.; Panaitescu, M.
2015-11-01
Nowadays risk management is a carefully planned process, and its task is woven into the general problem of increasing the efficiency of a business. A passive attitude to risk and mere awareness of its existence are being replaced by active management techniques. Risk assessment is one of the most important stages of risk management, since managing risk first requires analyzing and evaluating it. There are many definitions of this notion, but in the general case risk assessment refers to the systematic process of identifying the factors and types of risk and their quantitative assessment; that is, risk analysis methodology combines mutually complementary quantitative and qualitative approaches. Purpose of the work: In this paper we consider Fault Tree Analysis (FTA) as a risk assessment technique. The objectives are: to understand the purpose of FTA, to understand and apply the rules of Boolean algebra, to analyse a simple system using FTA, and to weigh FTA's advantages and disadvantages. Research and methodology: The main purpose is to help identify potential causes of system failures before the failures actually occur, and to evaluate the probability of the top event. The steps of this analysis are: examination of the system from top to bottom, the use of symbols to represent events, the use of mathematical tools for critical areas, and the use of fault tree logic diagrams to identify the cause of the top event. Results: The study yields the critical areas, the fault tree logic diagrams, and the probability of the top event. These results can be used for risk assessment analyses.
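As a toy illustration of these steps, a minimal sketch that applies the Boolean structure of a small hypothetical tree, TOP = (A AND B) OR C, over every basic-event state to obtain the exact top event probability; enumeration is feasible only for small trees, which is why cut-set methods are used in practice.

```python
from itertools import product

basic = {"A": 0.1, "B": 0.2, "C": 0.05}   # hypothetical probabilities

def top(state):
    return (state["A"] and state["B"]) or state["C"]

p_top = 0.0
for values in product([False, True], repeat=len(basic)):
    state = dict(zip(basic, values))
    if top(state):
        weight = 1.0
        for name, occurred in state.items():
            weight *= basic[name] if occurred else 1.0 - basic[name]
        p_top += weight

# Matches 0.1*0.2 + 0.05 - 0.1*0.2*0.05 = 0.069 by inclusion-exclusion.
print(f"P(TOP) = {p_top:.6f}")
```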
2013-05-01
specifics of the correlation will be explored, followed by discussion of new paradigms, the ordered event list (OEL) and the decision tree, that result from... [Remaining fragments of this record are table-of-contents residue: "Brief Overview of the Decision Tree Paradigm", "OEL Explained", and Figure 3, "A depiction of a notional fault/activation tree".]
Fault tree analysis of the causes of waterborne outbreaks.
Risebro, Helen L; Doria, Miguel F; Andersson, Yvonne; Medema, Gertjan; Osborn, Keith; Schlosser, Olivier; Hunter, Paul R
2007-01-01
Prevention and containment of outbreaks require examination of the contribution and interrelation of outbreak causative events. An outbreak fault tree was developed and applied to 61 enteric outbreaks related to public drinking water supplies in the EU. A mean of 3.25 causative events per outbreak were identified; each event was assigned a score based on its percentage contribution per outbreak. Source and treatment system causative events often occurred concurrently (in 34 outbreaks). Distribution system causative events occurred less frequently (19 outbreaks) but were often solitary events contributing heavily towards the outbreak (a mean percentage score of 87.42). Livestock and rainfall in the catchment, combined with no or inadequate filtration of water sources, contributed concurrently to 11 of 31 Cryptosporidium outbreaks. Of the 23 protozoan outbreaks experiencing at least one treatment causative event, 90% of these events were filtration deficiencies; by contrast, for bacterial, viral, gastroenteritis and mixed pathogen outbreaks, 75% of treatment events were disinfection deficiencies. Roughly equal numbers of groundwater and surface water outbreaks experienced at least one treatment causative event (18 and 17 outbreaks, respectively). Retrospective analysis of multiple outbreaks of enteric disease can be used to inform outbreak investigations, facilitate corrective measures, and further develop multi-barrier approaches.
Fault and event tree analyses for process systems risk analysis: uncertainty handling formulations.
Ferdous, Refaul; Khan, Faisal; Sadiq, Rehan; Amyotte, Paul; Veitch, Brian
2011-01-01
Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event (ETA) and fault tree analyses (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed. © 2010 Society for Risk Analysis.
Uncertainty analysis in fault tree models with dependent basic events.
Pedroni, Nicola; Zio, Enrico
2013-06-01
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects on the TE probability of objective and epistemic dependences. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present). © 2012 Society for Risk Analysis.
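For intuition, the Fréchet bounds used to envelope unknown objective dependence are simple to compute; a minimal sketch for two basic events with marginal probabilities p and q:

```python
def frechet_and(p, q):
    # Bounds on P(A and B) for any dependence structure.
    return max(0.0, p + q - 1.0), min(p, q)

def frechet_or(p, q):
    # Bounds on P(A or B) for any dependence structure.
    return max(p, q), min(1.0, p + q)

p, q = 0.1, 0.2
print(frechet_and(p, q))  # (0.0, 0.1); independence would give 0.02
print(frechet_or(p, q))   # (0.2, 0.3); independence would give 0.28
```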
[The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].
Liu, Hongbin
2015-11-01
In this paper, the traditional fault tree analysis method is presented, with detailed instructions for the characteristics of its application in medical instrument maintenance. Significant changes are made when the traditional fault tree analysis method is introduced into medical instrument maintenance: the logic symbols, logic analysis, and calculation, together with their complicated procedures, are given up, and only the intuitive and practical fault tree diagram is kept. The fault tree diagram itself also differs: the fault tree is no longer a logical tree but a thinking tree for troubleshooting, the definition of the fault tree's nodes is different, and the composition of the fault tree's branches is also different.
NASA Technical Reports Server (NTRS)
Lee, Charles; Alena, Richard L.; Robinson, Peter
2004-01-01
We started from an ISS fault tree example to migrate to decision trees, and we present a method for converting fault trees to decision trees. The method shows that visualization of the root cause of a fault is easier and that tree manipulation becomes more programmatic via available decision tree programs. The visualization of decision trees for diagnostics is straightforward and easy to understand. For ISS real-time fault diagnosis, the status of the systems can be shown by running the signals through the trees and seeing where they stop. Another advantage of decision trees is that they can learn fault patterns and predict future faults from historic data. The learning is not only on static data sets but can also be done online: by accumulating real-time data sets, the decision trees can gain and store fault patterns and recognize them when they recur.
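As a rough sketch of the learning idea, a decision tree can be fitted to historic pass/fail telemetry and then used to classify a new snapshot; the sensor flags, labels, and data below are hypothetical, and scikit-learn is assumed as the learner.

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: [pressure_ok, temperature_ok, flow_ok] as 0/1 flags.
X = [[1, 1, 1], [0, 1, 1], [1, 0, 1],
     [0, 0, 1], [1, 1, 0], [0, 1, 0]]
y = ["nominal", "pump_fault", "cooling_fault",
     "pump_fault", "valve_fault", "pump_fault"]

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[1, 0, 0]]))  # diagnose a new telemetry snapshot
```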
NASA Astrophysics Data System (ADS)
Sanchez-Vila, X.; de Barros, F.; Bolster, D.; Nowak, W.
2010-12-01
Assessing the potential risk of hydro(geo)logical supply systems to human population is an interdisciplinary field. It relies on the expertise in fields as distant as hydrogeology, medicine, or anthropology, and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties in hydrological, physiological and human behavioral parameters. We propose the use of fault trees to address the task of probabilistic risk analysis (PRA) and to support related management decisions. Fault trees allow decomposing the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural “Divide and Conquer” approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance and stage of analysis. The separation in modules allows for a true inter- and multi-disciplinary approach. This presentation highlights the three novel features of our work: (1) we define failure in terms of risk being above a threshold value, whereas previous studies used auxiliary events such as exceedance of critical concentration levels, (2) we plot an integrated fault tree that handles uncertainty in both hydrological and health components in a unified way, and (3) we introduce a new form of stochastic fault tree that allows to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
Bodin, Paul; Bilham, Roger; Behr, Jeff; Gomberg, Joan; Hudnut, Kenneth W.
1994-01-01
Five out of six functioning creepmeters on southern California faults recorded slip triggered at the time of some or all of the three largest events of the 1992 Landers earthquake sequence. Digital creep data indicate that dextral slip was triggered within 1 min of each mainshock and that maximum slip velocities occurred 2 to 3 min later. The duration of triggered slip events ranged from a few hours to several weeks. We note that triggered slip occurs commonly on faults that exhibit fault creep. To account for the observation that slip can be triggered repeatedly on a fault, we propose that the amplitude of triggered slip may be proportional to the depth of slip in the creep event and to the available near-surface tectonic strain that would otherwise eventually be released as fault creep. We advance the notion that seismic surface waves, perhaps amplified by sediments, generate transient local conditions that favor the release of tectonic strain to varying depths. Synthetic strain seismograms are presented that suggest increased pore pressure during periods of fault-normal contraction may be responsible for triggered slip, since maximum dextral shear strain transients correspond to times of maximum fault-normal contraction.
Automatic translation of digraph to fault-tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.
1992-01-01
The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
Schwartz, D.P.; Pantosti, D.; Okumura, K.; Powers, T.J.; Hamilton, J.C.
1998-01-01
Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that lower slip sections of large events such as 1906 must fill in on a periodic basis with smaller and more frequent earthquakes.
Shi, Lei; Shuai, Jian; Xu, Kui
2014-08-15
Fire and explosion accidents of steel oil storage tanks (FEASOST) occur occasionally during petroleum and chemical industry production and storage processes and often have a devastating impact on lives, the environment, and property. To contribute towards the development of a quantitative approach for assessing the occurrence probability of FEASOST, a fault tree of FEASOST is constructed that identifies various potential causes. Traditional fault tree analysis (FTA) can achieve quantitative evaluation if the failure data of all of the basic events (BEs) are available, which is almost impossible due to the lack of detailed data, as well as other uncertainties. This paper makes an attempt to perform FTA of FEASOST by hybridizing an expert-elicitation-based improved analytic hierarchy process (AHP) with fuzzy set theory, and the occurrence possibility of FEASOST is estimated for an oil depot in China. A comparison between statistical data and calculated data using fuzzy fault tree analysis (FFTA) based on traditional and improved AHP is also made. Sensitivity and importance analysis has been performed to identify the most crucial BEs leading to FEASOST, which will provide insights into how managers should focus effective mitigation. Copyright © 2014 Elsevier B.V. All rights reserved.
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance
NASA Astrophysics Data System (ADS)
Wang, Jian; Yang, Zhenwei; Kang, Mei
2018-01-01
This paper attempts to apply the fault tree analysis method to corrective maintenance of grid communication systems. Through the establishment of a fault tree model of a typical system and the use of engineering experience, fault tree analysis theory is used to analyze the model, covering structure functions, probability importance, and so on. The results show that fault tree analysis enables fast fault localization and effective repair of the system. The fault tree analysis method also offers guidance for researching and upgrading the reliability of the system.
Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand
2018-05-09
This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), the probabilities of all basic events (BEs) must be available when the FTA is drawn. In such cases, expert judgment can serve as an alternative to failure data. The fuzzy analytical hierarchy process is used as a standard technique to give a specific weight to each expert, and fuzzy set theory is employed to aggregate the expert opinions. In this way, the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost, and benefit), the importance measurement technique and a modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
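A minimal sketch of the aggregation step, assuming triangular fuzzy estimates from three experts and fuzzy-AHP-style credibility weights (all numbers hypothetical); the weighted estimates are combined component-wise and defuzzified to a crisp basic event probability.

```python
experts = [
    ((0.01, 0.03, 0.06), 0.5),   # (triangular estimate, expert weight)
    ((0.02, 0.04, 0.08), 0.3),
    ((0.01, 0.02, 0.04), 0.2),
]

# Weighted component-wise aggregation (weights sum to 1).
aggregated = tuple(sum(est[i] * w for est, w in experts) for i in range(3))

# Centroid defuzzification gives a crisp basic event probability.
crisp = sum(aggregated) / 3.0
print(aggregated, crisp)
```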
NASA Astrophysics Data System (ADS)
Guan, Yifeng; Zhao, Jie; Shi, Tengfei; Zhu, Peipei
2016-09-01
In recent years, China's increased interest in environmental protection has led to the promotion of energy-efficient dual fuel (diesel/natural gas) ships on Chinese inland rivers. Natural gas as a ship fuel may pose dangers of fire and explosion if a gas leak occurs, and explosions or fires in the engine rooms of a ship incur heavy damage and losses. In this paper, a fault tree model is presented that considers both fires and explosions in a dual fuel ship, with fire and explosion in the dual fuel engine rooms as the top events. All the basic events along with the minimal cut sets are obtained through the analysis. The primary factors that affect accidents involving fires and explosions are determined by calculating the structural importance of the basic events. According to these results, corresponding measures are proposed to ensure and improve the safety and reliability of Chinese inland dual fuel ships.
Fault Tree in the Trenches, A Success Story
NASA Technical Reports Server (NTRS)
Long, R. Allen; Goodson, Amanda (Technical Monitor)
2000-01-01
Getting caught up in the explanation of Fault Tree Analysis (FTA) minutiae is easy. In fact, most FTA literature tends to address FTA concepts and methodology. Yet there seems to be few articles addressing actual design changes resulting from the successful application of fault tree analysis. This paper demonstrates how fault tree analysis was used to identify and solve a potentially catastrophic mechanical problem at a rocket motor manufacturer. While developing the fault tree given in this example, the analyst was told by several organizations that the piece of equipment in question had been evaluated by several committees and organizations, and that the analyst was wasting his time. The fault tree/cutset analysis resulted in a joint-redesign of the control system by the tool engineering group and the fault tree analyst, as well as bragging rights for the analyst. (That the fault tree found problems where other engineering reviews had failed was not lost on the other engineering groups.) Even more interesting was that this was the analyst's first fault tree which further demonstrates how effective fault tree analysis can be in guiding (i.e., forcing) the analyst to take a methodical approach in evaluating complex systems.
Timing of late Holocene surface rupture of the Wairau Fault, Marlborough, New Zealand
Zachariasen, J.; Berryman, K.; Langridge, Rob; Prentice, C.; Rymer, M.; Stirling, M.; Villamor, P.
2006-01-01
Three trenches excavated across the central portion of the right-lateral strike-slip Wairau Fault in South Island, New Zealand, exposed a complex set of fault strands that have displaced a sequence of late Holocene alluvial and colluvial deposits. Abundant charcoal fragments provide age control for various stratigraphic horizons dating back to c. 5610 yr ago. Faulting relations from the Wadsworth trench show that the most recent surface rupture event occurred at least 1290 yr and at most 2740 yr ago. Drowned trees in landslide-dammed Lake Chalice, in combination with charcoal from the base of an unfaulted colluvial wedge at Wadsworth trench, suggest a narrower time bracket for this event of 1811-2301 cal. yr BP. The penultimate faulting event occurred between c. 2370 and 3380 yr, and possibly near 2680 ± 60 cal. yr BP, when data from both the Wadsworth and Dillon trenches are combined. Two older events have been recognised from Dillon trench but remain poorly dated. A probable elapsed time of at least 1811 yr since the last surface rupture, and an average slip rate estimate for the Wairau Fault of 3-5 mm/yr, suggests that at least 5.4 m and up to 11.5 m of elastic shear strain has accumulated since the last rupture. This is near to or greater than the single-event displacement estimates of 5-7 m. The average recurrence interval for surface rupture of the fault determined from the trench data is 1150-1400 yr. Although the uncertainties in the timing of faulting events and variability in inter-event times remain high, the time elapsed since the last event is in the order of 1-2 times the average recurrence interval, implying that the Wairau Fault is near the end of its interseismic period. © The Royal Society of New Zealand 2006.
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation of the influent, inherent variability in the treatment processes, deficiencies in design and mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed for system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the reliability problem was studied at the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence, with mechanical, climate, and sewer system factors in the subsequent tier. The literature shows that FTA has seldom been used in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
Reliability analysis of the solar array based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Jianing, Wu; Shaoze, Yan
2011-07-01
The solar array is an important device used in spacecraft, and it influences the quality of in-orbit operation of the spacecraft and even the success of launches. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links in the solar array. By analyzing the structural importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is important for preventing faults. Furthermore, recommendations for improving reliability through damage limitation are discussed, which can be used for redesigning the solar array and for reliability growth planning.
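Structural importance of the kind computed here can be illustrated by enumeration: a component is structurally important in proportion to how many states of the other components make it critical. A minimal sketch with a hypothetical three-component tree, TOP = HINGE OR (SPRING AND SEAL), not the DFH-3 model itself:

```python
from itertools import product

def top(hinge, spring, seal):
    return hinge or (spring and seal)

NAMES = ["hinge", "spring", "seal"]

def structural_importance(index):
    others = [i for i in range(3) if i != index]
    critical = 0
    for vals in product([False, True], repeat=2):
        state = [None, None, None]
        for i, v in zip(others, vals):
            state[i] = v
        state[index] = True
        fails_if_failed = top(*state)
        state[index] = False
        fails_anyway = top(*state)
        if fails_if_failed and not fails_anyway:
            critical += 1            # component is critical in this state
    return critical / 4.0            # 2**(n-1) states of the others

for i, name in enumerate(NAMES):
    print(name, structural_importance(i))   # hinge 0.75, others 0.25
```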
Tutorial: Advanced fault tree applications using HARP
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.
1993-01-01
Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
Technology transfer by means of fault tree synthesis
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Since Fault Tree Analysis (FTA) attempts to model and analyze failure processes in engineering systems, it is a common technique of good industrial practice. By contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a Knowledge Base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the later stages of an industrial process with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage, where the fault has been detected, (ii) the formation of an interface made of input faults that might occur upstream, (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance, and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus proving the functionality of the developed methodology.
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system and computer readable medium extends the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems, by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
Conversion of Questionnaire Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
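A minimal sketch of such a conversion, using an illustrative log-scale table in the spirit of the human-reliability scales cited; the actual values used by the survey methodology are not given here, so these numbers are assumptions.

```python
# Assumed adjectival-rating-to-failure-probability map (log scale).
RATING_TO_FAILURE_PROB = {
    "perfect":           1e-5,  # near-zero risk of failure
    "well":              1e-4,
    "adequate":          1e-3,
    "needs improvement": 1e-2,
    "not performed":     1.0,   # task in a state of failure
}

def basic_event_probability(rating: str) -> float:
    return RATING_TO_FAILURE_PROB[rating.lower()]

print(basic_event_probability("Adequate"))  # 0.001
```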
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems relies on model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models of the systems either from mathematical formulations or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. When an error occurs, fault diagnosis is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made based on threshold values using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time, capturing the contents of fault trees as the initial state of the trees.
Fault trees and sequence dependencies
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.
1990-01-01
One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
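To make the Markov-solution point concrete, here is a minimal sketch (not the authors' tool) of one sequence-dependent gate: a priority-AND gate that fails only if component A fails before component B. The failure rates are assumed for illustration; a static AND gate, by contrast, would ignore the failure order entirely.

```python
# A priority-AND gate (output fails only if A fails before B) solved with
# a small continuous-time Markov chain via explicit Euler integration.
# Rates are hypothetical.

lam_a, lam_b = 1e-3, 2e-3   # failure rates per hour (assumed)
dt, t_end = 0.1, 10_000.0

# States: 0 = both up, 1 = A failed first (B up), 2 = B failed first
# (gate can no longer fail), 3 = gate failed (A then B).
p = [1.0, 0.0, 0.0, 0.0]
t = 0.0
while t < t_end:
    d0 = -(lam_a + lam_b) * p[0]
    d1 = lam_a * p[0] - lam_b * p[1]
    d2 = lam_b * p[0]
    d3 = lam_b * p[1]
    p = [p[i] + dt * d for i, d in enumerate((d0, d1, d2, d3))]
    t += dt

print(f"P(priority-AND fails by {t_end:.0f} h) = {p[3]:.4f}")
# A static AND gate would give P(A)*P(B) and ignore the failure order;
# here the order matters: the limit is lam_a/(lam_a+lam_b) = 1/3.
```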
Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof
2009-04-01
Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
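A minimal sketch of the Monte Carlo step described above: uncertain failure rates and downtimes for quantity and quality failures are sampled and propagated to a Customer Minutes Lost estimate. All distributions and parameters here are invented; the study's actual fault tree and data are far richer.

```python
# Monte Carlo propagation of uncertain failure rates and downtimes to a
# CML estimate. Distributions and parameters are invented for illustration.

import random

random.seed(1)
N_SAMPLES = 10_000

cml_samples = []
for _ in range(N_SAMPLES):
    # Quantity failure: uncertain frequency (events/year) and downtime (min).
    quantity_rate = random.lognormvariate(0.0, 0.5)       # centred near 1/yr
    quantity_downtime = random.triangular(60, 600, 180)   # minutes per event
    # Quality failure: water delivered but non-compliant.
    quality_rate = random.lognormvariate(-1.0, 0.5)
    quality_downtime = random.triangular(30, 2880, 360)
    affected_fraction = random.uniform(0.05, 0.5)         # share of customers hit
    cml = (quantity_rate * quantity_downtime
           + quality_rate * quality_downtime) * affected_fraction
    cml_samples.append(cml)   # minutes lost per customer per year

cml_samples.sort()
mean_cml = sum(cml_samples) / N_SAMPLES
print(f"mean CML: {mean_cml:.0f} min/customer/yr, "
      f"95th percentile: {cml_samples[int(0.95 * N_SAMPLES)]:.0f} min")
```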
McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L
The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.
Managing Risk to Ensure a Successful Cassini/Huygens Saturn Orbit Insertion (SOI)
NASA Technical Reports Server (NTRS)
Witkowski, Mona M.; Huh, Shin M.; Burt, John B.; Webster, Julie L.
2004-01-01
I. Design: a) S/C designed to be largely single fault tolerant; b) Operate in flight demonstrated envelope, with margin; and c) Strict compliance with requirements & flight rules. II. Test: a) Baseline, fault & stress testing using flight system testbeds (H/W & S/W); b) In-flight checkout & demos to remove first time events. III. Failure Analysis: a) Critical event driven fault tree analysis; b) Risk mitigation & development of contingencies. IV. Residual Risks: a) Accepted pre-launch waivers to Single Point Failures; b) Unavoidable risks (e.g. natural disaster). V. Mission Assurance: a) Strict process for characterization of variances (ISAs, PFRs & Waivers); b) Full time Mission Assurance Manager reports to Program Manager: 1) Independent assessment of compliance with institutional standards; 2) Oversight & risk assessment of ISAs, PFRs & Waivers etc.; and 3) Risk Management Process facilitator.
NASA Astrophysics Data System (ADS)
Riyadi, Eko H.
2014-09-01
An initiating event is defined as any event, either internal or external to the nuclear power plants (NPPs), that perturbs the steady state operation of the plant, if operating, thereby initiating an abnormal event such as a transient or loss of coolant accident (LOCA) within the NPPs. These initiating events trigger sequences of events that challenge plant control and safety systems, whose failure could potentially lead to core damage or large early release. Selection of initiating events consists of two steps: first, definition of possible events, such as by evaluating a comprehensive engineering assessment and by constructing a top-level logic model; second, grouping of identified initiating events by the safety function to be performed or by combinations of system responses. The purpose of this paper is to discuss initiating event identification in the event tree development process and to review other probabilistic safety assessments (PSA). The identification of initiating events also draws on past operating experience, review of other PSAs, failure mode and effect analysis (FMEA), feedback from system modeling, and the master logic diagram (a special type of fault tree). By applying this method to the traditional US PSA categorization in detail, the important initiating events can be obtained and categorized into LOCA, transients and external events.
Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua
2015-01-01
The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.
Investigation of Fuel Oil/Lube Oil Spray Fires On Board Vessels. Volume 3.
1998-11-01
U.S. Coast Guard Research and Development Center, 1082 Shennecossett Road, Groton, CT 06340-6096. Report No. CG-D-01-99, III. Investigation of Fuel ... refinery). Developed the technical and mathematical specifications for BRAVO™ 2.0, a state-of-the-art Windows program for performing event tree and fault tree analyses. Also managed the development of and prepared the technical specifications for QRA ROOTS™, a Windows program for storing, searching ...
Interim reliability evaluation program, Browns Ferry 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1981-01-01
Probabilistic risk analysis techniques, i.e., event tree and fault tree analysis, were utilized to provide a risk assessment of the Browns Ferry Nuclear Plant Unit 1. Browns Ferry 1 is a General Electric boiling water reactor of the BWR 4 product line with a Mark 1 (drywell and torus) containment. Within the guidelines of the IREP Procedure and Schedule Guide, dominant accident sequences that contribute to public health and safety risks were identified and grouped according to release categories.
A dynamic fault tree model of a propulsion system
NASA Technical Reports Server (NTRS)
Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila
2006-01-01
We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing
2017-01-14
In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, the staff generally lack professional knowledge, and these areas receive a low degree of attention. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logic structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while the one symptom-to-two faults pattern performs less well but is still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.
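A toy sketch of the symptom-to-fault mapping idea, assuming invented sensors and faults: symptom readings are fuzzified into membership degrees, and a one-layer network is trained on (symptom, fault) pairs distilled from hypothetical fault-tree rules. The paper's actual fuzzy neural network is more elaborate.

```python
# Fuzzify sensor symptoms, then train a one-layer network on pairs
# distilled from hypothetical fault-tree rules. All data are invented.

import math, random

random.seed(0)

def mu_low(x):   # fuzzy membership: "reading is low"
    return max(0.0, min(1.0, (20.0 - x) / 20.0))

def mu_high(x):  # fuzzy membership: "reading is high"
    return max(0.0, min(1.0, (x - 20.0) / 20.0))

def fuzzify(oxygen, temp):
    return [mu_low(oxygen), mu_high(oxygen), mu_low(temp), mu_high(temp)]

# Training pairs derived from hypothetical fault-tree rules, e.g.
# "low oxygen AND high temp -> aerator fault".
data = [((5.0, 35.0), [1.0, 0.0]),    # aerator fault
        ((30.0, 36.0), [0.0, 1.0]),   # sensor drift fault
        ((5.0, 34.0), [1.0, 0.0]),
        ((32.0, 38.0), [0.0, 1.0])]

w = [[random.uniform(-0.1, 0.1) for _ in range(4)] for _ in range(2)]
for _ in range(2000):                 # plain gradient descent
    for obs, target in data:
        x = fuzzify(*obs)
        for k in range(2):
            y = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w[k], x))))
            err = y - target[k]
            w[k] = [wi - 0.5 * err * y * (1 - y) * xi for wi, xi in zip(w[k], x)]

x = fuzzify(6.0, 36.0)   # new observation: low oxygen, high temperature
scores = [1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w[k], x))))
          for k in range(2)]
print(f"aerator fault score {scores[0]:.2f}, sensor drift score {scores[1]:.2f}")
```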
A fault is born: The Landers-Mojave earthquake line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nur, A.; Ron, H.
1993-04-01
The epicenter and the southern portion of the 1992 Landers earthquake fell on an approximately N-S earthquake line, defined by both epicentral locations and by the rupture directions of four previous M>5 earthquakes in the Mojave: the 1947 Manix, 1975 Galway Lake, 1979 Homestead Valley, and 1992 Joshua Tree events. Another M 5.2 earthquake epicenter in 1965 fell on this line where it intersects the Calico fault. In contrast, the northern part of the Landers rupture followed the NW-SE trending Camp Rock and parallel faults, exhibiting an apparently unusual rupture kink. The block tectonic model (Ron et al., 1984), combining fault kinematics and mechanics, explains both the alignment of the events and their ruptures (Nur et al., 1986, 1989), as well as the Landers kink (Nur et al., 1992). Accordingly, the now NW oriented faults have rotated into their present direction away from the direction of maximum shortening, close to becoming locked, whereas a new fault set, optimally oriented relative to the direction of shortening, is developing to accommodate current crustal deformation. The Mojave-Landers line may thus be a new fault in formation. During the transition of faulting from the old, well developed and weak but poorly oriented faults to the strong but favorably oriented new ones, both can slip simultaneously, giving rise to kinks such as Landers.
COMCAN: a computer program for common cause analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdick, G.R.; Marshall, N.H.; Wilson, J.R.
1976-05-01
The computer program, COMCAN, searches the fault tree minimal cut sets for shared susceptibility to various secondary events (common causes) and common links between components. In the case of common causes, a location check may also be performed by COMCAN to determine whether barriers to the common cause exist between components. The program can locate common manufacturers of components having events in the same minimal cut set. A relative ranking scheme for secondary event susceptibility is included in the program.
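A minimal sketch of the COMCAN-style search described above: each minimal cut set is scanned for susceptibilities shared by every event in the set and for a common manufacturer. The component names, susceptibility table, and cut sets are invented for illustration.

```python
# Scan minimal cut sets for shared common-cause susceptibilities and
# common manufacturers. All tables below are invented.

# Component -> set of common-cause susceptibilities.
susceptibility = {
    "pump_A": {"flood", "vibration"},
    "pump_B": {"flood", "dust"},
    "valve_C": {"vibration"},
}
manufacturer = {"pump_A": "Acme", "pump_B": "Acme", "valve_C": "Zenith"}

minimal_cut_sets = [{"pump_A", "pump_B"}, {"pump_A", "valve_C"}]

for cut_set in minimal_cut_sets:
    # Common causes: susceptibilities shared by every event in the set.
    shared = set.intersection(*(susceptibility[c] for c in cut_set))
    makers = {manufacturer[c] for c in cut_set}
    print(f"{sorted(cut_set)}: common causes {sorted(shared) or None}, "
          f"common manufacturer {makers.pop() if len(makers) == 1 else None}")
```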
Master Logic Diagram: method for hazard and initiating event identification in process plants.
Papazoglou, I A; Aneziris, O N
2003-02-28
Master Logic Diagram (MLD), a method for identifying events initiating accidents in chemical installations, is presented. MLD is a logic diagram that resembles a fault tree but without the formal mathematical properties of the latter. MLD starts with a Top Event "Loss of Containment" and decomposes it into simpler contributing events. A generic MLD has been developed which may be applied to all chemical installations storing toxic and/or flammable substances. The method is exemplified through its application to an ammonia storage facility.
A Flexible Hierarchical Bayesian Modeling Technique for Risk Analysis of Major Accidents.
Yu, Hongyang; Khan, Faisal; Veitch, Brian
2017-09-01
Safety analysis of rare events with potentially catastrophic consequences is challenged by data scarcity and uncertainty. Traditional causation-based approaches, such as fault tree and event tree analysis (used to model rare events), suffer from a number of weaknesses. These include the static structure of the event causation, lack of event occurrence data, and need for reliable prior information. In this study, a new hierarchical Bayesian modeling based technique is proposed to overcome these drawbacks. The proposed technique can be used as a flexible technique for risk analysis of major accidents. It enables both forward and backward analysis in quantitative reasoning and the treatment of interdependence among the model parameters. Source-to-source variability in data sources is also taken into account through a robust probabilistic safety analysis. The applicability of the proposed technique has been demonstrated through a case study in marine and offshore industry. © 2017 Society for Risk Analysis.
Systems Theoretic Process Analysis Applied to an Offshore Supply Vessel Dynamic Positioning System
2016-06-01
additional safety issues that were either not identified or inadequately mitigated through the use of Fault Tree Analysis and Failure Modes and Effects Analysis. (Table-of-contents residue: Hazard Analysis Techniques; Fault Tree Analysis; Fault Tree Analysis Comparison.)
An overview of the phase-modular fault tree approach to phased mission system analysis
NASA Technical Reports Server (NTRS)
Meshkat, L.; Xing, L.; Donohue, S. K.; Ou, Y.
2003-01-01
In this paper we look at how fault tree analysis (FTA), a primary means of performing reliability analysis of phased mission systems (PMS), can meet this challenge by presenting an overview of the modular approach to solving fault trees that represent PMS.
Fault Tree Analysis: A Research Tool for Educational Planning. Technical Report No. 1.
ERIC Educational Resources Information Center
Alameda County School Dept., Hayward, CA. PACE Center.
This ESEA Title III report describes fault tree analysis and assesses its applicability to education. Fault tree analysis is an operations research tool which is designed to increase the probability of success in any system by analyzing the most likely modes of failure that could occur. A graphic portrayal, which has the form of a tree, is…
Fault-zone waves observed at the southern Joshua Tree earthquake rupture zone
Hough, S.E.; Ben-Zion, Y.; Leary, P.
1994-01-01
Waveform and spectral characteristics of several aftershocks of the M 6.1 22 April 1992 Joshua Tree earthquake recorded at stations just north of the Indio Hills in the Coachella Valley can be interpreted in terms of waves propagating within narrow, low-velocity, high-attenuation, vertical zones. Evidence for our interpretation consists of: (1) emergent P arrivals prior to and opposite in polarity to the impulsive direct phase; these arrivals can be modeled as headwaves indicative of a transfault velocity contrast; (2) spectral peaks in the S wave train that can be interpreted as internally reflected, low-velocity fault-zone wave energy; and (3) spatial selectivity of event-station pairs at which these data are observed, suggesting a long, narrow geologic structure. The observed waveforms are modeled using the analytical solution of Ben-Zion and Aki (1990) for a plane-parallel layered fault-zone structure. Synthetic waveform fits to the observed data indicate the presence of NS-trending vertical fault-zone layers characterized by a thickness of 50 to 100 m, a velocity decrease of 10 to 15% relative to the surrounding rock, and a P-wave quality factor in the range 25 to 50.
Software For Fault-Tree Diagnosis Of A System
NASA Technical Reports Server (NTRS)
Iverson, Dave; Patterson-Hine, Ann; Liao, Jack
1993-01-01
Fault Tree Diagnosis System (FTDS) computer program is automated-diagnostic-system program identifying likely causes of specified failure on basis of information represented in system-reliability mathematical models known as fault trees. Is modified implementation of failure-cause-identification phase of Narayanan's and Viswanadham's methodology for acquisition of knowledge and reasoning in analyzing failures of systems. Knowledge base of if/then rules replaced with object-oriented fault-tree representation. Enhancement yields more-efficient identification of causes of failures and enables dynamic updating of knowledge base. Written in C language, C++, and Common LISP.
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach intends to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.
Fault tree models for fault tolerant hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Tuazon, Jezus O.
1991-01-01
Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie
2006-01-01
A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exists. The approach is scalable, allowing inclusion of additional information as detailed data becomes available. The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
Product Support Manager Guidebook
2011-04-01
package is being developed using supportability analysis concepts such as Failure Mode, Effects and Criticality Analysis (FMECA), Fault Tree Analysis (FTA), Level of Repair Analysis (LORA), Condition Based Maintenance+ (CBM+), Maintenance Task Analysis (MTA), and Failure Reporting and Corrective Action System (FRACAS).
Adaptive Sampling using Support Vector Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. Mandelli; C. Smith
2012-11-01
Reliability/safety analysis of stochastic dynamic systems (e.g., nuclear power plants, airplanes, chemical plants) is currently performed through a combination of Event-Trees and Fault-Trees. However, these conventional methods suffer from certain drawbacks: timing of events is not explicitly modeled; ordering of events is preset by the analyst; and the modeling of complex accident scenarios is driven by expert judgment. For these reasons, there is currently an increasing interest in the development of dynamic PRA methodologies, since they can be used to address the deficiencies of conventional methods listed above.
NASA Astrophysics Data System (ADS)
Krechowicz, Maria
2017-10-01
Nowadays, one of the characteristic features of the construction industry is the increased complexity of a growing number of projects. Almost each construction project is unique: it has its project-specific purpose, its own structural complexity, owner's expectations, ground conditions unique to a certain location, and its own dynamics. Failure costs and costs resulting from unforeseen problems in complex construction projects are very high. Project complexity drivers pose many vulnerabilities to the successful completion of a number of projects. This paper discusses the process of effective risk management in complex construction projects in which renewable energy sources were used, on the example of the realization phase of the ENERGIS teaching-laboratory building, from the point of view of DORBUD S.A., its general contractor. This paper suggests a new approach to risk management for complex construction projects in which renewable energy sources were applied. The risk management process was divided into six stages: gathering information, identification of the top critical project risks resulting from the project complexity, construction of a fault tree for each top critical risk, logical analysis of the fault tree, quantitative risk assessment applying fuzzy logic, and development of a risk response strategy. A new methodology for the qualitative and quantitative assessment of top critical risks in complex construction projects was developed. Risk assessment was carried out by applying Fuzzy Fault Tree analysis on the example of one top critical risk. Application of fuzzy set theory to the proposed model reduced uncertainty and eliminated the problems, common in expert risk assessment, of obtaining crisp values for basic event probabilities, with the objective of giving an exact risk score for the probability of each unwanted event.
Fault Tree Analysis: Its Implications for Use in Education.
ERIC Educational Resources Information Center
Barker, Bruce O.
This study introduces the concept of Fault Tree Analysis as a systems tool and examines the implications of Fault Tree Analysis (FTA) as a technique for isolating failure modes in educational systems. A definition of FTA and discussion of its history, as it relates to education, are provided. The step by step process for implementation and use of…
Fault Tree Analysis Application for Safety and Reliability
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in these tools identifies a root cause in system components, but when software is identified as a root cause, it does not build trees into the software component. No commercial software tools have been built specifically for the development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
Paleoearthquakes on the Denali-Totschunda Fault system: Preliminary Observations of Slip and Timing
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Denali Fault Earthquake Geology Wp, .
2003-12-01
Understanding the behavior of large strike-slip fault systems requires information about the amount of slip and timing of past earthquakes at different locations along a fault. A historical surface rupture adds a critically important baseline for calibration. During July 2003 we performed additional mapping of the 2002 Denali-Totschunda surface rupture with the goal of also measuring and dating slip during previous earthquakes. We were able to obtain slip values for prior events at a dozen locations along the Denali-Totschunda strike-slip rupture. We focused on the penultimate event, which is easiest to distinguish (slip from individual older events can eventually be measured). On the Denali fault just west of the intersection with the Susitna Glacier thrust, 2002 slip was low, 1.0 m to 1.5 m; cumulative slip from two events was 2.5-3.0 m, which is essentially double. On the 100-km-long section between Black Rapids Glacier and Gillett Pass, where 2002 slip averaged 5 m, three measurements indicate penultimate-event slip was about the same as 2002. The 7-8 m offset section east of Gillett Pass has the clearest paleoevent slip history. We measured three locations where 2002 slip was 7-8 m and cumulative offset on channels was 14.5-16 m. Along this section previous workers noted gullies with 15 m offsets before the 2002 earthquake, suggesting the past three events here had similar slip. On the Totschunda fault paleo offsets appear to be similar in amount to 2002. At one locality we measured 2.8 m in 2002 and 5.4 m for two events. A second site had 1.0-1.4 m of offset in 2002 and 3.1 m for two events. A third location yielded 3.3 m in 2002 and 10.8 m on a paleochannel, which could represent three events with similar slip. A location in the Denali-Totschunda transition zone had a 5-6 m-high scarp and a well-developed sag pond, indicating that this complex part of the fault system has been active in previous events. The major observation is that the paleo offset measurements, though presently limited in number, indicate that penultimate event slip was very similar to the 2002 offset along the length of the ruptured Denali and Totschunda faults, and may have been similar for at least a third event back. For most of its length the 2002 rupture is expressed as a narrow mole track (typically 1 m to 3 m wide), but locally it has produced pull-aparts and large fissures. These features contain a variety of organic deposits associated with the ground surface at the time of the penultimate earthquake(s) on the Denali and Totschunda faults. We sampled five of these, and recovered peat, pine needles, and trees that were toppled during the penultimate event(s). Including a test pit west of the Delta River, we have six sample sites that span the 5 m and 7-8 m rupture segments of the Denali, the Denali-Totschunda transition zone, and the Totschunda fault. Preliminary radiocarbon dates indicate that the timing of the penultimate event on the Denali fault is younger than 1400 to 1289 yr BP and may have occurred as recently as 520 to 310 yr BP. The penultimate event on the Totschunda fault occurred after 1340 to 1130 yr BP and most likely occurred shortly after 660 to 530 years BP. The Denali-Totschunda fault system is a remarkable laboratory, particularly in terms of preservation of fault geomorphology and organic material, for studying large strike-slip faults. These initial observations of paleoslip and event dates are the first steps in unraveling the behavior of this major strike-slip zone.
Denali Fault Earthquake Geology Working Group: T. Dawson, P. Haeussler, J. Lienkaemper, A. Matmon, D. Schwartz, H. Stenner, B. Sherrod (USGS); F. Cinti, P. Montone (INGV, Rome); G. Carver, G. Plafker (Alyeska)
Object-oriented fault tree models applied to system diagnosis
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.
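A minimal sketch, not the NASA implementation, of the object-oriented fault-tree idea: gate and basic-event objects replace if/then rules, and candidate failure causes are collected by traversing the objects directly rather than by a rule-base lookup. Names and probabilities are illustrative.

```python
# Object-oriented fault tree used for diagnosis: given an observed
# top-level failure, walk the tree and collect the basic events that
# could explain it, ranked by prior failure probability.

class BasicEvent:
    def __init__(self, name, p):
        self.name, self.p = name, p
    def causes(self):
        return [(self.name, self.p)]

class Gate:
    def __init__(self, kind, children):   # kind: "AND" or "OR"
        self.kind, self.children = kind, children
    def causes(self):
        # For diagnosis, both gate types pass candidate causes upward;
        # an AND gate implies all its children must have failed together.
        out = []
        for child in self.children:
            out.extend(child.causes())
        return out

top = Gate("OR", [
    BasicEvent("sensor_fault", 1e-2),
    Gate("AND", [BasicEvent("power_loss", 1e-3),
                 BasicEvent("backup_dead", 1e-2)]),
])

# Rank candidate causes of the observed failure by prior probability.
for name, p in sorted(top.causes(), key=lambda c: -c[1]):
    print(f"{name}: prior {p:.0e}")
```

Because the tree structure is held in objects, updating the knowledge base amounts to replacing or re-parenting nodes, rather than rewriting an indexed rule base.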
A fault tree model to assess probability of contaminant discharge from shipwrecks.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I
2014-11-15
Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Qualitative Importance Measures of Systems Components - A New Approach and Its Applications
NASA Astrophysics Data System (ADS)
Chybowski, Leszek; Gawdzińska, Katarzyna; Wiśnicki, Bogusz
2016-12-01
The paper presents an improved methodology of analysing the qualitative importance of components in the functional and reliability structures of the system. We present basic importance measures, i.e. Birnbaum's structural measure, the order of the smallest minimal cut-set, the repetition count of an i-th event in the Fault Tree and the streams measure. A subsystem of circulation pumps and fuel heaters in the main engine fuel supply system of a container vessel illustrates the qualitative importance analysis. We constructed a functional model and a Fault Tree which we analysed using qualitative measures. Additionally, we compared the calculated measures and introduced corrected measures as a tool for improving the analysis. We proposed scaled measures and a common measure taking into account the location of the component in the reliability and functional structures. Finally, we proposed an area where the measures could be applied.
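A minimal sketch of how some of the measures named above can be computed from a cut-set representation: the order of the smallest minimal cut set containing a component, its repetition count, and Birnbaum's structural measure (the fraction of states of the other components in which the component is critical). The component list and cut sets are invented.

```python
# Compute qualitative importance measures from invented minimal cut sets.

from itertools import product

components = ["pump1", "pump2", "heater"]
cut_sets = [{"pump1", "pump2"}, {"heater"}]

def system_fails(failed):
    # The top event occurs when some minimal cut set is fully failed.
    return any(cs <= failed for cs in cut_sets)

for comp in components:
    containing = [cs for cs in cut_sets if comp in cs]
    smallest = min(len(cs) for cs in containing)
    repetition = len(containing)
    # Birnbaum's structural measure: over all states of the other
    # components, count how often comp's failure changes the system state.
    others = [c for c in components if c != comp]
    critical = 0
    for bits in product([False, True], repeat=len(others)):
        failed = {c for c, b in zip(others, bits) if b}
        if system_fails(failed | {comp}) != system_fails(failed):
            critical += 1
    birnbaum = critical / 2 ** len(others)
    print(f"{comp}: smallest cut set order {smallest}, "
          f"repetitions {repetition}, Birnbaum {birnbaum:.2f}")
```

In this toy system the heater, a single-event cut set, scores highest on all three measures, which matches the intuition that it is the structurally most important component.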
Probabilistic fault tree analysis of a radiation treatment system.
Ekaette, Edidiong; Lee, Robert C; Cooke, David L; Iftody, Sandra; Craighead, Peter
2007-12-01
Inappropriate administration of radiation for cancer treatment can result in severe consequences such as premature death or appreciably impaired quality of life. There has been little study of vulnerable treatment process components and their contribution to the risk of radiation treatment (RT). In this article, we describe the application of probabilistic fault tree methods to assess the probability of radiation misadministration to patients at a large cancer treatment center. We conducted a systematic analysis of the RT process that identified four process domains: Assessment, Preparation, Treatment, and Follow-up. For the Preparation domain, we analyzed possible incident scenarios via fault trees. For each task, we also identified existing quality control measures. To populate the fault trees we used subjective probabilities from experts and compared results with incident report data. Both the fault tree and the incident report analysis revealed simulation tasks to be most prone to incidents, and the treatment prescription task to be least prone to incidents. The probability of a Preparation domain incident was estimated to be in the range of 0.1-0.7% based on incident reports, which is comparable to the mean value of 0.4% from the fault tree analysis using probabilities from the expert elicitation exercise. In conclusion, an analysis of part of the RT system using a fault tree populated with subjective probabilities from experts was useful in identifying vulnerable components of the system, and provided quantitative data for risk management.
Health Management Applications for International Space Station
NASA Technical Reports Server (NTRS)
Alena, Richard; Duncavage, Dan
2005-01-01
Traditional mission and vehicle management involves teams of highly trained specialists monitoring vehicle status and crew activities, responding rapidly to any anomalies encountered during operations. These teams work from the Mission Control Center and have access to engineering support teams with specialized expertise in International Space Station (ISS) subsystems. Integrated System Health Management (ISHM) applications can significantly augment these capabilities by providing enhanced monitoring, prognostic and diagnostic tools for critical decision support and mission management. The Intelligent Systems Division of NASA Ames Research Center is developing many prototype applications using model-based reasoning, data mining and simulation, working with Mission Control through the ISHM Testbed and Prototypes Project. This paper will briefly describe information technology that supports current mission management practice, and will extend this to a vision for future mission control workflow incorporating new ISHM applications. It will describe ISHM applications currently under development at NASA and will define technical approaches for implementing our vision of future human exploration mission management incorporating artificial intelligence and distributed web service architectures using specific examples. Several prototypes are under development, each highlighting a different computational approach. The ISStrider application allows in-depth analysis of Caution and Warning (C&W) events by correlating real-time telemetry with the logical fault trees used to define off-nominal events. The application uses live telemetry data and the Livingstone diagnostic inference engine to display the specific parameters and fault trees that generated the C&W event, allowing a flight controller to identify the root cause of the event from thousands of possibilities by simply navigating animated fault tree models on their workstation. SimStation models the functional power flow for the ISS Electrical Power System and can predict power balance for nominal and off-nominal conditions. SimStation uses real-time telemetry data to keep detailed computational physics models synchronized with actual ISS power system state. In the event of failure, the application can then rapidly diagnose root cause, predict future resource levels and even correlate technical documents relevant to the specific failure. These advanced computational models will allow better insight and more precise control of ISS subsystems, increasing safety margins by speeding up anomaly resolution and reducing engineering team effort and cost. This technology will make operating ISS more efficient and is directly applicable to next-generation exploration missions and Crew Exploration Vehicles.
Reconfigurable tree architectures using subtree oriented fault tolerance
NASA Technical Reports Server (NTRS)
Lowrie, Matthew B.
1987-01-01
An approach to the design of reconfigurable tree architecture is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures for both single chip and multichip tree implementations while reducing redundancy in terms of both spare processors and links. VLSI layout is O(n) for binary trees and is directly extensible to N-ary trees and fault tolerance through performance degradation.
Secure Embedded System Design Methodologies for Military Cryptographic Systems
2016-03-31
Fault-Tree Analysis (FTA); Built-In Self-Test (BIST). Introduction: Secure access-control systems restrict operations to authorized users via methods ... failures in the individual software/processor elements, the question of exactly how unlikely is difficult to answer. Fault-Tree Analysis (FTA) has a ... Collins of Sandia National Laboratories for years of sharing his extensive knowledge of Fail-Safe Design Assurance and Fault-Tree Analysis
TU-AB-BRD-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunscombe, P.
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk based quality management program. Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are all about; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis and fault tree analysis. Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
The 1992 Landers earthquake sequence; seismological observations
Egill Hauksson,; Jones, Lucile M.; Hutton, Kate; Eberhart-Phillips, Donna
1993-01-01
The (MW6.1, 7.3, 6.2) 1992 Landers earthquakes began on April 23 with the MW6.1 1992 Joshua Tree preshock and form the most substantial earthquake sequence to occur in California in the last 40 years. This sequence ruptured almost 100 km of both surficial and concealed faults and caused aftershocks over an area 100 km wide by 180 km long. The faulting was predominantly strike slip and three main events in the sequence had unilateral rupture to the north away from the San Andreas fault. The MW6.1 Joshua Tree preshock at 33°N58′ and 116°W19′ on 0451 UT April 23 was preceded by a tightly clustered foreshock sequence (M≤4.6) beginning 2 hours before the mainshock and followed by a large aftershock sequence with more than 6000 aftershocks. The aftershocks extended along a northerly trend from about 10 km north of the San Andreas fault, northwest of Indio, to the east-striking Pinto Mountain fault. The Mw7.3 Landers mainshock occurred at 34°N13′ and 116°W26′ at 1158 UT, June 28, 1992, and was preceded for 12 hours by 25 small M≤3 earthquakes at the mainshock epicenter. The distribution of more than 20,000 aftershocks, analyzed in this study, and short-period focal mechanisms illuminate a complex sequence of faulting. The aftershocks extend 60 km to the north of the mainshock epicenter along a system of at least five different surficial faults, and 40 km to the south, crossing the Pinto Mountain fault through the Joshua Tree aftershock zone towards the San Andreas fault near Indio. The rupture initiated in the depth range of 3–6 km, similar to previous M∼5 earthquakes in the region, although the maximum depth of aftershocks is about 15 km. The mainshock focal mechanism showed right-lateral strike-slip faulting with a strike of N10°W on an almost vertical fault. The rupture formed an arclike zone well defined by both surficial faulting and aftershocks, with more westerly faulting to the north. This change in strike is accomplished by jumping across dilational jogs connecting surficial faults with strikes rotated progressively to the west. A 20-km-long linear cluster of aftershocks occurred 10–20 km north of Barstow, or 30–40 km north of the end of the mainshock rupture. The most prominent off-fault aftershock cluster occurred 30 km to the west of the Landers mainshock. The largest aftershock was within this cluster, the Mw6.2 Big Bear aftershock occurring at 34°N10′ and 116°W49′ at 1505 UT June 28. It exhibited left-lateral strike-slip faulting on a northeast striking and steeply dipping plane. The Big Bear aftershocks form a linear trend extending 20 km to the northeast with a scattered distribution to the north. The Landers mainshock occurred near the southernmost extent of the Eastern California Shear Zone, an 80-km-wide, more than 400-km-long zone of deformation. This zone extends into the Death Valley region and accommodates about 10 to 20% of the plate motion between the Pacific and North American plates. The Joshua Tree preshock, its aftershocks, and Landers aftershocks form a previously missing link that connects the Eastern California Shear Zone to the southern San Andreas fault.
Planning effectiveness may grow on fault trees.
Chow, C W; Haddad, K; Mannino, B
1991-10-01
The first step of a strategic planning process--identifying and analyzing threats and opportunities--requires subjective judgments. By using an analytical tool known as a fault tree, healthcare administrators can reduce the unreliability of subjective decision making by creating a logical structure for problem solving and decision making. A case study of 11 healthcare administrators showed that an analysis technique called prospective hindsight can add to a fault tree's ability to improve a strategic planning process.
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Fault Tree Analysis (FTA) can be used for technology transfer when the relevant problem (called the 'top event' in FTA) is solved in a technology centre and the results are diffused to interested parties (usually Small and Medium Enterprises - SMEs) that do not have the proper equipment and the required know-how to solve the problem on their own. Nevertheless, there is a significant drawback in this procedure: the information usually provided by the SMEs to the technology centre, about production conditions and corresponding quality characteristics of the product, and (sometimes) the relevant expertise in the Knowledge Base of this centre may be inadequate to form a complete fault tree. Since such cases are quite frequent in practice, we have developed a methodology for transforming an incomplete fault tree into an Ishikawa diagram, which is more flexible and less strict in establishing causal chains, because it uses a surface phenomenological level with a limited number of categories of faults. On the other hand, such an Ishikawa diagram can be extended to simulate a fault tree as relevant knowledge increases. An implementation of this transformation, referring to the anodization of aluminium, is presented.
A systematic risk management approach employed on the CloudSat project
NASA Technical Reports Server (NTRS)
Basilio, R. R.; Plourde, K. S.; Lam, T.
2000-01-01
The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.
Fault Tree Analysis: A Bibliography
NASA Technical Reports Server (NTRS)
2000-01-01
Fault tree analysis is a top-down approach to the identification of process hazards. It is one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis and risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
Gerlach, T.M.; Doukas, M.P.; McGee, K.A.; Kessler, R.
1998-01-01
We used the closed chamber method to measure soil CO2 efflux over a three-year period at the Horseshoe Lake tree kill (HLTK) - the largest tree kill on Mammoth Mountain in central eastern California. Efflux contour maps show a significant decline in the areas and rates of CO2 emission from 1995 to 1997. The emission rate fell from 350 t d⁻¹ (metric tons per day) in 1995 to 130 t d⁻¹ in 1997. The trend suggests a return to background soil CO2 efflux levels by early to mid 1999 and may reflect exhaustion of CO2 in a deep reservoir of accumulated gas and/or mechanical closure or sealing of fault conduits transmitting gas to the surface. However, emissions rose to 220 t d⁻¹ on 23 September 1997 at the onset of a degassing event that lasted until 5 December 1997. Recent reservoir recharge and/or extension-enhanced gas flow may have caused the degassing event.
Fault Tree Analysis for an Inspection Robot in a Nuclear Power Plant
NASA Astrophysics Data System (ADS)
Ferguson, Thomas A.; Lu, Lixuan
2017-09-01
The life extension of current nuclear reactors has led to an increasing demand for inspection and maintenance of critical reactor components that are too expensive to replace. To reduce the exposure dosage to workers, robotics have become an attractive alternative as a preventative safety tool in nuclear power plants. It is crucial to understand the reliability of these robots in order to increase the veracity of and confidence in their results. This study presents a Fault Tree (FT) analysis of a coolant outlet pipe snake-arm inspection robot in a nuclear power plant. Fault trees were constructed for a qualitative analysis to determine the reliability of the robot. Insight on the applicability of fault tree methods for inspection robotics in the nuclear industry is gained through this investigation.
Locating hardware faults in a data communications network of a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-01-12
Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running a same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
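The claimed test logic reduces to a short recursion; a minimal sketch, assuming a toy tree and a stand-in test suite (neither is from the patent):

    # Sketch: locate a defective parent-to-child link in a tree-shaped network.
    tree = {"root": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}
    broken = {("a", "d")}  # the defective link (hypothetical)

    def links_in_subtree(tree, root):
        out = []
        for child in tree[root]:
            out.append((root, child))
            out += links_in_subtree(tree, child)
        return out

    def run_suite(tree, root):
        # Stand-in for the patent's communications test suite: it "fails"
        # exactly when the subtree rooted here contains the broken link.
        return not any(l in broken for l in links_in_subtree(tree, root))

    def locate(tree, parent):
        parent_ok = run_suite(tree, parent)
        children_ok = all(run_suite(tree, c) for c in tree[parent])
        if not parent_ok and children_ok:
            print("defective link attached to", parent)
        for c in tree[parent]:
            locate(tree, c)

    locate(tree, "root")   # -> defective link attached to a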
A Fuzzy Reasoning Design for Fault Detection and Diagnosis of a Computer-Controlled System
Ting, Y.; Lu, W.B.; Chen, C.H.; Wang, G.K.
2008-01-01
A Fuzzy Reasoning and Verification Petri Nets (FRVPNs) model is established for an error detection and diagnosis mechanism (EDDM) applied to a complex fault-tolerant PC-controlled system. The inference accuracy can be improved through the hierarchical design of a two-level fuzzy rule decision tree (FRDT) and a Petri nets (PNs) technique to transform the fuzzy rules into the FRVPNs model. Several simulation examples of the assumed failure events were carried out by using the FRVPNs and the Mamdani fuzzy method with MATLAB tools. The reasoning performance of the developed FRVPNs was verified by comparing the inference outcome to that of the Mamdani method. Both methods result in the same conclusions. Thus, the present study demonstrates that the proposed FRVPNs model is able to achieve the purpose of reasoning and, furthermore, of determining the failure event of the monitored application program. PMID:19255619
Monotone Boolean approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
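The bounding idea can be checked in a few lines (a toy structure function, not the report's algorithms): replacing every complemented literal by TRUE yields a monotone upper bound, and by FALSE a monotone lower bound.

    from itertools import product

    # Toy noncoherent structure function: TOP = (a AND NOT b) OR c
    def f(a, b, c):
        return (a and not b) or c

    # Monotone bounds: substitute the complemented literal NOT b by a constant
    def f_upper(a, b, c):
        return a or c          # NOT b -> True

    def f_lower(a, b, c):
        return c               # NOT b -> False

    for a, b, c in product([False, True], repeat=3):
        assert f_lower(a, b, c) <= f(a, b, c) <= f_upper(a, b, c)
    print("bounds verified on all 8 input combinations")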
Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
Fault diagnosis of power transformer based on fault-tree analysis (FTA)
NASA Astrophysics Data System (ADS)
Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu
2017-05-01
Power transformers are important equipment in power plants and substations, and an important hub in the transmission and distribution link of the power system. Their performance directly affects the quality, reliability, and stability of the power system. This paper first classifies power transformer faults into five types, then divides transformer faults into three stages along the time dimension, and uses routine dissolved gas analysis (DGA) and infrared diagnostic criteria to establish the running state of the power transformer. Finally, according to the needs of power transformer fault diagnosis, a power transformer fault tree is constructed by stepwise refinement from the general to the specific.
An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution
NASA Astrophysics Data System (ADS)
Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan
2013-04-01
The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies, including smoothed-seismicity approaches. Smoothed seismicity represents an alternative concept for expressing the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subduction zones. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: the first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density); the second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude, assuming that (1) the occurrence of past seismicity is a good proxy for the occurrence of future seismicity and (2) future large-magnitude events occur more likely in the vicinity of known faults. Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts; it provides rates of events in the magnitude range 5 <= m <= 8.5 for the entire region of interest and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
Earthquake Rupture Forecast of M>= 6 for the Corinth Rift System
NASA Astrophysics Data System (ADS)
Scotti, O.; Boiselet, A.; Lyon-Caen, H.; Albini, P.; Bernard, P.; Briole, P.; Ford, M.; Lambotte, S.; Matrullo, E.; Rovida, A.; Satriano, C.
2014-12-01
Fourteen years of multidisciplinary observations and data collection in the Western Corinth Rift (WCR) near-fault observatory have recently been synthesized (Boiselet, Ph.D. 2014) for the purpose of providing earthquake rupture forecasts (ERF) of M>=6 in the WCR. The main contribution of this work consisted of paving the road towards the development of a "community-based" fault model reflecting the level of knowledge gathered thus far by the WCR working group. The most relevant available data used for this exercise are: - onshore/offshore fault traces, based on geological and high-resolution seismics, revealing a complex network of E-W striking, ~10 km long fault segments; microseismicity recorded by a dense network ( > 60000 events; 1.5
A Case Study of a Combat Helicopter’s Single Unit Vulnerability.
1987-03-01
[Contents residue: 2.6 Generic Fault Tree Diagram; 2.7 Example Kill Diagram; 2.8 Example EEA Summary] Like that of the vulnerability program, a susceptibility program is subdivided into three major tasks. First is an essential elements analysis (EEA), which leads to the final undesired event in much the same manner as an FTA. An example EEA is provided in Figure 2.8. [Ref.1:p226]
System Analysis by Mapping a Fault-tree into a Bayesian-network
NASA Astrophysics Data System (ADS)
Sheng, B.; Deng, C.; Wang, Y. H.; Tang, L. H.
2018-05-01
In view of the limitations of fault tree analysis in reliability assessment, the Bayesian Network (BN) has been studied as an alternative technology. After a brief introduction to the method for mapping a Fault Tree (FT) into an equivalent BN, equations used to calculate the structure importance degree, the probability importance degree, and the critical importance degree are presented. Furthermore, the correctness of these equations is proved mathematically. In combination with an aircraft landing gear FT, an equivalent BN is developed and analysed. The results show that richer and more accurate information has been obtained through the BN method than through the FT, which demonstrates that the BN is a superior technique in both reliability assessment and fault diagnosis.
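For a flavour of the importance measures mentioned, the following sketch computes the probability (Birnbaum) and critical importance degrees by exact enumeration on an invented three-event tree (illustrative only; enumeration stands in for BN inference):

    from itertools import product

    # Toy fault tree: TOP = (A AND B) OR C; basic-event probabilities assumed
    p = {"A": 0.10, "B": 0.20, "C": 0.05}

    def top(s):
        return (s["A"] and s["B"]) or s["C"]

    def p_top(fixed=None):
        # Exact top-event probability, optionally conditioning on some events
        fixed = fixed or {}
        free = [e for e in p if e not in fixed]
        total = 0.0
        for vals in product([False, True], repeat=len(free)):
            s = dict(zip(free, vals), **fixed)
            pr = 1.0
            for e in free:
                pr *= p[e] if s[e] else 1.0 - p[e]
            if top(s):
                total += pr
        return total

    for e in p:
        birnbaum = p_top({e: True}) - p_top({e: False})   # probability importance
        critical = birnbaum * p[e] / p_top()              # critical importance
        print(e, round(birnbaum, 4), round(critical, 4))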
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattison, M.B.; Schroeder, J.A.; Russell, K.D.
The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.
Reset Tree-Based Optical Fault Detection
Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon
2013-01-01
In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267
Fault tree applications within the safety program of Idaho Nuclear Corporation
NASA Technical Reports Server (NTRS)
Vesely, W. E.
1971-01-01
Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure or accident causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.
Fault Tree Analysis as a Planning and Management Tool: A Case Study
ERIC Educational Resources Information Center
Witkin, Belle Ruth
1977-01-01
Fault Tree Analysis is an operations research technique used to analyse the most probable modes of failure in a system, in order to redesign or monitor the system more closely and so increase its likelihood of success. (Author)
Pet-Armacost, J J; Sepulveda, J; Sakude, M
1999-12-01
The US Department of Transportation was interested in the risks associated with transporting Hydrazine in tanks with and without relief devices. Hydrazine is both highly toxic and flammable, as well as corrosive. Consequently, there was a conflict as to whether a relief device should be used or not. Data were not available on the impact of relief devices on release probabilities or the impact of Hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of Hydrazine. To help determine whether or not relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport of Hydrazine. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were analyzed through a Monte Carlo sensitivity analysis and analyzed statistically through an analysis of variance. The analysis allowed the determination of which of the unknown parameters had a significant impact on the risks. It also provided the necessary support to a critical transportation decision even though the values of several key parameters were not known.
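The style of analysis can be sketched as follows (the event-tree structure and parameter ranges are invented placeholders, not the study's values):

    import random

    random.seed(1)

    def event_tree(p_release, p_ignite):
        # Toy event tree: release -> ignition -> fire; otherwise toxic exposure only
        p_fire = p_release * p_ignite
        p_toxic = p_release * (1 - p_ignite)
        return p_fire, p_toxic

    # Monte Carlo over the unknown parameters (ranges are hypothetical)
    N = 10000
    fires = []
    for _ in range(N):
        p_release = random.uniform(1e-4, 1e-2)
        p_ignite = random.uniform(0.01, 0.2)
        fires.append(event_tree(p_release, p_ignite)[0])

    fires.sort()
    print("median P(fire):", fires[N // 2], "95th pct:", fires[int(0.95 * N)])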
NASA Astrophysics Data System (ADS)
Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen
2014-05-01
We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity trying to explain the small scales, our method uses a bottom-up approach: it initially samples the small scales and then reduces the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical cluster (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Model (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. The kernels that become ill defined (fewer than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging any pair will lessen the likelihood (calculated over the pdf of the kernels) but in turn will reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, where the matrix element d_ij gives the distance between points i and j, we can construct an MDL gain/loss matrix where m_ij gives the information gain/loss resulting from the merging of kernels i and j. Based on this matrix, merging events resulting in MDL gain are performed in descending order until no gainful merging is possible anymore. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system, and we hope to use the acquired insights for earthquake forecasting.
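In outline the pipeline resembles the sketch below, on synthetic data; BIC is used as a convenient stand-in for the MDL criterion, and the cluster counts are arbitrary:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic epicentres scattered along two elongated "faults"
    pts = np.vstack([rng.normal([0, 0], [2.0, 0.1], (200, 2)),
                     rng.normal([1, 2], [0.1, 2.0], (200, 2))])

    Z = linkage(pts, method="ward")                    # agglomerative tree (Ward linkage)
    labels = fcluster(Z, t=8, criterion="maxclust")    # atomize at 8 clusters

    # Initialize a GMM at this level of the tree and let EM converge
    means = np.array([pts[labels == k].mean(axis=0) for k in np.unique(labels)])
    gmm = GaussianMixture(n_components=len(means), means_init=means).fit(pts)
    print("BIC (stand-in for MDL):", gmm.bic(pts))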
Fault Tree Analysis: An Emerging Methodology for Instructional Science.
ERIC Educational Resources Information Center
Wood, R. Kent; And Others
1979-01-01
Describes Fault Tree Analysis, a tool for systems analysis which attempts to identify possible modes of failure in systems to increase the probability of success. The article defines the technique and presents the steps of FTA construction, focusing on its application to education. (RAO)
NASA Astrophysics Data System (ADS)
Rodríguez-Escales, Paula; Canelles, Arnau; Sanchez-Vila, Xavier; Folch, Albert; Kurtzman, Daniel; Rossetto, Rudy; Fernández-Escalante, Enrique; Lobo-Ferreira, João-Paulo; Sapiano, Manuel; San-Sebastián, Jon; Schüth, Christoph
2018-06-01
Managed aquifer recharge (MAR) can be affected by many risks. Those risks are related to different technical and non-technical aspects of recharge, such as water availability, water quality, legislation, and social issues. Many other works have acknowledged risks of this nature theoretically; however, their definition and quantification have not been developed. In this study, risk definition and quantification have been performed by means of fault trees and probabilistic risk assessment (PRA). We defined a fault tree with 65 basic events applicable to the operation phase. We then applied this methodology to six different managed aquifer recharge sites located in the Mediterranean Basin (Portugal, Spain, Italy, Malta, and Israel). The probabilities of the basic events were defined by expert criteria, based on the knowledge of the different managers of the facilities. From that, we conclude that at all sites the experts perceived the non-technical aspects to be as important as, or even more important than, the technical aspects. Regarding the risk results, we observe that the total risk in three of the six sites was equal to or above 0.90, meaning that those MAR facilities have a risk of failure equal to or higher than 90 % over a period of 2-6 years. The other three sites presented lower risks (75, 29, and 18 % for Malta, Menashe, and Serchio, respectively).
Mori, J.
1996-01-01
Details of the M 4.3 foreshock to the Joshua Tree earthquake were studied using P waves recorded on the Southern California Seismic Network and the Anza network. Deconvolution, using an M 2.4 event as an empirical Green's function, corrected for complicated path and site effects in the seismograms and produced simple far-field displacement pulses that were inverted for a slip distribution. Both possible fault planes, north-south and east-west, for the focal mechanism were tested by a least-squares inversion procedure with a range of rupture velocities. The results showed that the foreshock ruptured the north-south plane, similar to the mainshock. The foreshock initiated a few hundred meters south of the mainshock and ruptured to the north, toward the mainshock hypocenter. The mainshock (M 6.1) initiated near the northern edge of the foreshock rupture 2 hr later. The foreshock had a high stress drop (320 to 800 bars) and broke a small portion of the fault adjacent to the mainshock but was not able to immediately initiate the mainshock rupture.
Program listing for fault tree analysis of JPL technical report 32-1542
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
The computer program listing for the MAIN program and those subroutines unique to the fault tree analysis are described. Some subroutines are used for analyzing the reliability block diagram. The program is written in FORTRAN 5 and runs on a UNIVAC 1108.
Cognitive Support During High-Consequence Episodes of Care in Cardiovascular Surgery.
Conboy, Heather M; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Christov, Stefan C; Goldman, Julian M; Yule, Steven J; Zenati, Marco A
2017-03-01
Despite significant efforts to reduce preventable adverse events in medical processes, such events continue to occur at unacceptable rates. This paper describes a computer science approach that uses formal process modeling to provide situationally aware monitoring and management support to medical professionals performing complex processes. These process models represent both normative and non-normative situations, and are validated by rigorous automated techniques such as model checking and fault tree analysis, in addition to careful review by experts. Context-aware Smart Checklists are then generated from the models, providing cognitive support during high-consequence surgical episodes. The approach is illustrated with a case study in cardiovascular surgery.
LIDAR Helps Identify Source of 1872 Earthquake Near Chelan, Washington
NASA Astrophysics Data System (ADS)
Sherrod, B. L.; Blakely, R. J.; Weaver, C. S.
2015-12-01
One of the largest historic earthquakes in the Pacific Northwest occurred on 15 December 1872 (M6.5-7) near the south end of Lake Chelan in north-central Washington State. Lack of recognized surface deformation suggested that the earthquake occurred on a blind, perhaps deep, fault. New LiDAR data show landslides and a ~6 km long, NW-side-up scarp in Spencer Canyon, ~30 km south of Lake Chelan. Two landslides in Spencer Canyon impounded small ponds. An historical account indicated that dead trees were visible in one pond in AD1884. Wood from a snag in the pond yielded a calibrated age of AD1670-1940. Tree ring counts show that the oldest living trees on each landslide are 130 and 128 years old. The larger of the two landslides obliterated the scarp and thus, post-dates the last scarp-forming event. Two trenches across the scarp exposed a NW-dipping thrust fault. One trench exposed alluvial fan deposits, Mazama ash, and scarp colluvium cut by a single thrust fault. Three charcoal samples from a colluvium buried during the last fault displacement had calibrated ages between AD1680 and AD1940. The second trench exposed gneiss thrust over colluvium during at least two, and possibly three fault displacements. The younger of two charcoal samples collected from a colluvium below gneiss had a calibrated age of AD1665- AD1905. For an historical constraint, we assume that the lack of felt reports for large earthquakes in the period between 1872 and today indicates that no large earthquakes capable of rupturing the ground surface occurred in the region after the 1872 earthquake; thus the last displacement on the Spencer Canyon scarp cannot post-date the 1872 earthquake. Modeling of the age data suggests that the last displacement occurred between AD1840 and AD1890. These data, combined with the historical record, indicate that this fault is the source of the 1872 earthquake. Analyses of aeromagnetic data reveal lithologic contacts beneath the scarp that form an ENE-striking, curvilinear zone ~2.5 km wide and ~55 km long. This zone coincides with monoclines mapped in Mesozoic bedrock and Miocene flood basalts. This study ends uncertainty regarding the source of the 1872 earthquake and provides important information for seismic hazard analyses of major infrastructure projects in Washington and British Columbia.
NASA Astrophysics Data System (ADS)
Guns, K. A.; Bennett, R. A.; Blisniuk, K.
2017-12-01
To better evaluate the distribution and transfer of strain and slip along the Southern San Andreas Fault (SSAF) zone in the northern Coachella valley in southern California, we integrate geological and geodetic observations to test whether strain is being transferred away from the SSAF system towards the Eastern California Shear Zone through microblock rotation of the Eastern Transverse Ranges (ETR). The faults of the ETR consist of five east-west trending left lateral strike slip faults that have measured cumulative offsets of up to 20 km and as low as 1 km. Present kinematic and block models present a variety of slip rate estimates, from as low as zero to as high as 7 mm/yr, suggesting a gap in our understanding of what role these faults play in the larger system. To determine whether present-day block rotation along these faults is contributing to strain transfer in the region, we are applying 10Be surface exposure dating methods to observed offset channel and alluvial fan deposits in order to estimate fault slip rates along two faults in the ETR. We present observations of offset geomorphic landforms using field mapping and LiDAR data at three sites along the Blue Cut Fault and one site along the Smoke Tree Wash Fault in Joshua Tree National Park which indicate recent Quaternary fault activity. Initial results of site mapping and clast count analyses reveal at least three stages of offset, including potential Holocene offsets, for one site along the Blue Cut Fault, while preliminary 10Be geochronology is in progress. This geologic slip rate data, combined with our new geodetic surface velocity field derived from updated campaign-based GPS measurements within Joshua Tree National Park will allow us to construct a suite of elastic fault block models to elucidate rates of strain transfer away from the SSAF and how that strain transfer may be affecting the length of the interseismic period along the SSAF.
FAULT TREE ANALYSIS FOR EXPOSURE TO REFRIGERANTS USED FOR AUTOMOTIVE AIR CONDITIONING IN THE U.S.
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servic...
A Fault Tree Approach to Analysis of Organizational Communication Systems.
ERIC Educational Resources Information Center
Witkin, Belle Ruth; Stephens, Kent G.
Fault Tree Analysis (FTA) is a method of examining communication in an organization by focusing on: (1) the complex interrelationships in human systems, particularly in communication systems; (2) interactions across subsystems and system boundaries; and (3) the need to select and "prioritize" channels which will eliminate noise in the…
A computational framework for prime implicants identification in noncoherent dynamic systems.
Di Maio, Francesco; Baronchelli, Samuele; Zio, Enrico
2015-01-01
Dynamic reliability methods aim at complementing the capability of traditional static approaches (e.g., event trees [ETs] and fault trees [FTs]) by accounting for the system dynamic behavior and its interactions with the system state transition process. For this, the system dynamics is here described by a time-dependent model that includes the dependencies with the stochastic transition events. In this article, we present a novel computational framework for dynamic reliability analysis whose objectives are i) accounting for discrete stochastic transition events and ii) identifying the prime implicants (PIs) of the dynamic system. The framework entails adopting a multiple-valued logic (MVL) to consider stochastic transitions at discretized times. Then, PIs are originally identified by a differential evolution (DE) algorithm that looks for the optimal MVL solution of a covering problem formulated for MVL accident scenarios. For testing the feasibility of the framework, a dynamic noncoherent system composed of five components that can fail at discretized times has been analyzed, showing the applicability of the framework to practical cases. © 2014 Society for Risk Analysis.
Bow-tie diagrams for risk management in anaesthesia.
Culwick, M D; Merry, A F; Clarke, D M; Taraporewalla, K J; Gibbs, N M
2016-11-01
Bow-tie analysis is a risk analysis and management tool that has been readily adopted into routine practice in many high reliability industries such as engineering, aviation and emergency services. However, it has received little exposure so far in healthcare. Nevertheless, its simplicity, versatility, and pictorial display may have benefits for the analysis of a range of healthcare risks, including complex and multiple risks and their interactions. Bow-tie diagrams are a combination of a fault tree and an event tree, which when combined take the shape of a bow tie. Central to bow-tie methodology is the concept of an undesired or 'Top Event', which occurs if a hazard progresses past all prevention controls. Top Events may also occasionally occur idiosyncratically. Irrespective of the cause of a Top Event, mitigation and recovery controls may influence the outcome. Hence the relationship of hazard to outcome can be viewed in one diagram along with possible causal sequences or accident trajectories. Potential uses for bow-tie diagrams in anaesthesia risk management include improved understanding of anaesthesia hazards and risks, pre-emptive identification of absent or inadequate hazard controls, investigation of clinical incidents, teaching anaesthesia risk management, and demonstrating risk management strategies to third parties when required.
Project delay analysis of HRSG
NASA Astrophysics Data System (ADS)
Silvianita; Novega, A. S.; Rosyid, D. M.; Suntoyo
2017-08-01
Completion of an HRSG (Heat Recovery Steam Generator) fabrication project sometimes does not meet the target time written in the contract. A delay in the fabrication process can cause several disadvantages for the fabricator, including penalty payments and delays to HRSG construction and, ultimately, HRSG trials. In this paper, the authors apply a semi-quantitative analysis to HRSG pressure part fabrication delay, for a plant configuration of 1 GT (Gas Turbine) + 1 HRSG + 1 STG (Steam Turbine Generator), using the bow-tie analysis method. Bow-tie analysis is a combination of FTA (Fault Tree Analysis) and ETA (Event Tree Analysis) used to develop the risk matrix of the HRSG. The results of the FTA are used as threats for preventive measures, and the results of the ETA as impacts of the fabrication delay.
Langenheim, Victoria E.; Rymer, Michael J.; Catchings, Rufus D.; Goldman, Mark R.; Watt, Janet T.; Powell, Robert E.; Matti, Jonathan C.
2016-03-02
We describe high-resolution gravity and seismic refraction surveys acquired to determine the thickness of valley-fill deposits and to delineate geologic structures that might influence groundwater flow beneath the Smoke Tree Wash area in Joshua Tree National Park. These surveys identified a sedimentary basin that is fault-controlled. A profile across the Smoke Tree Wash fault zone reveals low gravity values and seismic velocities that coincide with a mapped strand of the Smoke Tree Wash fault. Modeling of the gravity data reveals a basin about 2–2.5 km long and 1 km wide that is roughly centered on this mapped strand, and bounded by inferred faults. According to the gravity model the deepest part of the basin is about 270 m, but this area coincides with low velocities that are not characteristic of typical basement complex rocks. Most likely, the density contrast assumed in the inversion is too high or the uncharacteristically low velocities represent highly fractured or weathered basement rocks, or both. A longer seismic profile extending onto basement outcrops would help differentiate which scenario is more accurate. The seismic velocities also determine the depth to water table along the profile to be about 40–60 m, consistent with water levels measured in water wells near the northern end of the profile.
A Fault Tree Approach to Needs Assessment -- An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
A "failsafe" technology is presented based on a new unified theory of needs assessment. Basically the paper discusses fault tree analysis as a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur and then suggesting high priority avoidance strategies for those…
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu
2016-05-01
Recently, China has frequently experienced large-scale, severe, and persistent haze pollution due to surging urbanization and industrialization and rapid growth in the number of motor vehicles and in energy consumption. Vehicle emissions from the consumption of large quantities of fossil fuels are undoubtedly a critical factor in the haze pollution. This work focuses on the causation mechanism of haze pollution related to vehicle emissions for Guangzhou city, employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are identified and discussed using this deductive FTA method. Qualitative and quantitative assessments of the fault tree system are carried out based on structure, probability, and critical importance degree analysis of the risk factors. The study may provide a new, simple, and effective tool/strategy for the causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.
Geology of Joshua Tree National Park geodatabase
Powell, Robert E.; Matti, Jonathan C.; Cossette, Pamela M.
2015-09-16
The database in this Open-File Report describes the geology of Joshua Tree National Park and was completed in support of the National Cooperative Geologic Mapping Program of the U.S. Geological Survey (USGS) and in cooperation with the National Park Service (NPS). The geologic observations and interpretations represented in the database are relevant to both the ongoing scientific interests of the USGS in southern California and the management requirements of NPS, specifically of Joshua Tree National Park (JOTR).Joshua Tree National Park is situated within the eastern part of California’s Transverse Ranges province and straddles the transition between the Mojave and Sonoran deserts. The geologically diverse terrain that underlies JOTR reveals a rich and varied geologic evolution, one that spans nearly two billion years of Earth history. The Park’s landscape is the current expression of this evolution, its varied landforms reflecting the differing origins of underlying rock types and their differing responses to subsequent geologic events. Crystalline basement in the Park consists of Proterozoic plutonic and metamorphic rocks intruded by a composite Mesozoic batholith of Triassic through Late Cretaceous plutons arrayed in northwest-trending lithodemic belts. The basement was exhumed during the Cenozoic and underwent differential deep weathering beneath a low-relief erosion surface, with the deepest weathering profiles forming on quartz-rich, biotite-bearing granitoid rocks. Disruption of the basement terrain by faults of the San Andreas system began ca. 20 Ma and the JOTR sinistral domain, preceded by basalt eruptions, began perhaps as early as ca. 7 Ma, but no later than 5 Ma. Uplift of the mountain blocks during this interval led to erosional stripping of the thick zones of weathered quartz-rich granitoid rocks to form etchplains dotted by bouldery tors—the iconic landscape of the Park. The stripped debris filled basins along the fault zones.Mountain ranges and basins in the Park exhibit an east-west physiographic grain controlled by left-lateral fault zones that form a sinistral domain within the broad zone of dextral shear along the transform boundary between the North American and Pacific plates. Geologic and geophysical evidence reveal that movement on the sinistral faults zones has resulted in left steps along the zones, resulting in the development of sub-basins beneath Pinto Basin and Shavers and Chuckwalla Valleys. The sinistral fault zones connect the Mojave Desert dextral faults of the Eastern California Shear Zone to the north and east with the Coachella Valley strands of the southern San Andreas Fault Zone to the west.Quaternary surficial deposits accumulated in alluvial washes and playas and lakes along the valley floors; in alluvial fans, washes, and sheet wash aprons along piedmonts flanking the mountain ranges; and in eolian dunes and sand sheets that span the transition from valley floor to piedmont slope. Sequences of Quaternary pediments are planed into piedmonts flanking valley-floor and upland basins, each pediment in turn overlain by successively younger residual and alluvial surficial deposits.
A fuzzy decision tree for fault classification.
Zio, Enrico; Baraldi, Piero; Popescu, Irina C
2008-02-01
In plant accident management, the control room operators are required to identify the causes of the accident, based on the different patterns of evolution that the monitored process variables develop. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
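The flavour of such fuzzy if-then rules can be shown in a few lines (the membership functions and the rule are invented, not taken from the feedwater case study):

    def tri(x, a, b, c):
        # Triangular membership function with support [a, c] and peak at b
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Rule: IF temperature IS high AND pressure IS low THEN fault IS leak
    def rule_leak(temp, pres):
        mu_high_t = tri(temp, 60, 80, 100)
        mu_low_p = tri(pres, 0, 1, 2)
        return min(mu_high_t, mu_low_p)   # Mamdani AND = min

    print("degree of 'leak':", rule_leak(85, 0.8))   # -> 0.75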
NASA Technical Reports Server (NTRS)
Vitali, Roberto; Lutomski, Michael G.
2004-01-01
The National Aeronautics and Space Administration's (NASA) International Space Station (ISS) Program uses Probabilistic Risk Assessment (PRA) as part of its Continuous Risk Management Process. It is used as a decision and management support tool not only to quantify risk for specific conditions, but more importantly to compare different operational and management options, determine the lowest-risk option, and provide rationale for management decisions. This paper presents the derivation of the probability distributions used to quantify the failure rates and the probabilities of failure of the basic events employed in the PRA model of the ISS. The paper shows how a Bayesian approach was used with different sources of data, including the actual ISS on-orbit failures, to enhance confidence in the results of the PRA. As time progresses and more meaningful data are gathered from on-orbit failures, an increasingly accurate failure rate probability distribution for the basic events of the ISS PRA model can be obtained. The ISS PRA has been developed by mapping the ISS critical systems, such as propulsion, thermal control, or power generation, into event sequence diagrams and fault trees. The lowest level of indenture of the fault trees was the orbital replacement unit (ORU). The ORU level was chosen consistently with the level of statistically meaningful data that could be obtained from the aerospace industry and from the experts in the field. For example, data were gathered for the solenoid valves present in the propulsion system of the ISS. However, valves themselves are composed of parts, and the individual failure of these parts was not accounted for in the PRA model. In other words, the failure of a spring within a valve was considered a failure of the valve itself.
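Bayesian updating of a failure rate from on-orbit evidence typically takes a conjugate gamma-Poisson form; a minimal sketch, with hypothetical prior parameters and failure counts rather than ISS data:

    from scipy import stats

    # Gamma prior on an ORU failure rate lambda (per year), e.g. from industry
    # data; the prior parameters here are invented for illustration.
    alpha0, beta0 = 2.0, 40.0          # prior mean lambda = alpha0/beta0 = 0.05 /yr

    # On-orbit evidence: k observed failures over t unit-years (hypothetical)
    k, t = 3, 30.0

    # Conjugate gamma-Poisson update
    alpha, beta = alpha0 + k, beta0 + t
    post = stats.gamma(alpha, scale=1.0 / beta)
    print("posterior mean:", post.mean(), "90% interval:", post.interval(0.9))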
Recent Mega-Thrust Tsunamigenic Earthquakes and PTHA
NASA Astrophysics Data System (ADS)
Lorito, S.
2013-05-01
The occurrence of several mega-thrust tsunamigenic earthquakes in the last decade, including but not limited to the 2004 Sumatra-Andaman, the 2010 Maule, and the 2011 Tohoku earthquakes, has been a dramatic reminder of the limitations in our capability of assessing earthquake and tsunami hazard and risk. However, increasingly high-quality geophysical observational networks allowed the retrieval of more accurate models than ever of the rupture process of mega-thrust earthquakes, thus paving the way for future improved hazard assessments. Probabilistic Tsunami Hazard Analysis (PTHA) methodology, in particular, is less mature than its seismic counterpart, PSHA. Recent worldwide research efforts of the tsunami science community have started to fill this gap and to define some best practices that are being progressively employed in PTHA for different regions and coasts at threat. In the first part of my talk, I will briefly review some rupture models of recent mega-thrust earthquakes and highlight some of their surprising features, which likely result in bigger error bars associated with PTHA results. More specifically, recent events of unexpected size at a given location, and with unexpected rupture process features, posed first-order open questions which prevent the definition of a heterogeneous rupture probability along a subduction zone, despite several recent promising results on the subduction zone seismic cycle. In the second part of the talk, I will dig a bit more into a specific ongoing effort to improve PTHA methods, in particular as regards the determination of epistemic and aleatory uncertainties, and the computational feasibility of PTHA when considering the full assumed source variability. Only logic trees are usually made explicit in PTHA studies, accounting for different possible assumptions on the source zone properties and behavior. The selection of the earthquakes to be actually modelled is then in general made on a qualitative basis or remains implicit, although methods like event trees have been used for different applications. I will define a quite general PTHA framework based on the mixed use of logic and event trees. I will first discuss a particular class of epistemic uncertainties, i.e. those related to the parametric fault characterization in terms of geometry, kinematics, and assessment of activity rates. A systematic classification into six justification levels of the epistemic uncertainty related to the existence and behaviour of fault sources will be presented. Then, a particular branch of the logic tree is chosen in order to discuss just the aleatory variability of earthquake parameters, represented with an event tree. Even so, PTHA based on numerical scenarios is too demanding a computational task, particularly when probabilistic inundation maps are needed. To reduce the computational burden without under-representing the source variability, the event tree is first constructed by taking care of densely (over-)sampling the earthquake parameter space, and then the earthquakes are filtered based on their associated tsunami impact offshore, before calculating inundation maps. I will describe this approach by means of a case study in the Mediterranean Sea, namely the PTHA for some locations on the Eastern Sicily and Southern Crete coasts due to potential subduction earthquakes occurring on the Hellenic Arc.
The engine fuel system fault analysis
NASA Astrophysics Data System (ADS)
Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei
2017-05-01
To improve the reliability of the engine fuel system, the typical fault factors of the engine fuel system were analyzed from the structural and functional points of view. The fault characteristics were obtained by building the fuel system fault tree. By applying the failure mode and effects analysis (FMEA) method, several factors of the key component, the fuel regulator, were obtained, including the fault modes, the fault causes, and the fault influences. All of this lays the foundation for the subsequent development of a fault diagnosis system.
Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA.
Fuentes-Bargues, José Luis; González-Cruz, Mª Carmen; González-Gaya, Cristina; Baixauli-Pérez, Mª Piedad
2017-06-30
The size and complexity of industrial chemical plants, together with the nature of the products handled, means that an analysis and control of the risks involved is required. This paper presents a methodology for risk analysis in chemical and allied industries that is based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, fault tree analysis (FTA). Results from FTA allow prioritizing the preventive and corrective measures to minimize the probability of failure. An analysis of a case study is performed; it consists in the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). HAZOP analysis shows that loading and unloading areas are the most sensitive areas of the plant and where the most significant danger is a fuel spill. FTA analysis indicates that the most likely event is a fuel spill in tank truck loading area. A sensitivity analysis from the FTA results show the importance of the human factor in all sequences of the possible accidents, so it should be mandatory to improve the training of the staff of the plants. PMID:28665325
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Mustafa Sacit; Flanagan, George F.
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
Modular techniques for dynamic fault-tree analysis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Dugan, Joanne B.
1992-01-01
It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
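The hybrid idea, solving a Markov submodule and feeding its result into the combinatorial tree as a basic event, can be sketched as follows (the two-unit standby pair and its rates are generic placeholders, not the parallel-processor model):

    import numpy as np
    from scipy.linalg import expm

    # CTMC for a two-unit standby pair (states: 2 up, 1 up, 0 up); rates assumed
    lam, mu = 1e-3, 1e-2            # failure and repair rates per hour
    Q = np.array([[-lam,       lam,       0.0],
                  [  mu, -(lam + mu),     lam],
                  [ 0.0,       0.0,       0.0]])   # failed state absorbing

    p0 = np.array([1.0, 0.0, 0.0])  # start with both units up
    t = 1000.0                      # mission time, hours
    p_t = p0 @ expm(Q * t)

    # This probability then enters the enclosing fault tree as a basic event
    print("P(module failed by t):", p_t[2])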
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate system-on-chip reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as the failure rate, unavailability, and mean time to failure (MTTF). According to the fault tree analysis for the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
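The quoted reliability figures follow from standard constant-failure-rate formulas; a small sketch with placeholder rates (not the measured Zynq-7010 results):

    import math

    lam = 2.5e-6          # failure rate per hour (assumed, e.g. from alpha-source testing)
    mttf = 1.0 / lam      # mean time to failure for a constant-rate component
    t = 8760.0            # one year of operation, hours
    reliability = math.exp(-lam * t)

    mttr = 24.0           # mean time to repair, hours (assumed)
    # Steady-state unavailability of a repairable component: MTTR / (MTTF + MTTR)
    unavailability = lam * mttr / (1.0 + lam * mttr)

    print("MTTF:", mttf, "R(1yr):", reliability, "unavailability:", unavailability)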
Learning from examples - Generation and evaluation of decision trees for software resource analysis
NASA Technical Reports Server (NTRS)
Selby, Richard W.; Porter, Adam A.
1988-01-01
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
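A scaled-down analogue is easy to reproduce with present-day tooling (scikit-learn's CART trees standing in for the paper's generation method; the module metrics and labels are synthetic, not the NASA data):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    # Synthetic module metrics: [source lines, changes, fan-out]
    X = rng.integers(1, 500, size=(300, 3)).astype(float)
    # Label "high effort" modules by a hidden rule plus noise (toy ground truth)
    y = ((X[:, 0] > 250) & (X[:, 1] > 100)) | (rng.random(300) < 0.05)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
    print("holdout accuracy:", tree.score(X_te, y_te))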
A Seismic Source Model for Central Europe and Italy
NASA Astrophysics Data System (ADS)
Nyst, M.; Williams, C.; Onur, T.
2006-12-01
We present a seismic source model for Central Europe (Belgium, Germany, Switzerland, and Austria) and Italy, as part of an overall seismic risk and loss modeling project for this region. A separate presentation at this conference discusses the probabilistic seismic hazard and risk assessment (Williams et al., 2006). Where available we adopt regional consensus models and adjust these to fit our format; otherwise we develop our own model. Our seismic source model covers the whole region under consideration and consists of the following components: 1. A subduction zone environment in Calabria, SE Italy, with interface events between the Eurasian and African plates and intraslab events within the subducting slab. The subduction zone interface is parameterized as a set of dipping area sources that follow the geometry of the surface of the subducting plate, whereas intraslab events are modeled as plane sources at depth; 2. The main normal faults in the upper crust along the Apennines mountain range, in Calabria and Central Italy. Dipping faults and (sub-) vertical faults are parameterized as dipping plane and line sources, respectively; 3. The Upper and Lower Rhine Graben regime that runs from northern Italy into eastern Belgium, parameterized as a combination of dipping plane and line sources; and finally 4. Background seismicity, parameterized as area sources. The fault model is based on slip rates using characteristic recurrence. The modeling of background and subduction zone seismicity is based on a compilation of several national and regional historic seismic catalogs using a Gutenberg-Richter recurrence model. Merging the catalogs encompasses the deletion of duplicate, fake, and very old events and the application of a declustering algorithm (Reasenberg, 2000). The resulting catalog contains a little over 6000 events, has an average b-value of -0.9, is complete for moment magnitudes 4.5 and larger, and is used to compute a gridded a-value model (smoothed historical seismicity) for the region. The logic tree weights various completeness intervals and minimum magnitudes. Using a weighted scheme of European and global ground motion models together with a detailed site classification map for Europe based on Eurocode 8, we generate hazard maps for recurrence periods of 200, 475, 1000 and 2500 yrs.
Decision tree and PCA-based fault diagnosis of rotating machinery
NASA Astrophysics Data System (ADS)
Sun, Weixiang; Chen, Jin; Li, Jiaqing
2007-04-01
After analysing the flaws of conventional fault diagnosis methods, data mining technology is introduced to the fault diagnosis field, and a new method based on the C4.5 decision tree and principal component analysis (PCA) is proposed. In this method, PCA is used to reduce features after data collection, preprocessing and feature extraction. Then, C4.5 is trained on the samples to generate a decision tree model containing diagnosis knowledge. Finally, the tree model is used to perform diagnosis. To validate the proposed method, six kinds of running states (normal or without any defect, unbalance, rotor radial rub, oil whirl, shaft crack, and a simultaneous state of unbalance and radial rub) are simulated on a Bently Rotor Kit RK4 to compare the C4.5 and PCA-based method with a back-propagation neural network (BPNN). The results show that the C4.5 and PCA-based diagnosis method has higher accuracy and needs less training time than the BPNN.
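As a rough sketch of the pipeline just described, the example below chains PCA feature reduction with an entropy-criterion decision tree, scikit-learn's closest stand-in for C4.5's information-gain splitting. The synthetic features and state labels are hypothetical; the real method works on features extracted from rotor vibration signals.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n_per_class, n_features = 60, 16
states = ["normal", "unbalance", "radial_rub", "oil_whirl"]  # hypothetical subset
# Synthetic feature vectors: one Gaussian cluster per running state.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, n_features))
               for i in range(len(states))])
y = np.repeat(states, n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=4),
                      DecisionTreeClassifier(criterion="entropy", random_state=0))
print("test accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))
```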
Ergodicity and Phase Transitions and Their Implications for Earthquake Forecasting.
NASA Astrophysics Data System (ADS)
Klein, W.
2017-12-01
Forecasting earthquakes or even predicting the statistical distribution of events on a given fault is extremely difficult. One reason for this difficulty is the large number of fault characteristics that can affect the distribution and timing of events. The range of stress transfer, the level of noise, and the nature of the friction force all influence the type of events, and the values of these parameters can vary from fault to fault and also with time. In addition, the geometrical structure of the faults and the correlation of events on different faults play an important role in determining the event size and distribution. Another reason for the difficulty is that the important fault characteristics are not easily measured. The noise level, fault structure, stress transfer range, and the nature of the friction force are extremely difficult, if not impossible, to ascertain. Given this lack of information, one of the most useful approaches to understanding the effect of fault characteristics and the way they interact is to develop and investigate models of faults and fault systems. In this talk I will present results obtained from a series of models of varying abstraction and compare them with data from actual faults. We are able to provide a physical basis for several observed phenomena such as the earthquake cycle, the fact that some faults display Gutenberg-Richter scaling and others do not, and that some faults exhibit quasi-periodic characteristic events and others do not. I will also discuss some surprising results, such as the fact that some faults are in thermodynamic equilibrium, depending on the stress transfer range and the noise level. An example of an important conclusion that can be drawn from this work is that the statistical distribution of earthquake events can vary from fault to fault, and that an indication of an impending large event, such as accelerating moment release, may be relevant on some faults but not on others.
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)
1995-01-01
A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched VQ. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed, with identical Processing Elements (PEs) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) learn how to design a process map for a radiotherapy process; (2) learn how to perform failure modes and effects analysis for a given process; (3) learn what fault trees are all about; (4) learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Disclosures: Dunscombe: Director, TreatSafely, LLC, and the Center for the Assessment of Radiological Sciences; consultant to the IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
TU-AB-BRD-04: Development of Quality Management Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomadsen, B.
2015-06-15
TU-AB-BRD-02: Failure Modes and Effects Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huq, M.
2015-06-15
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for the 10%-in-50-year exceedance hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is given by a truncated normal distribution, represented by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of the logic tree, is used in order to capture the uncertainty in seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in two kinds of maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. The logic tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of the mean hazard results to the choice of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows the percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
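The Monte Carlo treatment of parameter uncertainty can be sketched compactly. Below, each fault parameter is drawn from a truncated normal distribution about its mean, and a deliberately toy mapping converts parameters to PGA so the dispersion can be summarized; the mapping and all numbers are hypothetical, not the study's hazard calculation.

```python
import numpy as np
from scipy.stats import truncnorm

def draw(mean, sd, lo, hi, n, rng):
    """Sample n values from a normal(mean, sd) truncated to [lo, hi]."""
    a, b = (lo - mean) / sd, (hi - mean) / sd
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=n, random_state=rng)

rng = np.random.default_rng(3)
n = 200  # simulations per parameter, as in the study
slip_rate = draw(mean=1.0, sd=0.3, lo=0.2, hi=2.0, n=n, rng=rng)  # mm/yr, hypothetical
max_mag   = draw(mean=6.8, sd=0.2, lo=6.3, hi=7.3, n=n, rng=rng)  # Mw, hypothetical

# Toy mapping from parameters to 475-yr PGA, for illustration only.
pga = 0.05 * slip_rate + 0.1 * (max_mag - 6.0) + rng.normal(0.0, 0.02, n)
mean_pga = pga.mean()
print(f"mean PGA {mean_pga:.3f} g, coefficient of variation {pga.std() / mean_pga:.1%}")
```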
A comparative critical study between FMEA and FTA risk analysis methods
NASA Astrophysics Data System (ADS)
Cristea, G.; Constantinescu, DM
2017-10-01
Today an overwhelming number of different risk analysis techniques are in use, with acronyms such as: FMEA (Failure Modes and Effects Analysis) and its extension FMECA (Failure Mode, Effects, and Criticality Analysis), DRBFM (Design Review by Failure Mode), FTA (Fault Tree Analysis) and its extension ETA (Event Tree Analysis), HAZOP (Hazard & Operability Studies), HACCP (Hazard Analysis and Critical Control Points), and What-if/Checklist. However, the most used analysis techniques in the mechanical and electrical industry are FMEA and FTA. In FMEA, which is an inductive method, information about the consequences and effects of failures is usually collected through interviews with experienced people with different knowledge, i.e., cross-functional groups. FMEA is used to capture potential failures/risks and their impacts and to prioritize them on a numeric scale called the Risk Priority Number (RPN), which ranges from 1 to 1000. FTA is a deductive method, i.e., a general system state is decomposed into chains of more basic events of components. The logical interrelationship of how such basic events depend on and affect each other is often described analytically in a reliability structure which can be visualized as a tree. Both methods are very time-consuming to apply thoroughly, which is why this is often not done; as a consequence, possible failure modes may not be identified. To address these shortcomings, it is proposed to use a combination of FTA and FMEA.
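For concreteness, here is a minimal sketch of the RPN arithmetic: each failure mode receives 1-10 scores for severity, occurrence, and detection, so their product spans 1 to 1000. The failure modes and scores below are hypothetical.

```python
failure_modes = {
    # name: (severity, occurrence, detection), each scored 1-10
    "brake line leak":   (9, 3, 4),
    "sensor drift":      (5, 6, 7),
    "connector fatigue": (7, 4, 2),
}

# RPN = severity * occurrence * detection; higher values get attention first.
rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
for name, value in sorted(rpn.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} RPN = {value}")
```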
Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)
NASA Astrophysics Data System (ADS)
Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.
2016-06-01
We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used in place of more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating subduction and background (crustal) earthquakes separately allows for optimal use of available information and avoids significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, with different proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattison, M.B.
The Idaho National Engineering Laboratory (INEL) has, over the past three years, created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response, with a unique set of event trees for each plant class; (2) plant-specific fault trees using supercomponents; (3) generation and retention of all system and sequence cutsets; (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results; and (5) a user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented.
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
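To make the fault-tree-to-Markov-chain step concrete, here is a minimal sketch, not HARP itself, of solving a small continuous-time Markov model for reliability: a two-component parallel system (an AND gate at the top event) whose generator matrix is exponentiated over a mission time. The failure rates and mission time are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 1e-3, 2e-3          # per-hour failure rates (hypothetical)
# States: 0 = both up, 1 = only comp2 up, 2 = only comp1 up, 3 = system failed.
Q = np.array([[-(lam1 + lam2), lam1,  lam2,  0.0 ],
              [0.0,           -lam2,  0.0,   lam2],
              [0.0,            0.0,  -lam1,  lam1],
              [0.0,            0.0,   0.0,   0.0 ]])

t = 1000.0                        # mission time in hours (hypothetical)
p = np.array([1.0, 0.0, 0.0, 0.0]) @ expm(Q * t)   # state probabilities at time t
# Reliability is the probability of not being in the failed state; this matches
# the analytic result 1 - (1 - exp(-lam1*t)) * (1 - exp(-lam2*t)).
print(f"R({t:.0f} h) = {1.0 - p[3]:.4f}")
```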
NASA Astrophysics Data System (ADS)
Koji, Yusuke; Kitamura, Yoshinobu; Kato, Yoshikiyo; Tsutsui, Yoshio; Mizoguchi, Riichiro
In conceptual design, it is important to develop functional structures which reflect the rich knowledge gained from previous design failures. In particular, if a designer learns the possible abnormal behaviors from a previous design failure, he or she can add a function that prevents such abnormal behaviors and faults. To do this, sharing knowledge about possible faulty phenomena and how to cope with them is a crucial issue. In practice, a part of such knowledge is described in FMEA (Failure Mode and Effect Analysis) sheets, in function structure models for systematic design, and in fault trees for FTA (Fault Tree Analysis).
Failure analysis of energy storage spring in automobile composite brake chamber
NASA Astrophysics Data System (ADS)
Luo, Zai; Wei, Qing; Hu, Xiaofeng
2015-02-01
This paper takes the energy storage spring of the parking brake cavity, part of an automobile composite brake chamber, as its research object. A fault tree model of energy-storage-spring-induced parking brake failure was constructed using the fault tree analysis method. Next, the parking brake failure model of the energy storage spring was established by analyzing the working principle of the composite brake chamber. Finally, working load and push rod stroke data measured on a comprehensive valve test bed were used to validate the failure model. The experimental results show that the failure model can distinguish whether the energy storage spring is faulty.
A-Priori Rupture Models for Northern California Type-A Faults
Wills, Chris J.; Weldon, Ray J.; Field, Edward H.
2008-01-01
This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least-squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003, referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the 1996 NSHMP, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event or dates of previous earthquakes. As a result, the WGCEP recommends that the faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon and Concord-Green Valley faults) be modeled as Type B faults, to be consistent with similarly poorly-known faults statewide. Consequently, the modified segmented models discussed here only concern the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive level of effort by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-priori models. We have modified the WGCEP-2002 models where necessary to match data that were not available or not used by that WGCEP, and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This appendix presents the minor changes needed to accommodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters and earthquake probabilities in the WGCEP-2002 report.
NASA Astrophysics Data System (ADS)
Dygert, Nick; Liang, Yan
2015-06-01
Mantle peridotites from ophiolites are commonly interpreted as having mid-ocean ridge (MOR) or supra-subduction zone (SSZ) affinity. Recently, an REE-in-two-pyroxene thermometer was developed (Liang et al., 2013) that has higher closure temperatures (designated as TREE) than major-element-based two-pyroxene thermometers for mafic and ultramafic rocks that experienced cooling. The REE-in-two-pyroxene thermometer has the potential to extract meaningful cooling rates from ophiolitic peridotites and thus shed new light on the thermal history of the different tectonic regimes. We calculated TREE for available literature data from abyssal peridotites, subcontinental (SC) peridotites, and ophiolites around the world (Alps, Coast Range, Corsica, New Caledonia, Oman, Othris, Puerto Rico, Russia, and Turkey), and augmented the data with new measurements for peridotites from the Trinity and Josephine ophiolites and the Mariana trench. TREE are compared to major-element-based thermometers, including the two-pyroxene thermometer of Brey and Köhler (1990) (TBKN). Samples with SC affinity have TREE and TBKN in good agreement. Samples with MOR and SSZ affinity have near-solidus TREE but TBKN hundreds of degrees lower. Closure temperatures for REE and Fe-Mg in pyroxenes were calculated to compare cooling rates among abyssal peridotites, MOR ophiolites, and SSZ ophiolites. Abyssal peridotites appear to cool more rapidly than peridotites from most ophiolites. On average, SSZ ophiolites have lower closure temperatures than abyssal peridotites and many ophiolites with MOR affinity. We propose that these lower temperatures can be attributed to residence time in the cooling oceanic lithosphere prior to obduction. MOR ophiolites define a continuum spanning cooling rates from SSZ ophiolites to abyssal peridotites. Consistently high closure temperatures for abyssal peridotites and the Oman and Corsica ophiolites suggest that hydrothermal circulation and/or rapid cooling events (e.g., normal faulting, unroofing) control the late thermal histories of peridotites from transform faults and from slow- and fast-spreading centers with or without a crustal section.
NASA Technical Reports Server (NTRS)
Bennett, Richard A.; Reilinger, Robert E.; Rodi, William; Li, Yingping; Toksoz, M. Nafi; Hudnut, Ken
1995-01-01
Coseismic surface deformation associated with the M(sub w) 6.1, April 23, 1992, Joshua Tree earthquake is well represented by estimates of geodetic monument displacements at 20 locations independently derived from Global Positioning System and trilateration measurements. The rms signal to noise ratio for these inferred displacements is 1.8, with near-fault displacement estimates exceeding 40 mm. In order to determine the long-wavelength distribution of slip over the plane of rupture, a Tikhonov regularization operator is applied to these estimates which minimizes stress variability subject to purely right-lateral slip and zero surface slip constraints. The resulting slip distribution yields a geodetic moment estimate of 1.7 x 10(exp 18) N m with corresponding maximum slip around 0.8 m and compares well with independent and complementary information including seismic moment and source time function estimates and main shock and aftershock locations. From empirical Green's functions analyses, a rupture duration of 5 s is obtained, which implies a rupture radius of 6-8 km. Most of the inferred slip lies to the north of the hypocenter, consistent with northward rupture propagation. Stress drop estimates are in the range of 2-4 MPa. In addition, predicted Coulomb stress increases correlate remarkably well with the distribution of aftershock hypocenters; most of the aftershocks occur in areas for which the mainshock rupture produced stress increases larger than about 0.1 MPa. In contrast, predicted stress changes are near zero at the hypocenter of the M(sub w) 7.3, June 28, 1992, Landers earthquake, which nucleated about 20 km beyond the northernmost edge of the Joshua Tree rupture. Based on aftershock migrations and the predicted static stress field, we speculate that redistribution of Joshua Tree-induced stress perturbations played a role in the spatio-temporal development of the earthquake sequence culminating in the Landers event.
Electromagnetic Compatibility (EMC) in Microelectronics.
1983-02-01
Fault Tree Analysis", System Saftey Symposium, June 8-9, 1965, Seattle: The Boeing Company . 12. Fussell, J.B., "Fault Tree Analysis-Concepts and...procedure for assessing EMC in microelectronics and for applying DD, 1473 EOiTO OP I, NOV6 IS OESOL.ETE UNCLASSIFIED SECURITY CLASSIFICATION OF THIS...CRITERIA 2.1 Background 2 2.2 The Probabilistic Nature of EMC 2 2.3 The Probabilistic Approach 5 2.4 The Compatibility Factor 6 3 APPLYING PROBABILISTIC
A graphical language for reliability model generation
NASA Technical Reports Server (NTRS)
Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.
1990-01-01
A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
NASA Astrophysics Data System (ADS)
Brothers, Daniel Stephen
Five studies along the Pacific-North America (PA-NA) plate boundary offer new insights into continental margin processes, the development of the PA-NA tectonic margin, and regional earthquake hazards. This research is based on the collection and analysis of several new marine geophysical and geological datasets. Two studies used seismic CHIRP surveys and sediment coring in Fallen Leaf Lake (FLL) and Lake Tahoe to constrain tectonic and geomorphic processes in the lakes, as well as the slip rate and earthquake history along the West Tahoe-Dollar Point Fault. CHIRP profiles image vertically offset and folded strata that record deformation associated with the most recent event (MRE). Radiocarbon dating of organic material extracted from piston cores constrains the age of the MRE to between 4.1-4.5 k.y. B.P. Offset of Tioga-aged glacial deposits yields a slip rate of 0.4-0.8 mm/yr. An ancillary study in FLL determined that submerged, in situ pine trees that date to 900-1250 AD are related to a medieval megadrought in the Lake Tahoe Basin. The timing and severity of this event match medieval megadroughts observed in the western United States and in Europe. CHIRP profiles acquired in the Salton Sea, California, provide new insights into the processes that control pull-apart basin development and earthquake hazards along the southernmost San Andreas Fault. Differential subsidence (>10 mm/yr) in the southern sea suggests the existence of northwest-dipping basin-bounding faults near the southern shoreline. In contrast to previous models, the rapid subsidence and fault architecture observed in the southern part of the sea are consistent with experimental models for pull-apart basins. Geophysical surveys imaged more than 15 ˜N15°E-oriented faults, some of which have produced up to 10 events in the last 2-3 kyr. Potentially 2 of the last 5 events on the southern San Andreas Fault (SAF) were synchronous with rupture on offshore faults, but it appears that ruptures on three offshore faults were synchronous with Colorado River diversions into the basin. The final study used coincident wide-angle seismic refraction and multichannel seismic reflection surveys that spanned the width of the southern Baja California (BC) Peninsula. The data provide insight into the spatial and temporal evolution of the capture of the BC microplate by the Pacific Plate. Seismic reflection profiles constrain the upper crustal structure and deformation history along fault zones on the western Baja margin and in the Gulf of California. Stratal divergence in two transtensional basins along the Magdalena Shelf records the onset of extension across the Tosco-Abreojos and Santa Margarita faults. We define an upper bound of 12 Ma on the age of the pre-rift sediments and an age of ˜8 Ma for the onset of extension. Tomographic imaging reveals a very heterogeneous upper crust and a narrow, high-velocity zone that extends ˜40 km east of the paleotrench and is interpreted to be remnant oceanic crust.
Adaptively Adjusted Event-Triggering Mechanism on Fault Detection for Networked Control Systems.
Wang, Yu-Long; Lim, Cheng-Chew; Shi, Peng
2016-12-08
This paper studies the problem of adaptively adjusted event-triggering mechanism-based fault detection for a class of discrete-time networked control systems (NCSs), with applications to aircraft dynamics. By taking into account the fault occurrence detection progress and the fault occurrence probability, and by introducing an adaptively adjusted event-triggering parameter, a novel event-triggering mechanism is proposed to achieve efficient utilization of the communication network bandwidth. Both the sensor-to-control-station and the control-station-to-actuator network-induced delays are taken into account. The event-triggered sensor and the event-triggered control station are utilized simultaneously to establish new network-based closed-loop models for the NCS subject to faults. Based on the established models, the event-triggered simultaneous design of the fault detection filter (FDF) and controller is presented. A new algorithm for handling the adaptively adjusted event-triggering parameter is proposed. Performance analysis verifies the effectiveness of the adaptively adjusted event-triggering mechanism and of the simultaneous design of the FDF and controller.
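The flavor of a relative-threshold event trigger can be sketched as follows; the paper's adaptation law and fault detection filter design are more involved, so the update rule, plant, and numbers here are hypothetical illustrations only.

```python
import numpy as np

def event_triggered_run(x0, A, steps, sigma0=0.2):
    """Count transmissions under a relative-threshold event trigger."""
    x, x_sent, sigma, sent = x0, x0.copy(), sigma0, 0
    for _ in range(steps):
        err = x - x_sent
        if err @ err > sigma * (x @ x):        # triggering condition
            x_sent, sent = x.copy(), sent + 1  # transmit the current state
            sigma = min(0.5, 1.1 * sigma)      # raise the threshold after an event
        else:
            sigma = max(0.05, 0.95 * sigma)    # lower it while the network is quiet
        x = A @ x                              # plant update (no control, for brevity)
    return sent

A = np.array([[0.95, 0.10], [0.0, 0.90]])      # hypothetical stable plant
print("transmissions:", event_triggered_run(np.array([1.0, -1.0]), A, 100))
```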
Nelson, Alan R.; Personius, Stephen F.; Sherrod, Brian L.; Buck, Jason; Bradley, Lee-Ann; Henley, Gary; Liberty, Lee M.; Kelsey, Harvey M.; Witter, Robert C.; Koehler, R.D.; Schermer, Elizabeth R.; Nemser, Eliza S.; Cladouhos, Trenton T.
2008-01-01
As part of the effort to assess seismic hazard in the Puget Sound region, we map fault scarps on Airborne Laser Swath Mapping (ALSM, an application of LiDAR) imagery (with 2.5-m elevation contours on 1:4,000-scale maps) and show field and laboratory data from backhoe trenches across the scarps that are being used to develop a latest Pleistocene and Holocene history of large earthquakes on the Tacoma fault. We supplement previous Tacoma fault paleoseismic studies with data from five trenches on the hanging wall of the fault. In a new trench across the Catfish Lake scarp, broad folding of more tightly folded glacial sediment does not predate 4.3 ka because detrital charcoal of this age was found in stream-channel sand in the trench beneath the crest of the scarp. A post-4.3-ka age for scarp folding is consistent with previously identified uplift across the fault during AD 770-1160. In the trench across the younger of the two Stansberry Lake scarps, six maximum 14C ages on detrital charcoal in pre-faulting B and C soil horizons and three minimum ages on a tree root in post-faulting colluvium, limit a single oblique-slip (right-lateral) surface faulting event to AD 410-990. Stratigraphy and sedimentary structures in the trench across the older scarp at the same site show eroded glacial sediments, probably cut by a meltwater channel, with no evidence of post-glacial deformation. At the northeast end of the Sunset Beach scarps, charcoal ages in two trenches across graben-forming scarps give a close maximum age of 1.3 ka for graben formation. The ages that best limit the time of faulting and folding in each of the trenches are consistent with the time of the large regional earthquake in southern Puget Sound about AD 900-930.
NASA Astrophysics Data System (ADS)
Wu, Jianing; Yan, Shaoze; Xie, Liyang
2011-12-01
To address the impact of solar array anomalies, it is important to analyze solar array reliability. This paper establishes fault tree analysis (FTA) and fuzzy reasoning Petri net (FRPN) models of a solar array mechanical system and analyzes their reliability to find the mechanisms of solar array faults. The final truth degree (FTD) index and the cosine matching function (CMF) are employed to evaluate the importance and influence of different faults, and an improved reliability analysis method is developed based on the sorting of FTD and CMF values. An example is analyzed using the proposed method. The analysis results show that the harsh thermal environment and impacts caused by particles in space are the most important causes of solar array faults. Furthermore, other fault modes and the corresponding improvement methods are discussed. The results reported in this paper could be useful for spacecraft designers, particularly in the process of redesigning the solar array and scheduling its reliability growth plan.
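A minimal sketch of the truth-degree propagation underlying such an analysis: in fuzzy logic, OR gates take the maximum of their input truth degrees and AND gates the minimum. The tree shape and degrees below are hypothetical, loosely echoing the fault causes named above.

```python
def evaluate(node):
    """Propagate fuzzy truth degrees up a fault tree of (gate, children) tuples."""
    if isinstance(node, float):               # a basic event's truth degree
        return node
    gate, children = node
    degrees = [evaluate(c) for c in children]
    return max(degrees) if gate == "OR" else min(degrees)

solar_array_fault = (
    "OR", [
        ("AND", [0.8, 0.7]),   # harsh thermal environment and thermal cycling
        0.6,                   # particle impact
        ("AND", [0.3, 0.2]),   # deployment mechanism jam (hypothetical branch)
    ])
print("final truth degree of the top event:", evaluate(solar_array_fault))
```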
Seera, Manjeevan; Lim, Chee Peng; Ishak, Dahaman; Singh, Harapajan
2012-01-01
In this paper, a novel approach to detect and classify comprehensive fault conditions of induction motors using a hybrid fuzzy min-max (FMM) neural network and classification and regression tree (CART) is proposed. The hybrid model, known as FMM-CART, exploits the advantages of both FMM and CART for undertaking data classification and rule extraction problems. A series of real experiments is conducted, whereby the motor current signature analysis method is applied to form a database comprising stator current signatures under different motor conditions. The signal harmonics from the power spectral density are extracted as discriminative input features for fault detection and classification with FMM-CART. A comprehensive list of induction motor fault conditions, viz., broken rotor bars, unbalanced voltages, stator winding faults, and eccentricity problems, has been successfully classified using FMM-CART with good accuracy rates. The results are comparable, if not better, than those reported in the literature. Useful explanatory rules in the form of a decision tree are also elicited from FMM-CART to analyze and understand different fault conditions of induction motors.
USGS National Seismic Hazard Maps
Frankel, A.D.; Mueller, C.S.; Barnhard, T.P.; Leyendecker, E.V.; Wesson, R.L.; Harmsen, S.C.; Klein, F.W.; Perkins, D.M.; Dickman, N.C.; Hanson, S.L.; Hopper, M.G.
2000-01-01
The U.S. Geological Survey (USGS) recently completed new probabilistic seismic hazard maps for the United States, including Alaska and Hawaii. These hazard maps form the basis of the probabilistic component of the design maps used in the 1997 edition of the NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, prepared by the Building Seismic Safety Council and published by FEMA. The hazard maps depict peak horizontal ground acceleration and spectral response at 0.2, 0.3, and 1.0 sec periods, with 10%, 5%, and 2% probabilities of exceedance in 50 years, corresponding to return times of about 500, 1000, and 2500 years, respectively. In this paper we outline the methodology used to construct the hazard maps. There are three basic components to the maps. First, we use spatially smoothed historic seismicity as one portion of the hazard calculation. In this model, we apply the general observation that moderate and large earthquakes tend to occur near areas of previous small or moderate events, with some notable exceptions. Second, we consider large background source zones based on broad geologic criteria to quantify hazard in areas with little or no historic seismicity, but with the potential for generating large events. Third, we include the hazard from specific fault sources. We use about 450 faults in the western United States (WUS) and derive recurrence times from either geologic slip rates or the dating of pre-historic earthquakes from trenching of faults or other paleoseismic methods. Recurrence estimates for large earthquakes in New Madrid and Charleston, South Carolina, were taken from recent paleoliquefaction studies. We used logic trees to incorporate different seismicity models, fault recurrence models, Cascadia great earthquake scenarios, and ground-motion attenuation relations. We present disaggregation plots showing the contribution to hazard at four cities from potential earthquakes with various magnitudes and distances.
Microseismic Analysis of Fracture of an Intact Rock Asperity Traversing a Sawcut Fault
NASA Astrophysics Data System (ADS)
Mclaskey, G.; Lockner, D. A.
2017-12-01
Microseismic events carry information related to stress state, fault geometry, and other subsurface properties, but their relationship to large and potentially damaging earthquakes is not well defined. We conducted laboratory rock mechanics experiments that highlight the interaction between a sawcut fault and an asperity composed of an intact rock "pin". The sample is a 76 mm diameter cylinder of Westerly granite with a 21 mm diameter cylinder (the pin) of intact Westerly granite that crosses the sawcut fault. Upon loading to 80 MPa in a triaxial machine, we first observed a slip event that ruptured the sawcut fault, slipped about 35 mm, but was halted by the rock pin. With continued loading, the rock pin failed in a swarm of thousands of M -7 seismic events similar to the localized microcracking that occurs during the final fracture nucleation phase in an intact rock sample. Once the pin was fractured to a critical point, it permitted complete rupture events on the sawcut fault (stick-slip instabilities). No seismicity was detected on the sawcut fault plane until the pin was sheared. Subsequent slip events were preceded by tens of foreshocks, all located on the fault plane. We also identified an aseismic zone on the fault plane surrounding the fractured rock pin. A post-mortem analysis of the sample showed a thick gouge layer where the pin intersected the fault, suggesting that this gouge propped open the fault and prevented microseismic events in its vicinity. This experiment is an excellent case study in microseismicity, since the events separate neatly into three categories: slip on the sawcut fault, fracture of the intact rock pin, and off-fault seismicity associated with pin-related rock joints. The distinct locations, timing, and focal mechanisms of the different categories of microseismic events allow us to study how their occurrence is related to the mechanics of the deforming rock.
SIGPI. Fault Tree Cut Set System Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patenaude, C.J.
1992-01-13
SIGPI computes the probabilistic performance of complex systems by combining cut set or other binary product data with probability information on each basic event. SIGPI is designed to work with either coherent systems, where the system fails when certain combinations of components fail, or noncoherent systems, where at least one cut set occurs only if at least one component of the system is operating properly. The program can handle conditionally independent components, dependent components, or a combination of component types, and has been used to evaluate responses to environmental threats and seismic events. The three data types that can be input are cut set data in disjoint normal form, basic component probabilities for independent basic components, and mean and covariance data for statistically dependent basic components.
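As a point of reference for what such cut-set quantification involves, here is a minimal sketch for a coherent fault tree with independent basic events: the rare-event approximation sums the minimal cut set probabilities, and the min-cut upper bound refines it. The events and probabilities are hypothetical, and this is far simpler than SIGPI's handling of dependent components.

```python
from math import prod

p = {"A": 1e-3, "B": 5e-4, "C": 2e-3}   # hypothetical basic event probabilities
cut_sets = [{"A", "B"}, {"C"}]           # hypothetical minimal cut sets

# Probability of each cut set under independence of basic events.
cut_probs = [prod(p[e] for e in cs) for cs in cut_sets]
rare_event = sum(cut_probs)                            # first-order approximation
min_cut_ub = 1.0 - prod(1.0 - q for q in cut_probs)    # min-cut upper bound
print(f"rare-event: {rare_event:.3e}, min-cut bound: {min_cut_ub:.3e}")
```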
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frédéric
2013-07-01
We have conducted numerical simulation studies to assess the potential for injection-induced fault reactivation and notable seismic events associated with shale-gas hydraulic fracturing operations. The modeling is generally tuned towards conditions usually encountered in the Marcellus shale play in the Northeastern US at an approximate depth of 1500 m (~4,500 feet). Our modeling simulations indicate that when faults are present, micro-seismic events are possible, the magnitude of which is somewhat larger than that associated with micro-seismic events originating from regular hydraulic fracturing, because of the larger surface area that is available for rupture. The results of our simulations indicated fault rupture lengths of about 10 to 20 m, which in rare cases can extend to over 100 m, depending on the fault permeability, the in situ stress field, and the fault strength properties. In addition to a single-event rupture length of 10 to 20 m, repeated events and aseismic slip amounted to a total rupture length of 50 m, along with a shear offset displacement of less than 0.01 m. This indicates that the possibility of hydraulically induced fractures at great depth (thousands of meters) causing activation of faults and creation of a new flow path that can reach shallow groundwater resources (or even the surface) is remote. The expected low permeability of faults in producible shale is clearly a limiting factor for the possible rupture length and seismic magnitude. In fact, for a fault that is initially nearly impermeable, the only possibility of a larger fault slip event would be opening by hydraulic fracturing; this would allow pressure to penetrate the matrix along the fault and to reduce the frictional strength over a sufficiently large fault surface patch. However, our simulation results show that if the fault is initially impermeable, hydraulic fracturing along the fault results in numerous small micro-seismic events along with the propagation, effectively preventing larger events from occurring. Nevertheless, care should be taken, with continuous monitoring of induced seismicity during the entire injection process to detect any runaway fracturing along faults.
NASA Astrophysics Data System (ADS)
Quigley, Mark C.; Hughes, Matthew W.; Bradley, Brendon A.; van Ballegooy, Sjoerd; Reid, Catherine; Morgenroth, Justin; Horton, Travis; Duffy, Brendan; Pettinga, Jarg R.
2016-03-01
Seismic shaking and tectonic deformation during strong earthquakes can trigger widespread environmental effects. The severity and extent of a given effect relates to the characteristics of the causative earthquake and the intrinsic properties of the affected media. Documentation of earthquake environmental effects in well-instrumented, historical earthquakes can enable seismologic triggering thresholds to be estimated across a spectrum of geologic, topographic and hydrologic site conditions, and implemented into seismic hazard assessments, geotechnical engineering designs, palaeoseismic interpretations, and forecasts of the impacts of future earthquakes. The 2010-2011 Canterbury Earthquake Sequence (CES), including the moment magnitude (Mw) 7.1 Darfield earthquake and Mw 6.2, 6.0, 5.9, and 5.8 aftershocks, occurred on a suite of previously unidentified, primarily blind, active faults in the eastern South Island of New Zealand. The CES is one of Earth's best recorded historical earthquake sequences. The location of the CES proximal to and beneath a major urban centre enabled rapid and detailed collection of vast amounts of field, geospatial, geotechnical, hydrologic, biologic, and seismologic data, and allowed incremental and cumulative environmental responses to seismic forcing to be documented throughout a protracted earthquake sequence. The CES caused multiple instances of tectonic surface deformation (≥ 3 events), surface manifestations of liquefaction (≥ 11 events), lateral spreading (≥ 6 events), rockfall (≥ 6 events), cliff collapse (≥ 3 events), subsidence (≥ 4 events), and hydrological (10s of events) and biological shifts (≥ 3 events). The terrestrial area affected by strong shaking (e.g. peak ground acceleration (PGA) ≥ 0.1-0.3 g), and the maximum distances between earthquake rupture and environmental response (Rrup), both generally increased with increased earthquake Mw, but were also influenced by earthquake location and source characteristics. However, the severity of a given environmental response at any given site related predominantly to ground shaking characteristics (PGA, peak ground velocities) and site conditions (water table depth, soil type, geomorphic and topographic setting) rather than earthquake Mw. In most cases, the most severe liquefaction, rockfall, cliff collapse, subsidence, flooding, tree damage, and biologic habitat changes were triggered by proximal, moderate magnitude (Mw ≤ 6.2) earthquakes on blind faults. CES environmental effects will be incompletely preserved in the geologic record and variably diagnostic of spatial and temporal earthquake clustering. Liquefaction feeder dikes in areas of severe and recurrent liquefaction will provide the best preserved and potentially most diagnostic CES features. Rockfall talus deposits and boulders will be well preserved and potentially diagnostic of the strong intensity of CES shaking, but challenging to decipher in terms of single versus multiple events. Most other phenomena will be transient (e.g., distal groundwater responses), not uniquely diagnostic of earthquakes (e.g., flooding), or more ambiguous (e.g. biologic changes). Preliminary palaeoseismic investigations in the CES region indicate recurrence of liquefaction in susceptible sediments of 100 to 300 yr, recurrence of severe rockfall event(s) of ca. 6000 to 8000 yr, and recurrence of surface rupturing on the largest CES source fault of ca. 20,000 to 30,000 yr. 
These data highlight the importance of utilising multiple proxy datasets in palaeoearthquake studies. The severity of environmental effects triggered during the strongest CES earthquakes was as great as or equivalent to any historic or prehistoric effects recorded in the geologic record. We suggest that the shaking caused by rupture of local blind faults in the CES comprised a 'worst case' seismic shaking scenario for parts of the Christchurch urban area. Moderate Mw blind fault earthquakes may contribute the highest proportion of seismic hazard, be the most important drivers of landscape evolution, and dominate the palaeoseismic record in some locations on Earth, including locations distal from any identified active faults. A high scientific priority should be placed on improving the spatial extent and quality of 'off-fault' shaking records of strong earthquakes, particularly near major urban centres.
[Application of root cause analysis in healthcare].
Hsu, Tsung-Fu
2007-12-01
The main purpose of this study was to explore various aspects of root cause analysis (RCA), including its definition, rationale, main objective, implementation procedures, most common analysis methodology (fault tree analysis, FTA), and its advantages and methodologic limitations in regard to healthcare. Several adverse events that occurred at a certain hospital were also analyzed by the author using FTA as part of this study. RCA is a process employed to identify the basic and contributing causal factors underlying performance variations associated with adverse events. The rationale of RCA offers a systemic approach to improving patient safety that does not assign blame or liability to individuals. The four-step process involved in conducting an RCA includes: RCA preparation, proximate cause identification, root cause identification, and recommendation generation and implementation. FTA is a logical, structured process that can help identify potential causes of system failure before actual failures occur. Some advantages and significant methodologic limitations of RCA were discussed. Finally, we emphasized that errors stem principally from faults attributable to system design, practice guidelines, work conditions, and other human factors, which lead health professionals to commit negligence or make mistakes in healthcare. We must explore the root causes of medical errors to eliminate potential system failure factors. Also, a systemic approach is needed to resolve medical errors and move beyond a culture centered on assigning fault to individuals. By constructing a truly patient-centered safety environment in healthcare, we can help encourage clients to accept state-of-the-art healthcare services.
Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan
2014-03-01
Although reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in fault diagnosis of an induction motor when appropriate pre-processing is not performed. Therefore, a new feature extraction method based on boundary analysis in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bars and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset.
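For readers unfamiliar with the reconstructed-phase-space step, the sketch below shows a standard time-delay embedding of a one-dimensional signal and its rasterisation to an image whose outline a boundary detector could then trace. The toy signal, delay, and embedding dimension are assumptions, not the authors' settings.

    import numpy as np

    def delay_embed(x, dim=2, tau=10):
        # Takens time-delay embedding of a 1-D signal.
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    t = np.linspace(0, 1, 5000)
    current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(t.size)  # toy "phase current"

    traj = delay_embed(current, dim=2, tau=25)          # 2-D phase portrait
    img, _, _ = np.histogram2d(traj[:, 0], traj[:, 1], bins=64)
    occupied = img > 0   # binary image; a boundary detector would trace its outline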
The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters
NASA Technical Reports Server (NTRS)
Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)
1998-01-01
We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while the price of tree uplinks remains high.
Ruiz, Javier A.; Hayes, Gavin P.; Carrizo, Daniel; Kanamori, Hiroo; Socquet, Anne; Comte, Diana
2014-01-01
On 2010 March 11, a sequence of large, shallow continental crust earthquakes shook central Chile. Two normal faulting events with magnitudes around Mw 7.0 and Mw 6.9 occurred just 15 min apart, located near the town of Pichilemu. These kinds of large intraplate, inland crustal earthquakes are rare above the Chilean subduction zone, and it is important to better understand their relationship with the 2010 February 27, Mw 8.8, Maule earthquake, which ruptured the adjacent megathrust plate boundary. We present a broad seismological analysis of these earthquakes by using both teleseismic and regional data. We compute seismic moment tensors for both events via a W-phase inversion, and test sensitivities to various inversion parameters in order to assess the stability of the solutions. The first event, at 14 hr 39 min GMT, is well constrained, displaying a fault plane with strike of N145°E, and a preferred dip angle of 55°SW, consistent with the trend of aftershock locations and other published results. Teleseismic finite-fault inversions for this event show a large slip zone along the southern part of the fault, correlating well with the reported spatial density of aftershocks. The second earthquake (14 hr 55 min GMT) appears to have ruptured a fault branching southward from the previously ruptured fault, within the hanging wall of the first event. Modelling seismograms at regional to teleseismic distances (Δ > 10°) is quite challenging because the observed seismic wave fields of both events overlap, increasing apparent complexity for the second earthquake. We perform both point- and extended-source inversions at regional and teleseismic distances, assessing model sensitivities resulting from variations in fault orientation, dimension, and hypocentre location. Results show that the focal mechanism for the second event features a steeper dip angle and a strike rotated slightly clockwise with respect to the previous event. This kind of geological fault configuration, with secondary rupture in the hanging wall of a large normal fault, is commonly observed in extensional geological regimes. We propose that both earthquakes form part of a typical normal fault diverging splay, where the secondary fault connects to the main fault at depth. To ascertain more information on the spatial and temporal details of slip for both events, we gathered near-fault seismological and geodetic data. Through forward modelling of near-fault synthetic seismograms we build a kinematic k⁻² earthquake source model with spatially distributed slip on the fault that, to first order, explains both coseismic static displacement GPS vectors and short-period seismometer observations at the closest sites. As expected, the results for the first event agree with the focal mechanism derived from teleseismic modelling, with a magnitude Mw 6.97. Similarly, near-fault modelling for the second event suggests rupture along a normal fault, Mw 6.90, characterized by a steeper dip angle (dip = 74°) and a strike rotated clockwise (strike = 155°) with respect to the previous event.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stacey M. L. Hendrickson; April M. Whaley; Ronald L. Boring
The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Song-Hua; Chang, James Y. H.; Boring, Ronald L.
2010-03-01
The Office of Nuclear Regulatory Research (RES) at the US Nuclear Regulatory Commission (USNRC) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
A probabilistic risk assessment (PRA) was made of the Browns Ferry, Unit 1, nuclear plant as part of the Nuclear Regulatory Commission's Interim Reliability Evaluation Program (IREP). Specific goals of the study were to identify the dominant contributors to core melt, develop a foundation for more extensive use of PRA methods, expand the cadre of experienced PRA practitioners, and apply procedures for extension of IREP analyses to other domestic light water reactors. Event tree and fault tree analyses were used to estimate the frequency of accident sequences initiated by transients and loss of coolant accidents. External events such as floods, fires, earthquakes, and sabotage were beyond the scope of this study and were, therefore, excluded. From these sequences, the dominant contributors to probable core melt frequency were chosen. Uncertainty and sensitivity analyses were performed on these sequences to better understand the limitations associated with the estimated sequence frequencies. Dominant sequences were grouped according to common containment failure modes and corresponding release categories on the basis of comparison with analyses of similar designs rather than on the basis of detailed plant-specific calculations.
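The quantification step behind such event tree/fault tree studies reduces, for a single accident sequence, to multiplying the initiating-event frequency by the branch probabilities along the path. A toy Python sketch with invented numbers (not values from the Browns Ferry study):

    # Sequence: transient -> scram succeeds -> HPCI fails -> depressurisation fails.
    init_freq = 0.3          # transients per reactor-year (invented)
    p_scram_fails = 1e-5     # branch failure probabilities taken from
    p_hpci_fails = 1e-3      # supporting fault trees (all invented)
    p_depress_fails = 1e-2

    seq_freq = init_freq * (1 - p_scram_fails) * p_hpci_fails * p_depress_fails
    print(f"sequence frequency ~ {seq_freq:.1e} per reactor-year")  # ~3.0e-06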
Information processing requirements for on-board monitoring of automatic landing
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Karmarkar, J. S.
1977-01-01
A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through use of appropriate statistical tests. The time-to-correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.
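The abstract does not specify the statistical tests; one plausible sketch, assuming an innovations-based monitor with filter-supplied covariances, is a chi-square test on the normalised innovation squared:

    import numpy as np
    from scipy.stats import chi2

    def landing_anomaly(innovation, S, alpha=1e-3):
        # Flag an anomaly when the normalised innovation squared exceeds
        # a chi-square threshold at false-alarm rate alpha.
        nis = innovation @ np.linalg.solve(S, innovation)
        return nis > chi2.ppf(1 - alpha, df=innovation.size)

    nu = np.array([0.4, -0.1])       # hypothetical residual (e.g. height, sink rate)
    S = np.diag([0.05, 0.01])        # hypothetical innovation covariance
    print(landing_anomaly(nu, S))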
NASA Astrophysics Data System (ADS)
Woo, J. U.; Rhie, J.; Kang, T. S.; Kim, S.; Chai, G.; Cho, E.
2017-12-01
A complex inherited fault system is one of the key factors controlling mainshock occurrence and the pattern of an aftershock sequence. Many field studies have shown that the fault systems in the Korean Peninsula are complex because they were formed by various tectonic events since the Proterozoic. Apart from the fact that the mainshock is the largest event (ML 5.8) ever recorded in South Korea, the Gyeongju earthquake sequence shows particularly interesting features: an ML 5.1 event preceded the ML 5.8 event by 50 min, and the two are located close to each other (~1 km). In addition, an ML 4.5 event occurred 2-3 km away from the two events a week after the mainshock. Considering the reported focal mechanisms and hypocenters of the three major events, it is unlikely that the earthquake sequence occurred on a single fault plane. To depict the detailed fault geometry associated with the sequence, we precisely determine the relative locations of 1,400 aftershocks recorded by 27 broadband stations, which started to be deployed less than one hour after the mainshock. A double-difference algorithm is applied using relative travel-time measurements from a waveform cross-correlation method. Relocated hypocenters show that a major fault striking NE-SW and some minor faults are involved in the sequence. In particular, aftershocks immediately following the ML 4.5 event seem to occur on a fault striking NW-SE, orthogonal to the strike of the major fault. We suggest that the Gyeongju earthquake sequence resulted from stress transfer controlled by the complex inherited fault system in this region.
RSRM Nozzle Anomalous Throat Erosion Investigation Overview
NASA Technical Reports Server (NTRS)
Clinton, R. G., Jr.; Wendel, Gary M.
1998-01-01
In September 1996, anomalous pocketing erosion was observed in the aft end of the throat ring of the nozzle of one of the reusable solid rocket motors (RSRM 56B) used on NASA's Space Transportation System (STS) mission STS-79. The RSRM throat ring is constructed of bias tape-wrapped carbon cloth/phenolic (CCP) ablative material. A comprehensive investigation revealed necessary and sufficient conditions for occurrence of the pocketing event and provided rationale that the solid rocket motors for the subsequent mission, STS-80, were safe to fly. The nozzles of both of these motors also exhibited anomalous erosion similar to, but less extensive than, that observed on STS-79. Subsequent to this flight, the investigation to identify both the specific causes and the corrective actions for elimination of the necessary and sufficient conditions for the pocketing erosion was intensified. A detailed fault tree approach was utilized to examine potential material and process contributors to the anomalous performance. The investigation involved extensive constituent and component material property testing, pedigree assessments, supplier audits, process audits, full-scale processing test article fabrication and evaluation, thermal and thermostructural analyses, nondestructive evaluation, and material performance tests conducted using hot-fire simulation in laboratory test beds and subscale and full-scale solid rocket motor static test firings. This presentation will provide an overview of the observed anomalous nozzle erosion and the comprehensive, fault-tree based investigation conducted to resolve this issue.
Large mid-Holocene and late Pleistocene earthquakes on the Oquirrh fault zone, Utah
Olig, S.S.; Lund, W.R.; Black, B.D.
1994-01-01
The Oquirrh fault zone is a range-front normal fault that bounds the east side of Tooele Valley and it has long been recognized as a potential source for large earthquakes that pose a significant hazard to population centers along the Wasatch Front in central Utah. Scarps of the Oquirrh fault zone offset the Provo shoreline of Lake Bonneville, and previous studies of scarp morphology suggested that the most recent surface-faulting earthquake occurred between 9000 and 13,500 years ago. Based on a potential rupture length of 12 to 21 km from previous mapping, moment magnitude (Mw) estimates for this event range from 6.3 to 6.6. In contrast, our results from detailed mapping and trench excavations at two sites indicate that the most-recent event actually occurred between 4300 and 6900 yr B.P. (4800 and 7900 cal B.P.) and net vertical displacements were 2.2 to 2.7 m, much larger than expected considering estimated rupture lengths for this event. Empirical relations between magnitude and displacement yield Mw 7.0 to 7.2. A few short, discontinuous fault scarps as far south as Stockton, Utah have been identified in a recent mapping investigation, and our results suggest that they may be part of the Oquirrh fault zone, increasing the total fault length to 32 km. These results emphasize the importance of integrating stratigraphic and geomorphic information in fault investigations for earthquake hazard evaluations. At both the Big Canyon and Pole Canyon sites, trenches exposed faulted Lake Bonneville sediments and thick wedges of fault-scarp derived colluvium associated with the most-recent event. Bulk sediment samples from a faulted debris-flow deposit at the Big Canyon site yield radiocarbon ages of 7650 ± 90 yr B.P. and 6840 ± 100 yr B.P. (all lab errors are ±1σ). A bulk sediment sample from unfaulted fluvial deposits that bury the fault scarp yields a radiocarbon age estimate of 4340 ± 60 yr B.P. Stratigraphic evidence for a pre-Bonneville lake cycle penultimate earthquake was exposed at the Pole Canyon site, and although displacement is not well constrained, the penultimate event colluvial wedge is comparable in size to the most-recent event wedges. Charcoal from a marsh deposit, which overlies the penultimate event colluvium and was deposited during the Bonneville lake cycle transgression, yields an AMS radiocarbon age of 20,370 ± 120 yr B.P. Multiple charcoal fragments from fluvial deposits faulted during the penultimate event yield an AMS radiocarbon age of 26,200 ± 200 yr B.P. Indirect stratigraphic evidence for an antepenultimate event was also exposed at Pole Canyon. Charcoal from fluvial sediments overlying the eroded free-face for this event yields an AMS age of 33,950 ± 1160 yr B.P., providing a minimum limiting age on the antepenultimate event. Ages for the past two events on the Oquirrh fault zone yield a recurrence interval of 13,300 to 22,100 radiocarbon years and estimated slip rates of 0.1 to 0.2 mm/yr. Temporal clustering of earthquakes on the nearby Wasatch fault zone in the late Holocene does not appear to have influenced activity on the Oquirrh fault zone. However, consistent with findings on the Wasatch fault zone and with some other Quaternary faults within the Bonneville basin, we found evidence for higher rates of activity during interpluvial periods than during the Bonneville lake cycle. If a causal relation between rates of strain release along faults and changes in loads imposed by the lake does exist, it may have implications for fault dips and mechanics.
However, our data are only complete for one deep-lake cycle (the past 32,000 radiocarbon years), and whether this pattern persisted during the previous Cutler Dam and Little Valley deep-lake cycles is unknown.
Parallel Fault Strands at 9-km Depth Resolved on the Imperial Fault, Southern California
NASA Astrophysics Data System (ADS)
Shearer, P. M.
2001-12-01
The Imperial Fault is one of the most active faults in California, with several M>6 events during the 20th century and geodetic results suggesting that it currently carries almost 80% of the total plate motion between the Pacific and North American plates. We apply waveform cross-correlation to a group of ~1500 microearthquakes along the Imperial Fault and find that about 25% of the events form similar event clusters. Event relocation based on precise differential times among events in these clusters reveals multiple streaks of seismicity up to 5 km in length that are at a nearly constant depth of ~9 km but are spaced about 0.5 km apart in map view. These multiple streaks are unlikely to be a location artifact because they are spaced more widely than the computed location errors and different streaks can be resolved within individual similar event clusters. The streaks are parallel to the mapped surface rupture of the 1979 Mw=6.5 Imperial Valley earthquake. No obvious temporal migration of the event locations is observed. Limited focal mechanism data for the events within the streaks are consistent with right-lateral slip on vertical fault planes. The seismicity not contained in similar event clusters cannot be located as precisely; our locations for these events scatter between 7 and 11 km depth, but it is possible that their true locations could be much more tightly clustered. The observed streaks have some similarities to those previously observed in northern California along the San Andreas and Hayward faults (e.g., Rubin et al., 1999; Waldhauser et al., 1999); however, those streaks were imaged within a single fault plane rather than the multiple faults resolved on the Imperial Fault. The apparent constant depth of the Imperial streaks is similar to that seen in Hawaii at much shallower depth by Gillard et al. (1996). Geodetic results (e.g., Lyons et al., 2001) suggest that the Imperial Fault is currently slipping at 45 mm/yr below a locked portion that extends to ~10 km depth. We interpret our observed seismicity streaks as representing activity on multiple fault strands at transition depths between the locked shallow part of the Imperial Fault and the slipping portion at greater depths. It is likely that these strands extend into the aseismic region below, suggesting that the lower crustal shear zone is at least 2 km wide.
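The core measurement behind such similar-event relocation is the differential arrival time estimated from the peak of a waveform cross-correlation. A minimal sketch on synthetic waveforms (sub-sample interpolation and quality weighting, which real studies use, are omitted):

    import numpy as np

    def cc_delay(w1, w2, dt):
        # Differential time from the peak of the normalised cross-correlation.
        w1 = (w1 - w1.mean()) / w1.std()
        w2 = (w2 - w2.mean()) / w2.std()
        cc = np.correlate(w1, w2, mode="full")
        lag = int(np.argmax(cc)) - (len(w2) - 1)
        return lag * dt, cc.max() / len(w1)

    dt = 0.01                                      # 100 Hz sampling (illustrative)
    t = np.arange(0, 2, dt)
    pulse = np.exp(-((t - 0.7) / 0.05) ** 2)
    ev_a = pulse + 0.02 * np.random.randn(t.size)
    ev_b = np.roll(pulse, 13) + 0.02 * np.random.randn(t.size)   # shifted 0.13 s
    print(cc_delay(ev_a, ev_b, dt))                # delay ~ -0.13 s, CC near 1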
Fault tree safety analysis of a large Li/SOCl2 spacecraft battery
NASA Technical Reports Server (NTRS)
Uy, O. Manuel; Maurer, R. H.
1987-01-01
The results of the safety fault tree analysis of the eight-module, 576 F-cell Li/SOCl2 battery on the spacecraft and in the integration and test environment prior to launch on the ground are presented. The analysis showed that with the right combination of blocking diodes, electrical fuses, thermal fuses, thermal switches, cell balance, cell vents, and battery module vents, the probability of a single cell or a 72-cell module exploding can be reduced to 0.000001, essentially the probability of explosion for unexplained reasons.
NASA Astrophysics Data System (ADS)
Ratchkovski, N. A.; Hansen, R. A.; Kore, K. R.
2003-04-01
The largest earthquake ever recorded on the Denali fault system (magnitude 7.9) struck central Alaska on November 3, 2002. It was preceded by a magnitude 6.7 earthquake on October 23. This earlier earthquake and its zone of aftershocks were located ~20 km to the west of the 7.9 quake. Aftershock locations and surface slip observations from the 7.9 quake indicate that the rupture was predominantly unilateral in the eastward direction. Geologists mapped a ~300-km-long rupture and measured maximum offsets of 8.8 meters. The 7.9 event ruptured three different faults. The rupture began on the northeast-trending Susitna Glacier Thrust fault, a splay fault south of the Denali fault. The rupture then transferred to the Denali fault and propagated eastward for 220 km. At about 143°W the rupture moved onto the adjacent southeast-trending Totschunda fault and propagated for another 55 km. The cumulative length of the 6.7 and 7.9 aftershock zones along the Denali and Totschunda faults is about 380 km. The earthquakes were recorded and processed by the Alaska Earthquake Information Center (AEIC). The AEIC acquires and processes data from the Alaska Seismic Network, consisting of over 350 seismograph stations. Nearly 40 of these sites are equipped with broadband sensors, some of which also have strong-motion sensors. The rest of the stations are either 1- or 3-component short-period instruments. The data from these stations are collected, processed and archived at the AEIC. The AEIC staff installed a temporary seismic network of 6 instruments following the 6.7 earthquake and an additional 20 stations following the 7.9 earthquake. Prior to the 7.9 Denali fault event, the AEIC was locating 35 to 50 events per day. After the event, the processing load increased to over 300 events per day during the first week following the event. In this presentation, we present and interpret the aftershock location patterns, first-motion focal mechanism solutions, and regional seismic moment tensors for the larger events. We used the double-difference method to relocate aftershocks of both the 6.7 and 7.9 events. The relocated aftershocks indicate complex faulting along the rupture zone. The aftershocks are located not only along the main rupture zone, but also illuminate multiple splay faults north and south of the Denali fault. We calculated principal stress directions along the Denali fault both before and after the 7.9 event from the focal mechanisms. The stress orientations before and after the event are nearly identical. The maximum horizontal compressive stress is nearly normal to the trace of the Denali fault and rotates gradually from a NW orientation at the western end of the rupture zone to a NE orientation near the junction with the Totschunda fault.
Paleoseismological surveys on the Hinagu fault zone in Kumamoto, central Kyushu, Japan
NASA Astrophysics Data System (ADS)
Azuma, T.
2017-12-01
The Hinagu fault zone is located south of the Futagawa fault zone, which formed the main part of the source fault of the 2016 Kumamoto earthquake of Mj 7.3. The northernmost part of the Hinagu fault zone also ruptured in the 2016 event, and surface ruptures with right-lateral displacement of up to ca. 50 cm appeared. Seismicity along the central part of the Hinagu fault increased just after the 2016 Kumamoto earthquake. It seems that the Hinagu fault zone could produce the next large earthquake in the near future, although such an event has not occurred yet. The Headquarters for Earthquake Research Promotion (HERP) conducted active fault surveys on the Hinagu fault zone to assess the probability of occurrence of the next faulting event. The Hinagu fault zone is composed of three fault segments: Takano-Shirahata, Hinagu, and Yatsushiro Bay. The Yatsushiro Bay segment is an offshore fault. In FY2016, we conducted paleoseismological trenching surveys at two sites (Yamaide, Minamibeta) and offshore drilling. Those results showed evidence that the recurrence interval of the Hinagu fault zone is rather short and that the last faulting event occurred around 1500-2000 yrs BP. In FY2017, we are planning another trenching survey on the southern part of the central segment, where Yatsushiro City is located close to the fault.
Multiple large earthquakes in the past 1500 years on a fault in metropolitan Manila, the Philippines
Nelson, A.R.; Personius, S.F.; Rimando, R.E.; Punongbayan, R.S.; Tungol, N.; Mirabueno, H.; Rasdas, A.
2000-01-01
The first 14C-based paleoseismic study of an active fault in the Philippines shows that a right-lateral fault on the northeast edge of metropolitan Manila poses a greater seismic hazard than previously thought. Faulted hillslope colluvium, stream-channel alluvium, and debris-flow deposits exposed in trenches across the northern part of the west Marikina Valley fault record two or three surface-faulting events. Three eroded, clay-rich soil B horizons suggest thousands of years between surface faulting events, whereas 14C ages on detrital charcoal constrain the entire stratigraphic sequence to the past 1300-1700 years. We rely on the 14C ages to infer faulting recurrence of hundreds rather than thousands of years. Minimal soil development and modern 14C ages from colluvium overlying a faulted debris-flow deposit in a nearby stream exposure point to a historic age for a probable third or fourth (most recent) faulting event.
Li, Yunji; Wu, QingE; Peng, Li
2018-01-23
In this paper, a synthesized design of a fault-detection filter and fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the phenomena of randomly occurring deception attacks. To achieve a fault-detection residual that is sensitive only to faults while remaining robust to disturbances, a coordinate transformation approach is exploited. This approach transforms the considered system into two subsystems, with the unknown disturbances removed from one of them. The gain of the fault-detection filter is derived by minimizing an upper bound on the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain the fault estimator gains as well as to guarantee fault estimator performance. Furthermore, a corresponding event-triggered sensor data transmission scheme is presented to improve the working life of the wireless sensor node when measurement information is transmitted aperiodically. Finally, a scaled version of an industrial system consisting of a local PC, remote estimator and wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capability of fault detection is guaranteed when the event condition is triggered.
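As a toy illustration of the transmission model (not the authors' filter design), the sketch below sends a measurement only when an event condition on the measurement change fires, and corrupts each transmission with a Bernoulli-distributed deception attack; the threshold and attack probability are assumed values.

    import numpy as np

    rng = np.random.default_rng(0)
    threshold = 0.5    # event-trigger threshold (assumed)
    p_attack = 0.1     # Bernoulli attack probability (assumed)

    x, last_sent = 0.0, 0.0
    for k in range(100):
        x = 0.95 * x + rng.normal(scale=0.2)      # toy plant state
        y = x + rng.normal(scale=0.05)            # noisy measurement
        if abs(y - last_sent) > threshold:        # event condition fires
            attacked = rng.random() < p_attack    # randomly occurring attack
            y_sent = y + rng.normal(scale=2.0) if attacked else y
            last_sent = y_sent
            # The remote fault-detection filter would compare y_sent with its
            # predicted output to form the residual at these instants only.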
Isotropic events observed with a borehole array in the Chelungpu fault zone, Taiwan.
Ma, Kuo-Fong; Lin, Yen-Yu; Lee, Shiann-Jong; Mori, Jim; Brodsky, Emily E
2012-07-27
Shear failure is the dominant mode of earthquake-causing rock failure along faults. High fluid pressure can also potentially induce rock failure by opening cavities and cracks, but an active example of this process has not been directly observed in a fault zone. Using borehole array data collected along the low-stress Chelungpu fault zone, Taiwan, we observed several small seismic events (I-type events) in a fluid-rich permeable zone directly below the impermeable slip zone of the 1999 moment magnitude 7.6 Chi-Chi earthquake. Modeling of the events suggests an isotropic, nonshear source mechanism likely associated with natural hydraulic fractures. These seismic events may be associated with the formation of veins and other fluid features often observed in rocks surrounding fault zones and may be similar to artificially induced hydraulic fracturing.
Derailment-based Fault Tree Analysis on Risk Management of Railway Turnout Systems
NASA Astrophysics Data System (ADS)
Dindar, Serdar; Kaewunruen, Sakdirat; An, Min; Gigante-Barrera, Ángel
2017-10-01
Railway turnouts are fundamental mechanical infrastructure that allows rolling stock to divert from one direction to another. Because turnouts comprise a large number of engineering subsystems, e.g. track, signalling, earthworks, these subsystems can fail through various kinds of failure mechanisms, and any such failure could cause a catastrophic event. A derailment, one of the most undesirable events in railway operation, is rare but often damages rolling stock and railway infrastructure, disrupts service, and has the potential to cause casualties and even loss of life. As a result, it is important that a well-designed risk analysis is performed to create awareness of hazards and to identify which parts of the system may be at risk. This study focuses on all types of environment-based failures resulting from the numerous contributing factors noted officially in accident reports. The risk analysis is designed to help industry minimise the occurrence of accidents at railway turnouts. The methodology relies on accurate assessment of derailment likelihood and is based on a statistical, multiple-factor-integrated accident rate analysis. The study establishes product risks and faults and uses Boolean algebra to show the impact of potential failure processes.
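The Boolean-algebra step the abstract mentions can be sketched as a top-down expansion of the tree into cut sets followed by minimisation. The basic events below are invented placeholders, not the events of the study's tree:

    from itertools import product

    def cut_sets(gate):
        kind, rest = gate[0], gate[1:]
        if kind == "basic":
            return [frozenset([rest[0]])]
        child_sets = [cut_sets(g) for g in rest]
        if kind == "OR":
            return [cs for group in child_sets for cs in group]
        # AND: combine one cut set from each child.
        return [frozenset().union(*combo) for combo in product(*child_sets)]

    def minimal(sets):
        uniq = set(sets)
        return [s for s in uniq if not any(t < s for t in uniq)]

    tree = ("OR",
            ("AND", ("basic", "points_misaligned"), ("basic", "detection_failed")),
            ("basic", "broken_stock_rail"))
    print(minimal(cut_sets(tree)))
    # -> cut sets {broken_stock_rail} and {points_misaligned, detection_failed}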
Initiating Event Analysis of a Lithium Fluoride Thorium Reactor
NASA Astrophysics Data System (ADS)
Geraci, Nicholas Charles
The primary purpose of this study is to perform an Initiating Event Analysis for a Lithium Fluoride Thorium Reactor (LFTR) as the first step of a Probabilistic Safety Assessment (PSA). The major objective of the research is to compile a list of key initiating events capable of resulting in failure of safety systems and release of radioactive material from the LFTR. Due to the complex interactions between engineering design, component reliability and human reliability, probabilistic safety assessments are most useful when the scope is limited to a single reactor plant. Thus, this thesis studies the LFTR design proposed by Flibe Energy. An October 2015 Electric Power Research Institute report on the Flibe Energy LFTR asked "what-if?" questions of subject matter experts and compiled a list of key hazards with the most significant consequences to the safety or integrity of the LFTR. The potential exists for unforeseen hazards to pose additional risk for the LFTR, but the scope of this thesis is limited to evaluation of those key hazards already identified by Flibe Energy. These key hazards are the starting point for the Initiating Event Analysis performed in this thesis. Engineering evaluation and technical study of the plant using a literature review and comparison to reference technology revealed four hazards with high potential to cause reactor core damage. To determine the initiating events resulting in realization of these four hazards, reference was made to previous PSAs and existing NRC and EPRI initiating event lists. Finally, fault tree and event tree analyses were conducted, completing the logical classification of initiating events. Results are qualitative rather than quantitative, owing to the early stage of the system design descriptions and the lack of operating experience or data for the LFTR. In summary, this thesis analyzes initiating events using previous research and inductive and deductive reasoning through traditional risk management techniques to arrive at a list of key initiating events that can be used to address vulnerabilities during the design phases of LFTR development.
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized by different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
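In miniature, the computation such tools perform is the solution of a Markov model's forward equations, dp/dt = pQ, over a mission time, after which the death-state probability is read off. A Python sketch with illustrative rates (note the stiffness: the recovery rate is many orders of magnitude larger than the fault rate):

    import numpy as np
    from scipy.linalg import expm

    lam = 1e-4   # fault arrival rate, per hour (illustrative)
    mu  = 3.6e3  # reconfiguration rate, per hour (illustrative)

    # States: 0 nominal, 1 fault awaiting recovery, 2 system failure (absorbing).
    Q = np.array([[-lam,         lam,  0.0],
                  [  mu, -(mu + lam),  lam],
                  [ 0.0,         0.0,  0.0]])

    p0 = np.array([1.0, 0.0, 0.0])
    T = 10.0                                  # mission time, hours
    p_fail = (p0 @ expm(Q * T))[2]            # probability of the death state
    print(f"P(system failure by {T} h) ~ {p_fail:.1e}")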
Stress before and after the 2002 Denali fault earthquake
Wesson, R.L.; Boyd, O.S.
2007-01-01
Spatially averaged, absolute deviatoric stress tensors along the faults ruptured during the 2002 Denali fault earthquake, both before and after the event, are derived, using a new method, from estimates of the orientations of the principal stresses and the stress change associated with the earthquake. Stresses are estimated in three regions along the Denali fault, one of which also includes the Susitna Glacier fault, and one region along the Totschunda fault. Estimates of the spatially averaged shear stress before the earthquake resolved onto the faults that ruptured during the event range from near 1 MPa to near 4 MPa. Shear stresses estimated along the faults in all these regions after the event are near zero (0 ± 1 MPa). These results suggest that deviatoric stresses averaged over a few tens of km along strike are low, and that the stress drop during the earthquake was complete or nearly so.
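The final resolution step, projecting a stress tensor onto a fault plane to obtain normal and shear components, can be sketched as follows; the tensor values and fault orientation are illustrative, not the paper's estimates.

    import numpy as np

    S = np.diag([3.0, 0.0, -3.0])   # deviatoric stress in N-E-down axes, MPa (illustrative)
    strike, dip = np.radians(45.0), np.radians(90.0)   # vertical fault striking NE

    # Unit normal of the fault plane (Aki & Richards convention, x=N, y=E, z=down).
    n = np.array([-np.sin(dip) * np.sin(strike),
                   np.sin(dip) * np.cos(strike),
                  -np.cos(dip)])

    t = S @ n                                # traction on the plane
    sigma_n = t @ n                          # normal component
    tau = np.linalg.norm(t - sigma_n * n)    # shear component
    print(f"normal {sigma_n:.2f} MPa, shear {tau:.2f} MPa")   # 1.50, 1.50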
NASA Astrophysics Data System (ADS)
Lin, Aiming; Sano, Mikako; Wang, Maomao; Yan, Bing; Bian, Di; Fueta, Ryoji; Hosoya, Takashi
2017-07-01
The Mw 6.2 (Mj 6.8) Nagano (Japan) earthquake of 22 November 2014 produced a 9.3-km-long surface rupture zone with a thrust-dominated displacement of up to 1.5 m, which duplicated the pre-existing Kamishiro Fault along the Itoigawa-Shizuoka Tectonic Line (ISTL), the plate boundary between the Eurasian and North American plates, northern Nagano Prefecture, central Japan. To characterize the activity of the seismogenic fault zone, we conducted a paleoseismic study of the Kamishiro Fault. Field investigations and trench excavations revealed that seven morphogenic paleohistorical earthquakes (E2-E8) prior to the 2014 Mw 6.2 Nagano earthquake (E1) have occurred on the Kamishiro Fault during the last ca. 6000 years. Three of these events (E2-E4) are well constrained and correspond to historical earthquakes occurring in the last ca. 1200 years. This suggests an average recurrence interval of ca. 300-400 years on the seismogenic fault of the 2014 Kamishiro earthquake in the past 1200 years. The most recent event prior to the 2014 earthquake (E1) is E2, and the penultimate and antepenultimate faulting events are E3 and E4, respectively. The penultimate faulting event (E3) occurred during the period of AD 1800-1400 and is associated with the 1791 Mw 6.8 earthquake. The antepenultimate faulting event (E4) is inferred to have occurred during the period of ca. AD 1000-700, likely corresponding to the AD 841 Mw 6.5 earthquake. The oldest faulting event (E8) in the study area is thought to have occurred ca. 5600-6000 years ago. The throw rate during the early Holocene is estimated to be 1.2-3.3 mm/a (average, 2.2 mm/a), with an average characteristic offset of 0.7-1.1 m produced by individual events. When compared with active intraplate faults on Honshu Island, Japan, the slip rates and recurrence interval estimated for morphogenic earthquakes on the Kamishiro Fault along the ISTL appear high and short, respectively. This indicates that present activity on this fault is closely related to seismic faulting along the plate boundary between the Eurasian and North American plates.
Lindvall, S.C.; Rockwell, T.K.; Dawson, T.E.; Helms, J.G.; Bowman, K.W.
2002-01-01
We conducted paleoseismic studies in a closed depression along the San Andreas fault on the north flank of Frazier Mountain near Frazier Park, California. We recognized two earthquake ruptures in our trench exposure and interpreted the most recent rupture, event 1, to represent the historical 1857 earthquake. We also exposed evidence of an earlier surface rupture, event 2, along an older group of faults that did not rerupture during event 1. Radiocarbon dating of the stratigraphy above and below the earlier event constrains its probable age to between A.D. 1460 and 1600. Because we documented continuous, unfaulted stratigraphy between the earlier event horizon and the youngest event horizon in the portion of the fault zone exposed, we infer event 2 to be the penultimate event. We observed no direct evidence of an 1812 earthquake in our exposures. However, we cannot preclude the presence of this event at our site due to limited age control in the upper part of the section and the possibility of other fault strands beyond the limits of our exposures. Based on overlapping age ranges, event 2 at Frazier Mountain may correlate with event B at the Bidart fan site in the Carrizo Plain to the northwest and events V and W4 at Pallett Creek and Wrightwood, respectively, to the southeast. If the events recognized at these multiple sites resulted from the same surface rupture, then it appears that the San Andreas fault has repeatedly failed in large ruptures similar in extent to 1857.
NASA Astrophysics Data System (ADS)
Okumura, K.
2011-12-01
Accurate location and geometry of seismic sources are critical for estimating strong ground motion. A complete and precise rupture history is also critical for estimating the probability of future events. In order to better forecast future earthquakes and to reduce seismic hazards, we should consider all options and choose the most likely parameters. Multiple options for logic trees are acceptable only after thorough examination of contradicting estimates, and should not result from easy compromise or suspension of judgment. In the process of preparation and revision of the Japanese probabilistic and deterministic earthquake hazard maps by the Headquarters for Earthquake Research Promotion since 1996, many decisions were made to select plausible parameters, but many contradicting estimates have been left without thorough examination. There are several highly active faults in central Japan, such as the Itoigawa-Shizuoka Tectonic Line active fault system (ISTL), the West Nagano Basin fault system (WNBF), the Inadani fault system (INFS), and the Atera fault system (ATFS). The highest slip rate and the shortest recurrence interval are ~1 cm/yr and 500 to 800 years, respectively, and the estimated maximum magnitude is 7.5 to 8.5. These faults are very hazardous because almost the entire population and industry are located above the faults within tectonic depressions. As to fault location, most uncertainty arises from the interpretation of geomorphic features. Geomorphological interpretation without geological and structural insight often leads to incorrect mapping. Although a non-existent longer fault may seem a safer estimate, incorrectness harms the reliability of the forecast; it does not greatly affect strong-motion estimates, but it is misleading for surface-displacement issues. Fault geometry, on the other hand, is very important for estimating intensity distribution. For the middle portion of the ISTL, fast-moving left-lateral strike-slip of up to 1 cm/yr is obvious. Recent seismicity, possibly induced by the 2011 Tohoku earthquake, shows pure strike-slip. However, thrusts are modeled from seismic profiles and gravity anomalies. Therefore, two contradicting models are presented for strong-motion estimates. There should be a unique solution for the geometry, which will be discussed. As to the rupture history, there is plenty of paleoseismological evidence that supports segmentation of the faults above. However, in most fault zones, the largest and possibly less frequent earthquakes are modeled. Segmentation and modeling of coming earthquakes should be more carefully examined without leaving them in contradiction.
Fault tree analysis of most common rolling bearing tribological failures
NASA Astrophysics Data System (ADS)
Vencl, Aleksandar; Gašić, Vlada; Stojanović, Blaža
2017-02-01
Wear as a tribological process has a major influence on the reliability and life of rolling bearings. Field examinations of bearing failures due to wear indicate possible causes and point to the measures necessary for wear reduction or elimination. Wear itself is a very complex process initiated by the action of different mechanisms, and it can be manifested by different wear types which are often related. However, the dominant type of wear can usually be determined approximately. The paper presents a classification of the most common bearing damages according to the dominant wear type, i.e. abrasive wear, adhesive wear, surface fatigue wear, erosive wear, fretting wear and corrosive wear. The wear types are correlated with the terms used in the ISO 15243 standard. Each wear type is illustrated with an appropriate photograph, and for each wear type an appropriate description of causes and manifestations is presented. The possible causes of rolling bearing failure are used for a fault tree analysis (FTA), performed to determine the root causes of bearing failures. The constructed fault tree diagram for rolling bearing failure can be a useful tool for maintenance engineers.
The 2006-2007 Kuril Islands great earthquake sequence
Lay, T.; Kanamori, H.; Ammon, C.J.; Hutko, Alexander R.; Furlong, K.; Rivera, L.
2009-01-01
The southwestern half of a ~500 km long seismic gap in the central Kuril Island arc subduction zone experienced two great earthquakes with extensive preshock and aftershock sequences in late 2006 to early 2007. The nature of seismic coupling in the gap had been uncertain due to the limited historical record of prior large events and the presence of distinctive upper plate, trench and outer rise structures relative to adjacent regions along the arc that have experienced repeated great interplate earthquakes in the last few centuries. The intraplate region seaward of the seismic gap had several shallow compressional events during the preceding decades (notably an MS 7.2 event on 16 March 1963), leading to speculation that the interplate fault was seismically coupled. This issue was partly resolved by failure of the shallow portion of the interplate megathrust in an MW = 8.3 thrust event on 15 November 2006. This event ruptured ~250 km along the seismic gap, just northeast of the great 1963 Kuril Island (Mw = 8.5) earthquake rupture zone. Within minutes of the thrust event, intense earthquake activity commenced beneath the outer wall of the trench seaward of the interplate rupture, with the larger events having normal-faulting mechanisms. An unusual double band of interplate and intraplate aftershocks developed. On 13 January 2007, an MW = 8.1 extensional earthquake ruptured within the Pacific plate beneath the seaward edge of the Kuril trench. This event is the third largest normal-faulting earthquake seaward of a subduction zone on record, and its rupture zone extended to at least 33 km depth and paralleled most of the length of the 2006 rupture. The 13 January 2007 event produced stronger shaking in Japan than the larger thrust event, as a consequence of higher short-period energy radiation from the source. The great event aftershock sequences were dominated by the expected faulting geometries; thrust faulting for the 2006 rupture zone, and normal faulting for the 2007 rupture zone. A large intraplate compressional event occurred on 15 January 2009 (Mw = 7.4) near 45 km depth, below the rupture zone of the 2007 event and in the vicinity of the 16 March 1963 compressional event. The fault geometry, rupture process and slip distributions of the two great events are estimated using very broadband teleseismic body and surface wave observations. The occurrence of the thrust event in the shallowest portion of the interplate fault in a region with a paucity of large thrust events at greater depths suggests that the event removed most of the slip deficit on this portion of the interplate fault. This great earthquake doublet demonstrates the heightened seismic hazard posed by induced intraplate faulting following large interplate thrust events. Future seismic failure of the remainder of the seismic gap appears viable, with the northeastern region that has also experienced compressional activity seaward of the megathrust warranting particular attention.
Choi, Yun Ho; Yoo, Sung Jin
2018-06-01
This paper investigates the event-triggered decentralized adaptive tracking problem of a class of uncertain interconnected nonlinear systems with unexpected actuator failures. It is assumed that local control signals are transmitted to local actuators with time-varying faults whenever predefined conditions for triggering events are satisfied. Compared with the existing control-input-based event-triggering strategy for adaptive control of uncertain nonlinear systems, the aim of this paper is to propose a tracking-error-based event-triggering strategy in the decentralized adaptive fault-tolerant tracking framework. The proposed approach can relax drastic changes in control inputs caused by actuator faults in the existing triggering strategy. The stability of the proposed event-triggering control system is analyzed in the Lyapunov sense. Finally, simulation comparisons of the proposed and existing approaches are provided to show the effectiveness of the proposed theoretical result in the presence of actuator faults.
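The contrast the abstract draws can be reduced to a toy rule: transmit when the tracking error, rather than the control input, has drifted from its last transmitted value by more than a threshold. A sketch with assumed values:

    def make_trigger(threshold):
        last = {"value": None}
        def should_send(err):
            # Fire when the tracking error drifts from its last sent value.
            if last["value"] is None or abs(err - last["value"]) > threshold:
                last["value"] = err
                return True
            return False
        return should_send

    trigger = make_trigger(threshold=0.05)            # threshold assumed
    errors = [0.00, 0.02, 0.08, 0.09, 0.20]           # hypothetical tracking errors
    print([trigger(e) for e in errors])               # [True, False, True, False, True]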
Development and validation of techniques for improving software dependability
NASA Technical Reports Server (NTRS)
Knight, John C.
1992-01-01
A collection of document abstracts is presented on the topic of improving software dependability under NASA grant NAG-1-1123. Specific topics include: modeling of error detection; software inspection; test cases; Magnetic Stereotaxis System safety specifications and fault trees; and injection of synthetic faults into software.
Re-Evaluation of Event Correlations in Virtual California Using Statistical Analysis
NASA Astrophysics Data System (ADS)
Glasscoe, M. T.; Heflin, M. B.; Granat, R. A.; Yikilmaz, M. B.; Heien, E.; Rundle, J.; Donnellan, A.
2010-12-01
Fusing the results of simulation tools with statistical analysis methods has contributed to our better understanding of the earthquake process. In a previous study, we used a statistical method to investigate emergent phenomena in data produced by the Virtual California earthquake simulator. The analysis indicated that there were some interesting fault interactions and possible triggering and quiescence relationships between events. We have converted the original code from Matlab to Python/C++ and are now evaluating data from the most recent version of Virtual California in order to analyze and compare any new behavior exhibited by the model. The Virtual California earthquake simulator can be used to study fault and stress interaction scenarios for realistic California earthquakes. The simulation generates a synthetic earthquake catalog of events with a minimum size of ~M 5.8 that can be evaluated using statistical analysis methods. Virtual California utilizes realistic fault geometries and a simple Amontons-Coulomb stick-slip friction law in order to drive the earthquake process by means of a back-slip model, where loading of each segment occurs due to the accumulation of a slip deficit at the prescribed slip rate of the segment. Like any complex system, Virtual California may generate emergent phenomena unexpected even by its designers. In order to investigate this, we have developed a statistical method that analyzes the interaction between Virtual California fault elements and thereby determines whether events on any given fault elements show correlated behavior. Our method examines events on one fault element and then determines whether there is an associated event within a specified time window on a second fault element. Note that an event in our analysis is defined as any time an element slips, rather than any particular “earthquake” along the entire fault length. Results are tabulated and then differenced with the expected correlation, calculated by assuming a uniform distribution of events in time. We generate a correlation score matrix, which indicates how weakly or strongly correlated each fault element is to every other element in the course of the VC simulation. We calculate correlation scores by summing the difference between the actual and expected correlations over all time window lengths and normalizing by the time window size. The correlation score matrix can focus attention on the most interesting areas for more in-depth analysis of event correlation vs. time. The previous study's model included 59 faults (639 elements), covering all faults except the creeping section of the San Andreas, and the analysis spanned 40,000 yr of Virtual California-generated earthquake data. The newly revised VC model includes 70 faults and 8720 fault elements, and spans 110,000 years. Due to computational considerations, we will evaluate the elements comprising the southern California region, which our previous study indicated showed interesting fault interaction and event triggering/quiescence relationships.
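A minimal version of that pairwise test, on a synthetic catalogue and with a single assumed window length (the actual analysis sums over many window lengths and normalises by window size), might look like:

    import numpy as np

    rng = np.random.default_rng(1)
    span = 1.0e4                                 # years of synthetic catalogue
    events = {e: np.sort(rng.uniform(0, span, 40)) for e in range(5)}

    def corr_score(t_a, t_b, window):
        # Observed follow-ups minus the count expected for uniform random times.
        hits = sum(np.any((t_b > t) & (t_b <= t + window)) for t in t_a)
        expected = len(t_a) * (1 - (1 - window / span) ** len(t_b))
        return hits - expected

    score = np.array([[corr_score(events[i], events[j], window=50.0)
                       for j in range(5)] for i in range(5)])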
Premonitory acoustic emissions and stick-slip in natural and smooth-faulted Westerly granite
Thompson, B.D.; Young, R.P.; Lockner, David A.
2009-01-01
A stick-slip event was induced in a cylindrical sample of Westerly granite containing a preexisting natural fault by loading at a constant confining pressure of 150 MPa. Continuously recorded acoustic emission (AE) data and computed tomography (CT)-generated images of the fault plane were combined to provide a detailed examination of microscale processes operating on the fault. The dynamic stick-slip event, considered to be a laboratory analog of an earthquake, generated an ultrasonic signal that was recorded as a large-amplitude AE event. First arrivals of this event were inverted to determine the nucleation site of slip, which is associated with a geometric asperity on the fault surface. CT images and AE locations suggest that a variety of asperities existed in the sample because of the intersection of branch or splay faults with the main fault. This experiment is compared with a stick-slip experiment on a sample prepared with a smooth, artificial saw-cut fault surface. Nearly a thousand times more AE were observed for the natural fault, which has a higher friction coefficient (0.78 compared to 0.53) and a larger shear stress drop (140 compared to 68 MPa). However, at the measured resolution, the ultrasonic signal emitted during slip initiation does not vary significantly between the two experiments, suggesting a similar dynamic rupture process. We propose that the natural faulted sample under triaxial compression provides a good laboratory analogue for a field-scale fault system in terms of the presence of asperities, fault surface heterogeneity, and interaction of branching faults. © 2009.
Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong
2016-01-01
Focusing on the diversity, complexity and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated using the theory of the analytic hierarchy process and fuzzy mathematics. A fault tree of third-party damage containing 56 basic events was built through hazard identification of third-party damage. The fuzzy evaluation of basic event probabilities was conducted by the expert judgment method using membership functions of fuzzy sets. The weight of each expert was determined and the evaluation opinions were modified using the improved analytic hierarchy process, and the failure probability of third-party damage to the urban gas pipeline was calculated. Taking the gas pipelines of a large provincial capital city as an example, the risk assessment results of the method were shown to conform to the actual situation, providing a basis for safety risk prevention.
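As an illustration of how such a fuzzy fault tree evaluates, the sketch below propagates triangular fuzzy probabilities through AND/OR gates and defuzzifies the top event by the triangle centroid. The gate formulas are the standard independence approximations; the event names and numbers are hypothetical, not the paper's 56 basic events or its AHP expert weighting.

```python
import numpy as np

# A triangular fuzzy probability is a (lower, modal, upper) triple.
def gate_and(children):
    """AND gate: componentwise product of the children's fuzzy probabilities."""
    out = np.ones(3)
    for p in children:
        out *= np.asarray(p, dtype=float)
    return out

def gate_or(children):
    """OR gate: 1 - prod(1 - p), componentwise (independence assumed)."""
    out = np.ones(3)
    for p in children:
        out *= 1.0 - np.asarray(p, dtype=float)
    return 1.0 - out

def centroid(p):
    """Defuzzify a triangular fuzzy number by its centroid."""
    return float(np.sum(p)) / 3.0

# Hypothetical basic events (illustrative per-year probabilities)
excavation = (1e-4, 5e-4, 2e-3)   # third-party excavation near the pipeline
no_patrol  = (1e-2, 5e-2, 1e-1)   # patrol fails to spot the activity
no_marking = (5e-3, 2e-2, 5e-2)   # pipeline markings missing or ignored

# Top event: excavation occurs AND at least one barrier fails
top = gate_and([excavation, gate_or([no_patrol, no_marking])])
print(top, centroid(top))
```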
Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-07-12
As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction arises in gear transmission chains. Although fault signatures can be collected via vibration signals, they are often submerged in overwhelming interfering content, so identifying the critical fault characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise fault features from the multiscale signal features. The experimental results for gear fault recognition show the feasibility and effectiveness of the proposed method, especially for weak gear fault features.
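A minimal sketch of the two-stage pipeline is given below, assuming the open-source `dtcwt` package for the transform and PyTorch for the CNN; the subband-envelope features, network shape, and class count are illustrative choices, not the architecture from the paper.

```python
import numpy as np
import dtcwt                      # open-source dual-tree complex wavelet package
import torch
import torch.nn as nn

def dtcwt_channels(signal, nlevels=4, length=256):
    """Multiscale features: magnitude envelope of each DTCWT subband,
    resampled to a common length and stacked as CNN input channels."""
    pyramid = dtcwt.Transform1d().forward(np.asarray(signal, float), nlevels=nlevels)
    chans = []
    for band in pyramid.highpasses:           # one complex array per scale
        env = np.abs(band).ravel()
        x_old = np.linspace(0.0, 1.0, env.size)
        x_new = np.linspace(0.0, 1.0, length)
        chans.append(np.interp(x_new, x_old, env))
    return torch.tensor(np.stack(chans), dtype=torch.float32)

class GearFaultCNN(nn.Module):
    """Tiny CNN mapping multiscale envelopes to fault classes."""
    def __init__(self, in_channels=4, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes))

    def forward(self, x):
        # x: (batch, channels, length); e.g.
        # logits = GearFaultCNN()(dtcwt_channels(sig).unsqueeze(0))
        return self.net(x)
```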
Late Quaternary faulting along the Death Valley-Furnace Creek fault system, California and Nevada
Brogan, George E.; Kellogg, Karl; Slemmons, D. Burton; Terhune, Christina L.
1991-01-01
The Death Valley-Furnace Creek fault system, in California and Nevada, has a variety of impressive late Quaternary neotectonic features that record a long history of recurrent earthquake-induced faulting. Although no neotectonic features of unequivocal historical age are known, paleoseismic features from multiple late Quaternary events of surface faulting are well developed throughout the length of the system. Comparison of scarp heights to the amount of horizontal offset of stream channels, and the relationships of both scarps and channels to the ages of different geomorphic surfaces, demonstrate that Quaternary faulting along the northwest-trending Furnace Creek fault zone is predominantly right lateral, whereas that along the north-trending Death Valley fault zone is predominantly normal. These observations are compatible with tectonic models of Death Valley as a northwest-trending pull-apart basin. The largest late Quaternary scarps along the Furnace Creek fault zone, with vertical separation of late Pleistocene surfaces of as much as 64 m (meters), are in Fish Lake Valley. Despite the predominance of normal faulting along the Death Valley fault zone, vertical offset of late Pleistocene surfaces along this zone apparently does not exceed about 15 m. Evidence for four to six separate late Holocene faulting events along the Furnace Creek fault zone and three or more late Holocene events along the Death Valley fault zone is indicated by rupturing of Q1B (about 200-2,000 years old) geomorphic surfaces. Probably the youngest neotectonic feature observed along the Death Valley-Furnace Creek fault system, possibly historic in age, is a set of vegetation lineaments in southernmost Fish Lake Valley. Near-historic faulting in Death Valley, within several kilometers south of Furnace Creek Ranch, is represented by (1) a 2,000-year-old lake shoreline that is cut by sinuous scarps, and (2) a system of young scarps with free-faceted faces (representing several faulting events) that cuts Q1B surfaces.
NASA Astrophysics Data System (ADS)
Benedetti, L. C.; Tesson, J.; Perouse, E.; Puliti, I.; Fleury, J.; Rizza, M.; Billant, J.; Pace, B.
2017-12-01
The use of the 36Cl cosmogenic nuclide as a paleoseismological tool for normal faults in the Mediterranean has revolutionized our understanding of their seismic cycle (Gran Mitchell et al. 2001, Benedetti et al. 2002). Here we synthesize results obtained on 13 faults in Central Italy. Those records cover a period of 8 to 45 ka. The mean recurrence time of retrieved seismic events is 5.5 ± 6 ka, with a mean slip per event of 2.5 ± 1.8 m and mean slip-rates from 0.1 to 2.4 mm/yr. Most retrieved events correspond to single events according to scaling relationships. This is also supported by the 2 m-high co-seismic slip observed on the Mt Vettore fault after the October 30, 2016 M6.5 earthquake in Central Italy (EMERGEO working group). Our results suggest that all faults have experienced one or several periods of slip acceleration with bursts of seismic activity, associated with very high slip-rates of 1.7-9 mm/yr, corresponding to 2-20 times their long-term slip-rate. The duration of those bursts varies from one fault to another (from < 2 kyr to 4-10 kyr). Those periods of acceleration are generally separated by longer periods of quiescence with no or very few events. These alternating periods correspond to a long-term variation of the strain level, with all faults oscillating between strain maximum and minimum; the length of strain loading and release differs significantly from one fault to another, and those supercycles occur over periods of 8 to 45 ka. We found relationships between the mean slip-rate, the mean slip per event and the mean recurrence time. This might suggest that the seismic activity of those faults could be controlled by their intrinsic properties (e.g. long-term slip-rate, fault length, state of structural maturity). Our results also show event clustering, with several faults rupturing in less than 500 yrs on adjacent or distant faults within the studied area. The Norcia-Amatrice seismic sequence, ≈ 50 km north of our study area, also evidenced this clustering behaviour, with several successive events of Mw 5 to 6.5 over the last 20 yrs (from north to south: Colfiorito 1997 Mw6.0, Norcia 2016 Mw6.5, L'Aquila 2009 Mw6.3) rupturing various fault systems over a total length of ≈ 100 km. This sequence will allow us to better understand earthquake kinematics and spatiotemporal slip distribution during those seismic bursts.
McAuliffe, Lee J.; Dolan, James F.; Rhodes, Edward J.; Hubbard, Judith; Shaw, John H.; Pratt, Thomas L.
2015-01-01
Detailed analysis of continuously cored boreholes and cone penetrometer tests (CPTs), high-resolution seismic-reflection data, and luminescence and 14C dates from Holocene strata folded above the tip of the Ventura blind thrust fault constrain the ages and displacements of the two (or more) most recent earthquakes. These two earthquakes, which are identified by a prominent surface fold scarp and a stratigraphic sequence that thickens across an older buried fold scarp, occurred before the 235-yr-long historic era and after 805 ± 75 yr ago (most recent folding event[s]) and between 4065 and 4665 yr ago (previous folding event[s]). Minimum uplift in these two scarp-forming events was ∼6 m for the most recent earthquake(s) and ∼5.2 m for the previous event(s). Large uplifts such as these typically occur in large-magnitude earthquakes in the range of Mw7.5–8.0. Any such events along the Ventura fault would likely involve rupture of other Transverse Ranges faults to the east and west and/or rupture downward onto the deep, low-angle décollements that underlie these faults. The proximity of this large reverse-fault system to major population centers, including the greater Los Angeles region, and the potential for tsunami generation during ruptures extending offshore along the western parts of the system highlight the importance of understanding the complex behavior of these faults for probabilistic seismic hazard assessment.
NASA Astrophysics Data System (ADS)
Niwa, Y.; Sugai, T.; Matsuzaki, H.
2012-12-01
The Kuwana fault is located in the coastal area on the inner part of Ise Bay, central Japan, which opens to the Nankai Trough. This reverse fault displaces a late Pleistocene terrace surface at an average vertical slip rate of 1 to 2 mm/yr, and displaces a delta topset by several meters. The fault is estimated to have generated two historical earthquakes (the AD 745 Tempyo and the AD 1586 Tensho earthquakes). We identified two event sand layers in the upper Holocene sequence on the upthrown side of the Kuwana fault. Upper Holocene deposits in this study area show a prograding delta sequence: prodelta mud, delta-front sandy silt to sand, and flood-plain sand/mud, from lower to upper. Both sand layers intervene in the delta-front sandy silt layer. The lower sand layer (S1) shows an upward-coarsening succession, whereas the upper sand layer (S2) shows an upward-fining succession. These sand layers contain sharp basal contacts, rip-up clasts, and shell fragments, indicating strong stream flow. Radiocarbon ages show that these strong stream-flow events occurred between 3000 and 1600 years ago. A decrease in salinity is inferred from a decreasing trend in electrical conductivity (EC) across S1. Based on the possibility that the salinity decrease was caused by shallowing of water depth due to coseismic uplift, and that S1 can be correlated with a previously known faulting event on the Kuwana fault, S1 is considered to be a tsunami deposit caused by faulting on the Kuwana fault. On the other hand, S2, which cannot be correlated with previously known faulting events on the Kuwana fault, may be a tsunami deposit from an ocean-trench earthquake or a storm deposit. In the presentation, we will discuss in more detail the correlation of these sand deposits, not only on the upthrown side of the Kuwana fault but also on the downthrown side.
NASA Astrophysics Data System (ADS)
Zhao, P.; Peng, Z.
2008-12-01
We systematically identify repeating earthquakes and investigate spatio-temporal variations of fault zone properties associated with the 2004 Mw6.0 Parkfield earthquake along the Parkfield section of the San Andreas fault, and the 1984 Mw6.2 Morgan Hill earthquake along the central Calaveras fault. The procedure for identifying repeating earthquakes is based on overlap of the source regions and on waveform similarity, and is briefly described as follows. First, we estimate the source radius of each event based on a circular crack model and an assumed stress drop of 3 MPa. Next, we compute the inter-hypocentral distance for events listed in the relocated catalog of Thurber et al. (2006) around Parkfield, and of Schaff et al. (2002) along the Calaveras fault. Then, we group all events into 'initial' clusters by requiring the separation distance between each event pair to be less than the source radius of the larger event, and their magnitude difference to be less than 1. Next, we calculate the correlation coefficients between every event pair within each 'initial' cluster using a 3-s time window around the direct P waves for all available stations. The median value of the correlation coefficients is used as a measure of similarity between each event pair. We drop an event if its median similarity to the remaining events in that cluster is less than 0.9. After identifying repeating clusters in both regions, our next step is to apply a sliding-window waveform cross-correlation technique (Niu et al., 2003; Peng and Ben-Zion, 2006) to calculate the delay time and decorrelation index for each repeating cluster. By measuring temporal changes in waveforms of repeating clusters at different locations and depths, we hope to obtain a better constraint on spatio-temporal variations of fault zone properties and near-surface layers associated with the occurrence of major earthquakes.
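The first grouping stages of this procedure can be sketched in a few lines. The fragment below implements the circular-crack radius and the distance/magnitude pairing rule with a union-find; the moment-magnitude conversion log10(M0) = 1.5M + 9.1 (N·m) is our assumption for turning catalog magnitudes into moments, and the waveform cross-correlation pruning step is only indicated in a comment.

```python
import numpy as np

def source_radius_m(mag, stress_drop_pa=3e6):
    """Circular-crack radius: delta_sigma = 7*M0/(16*r^3), so
    r = (7*M0 / (16*delta_sigma))**(1/3). Moment from magnitude via
    log10(M0) = 1.5*M + 9.1 (N*m), treating the catalog magnitude as
    moment magnitude (an approximation)."""
    m0 = 10.0 ** (1.5 * np.asarray(mag, float) + 9.1)
    return (7.0 * m0 / (16.0 * stress_drop_pa)) ** (1.0 / 3.0)

def initial_clusters(xyz_m, mags):
    """Group events whose hypocentral separation is below the larger
    event's source radius and whose magnitudes differ by < 1.

    xyz_m: (n, 3) array of hypocenter coordinates in meters
    mags:  length-n sequence of magnitudes
    """
    n = len(mags)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    radii = source_radius_m(mags)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(mags[i] - mags[j]) >= 1.0:
                continue
            if np.linalg.norm(xyz_m[i] - xyz_m[j]) < max(radii[i], radii[j]):
                parent[find(i)] = find(j)
    # events sharing a root form one 'initial' cluster; waveform
    # cross-correlation (median CC >= 0.9) would prune them further
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```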
NASA Astrophysics Data System (ADS)
Webb, J.; Gardner, T.
2016-12-01
In northwest Tasmania, well-preserved mid-Holocene beach ridges with maximum radiocarbon ages of 5.25 ka occur along the coast; inland is a parallel set of lower-relief beach ridges of probable MIS 5e age. The latter are cut by northeast-striking faults clearly visible on LIDAR images, with a maximum vertical displacement (evident as a difference in topographic elevation) of 3 m. Also distinct on the LIDAR images are large sand boils along the fault lines; they are up to 5 m in diameter and 2-3 m high and mostly occur on the hanging wall close to the fault traces. Without LIDAR it would have been almost impossible to distinguish either the fault scarps or the sand boils. Excavations through the sand boils show that they are massive, with no internal structure, suggesting that they formed in a single event. They are composed of well-sorted, very fine white sand, identical to the sand in the underlying beach ridges. The sand boils overlie a peaty paleosol; this formed in the tea-tree swamp that formerly covered the area, and has been offset along the faults. Radiocarbon dating of the buried organic-rich paleosol gave ages of 14.8-7.2 ka, suggesting that the faulting is latest Pleistocene to early Holocene in age; it occurred prior to deposition of the mid-Holocene beach ridges, which are not offset. The beach-ridge sediments are up to 7 m thick and contain an iron-cemented hardpan 1-3 m below the surface. The water table is very shallow and close to the ground surface, so the sands of the beach ridges are mostly saturated. During faulting these sands experienced extensive liquefaction. The resulting sand boils rose to a substantial height of 2-3 m, possibly reflecting the elevation of the potentiometric surface within the confined part of the beach-ridge sediments below the iron-cemented hardpan. Motion on the faults was predominantly dip slip (shown by an absence of horizontal offset) and probably reverse, which is consistent with the present-day northwest-southeast compressive stress in this area.
Analysis of a hardware and software fault tolerant processor for critical applications
NASA Technical Reports Server (NTRS)
Dugan, Joanne B.
1993-01-01
Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
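To make the hierarchical idea concrete, the sketch below solves a tiny continuous-time Markov model for one channel (transient faults recovered, permanent faults absorbing) and then combines two channels through a fault-tree OR gate. The rates, mission time, and three-state structure are illustrative assumptions, far simpler than the FTPP-style models analyzed in the paper.

```python
import numpy as np
from scipy.linalg import expm

def unreliability(t, lam_perm=1e-4, lam_trans=1e-3, mu_recover=1.0):
    """P(failed by time t) for one channel.
    States: 0 = operational, 1 = recovering from a transient fault,
    2 = failed (absorbing). Transient faults are masked/recovered at
    rate mu_recover; permanent faults lead to state 2."""
    Q = np.array([
        [-(lam_perm + lam_trans), lam_trans, lam_perm],
        [mu_recover, -(mu_recover + lam_perm), lam_perm],
        [0.0, 0.0, 0.0],                      # absorbing failure state
    ])
    p = expm(Q * t)[0]                        # start in the operational state
    return p[2]

# Fault-tree layer: system fails if the hardware OR the software channel fails
t = 10.0                                      # mission time in hours (illustrative)
p_hw = unreliability(t)
p_sw = unreliability(t, lam_perm=5e-5, lam_trans=5e-3)
p_system = 1.0 - (1.0 - p_hw) * (1.0 - p_sw)  # OR gate, independence assumed
print(p_system)
```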
Trust index based fault tolerant multiple event localization algorithm for WSNs.
Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue
2011-01-01
This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength attenuates inversely with distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, to improve the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The experimental results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms.
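A toy version of the trust bookkeeping in the final phase might look like the following, where each node's binary report is compared against the detection predicted from the estimated event locations; the additive reward/penalty rule and the detection-radius model are our simplifications, not TISNAP's exact update.

```python
import numpy as np

def predicted_alarms(node_xy, event_xy, detect_radius):
    """A node should alarm if any estimated event lies within its radius."""
    d = np.linalg.norm(node_xy[:, None, :] - event_xy[None, :, :], axis=2)
    return (d.min(axis=1) <= detect_radius).astype(int)

def update_trust(trust, reports, predicted, inc=0.05, dec=0.10):
    """Illustrative trust-index update: reward nodes whose binary report
    matches the prediction, penalize mismatches (clipped to [0, 1])."""
    agree = (reports == predicted)
    return np.where(agree,
                    np.minimum(1.0, trust + inc),
                    np.maximum(0.0, trust - dec))
```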
Determining preventability of pediatric readmissions using fault tree analysis.
Jonas, Jennifer A; Devon, Erin Pete; Ronan, Jeanine C; Ng, Sonia C; Owusu-McKenzie, Jacqueline Y; Strausbaugh, Janet T; Fieldston, Evan S; Hart, Jessica K
2016-05-01
Previous studies attempting to distinguish preventable from nonpreventable readmissions reported challenges in completing reviews efficiently and consistently. Our objectives were to (1) examine the efficiency and reliability of a Web-based fault tree tool designed to guide physicians through chart reviews to a determination about preventability, and (2) investigate root causes of general pediatrics readmissions and identify the percentage that are preventable. General pediatricians from The Children's Hospital of Philadelphia used a Web-based fault tree tool to classify root causes of all general pediatrics 15-day readmissions in 2014. The tool guided reviewers through a logical progression of questions, which resulted in 1 of 18 root causes of readmission, 8 of which were considered potentially preventable. Twenty percent of cases were cross-checked to measure inter-rater reliability. Of the 7252 discharges, 248 were readmitted, for an all-cause general pediatrics 15-day readmission rate of 3.4%. Of those readmissions, 15 (6.0%) were deemed potentially preventable, corresponding to 0.2% of total discharges. The most common cause of potentially preventable readmissions was premature discharge. For the 50 cross-checked cases, both reviews resulted in the same root cause for 44 (86%) of files (κ = 0.79; 95% confidence interval: 0.60-0.98). Completing 1 review using the tool took approximately 20 minutes. The Web-based fault tree tool helped physicians to identify root causes of hospital readmissions and classify them as either preventable or not preventable in an efficient and consistent way. It also confirmed that only a small percentage of general pediatrics 15-day readmissions are potentially preventable. Journal of Hospital Medicine 2016;11:329-335. © 2016 Society of Hospital Medicine.
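The reported inter-rater statistic is ordinary Cohen's kappa over the paired root-cause labels. A minimal check of such a cross-review, with hypothetical labels standing in for the tool's 18 root causes, could read:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical cross-check: root causes assigned by two independent
# reviewers to the same six charts (label strings are illustrative).
reviewer_a = ["premature_discharge", "new_condition", "medication_error",
              "premature_discharge", "scheduled_readmission", "new_condition"]
reviewer_b = ["premature_discharge", "new_condition", "medication_error",
              "new_condition", "scheduled_readmission", "new_condition"]

# Observed agreement corrected for chance: kappa = (p_o - p_e) / (1 - p_e)
print(cohen_kappa_score(reviewer_a, reviewer_b))
```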
The Rurrand Fault, Germany: A Holocene surface rupture and new slip rate estimates
NASA Astrophysics Data System (ADS)
Grützner, Christoph; Fischer, Peter; Reicherter, Klaus
2016-04-01
Very low deformation rates in continental interiors are a challenge for research on active tectonics and seismic hazard. Faults tend to have very long earthquake recurrence intervals, and morphological evidence of surface faulting is often obliterated by erosion and sedimentation. The Lower Rhine Graben in Central Europe is characterized by slow active faults with individual slip rates of well less than 0.1 mm/a. As a consequence, most geodetic techniques fail to record tectonic motions and the morphological expression of the faults is subtle. Although damaging events are known from this region, e.g. the 1755/56 Düren earthquake series, there is no account of surface-rupturing events in instrumental and historical records. Owing to the short temporal coverage with respect to the fault recurrence intervals, these records probably fail to depict the maximum possible magnitudes. In this study we used morphological evidence from a 1 m airborne LiDAR survey, near-surface geophysics, and paleoseismological trenching to identify surface-rupturing earthquakes at the Rurrand Fault between Cologne and Aachen in western Germany. The LiDAR data allowed us to identify a young fault strand parallel to the already known main fault, with the subtle morphological expression of recent surface faulting. In the paleoseismological trenches we found evidence for two surface-rupturing earthquakes. The most recent event occurred in the Holocene, and a previous earthquake probably happened in the last 150 ka. Geophysical data allowed us to estimate a minimum slip rate of 0.03 mm/a from an offset gravel horizon. We estimate paleomagnitudes of MW5.9-6.8 based on the observed offsets in the trench (<0.5 m per event) and fault scaling relationships. Our data imply that the Rurrand Fault did not creep during the last 150 ka, but rather failed in large earthquakes. These events were much stronger than those known from historical sources. We are able to show that the Rurrand Fault did not rupture the surface during the Düren 1755/56 seismic crisis and conclude that these events likely occurred on another nearby fault system or did not rupture the surface at all. The very long recurrence interval of 25-65 ka for surface-rupturing events illustrates the problems of assessing earthquake hazard in such slowly deforming regions. We emphasize that geological data must be included in seismic-hazard and surface-rupture hazard assessments in order to obtain a complete picture of a region's seismic potential.
Triggering of destructive earthquakes in El Salvador
NASA Astrophysics Data System (ADS)
Martínez-Díaz, José J.; Álvarez-Gómez, José A.; Benito, Belén; Hernández, Douglas
2004-01-01
We investigate the existence of a mechanism of static stress triggering driven by the interaction of normal faults in the Middle American subduction zone and strike-slip faults in the El Salvador volcanic arc. The local geology points to a large strike-slip fault zone, the El Salvador fault zone, as the source of several destructive earthquakes in El Salvador along the volcanic arc. We modeled the Coulomb failure stress (CFS) change produced by the June 1982 and January 2001 subduction events on planes parallel to the El Salvador fault zone. The results have broad implications for future risk management in the region, as they suggest a causative relationship between the position of the normal-slip events in the subduction zone and the strike-slip events in the volcanic arc. After the February 2001 event, an important area of the El Salvador fault zone was loaded with a positive change in Coulomb failure stress (>0.15 MPa). This scenario must be considered in the seismic hazard assessment studies that will be carried out in this area.
NASA Astrophysics Data System (ADS)
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
Fault diagnosis of planetary gearboxes is crucial to reducing maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on an adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into a binary tree support vector machine (BT-SVM) to accomplish fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
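The entropy stage of this pipeline builds on plain permutation entropy. The sketch below implements the standard Bandt-Pompe estimator, normalized to [0, 1]; the hierarchical decomposition and the paper's modifications (the 'MH' in MHPE) are not reproduced here.

```python
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Plain permutation entropy (Bandt & Pompe), normalized to [0, 1].
    Counts ordinal patterns of embedded windows and returns their
    Shannon entropy divided by log(order!)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    patterns = Counter()
    for i in range(n):
        window = x[i:i + order * delay:delay]
        patterns[tuple(np.argsort(window))] += 1   # ordinal pattern
    probs = np.array(list(patterns.values()), dtype=float) / n
    h = -np.sum(probs * np.log(probs))
    return h / math.log(math.factorial(order))
```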
Earthquake mechanism and predictability shown by a laboratory fault
King, C.-Y.
1994-01-01
Slip events generated in a laboratory fault model consisting of a circulinear chain of eight spring-connected blocks of approximately equal weight, elastically driven to slide on a frictional surface, are studied. It is found that most of the input strain energy is released by relatively few large events, which are approximately time predictable. A large event tends to roughen the stress distribution along the fault, whereas the subsequent smaller events tend to smooth the stress distribution and prepare a condition of simultaneous criticality for the occurrence of the next large event. The frequency-size distribution resembles the Gutenberg-Richter relation for earthquakes, except for a falloff for the largest events due to the finite energy-storage capacity of the fault system. Slip distributions in different events are commonly dissimilar. Stress drop, slip velocity, and rupture velocity all tend to increase with event size. Rupture-initiation locations are usually not close to the maximum-slip locations. © 1994 Birkhäuser Verlag.
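The qualitative behavior of such a block chain can be reproduced with a stick-slip redistribution automaton. The sketch below is an OFC-style cartoon on a ring of eight blocks, not a model of the laboratory apparatus: blocks are driven uniformly to failure and shed a fraction of their stress to their neighbors, and the resulting event-size statistics show a few large events releasing most of the strain.

```python
import numpy as np

N, alpha, threshold = 8, 0.2, 1.0    # blocks on a ring; neighbor transfer fraction
rng = np.random.default_rng(0)

def drive_and_relax(stress, n_events=10000):
    """OFC-style stick-slip automaton on a ring: drive all blocks uniformly
    until one reaches threshold, then cascade slips until all are stable."""
    sizes = []
    for _ in range(n_events):
        stress += threshold - stress.max()       # uniform drive to next failure
        failing = np.flatnonzero(stress >= threshold)
        size = 0
        while failing.size:
            size += failing.size
            for i in failing:                    # slip: shed stress to neighbors
                s = stress[i]
                stress[i] = 0.0
                stress[(i - 1) % N] += alpha * s
                stress[(i + 1) % N] += alpha * s
            failing = np.flatnonzero(stress >= threshold)
        sizes.append(size)
    return np.array(sizes)

sizes = drive_and_relax(rng.uniform(0, threshold, N))
# frequency-size statistics: a few large events release most of the strain
print(np.bincount(sizes))
```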
Characterizing the structural maturity of fault zones using high-resolution earthquake locations.
NASA Astrophysics Data System (ADS)
Perrin, C.; Waldhauser, F.; Scholz, C. H.
2017-12-01
We use high-resolution earthquake locations to characterize the three-dimensional structure of active faults in California and how it evolves with fault structural maturity. We investigate the distribution of aftershocks of several recent large earthquakes that occurred on immature faults (i.e., slow moving and small cumulative displacement), such as the 1992 (Mw7.3) Landers and 1999 (Mw7.1) Hector Mine events, and earthquakes that occurred on mature faults, such as the 1984 (Mw6.2) Morgan Hill and 2004 (Mw6.0) Parkfield events. Unlike previous studies, which typically estimated the width of fault zones from the distribution of earthquakes perpendicular to the surface fault trace, we resolve fault zone widths with respect to the 3D fault surface estimated from principal component analysis of local seismicity. We find that the zone of brittle deformation around the fault core is narrower along mature faults than along immature faults. We observe a rapid fall-off of the number of events at a distance range of 70-100 m from the main fault surface of mature faults (140-200 m fault zone width), and 200-300 m from the fault surface of immature faults (400-600 m fault zone width). These observations are in good agreement with fault zone widths estimated from guided waves trapped in low-velocity damage zones. The total width of the active zone of deformation surrounding the main fault plane reaches 1.2 km for mature faults and 2-4 km for immature faults. The wider zone of deformation presumably reflects the increased heterogeneity in the stress field along the complex and discontinuous fault strands that make up immature faults. In contrast, narrower deformation zones tend to align with the well-defined fault planes of mature faults, where most of the deformation is concentrated. Our results are in line with previous studies suggesting that surface fault traces become smoother, and thus fault zones simpler, as cumulative fault slip increases.
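The plane-fitting step is a direct application of principal component analysis. Below is a minimal sketch, assuming hypocenters are given in local Cartesian coordinates: the least-significant singular direction serves as the fault normal, and a percentile of the normal distances stands in for the fall-off analysis used to define fault zone width.

```python
import numpy as np

def fault_zone_width(hypocenters, percentile=95):
    """Fit a plane to hypocenters by PCA and measure the spread of
    seismicity away from it.

    hypocenters: (n, 3) array in local Cartesian coordinates (m)
    Returns (width, distances): width is twice the given percentile of
    the event-to-plane distances, a simple stand-in for the fall-off
    analysis; distances can be histogrammed to inspect the fall-off.
    """
    X = np.asarray(hypocenters, dtype=float)
    X0 = X - X.mean(axis=0)
    # SVD: rows of Vt are principal directions; the last one (smallest
    # singular value) is the normal of the best-fit plane
    _, _, Vt = np.linalg.svd(X0, full_matrices=False)
    normal = Vt[-1]
    dist = np.abs(X0 @ normal)        # distance of each event from the plane
    return 2.0 * np.percentile(dist, percentile), dist
```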
NASA Astrophysics Data System (ADS)
Valoroso, L.; Chiaraluce, L.; Di Stefano, R.; Piccinini, D.; Schaff, D. P.; Waldhauser, F.
2011-12-01
On April 6th 2009, a MW 6.1 normal-faulting earthquake struck the axial area of the Abruzzo region in Central Italy. We present high-precision hypocenter locations for an extraordinary dataset composed of 64,000 earthquakes recorded by a very dense seismic network of 60 stations operating for 9 months after the main event. Events span magnitudes (ML) from -0.9 to 5.9, with a completeness magnitude of 0.7. The dataset has been processed by integrating an accurate automatic picking procedure with cross-correlation and double-difference relative location methods. The combined use of these procedures results in earthquake relative location uncertainties in the range of a few meters to tens of meters (comparable to, or smaller than, the spatial dimension of the earthquakes themselves). This dataset allows us to image the complex inner geometry of individual faults from the kilometre scale to the meter scale. The aftershock distribution illuminates the anatomy of the en-echelon fault system, which is composed of two major faults. The mainshock broke the entire upper crust from 10 km depth to the surface along a 14-km-long normal fault. A second segment, located north of the normal fault and activated by two Mw>5 events, shows a striking listric geometry and is completely blind. We focus on the analysis of about 300 clusters of co-located events to characterize the mechanical behavior of different portions of the fault system. The number of events in each cluster ranges from 4 to 24, and the events exhibit strongly correlated seismograms at common stations. They mostly occur where secondary structures join the main fault planes and along unfavorably oriented segments. Moreover, larger clusters nucleate on secondary faults located in the overlapping area between the two main segments, where the rate of earthquake production is very high with a long-lasting seismic decay.
Monitoring of Microseismicity with ArrayTechniques in the Peach Tree Valley Region
NASA Astrophysics Data System (ADS)
Garcia-Reyes, J. L.; Clayton, R. W.
2016-12-01
This study focuses on the analysis of microseismicity along the San Andreas Fault in the PeachTree Valley region. This zone lies in the transition between the locked portion to the south (Parkfield, CA) and the creeping section to the north (Jovilet et al., JGR, 2014). The data for the study come from a 2-week deployment of 116 ZLand nodes in a cross-shaped configuration along (8.2 km) and across (9 km) the fault. We analyze the distribution of microseismicity using a 3D backprojection technique, and we explore the use of Hidden Markov Models to identify different patterns of microseismicity (Hammer et al., GJI, 2013). The goal of the study is to relate the style of seismicity to the mechanical state of the fault. The results show the evolution of seismic activity as well as at least two different patterns of seismic signals.
Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.
2003-01-01
Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding of the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.
NASA Astrophysics Data System (ADS)
Lapusta, N.; Liu, Y.
2007-12-01
Heterogeneity in fault properties can have a significant effect on dynamic rupture propagation and aseismic slip. It is often assumed that a fixed heterogeneity would have a similar effect on fault slip throughout the slip history. We investigate dynamic rupture interaction with a fault patch of higher normal stress over several earthquake cycles in a three-dimensional model. We find that the influence of the heterogeneity on dynamic events varies significantly and depends on prior slip history. We consider a planar strike-slip fault governed by rate-and-state friction and driven by slow tectonic loading on a deeper extension of the fault. The 30 km by 12 km velocity-weakening region, which is potentially seismogenic, is surrounded by a steady-state velocity-strengthening region. The normal stress is constant over the fault, except in a circular patch 2 km in diameter located in the seismogenic region, where normal stress is higher than on the rest of the fault. Our simulations employ the methodology developed by Lapusta and Liu (AGU, 2006), which is able to resolve both dynamic and quasi-static stages of spontaneous slip accumulation in a single computational procedure. The initial shear stress is constant on the fault, except in a small area where it is higher and where the first large dynamic event initiates. For patches with 20%, 40%, and 60% higher normal stress, the first event has significant dynamic interaction with the patch, creating a rupture-speed decrease followed by a supershear burst and larger slip around the patch. Hence, in the first event, the patch acts as a seismic asperity. For the case of 100% higher normal stress, the rupture is not able to break the patch in the first event. In subsequent dynamic events, the behavior depends on the strength of the heterogeneity. For the patch with 20% higher normal stress, dynamic rupture in subsequent events propagates through the patch without any noticeable perturbation in rupture speed or slip. In particular, supershear propagation and additional slip accumulation around the patch are never repeated in the simulated history of the fault, and the patch stops manifesting itself as a seismic asperity. This is due to the higher shear stress that is established at the patch after the first earthquake cycle. For patches with higher normal stress, shear stress redistribution also occurs, but it is less effective. The patches with 40% and 60% higher normal stress continue to affect rupture speed and fault slip in some of the subsequent events, although the effect is much diminished with respect to the first event; for example, there are no supershear bursts. The patch with 100% higher normal stress is first broken in the second large event, and it retains significant influence on rupture speed and slip throughout the fault history, occasionally resulting in supershear bursts. Additional slip complexity emerges for patches with 40% and higher normal stress contrast. Since higher normal stress corresponds to a smaller nucleation size, nucleation of some events moves from the rheological transitions (where nucleation occurs in the cases with no stronger patch and with the patch of 20% higher normal stress) to the patches of higher normal stress. The patches nucleate both large, model-spanning events and small events that arrest soon after exiting the patch. Hence, not every event that originates at the location of a potential seismic asperity is destined to be large, as its subsequent propagation is significantly influenced by the state of stress outside the patch.
Improving Ms Estimates by Calibrating Variable-Period Magnitude Scales at Regional Distances
2008-09-01
Faulting mechanisms are classified as normal, thrust, or oblique-slip variations of normal and thrust faults using the Zoback (1992) classification scheme. Differences between the observed and Ms-predicted Mw show a definable faulting-mechanism effect, especially when strike-slip events are compared to those with other mechanisms.
Viewpoint on ISA TR84.0.02--simplified methods and fault tree analysis.
Summers, A E
2000-01-01
ANSI/ISA-S84.01-1996 and IEC 61508 require the establishment of a safety integrity level for any safety instrumented system or safety related system used to mitigate risk. Each stage of design, operation, maintenance, and testing is judged against this safety integrity level. Quantitative techniques can be used to verify whether the safety integrity level is met. ISA-dTR84.0.02 is a technical report under development by ISA, which discusses how to apply quantitative analysis techniques to safety instrumented systems. This paper discusses two of those techniques: (1) Simplified equations and (2) Fault tree analysis.
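For context, the simplified-equation route reduces to closed forms such as PFDavg = λDU·TI/2 for a single (1oo1) channel. The sketch below encodes that equation, the common 1oo2 counterpart with common-cause failures ignored, and a lookup of the corresponding demand-mode SIL band; the numeric example rates are illustrative.

```python
def pfd_avg_1oo1(lambda_du, test_interval):
    """Simplified equation for a single (1oo1) channel:
    PFDavg = lambda_DU * TI / 2."""
    return lambda_du * test_interval / 2.0

def pfd_avg_1oo2(lambda_du, test_interval):
    """Simplified equation for a redundant 1oo2 pair, common-cause
    failures ignored: PFDavg = (lambda_DU * TI)**2 / 3."""
    return (lambda_du * test_interval) ** 2 / 3.0

def sil_band(pfd):
    """Map PFDavg to a SIL claim (IEC 61508, demand mode)."""
    for sil, lo in ((4, 1e-5), (3, 1e-4), (2, 1e-3), (1, 1e-2)):
        if lo <= pfd < lo * 10:
            return sil
    return None

# e.g. lambda_DU = 1e-6 /h dangerous undetected failures, annual proof test
pfd = pfd_avg_1oo1(1e-6, 8760)
print(pfd, sil_band(pfd))      # ~4.4e-3 -> SIL 2 band
```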
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nance, M.A.
1993-04-01
Detailed mapping and stratigraphic-structural analysis in the Mountain Pass area has resulted in a reinterpretation of Mesozoic and Cenozoic tectonic events in the area. Mesozoic events are characterized by north-vergent folds and thrust faults followed by east-vergent thrusting. Folding created two synclines and an anticline, which were then cut at different stratigraphic levels by subsequent thrust faults. Thrusting created composite tectono-stratigraphic sections containing autochthonous, para-autochthonous, and allochthonous sections. Normal faults cutting these composite sections, including the North, Kokoweef, White Line, and Piute faults, must be post-thrusting, not pre-thrusting as in previous interpretations. Detailed study of these faults results in differentiation of at least three orders of faults and suggests they represent Cenozoic extension correlated with regional extensional events between 11 and 19 my. Mesozoic stratigraphy reflects regional orogenic uplift, magmatic activity, and thrusting. Inclusion of Kaibab clasts in the Chinle, Kaibab and Chinle clasts in the Aztec, and Chinle, Aztec, and previously deposited Delfonte Volcanics clasts in the younger members of the Delfonte Volcanics suggests regional uplift prior to the thrusting of the Cambrian Bonanza King over the Delfonte Volcanics along the Mescal Thrust fault. The absence of clasts younger than Kaibab argues against pre-thrusting activity on the Kokoweef fault.
TH-EF-BRC-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomadsen, B.
2016-06-15
This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis (Process Mapping, Failure Modes and Effects Analysis, and Fault Tree Analysis) will each be introduced with a 5-minute refresher presentation, and each presentation will be followed by a 30-minute small-group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis, and Fault Tree Analysis. To gain familiarity with these three techniques in a small-group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
Estimating earthquake-induced failure probability and downtime of critical facilities.
Porter, Keith; Ramer, Kyle
2012-01-01
Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that impact human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in an LNG regasification unit. In this case, the failure is caused by a Boiling Liquid Expanding Vapor Explosion (BLEVE) or a jet fire in the LNG storage tank component. The failure probability can be determined by using Fault Tree Analysis (FTA). In addition, the impact of the generated heat radiation is calculated. Fault trees for BLEVE and jet fire on the storage tank component have been determined, yielding a failure probability of 5.63 × 10^-19 for BLEVE and 9.57 × 10^-3 for jet fire. The failure probability for jet fire is high enough that it needs to be reduced, which was done by customizing the P&ID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. The failure probability after customization is 4.22 × 10^-6.
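The FTA arithmetic behind such numbers is the rare-event cut-set approximation: the top-event probability is bounded by the sum over minimal cut sets of the product of their basic-event probabilities. A minimal sketch follows, with hypothetical event names and probabilities rather than the paper's data.

```python
# Rare-event (upper-bound) approximation for the top event from minimal
# cut sets: P(top) <= sum over cut sets of prod of basic-event probabilities.
# Event names and probabilities below are illustrative, not the paper's data.
basic = {
    "relief_valve_fails": 1e-4,
    "level_sensor_fails": 5e-4,
    "external_fire": 1e-5,
    "gas_detector_fails": 2e-3,
}
cut_sets = [
    {"external_fire", "relief_valve_fails"},
    {"level_sensor_fails", "gas_detector_fails", "relief_valve_fails"},
]

def top_event_probability(cut_sets, basic):
    prob = 0.0
    for cs in cut_sets:
        term = 1.0
        for event in cs:
            term *= basic[event]
        prob += term
    return prob

print(top_event_probability(cut_sets, basic))
```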
Event-Triggered Fault Detection of Nonlinear Networked Systems.
Li, Hongyi; Chen, Ziran; Wu, Ligang; Lam, Hak-Keung; Du, Haiping
2017-04-01
This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine when the signal is transmitted. The fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for the filtering analysis. Furthermore, sufficient conditions are represented in terms of sums of squares (SOS) and can be solved by SOS tools in the MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.
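A common baseline for such schemes is a relative-error trigger: a sample is transmitted only when the deviation from the last transmitted state exceeds a fraction of the current state's norm. The sketch below implements that baseline; the paper's polynomial event-triggered condition generalizes it, and sigma here is an illustrative threshold.

```python
import numpy as np

def event_triggered_stream(states, sigma=0.1):
    """Send a sample only when the deviation from the last transmitted
    state is large relative to the current state:
        ||x_k - x_sent||^2 > sigma * ||x_k||^2
    Returns the list of (step, state) pairs that would be transmitted."""
    sent = []
    x_last = None
    for k, x in enumerate(states):
        x = np.asarray(x, dtype=float)
        if x_last is None or np.sum((x - x_last) ** 2) > sigma * np.sum(x ** 2):
            sent.append((k, x))          # transmit over the network
            x_last = x
    return sent
```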
NASA Astrophysics Data System (ADS)
Yule, J.; McBurnett, P.; Ramzan, S.
2011-12-01
The largest discontinuity in the surface trace of the San Andreas fault occurs in southern California at San Gorgonio Pass. Here, San Andreas motion moves through a 20 km-wide compressive stepover on the dextral-oblique-slip thrust system known as the San Gorgonio Pass fault zone. This thrust-dominated system is thought to rupture during very large San Andreas events that also involve strike-slip fault segments north and south of the Pass region. A wealth of paleoseismic data document that the San Andreas fault segments on either side of the Pass, in the San Bernardino/Mojave Desert and Coachella Valley regions, rupture on average every ~100 yrs and ~200 yrs, respectively. In contrast, we report here a notably longer return period for ruptures of the San Gorgonio Pass fault zone. For example, features exposed in trenches at the Cabezon site reveal that the most recent earthquake occurred 600-700 yrs ago (this and other ages reported here are constrained by C-14 calibrated ages from charcoal). The rupture at Cabezon broke a 10 m-wide zone of east-west striking thrusts and produced a >2 m-high scarp. Slip during this event is estimated to be >4.5 m. Evidence for a penultimate event was not uncovered but presumably lies beneath ~1000 yr-old strata at the base of the trenches. In Millard Canyon, 5 km to the west of Cabezon, the San Gorgonio Pass fault zone splits into two splays. The northern splay is expressed by 2.5 ± 0.7 m and 5.0 ± 0.7 m scarps in alluvial terraces constrained to be ~1300 and ~2500 yrs old, respectively. The scarp on the younger, low terrace postdates terrace abandonment ~1300 yrs ago and probably correlates with the 600-700 yr-old event at Cabezon, though we cannot rule out that a different event produced the northern Millard scarp. Trenches excavated in the low terrace reveal growth folding and secondary faulting and clear evidence for a penultimate event ~1350-1450 yrs ago, during alluvial deposition prior to the abandonment of the low terrace. Subtle evidence for a third event is poorly constrained by age data to have occurred between 1600 and 2500 yrs ago. The southern splay at Millard Canyon forms a 1.5 ± 0.1 m scarp in an alluvial terrace that is inset into the lowest terrace at the northern Millard site, and therefore must be < ~1300 yrs old. Slip on this fault probably occurred during the most recent rupture in the Pass. In summary, we think that the most recent earthquake occurred 600-700 yrs ago and generated ~6 m of slip on the San Gorgonio Pass fault zone. The evidence for two older earthquakes is less complete but suggests that they are similar in style and magnitude to the most recent event. The available data therefore suggest that the San Gorgonio Pass fault zone has produced three large (~6 m) events in the last ~2000 yrs, a return period of ~700 yrs assuming that the next rupture is imminent. We prefer a model whereby a majority of San Andreas fault ruptures end as they approach the Pass region from the north or the south (like the Wrightwood event of A.D. 1812 and possibly the Coachella Valley event of ~A.D. 1680). Relatively rare (once-per-millennia?), through-going San Andreas events break the San Gorgonio Pass fault zone and produce the region's largest earthquakes.
NASA Astrophysics Data System (ADS)
Li, Shuanghong; Cao, Hongliang; Yang, Yupu
2018-02-01
Fault diagnosis is a key process for the reliability and safety of solid oxide fuel cell (SOFC) systems. However, it is difficult to rapidly and accurately identify faults in complicated SOFC systems, especially when simultaneous faults appear. In this research, a data-driven Multi-Label (ML) pattern identification approach is proposed to address the simultaneous fault diagnosis of SOFC systems. The framework of the simultaneous-fault diagnosis primarily includes two components: feature extraction and an ML-SVM classifier. The approach can be trained to diagnose simultaneous SOFC faults, such as fuel leakage and air leakage at different positions in the SOFC system, using simple training data sets consisting only of single faults, without requiring simultaneous-fault data. The experimental results show that the proposed framework can diagnose simultaneous SOFC system faults with high accuracy while requiring little training data and low computational burden. In addition, Fault Inference Tree Analysis (FITA) is employed to identify the correlations among possible faults and their corresponding symptoms at the system component level.
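One way to realize the 'train on single faults, predict simultaneous faults' idea is a one-classifier-per-label decomposition with thresholded decision values. The sketch below is a simplified stand-in for the paper's ML-SVM, using scikit-learn's LinearSVC; the feature extraction stage is assumed to have happened upstream.

```python
import numpy as np
from sklearn.svm import LinearSVC

class SimpleMLSVM:
    """One binary SVM per fault label, trained only on single-fault data;
    at test time every label with a positive decision value is reported,
    so co-occurring faults can be flagged together."""

    def fit(self, X, y):                 # y: one fault label per sample
        self.labels_ = sorted(set(y))
        self.models_ = {}
        for lab in self.labels_:
            target = np.array([1 if yi == lab else 0 for yi in y])
            self.models_[lab] = LinearSVC().fit(X, target)
        return self

    def predict(self, X):
        out = []
        for x in np.atleast_2d(X):
            scores = {l: m.decision_function([x])[0]
                      for l, m in self.models_.items()}
            pred = {l for l, s in scores.items() if s > 0}
            # fall back to the best single label if no score is positive
            out.append(pred or {max(scores, key=scores.get)})
        return out
```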
Support vector machines-based fault diagnosis for turbo-pump rotor
NASA Astrophysics Data System (ADS)
Yuan, Sheng-Fa; Chu, Fu-Lei
2006-05-01
Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and generalise poorly when fault samples are few. Support vector machines (SVM) is a general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is originally designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification scheme named the 'one to others' algorithm is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, involves little repeated training, and expedites both training and recognition. The effectiveness of the method is verified by its application to fault diagnosis for a turbo-pump rotor.
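A minimal rendering of the 'one to others' tree might look like the following: classifiers are stacked in an assumed fault-priority order, each separating one class from all remaining classes, and a test sample descends until some node claims it. The kernel choice and priority order are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

class OneToOthersTree:
    """'One to others' SVM tree: node i separates fault i from all
    remaining faults, with nodes ordered by fault priority; a sample
    descends until some node claims it (sketch of the scheme described
    in the abstract, not the original implementation)."""

    def __init__(self, priority):
        self.priority = list(priority)       # fault labels, highest first

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.nodes_ = []
        rest = np.ones(len(y), dtype=bool)
        for lab in self.priority[:-1]:
            Xi, yi = X[rest], (y[rest] == lab).astype(int)
            self.nodes_.append((lab, SVC(kernel="rbf").fit(Xi, yi)))
            rest &= (y != lab)               # remaining classes go deeper
        return self

    def predict_one(self, x):
        for lab, clf in self.nodes_:
            if clf.predict([x])[0] == 1:
                return lab
        return self.priority[-1]             # the last remaining class
```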
Earthquake scaling laws for rupture geometry and slip heterogeneity
NASA Astrophysics Data System (ADS)
Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro
2016-04-01
We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity, and to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault length does not saturate with earthquake magnitude, while fault width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, restricted growth of the down-dip fault extent (with an upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture areas compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that a truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying the Box-Cox transformation to slip distributions (to create quasi-normally distributed data) supports a cube-root transformation, which also implies distinctly non-Gaussian slip distributions. To further characterize the spatial correlations of slip heterogeneity, we analyze the power spectral decay of slip by applying the 2-D von Karman auto-correlation function (parameterized by the Hurst exponent, H, and correlation lengths along strike and down-dip). The Hurst exponent is scale invariant, H = 0.83 (± 0.12), while the correlation lengths scale with source dimensions (seismic moment), thus implying characteristic physical scales of earthquake ruptures. Our self-consistent scaling relationships allow constraining the generation of slip-heterogeneity scenarios for physics-based ground-motion and tsunami simulations.
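The distributional claims can be checked with standard SciPy tools: fit a truncated exponential to on-fault slip values and estimate the Box-Cox exponent, whose proximity to 1/3 would support the cube-root transformation. The sketch below runs on synthetic slip values; real input would be one SRCMOD model's slip resampled to patches.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical on-fault slip values (m); real inputs would come from a
# finite-fault model, e.g. one SRCMOD rupture resampled to patches.
slip = rng.exponential(scale=1.5, size=2000)
slip = slip[slip <= 6.0]                 # physical cap near the maximum slip

# Truncated exponential: scale set by the mean slip, truncation by the max
b, loc, scale = stats.truncexpon.fit(slip, floc=0.0)
print(f"truncation = {b * scale:.2f} m, scale = {scale:.2f} m")

# Box-Cox transform toward normality; a lambda near 1/3 would be
# consistent with the cube-root transformation reported in the abstract
transformed, lam = stats.boxcox(slip)
print(f"Box-Cox lambda = {lam:.2f}")
```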
EDNA: Expert fault digraph analysis using CLIPS
NASA Technical Reports Server (NTRS)
Dixit, Vishweshwar V.
1990-01-01
Traditionally, fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate the real system dependencies and hence are easy to develop, validate, and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find; available algorithms tend to be complicated and slow. On the other hand, tree analysis (VGRH, Tayl) is well understood and rooted in a vast research effort and analytical techniques, and tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a (cyclic) digraph into trees (CLP, LP) is a viable approach to blend the advantages of the two representations. Neither digraphs nor trees provide the ability to handle heuristic knowledge, so an expert system that captures the engineering knowledge is essential. We propose an approach here, namely, expert network analysis: we combine the digraph representation with tree algorithms, and the models are augmented by probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge. Mixed analysis, with some nodes carrying probabilities, is possible. The tool provides a graphics interface for input, query, and update. With this combined approach it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.
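The cycle problem that motivates the digraph-to-tree transformation can be illustrated with a strongly-connected-component condensation: collapsing each directed cycle into a super-node yields an acyclic graph that tree algorithms can traverse. The sketch below uses networkx on a hypothetical fault digraph; it shows only the cycle-breaking step, not EDNA's full transformation or its CLIPS rules.

```python
import networkx as nx

# A cyclic fault digraph: edges point from cause to effect. The cycle
# models feedback between pump overheating and lubricant breakdown.
G = nx.DiGraph([
    ("pump_overheat", "lubricant_breakdown"),
    ("lubricant_breakdown", "pump_overheat"),   # directed cycle
    ("lubricant_breakdown", "bearing_wear"),
    ("power_spike", "pump_overheat"),
    ("bearing_wear", "pump_failure"),
])

# Condense strongly connected components: each cycle collapses to one
# super-node, producing an acyclic graph amenable to tree-style analysis.
C = nx.condensation(G)
for node, members in C.nodes(data="members"):
    print(node, sorted(members))
print("acyclic:", nx.is_directed_acyclic_graph(C))
```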
Fault Identification by Unsupervised Learning Algorithm
NASA Astrophysics Data System (ADS)
Nandan, S.; Mannu, U.
2012-12-01
Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover such as cities, deserts, and vegetation, or to capture changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture, and many disastrous events have been attributed to such blind faults. Faults and earthquakes are closely related: earthquakes occur on faults, and faults grow by accumulation of coseismic rupture. For better seismic risk evaluation, it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events, and we examine the difference between faults reconstructed by deterministic assignment in K-means and by probabilistic assignment in the EM algorithm. The method is conceptually identical to the methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering, and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions: Kumaon Himalaya is a convergent plate boundary, while Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientations of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained from focal mechanism solutions and with previously mapped faults.
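A minimal sketch of the idea of fitting planar structures to hypocenter clouds, using scikit-learn's K-means plus a per-cluster PCA to recover a plane normal and thickness; the synthetic "hypocenters" and the planarity measure are our illustration, not the authors' exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Synthetic hypocenters: two noisy planar clouds (stand-ins for a real catalog).
plane1 = np.c_[rng.uniform(0, 10, 300), rng.uniform(0, 5, 300),
               rng.normal(0, 0.1, 300)]
plane2 = plane1 @ np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]]) + np.array([5, 0, 3])
xyz = np.vstack([plane1, plane2])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(xyz)
for k in range(2):
    pts = xyz[labels == k]
    pca = PCA(n_components=3).fit(pts)
    normal = pca.components_[-1]               # plane normal = weakest direction
    thickness = np.sqrt(pca.explained_variance_[-1])
    print(f"cluster {k}: normal={np.round(normal, 2)}, thickness={thickness:.2f} km")
```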
NASA Astrophysics Data System (ADS)
Ratchkovski, N. A.; Hansen, R. A.; Christensen, D.; Kore, K.
2002-12-01
The largest earthquake ever recorded on the Denali fault system (magnitude 7.9) struck central Alaska on November 3, 2002. It was preceded by a magnitude 6.7 foreshock on October 23; this earlier earthquake and its zone of aftershocks were located slightly to the west of the 7.9 quake. Aftershock locations and surface slip observations from the 7.9 quake indicate that the rupture was predominantly unilateral in the eastward direction. Near Mentasta Lake, a village that experienced some of the worst damage in the quake, the surface rupture scar turns from the Denali fault to the adjacent Totschunda fault, which trends more southeasterly toward the Canadian border. Overall, the geologists found that measurable scarps indicate the north side of the Denali fault moved to the east and vertically up relative to the south side. Maximum offset was 8.8 meters on the Denali fault at the Tok Highway cutoff and 2.2 meters on the Totschunda fault. The Alaska regional seismic network consists of over 250 station sites operated by the Alaska Earthquake Information Center (AEIC), the Alaska Volcano Observatory (AVO), and the Pacific Tsunami Warning Center (PTWC). Over 25 sites are equipped with broadband sensors, some of which also include strong-motion sensors; the rest of the stations are either one- or three-component short-period instruments. The data from these stations are collected, processed, and archived at the AEIC. The AEIC staff installed a temporary network of over 20 instruments following the M 6.7 Nenana Mountain and M 7.9 events. Prior to the M 7.9 Denali Fault event, the automatic earthquake detection system at AEIC was locating between 15 and 30 events per day; after the event, the system produced 200-400 automatic locations per day for at least 10 days. The processing of the data is ongoing, with priority given to the larger events. The cumulative length of the 6.7 and 7.9 aftershock zones along the Denali and Totschunda faults is about 300 km. We will present the aftershock locations, first-motion focal mechanisms for M4+ events, and regional moment tensors for M4.5+ events. The first-motion focal mechanism for the main event indicates thrusting on the NE-trending plane with a dip of 48 degrees. We will also present results of the double-difference relocation of the aftershocks of the M7.9 event. The relocated aftershocks indicate a NW-dipping fault plane in the epicentral area of the event and a vertical plane along the rest of the rupture length.
NASA Astrophysics Data System (ADS)
Hu, Bingbing; Li, Bing
2016-02-01
It is very difficult to detect weak fault signatures in a wind turbine system because of the large amount of noise. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method, originally based on the discrete wavelet transform (DWT), has disadvantages such as shift variance and aliasing effects in engineering applications. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the output signal-to-noise ratio and the accuracy of fault diagnosis thanks to the merits of DTCWT (near shift invariance and reduced aliasing). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching a single wavelet basis function to the signal being analyzed, which may speed up the signal processing and allow on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer-ring and shaft-coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting fault features than the original DWT-based MSTSR, the wavelet transform with post-spectral analysis, and EMD-based spectral analysis methods.
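The authors' full MSTSR pipeline involves wavelet-domain noise tuning; as a hedged illustration of the underlying stochastic resonance mechanism only (not the authors' method), the classic bistable system dx/dt = a*x - b*x^3 + s(t) can be integrated with Euler-Maruyama so that tuned noise boosts a weak periodic "fault signature". All constants here are invented.

```python
import numpy as np

def bistable_sr(signal, dt, a=1.0, b=1.0, noise_std=0.5, seed=0):
    """Integrate the bistable stochastic-resonance system
    dx/dt = a*x - b*x**3 + signal + noise, via Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(signal))
    for i in range(1, len(signal)):
        drift = a * x[i-1] - b * x[i-1] ** 3 + signal[i-1]
        x[i] = x[i-1] + drift * dt + noise_std * np.sqrt(dt) * rng.standard_normal()
    return x

# Weak 10 Hz "fault signature" buried in noise (synthetic stand-in).
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
weak = 0.1 * np.sin(2 * np.pi * 10 * t)
noisy = weak + 0.4 * np.random.default_rng(1).standard_normal(len(t))
out = bistable_sr(noisy, dt=1 / fs)
print("output range:", out.min(), out.max())
```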
Paleoseismic investigations at the Cal thrust fault, Mendoza, Argentina
NASA Astrophysics Data System (ADS)
Salomon, Eric; Schmidt, Silke; Hetzel, Ralf; Mingorance, Francisco
2010-05-01
Along the active mountain front of the Andean Precordillera between 30°S and 34°S in western Argentina, several earthquakes have occurred in recent times, including a 7.0 Ms event in 1861 that destroyed the city of Mendoza and killed two thirds of its population. The 1861 event and two other earthquakes (Ms = 5.7 in 1929 and Ms = 5.6 in 1967) were generated on the Cal thrust fault, which extends over a distance of 31 km north-south and runs straight through the center of Mendoza. In the city, which now has more than 1 million inhabitants, the fault forms a 3-m-high fault scarp. Although the Cal thrust fault poses a serious seismic hazard, the paleoseismologic history of this fault and its long-term slip rate remain largely unknown (Mingorance, 2006). We present the first results of an ongoing paleoseismologic study of the Cal thrust at a site located 5 km north of Mendoza. Here, the fault offsets Late Holocene alluvial fan sediments by 2.5 m vertically and exhibits a well-developed fault scarp. A 15-m-long and 2-3-m-deep trench across the scarp reveals three east-vergent folds that we interpret to have formed during three earthquakes. Successive retrodeformation of the two youngest folds suggests that the most recent event (presumably the 1861 earthquake) caused ~1.1 m of vertical offset and ~1.8 m of horizontal shortening. For the penultimate event we obtain a vertical offset of ~0.7 m and a horizontal shortening of ~1.9 m. A vertical displacement of ~0.7 m observed on a steeply west-dipping fault may be associated with an older event. The cumulative vertical offset of 2.5 m for the three inferred events is in excellent agreement with the height of the scarp. Based on the retrodeformation of the trench deposits, the fault plane dips ~25° to the west. In the deepest part of the trench, evidence for even older seismic events is preserved beneath an angular unconformity that formed during a period of erosion and pre-dates the present-day scarp. Dating of samples to determine the recurrence interval of these seismic events and the long-term slip rate of the fault is in progress. Reference: Mingorance, F. (2006): Morfometría de la escarpa de falla historica identificada al norte del cerro La Cal, zona de falla La Cal, Mendoza. Revista de la Asociación Geológica Argentina, 61 (4), 620-638.
NASA Astrophysics Data System (ADS)
Guccione, Margaret J.
2005-10-01
The New Madrid seismic zone (NMSZ) is an intraplate right-lateral strike-slip and thrust fault system contained mostly within the Mississippi Alluvial Valley. The most recent earthquake sequence in the zone occurred in 1811-1812 and had estimated moment magnitudes of 7-8 (e.g., [Johnston, A.C., 1996. Seismic moment assessment of stable continental earthquakes, Part 3: 1811-1812 New Madrid, 1886 Charleston, and 1755 Lisbon. Geophysical Journal International 126, 314-344; Johnston, A.C., Schweig III, E.S., 1996. The enigma of the New Madrid earthquakes of 1811-1812. Annual Reviews of Earth and Planetary Sciences 24, 339-384; Hough, S.E., Armbruster, J.G., Seeber, L., Hough, J.F., 2000. On the modified Mercalli intensities and magnitudes of the New Madrid earthquakes. Journal of Geophysical Research 105 (B10), 23,839-23,864; Tuttle, M.P., 2001. The use of liquefaction features in paleoseismology: Lessons learned in the New Madrid seismic zone, central United States. Journal of Seismology 5, 361-380]). Four earlier prehistoric earthquakes or earthquake sequences have been dated A.D. 1450 ± 150, 900 ± 100, 300 ± 200, and 2350 B.C. ± 200 years using paleoliquefaction features, particularly those associated with Native American artifacts, and in some cases surface deformation ([Craven, J.A., 1995. Paleoseismology study in the New Madrid seismic zone using geological and archeological features to constrain ages of liquefaction deposits. M.S. thesis, University of Memphis, Memphis, TN, U.S.A.; Tuttle, M.P., Lafferty III, R.H., Guccione, M.J., Schweig III, E.S., Lopinot, N., Cande, R., Dyer-Williams, K., Haynes, M., 1996. Use of archaeology to date liquefaction features and seismic events in the New Madrid seismic zone, central United States. Geoarchaeology 11, 451-480; Guccione, M.J., Mueller, K., Champion, J., Shepherd, S., Odhiambo, B., 2002b. Stream response to repeated co-seismic folding, Tiptonville dome, western Tennessee. Geomorphology 43 (2002), 313-349; Tuttle, M.P., Schweig, E.S., Sims, J.D., Lafferty, R.H., Wolf, L.W., Haynes, M.L., 2002. The earthquake potential of the New Madrid seismic zone. Bulletin of the Seismological Society of America 92 (6), 2080-2089; Tuttle, M.P., Schweig III, E.S., Campbell, J., Thomas, P.M., Sims, J.D., Lafferty III, R.H., 2005. Evidence for New Madrid earthquakes in A.D. 300 and 2350 B.C. Seismological Research Letters 76, 489-501]). The two most recent prehistoric events and the 2350 B.C. event were probably also earthquake sequences with approximately the same magnitude as the historic sequence. Surface deformation (faulting and folding) in an alluvial setting provides many examples of stream response to gradient changes that can also be used to date past earthquake events. Stream responses include changes in channel morphology, deviations of the channel path from the regional gradient, changes in the direction of flow, anomalous longitudinal profiles, and aggradation or incision of the channel ([Merritts, D., Hesterberg, T., 1994. Stream networks and long-term surface uplift in the New Madrid seismic zone. Science 265, 1081-1084; Guccione, M.J., Mueller, K., Champion, J., Shepherd, S., Odhiambo, B., 2002b. Stream response to repeated co-seismic folding, Tiptonville dome, western Tennessee. Geomorphology 43 (2002), 313-349]). Uplift or depression of the floodplain affects the frequency of flooding and thus the thickness and style of vertical accretion, or the drowning of a meander scar to form a lake.
Vegetation may experience trauma, mortality, and in some cases growth enhancement due to ground failure during the earthquake and hydrologic changes after the earthquake ([VanArsdale, R.B., Stahle, D.W., Cleaveland, M.K., Guccione, M.J., 1998. Earthquake signals in tree-ring data from the New Madrid seismic zone and implications for paleoseismicity. Geology 26, 515-518]). Identifying and dating these physical and biologic responses allows source areas to be identified and seismic events to be dated. Seven fault segments are recognized by microseismicity and geomorphology. Surface faulting has been recognized at three of these segments: the Reelfoot fault, the New Madrid North fault, and the Bootheel fault. The Reelfoot fault is a compressive stepover along the strike-slip fault and has up to 11 m of surface relief ([Carlson, S.D., 2000. Formation and geomorphic history of Reelfoot Lake: insight into the New Madrid seismic zone. M.S. thesis, University of Arkansas, Fayetteville, Arkansas, U.S.A.]), deforming abandoned and active Mississippi River channels ([Guccione, M.J., Mueller, K., Champion, J., Shepherd, S., Odhiambo, B., 2002b. Stream response to repeated co-seismic folding, Tiptonville dome, western Tennessee. Geomorphology 43 (2002), 313-349]). The New Madrid North fault apparently has only strike-slip motion and is recognized by modern microseismicity, geomorphic anomalies, and sand cataclasis ([Baldwin, J.N., Barron, A.D., Kelson, K.I., Harris, J.B., Cashman, S., 2002. Preliminary paleoseismic and geophysical investigation of the North Farrenburg lineament: primary tectonic deformation associated with the New Madrid North Fault? Seismological Research Letters 73, 393-413]). The Bootheel fault, which is not identified by the modern microseismicity, is associated with extensive liquefaction and offset channels ([Guccione, M.J., Marple, R., Autin, W.J., 2005. Evidence for Holocene displacements on the Bootheel fault (lineament) in southeastern Missouri: Seismotectonic implications for the New Madrid region. Geological Society of America Bulletin 117, 319-333]). The fault has dominantly strike-slip motion but also has a vertical component of slip. Other recognized surface deformation includes relatively low-relief folding at the Big Lake/Manila high ([Guccione, M.J., VanArsdale, R.B., Hehr, L.H., 2000. Origin and age of the Manila high and associated Big Lake "Sunklands", New Madrid seismic zone, northeastern Arkansas. Geological Society of America Bulletin 112, 579-590]) and the Lake St. Francis/Marked Tree high ([Guccione, M.J., VanArsdale, R.B., 1995. Origin and age of the St. Francis Sunklands using drainage patterns and sedimentology. Final report submitted to the U.S. Geological Survey, Award Number 1434-93-G-2354, Washington, D.C.]), both along the subsurface Blytheville arch. Deformation at each of the fault segments does not occur during every earthquake event, indicating that earthquake sources have varied throughout the Holocene.
NASA Astrophysics Data System (ADS)
Williams, P. L.; Phillips, D. A.; Bowles-Martinez, E.; Masana, E.; Stepancikova, P.
2010-12-01
Terrestrial and airborne LiDAR data and low-altitude aerial photography have been used in conjunction with field work to identify and map single- and multiple-event stream offsets along all strands of the San Andreas fault in the Coachella Valley. Goals of the work are characterizing the range of displacements associated with the fault's prehistoric surface ruptures, evaluating patterns of along-fault displacement, and disclosing processes associated with the prominent Banning-Mission Creek fault junction. Preservation of offsets is associated with landscape conditions including: (1) well-confined and widely spaced source streams up-slope of the fault; (2) persistent geomorphic surfaces below the fault; (3) slope directions oriented approximately perpendicular to the fault. Notably, a pair of multiple-event offset sites has been recognized in coarse fan deposits below the Mission Creek fault near 1000 Palms oasis. Each of these sites is associated with a single source drainage oriented approximately perpendicular to the fault and preserves a record of individual fault displacements affecting the southern portion of the Mission Creek branch of the San Andreas fault. The two sites individually record long (>10 event) slip-per-event histories. Documentation of the sites indicates a prevalence of moderate displacements and a small number of large offsets. This is consistent with evidence developed in systematic mapping of individual and multiple-event stream offsets in the area extending 70 km south to Durmid Hill. Challenges to site interpretation include the presence of closely spaced en echelon fault branches and indications of stream avulsion in the area of the modern fault crossing. Conversely, strong bar-and-swale topography produces high-quality offset indicators that can be identified across en echelon branches in most cases. To accomplish the detailed mapping needed to fully recover the complex yet well-preserved geomorphic features under investigation, a program of terrestrial laser scanning (TLS) was conducted at the 1000 Palms oasis stream offset sites. Data products and map interpretations will be presented along with initial applications of the study to characterizing San Andreas fault rupture hazard. Continuing work will seek to more fully populate the dataset of larger offsets, evaluate means to objectively date the larger offsets, and, as completely as possible, characterize magnitudes of past surface ruptures of the San Andreas fault in the Coachella Valley.
Bell, J.W.; DePolo, C.M.; Ramelli, A.R.; Sarna-Wojcicki, A. M.; Meyer, C.E.
1999-01-01
The 1932 Cedar Mountain earthquake (Ms 7.2) was one of the largest historical events in the Walker Lane region of western Nevada, and it produced a complicated strike-slip rupture pattern on multiple Quaternary faults distributed through three valleys. Primary, right-lateral surface ruptures occurred on north-striking faults in Monte Cristo Valley; small-scale lateral and normal offsets occurred in Stewart Valley; and secondary, normal faulting occurred on north-northeast-striking faults in the Gabbs Valley epicentral region. A reexamination of the surface ruptures provides new displacement and fault-zone data: maximum cumulative offset is estimated to be 2.7 m, and newly recognized faults extend the maximum width and end-to-end length of the rupture zone to 17 and 75 km, respectively. A detailed Quaternary allostratigraphic chronology based on regional alluvial-geomorphic relationships, tephrochronology, and radiocarbon dating provides a framework for interpreting the paleoseismic history of the fault zone. A late Wisconsinan alluvial-fan and piedmont unit containing a 32-36 ka tephra layer is a key stratigraphic datum for paleoseismic measurements. Exploratory trenching and radiocarbon dating of tectonic stratigraphy provide the first estimates for timing of late Quaternary faulting along the Cedar Mountain fault zone. Three trenches display evidence for six faulting events, including that in 1932, during the past 32-36 ka. Radiocarbon dating of organic soils interstratified with tectonically ponded silts establishes best-fit ages of the pre-1932 events at 4, 5, 12, 15, and 18 ka, each with ±2 ka uncertainties. On the basis of an estimated cumulative net slip of 6-12 m for the six faulting events, minimum and maximum late Quaternary slip rates are 0.2 and 0.7 mm/yr, respectively, and the preferred rate is 0.4-0.5 mm/yr. The average recurrence (interseismic) interval is 3600 yr. The relatively uniform thickness of the ponded deposits suggests that similar-size, characteristic rupture events may characterize late Quaternary slip on the zone. A comparison of event timing with the average late Quaternary recurrence interval indicates that slip has been largely regular (periodic) rather than temporally clustered. To account for the spatial separation of the primary surface faulting in Monte Cristo Valley from the epicenter and for a factor-of-two-to-three disparity between the instrumentally and geologically determined seismic moments associated with the earthquake, we hypothesize two alternative tectonic models containing undetected subevents. Either model would adequately account for the observed faulting on the basis of wrench-fault kinematics that may be associated with the Walker Lane. The 1932 Cedar Mountain earthquake is considered an important modern analogue for seismotectonic modeling and estimating seismic hazard in the Walker Lane region. In contrast to most other historical events in the Basin and Range province, the 1932 event did not occur along a major range-bounding fault, and no single, throughgoing basement structure can account for the observed rupture pattern. The 1932 faulting supports the concept that major earthquakes in the Basin and Range province can exhibit complicated distributive rupture patterns and that slip rate may not be a reliable criterion for modeling seismic hazard.
Locating hardware faults in a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-04-13
Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
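A toy reading of the claim, sketched below under our own assumptions (a complete binary tree and two-tier test levels): pairing adjacent tiers with offsets 0 and 1 gives two sets of non-overlapping test levels that together cover every parent-child link, and each node in a level's top tier roots one test cell.

```python
from collections import defaultdict

def tiers(num_nodes):
    """Group node ids of a complete binary tree (root id 1) by tier."""
    by_tier = defaultdict(list)
    for node in range(1, num_nodes + 1):
        by_tier[node.bit_length() - 1].append(node)
    return by_tier

def test_levels(by_tier, offset):
    """Pair adjacent tiers into non-overlapping test levels. offset=0 and
    offset=1 give two sets that together cover every parent-child link."""
    depths = sorted(by_tier)
    return [(depths[i], depths[i + 1])
            for i in range(offset, len(depths) - 1, 2)]

by_tier = tiers(15)                      # 4-tier complete binary tree
for offset in (0, 1):
    for top, bottom in test_levels(by_tier, offset):
        # each node in the top tier roots one test cell of its descendants
        cells = {root: [c for c in by_tier[bottom] if c // 2 == root]
                 for root in by_tier[top]}
        print(f"set {offset}, tiers {top}-{bottom}: cells {cells}")
```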
The 2016 Mw7.0 Kumamoto, Japan earthquake: the rupture propagation under extensional stress
NASA Astrophysics Data System (ADS)
Zhang, Y.; Shan, X.; Zhang, G.; Gong, W.
2016-12-01
On April 16, 2016, the city of Kumamoto was hit by an Mw7.0 earthquake, the largest earthquake since 1900 in the central part of Kyushu Island, Japan. It was an event with two foreshocks and rather complex source faults and surface rupture scarps. The Mw7.0 Kumamoto earthquake and its foreshocks and aftershocks occurred on the previously mapped Futagawa and Hinagu faults, which form the southwestern portion of the median tectonic line on Kyushu Island. These faults are mainly controlled by extensional and right-lateral shear stress. In this study, we obtained the deformation field of the Kumamoto earthquake using both descending and ascending Sentinel-1A data. We then inverted for the fault slip distribution based on the displacements obtained by InSAR; a three-segment fault model was established by trial and error. We analyzed the rupture propagation, and our conclusions are as follows. The Mw 7.0 earthquake is a right-lateral strike-slip event with a slight normal component. Most of the slip is distributed on the Futagawa fault segment, with a maximum slip of 4.9 m at 5 km depth below the surface; the energy released on this segment alone is equivalent to an Mw6.9 event. The slip on the Hinagu fault segment is also right-lateral, but with a maximum slip of 2 m. Compared to the southern two segments, the northern source fault segment is the steepest, dipping almost vertically at up to 80°. The normal component of the Kumamoto event is controlled by extensional stress due to the tectonic background: the Beppu-Shimabara half graben is the largest extensional structure on Kyushu Island, and its formation could be strongly affected by Philippine Sea slab (PHS) convergence and Okinawa Trough extension, so we argue that the Kumamoto event may be a concrete manifestation of Okinawa Trough extension on Kyushu Island. A continuous surface rupture trace is observed in the InSAR coseismic deformation and in field investigations, based on which we confirm that the Kumamoto rupture jumped a 1-km-wide step-over of the Kiyama fault and two 0.6-km-wide gaps. However, the mainshock did not jump a 1.7-km-wide step-over of the Futagawa fault, which limited its seismic moment. In addition, neither the Mw6.4 nor the Mw6.5 event could propagate through a 2-km-wide step-over at the northeast termination of the Hinagu fault.
Paleoearthquake recurrence on the East Paradise fault zone, metropolitan Albuquerque, New Mexico
Personius, Stephen F.; Mahan, Shannon
2000-01-01
A fortuitous exposure of the East Paradise fault zone near Arroyo de las Calabacillas has helped us determine a post-middle Pleistocene history for a long-forgotten Quaternary fault in the City of Albuquerque, New Mexico. Mapping of two exposures of the fault zone allowed us to measure a total vertical offset of 2.75 m across middle Pleistocene fluvial and eolian deposits and to estimate individual surface-faulting events of about 1, 0.5, and 1.25 m. These measurements and several thermoluminescence ages allow us to calculate a long-term average slip rate of 0.01 ± 0.001 mm/yr and date two surface-faulting events to 208 ± 25 ka and 75 ± 7 ka. The youngest event probably occurred in the late Pleistocene, sometime after 75 ± 7 ka. These data yield a single recurrence interval of 133 ± 26 ka and an average recurrence interval of 90 ± 10 ka. However, recurrence intervals are highly variable because the two youngest events occurred in less than 75 ka. Offsets of 0.5-1.25 m and a fault length of 13-20 km indicate that surface-rupturing paleoearthquakes on the East Paradise fault zone had probable Ms or Mw magnitudes of 6.8-7.0. Although recurrence intervals are long on the East Paradise fault zone, these data are significant because they represent some of the first published slip rate, paleoearthquake magnitude, and recurrence information for any of the numerous Quaternary faults in the rapidly growing Albuquerque-Rio Rancho metropolitan area.
NASA Astrophysics Data System (ADS)
Dorostkar, Omid; Guyer, Robert A.; Johnson, Paul A.; Marone, Chris; Carmeliet, Jan
2017-05-01
The presence of fault gouge has considerable influence on the slip properties of tectonic faults and the physics of earthquake rupture. The presence of fluids within faults also plays a significant role in faulting and earthquake processes. In this paper, we present 3-D discrete element simulations of dry and fluid-saturated granular fault gouge and analyze the effect of fluids on stick-slip behavior. Fluid flow is modeled using computational fluid dynamics based on the Navier-Stokes equations for an incompressible fluid, modified to take into account the presence of particles. Analysis of a long time series of slip events shows that (1) the drop in shear stress, (2) the compaction of the granular layer, and (3) the kinetic energy release during slip all increase in magnitude in the presence of an incompressible fluid, compared to dry conditions. We also observe that, on average, the recurrence interval between slip events is longer for fluid-saturated granular fault gouge than for the dry case. This observation is consistent with the occurrence of larger events in the presence of fluid. The increase in kinetic energy during slip events under saturated conditions can be attributed to the increased fluid flow during slip. Our observations emphasize the important role that fluid flow and fluid-particle interactions play in tectonic fault zones and show in particular how discrete element method (DEM) models can help understand the hydromechanical processes that dictate fault slip.
NASA Technical Reports Server (NTRS)
Carreno, Victor
2006-01-01
This document describes a method to demonstrate that a UAS operating in the NAS can avoid collisions with an equivalent level of safety compared to a manned aircraft. The method is based on the calculation of a collision probability for a UAS, the calculation of a collision probability for a baseline manned aircraft, and the calculation of a risk ratio given by: Risk Ratio = P(collision_UAS)/P(collision_manned). A UAS will achieve an equivalent level of safety for collision risk if the Risk Ratio is less than or equal to one. Calculation of the probability of collision for UAS and manned aircraft is accomplished through event/fault trees.
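A toy numeric version of the risk-ratio criterion; the branch structure and all probabilities below are invented for illustration, not taken from the document.

```python
# Toy event-tree evaluation of the risk ratio (all numbers invented).
# Each branch: P(encounter) * P(detection fails) * P(avoidance fails).
def p_collision(p_encounter, p_no_detect, p_no_avoid):
    return p_encounter * p_no_detect * p_no_avoid

p_uas = p_collision(1e-3, 0.10, 0.05)      # hypothetical UAS branch
p_manned = p_collision(1e-3, 0.08, 0.07)   # hypothetical manned baseline

risk_ratio = p_uas / p_manned
verdict = "equivalent level of safety" if risk_ratio <= 1.0 else "not equivalent"
print(f"Risk Ratio = {risk_ratio:.2f} -> {verdict}")
```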
Neural computing for numeric-to-symbolic conversion in control systems
NASA Technical Reports Server (NTRS)
Passino, Kevin M.; Sartori, Michael A.; Antsaklis, Panos J.
1989-01-01
A type of neural network, the multilayer perceptron, is used to classify numeric data and assign appropriate symbols to various classes. This numeric-to-symbolic conversion results in a type of information extraction, which is similar to what is called data reduction in pattern recognition. The use of the neural network as a numeric-to-symbolic converter is introduced, its application in autonomous control is discussed, and several applications are studied. The perceptron is used as a numeric-to-symbolic converter for a discrete-event system controller supervising a continuous variable dynamic system. It is also shown how the perceptron can implement fault trees, which provide useful information (alarms) in a biological system and information for failure diagnosis and control purposes in an aircraft example.
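The sense in which a perceptron can "implement" a fault tree is that threshold units with hand-chosen weights realize AND and OR gates; a minimal numpy sketch of that idea (the example tree is hypothetical, and the weights are set by hand rather than learned):

```python
import numpy as np

def threshold_unit(inputs, weights, bias):
    """Single perceptron unit: fires 1 if the weighted sum exceeds the bias."""
    return int(np.dot(inputs, weights) > bias)

def or_gate(inputs):   # fires if any basic event is present
    return threshold_unit(inputs, np.ones(len(inputs)), 0.5)

def and_gate(inputs):  # fires only if all basic events are present
    return threshold_unit(inputs, np.ones(len(inputs)), len(inputs) - 0.5)

# Tiny hypothetical fault tree: TOP = (sensor_a AND sensor_b) OR pump_fail
sensor_a, sensor_b, pump_fail = 1, 1, 0
top = or_gate([and_gate([sensor_a, sensor_b]), pump_fail])
print("alarm" if top else "nominal")   # -> alarm
```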
NASA Astrophysics Data System (ADS)
Cochran, U. A.; Clark, K. J.; Howarth, J. D.; Biasi, G. P.; Langridge, R. M.; Villamor, P.; Berryman, K. R.; Vandergoes, M. J.
2017-04-01
Discovery and investigation of millennial-scale geological records of past large earthquakes improve understanding of earthquake frequency, recurrence behaviour, and likelihood of future rupture of major active faults. Here we present a ∼2000 year-long, seven-event earthquake record from John O'Groats wetland adjacent to the Alpine fault in New Zealand, one of the most active strike-slip faults in the world. We linked this record with the 7000 year-long, 22-event earthquake record from Hokuri Creek (20 km along strike to the north) to refine estimates of earthquake frequency and recurrence behaviour for the South Westland section of the plate boundary fault. Eight cores from John O'Groats wetland revealed a sequence that alternated between organic-dominated and clastic-dominated sediment packages. Transitions from a thick organic unit to a thick clastic unit that were sharp, involved a significant change in depositional environment, and were basin-wide, were interpreted as evidence of past surface-rupturing earthquakes. Radiocarbon dates of short-lived organic fractions either side of these transitions were modelled to provide estimates for earthquake ages. Of the seven events recognised at the John O'Groats site, three post-date the most recent event at Hokuri Creek, two match events at Hokuri Creek, and two events at John O'Groats occurred in a long interval during which the Hokuri Creek site may not have been recording earthquakes clearly. The preferred John O'Groats-Hokuri Creek earthquake record consists of 27 events since ∼6000 BC for which we calculate a mean recurrence interval of 291 ± 23 years, shorter than previously estimated for the South Westland section of the fault and shorter than the current interseismic period. The revised 50-year conditional probability of a surface-rupturing earthquake on this fault section is 29%. The coefficient of variation is estimated at 0.41. We suggest the low recurrence variability is likely to be a feature of other strike-slip plate boundary faults similar to the Alpine fault.
Thompson, B.D.; Young, R.P.; Lockner, D.A.
2005-01-01
To investigate laboratory earthquakes, stick-slip events were induced on a saw-cut Westerly granite sample by triaxial loading at 150 MPa confining pressure. Acoustic emissions (AE) were monitored using an innovative continuous waveform recorder. The first motion of each stick-slip event was recorded as a large-amplitude AE signal. These events locate onto the saw-cut fault plane, implying that they represent the nucleation sites of the dynamic stick-slip failure events. The precise location of nucleation varied between events and was probably controlled by heterogeneity of stress or surface conditions on the fault. The initial nucleation diameter of each dynamic instability was inferred to be less than 3 mm. A small number of AE were recorded prior to each macro-slip event. For the second and third slip events, premonitory AE source mechanisms mimic the large-scale fault plane geometry. Copyright 2005 by the American Geophysical Union.
Dokas, Ioannis M; Panagiotakopoulos, Demetrios C
2006-08-01
The available expertise on managing and operating solid waste management (SWM) facilities varies among countries and among types of facilities. Few experts are willing to record their experience, and few researchers systematically investigate the chains of events that could trigger operational failures in a facility; expertise acquisition and dissemination in SWM is neither popular nor easy, despite the great need for it. This paper presents a knowledge acquisition process aimed at capturing, codifying, and expanding reliable expertise and propagating it to non-experts. The knowledge engineer (KE), the person performing the acquisition, must identify the events (or causes) that could trigger a failure, determine whether a specific event could trigger more than one failure, and establish how the various events are related among themselves and how they are linked to specific operational problems. The proposed process, which utilizes logic diagrams (fault trees) widely used in system safety and reliability analyses, was applied to the analysis of 24 common landfill operational problems. The acquired knowledge led to the development of a web-based expert system (Landfill Operation Management Advisor, http://loma.civil.duth.gr), which estimates the possibility of occurrence of operational problems, provides advice, and suggests solutions.
Simpson, Robert W.; Graymer, Russell W.; Jachens, Robert C.; Ponce, David A.; Wentworth, Carl M.
2004-01-01
We present cross-section and map views of earthquakes that occurred from 1984 to 2000 in the vicinity of the Hayward and Calaveras faults in the San Francisco Bay region, California. These earthquakes came from a catalog of events relocated using the double-difference technique, which provides superior relative locations of nearby events. As a result, structures such as fault surfaces and alignments of events along these surfaces are more sharply defined than in previous catalogs.
NASA Astrophysics Data System (ADS)
Roland, E. C.; Walton, M. A. L.; Ruppert, N. A.; Gulick, S. P. S.; Christeson, G. L.; Haeussler, P. J.
2014-12-01
In January 2013, a Mw 7.5 earthquake ruptured a segment of the Queen Charlotte Fault offshore the town of Craig in southeast Alaska. The region of the fault that slipped during the Craig earthquake is adjacent to, and possibly overlapping with, the northern extent of the 1949 M 8.1 Queen Charlotte earthquake rupture (Canada's largest recorded earthquake), and is just south of the rupture area of the 1972 M 7.6 earthquake near Sitka, Alaska. Here we present aftershock locations and focal mechanisms for events that occurred during the four months following the mainshock, using data recorded on an Ocean Bottom Seismometer (OBS) array deployed offshore of Prince of Wales Island. This array consisted of nine short-period instruments surrounding the fault segment and recorded hundreds of aftershocks during April and May 2013. In addition to highlighting the primary mainshock rupture plane, aftershocks also appear to occur along secondary fault structures adjacent to the main fault trace, illuminating complicated structure, particularly toward the northern extent of the Craig rupture. Focal mechanisms for the larger events recorded during the OBS deployment show both near-vertical strike-slip motion consistent with the mainshock mechanism and events with varying strike and a component of normal faulting. Although fault structure along this northern segment of the QCF appears to be considerably simpler than to the south, where a higher degree of oblique convergence leads to sub-parallel compressional deformation structures, secondary faulting structures apparent in legacy seismic reflection data near the Craig rupture may be consistent with the observed seismicity patterns. In combination, these data may help to characterize structural heterogeneity along the northern segment of the Queen Charlotte Fault that contributes to rupture segmentation during large strike-slip events.
Model authoring system for fail safe analysis
NASA Technical Reports Server (NTRS)
Sikora, Scott E.
1990-01-01
The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, the circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which then are searched to generate either a graphical fault tree analysis or failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
Chen, Gang; Song, Yongduan; Lewis, Frank L
2016-05-03
This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology that contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for actuator bias faults, partial loss-of-effectiveness actuation faults, communication link faults, model uncertainty, and external disturbances. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple robot-arm cooperative control system was developed for real-time verification. Experiments on the networked robot arms were conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.
A rapid calculation system for tsunami propagation in Japan by using the AQUA-MT/CMT solutions
NASA Astrophysics Data System (ADS)
Nakamura, T.; Suzuki, W.; Yamamoto, N.; Kimura, H.; Takahashi, N.
2017-12-01
We developed a rapid calculation system for geodetic deformation and tsunami propagation in and around Japan. The system automatically conducts forward calculations using point source parameters estimated by the AQUA system (Matsumura et al., 2006), which determines the magnitude, hypocenter, and moment tensor of an event occurring in Japan as early as 3 minutes after the origin time. An optimized calculation code developed by Nakamura and Baba (2016) is employed for the calculations on our computer server with 12-core Intel Xeon 2.60 GHz processors. Assuming homogeneous slip on a single fault plane as the source fault, the system calculates geodetic deformation and tsunami propagation by numerically solving the 2D linear long-wave equations at a grid interval of 1 arc-min for two fault orientations simultaneously, i.e., one fault plane and its conjugate. Because the fault models are based on moment tensor analyses of event data, the system appropriately evaluates tsunami propagation even for unexpected events such as normal faulting in the subduction zone; this differs from evaluating tsunami arrivals and heights from a pre-calculated database built with fault models that assume typical types of faulting in anticipated source areas (e.g., Tatehata, 1998; Titov et al., 2005; Yamamoto et al., 2016). Thanks to complete automation from event detection to the output of graphical figures, the calculation results can be available via e-mail and web site as early as 4 minutes after the origin time. For moderate-sized events such as M5 to 6 events, the system helps us rapidly investigate whether tsunami amplitudes at nearshore and offshore stations exceed the noise level, and easily identify actual tsunamis at the stations by comparison with the synthetic waveforms. When source models are derived from GNSS data, such evaluations may be difficult because of the low resolution of the sources due to the low signal-to-noise ratio at land stations. For large to huge events in offshore areas, the developed system may be useful for deciding to start or stop preparations and precautions against tsunami arrivals, because calculation results including arrival times and heights of the initial and maximum waves can be available before the waves reach coastal areas.
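A much-reduced sketch of the numerical core described here, a 1-D linear long-wave (shallow-water) solver on a staggered grid; the operational system is 2-D and optimized, so this is for intuition only, with invented bathymetry and initial conditions.

```python
import numpy as np

g = 9.81
dx, dt = 1852.0, 1.0            # 1 arc-min ~ 1852 m; dt satisfies CFL for h <= 4000 m
h = np.full(400, 4000.0)        # uniform ocean depth in meters (synthetic)
eta = np.exp(-((np.arange(400) - 200.0) ** 2) / 50.0)  # initial sea-surface hump
u = np.zeros(401)               # velocities live on the staggered grid

for _ in range(600):            # 600 steps of 1 s = 10 minutes
    # momentum: du/dt = -g * d(eta)/dx
    u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
    # continuity: d(eta)/dt = -d(h*u)/dx  (h uniform here)
    eta -= dt / dx * (h * (u[1:] - u[:-1]))

print("max amplitude after 10 min:", eta.max())
```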
Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Ren, C. X.; Bolton, D. C.; Marone, C.; Johnson, P. A.
2017-12-01
Fault friction controls nearly all aspects of fault rupture, yet it can only be measured in the laboratory. Here we describe laboratory experiments in which acoustic emissions are recorded from the fault. We find that by applying a machine learning approach known as "extreme gradient boosting trees" to the continuous acoustic signal, the fault friction can be directly inferred, showing that instantaneous characteristics of the acoustic signal are a fingerprint of the frictional state. This machine-learning-based inference leads to a simple law linking the acoustic signal to the frictional state, and it holds for every stress cycle the laboratory fault goes through. The approach uses no measured parameter other than instantaneous statistics of the acoustic signal. This finding may have importance for inferring frictional characteristics from seismic waves in the Earth, where fault friction cannot be measured.
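A hedged, synthetic-data sketch of the workflow: compute instantaneous statistics of an "acoustic" signal in windows and regress friction on them with gradient-boosted trees (scikit-learn's GradientBoostingRegressor stands in for the XGBoost model the abstract names; the signal and the friction-variance link are invented).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in: "friction" cycles and an acoustic signal whose
# variance tracks the frictional state (real data come from the lab fault).
n, win = 20000, 200
friction = 0.5 + 0.1 * np.sin(np.linspace(0, 20 * np.pi, n))
acoustic = rng.standard_normal(n) * (1.0 + 5.0 * friction)

# Features: instantaneous statistics of the signal in non-overlapping windows.
segs = acoustic[: n - n % win].reshape(-1, win)
X = np.c_[segs.var(axis=1), np.abs(segs).mean(axis=1),
          np.percentile(segs, 90, axis=1)]
y = friction[: n - n % win].reshape(-1, win).mean(axis=1)

split = len(X) // 2
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print("R^2 on held-out cycles:", round(model.score(X[split:], y[split:]), 3))
```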
The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications
NASA Technical Reports Server (NTRS)
Chau, Savio N.; Alkalai, Leon; Tai, Ann T.
2000-01-01
The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.
NASA Astrophysics Data System (ADS)
Valoroso, L.; Chiaraluce, L.
2017-12-01
Low-angle normal faults (dip < 30°) are widely documented geologically and are considered responsible for accommodating crustal extension within the brittle crust, although their mechanical behavior and seismogenic potential are enigmatic. We study the anatomy and slip behavior of the actively slipping Altotiberina low-angle normal fault (ATF) system using a high-resolution, 5-year-long (2010-2014) earthquake catalogue composed of 37k events (ML<3.9 and completeness magnitude MC=0.5 ML), recorded by the dense permanent seismic network of the Altotiberina Near Fault Observatory (TABOO). The seismic activity defines a fault system dominated at depth by the low-angle (15-20°) ATF surface, coinciding with the ATF geometry imaged by seismic reflection data. The ATF extends for 50 km along strike and between 4-5 and 16 km depth. Seismicity also images the geometry of a set of higher-angle faults (35-50°) located in the ATF hanging wall (HW). The ATF-related seismicity accounts for 10% of the whole seismicity (3,700 events with ML<2.4), occurring at a remarkably constant rate of 2.2 events/day. This seismicity describes a roughly 1.5-km-thick fault zone composed of multiple sub-parallel slipping planes. The remaining events are instead organized in multiple-mainshock (MW>3) seismic sequences lasting from weeks to months, activating a contiguous network of 3-5-km-long syn- and antithetic fault segments within the ATF HW. The space-time evolution of these minor sequences is consistent with subsequent failures promoted by fluid flow. The ATF seismicity pattern includes 97 clusters of repeating events (RE) comprising 299 events with ML<1.9. RE are located around locked patches identified by geodetic modeling, suggesting a mixed-mode (stick-slip and stable-sliding) slip behavior along the fault plane in accommodating most of the NE-trending tectonic deformation, with creep dominating below 5 km depth. Consistently, the seismic moment released by the ATF seismicity accounts for only a small portion (30%) of the geodetic moment. The rate of occurrence of RE, mostly doublets with short inter-event times (e.g., hours), appears to modulate the seismic release of the ATF HW, suggesting that creep may drive the strain partitioning of the system.
NASA Astrophysics Data System (ADS)
Tal, Yuval; Hager, Bradford H.
2018-02-01
We study the response to slow tectonic loading of rough faults governed by velocity-weakening rate and state friction, using a 2-D plane strain model. Our numerical approach accounts for all stages in the seismic cycle, and in each simulation we model a sequence of two or more earthquakes. We focus on the global behavior of the faults and find that as the roughness amplitude, br, increases and the minimum wavelength of roughness decreases, there is a transition from seismic to aseismic slip, in which the load on the fault is released by more slip events but with lower slip rate, lower seismic moment per unit length, M0,1d, and lower average static stress drop on the fault, Δτt. Even larger decreases with roughness are observed when these source parameters are estimated only for the dynamic stage of the rupture. For br ≤ 0.002, the source parameters M0,1d and Δτt decrease together, and the relationship between Δτt and the average fault strain is similar to that of a smooth fault. For faults with larger values of br that are completely ruptured during the slip events, the average fault strain generally decreases more rapidly with roughness than Δτt.
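For reference, the standard rate-and-state formulation this abstract presupposes is shown below with the commonly used aging law (the abstract does not state which evolution law the authors adopt):

```latex
\mu = \mu_0 + a\,\ln\!\frac{V}{V_0} + b\,\ln\!\frac{V_0\,\theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c},
```

where V is slip rate, θ the state variable, and Dc the characteristic slip distance; velocity weakening corresponds to a - b < 0.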
NASA Astrophysics Data System (ADS)
Murotani, S.; Satake, K.
2017-12-01
In the off-Fukushima region, Mjma 7.4 (event A) and 6.9 (event B) earthquakes occurred on November 6, 1938, following thrust-fault-type earthquakes of Mjma 7.5 and 7.3 on the previous day. These earthquakes were estimated to be normal fault earthquakes by Abe (1977, Tectonophysics). An Mjma 7.0 earthquake occurred on July 12, 2014 near event B, and an Mjma 7.4 earthquake occurred on November 22, 2016 near event A. These recent events are the only M 7 class earthquakes to occur off Fukushima since 1938. Except for the two 1938 events, normal fault earthquakes did not occur until the many aftershocks of the 2011 Tohoku earthquake. We compared the observed tsunami and seismic waveforms of the 1938, 2014, and 2016 earthquakes to examine the normal fault earthquakes off the Fukushima region. It is difficult to compare the tsunami waveforms of the 1938, 2014, and 2016 events because only a few observations exist at common stations. The teleseismic body wave inversion of the 2016 earthquake yielded a focal mechanism with strike 42°, dip 35°, and rake -94°. Other source parameters were as follows: source area 70 km x 40 km, average slip 0.2 m, maximum slip 1.2 m, seismic moment 2.2 x 10^19 Nm, and Mw 6.8. A large slip area is located near the hypocenter, compatible with the tsunami source area estimated from tsunami travel times. The 2016 tsunami source area is smaller than that of the 1938 event, consistent with the difference in Mw: 7.7 for event A as estimated by Abe (1977) versus 6.8 for the 2016 event. Although the 2014 epicenter is very close to that of event B, the teleseismic waveforms of the 2014 event are similar to those of event A and the 2016 event. While Abe (1977) assumed that the mechanism of event B was the same as that of event A, the initial motions at some stations are opposite, indicating that the focal mechanisms of events A and B differ and that more detailed examination is needed. Normal-fault-type earthquakes thus seem to occur following the occurrence of M 7-9 class thrust-type earthquakes at the plate boundary off the Fukushima region.
NASA Astrophysics Data System (ADS)
Abe, Steffen; Krieger, Lars; Deckert, Hagen
2017-04-01
The changes of fluid pressures related to the injection of fluids into the deep underground, for example during geothermal energy production, can potentially reactivate faults and thus cause induced seismic events. Therefore, an important aspect in the planning and operation of such projects, in particular in densely populated regions such as the Upper Rhine Graben in Germany, is the estimation and mitigation of the induced seismic risk. The occurrence of induced seismicity depends on a combination of hydraulic properties of the underground, mechanical and geometric parameters of the fault, and the fluid injection regime. In this study we are therefore employing a numerical model to investigate the impact of fluid pressure changes on the dynamics of the faults and the resulting seismicity. The approach combines a model of the fluid flow around a geothermal well based on a 3D finite difference discretisation of the Darcy-equation with a 2D block-slider model of a fault. The models are coupled so that the evolving pore pressure at the relevant locations of the hydraulic model is taken into account in the calculation of the stick-slip dynamics of the fault model. Our modelling approach uses two subsequent modelling steps. Initially, the fault model is run by applying a fixed deformation rate for a given duration and without the influence of the hydraulic model in order to generate the background event statistics. Initial tests have shown that the response of the fault to hydraulic loading depends on the timing of the fluid injection relative to the seismic cycle of the fault. Therefore, multiple snapshots of the fault's stress- and displacement state are generated from the fault model. In a second step, these snapshots are then used as initial conditions in a set of coupled hydro-mechanical model runs including the effects of the fluid injection. This set of models is then compared with the background event statistics to evaluate the change in the probability of seismic events. The event data such as location, magnitude, and source characteristics can be used as input for numerical wave propagation models. This allows the translation of seismic event statistics generated by the model into ground shaking probabilities.
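A heavily simplified toy of the coupling described here, replacing the 3-D Darcy solver with 1-D pressure diffusion and the 2-D block-slider fault with a single slider using a static/dynamic friction threshold; every number is invented and the rate-and-state details are omitted.

```python
import numpy as np

# Toy coupled model: 1-D pore-pressure diffusion (explicit FD) feeding a
# single spring-slider with a static/dynamic friction threshold rule.
D, dx, dt = 0.1, 10.0, 100.0          # diffusivity (m^2/s), grid (m), step (s)
p = np.zeros(100); p[0] = 5e6         # injection held at 5 MPa at the well (Pa)
k, v_load = 1e6, 1e-9                 # spring stiffness (Pa/m), loading rate (m/s)
mu_s, mu_d, sigma_n = 0.6, 0.5, 50e6  # friction coefficients, normal stress (Pa)
stress = 28.5e6                       # pre-stressed near failure (invented)
slip_events = []

for step in range(50000):
    # diffusion update (explicit; D*dt/dx^2 = 0.1 -> stable)
    p[1:-1] += D * dt / dx**2 * (p[2:] - 2 * p[1:-1] + p[:-2])
    p[0] = 5e6
    stress += k * v_load * dt                 # slow tectonic loading
    strength = mu_s * (sigma_n - p[20])       # effective-stress strength at the fault
    if stress >= strength:                    # stick-slip failure and stress drop
        drop = stress - mu_d * (sigma_n - p[20])
        slip_events.append((step * dt, drop))
        stress -= drop

if slip_events:
    t0, drop = slip_events[0]
    print(f"{len(slip_events)} slip event(s); first at t={t0:.0f} s, "
          f"stress drop {drop / 1e6:.1f} MPa")
else:
    print("no events")
```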
NASA Astrophysics Data System (ADS)
Muksin, Umar; Haberland, Christian; Nukman, Mochamad; Bauer, Klaus; Weber, Michael
2014-12-01
The Tarutung Basin is located at a right step-over in the northern central segment of the dextral strike-slip Sumatran Fault System (SFS). Details of the fault structure along the Tarutung Basin are derived from relocated seismicity as well as from focal mechanisms and structural geology. The seismicity, initially located using a 3D inversion, clusters along fault-like lineations. The events were relocated with a double-difference technique (HYPODD) incorporating waveform cross-correlations; we used 46,904 and 3,191 arrival-time differences obtained from catalogue data and cross-correlation analysis, respectively. Focal mechanisms were determined with a grid search method (the HASH code). Although there is no significant shift of the hypocenters (10.8 m on average) or centroids (167 m on average), the double-difference relocation sharpens the earthquake distribution. The earthquake lineations reflect the fault system, an extensional duplex fault system, and a negative flower structure within the Tarutung Basin. The focal mechanisms of events at the edge of the basin are dominantly strike-slip, representing the dextral Sumatran Fault System. The almost north-south striking normal fault events along extensional zones beneath the basin correlate with the maximum principal stress direction, which is the direction of Indo-Australian plate motion. The extensional zones form an en-echelon pattern, indicated by the presence of strike-slip events on faults striking NE-SW to NW-SE. The detailed characteristics of the fault system derived from this seismological study are also corroborated by structural geology at the surface.
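For reference, the double-difference residual minimized by HYPODD (Waldhauser and Ellsworth, 2000) for an event pair i, j observed at station k is:

```latex
dr_k^{ij} = \left(t_k^i - t_k^j\right)^{\mathrm{obs}} - \left(t_k^i - t_k^j\right)^{\mathrm{cal}},
```

so that relative locations are constrained by differential times, which is why adding cross-correlation measurements sharpens the relocated distribution.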
Evolution of Friction, Wear, and Seismic Radiation Along Experimental Bi-material Faults
NASA Astrophysics Data System (ADS)
Carpenter, B. M.; Zu, X.; Shadoan, T.; Self, A.; Reches, Z.
2017-12-01
Faults are commonly composed of rocks of different lithologies and mechanical properties that are positioned against one another by fault slip; such faults are referred to as bi-material faults (BF). We investigate the mechanical behavior, wear production, and seismic radiation of BF via laboratory experiments on a rotary shear apparatus. In the experiments, two rock blocks of dissimilar or similar lithology are sheared against each other. We used contrasting rock pairs of a stiff, igneous block (diorite, granite, or gabbro) against a more compliant, sedimentary block (sandstone, limestone, or dolomite). The cylindrical blocks have a ring-shaped contact and are loaded under conditions of constant normal stress and shear velocity. Fault behavior was monitored with stress, velocity, and dilation sensors. Acoustic activity was monitored with four 3D accelerometers mounted 2 cm from the experimental fault. These sensors can measure accelerations up to 500 g, and their full waveform output is recorded at 1 MHz for periods up to 14 s. Our preliminary results indicate that the bi-material nature of the fault has a strong effect on slip initiation, wear evolution, and acoustic emission activity. In terms of wear, we observe enhanced wear in experiments with a sandstone block sheared against a gabbro or limestone block. Experiments with a limestone or sandstone block produced distinct slickenline striations. Further, significant differences appeared in the number and amplitude of acoustic events depending on the bi-material setting and slip distance. A gabbro-gabbro fault showed a decrease in both the amplitude and number of acoustic events with increasing slip. Conversely, a gabbro-limestone fault showed a decrease in the number of events but an increase in average event amplitude. Ongoing work focuses on advanced characterization of the mechanical (dynamic weakening) and acoustic (frequency content) parameters.
An Event-Based Approach to Distributed Diagnosis of Continuous Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon
2010-01-01
Distributed fault diagnosis solutions are becoming necessary due to the complexity of modern engineering systems and the advent of smart sensors and computing elements. This paper presents a novel event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, based on a qualitative abstraction of measurement deviations from nominal behavior. We systematically derive dynamic fault signatures expressed as event-based fault models. We develop a distributed diagnoser design algorithm that uses these models to design local event-based diagnosers based on global diagnosability analysis. Each local diagnoser generates globally correct diagnosis results locally, without a centralized coordinator, by communicating a minimal number of measurements among diagnosers. The proposed approach is applied to a multi-tank system, and results demonstrate a marked improvement in scalability compared to a centralized approach.
Reproducing the scaling laws for Slow and Fast ruptures
NASA Astrophysics Data System (ADS)
Romanet, Pierre; Bhat, Harsha; Madariaga, Raúl
2017-04-01
Modelling the long-term behaviour of large, geometrically complex natural fault systems is a challenging problem. This is why most research so far has concentrated on modelling the long-term response of a single planar fault. To overcome this limitation, we appeal to a novel algorithm called the Fast Multipole Method, which was developed in the context of modelling gravitational N-body problems. This method allows us to decrease the computational complexity of the calculation from O(N²) to O(N log N), N being the number of discretised elements on the fault. We then adapted this method to model the long-term quasi-dynamic response of two faults, with a step-over-like geometry, that are governed by rate-and-state friction laws. We assume the faults have spatially uniform rate-weakening friction. The results show that when the stress interaction between faults is accounted for, a complex spectrum of slip (including slow-slip events, dynamic ruptures, and partial ruptures) emerges naturally. The simulated slow-slip and dynamic events follow the scaling laws inferred by Ide et al. (2007), i.e., M ∝ T for slow-slip events and M ∝ T² (in 2D) for dynamic events.
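The quoted complexity reduction refers to the matrix-vector product of elastostatic interactions evaluated at every time step. As a point of reference, the direct evaluation looks like the sketch below, with a placeholder 1/r kernel standing in for the true elastostatic kernel; only the cost structure matters here. The Fast Multipole Method replaces the inner loop for well-separated elements with truncated multipole expansions of whole element groups, which is what brings the cost down to O(N log N).

```python
import numpy as np

def stress_direct(x, slip, G=30e9):
    """Direct pairwise interaction sum over N fault elements: O(N^2).
    The 1/r kernel is a placeholder, not the elastostatic Green's function."""
    n = len(x)
    tau = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                tau[i] += G * slip[j] / abs(x[i] - x[j])
    return tau

# Illustrative use: 200 elements along a 10 km fault, uniform 1 cm slip.
x = np.linspace(0.0, 10_000.0, 200)
tau = stress_direct(x, slip=np.full(200, 0.01))
print(tau[:3])
```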
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzooqi, Y A; Abou Elenean, K M; Megahed, A S
2008-02-29
On March 10 and September 13, 2007, two felt earthquakes with moment magnitudes 3.66 and 3.94 occurred in the eastern part of the United Arab Emirates (UAE). The two events were accompanied by a few smaller events. Being well recorded by the UAE and Oman digital broadband stations, they provide an excellent opportunity to study the tectonic process and the present-day stress field acting on this area. In this study, we determined the focal mechanisms of the two main shocks by two methods (P-wave polarities and regional waveform inversion). Our results indicate a normal faulting mechanism with a slight strike-slip component for the two studied events along a fault plane trending NNE-SSW, consistent with a suggested fault along the extension of the faults bounding the Bani Hamid area. The seismicity distribution between the two earthquake sequences reveals a noticeable gap that may be the site of a future event. The source parameters (seismic moment, moment magnitude, fault radius, stress drop, and displacement across the fault) were also estimated based on the far-field displacement spectra and interpreted in the context of the tectonic setting.
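The abstract does not reproduce its spectral formulas; for orientation, the standard Brune-model relations typically used to extract such source parameters from the low-frequency plateau Ω₀ and corner frequency f_c of the far-field displacement spectrum are:

```latex
M_0 = \frac{4\pi \rho\, c^3 R\, \Omega_0}{U_{\theta\varphi}},\qquad
r = \frac{2.34\,\beta}{2\pi f_c},\qquad
\Delta\sigma = \frac{7\,M_0}{16\,r^3},\qquad
M_W = \tfrac{2}{3}\log_{10} M_0 - 6.07 ,
```

with ρ the density, c the P- or S-wave speed, β the shear-wave speed, R the hypocentral distance, U_{θφ} the radiation-pattern coefficient, and M₀ in N·m. Whether the paper used exactly these conventions is not stated in the abstract.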
The influence of normal fault on initial state of stress in rock mass
NASA Astrophysics Data System (ADS)
Tajduś, Antoni; Cała, Marek; Tajduś, Krzysztof
2016-03-01
Determination of the original state of stress in a rock mass is a very difficult task in rock mechanics. Yet the original state of stress has a fundamental influence on the secondary state of stress that occurs in the vicinity of mining headings. This, in turn, causes a number of mining hazards, e.g., seismic events, rock bursts, gas and rock outbursts, and roof falls. From experience, it is known that the original state of stress depends strongly on tectonic disturbances, i.e., faults and folds. In the area of faults, a great number of seismic events occur, often of high energy. These seismic events are, in many cases, the cause of rock bursts and of damage to structures located inside the rock mass and on the ground surface. To estimate the influence of a fault on the disturbance of the original state of stress in a rock mass, numerical calculations were performed by means of the Finite Element Method. The calculations aimed to determine the influence of different factors on the state of stress in the vicinity of a normal fault, i.e., the fault inclination, the deformability of the rock mass, and the value of the friction coefficient on the fault contact. The critical value of the friction coefficient, above which mutual dislocation of the rock mass parts separated by the fault is impossible, was also determined. The obtained results enabled the formulation of a number of conclusions that are important in the context of seismic events and rock bursts in the area of faults.
Shelly, David R.; Taira, Taka’aki; Prejean, Stephanie; Hill, David P.; Dreger, Douglas S.
2015-01-01
Faulting and fluid transport in the subsurface are highly coupled processes, which may manifest seismically as earthquake swarms. A swarm in February 2014 beneath densely monitored Mammoth Mountain, California, provides an opportunity to witness these interactions in high resolution. Toward this goal, we employ massive waveform-correlation-based event detection and relative relocation, which quadruples the swarm catalog to more than 6000 earthquakes and produces high-precision locations even for very small events. The swarm's main seismic zone forms a distributed fracture mesh, with individual faults activated in short earthquake bursts. The largest event of the sequence, M 3.1, apparently acted as a fault valve and was followed by a distinct wave of earthquakes propagating ~1 km westward from the updip edge of rupture, 1–2 h later. Late in the swarm, multiple small, shallower subsidiary faults activated with pronounced hypocenter migration, suggesting that a broader fluid pressure pulse propagated through the subsurface.
NASA Astrophysics Data System (ADS)
Martinez-Garzon, Patricia; Kwiatek, Grzegorz; Bohnhoff, Marco; Dresen, Georg
2017-04-01
Improving estimates of the seismic hazard associated with reservoir stimulation requires an advanced understanding of the physical processes governing induced seismicity, which can be better achieved by carefully processing large datasets. To this end, we investigate source-type processes (shear/tensile/compaction) and rupture geometries with respect to the local stress field using seismicity from The Geysers (TG) and Salton Sea geothermal reservoirs, California. Analysis of 869 well-constrained full moment tensors (MW 0.8-3.5) at TG reveals significant non-double-couple (NDC) components (>25%) for 65% of the events and remarkable diversity in the faulting mechanisms. Volumetric deformation is clearly governed by injection rates, with larger NDC components observed near injection wells and during high-injection periods. The overall volumetric deformation from the moment tensors increases with time, possibly reflecting a reservoir pore pressure increase after several years of fluid injection with no significant production nearby. The obtained source mechanisms and fault orientations are magnitude-dependent and vary significantly between faulting regimes. Normal faulting events (MW < 2) reveal substantial NDC components indicating dilatancy, and they occur on varying fault orientations. In contrast, strike-slip events dominantly reveal a double-couple source and larger magnitudes (MW > 2), and mostly occur on faults optimally oriented with respect to the local stress field. NDC components indicating closure of cracks and pore spaces in the source region are found for reverse faulting events with MW > 2.5. Our findings from TG are generally consistent with preliminary source-type results from a reduced subset of well-recorded seismicity at the Salton Sea geothermal reservoir. Combined, the results imply that the source processes and magnitudes of geothermal-induced seismicity are strongly affected by, and systematically related to, the hydraulic operations and the local stress state.
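The NDC percentages quoted above follow from the standard decomposition of a full moment tensor into isotropic, double-couple, and compensated-linear-vector-dipole parts (the paper's exact convention is not stated in the abstract):

```latex
\mathbf{M} = \underbrace{\tfrac{1}{3}\,\mathrm{tr}(\mathbf{M})\,\mathbf{I}}_{\mathrm{ISO}}
+ \mathbf{M}_{\mathrm{DC}} + \mathbf{M}_{\mathrm{CLVD}},
\qquad \%\,\mathrm{NDC} = \%\,\mathrm{ISO} + \%\,\mathrm{CLVD} .
```

A pure shear dislocation has zero ISO and CLVD parts, so sizeable NDC percentages are the signature of the tensile opening, dilatancy, and crack-closure processes discussed above.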
Back analysis of fault-slip in burst prone environment
NASA Astrophysics Data System (ADS)
Sainoki, Atsushi; Mitri, Hani S.
2016-11-01
In deep underground mines, stress redistribution induced by mining activities can cause fault-slip. Seismic waves arising from fault-slip occasionally induce rock ejection when hitting the boundary of mine openings, and as a result severe damage can be inflicted. In general, it is difficult to estimate fault-slip-induced ground motion in the vicinity of mine openings because of the complexity of the dynamic response of faults and the presence of geological structures. In this paper, a case study is conducted for a Canadian underground mine, herein called "Mine-A", which is known for its seismic activity. Using a microseismic database collected from the mine, a back analysis of fault-slip is carried out with mine-wide 3-dimensional numerical modeling to estimate the physical and mechanical properties of the causative fracture or shear zones. One large seismic event identified as fault-slip related has been selected for the back analysis. In the back analysis, the shear zone properties are estimated with respect to the moment magnitude of the seismic event and the peak particle velocity (PPV) recorded by a strong ground motion sensor. The estimated properties are then validated through comparison with the peak ground acceleration recorded by accelerometers. Lastly, ground motion in active mining areas is estimated by conducting dynamic analysis with the estimated values. The present study implies that it would be possible to estimate the magnitude of seismic events that might occur in the near future by applying the estimated properties to the numerical model. Although the case study is conducted for a specific mine, the developed methodology can equally be applied to other mines suffering from fault-slip-related seismic events.
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory, a reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard analysis, worst-case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is presented. This approach is defined and detailed using the same case study as the REBST example. In the end, it is concluded that an approach combining the two theories works best to reduce safety risk.
Improving online risk assessment with equipment prognostics and health monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coble, Jamie B.; Liu, Xiaotong; Briere, Chris
The current approach to evaluating the risk of nuclear power plant (NPP) operation relies on static probabilities of component failure, which are based on industry experience with the existing fleet of nominally similar light water reactors (LWRs). As the nuclear industry looks to advanced reactor designs that feature non-light-water coolants (e.g., liquid metal, high temperature gas, molten salt), this operating history is not available. Many advanced reactor designs use advanced components, such as electromagnetic pumps, that have not been used in the US commercial nuclear fleet. Given the lack of rich operating experience, we cannot accurately estimate the evolving probability of failure for basic components to populate the fault trees and event trees that typically comprise probabilistic risk assessment (PRA) models. Online equipment prognostics and health management (PHM) technologies can bridge this gap to estimate the failure probabilities for components under operation. The enhanced risk monitor (ERM) incorporates equipment condition assessment into the existing PRA and risk monitor framework to provide accurate and timely estimates of operational risk.
Takahashi, H.; Hirata, K.
2003-01-01
The 2000 Nemuro-Hanto-Oki earthquake (Mw 6.8) occurred in the southwestern part of the Kuril Trench. The hypocenter was located close to the aftershock region of the great 1994 Kuril earthquake (Mw 8.3), named "the 1994 Hokkaido-Toho-Oki earthquake" by the Japan Meteorological Agency, for which the fault plane is still in debate. Analysis of the 2000 event provides a clue to resolving the fault plane issue for the 1994 event. The hypocenters of the 2000 main shock and aftershocks were determined using arrival times from a combination of nearby inland and submarine seismic networks with improved azimuthal coverage. They clearly show that the 2000 event was an intraslab event occurring on a shallow-dipping fault plane between 55 and 65 km in depth. The well-focused aftershock distribution of the 2000 event, the relative location of the 1994 event with respect to the 2000 event, and the similarity between their focal mechanisms strongly suggest that the faulting of the great 1994 earthquake also occurred on a shallow-dipping fault plane in the subducting slab. The recent hypocenter distribution around the 1994 aftershock region also supports this result. Large intraslab earthquakes occurring to the southeast of Hokkaido may result from strong coupling on the plate boundary, which generates a relatively large stress field within the subducting Pacific plate.
Salisbury, J.B.; Rockwell, T.K.; Middleton, T.J.; Hudnut, Kenneth W.
2012-01-01
We measured offsets on tectonically displaced geomorphic features along 80 km of the Clark strand of the San Jacinto fault (SJF) to estimate slip per event for the past several surface ruptures. We identify 168 offset features from which we make over 490 measurements using B4 light detection and ranging (LiDAR) imagery and field observations. Our results suggest that LiDAR technology is an exemplary supplement to traditional field methods in slip-per-event studies. Displacement estimates indicate that the most recent surface-rupturing event (MRE) produced an average of 2.5–2.9 m of right-lateral slip, with maximum slip of nearly 4 m at Anza, corresponding to a Mw 7.2–7.5 earthquake. Average multiple-event offsets for the same 80 km are ∼5.5 m, with maximum values of 3 m at Anza for the penultimate event. Cumulative displacements of 9–10 m through Anza suggest the third event was also similar in size. Paleoseismic work at Hog Lake dates the most recent surface rupture event at ca. 1790. A poorly located, large earthquake occurred in southern California on 22 November 1800; we relocate this event to the Clark fault based on the MRE at Hog Lake. We also recognize the occurrence of a younger rupture along ∼15–20 km of the fault in Blackburn Canyon with ∼1.25 m of average displacement. We attribute these offsets to the 21 April 1918 Mw 6.9 event. These data argue that much or all of the Clark fault, and possibly also the Casa Loma fault, fail together in large earthquakes, but that shorter sections may fail in smaller events.
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost-effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before finding the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system, which performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base. It then ranks the hypothesized failures based on how likely they are and on how long it would take, or how much it would cost, to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to support their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to be used for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
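The core ordering idea is classical: for independent failure hypotheses, checking them in decreasing order of probability-to-cost ratio minimizes the expected cost of reaching the true failure. A minimal sketch of that ranking (not the FTDOTS implementation; names and numbers are invented):

```python
# Rank failure hypotheses that explain the symptoms by p/c ratio: the classic
# optimal single-fault inspection order under independence assumptions.
def optimal_test_sequence(hypotheses):
    """hypotheses: list of (name, probability, cost) tuples."""
    return sorted(hypotheses, key=lambda h: h[1] / h[2], reverse=True)

seq = optimal_test_sequence([
    ("power supply", 0.40, 15.0),    # frequent and cheap to check -> first
    ("sensor board", 0.35, 60.0),
    ("wiring harness", 0.25, 10.0),
])
for name, p, c in seq:
    print(f"check {name}: p={p:.2f}, cost={c}")
```

In the real system the probabilities come from the fault tree model and the costs from recorded isolation and repair times, as the abstract describes.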
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
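To give a flavor of the information-theoretic ingredient: at each step, a greedy heuristic from this family picks the test whose pass/fail outcome is most informative about the surviving fault candidates. The sketch below is an illustration of that idea, not the paper's algorithm; all names and structures are invented:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of an outcome distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def best_test(candidates, tests):
    """candidates: {fault: prob}; tests: {test: set of faults it implicates}.
    Pick the test whose pass/fail split over the candidates is most even."""
    def info(test):
        p_pos = sum(p for f, p in candidates.items() if f in tests[test])
        return entropy([p_pos, 1.0 - p_pos])
    return max(tests, key=info)

faults = {"F1": 0.5, "F2": 0.3, "F3": 0.2}
tests = {"T1": {"F1"}, "T2": {"F1", "F2"}}
print(best_test(faults, tests))  # -> 'T1' (splits the mass 0.5 / 0.5)
```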
MacDonald III, Angus W; Zick, Jennifer L; Chafee, Matthew V; Netoff, Theoden I
2015-01-01
The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time, the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect of psychiatry's syndromal structure. We suggest adopting an approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine, from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between-diagnosis comorbidity.
NASA Astrophysics Data System (ADS)
Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.
2017-12-01
We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN; Luo et al., 2016) is used to nucleate events, and the fully dynamic solver (SPECFEM3D; Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. Fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults, and lower variations of Dc represent mature faults. Moreover, we impose a taper of (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation on the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip, and combined area of asperities versus moment magnitude. Finally, the simulated ground motions will be validated by comparing simulated response spectra with recorded response spectra and with response spectra from ground motion prediction models. This research is sponsored by the Japan Nuclear Regulation Authority.
NASA Astrophysics Data System (ADS)
Madden, E. H.; Pollard, D. D.
2009-12-01
Multi-fault, strike-slip earthquakes have proved difficult to incorporate into seismic hazard analyses because of the difficulty of determining the probability of such ruptures, despite the extensive data collected on these events. Modeling the mechanical behavior of these complex ruptures contributes to a better understanding of their occurrence by elucidating the relationship between surface and subsurface earthquake activity along transform faults. This insight is especially important for hazard mitigation, as multi-fault systems can produce earthquakes larger than those associated with any one fault involved. We present a linear elastic, quasi-static model of the southern portion of the 28 June 1992 Landers earthquake built in the boundary element software program Poly3D. This event did not rupture the full extent of any one previously mapped fault, but trended 80 km N and NW across segments of five sub-parallel, N-S and NW-SE striking faults. At M 7.3, the earthquake was larger than the potential earthquakes associated with the individual faults that ruptured. The model extends from the Johnson Valley Fault, across the Landers-Kickapoo Fault, to the Homestead Valley Fault, using data associated with a six-week period following the mainshock. It honors the complex surface deformation associated with this earthquake, which was well exposed in the desert environment and mapped extensively in the field and from aerial photos in the days immediately following the earthquake. Thus, the model incorporates the non-linearity and segmentation of the main rupture traces, the irregularity of fault slip distributions, and the associated secondary structures such as strike-slip splays and thrust faults. Interferometric Synthetic Aperture Radar (InSAR) images of the Landers event provided the first satellite images of ground deformation caused by a single seismic event and constrain off-fault surface displacement in this six-week period. Insight is gained by comparing the density, magnitudes, and focal plane orientations of relocated aftershocks for this time frame with the magnitude and orientation of planes of maximum Coulomb shear stress around the fault planes at depth.
Waldhauser, F.; Ellsworth, W.L.
2002-01-01
The relationship between small-magnitude seismicity and large-scale crustal faulting along the Hayward Fault, California, is investigated using a double-difference (DD) earthquake location algorithm. We used the DD method to determine high-resolution hypocenter locations of the seismicity that occurred between 1967 and 1998. The DD technique incorporates catalog travel time data and relative P and S wave arrival time measurements from waveform cross correlation to solve for the hypocentral separation between events. The relocated seismicity reveals a narrow, near-vertical fault zone at most locations. This zone follows the Hayward Fault along its northern half and then diverges from it to the east near San Leandro, forming the Mission trend. The relocated seismicity is consistent with the idea that slip from the Calaveras Fault is transferred over the Mission trend onto the northern Hayward Fault. The Mission trend is not clearly associated with any mapped active fault as it continues to the south and joins the Calaveras Fault at Calaveras Reservoir. In some locations, discrete structures adjacent to the main trace are seen, features that were previously hidden in the uncertainty of the network locations. The fine structure of the seismicity suggests that the fault surface on the northern Hayward Fault is curved or that the events occur on several substructures. Near San Leandro, where the more westerly striking trend of the Mission seismicity intersects the surface trace of the (aseismic) southern Hayward Fault, the seismicity remains diffuse after relocation, with strong variation in focal mechanisms between adjacent events indicating a highly fractured zone of deformation. The seismicity is highly organized in space, especially on the northern Hayward Fault, where it forms horizontal, slip-parallel streaks of hypocenters only a few tens of meters wide, bounded by areas almost devoid of seismic activity. During the interval from 1984 to 1998, when digital waveforms are available, we find that fewer than 6.5% of the earthquakes can be classified as repeating earthquakes, events that rupture the same fault patch more than once. These are most commonly located in the shallow creeping part of the fault, or within the streaks at greater depth. The slow repeat rate of 2-3 times within the 15-year observation period, for events with magnitudes around M = 1.5, is indicative of a low slip rate or a high stress drop. The absence of microearthquakes over large, contiguous areas of the northern Hayward Fault plane in the depth interval from ~5 to 10 km, and the concentrations of seismicity at these depths, suggest that the aseismic regions are either locked or retarded and are storing strain energy for release in future large-magnitude earthquakes.
Optical fiber-fault surveillance for passive optical networks in S-band operation window
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Chi, Sien
2005-07-01
An S-band (1470 to 1520 nm) fiber laser scheme, which uses multiple fiber Bragg grating (FBG) elements as feedback elements on each passive branch, is proposed and described for in-service fault identification in passive optical networks (PONs). By tuning a wavelength selective filter located within the laser cavity over a gain bandwidth, the fiber-fault of each branch can be monitored without affecting the in-service channels. In our experiment, an S-band four-branch monitoring tree-structured PON system is demonstrated and investigated experimentally.
Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong
2016-01-01
Focusing on the diversity, complexity, and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated using the theory of the analytic hierarchy process and fuzzy mathematics. A fault tree of third-party damage containing 56 basic events was built through hazard identification of third-party damage. Fuzzy evaluation of the basic event probabilities was conducted using the expert judgment method and fuzzy-set membership functions. The weight of each expert was determined and the evaluation opinions were modified using the improved analytic hierarchy process, and the failure probability of third-party damage to the urban gas pipeline was calculated. Taking the gas pipelines of a large provincial capital city as an example, the risk assessment structure of the method was shown to conform to the actual situation, which provides a basis for safety risk prevention.
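Underneath the fuzzy machinery sits ordinary fault-tree arithmetic. The sketch below shows the crisp version for independent basic events (the paper's fuzzy membership functions and AHP expert weighting are omitted); the three-event tree, its gate structure, and the probabilities are invented for illustration:

```python
from functools import reduce

def gate_and(ps):
    """AND gate: all basic events must coexist -> product of probabilities."""
    return reduce(lambda acc, p: acc * p, ps, 1.0)

def gate_or(ps):
    """OR gate: any event suffices -> complement of the product of complements."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), ps, 1.0)

# Hypothetical tree: top = OR(excavation_strike, AND(corrosion, patrol_missed))
p_top = gate_or([0.002, gate_and([0.01, 0.3])])
print(f"P(third-party damage) = {p_top:.4f}")
```

In the paper, each basic-event probability would instead be a fuzzy number aggregated from expert judgments, with the same gate logic propagated to the top event.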
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinnikov, B.; NRC Kurchatov Inst.
Under the Scientific and Technical Cooperation agreement between the USA and Russia in the field of nuclear engineering, the Idaho National Laboratory transferred the SAPHIRE software to the National Research Center 'Kurchatov Institute' without any fee. With the help of this software, the Kurchatov Institute developed a Pilot Living PSA model of Leningrad NPP Unit 1. Computations of core damage frequencies were carried out for additional Initiating Events. In the submitted paper, these additional Initiating Events are fires in various compartments of the NPP. During the computations of each fire, the structure of the PSA model was not changed, but the Fault Trees for the appropriate systems, which are removed from service during the fire, were changed. It follows from the computations that for ten fires the Core Damage Frequencies (CDF) are unchanged. The other six fires would cause additional core damage. On the basis of the calculated results it is possible to determine the degree of importance of these fires and to establish the sequence of fire-prevention measures in various places of the NPP. (authors)
Deep Borehole Emplacement Mode Hazard Analysis Revision 0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sevougian, S. David
This letter report outlines a methodology and provides resource information for the Deep Borehole Emplacement Mode Hazard Analysis (DBEMHA). The main purpose is to identify the accident hazards and accident event sequences associated with the two emplacement mode options (wireline or drillstring), to outline a methodology for computing accident probabilities and frequencies, and to point to available databases on the nature and frequency of accidents typically associated with standard borehole drilling and nuclear handling operations. Risk mitigation and prevention measures, which have been incorporated into the two emplacement designs (see Cochran and Hardin 2015), are also discussed. A key intent of this report is to provide background information to brief subject matter experts involved in the Emplacement Mode Design Study. [Note: Revision 0 of this report concentrates on the wireline emplacement mode. It is expected that Revision 1 will contain further development of the preliminary fault and event trees for the drill string emplacement mode.]
Sun, Weifang; Yao, Bin; Zeng, Nianyin; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-01-01
As a typical example of large and complex mechanical systems, rotating machinery is prone to diverse mechanical faults. Among these, one of the prominent sources of malfunction is the gear transmission chain. Although gear faults can be detected via vibration signals, the fault signatures are always submerged in overwhelming interfering content, so identifying the critical fault characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire multiscale signal features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise fault features from the multiscale signal features. The experimental results of gear fault recognition show the feasibility and effectiveness of the proposed method, especially for weak gear fault features.
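A rough sketch of the feature-extraction stage only, assuming the open-source `dtcwt` Python package as a stand-in for the paper's DTCWT step: a (here synthetic) vibration signal is decomposed into complex subbands whose magnitudes would then be stacked as input features for the CNN classifier, which is omitted:

```python
import numpy as np
import dtcwt  # pip install dtcwt -- assumed stand-in for the paper's DTCWT step

fs = 12_000                              # illustrative sampling rate, Hz
t = np.arange(8192) / fs
signal = np.sin(2 * np.pi * 30 * t) + 0.3 * np.random.randn(t.size)  # toy gear signal

# Six-level dual-tree complex wavelet decomposition of the 1-D signal.
pyramid = dtcwt.Transform1d().forward(signal, nlevels=6)
for level, hp in enumerate(pyramid.highpasses, start=1):
    mags = np.abs(hp).ravel()            # complex coefficients -> magnitude features
    print(f"level {level}: {mags.size} coefficients, mean |c| = {mags.mean():.3f}")
```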
Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme
Li, Shanbin; Sauter, Dominique; Xu, Bugong
2011-01-01
In this paper, sensor data is transmitted only when the absolute value of the difference between the current sensor value and the previously transmitted one is greater than a given threshold. Based on this send-on-delta scheme, which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is implemented as a particular form of the Kalman filter. The proposed fault isolation filter improves resource utilization with graceful degradation of fault estimation performance. An illustrative example shows the efficiency of the proposed method.
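The send-on-delta rule itself fits in a few lines; this sketch (with invented readings) shows which samples would actually reach the fault isolation filter:

```python
def send_on_delta(samples, delta):
    """Yield (index, value) pairs that would actually be transmitted."""
    last = None
    for i, y in enumerate(samples):
        # Transmit on the first sample, then only on threshold crossings.
        if last is None or abs(y - last) > delta:
            last = y
            yield i, y

readings = [20.0, 20.1, 20.4, 21.2, 21.3, 19.6, 19.5]
print(list(send_on_delta(readings, delta=0.5)))
# -> [(0, 20.0), (3, 21.2), (5, 19.6)]: only three of seven samples are sent.
```

Between transmissions the filter knows the reading stayed within the delta band of the last received value, which is the information the modified Kalman filter exploits.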
Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil
2010-01-01
We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values; the sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on the provided example scenarios, and discuss the issues faced and lessons learned from implementing the approach.
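To make the isolation step concrete, the sketch below matches an observed sequence of qualitative measurement deviations against per-fault signatures. The fault names and signatures are invented; the real diagnoser works on signatures derived from a system model:

```python
# Each fault predicts an ordered sequence of deviations, e.g. ('pressure', '-')
# meaning "pressure deviates below its model-predicted value".
SIGNATURES = {
    "valve_stuck": [("flow", "-"), ("pressure", "+")],
    "pipe_leak":   [("pressure", "-"), ("flow", "-")],
    "sensor_bias": [("pressure", "+")],
}

def isolate(observed, signatures=SIGNATURES):
    """Keep faults whose predicted signature starts with the observed sequence."""
    n = len(observed)
    return [f for f, sig in signatures.items() if sig[:n] == observed]

print(isolate([("pressure", "-")]))   # -> ['pipe_leak']
print(isolate([("flow", "-")]))       # -> ['valve_stuck']
```

Each new deviation event prunes the candidate set further, which is what makes the approach event-based rather than sample-based.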
Rymer, M.J.; Seitz, G.G.; Weaver, K.D.; Orgil, A.; Faneros, G.; Hamilton, J.C.; Goetz, C.
2002-01-01
Paleoseismic investigations of the Lavic Lake fault at Lavic Lake playa place constraints on the timing of a possible earlier earthquake along the 1999 Hector Mine rupture trace and reveal evidence of the timing of the penultimate earthquake on a strand of the Lavic Lake fault that did not rupture in 1999. Three of our four trenches, trenches A, B, and C, were excavated across the 1999 Hector Mine rupture; a fourth trench, D, was excavated across a vegetation lineament that had only minor slip at its southern end in 1999. Trenches A-C exposed strata that are broken only by the 1999 rupture; trench D exposed horizontal bedding that is locally warped and offset by faults. Stratigraphic evidence for the timing of an earlier earthquake along the 1999 rupture across Lavic Lake playa was not exposed. Thus an earlier event, if there was one along that rupture trace, predates the lowest stratigraphic level exposed in our trenches. Radiocarbon dating of strata near the bottom of the trenches constrains a possible earlier event to some time earlier than about 4950 B.C. Buried faults revealed in trench D lie below a vegetation lineament at the ground surface. A depositional contact about 80 cm below the ground surface acts as the upward termination of fault breaks in trench D. Thus, this contact may be the event horizon for a surface-rupturing earthquake prior to 1999, the penultimate earthquake on the Lavic Lake fault. Radiocarbon ages of detrital charcoal samples from immediately below the event horizon indicate that the earthquake associated with the faulting occurred later than A.D. 260. An approximately 1300-year age difference between two samples at about the same stratigraphic level below the event horizon suggests the potential for a long residence time of detrital charcoal in the area. Coupled with a lack of bioturbation that could introduce young organic material into the stratigraphic section, the charcoal ages provide only a maximum bounding age; thus, the recognized event may be younger. There is abundant, subtle evidence for pre-1999 activity of the Lavic Lake fault in the playa area, even though the fault was not mapped near the playa prior to the Hector Mine earthquake. The most notable indicators of the long-term presence of the fault are pronounced, persistent vegetation lineaments and uplifted basalt exposures. Primary and secondary slip occurred in 1999 on two southern vegetation lineaments, and minor slip locally formed on a northern lineament; trench exposures across the northern vegetation lineament revealed the post-A.D. 260 earthquake, and a geomorphic trough extends northward into alluvial fan deposits in line with this lineament. The presence of two basalt exposures in Lavic Lake playa indicates persistent compressional steps and uplift along the fault. Fault-line scarps are additional geomorphic markers of repeated slip events in the basalt exposures.
Recurrence Interval and Event Age Data for Type A Faults
Dawson, Timothy E.; Weldon, Ray J.; Biasi, Glenn P.
2008-01-01
This appendix summarizes available recurrence interval, event age, and timing-of-most-recent-event data for the Type A faults considered in the Earthquake Rate Model 2 (ERM 2) and used in the ERM 2 Appendix C analysis as well as Appendix N (time-dependent probabilities). These data have been compiled into an Excel workbook named Appendix B A-fault event ages_recurrence_V5.0 (herein referred to as the Appendix B workbook). For convenience, the Appendix B workbook is attached to the end of this document as a series of tables. The tables within the Appendix B workbook include site locations, event ages, and recurrence data, and in some cases the interval of time between earthquakes is also reported. The Appendix B workbook is organized as individual worksheets, with each worksheet named by fault and paleoseismic site. Each worksheet contains the site location in latitude and longitude, as well as information on event ages and a summary of recurrence data. Because the data have been compiled from different sources with different presentation styles, descriptions of the contents of each worksheet within the Appendix B workbook are provided.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of fault data. This is especially the case in the low to moderate seismicity regions of Europe, where slow-slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computation and their impact on the results. This will allow identifying the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and will also provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool will be illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. The nodes of the logic tree represent the different options that can be considered at each step of the fault-related seismic hazard computation. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry/slip rate of the faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e., minimum distance between faults) and a second that relies on physically based simulations. The following nodes represent, for each rupture scenario, different rupture forecast models (i.e., characteristic or Gutenberg-Richter) and, for a given rupture forecast, two probability models commonly used in seismic hazard assessment: Poissonian or time-dependent. The final node represents an exhaustive set of ground motion prediction equations chosen to be compatible with the region. Finally, the expected probability of exceeding a given ground motion level is computed at each site. Results will be discussed for a few specific localities of the West Corinth Gulf.
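The final step of such a computation can be pictured as follows: each complete logic-tree branch yields an annual rate of exceeding the target ground-motion level at a site, and the branch-weighted Poissonian probability over an exposure time T is reported. A minimal sketch with invented rates and weights:

```python
import math

# Each tuple: (branch weight, annual exceedance rate produced by that chain of
# geometry / slip-rate / rupture-scenario / forecast / GMPE choices).
branches = [
    (0.5, 1.2e-3),
    (0.3, 8.0e-4),
    (0.2, 2.5e-3),
]
T = 50.0  # exposure time, years

# Poisson probability of at least one exceedance per branch, weighted over branches.
p = sum(w * (1.0 - math.exp(-rate * T)) for w, rate in branches)
print(f"P(exceedance in {T:.0f} yr) = {p:.3f}")
```

The time-dependent alternative mentioned in the abstract would replace the Poisson term with a renewal-model probability conditioned on the time since the last event.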
Leveraging the BPEL Event Model to Support QoS-aware Process Execution
NASA Astrophysics Data System (ADS)
Zaid, Farid; Berbner, Rainer; Steinmetz, Ralf
Business processes executed using compositions of distributed Web Services are susceptible to different fault types. The Web Services Business Process Execution Language (BPEL) is widely used to execute such processes. While BPEL provides fault handling mechanisms for functional faults such as invalid message types, it still lacks a flexible native mechanism to handle non-functional exceptions associated with violations of the QoS levels that are typically specified in a governing Service Level Agreement (SLA). In this paper, we present an approach to complement BPEL's fault handling, where expected QoS levels and the necessary recovery actions are specified declaratively in the form of Event-Condition-Action (ECA) rules. Our main contribution is leveraging BPEL's standard event model, which we use as the event space for the created ECA rules. We validate our approach with an extension to an open source BPEL engine.
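A minimal sketch of the Event-Condition-Action pattern layered on process events, with all names invented (in the paper the rules are attached to BPEL's standard event model rather than to plain dictionaries):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    event_type: str                     # e.g. a BPEL activity-completed event
    condition: Callable[[dict], bool]   # predicate over event attributes (QoS check)
    action: Callable[[dict], None]      # recovery action (retry, substitute service)

rules = [
    EcaRule("invoke_completed",
            lambda e: e["response_ms"] > 2000,   # hypothetical SLA: 2 s response time
            lambda e: print(f"re-bind service {e['partner']} (QoS violation)")),
]

def on_event(event):
    """Dispatch a process event against all registered ECA rules."""
    for rule in rules:
        if rule.event_type == event["type"] and rule.condition(event):
            rule.action(event)

on_event({"type": "invoke_completed", "partner": "QuoteService", "response_ms": 3400})
```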
Stress/strain changes and triggered seismicity following the MW7.3 Landers, California, earthquake
Gomberg, J.
1996-01-01
Calculations of dynamic stresses and strains, constrained by broadband seismograms, are used to investigate their role in generating the remotely triggered seismicity that followed the June 28, 1992, MW 7.3 Landers, California, earthquake. I compare straingrams and dynamic Coulomb failure functions calculated for the Landers earthquake at sites that did experience triggered seismicity with those at sites that did not. Bounds on triggering thresholds are obtained from analysis of dynamic strain spectra calculated for the Landers and MW 6.1 Joshua Tree, California, earthquakes at various sites, combined with results of static strain investigations by others. I interpret three principal results of this study together with those of a companion study by Gomberg and Davis [this issue]. First, the dynamic elastic stress changes themselves cannot explain the spatial distribution of triggered seismicity, particularly the lack of triggered activity along the San Andreas fault system. In addition to the requirement to exceed a Coulomb failure stress level, this result implies the need to invoke and satisfy the requirements of an appropriate slip instability theory. Second, the results of this study are consistent with the existence of frequency- or rate-dependent stress/strain triggering thresholds, inferred from the companion study and interpreted in terms of earthquake initiation involving a competition of processes, one promoting failure and the other inhibiting it. Such competition is also part of the relevant instability theories. Third, the triggering threshold must vary from site to site, suggesting that the potential for triggering strongly depends on site characteristics and response. The lack of triggering along the San Andreas fault system may be correlated with the advanced maturity of its fault gouge zone: the strains from the Landers earthquake were either insufficient to exceed its larger critical slip distance or some other critical failure parameter, or the faults failed stably as aseismic creep events. Variations in the triggering threshold at sites of triggered seismicity may be attributed to variations in gouge zone development and properties. Finally, these interpretations provide ready explanations for the time delays between the Landers earthquake and the triggered events.
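The dynamic Coulomb failure function referred to here is conventionally written as the time-dependent generalization of the static Coulomb stress change (tension-positive normal stress, with μ′ an effective friction coefficient absorbing pore-pressure effects); the paper's exact convention is not given in the abstract:

```latex
\Delta \mathrm{CFF}(t) \;=\; \Delta\tau(t) \;+\; \mu'\,\Delta\sigma_n(t) ,
```

where Δτ is the shear stress change resolved in the slip direction and Δσₙ the normal stress change on the receiver fault. Failure is promoted whenever ΔCFF exceeds the (possibly rate- or frequency-dependent) triggering threshold discussed above.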
Surface fault slip associated with the 2004 Parkfield, California, earthquake
Rymer, M.J.; Tinsley, J. C.; Treiman, J.A.; Arrowsmith, J.R.; Ciahan, K.B.; Rosinski, A.M.; Bryant, W.A.; Snyder, H.A.; Fuis, G.S.; Toke, N.A.; Bawden, G.W.
2006-01-01
Surface fracturing occurred along the San Andreas fault, the subparallel Southwest Fracture Zone, and six secondary faults in association with the 28 September 2004 (M 6.0) Parkfield earthquake. Fractures formed discontinuous breaks along a 32-km-long stretch of the San Andreas fault. Sense of slip was right lateral; only locally was there a minor (1-11 mm) vertical component of slip. Right-lateral slip in the first few weeks after the event, early in its afterslip period, ranged from 1 to 44 mm. Our observations in the weeks following the earthquake indicated that the highest slip values are in the Middle Mountain area, northwest of the mainshock epicenter (creepmeter measurements indicate a similar distribution of slip). Surface slip along the San Andreas fault developed soon after the mainshock; field checks in the area near Parkfield and about 5 km to the southeast indicated that surface slip developed more than 1 hr but generally less than 1 day after the event. Slip along the Southwest Fracture Zone developed coseismically and extended about 8 km. Sense of slip was right lateral; locally there was a minor to moderate (1-29 mm) vertical component of slip. Right-lateral slip ranged from 1 to 41 mm. Surface slip along secondary faults was right lateral; the right-lateral component of slip ranged from 3 to 5 mm. Surface slip in the 1966 and 2004 events occurred along both the San Andreas fault and the Southwest Fracture Zone. In 1966 the length of ground breakage along the San Andreas fault extended 5 km longer than that mapped in 2004. In contrast, the length of ground breakage along the Southwest Fracture Zone was the same in both events, yet the surface fractures were more continuous in 2004. Surface slip on secondary faults in 2004 indicated previously unmapped structural connections between the San Andreas fault and the Southwest Fracture Zone, further revealing aspects of the structural setting and fault interactions in the Parkfield area.
Aseismic Slip Events along the Southern San Andreas Fault System Captured by Radar Interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vincent, P
2001-10-01
Aseismic slip is observed along several faults in the Salton Sea and southernmost Landers rupture zone regions using interferometric synthetic aperture radar (InSAR) data spanning different time periods between 1992 and 1997. In the southernmost Landers rupture zone, projecting south from the Pinto Mountain Fault, sharp discontinuities in the interferometric phase are observed along the sub-parallel Burnt Mountain and Eureka Peak Faults beginning three months after the Landers earthquake, and are interpreted as post-Landers after-slip. Abrupt phase offsets are also seen along the two southernmost contiguous 11 km Durmid Hill and North Shore segments of the San Andreas Fault, with an abrupt termination of slip near the northern end of the North Shore segment. A sharp phase offset is seen across 20 km of the 30-km-long Superstition Hills Fault before phase decorrelation in the Imperial Valley along the southern 10 km of the fault prevents coherent imaging by InSAR. A phase offset is also seen along a 5 km central segment of the Coyote Creek Fault, which forms a wedge with an adjoining northeast-southwest trending conjugate fault. A time series of deformation interferograms suggests that most of the slip observed on the southern San Andreas and Superstition Hills Faults occurred between 1993 and 1995, and none of it occurred between 1992 and 1993. These slip events, especially the Burnt Mountain and Eureka Peak events, are inferred to be related to stress redistribution from the June 1992 Mw = 7.3 Landers earthquake. Best-fit elastic models of the San Andreas and Superstition Hills slip events suggest source mechanisms with seismic moments over three orders of magnitude larger than the maximum possible summation of seismic moments from all seismicity along each fault segment during the entire 4.8-year time interval spanned by the InSAR data. Aseismic moment releases of this magnitude (equivalent to Mw = 5.3 and 5.6 events on the Superstition Hills and San Andreas Faults, respectively) are hitherto unknown and have not previously been captured by any geodetic technique.
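The quoted magnitude equivalences follow from the standard geodetic moment, M₀ = GAD, and the Hanks-Kanamori moment magnitude. A quick scale check with illustrative fault dimensions (not values taken from the paper):

```python
import math

def moment(G, area, slip):
    """Geodetic seismic moment M0 = G * A * D, in N*m."""
    return G * area * slip

def mw(m0):
    """Hanks-Kanamori moment magnitude, M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative values only: a 20 km x 10 km patch slipping 4 cm aseismically.
m0 = moment(G=30e9, area=20e3 * 10e3, slip=0.04)
print(f"M0 = {m0:.2e} N*m -> Mw = {mw(m0):.1f}")
# -> Mw ~ 5.5, of the same order as the Mw 5.3-5.6 equivalences quoted above.
```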
NASA Astrophysics Data System (ADS)
Ewiak, O.; Victor, P.; Ziegenhagen, T.; Oncken, O.
2012-04-01
The Chilean convergent plate boundary is one of the most tectonically active regions on Earth and is prone to large megathrust earthquakes, e.g., the 2010 Mw 8.8 Maule earthquake, which ruptured a mature seismic gap in south-central Chile. In northern Chile, historical data suggest the existence of a seismic gap between Arica and Mejillones Peninsula (MP), which has not ruptured since 1877. Further south, the 1995 Mw 8.0 Antofagasta earthquake ruptured the subduction interface between MP and Taltal. In this study we investigate the deformation at four active upper plate faults (dip-slip and strike-slip) located above the coupling zone of the subduction interface. The target faults (Mejillones Fault - MF, Salar del Carmen Fault - SCF, Cerro Fortuna Fault - CFF, Chomache Fault - CF) are situated in forearc segments that are in different stages of the megathrust seismic cycle. The main questions of this study are how strain is accumulated in the overriding plate, how the target faults respond to the megathrust seismic cycle, and what mechanisms and processes are involved. The hyper-arid conditions of the Atacama Desert and its extremely low erosion rates enable us to investigate geomorphic markers, e.g., fault scarps and knickpoints, which record upper crustal deformation and fault activity up to about ten thousand years into the past. Fault scarp data were acquired with differential GPS by measuring high-resolution topographic profiles perpendicular to the fault scarps and along incised gullies. The topographic data show clear variations between the target faults, which possibly result from their position within the forearc. The surveyed faults, e.g., the SCF, exhibit clear along-strike variations in the morphology of surface ruptures attributed to seismic events and can be subdivided into individual segments. The data allow us to distinguish single, composite, and multiple fault scarps and thus to detect differences in fault growth initiated either by seismic rupture or by fault creep. Additional information on the number of seismic events responsible for the cumulative displacement can be derived from the mapping of knickpoints. By reconstructing the stress field responsible for the formation of identified seismic surface ruptures, we can determine the stress conditions for failure of upper crustal faults. Comparing these paleo-stress conditions with the recent forearc stresses (interseismic / coseismic), we can derive information about a possible activation of upper crustal faults during the megathrust seismic cycle. In addition to the morphotectonic surveys, we explore the recent deformation of the target faults by analyzing time series of displacements recorded with micron precision by an array of creepmeters at the target faults over more than three years. The total displacement is composed of steady-state creep, creep events, and sudden displacement events (SDEs) related to seismic rupture. SDEs account for >50 % (SCF) to 90 % (CFF) of the cumulative displacement. This result reflects well the field observation that a considerable amount of the total displacement has accumulated during multiple seismic events.
A regional 17-18 Ma thermal event in southwestern Arizona
NASA Technical Reports Server (NTRS)
Brooks, W. E.
1985-01-01
A regional thermal event in southwestern Arizona 17 to 18 Ma ago is suggested by discordances between fission track (FT) and K-Ar dates in Tertiary volcanic and sedimentary rocks, by the abundance of primary hydrothermal orthoclase in quenched volcanic rocks, and by the concentration of Mn, Ba, Cu, Ag, and Au deposits near detachment faults. A high conodont alteration index (CAI) of 3 to 7 is found in Paleozoic rocks of southwestern Arizona. The high CAI may have been caused by this mid-Tertiary thermal event. Resetting of temperature-sensitive FT dates to 17 to 18 Ma with respect to K-Ar dates of 24 and 20 Ma has occurred in upper plate volcanic rocks at the Harcuvar and Picacho Peak detachments. Discordances between FT and K-Ar dates are most pronounced at detachment faults. However, on a regional scale, FT dates from volcanic and sedimentary rocks approach the 17 to 18 Ma event even in areas away from known detachment faults. Effects of detachment faulting on the K-Ar system suggest that dates of correlative rocks will become younger as the detachment fault is approached.
NASA Astrophysics Data System (ADS)
Ostapchuk, Alexey; Saltykov, Nikolay
2017-04-01
Excessive tectonic stresses accumulated in zones of rock discontinuity are released during slip along preexisting faults. The spectrum of slip modes includes not only creep and regular earthquakes but also transitional regimes - slow-slip events, low-frequency and very low-frequency earthquakes. However, there is still no agreement in the geophysics community on whether such fast and slow events share a common nature [Peng, Gomberg, 2010] or represent different physical phenomena [Ide et al., 2007]. Models of the nucleation and evolution of fault slip events can be refined through laboratory experiments that investigate the shear deformation of a gouge-filled fault. In this work we studied the deformation of an experimental fault in slider frictional experiments, with the aim of developing a unified law of fault evolution and revealing the parameters responsible for the deformation mode that is realized. The experiments were conducted as classic slider-model experiments, in which a block under normal and shear stresses moves along an interface. The volume between the two rough surfaces was filled with a thin layer of granular matter. Shear force was applied through a spring loaded at a constant rate. In such experiments elastic energy is accumulated in the spring, and the pattern of its release is determined by the frictional behaviour of the experimental fault. A full spectrum of slip modes was simulated in the laboratory experiments. Slight changes in gouge characteristics (granule shape, clay content), viscosity of the interstitial fluid, and level of normal stress made it possible to obtain a gradual transformation of the slip modes from steady sliding and slow slip to regular stick-slip with various amplitudes of 'coseismic' displacement. Using the method of asymptotic analogies, we show that different slip modes can be described within a single formalism and that the preparation of different slip modes follows a uniform evolution law. The shear stiffness of the experimental fault is shown to be the parameter that controls which slip mode is realized. It is worth mentioning that the different series of transformations are characterized by functional dependences of the same general form, differing only in normalization factors. These findings support the view that slow and fast slip events share a common nature. Determining fault stiffness and testing fault gouge thus allow the intensity of seismic events to be estimated. The reported study was funded by RFBR according to research project No. 16-05-00694.
Triggered creep as a possible mechanism for delayed dynamic triggering of tremor and earthquakes
Shelly, David R.; Peng, Zhigang; Hill, David P.; Aiken, Chastity
2011-01-01
The passage of radiating seismic waves generates transient stresses in the Earth's crust that can trigger slip on faults far away from the original earthquake source. The triggered fault slip is detectable in the form of earthquakes and seismic tremor. However, the significance of these triggered events remains controversial, in part because they often occur with some delay, long after the triggering stress has passed. Here we scrutinize the location and timing of tremor on the San Andreas fault between 2001 and 2010 in relation to distant earthquakes. We observe tremor on the San Andreas fault that is initiated by passing seismic waves, yet migrates along the fault at a much slower velocity than the radiating seismic waves. We suggest that the migrating tremor records triggered slow slip of the San Andreas fault as a propagating creep event. We find that the triggered tremor and fault creep can be initiated by distant earthquakes as small as magnitude 5.4 and can persist for several days after the seismic waves have passed. Our observations of prolonged tremor activity provide a clear example of the delayed dynamic triggering of seismic events. Fault creep has been shown to trigger earthquakes, and we therefore suggest that the dynamic triggering of prolonged fault creep could provide a mechanism for the delayed triggering of earthquakes. © 2011 Macmillan Publishers Limited. All rights reserved.
Within-Event and Between-Events Ground Motion Variability from Earthquake Rupture Scenarios
NASA Astrophysics Data System (ADS)
Crempien, Jorge G. F.; Archuleta, Ralph J.
2017-09-01
Measurement of ground motion variability is essential to estimate seismic hazard. Overestimation of variability can lead to extremely high annual hazard estimates of ground motion exceedance. We explore different parameters that affect the variability of ground motion, such as the spatial correlations of kinematic rupture parameters on a finite fault and the corner frequency of the moment-rate spectra. To quantify the variability of ground motion, we simulate kinematic rupture scenarios on several vertical strike-slip faults and compute ground motion using the representation theorem. In particular, for the entire suite of rupture scenarios, we quantify the within-event and the between-events ground motion variability of peak ground acceleration (PGA) and response spectra at several periods, at 40 stations located at approximately equal distances of 20 and 50 km from the fault. Both within-event and between-events ground motion variability increase when the slip correlation length on the fault increases. The probability density functions of ground motion tend to truncate at a finite value when the correlation length of slip decreases on the fault; therefore, we do not observe any long-tailed distribution of peak ground acceleration when performing several rupture simulations for small correlation lengths. Finally, for a correlation length of 6 km, the within-event and between-events PGA log-normal standard deviations are 0.58 and 0.19, respectively, values slightly smaller than those reported by Boore et al. (Earthq Spectra, 30(3):1057-1085, 2014). The between-events standard deviation is consistently smaller than the within-event one for all correlation lengths, a feature that agrees with recent ground motion prediction equations.
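As a rough illustration of the variance decomposition behind these sigma values, the following minimal Python sketch recovers within-event and between-events standard deviations from synthetic ln-PGA data; the method-of-moments estimator and all numbers are illustrative stand-ins, not the authors' procedure.

import numpy as np

# Synthetic ln(PGA): one event term per rupture scenario plus station scatter.
rng = np.random.default_rng(0)
n_events, n_stations = 50, 40
true_tau, true_phi = 0.19, 0.58      # between-events / within-event sigmas (paper's values)

event_terms = rng.normal(0.0, true_tau, n_events)
ln_pga = event_terms[:, None] + rng.normal(0.0, true_phi, (n_events, n_stations))

event_means = ln_pga.mean(axis=1)
# Within-event sigma: pooled scatter of stations about their event mean.
phi = np.sqrt(((ln_pga - event_means[:, None]) ** 2).sum() / (n_events * (n_stations - 1)))
# Between-events sigma: variance of event means minus the sampling contribution.
tau = np.sqrt(max(event_means.var(ddof=1) - phi ** 2 / n_stations, 0.0))
print(f"phi (within) ~ {phi:.2f}, tau (between) ~ {tau:.2f}")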
The Kumamoto Mw7.1 mainshock: deep initiation triggered by the shallow foreshocks
NASA Astrophysics Data System (ADS)
Shi, Q.; Wei, S.
2017-12-01
The Kumamoto Mw 7.1 earthquake and its Mw 6.2 foreshock struck the central Kyushu region in mid-April 2016. The surface ruptures are characterized by multiple fault segments and a mix of strike-slip and normal motion, extending from the intersection of the Hinagu and Futagawa faults to the southwest of Mt. Aso. Despite the complex surface ruptures, most finite fault inversions use two fault segments to approximate the fault geometry. To study the rupture process and the complex fault geometry of this earthquake, we performed a multiple point source inversion for the mainshock using data from 93 K-NET and KiK-net stations. With path calibration from the Mw 6.0 foreshock, we selected the frequency ranges for the Pnl waves (0.02-0.26 Hz) and surface waves (0.02-0.12 Hz), as well as the components that can be well modeled with the 1D velocity model. Our four-point-source results reveal a unilateral rupture towards Mt. Aso and varying fault geometries. The first sub-event is a high-angle (~79°) right-lateral strike-slip event at a depth of 16 km at the north end of the Hinagu fault. Notably, the two M>6 foreshocks were located by our previous studies near the north end of the Hinagu fault at depths of 5-9 km, which may give rise to stress concentration at depth. The following three sub-events are distributed along the surface rupture of the Futagawa fault, with focal depths within 4-10 km. Their focal mechanisms present similar right-lateral fault slip with relatively small dip angles (62-67°) and an apparent normal-fault component. Thus, the mainshock rupture initiated in the relatively deep part of the Hinagu fault and propagated through the fault bend toward the NE along the relatively shallow part of the Futagawa fault until it terminated near Mt. Aso. Based on the four-point-source solution, we conducted a finite-fault inversion and obtained a kinematic rupture model of the mainshock. We then performed Coulomb stress analyses on the two foreshocks and the mainshock. The results support the idea that stress alteration after the foreshocks may have triggered failure on the fault plane of the Mw 7.1 earthquake. Therefore, the 2016 Kumamoto earthquake sequence is dominated by a series of large triggering events whose initiation is associated with the geometric barrier at the intersection of the Futagawa and Hinagu faults.
LiDAR and Field Observations of Earthquake Slip Distribution for the central San Jacinto fault
NASA Astrophysics Data System (ADS)
Salisbury, J. B.; Rockwell, T. K.; Middleton, T.; Hudnut, K. W.
2010-12-01
We mapped the tectonic geomorphology of 80 km of the Clark strand of the San Jacinto fault to determine slip per event for the past several surface ruptures. From the southeastern end of Clark Valley (east of Borrego Springs) northwest to the mouth of Blackburn Canyon (near Hemet), we identify 203 offset geomorphic features from which we make over 560 measurements on channel margins, channel thalwegs, ridge noses, and bar crests using filtered B4 LiDAR imagery, aerial photography, and field observations. Displacement estimates show that the most recent large event (MRE) produced an average of 2.5-2.9 m of right-lateral slip, with maximum slip of 3.5 to 4 m at Anza. Double-event offsets for the same 80 km section average ~5.5 m of right-lateral slip. Maximum values near Anza are estimated to be close to 3 m for the penultimate event, suggesting that the penultimate event was similar in size to the MRE. The third event is also similar in size, with cumulative displacement of 9-10 m through Anza for the past three events. Magnitude estimates for the MRE range from Mw 7.2 to Mw 7.5, depending on how far north the rupture continued. Historically, no earthquakes reported along the Clark fault are large enough to have produced the offset geomorphology we observe. However, recent paleoseismic work at Hog Lake dates the most recent surface rupture event at ca. 1790, potentially placing this event in the historic period. A poorly located, large earthquake occurred on November 22, 1800, and is reported to have caused extensive damage (MMI VII) at the San Diego and San Juan Capistrano missions. We infer slightly lower intensity values for the two missions (MMI VI-VII instead of VII) and relocate this event on the Clark fault based on dating of the MRE at Hog Lake. We also recognize the occurrence of a younger offset along ~15-20 km of the fault in Blackburn Canyon, apparently due to lower slip in that area in the November 22, 1800 event. With average displacement of ~1.25 m, we attribute these offsets to the M6.9 April 21, 1918 event. These data argue that much or all of the Clark fault, and possibly also the Casa Loma fault, fail together in large earthquakes, but that shorter sections may also fail in smaller events.
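The quoted magnitude range can be cross-checked, to first order, against published empirical scaling. The sketch below uses the all-slip-type regressions of Wells and Coppersmith (1994); the coefficients are quoted from that paper and the inputs are illustrative, so this is a plausibility check rather than the authors' calculation.

import math

def mw_from_rupture_length(srl_km):
    # Wells & Coppersmith (1994), all slip types: M = 5.08 + 1.16 log10(SRL)
    return 5.08 + 1.16 * math.log10(srl_km)

def mw_from_avg_displacement(ad_m):
    # Wells & Coppersmith (1994), all slip types: M = 6.93 + 0.82 log10(AD)
    return 6.93 + 0.82 * math.log10(ad_m)

print(round(mw_from_avg_displacement(2.7), 2))   # ~2.5-2.9 m average slip -> Mw ~ 7.3
print(round(mw_from_rupture_length(120.0), 2))   # assumed 120 km rupture -> Mw ~ 7.5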
Model-Based Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Kumar, Aditya; Viassolo, Daniel
2008-01-01
The Model-Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events, such as sensor faults, actuator faults, or turbine gas-path component damage, that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. Identifying the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
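A minimal sketch of the residual-thresholding idea that underlies such detection schemes: flag a fault when the normalized innovation from a Kalman-type estimator exceeds a chi-square bound. This is a generic illustration, not the MBFTC implementation; the residual vector and covariance below are toy values.

import numpy as np
from scipy.stats import chi2

def detect_fault(residual, S, alpha=0.001):
    # Mahalanobis distance of the innovation against its covariance S.
    d2 = residual @ np.linalg.solve(S, residual)
    threshold = chi2.ppf(1.0 - alpha, df=residual.size)
    return d2 > threshold, d2

r = np.array([0.1, -0.2, 2.5])      # residuals from three sensors; the third is biased
S = np.diag([0.25, 0.25, 0.25])     # innovation covariance (toy)
faulty, stat = detect_fault(r, S)
print(faulty, round(stat, 1))       # True, statistic ~25 vs threshold ~16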
NASA Astrophysics Data System (ADS)
Slater, Lee; Niemi, Tina M.
2003-06-01
Ground-penetrating radar (GPR) was used in an effort to locate a major active fault that traverses Aqaba City, Jordan. Measurements over an exposed (trenched) cross fault outside the city identify a radar signature consisting of linear events and horizontally offset/flexured reflectors, both showing a geometric correlation with two known faults at a control site. The asymmetric linear events are consistent with dipping planar reflectors matching the known direction of dip of the faults. However, other observations regarding this radar signature render the mechanism generating these events more complex and uncertain. GPR measurements in Aqaba City were limited to vacant lots. Seven GPR profiles were acquired approximately perpendicular to the assumed strike of the fault zone, based on regional geological evidence. A radar response very similar to that obtained over the cross fault was observed on five of the profiles in Aqaba City, although the response is weaker than that obtained at the control site. The positions of the identified responses form a near-straight line with a strike of 45°. Although subsurface verification of the fault by trenching within the city is needed, the geophysical evidence for the fault zone location is strong. The location of the interpreted fault zone relative to emergency services, military bases, commercial properties, and residential areas is defined to within a few meters. This study has significant implications for seismic hazard analysis in this tectonically active and heavily populated region.
NASA Astrophysics Data System (ADS)
Ji, Lingyun; Wang, Qingliang; Xu, Jing; Ji, Cunwei
2017-03-01
On July 11, 1995, an Mw 6.8 earthquake struck eastern Myanmar near the Chinese border; it is hereafter referred to as the 1995 Myanmar-China earthquake. Coseismic surface displacements associated with this event are identified from JERS-1 (Japanese Earth Resources Satellite-1) SAR (Synthetic Aperture Radar) images. The largest relative displacement reached 60 cm in the line-of-sight direction. We speculate that a previously unrecognized dextral strike-slip subvertical fault striking NW-SE was responsible for this event. The coseismic slip distribution on the fault planes is inverted from the InSAR-derived deformation. The results indicate that the fault slip was confined to two lobes. The maximum slip reached approximately 2.5 m at a depth of 5 km in the northwestern part of the focal region. The inverted geodetic moment corresponds to approximately Mw 6.69, consistent with seismological results. The 1995 Myanmar-China earthquake is one of the largest recorded earthquakes to have occurred within the "bookshelf faulting" system between the Sagaing fault in Myanmar and the Red River fault in southwestern China.
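As a consistency check on the quoted value, the standard Hanks-Kanamori (1979) conversion between seismic moment and Mw can be sketched as follows; the rigidity, fault area, and mean slip are illustrative assumptions chosen to land near the inverted moment, not the actual inversion output.

import math

def m0_from_slip(mu_pa, area_m2, slip_m):
    return mu_pa * area_m2 * slip_m                  # seismic moment M0 = mu * A * D

def mw_from_m0(m0_nm):
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)   # Hanks & Kanamori (1979), M0 in N*m

# Assumed: 30 GPa rigidity, 25 km x 12 km plane, ~1.5 m mean slip.
m0 = m0_from_slip(3.0e10, 25e3 * 12e3, 1.5)
print(round(mw_from_m0(m0), 2))                      # ~6.69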
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doser, D.I.
1993-04-01
Source parameters determined from body waveform modeling of large (M ≥ 5.5) historic earthquakes occurring between 1915 and 1956 along the San Jacinto and Imperial fault zones of southern California and the Cerro Prieto, Tres Hermanas, and San Miguel fault zones of Baja California have been combined with information from post-1960s events to study regional variations in source parameters. The results suggest that large earthquakes along the relatively young San Miguel and Tres Hermanas fault zones have complex rupture histories, small source dimensions (< 25 km), high stress drops (60 bar average), and a high incidence of foreshock activity. This may be a reflection of the rough, highly segmented nature of the young faults. In contrast, Imperial-Cerro Prieto events of similar magnitude have low stress drops (16 bar average) and longer rupture lengths (42 km average), reflecting rupture along older, smoother fault planes. Events along the San Jacinto fault zone appear to lie in between these two groups. These results suggest a relationship between the structural and seismological properties of strike-slip faults that should be considered during seismic risk studies.
20 CFR 408.912 - When are you without fault regarding an overpayment?
Code of Federal Regulations, 2010 CFR
2010-04-01
... any lack of facility with the English language) you may have. We will determine that you were at fault..., your agreement to report events, your knowledge of the occurrence of events that should have been...
Microseismic data records fault activation before and after a Mw 4.1 induced earthquake
NASA Astrophysics Data System (ADS)
Eyre, T.; Eaton, D. W. S.
2017-12-01
Several large earthquakes (Mw ~4) have been observed in the vicinity of the town of Fox Creek, Alberta. These events have been determined to be induced earthquakes related to hydraulic fracturing in the region. The largest of these has a magnitude of Mw = 4.1 and is associated with a hydraulic-fracturing treatment close to Crooked Lake, about 30 km west of Fox Creek. The underlying factors that lead to the localization of the high numbers of hydraulic-fracturing-induced events in this area remain poorly understood. The treatment associated with the Mw 4.1 event was monitored by 93 shallow three-level borehole arrays of sensors. Here we analyze the temporal and spatial evolution of the microseismic and seismic data recorded during the treatment. Contrary to the expected microseismic event clustering parallel to the principal horizontal stress (NE-SW), the events cluster along obvious fault planes that align both NNE-SSW and N-S. As the treatment well is oriented N-S, it appears that each stage of the treatment intersects a new portion of the fracture network, causing seismicity to occur. Focal-plane solutions support strike-slip failure along these faults, with nodal planes aligning with the microseismic cluster orientations. Each fault segment is activated with a cluster of microseismicity in the centre, gradually extending along the fault as time progresses. Once a portion of a fault is active, further seismicity can be induced, regardless of whether the current stage is distant from the fault. However, the large events seem to occur in regions with a gap in the microseismicity. Interestingly, most of the seismicity is located above the reservoir, including the larger events. Although a shallow-well array is used, these results are believed to have relatively high depth resolution, as the perforation shots are correctly located with an average error of 26 m in depth. This information contradicts previously held views that large induced earthquakes occur primarily, or even exclusively, in the underlying crystalline basement. The findings give new insights into the dynamics of induced seismicity related to hydraulic fracturing. Additionally, real-time microseismic monitoring can be used to track the evolution of fault activation as it occurs and can potentially indicate that large events are possible.
NASA Astrophysics Data System (ADS)
Roland, E. C.; McGuire, J. J.; Lizarralde, D.; Collins, J. A.
2010-12-01
East Pacific Rise (EPR) oceanic transform faults are known to exhibit a number of unique seismicity characteristics, including abundant seismic swarms, a prevalence of aseismic slip, and high rates of foreshock activity. Until recently, the details of how this behavior fits into the seismic cycle of large events that occur periodically on transforms have remained poorly understood. In 2008 the most recent seismic cycle of the western segment (G3) of the Gofar fault (4 degrees South on the EPR) ended with a Mw 6.0 earthquake. Seismicity associated with this event was recorded by a local array of ocean bottom seismometers, and earthquake locations reveal several distinct segments with unique slip behavior on the G3 fault. Preceding the Mw 6.0 event, a significant foreshock sequence was recorded just to the east of the mainshock rupture zone that included more than 20,000 detected earthquakes. This foreshock zone formed the eastern barrier to the mainshock rupture, and following the mainshock, seismicity rates within the foreshock zone remained unchanged. Based on aftershock locations of events following the 2007 Mw 6.0 event that completed the seismic cycle on the eastern end of the G3 fault, it appears that the same foreshock zone may have served as the western rupture barrier for that prior earthquake. Moreover, mainshock rupture associated with each of the last 8 large (~Mw 6.0) events on the G3 fault seems to terminate at the same foreshock zone. In order to elucidate some of the structural controls on fault slip and earthquake rupture along transform faults, we present a seismic P-wave velocity profile crossing the center of the foreshock zone of the Gofar fault, as well as a comparison profile across the neighboring Quebrada fault. Although tectonically similar, Quebrada does not sustain large earthquakes and is thought to accommodate slip primarily aseismically and through small-magnitude earthquake swarms. Velocity profiles were obtained using data collected from ~100 km refraction profiles crossing the two faults, each using 8 short-period ocean bottom seismometers from OBSIP and over 900 shots from the RV Marcus Langseth. These data are modeled using a 2-D tomographic code that allows joint inversion of the Pg, PmP, and Pn arrivals. We resolve a significant low velocity zone associated with the faults, which likely indicates rocks that have undergone intensive brittle deformation. Low velocities may also signify the presence of metamorphic alteration and/or elevated fluid pressures, both of which could have a significant effect on the friction laws that govern fault slip in these regions. A broad low velocity zone is apparent in the shallow crust (< 3 km) at both faults, with velocities reduced by more than 1 km/s relative to the surrounding oceanic crust. A narrower zone of reduced seismic velocity appears to extend to mantle depths; particularly on the Gofar fault, this corresponds with the seismogenic zone inferred from located foreshock seismicity, spanning depths of 3-9 km beneath the seafloor.
Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.
2012-01-01
The propagation of the rupture of the Mw 7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls on fault branching from a geological perspective. LiDAR data reveal that the Denali-Totschunda fault intersection is structurally simple, with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, were one important control on the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high-slip-rate, short-recurrence-interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.
NASA Astrophysics Data System (ADS)
Li, Kang; Xu, Xiwei; Kirby, Eric; Tang, Fangtou; Kang, Wenjun
2018-04-01
How the eastward motion of crust in the central Tibetan Plateau is accommodated in the remote regions of the eastern Himalayan syntaxis remains uncertain. Although the Yarlung Zangbo suture (YZS) forms a striking lineament in the topography of the region, evidence for recent faulting along this zone has been equivocal. To determine whether faults along the YZS are active, we performed a geological investigation along its eastern segments. Geomorphic observations suggest the presence of active faulting along several segments of the YZS, which we collectively refer to as the "Milin fault". Paleoseismologic data from trenches reveal evidence for one faulting event, constrained to have occurred between 5620 and 1945 a BP. The latest faulting event displaced alluvial surface T2 by 7 m. The offset in this earthquake places a minimum bound on the vertical slip rate of 0.3 mm/yr. Empirical relationships between surface rupture length, displacement, and magnitude suggest that the magnitude of the latest event could have been Mw 7.3-7.7. On the basis of this slip rate and the elapsed time since the last event, we estimate that a seismic moment equivalent to Mw 7.0 has accumulated on the Milin fault, posing a threat to the surrounding region. Our results suggest that shortening occurs in the vicinity of the eastern Himalayan syntaxis and that part of the eastward motion of crust from the central Tibetan Plateau is absorbed by uplift of the eastern Himalayan syntaxis.
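The accumulated-moment estimate follows from simple slip-deficit arithmetic: slip rate times elapsed time gives a slip deficit, which converts to moment and then Mw. In the sketch below the slip rate comes from the abstract, while the elapsed time, rigidity, and fault dimensions are assumed round numbers chosen only to demonstrate the calculation.

import math

mu = 3.0e10                        # rigidity, Pa (assumed)
slip_rate_m_yr = 0.3e-3            # 0.3 mm/yr minimum vertical slip rate (from the abstract)
elapsed_yr = 4000.0                # assumed elapsed time, within the 5620-1945 a BP window
length_m, width_m = 100e3, 12e3    # assumed rupture length and seismogenic width

slip_deficit = slip_rate_m_yr * elapsed_yr          # ~1.2 m
m0 = mu * length_m * width_m * slip_deficit         # N*m
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
print(f"slip deficit ~{slip_deficit:.1f} m, equivalent Mw ~{mw:.1f}")   # ~Mw 7.0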
The Role of Seismic Directivity in Tele-seismically Induced Well Level Oscillations
NASA Astrophysics Data System (ADS)
Voss, N. K.; Wdowinski, S.
2013-12-01
Surface waves induced by large earthquakes travel large distances around the globe and can disturb groundwater systems as they pass. The most minimal disturbance is manifested as oscillations of hydraulic head in wells. In order to understand what controls well oscillatory response, we examine hydrograph records from 22 wells in South Florida's Floridan aquifer, acquired over a nine-year period (June 2003 - September 2012). We found a regional threshold of Mw 6.9 for earthquakes inducing well oscillations. Our record includes 99 mainshock events at or above this magnitude. As the induced oscillations also depend on the distance between the earthquake and the well, we applied the commonly used Seismic Energy Density (SED) parameter [Wang, 2007; Wang, 2008; Wang and Manga, 2010] and investigated the relationship between SED and well oscillations. The minimum SED value of events with oscillations in our data set was 0.002 J/m3. Wang and Manga [2010] found a similar value of 0.001 J/m3 for sustained groundwater change, very close to our observations for transient changes. However, exceeding our threshold value did not guarantee a well oscillatory response. In the South Florida well dataset, only 16% of events with SED values at or above 0.002 J/m3 showed hydraulic head oscillation. For events with SED of 0.005 J/m3 or above, 55% (16 out of 29) showed oscillations in hydraulic head. In this research we explore parameters other than earthquake magnitude and distance to the well that could explain why 45% of events with SED > 0.005 J/m3 still showed no oscillatory response. We hypothesize that the inconsistent oscillatory response reflects the effect of seismic directivity. Direct calculation of Rayleigh wave amplitude directivity is a complicated procedure [Haskell 1990] and proved too difficult in the far field without the aid of data from well-situated seismometers. Instead, we examined the role of seismic directivity, as determined by the faulting mechanism (normal, reverse, and strike-slip) and fault orientation with respect to the wells, in the occurrence of tele-seismically induced well oscillations. We calculated the angle between the fault orientation and the great-circle initial bearing from the epicenter to the South Florida well field, which we termed Delta. We found that for each faulting type, oscillatory events correlate well with specific Delta angles. For strike-slip events of sufficient SED, a strong correlation was found with events pointed away, 180° > Delta > 90°, from the initial direction on the great-circle path back to South Florida. For reverse faulting events, good correlation was found between oscillatory events and earthquakes oriented at Delta angles of 45° and 135°. There were not enough normal faulting events in our dataset to draw conclusions for this fault type. Our results indicate that the oscillatory response to large tele-seismic waves depends not only on the magnitude and distance of the event, but also on the faulting type and fault orientation relative to the far-field well.
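The SED screening can be reproduced with the empirical relation of Wang (2007), log10 r = 0.48 Mw - 0.33 log10 e - 1.4, with r in km and e in J/m3; the sketch below inverts it for e and compares against the 0.002 J/m3 threshold. The coefficients are quoted from that paper, and the example event is hypothetical.

import math

def sed_j_per_m3(mw, r_km):
    # Invert Wang (2007): log10(r) = 0.48*Mw - 0.33*log10(e) - 1.4
    log_e = (0.48 * mw - 1.4 - math.log10(r_km)) / 0.33
    return 10.0 ** log_e

e = sed_j_per_m3(6.9, 600.0)         # hypothetical Mw 6.9 event at 600 km
print(f"{e:.4f} J/m^3 ->", "above" if e >= 0.002 else "below", "threshold")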
Experimental study on propagation of fault slip along a simulated rock fault
NASA Astrophysics Data System (ADS)
Mizoguchi, K.
2015-12-01
Around pre-existing geological faults in the crust, we often observe an off-fault damage zone in which there are many fractures at various scales, from ~mm to ~m, whose density typically increases with proximity to the fault. One process thought to form these fractures is dynamic shear rupture propagation on the faults, which leads to the occurrence of earthquakes. Here, I have conducted experiments on the propagation of fault slip along a pre-cut rock surface to investigate the damaging behavior of rocks during slip propagation. For the experiments, I used a pair of metagabbro blocks from Tamil Nadu, India, whose contacting surface simulates a fault 35 cm in length and 1 cm in width. The experiments were done with a uniaxial loading configuration similar to that of Rosakis et al. (2007). The axial load σ is applied to the fault plane at an angle of 60° to the loading direction. When σ is 5 kN, the normal and shear stresses on the fault are 1.25 MPa and 0.72 MPa, respectively. The timing and direction of slip propagation on the fault during the experiments were monitored with several strain gauges arrayed at intervals along the fault. The gauge data were digitally recorded at a 1 MHz sampling rate and 16-bit resolution. When σ = 4.8 kN was applied, we observed fault slip events in which slip nucleates spontaneously in a subsection of the fault and propagates over the whole fault. However, the propagation speed is about 1.2 km/s, much lower than the S-wave velocity of the rock. This indicates that the slip events were not earthquake-like dynamic rupture events. More effort is needed to reproduce earthquake-like slip events in these experiments. This work is supported by JSPS KAKENHI (26870912).
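Resolving the axial load into fault-normal and shear tractions is elementary but worth making explicit. The sketch below assumes a nominal contact area of 35 cm x 1 cm, which yields values close to, but not exactly, the quoted 1.25 MPa and 0.72 MPa (the effective loading area in the apparatus evidently differs slightly), so treat the output as illustrative; the stress ratio, however, matches the quoted values.

import math

F = 5.0e3                       # axial load, N
area = 0.35 * 0.01              # assumed nominal contact area, m^2 (35 cm x 1 cm)
sigma_axial = F / area          # ~1.43 MPa axial stress

beta = math.radians(30.0)       # fault at 60 deg to load axis -> fault normal at 30 deg
sigma_n = sigma_axial * math.cos(beta) ** 2           # fault-normal stress
tau = sigma_axial * math.sin(beta) * math.cos(beta)   # fault-parallel shear stress
print(f"sigma_n ~ {sigma_n/1e6:.2f} MPa, tau ~ {tau/1e6:.2f} MPa, "
      f"tau/sigma_n = {tau/sigma_n:.3f}")             # ratio = tan(30 deg) ~ 0.577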
NASA Astrophysics Data System (ADS)
Victor, Pia; Ewiak, Oktawian; Thomas, Ziegenhagen; Monika, Sobiesiak; Bernd, Schurr; Gabriel, Gonzalez; Onno, Oncken
2016-04-01
The Atacama Fault System (AFS) is an active trench-parallel fault system located in the forearc of northern Chile, directly above the subduction zone interface. Its well-exposed position in the hyperarid forearc makes it the perfect target for investigating the interaction between the deformation cycle in the overriding forearc and the subduction zone seismic cycle of the underlying megathrust. Although the AFS and large parts of the upper crust are devoid of any noteworthy seismicity, at least three M=7 earthquakes in the past 10 ky have been documented in the paleoseismological record, demonstrating the potential for large events in the future. We apply a two-fold approach to explore fault activation and reactivation patterns through time and to investigate the triggering potential of upper crustal faults. 1) A new methodology using high-resolution topographic data allows us to investigate the number of past earthquakes on any given segment of the fault system, as well as the amount of vertical displacement of the last increment. This provides a detailed dataset of past earthquake ruptures on upper plate faults, potentially linked to large subduction zone earthquakes. 2) The IPOC creepmeter array (http://www.ipoc-network.org/index.php/observatory/creepmeter.html) provides high-resolution time series of fault displacement accumulation for 11 stations along the 4 most active branches of the AFS. This array monitors displacement across the fault at 2 samples/min with a resolution of 1 μm. Collocated seismometers record the seismicity at two of the creepmeters, and regional seismicity is provided by the IPOC seismological networks. Continuous time series from the creepmeter stations since 2009 show that the shallow segments of the fault do not creep permanently. Instead, the accumulation of permanent deformation occurs by triggered slip caused by local or remote earthquakes. The 2014 Mw 8.2 Pisagua earthquake, located close to the creepmeter array, triggered large displacement events at all stations. Another event recorded at all stations was the 2010 Mw 8.8 Maule earthquake, located 1500 km south of the array. Exploring observations from both datasets, we can clearly state that triggering of upper crustal faults is observed for small-scale displacements. These findings allow us to speculate that the larger events observed in the past were likely also triggered events, requiring a critically prestressed target fault that is unclamped by stress changes from large, or potentially even small, subduction zone earthquakes.
Signal processing and neural network toolbox and its application to failure diagnosis and prognosis
NASA Astrophysics Data System (ADS)
Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.
2001-07-01
Many systems are comprised of components equipped with self-testing capability; however, if the system is complex, involving feedback, and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs is very helpful. This work provides an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods, such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition, to extract features of failure events from collected sensor data. We then evaluated multiple learning paradigms for general classification, diagnosis, and prognosis. The network models evaluated include the Restricted Coulomb Energy (RCE) neural network, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), the Linear Discriminant Rule (LDR), the Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP), and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed to evaluate the robustness of the network models. The trained networks are evaluated on test data on the basis of percent error rates obtained via cross-validation, time efficiency, and generalization ability to unseen faults. Finally, the use of neural networks for predicting the residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
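A minimal sketch of the N-fold cross-validation step described above, using scikit-learn with a small multilayer perceptron; the synthetic feature matrix and labels are stand-ins for the toolbox's extracted spectral features and fault classes.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                    # 300 feature vectors, 8 features each
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in fault / no-fault labels

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)        # 5-fold cross-validated accuracy
print("fold accuracies:", np.round(scores, 2), "| mean:", round(scores.mean(), 2))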
Dodge, D.A.; Beroza, G.C.; Ellsworth, W.L.
1996-01-01
We find that foreshocks provide clear evidence for an extended nucleation process before some earthquakes. In this study, we examine in detail the evolution of six California foreshock sequences: the 1986 Mount Lewis (ML = 5.5), the 1986 Chalfant (ML = 6.4), the 1986 Stone Canyon (ML = 4.7), the 1990 Upland (ML = 5.2), the 1992 Joshua Tree (MW = 6.1), and the 1992 Landers (MW = 7.3) sequences. Typically, uncertainties in hypocentral parameters are too large to establish the geometry of foreshock sequences and hence to understand their evolution. However, the similarity of locations and focal mechanisms for the events in these sequences leads to similar foreshock waveforms that we cross-correlate to obtain extremely accurate relative locations. We use these results to identify small-scale fault zone structures that could influence nucleation and to determine the stress evolution leading up to the mainshock. In general, these foreshock sequences are not compatible with a cascading-failure nucleation model in which the foreshocks all occur on a single fault plane and trigger the mainshock by static stress transfer. Instead, the foreshocks seem to concentrate near structural discontinuities in the fault and may themselves be a product of an aseismic nucleation process. Fault zone heterogeneity may also be important in controlling the number of foreshocks, i.e., the stronger the heterogeneity, the greater the number of foreshocks. The size of the nucleation region, as measured by the extent of the foreshock sequence, appears to scale with mainshock moment in the same manner as determined independently by measurements of the seismic nucleation phase. We also find evidence for slip localization as predicted by some models of earthquake nucleation. Copyright 1996 by the American Geophysical Union.
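The core computational step here, measuring differential arrival times between similar waveforms, can be sketched with a normalized cross-correlation; the synthetic traces below stand in for real seismograms, and this is the generic technique rather than the authors' exact processing chain.

import numpy as np

def cc_lag(a, b, dt):
    # Lag (in seconds) that best aligns trace b with trace a, plus peak correlation.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    cc = np.correlate(a, b, mode="full") / len(a)
    lag = np.argmax(cc) - (len(b) - 1)
    return lag * dt, cc.max()

dt = 0.01                                        # 100 Hz sampling
t = np.arange(0.0, 2.0, dt)
wavelet = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * t)
delayed = np.roll(wavelet, 7)                    # second event arrives 0.07 s later
print(cc_lag(delayed, wavelet, dt))              # ~(0.07, ~1.0)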
NASA Astrophysics Data System (ADS)
Mencin, D.; Gottlieb, M. H.; Hodgkinson, K. M.; Bilham, R. G.; Mattioli, G. S.; Johnson, W.; Van Boskirk, E.; Meertens, C. M.
2015-12-01
Strainmeters and creepmeters have been operated along the San Andreas Fault (SAF), observing creep events for decades. In particular, the EarthScope Plate Boundary Observatory (PBO) has added a significant number of borehole strainmeters along the SAF over the last decade. The geodetic data cover a significant temporal portion of the inferred earthquake cycle along this portion of the SAF. Creepmeters measure surface displacement over time (creep) with short apertures and can capture slow slip, coseismic rupture, and afterslip. Modern creepmeters deployed by the authors have a resolution of 5 µm over a range of 10 mm, and a dynamic sensor with a resolution of 25 µm over a range of 2.2 m. Borehole strainmeters measure local deformation at some distance from the fault with a broader aperture. Borehole tensor strainmeters, principally deployed as part of the PBO, measure the horizontal strain tensor at a depth of 100-200 m with a resolution of 10^-11 strain; they are located 4-10 km from the fault and are able to image a 1 mm creep event acting on an area of ~500 m2 from over 4 km away (fault-perpendicular). A single borehole tensor strainmeter is capable of providing broad constraints on the asperity size, location, direction, and depth of a single creep event. The synthesis of these data from all the available geodetic instruments proximal to the SAF presents a unique opportunity to constrain the partitioning between aseismic and seismic slip on the central SAF. We show that simple elastic half-space models allow us to loosely constrain the location and depth of any individual creep event on the fault, even with a single instrument, and to image the accumulation of creep with time.
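The "simple elastic half-space models" invoked here can be illustrated with the classical two-dimensional antiplane (screw-dislocation) solution for a creep patch buried between depths d1 and d2 on a long strike-slip fault; this Savage-Burford-style expression is a generic stand-in, not the authors' model, and the instrument distances are illustrative.

import numpy as np

def creep_surface_disp(x_km, slip_mm, d1_km, d2_km):
    # Fault-parallel surface displacement (mm) at fault-perpendicular distance x
    # for uniform slip between depths d1 and d2: u = (s/pi)[atan(x/d1) - atan(x/d2)].
    x = np.asarray(x_km, dtype=float)
    return (slip_mm / np.pi) * (np.arctan2(x, d1_km) - np.arctan2(x, d2_km))

x = np.array([0.5, 1.0, 4.0, 10.0])          # instrument distances from the fault, km
print(creep_surface_disp(x, 1.0, 0.1, 3.0))  # 1 mm creep event between 0.1 and 3 km depth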
NASA Astrophysics Data System (ADS)
Ruhl, C. J.; Abercrombie, R. E.; Smith, K. D.; Zaliapin, I.
2016-11-01
After approximately 2 months of swarm-like earthquakes in the Mogul neighborhood of west Reno, NV, seismicity rates and event magnitudes increased over several days culminating in an Mw 4.9 dextral strike-slip earthquake on 26 April 2008. Although very shallow, the Mw 4.9 main shock had a different sense of slip than locally mapped dip-slip surface faults. We relocate 7549 earthquakes, calculate 1082 focal mechanisms, and statistically cluster the relocated earthquake catalog to understand the character and interaction of active structures throughout the Mogul, NV earthquake sequence. Rapid temporary instrument deployment provides high-resolution coverage of microseismicity, enabling a detailed analysis of swarm behavior and faulting geometry. Relocations reveal an internally clustered sequence in which foreshocks evolved on multiple structures surrounding the eventual main shock rupture. The relocated seismicity defines a fault-fracture mesh and detailed fault structure from approximately 2-6 km depth on the previously unknown Mogul fault that may be an evolving incipient strike-slip fault zone. The seismicity volume expands before the main shock, consistent with pore pressure diffusion, and the aftershock volume is much larger than is typical for an Mw 4.9 earthquake. We group events into clusters using space-time-magnitude nearest-neighbor distances between events and develop a cluster criterion through randomization of the relocated catalog. Identified clusters are largely main shock-aftershock sequences, without evidence for migration, occurring within the diffuse background seismicity. The migration rate of the largest foreshock cluster and simultaneous background events is consistent with it having triggered, or having been triggered by, an aseismic slip event.
NASA Astrophysics Data System (ADS)
Wu, S.; Mclaskey, G.
2017-12-01
We investigate foreshocks and aftershocks of dynamic stick-slip events generated on a newly constructed 3 m biaxial friction apparatus at Cornell University. In a typical experiment, two rectangular granite blocks are squeezed together under 4 or 7 MPa of normal pressure (~4 or 7 million N on a 1 m2 fault surface), and shear stress is then increased until the fault slips 10-400 microns in a dynamic rupture event similar to a M -2 to M -3 earthquake. Some ruptures nucleate near the north end of the fault, where the shear force is applied; other ruptures nucleate 2 m from the north end. The samples are instrumented with 16 piezoelectric sensors, 16 eddy current sensors, and 8 strain gage rosettes, evenly placed along the fault to measure vertical ground motion, local slip, and local stress, respectively. We studied sequences of tens of slip events, identified a total of 194 foreshocks and 66 aftershocks located within 6 s time windows around the stick-slip events, and analyzed their timing and locations relative to the quasistatic nucleation process. We found that the locations of the foreshocks and aftershocks were distributed all along the length of the fault, with the majority located at the ends of the fault, where local normal and shear stresses are highest (caused by both edge effects and the finite stiffness of the steel frame surrounding the granite blocks). We also opened the laboratory fault, inspected the fault surface, and found increased wear at the sample ends. To explore the foreshocks' and aftershocks' relationship to nucleation and afterslip, we compared the occurrence of foreshocks to the local slip rate on the laboratory fault closest to each foreshock in space and time. We found that the majority of foreshocks were generated at local slip rates between 1 and 100 microns/s, though we were not able to resolve slip rates lower than about 1 micron/s. Our experiments provide insight into how foreshocks and aftershocks in natural earthquakes may be influenced both by fault structure and by slow slip associated with nucleation or afterslip.
NASA Astrophysics Data System (ADS)
Ferranti, L.; Milano, G.; Pierro, M.
2017-11-01
We assess the seismotectonics of the western part of the border area between the Southern Apennines and the Calabrian Arc, centered on the Mercure extensional basin, by integrating recent seismicity with a reconstruction of the structural frame from the surface to the deep crust. The analysis of low-magnitude (ML ≤ 3.5) events that occurred in the area during 2013-2017, evaluated in the context of the structural model, has revealed an unexpected complexity of seismotectonic processes. Hypocentral distribution and kinematics allow these events to be separated into three groups. Focal mechanisms of the shallower (< 9 km) set of events show extensional kinematics. These results are consistent with the last kinematic event recorded on outcropping faults and with the typical depth and kinematics of normal-faulting earthquakes in the axial part of southern Italy. By contrast, intermediate (~9-17 km) and deep (~17-23 km) events have fault plane solutions characterized by strike- to reverse-oblique slip, but they differ from each other in the orientation of the principal axes. The intermediate events have P axes with a NE-SW trend, which is at odds with the NW-SE trend recorded by strike-slip earthquakes affecting the Apulia foreland plate in the eastern part of southern Italy. The intermediate events are interpreted to reflect reactivation of faults in the Apulia unit involved in thrust uplift, and they appear aligned along a WNW-ESE trending deep crustal, possibly lithospheric, boundary. Instead, the deep events beneath the basin, which have P axes with a NW-SE trend, hint at the activity of a deep overthrust of the Tyrrhenian back-arc basin crust over the continental crust of the Apulia margin or, alternatively, at a tear fault in the underthrust Apulia plate. The results of this work suggest that extensional faulting does not solely characterize the seismotectonics of the axial part of the Southern Apennines, as believed so far.
NASA Astrophysics Data System (ADS)
Carlson, K.; Bemis, S. P.; Toke, N. A.; Bishop, B.; Taylor, P.
2015-12-01
Understanding the record of earthquakes along the Denali Fault (DF) is important for resource and infrastructure development and presents the potential to test earthquake rupture models in a tectonic environment with a larger ratio of event recurrence to geochronological uncertainty than well-studied plate boundary faults such as the San Andreas. However, the fault system is over 1200 km in length, and it has proven challenging to identify paleoseismic sites that preserve more than 2-3 paleoearthquakes (PEQs). In 2012 and 2015 we developed the 'Dead Mouse' site, providing the first long PEQ record west of the 2002 rupture extent. This site is located on the west-central segment of the DF near the southernmost intersection of the George Parks Hwy and the Nenana River (63.45285, -148.80249). We hand-excavated three fault-perpendicular trenches, including a fault-parallel extension that we excavated and recorded in a progressive sequence. We used Structure from Motion software to build mm-scale 3D models of the exposures. These models allowed us to produce orthorectified photomosaics for hand logging at 1:5 scale. We document evidence for 4-5 surface-rupturing earthquakes that have deformed the upper 2.5 m of stratigraphy. Age control from our preliminary 2012 investigation indicates these events occurred within the past ~2,500 years. Evidence for these events includes offset units, filled fissures, upward fault terminations, angular unconformities, and minor scarp-derived colluvial deposits. Multiple lines of evidence from the primary fault zones and fault splays are apparent for each event. We are testing these correlations by constructing a georeferenced 3D site model and running an additional 20 geochronology samples, including woody macrofossils, detrital and in-situ charcoal, and samples for post-IR IRSL, from positions that should closely constrain the stratigraphic evidence for earthquakes. We expect this long PEQ history to provide a critical test for future modeling of recurrence and fault segmentation on the DF.
NASA Astrophysics Data System (ADS)
Ishiyama, T.; Sugito, N.; Echigo, T.; Sato, H.; Suzuki, T.
2012-04-01
A month after the gigantic M9.0 Tohoku-oki earthquake of March 11, an M7.0 intraplate earthquake occurred on April 11 at a depth of 5 km beneath the coastal area near Iwaki city, Fukushima prefecture. The focal mechanism of the mainshock indicates that this earthquake was a normal faulting event. Based on field reconnaissance and LiDAR mapping by the Geospatial Information Authority of Japan, we recognized coseismic surface ruptures, presumably associated with the mainshock. The coseismic surface ruptures extend NNW for about 11 km in a right-stepping en echelon manner. Geomorphic expressions of these ruptures commonly include WSW-facing normal fault scarps and/or drape fold scarps with open cracks on their crests, on the hanging wall sides of steeply west-dipping normal fault planes subparallel to Cretaceous metamorphic rocks. The highest topographic scarp is about 2.3 m. In this study we introduce preliminary results of a trenching survey across the coseismic surface ruptures at the Shionohira site, to resolve the timing of paleoseismic events along the Shionohira fault. Trench excavations were carried out at two sites (Ichinokura and Shionohira) in Iwaki, Fukushima. At the Shionohira site a 2-m-deep trench was excavated across the coseismic fault scarp that emerged on the alluvial plain on the eastern flank of the Abukuma Mountains. On the trench walls we observed pairs of steeply dipping normal faults that deform Neogene to Paleogene conglomerates and unconformably overlying, late Quaternary to Holocene fluvial units. The sense of fault slip observed on the trench walls (large dip-slip with a small sinistral component) is consistent with that estimated from the coseismic surface ruptures. The fault throw estimated from the separation of piercing points on lower Unit I and the vertical structural relief on folded upper Unit I is consistent with the topographic height of the coseismic fault scarp at the trench site. In contrast, the vertical separation of Unit II, unconformably overlain by Unit I, is about 1.5 m, twice as large as the coseismic vertical component of slip, indicative of a penultimate seismic event prior to the 2011 earthquake. Abrupt thickening of the overlying Unit I may also suggest preexisting topographic relief prior to its deposition. Radiocarbon dating of charred materials included in event horizons and tephrostratigraphy at the two sites indicate that the penultimate event prior to the 2011 event might have occurred at about 40 ka. This normal-fault earthquake contrasts with the compressional or neutral stress regimes in the Tohoku region before the 2011 megaquake, and the rarity of normal faulting earthquakes inferred from these paleoseismic studies may reflect their mechanical relation to gigantic megathrust earthquakes, such as the unusual, enhanced extensional stress on the hangingwall block induced by the mainshock and/or postseismic creep after the M~9 earthquake.
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region, including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied both to past earthquake locations and to fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity and that future large-magnitude events occur more likely in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and the density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-values) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and of SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate model for long-term forecasting on timescales of years to decades for the European region.
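The weighted sum of the two spatial densities can be sketched with off-the-shelf Gaussian kernel density estimates; the coordinates and the fixed weight below are illustrative, whereas the actual model optimizes kernel bandwidths and uses a magnitude-dependent weighting.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
quake_xy = rng.normal([10.0, 45.0], 0.5, size=(200, 2)).T   # past epicenters (lon, lat)
fault_xy = np.stack([np.linspace(9.0, 11.0, 50),             # points sampled along a
                     np.linspace(44.0, 46.0, 50)])           # mapped fault trace

kde_quakes = gaussian_kde(quake_xy)
kde_faults = gaussian_kde(fault_xy)

def spatial_density(xy, w=0.6):
    # w weights smoothed seismicity against fault moment density (fixed here,
    # magnitude-dependent in the actual model).
    return w * kde_quakes(xy) + (1.0 - w) * kde_faults(xy)

print(spatial_density(np.array([[10.0], [45.0]])))           # density at one grid node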
A systematic investigation into b values prior to coming large earthquakes
NASA Astrophysics Data System (ADS)
Nanjo, K.; Yoshida, A.
2017-12-01
The Gutenberg-Richter law for the frequency-magnitude distribution of earthquakes is well established in seismology. The b value, the slope of the distribution, is thought to reflect the heterogeneity of the seismogenic region (e.g. Mogi 1962) and the development of interplate coupling in subduction zones (e.g. Nanjo et al., 2012; Tormann et al. 2015). In the laboratory as well as in the Earth's crust, the b value is known to be inversely dependent on differential stress (Scholz 1968, 2015). In this context, the b value could serve as a stress meter to help locate asperities, the highly stressed patches in fault planes where large rupture energy is released (e.g. Schorlemmer & Wiemer 2005). However, it remains uncertain whether the b values of events prior to impending large earthquakes are always significantly low. To clarify this issue, we conducted a systematic investigation into b values prior to large earthquakes in the Japanese mainland. Since no physical definition of mainshock, foreshock, and aftershock is known, we simply investigated the b values of events with magnitudes larger than a lower-cutoff magnitude, Mc, prior to earthquakes equal to or larger than a threshold magnitude, Mth, where Mth > Mc. Schorlemmer et al. (2005) showed that the b value differs significantly among fault types, which is thought to reflect the dependence of fracture stress on fault type. We therefore classified fault motions into normal, strike-slip, and thrust types based on the mechanism solutions of earthquakes and computed b values for events associated with each fault motion separately. We found that the target events (M ≥ Mth) and the events that occurred prior to them both show a common systematic change in b: normal faulting events have the highest b values, thrust events the lowest, and strike-slip events intermediate values. Moreover, we found that the b values for the prior events (M ≥ Mc) are significantly lower than those for the target events (M ≥ Mth), though the b values change somewhat depending on the parameter values chosen to define the target events (M ≥ Mth) and the prior events (M ≥ Mc). This finding indicates that the b value could be used as an effective index for foreseeing the occurrence of large earthquakes, if the parameter values are well tuned.
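For reference, b values of this kind are conventionally estimated with the Aki (1965) maximum-likelihood formula, b = log10(e) / (mean(M) - Mc). A minimal sketch on synthetic continuous magnitudes follows; binned catalogs would additionally shift Mc by half a bin width (the Utsu correction), omitted here.

import numpy as np

def b_value_mle(mags, mc):
    m = np.asarray(mags)
    m = m[m >= mc]                         # keep only events above completeness
    return np.log10(np.e) / (m.mean() - mc)

rng = np.random.default_rng(0)
true_b = 0.9                               # Gutenberg-Richter slope of the synthetic sample
mags = 4.0 + rng.exponential(scale=np.log10(np.e) / true_b, size=5000)
print(round(b_value_mle(mags, mc=4.0), 2))  # recovers ~0.9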
NASA Astrophysics Data System (ADS)
Breshears, D. D.; Allen, C. D.; McDowell, N. G.; Adams, H. D.; Barnes, M.; Barron-Gafford, G.; Bradford, J. B.; Cobb, N.; Field, J. P.; Froend, R.; Fontaine, J. B.; Garcia, E.; Hardy, G. E. S. J.; Huxman, T. E.; Kala, J.; Lague, M. M.; Martinez-Yrizar, A.; Matusick, G.; Minor, D. M.; Moore, D. J.; Ng, M.; Ruthrof, K. X.; Saleska, S. R.; Stark, S. C.; Swann, A. L. S.; Villegas, J. C.; Williams, A. P.; Zou, C.
2017-12-01
Evidence that tree mortality is increasingly likely to occur in extensive die-off events across the terrestrial biosphere continues to mount. The consequences of such extensive mortality events are potentially profound, not only for the locations where die-off events occur, but also for other locations that could be impacted via ecoclimate teleconnections, whereby the land surface changes associated with die-off in one location could alter atmospheric circulation patterns and affect vegetation elsewhere. Here, we (1) recap the background of tree mortality as an emerging environmental issue; (2) highlight recent advances that could help us improve predictions of vulnerability to tree mortality, including the underlying importance of hydraulic failure, the potential to develop climatic envelopes specific to tree mortality events, and consideration of the role of heat waves; and (3) present initial bounding simulations that indicate the potential for tree die-off events in different locations to alter ecoclimate teleconnections. As we move toward globally coordinated carbon accounting and management, the high vulnerability to tree die-off events and the potential for such events to affect vegetation elsewhere will both need to be accounted for.
Streaks of Aftershocks Following the 2004 Sumatra-Andaman Earthquake
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Schaff, D. P.; Engdahl, E. R.; Diehl, T.
2009-12-01
Five years after the devastating 26 December 2004 M 9.3 Sumatra-Andaman earthquake, regional and global seismic networks have recorded tens of thousands of aftershocks. We use bulletin data from the International Seismological Centre (ISC) and the National Earthquake Information Center (NEIC), and waveforms from IRIS, to relocate more than 20,000 hypocenters between 1964 and 2008 using teleseismic cross-correlation and double-difference methods. Relative location uncertainties of a few km or less allow for detailed analysis of the seismogenic faults activated as a result of the massive stress changes associated with the megathrust event. We focus our interest on an area of intense aftershock activity offshore Banda Aceh in northern Sumatra, where the relocated epicenters reveal a pattern of northeast-oriented streaks. The two most prominent streaks are ~70 km long with widths of only a few km. Some sections of the streaks are formed by what appear to be small, NNE-striking sub-streaks. Hypocenter depths indicate that the events locate both on the plate interface and in the overriding Sunda plate, within a ~20 km wide band overlying the plate interface. Events on the plate interface indicate that the slab dip changes from ~20° to ~30° at around 50 km depth. Locations of the larger events in the overriding plate indicate an extension of the steeper-dipping megathrust fault to the surface, imaging what appears to be a major splay fault that reaches the surface somewhere near the western edge of the Aceh basin. Additional secondary splay faults, which branch off the plate interface at shallower depths, may explain the diffuse distribution of smaller events in the overriding plate, although their relative locations are less well constrained. Focal mechanisms support the relocation results. They show a narrowing range of fault dips with increasing distance from the trench. Specifically, they show reverse faulting on ~30° dipping faults above the shallow (20°) dipping plate interface. The observation of active splay faults associated with the megathrust event is consistent with co- and post-seismic motion data, and may have significant implications for the generation and size of the tsunami that caused 300,000 deaths.
Personius, S.F.; Mahan, S.A.
2003-01-01
The Hubbell Spring fault zone forms the modern eastern margin of the Rio Grande rift in the Albuquerque basin of north-central New Mexico. Knowledge of its seismic potential is important because the fault zone transects Kirtland Air Force Base/Sandia National Laboratories and underlies the southern Albuquerque metropolitan area. No earthquakes larger than ML 5.5 have been reported in the last 150 years in this region, so we excavated the first trench across this fault zone to determine its late Quaternary paleoseismic history. Our trench excavations revealed a complex, 16-m-wide fault zone overlain by four tapered blankets of mixed eolian sand and minor colluvium that we infer were deposited after four large-magnitude, surface-rupturing earthquakes. Although the first (oldest) rupture event is undated, we used luminescence (thermoluminescence and infrared-stimulated luminescence) ages to determine that the subsequent three rupture events occurred about 56 ± 6, 29 ± 3, and 12 ± 1 ka. These ages yield recurrence intervals of 27 and 17 k.y. between events and an elapsed time of 12 k.y. since the latest surface-rupturing paleoearthquake. Slip rates are not well constrained, but our preferred average slip rate since rupture event 2 (post-56 ka) is 0.05 mm/yr, and interval slip rates between the last three events are 0.06 and 0.09 mm/yr, respectively. Vertical displacements of 1-2 m per event and probable rupture lengths of 34-43 km indicate probable paleoearthquake magnitudes (Ms or Mw) of 6.8-7.1. Future earthquakes of this size likely would cause strong ground motions in the Albuquerque metropolitan area.
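The recurrence and slip-rate arithmetic in the abstract above is simple enough to show directly: differencing the event ages gives the inter-event times, and dividing a per-event offset by the interval gives an interval slip rate. The displacements below are illustrative values within the quoted 1-2 m range, not the measured offsets.

```python
# Recurrence intervals and interval slip rates from paleoseismic event ages.
ages_ka = [56.0, 29.0, 12.0]                 # luminescence ages of the last three ruptures
intervals = [a - b for a, b in zip(ages_ka, ages_ka[1:])]
print(intervals)                              # [27.0, 17.0] k.y.

disp_m = [1.6, 1.5]                           # hypothetical per-event vertical offsets (m)
# d [m] / (dt [k.y.] * 1000 [yr/k.y.]) gives m/yr; * 1000 converts to mm/yr
rates = [d / (dt * 1000.0) * 1000.0 for d, dt in zip(disp_m, intervals)]
print([round(r, 2) for r in rates])           # ~[0.06, 0.09] mm/yr, as quoted above
```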
Detailed seismicity analysis revealing the dynamics of the southern Dead Sea area
NASA Astrophysics Data System (ADS)
Braeuer, B.; Asch, G.; Hofstetter, R.; Haberland, Ch.; Jaser, D.; El-Kelani, R.; Weber, M.
2014-10-01
Within the framework of the international DESIRE (DEad Sea Integrated REsearch) project, a dense temporary local seismological network was operated in the southern Dead Sea area. During 18 recording months, 648 events were detected. Building on an already published tomography study, we analyse the clustering, focal mechanisms, and statistics of the microseismicity, and its distribution in relation to the velocity models from the tomography. The determined b value of 0.74 implies a relatively high risk of large earthquakes given the moderate microseismic activity. The distribution of the seismicity indicates an asymmetric basin with a vertical strike-slip fault forming the eastern boundary of the basin, and an inclined western boundary made up of strike-slip and normal faults. Furthermore, significant differences between the areas north and south of the Bokek fault were observed. South of the Bokek fault, the western boundary is inactive while the entire seismicity occurs on the eastern boundary and below the basin-fill sediments. The largest events occurred here, and their focal mechanisms represent the northwards transform motion of the Arabian plate along the Dead Sea Transform. The vertical extension of the spatial and temporal cluster from February 2007 is interpreted as being related to the locking of the region around the Bokek fault. North of the Bokek fault, similar seismic activity occurs on both boundaries, most notably within the basin-fill sediments, displaying mainly small events with strike-slip mechanisms and normal faulting in an EW direction. We therefore suggest that the Bokek fault forms the border between the single transform fault and the pull-apart basin with two active border faults.
Stress and Strain Rates from Faults Reconstructed by Earthquakes Relocalization
NASA Astrophysics Data System (ADS)
Morra, G.; Chiaraluce, L.; Di Stefano, R.; Michele, M.; Cambiotti, G.; Yuen, D. A.; Brunsvik, B.
2017-12-01
The recurrence of main earthquakes on the same fault depends on the kinematic setting, host lithologies, and fault geometry and population. Northern and central Italy transitioned from convergence to post-orogenic extension. This has produced a unique and very complex tectonic setting, characterized by superimposed normal faults crossing different geologic domains, that allows us to investigate a variety of seismic manifestations. In the past twenty years three seismic sequences (1997 Colfiorito, 2009 L'Aquila and 2016-17 Amatrice-Norcia-Visso) activated a 150-km-long normal fault system located between the central and northern Apennines, allowing the recording of thousands of seismic events. Both the 1997 and the 2009 main shocks were preceded by a series of small pre-shocks occurring in proximity to the future largest events. It has been proposed and modelled that the seismicity pattern of the two foreshock sequences was caused by an active dilatancy phenomenon, due to fluid flow in the source area. Seismic activity has continued intensively until three events with 6.0
NASA Astrophysics Data System (ADS)
Haneda, Kiyofumi; Kajima, Toshio; Koyama, Tadashi; Muranaka, Hiroyuki; Dojo, Hirofumi; Aratani, Yasuhiko
2002-05-01
The target of our study is to analyze the level of necessary security requirements, to search for suitable security measures, and to optimize the distribution of security across every portion of the medical practice. Quantitative expression is introduced wherever possible, to enable simplified follow-up security procedures and easy evaluation of security outcomes or results. System analysis using fault tree analysis (FTA) showed that subdividing system elements into detailed groups yields a much more accurate analysis. Such subdivided composition factors depend greatly on the behavior of staff, interactive terminal devices, the kinds of services provided, and network routes. Security measures were then implemented based on the analysis results. In conclusion, we identified the methods needed to determine the required level of security, proposed security measures for each medical information system, and identified the basic events and combinations of events that comprise the threat composition factors. Methods for identifying suitable security measures were found and implemented: risk factors for each basic event, the number of elements for each composition factor, and potential security measures were determined. Methods to optimize the security measures for each medical information system were proposed, developing the most efficient distribution of risk factors for basic events.
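As background to both FTA security studies here, the quantitative step of a fault tree reduces to combining basic-event probabilities upward through AND gates (all inputs must occur) and OR gates (any input suffices). A minimal sketch, assuming independent basic events with hypothetical probabilities loosely themed on the medical-security example:

```python
# Fault tree gate arithmetic for independent basic events (toy probabilities).
def and_gate(probs):
    """AND gate: all input events must occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """OR gate: top occurs unless every input is absent."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical basic events: weak password AND unlocked terminal left by staff
p_unauthorized_login = and_gate([0.05, 0.20])
# Top event: any of unauthorized login, network breach, or media theft
p_top = or_gate([p_unauthorized_login, 0.01, 0.002])
print(round(p_top, 5))  # ~0.02186
```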
Haneda, Kiyofumi; Umeda, Tokuo; Koyama, Tadashi; Harauchi, Hajime; Inamura, Kiyonari
2002-01-01
The target of our study is to establish a methodology for analyzing the level of security requirements, for finding suitable security measures, and for optimizing the distribution of security across every portion of medical practice. Quantitative expression is introduced wherever possible, for the purpose of easy follow-up of security procedures and easy evaluation of security outcomes or results. System analysis by fault tree analysis (FTA) clarified that subdividing system elements in detail contributes to a much more accurate analysis. Such subdivided composition factors depended strongly on staff behavior, interactive terminal devices, kinds of service, and network routes. In conclusion, we found methods to analyze the level of security requirements for each medical information system employing FTA, and identified the basic events for each composition factor and the combinations of basic events. Methods for finding suitable security measures were established: risk factors for each basic event, the number of elements for each composition factor, and candidate security measure elements were identified. A method to optimize the security measures for each medical information system was proposed: the optimum distribution of risk factors over basic events was determined, making comparisons between medical information systems possible.
Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting
NASA Technical Reports Server (NTRS)
Bergman, Eric A.; Solomon, Sean C.
1987-01-01
The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg with the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform, which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compressional jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied, and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management through efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize resources for embedding computational task graphs in the system architecture and for reconfiguring these tasks after a failure has occurred. Computational structures represented by a path and by a complete binary tree were considered, and mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
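The path-into-hypercube embedding mentioned above has a classical dilation-1 solution via the binary reflected Gray code: consecutive Gray codes differ in exactly one bit, so consecutive tasks in the chain land on neighboring hypercube processors. A minimal sketch of that standard construction (the task-chain framing is illustrative, not the paper's specific algorithm):

```python
# Embed a linear task chain into a hypercube with dilation 1 via Gray codes.
def gray(i: int) -> int:
    """i-th binary reflected Gray code."""
    return i ^ (i >> 1)

def embed_path_in_hypercube(num_tasks: int, dim: int):
    assert num_tasks <= 2 ** dim, "hypercube too small for the task chain"
    return [gray(i) for i in range(num_tasks)]

nodes = embed_path_in_hypercube(num_tasks=8, dim=3)
print([format(n, "03b") for n in nodes])
# Consecutive nodes differ in exactly one bit, i.e. they are hypercube neighbors.
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(nodes, nodes[1:]))
```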
Engineering risk assessment for emergency disposal projects of sudden water pollution incidents.
Shi, Bin; Jiang, Jiping; Liu, Rentao; Khan, Afed Ullah; Wang, Peng
2017-06-01
Without an engineering risk assessment for emergency disposal in response to sudden water pollution incidents, responders are prone to be challenged during emergency decision making. To address this gap, the concept and framework of emergency disposal engineering risks are reported in this paper. The proposed risk index system covers three stages consistent with the progress of an emergency disposal project. Fuzzy fault tree analysis (FFTA), a logical and diagrammatic method, was developed to evaluate the potential failure during the process of emergency disposal. The probabilities of the basic events and their combinations that could cause the failure of an emergency disposal project were calculated based on the case of an emergency disposal project for an aniline pollution incident in the Zhuozhang River, Changzhi, China, in 2014. The critical events that can cause the occurrence of a top event (TE) were identified according to their contribution. Finally, advice on how to take measures using limited resources to prevent the failure of a TE is given according to the quantified results of risk magnitude. The proposed approach could be a useful safeguard for the implementation of an emergency disposal project during the process of emergency response.
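A common form of the fuzzy fault tree arithmetic named above represents each basic-event probability as a triangular fuzzy number (low, mode, high) and applies the AND/OR gate formulas component-wise; both operations are monotone, so the triangular ordering is preserved. A minimal sketch with hypothetical expert-elicited values (the actual membership functions and events in the study may differ):

```python
# Fuzzy fault tree gates over triangular fuzzy numbers (low, mode, high).
from math import prod

def fuzzy_and(events):
    """AND gate: component-wise product of the triples."""
    return tuple(prod(e[k] for e in events) for k in range(3))

def fuzzy_or(events):
    """OR gate: 1 - prod(1 - p), applied component-wise."""
    return tuple(1.0 - prod(1.0 - e[k] for e in events) for k in range(3))

# Hypothetical basic events of a disposal-project failure
pump_fail = (0.01, 0.02, 0.05)
dosing_error = (0.02, 0.04, 0.08)

print(fuzzy_and([pump_fail, dosing_error]))  # both must occur
print(fuzzy_or([pump_fail, dosing_error]))   # either suffices
```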
Failure mode effect analysis and fault tree analysis as a combined methodology in risk management
NASA Astrophysics Data System (ADS)
Wessiani, N. A.; Yoshio, F.
2018-04-01
Many studies have reported the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most studies choose only one of these two methods in their risk management methodology. Combining the two methods, on the other hand, reduces the drawbacks each has when implemented separately. This paper aims to combine the methodologies of FMEA and FTA in assessing risk. A case study in a metal company illustrates how this methodology can be implemented. In the case study, the combined methodology assesses the internal risks that occur in the production process; those internal risks should then be mitigated based on their level of risk.
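In a combined workflow of this kind, the FMEA stage typically scores each failure mode with a Risk Priority Number (RPN = severity × occurrence × detection), and the high-RPN modes are then carried forward as basic events of a fault tree. A minimal sketch of that first stage; the failure modes, scores, and threshold are hypothetical, not the paper's case-study data:

```python
# FMEA screening step: rank failure modes by RPN, keep the critical ones for FTA.
failure_modes = [
    # (name, severity 1-10, occurrence 1-10, detection 1-10)
    ("die wear", 7, 6, 4),
    ("furnace temperature drift", 8, 3, 5),
    ("operator mis-set feed rate", 5, 4, 2),
]

rpn = {name: s * o * d for name, s, o, d in failure_modes}
threshold = 100  # hypothetical cut-off for "carry into the fault tree"
critical = sorted((m for m in rpn if rpn[m] >= threshold), key=rpn.get, reverse=True)

print(rpn)                                       # all RPN scores
print("FTA basic-event candidates:", critical)   # modes above the threshold
```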
Magmatic controls on axial relief and faulting at mid-ocean ridges
NASA Astrophysics Data System (ADS)
Liu, Zhonglan; Buck, W. Roger
2018-06-01
Previous models do not simultaneously reproduce the observed range of axial relief and fault patterns at plate spreading centers. We suggest that this failure is due to the approximation that magmatic dikes open continuously rather than in discrete events. During short-lived events, dikes open not only in the strong axial lithosphere but also some distance into the underlying weaker asthenosphere. Axial valley relief affects the partitioning of magma between the lithosphere and asthenosphere during diking events. The deeper the valley, the more magma goes into lithospheric dikes in each event and so the greater the average opening rate of those dikes. The long-term rate of lithospheric dike opening controls the faulting rate and axial depth. The feedback between axial valley depth D and lithospheric dike opening rate allows us to analytically relate steady-state values of D to lithospheric thickness HL and crustal thickness HC. A two-dimensional numerical model with a fixed axial lithospheric structure illustrates the analytic model's implications for axial faulting. The predictions of this new model are broadly consistent with global and segment-scale trends of axial depth and fault patterns with HL and HC.
On simultaneous tilt and creep observations on the San Andreas Fault
Johnston, M.J.S.; McHugh, S.; Burford, S.
1976-01-01
The installation of an array of tiltmeters along the San Andreas Fault has provided an excellent opportunity to study the amplitude and spatial scale of the tilt fields associated with fault creep. We report here preliminary results from, and some implications of, a search for interrelated surface tilts and creep event observations at four pairs of tiltmeters and creepmeters along an active 20-km stretch of the San Andreas Fault. We have observed clear creep-related tilts above the instrument resolution (10⁻⁸ rad) only on a tiltmeter less than 0.5 km from the fault. The tilt events always preceded surface creep observations by 2-12 min, and were not purely transient in character. © 1975 Nature Publishing Group.
Surface faulting. A preliminary view
Sharp, R.V.
1989-01-01
This description of surface faulting near Spitak, Armenia, is based on a field inspection made December 22-26, 1988. The surface rupture west of Spitak, displacement of the ground surface, pre-earthquake surface expressions of the fault, and photolineaments in Landsat images are described, and the surface faulting is compared to aftershocks. It is concluded that the 2 meters of maximum surface displacement fits well within the range of reliably measured maximum surface offsets for historic reverse and oblique-reverse faulting events throughout the world. By contrast, the presently known length of surface rupture near Spitak, between 8 and 13 km, is shorter than that of any other reverse or oblique-reverse event of magnitude greater than 6.0. This may be a reason to suppose that additional surface rupture remains unmapped.
An Examination of Seismicity Linking the Solomon Islands and Vanuatu Subduction Zones
NASA Astrophysics Data System (ADS)
Neely, J. S.; Furlong, K. P.
2015-12-01
The Solomon Islands-Vanuatu composite subduction zone represents a tectonically complex region along the Pacific-Australia plate boundary in the southwest Pacific Ocean. Here the Australia plate subducts under the Pacific plate in two segments: the South Solomon Trench and the Vanuatu Trench. The two subducting sections are offset by a 200-km-long transform fault, the San Cristobal Trough (SCT), which acts as a Subduction-Transform Edge Propagator (STEP) fault. The subducting segments have experienced much more frequent and larger seismic events than the STEP fault. The northern Vanuatu Trench hosted an M8.0 earthquake in 2013. In 2014, at the juncture of the western terminus of the SCT and the southern South Solomon Trench, two earthquakes (M7.4 and M7.6) occurred with disparate mechanisms (dominantly thrust and strike-slip, respectively), which we interpret to indicate the tearing of the Australia plate as its northern section subducts and its southern section translates along the SCT. During the 2013-2014 timeframe, little seismic activity occurred along the STEP fault. However, in May 2015, three M6.8-6.9 strike-slip events occurred in rapid succession as the STEP fault ruptured east to west. These recent events share similarities with a 1993 strike-slip STEP sequence on the SCT. Analysis of the 1993 and 2015 STEP earthquake sequences provides constraints on the plate boundary geometry of this major transform fault. Preliminary research suggests that plate motion along the STEP fault is partitioned between larger east-west-oriented strike-slip events and smaller north-south thrust earthquakes. Additionally, the differences in seismic activity between the subducting slabs and the STEP fault can provide insights into how stress is transferred along the plate boundary and the mechanisms by which that stress is released.
NASA Astrophysics Data System (ADS)
Barão, Leonardo M.; Trzaskos, Barbara; Vesely, Fernando F.; de Castro, Luís Gustavo; Ferreira, Francisco J. F.; Vasconcellos, Eleonora M. G.; Barbosa, Tiago C.
2017-12-01
The Guaratubinha Basin is a late Neoproterozoic volcano-sedimentary basin included among the transitional-stage basins of the South American Platform. The aim of this study is to investigate its tectonic evolution through a detailed structural analysis based on remote sensing and field data. The structural and aerogeophysical data indicate that at least three major deformational events affected the basin. Event E1 caused the activation of the two main basin-bounding fault zones, the Guaratubinha Master Fault and the Guaricana Shear Zone. These structures, oriented N20-45E, are associated with well-defined right-lateral to oblique vertical faults, conjugate normal faults, and vertical flow structures. Progressive transtensional deformation along the two main fault systems was the main mechanism for basin formation and the deposition of thick coarse-grained deposits close to the basin borders. The continued opening of the basin drove intense intermediate and acidic magmatism as well as the deposition of volcaniclastic sediments. Event E2 records generalized compression, expressed as minor thrust faults with tectonic transport toward the northwest and left-lateral activation of the NNE-SSW Palmital Shear Zone. Event E3 is related to the Mesozoic tectonism associated with the South Atlantic opening, which generated diabase dykes and predominantly right-lateral strike-slip faults oriented N10-50W. The basin's rhomboidal geometry with its long axis parallel to major Precambrian shear zones, the predominance of high-angle strike-slip or oblique faults, the asymmetric distribution of geological units, and field evidence for concomitant Neoproterozoic magmatism and strike-slip movements are consistent with pull-apart basins reported in the literature.
Observations of Static Coulomb Stress Triggering During the Mw 5.7 Pawnee Earthquake Sequence
NASA Astrophysics Data System (ADS)
Pennington, C.; Chen, X.; Nakata, N.; Chang, J. C.
2016-12-01
The Pawnee earthquake occurred at 12:02 UTC on 3 September 2016, was felt throughout Oklahoma, and is the largest event recorded in Oklahoma's instrumented history. The earthquake occurred near the junction of two previously mapped faults (Watchorn Fault and Labette Fault), but the fault that actually ruptured was a left-lateral unmapped basement fault (now known as the Sooner Lake Fault) with a strike of 107°, which is conjugate to an optimally oriented segment of the Labette Fault (referred to as the OOF). We located 634 events from both before and after the mainshock (updated on September 15, 2016) and use these locations to map other seismogenic faults in the area. Examining the catalog, we found two episodes of seismicity, starting 100 days and 40 days prior to the mainshock; each episode comprises two clusters occurring two days apart, on the OOF and near the mainshock. The near-simultaneous occurrence of clusters suggests possible stress interaction between the Sooner Lake Fault and the Labette Fault. We examined the Coulomb stress changes on the surrounding faults caused by the mainshock and found an increase of Coulomb stress along the rakes of mapped faults in the area, the highest being along the Sooner Lake Fault and the OOF segment of the Labette Fault (see Fig. 1). These faults experienced up to 5 bars of positive Coulomb stress increase, matching the areas that experienced the most aftershocks. To better understand the effect of the Coulomb stress on the aftershocks, we plan to refine the aftershock catalog over a longer period and the focal mechanisms, in order to obtain accurate nodal planes that will be used to determine whether and how the aftershocks were triggered by the Coulomb stress changes. We will also examine and refine the focal mechanisms produced for events that occurred both before and after the mainshock to investigate Coulomb stress interaction. Fig. 1. (a) Map of faults in the Pawnee area; the red line is the source fault, part of the Sooner Lake Fault (green and red line segments). The optimally oriented segment of the Labette Fault (OOF) is shown in blue. (b) Coulomb stress change for individual rakes after rupture along the source fault.
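The static Coulomb stress metric used in the study above has a standard form, ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change resolved in the receiver fault's rake direction, Δσn is the normal stress change (positive for unclamping), and μ′ is the effective friction coefficient. A one-function sketch with illustrative numbers (not values from the study):

```python
# Static Coulomb failure stress change on a receiver fault.
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """dCFS = d(tau) + mu' * d(sigma_n); positive brings the fault closer to failure."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# e.g. 0.4 MPa more shear plus 0.25 MPa of unclamping:
print(coulomb_stress_change(0.4, 0.25))  # 0.5 MPa = 5 bars, the order quoted above
```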
Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.
2009-01-01
The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
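The training step described above can be approximated in a few lines with scikit-learn's DecisionTreeClassifier using the entropy criterion (the information-gain measure C4.5 also uses; note sklearn implements CART rather than C4.5 itself). The sensor features, leak labels, and data below are synthetic stand-ins for DRTM output, not the study's data:

```python
# Train a decision tree to classify nominal vs. leak-location labels (toy data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 300
X = rng.normal(size=(n, 3))  # toy features, e.g. pressures/temperatures
y = np.where(X[:, 0] > 1.0, "leak_loc_1",
     np.where(X[:, 1] > 1.0, "leak_loc_2", "nominal"))

clf = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
clf.fit(X, y)
print(clf.predict([[1.5, 0.0, 0.0], [0.0, 1.5, 0.0], [0.0, 0.0, 0.0]]))
# expected on these clean synthetic cases: ['leak_loc_1' 'leak_loc_2' 'nominal']
```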
NASA Astrophysics Data System (ADS)
Gu, C.; Mighani, S.; Prieto, G. A.; Mok, U.; Evans, J. B.; Hager, B. H.; Toksoz, M. N.
2017-12-01
Repeating earthquakes have been found in subduction zones and interpreted as repeated ruptures of small local asperities. Repeating earthquakes have also been found in oil/gas fields, interpreted as the reactivation of pre-existing faults due to fluid injection/extraction. To mimic the rupture of a fault with local asperities, we designed a "stick-slip" experiment using a saw-cut cylindrical Lucite sample with sharp localized ridges parallel to the strike of the fault plane. The sample was subjected to conventional triaxial loading with a constant confining pressure of 10 MPa. The axial load was then increased to 6 MPa at a constant rate of 0.12 MPa/s until sliding occurred along the fault plane. Ultrasonic acoustic emissions (AEs) were monitored with eight PZT sensors. Two cycles of AEs were detected, with an occurrence rate that decreased from the beginning to the end of each cycle while the relative magnitudes increased. Correlation analysis indicated that these AEs clustered into two groups: those with frequency content between 200-300 kHz and a second group with frequency content between 10-50 kHz. The locations of the high-frequency events, with almost identical waveforms, show that these events originate from the sharp localized ridges on the saw-cut plane. The locations of the low-frequency events show, in each cycle, a migration toward the high-frequency events. In this single experiment, the proximity of the low-frequency events correlated with the subsequent triggering of large high-frequency repeating events.
MacDonald III, Angus W.; Zick, Jennifer L.; Chafee, Matthew V.; Netoff, Theoden I.
2016-01-01
The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time, the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect of psychiatry's syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach provides affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between-diagnosis comorbidity. PMID:26779007
NASA Astrophysics Data System (ADS)
King, Chi-Yu; Chia, Yeeping
2017-12-01
Streamflow recorded by a stream gauge located 4 km from the epicenter of the 1999 M7.6 Chi-Chi earthquake in central Taiwan showed a large and rapid anomalous increase of 124 m³/s starting 4 days before the earthquake. This increase was followed by a comparable co-seismic drop to below the background level for 8 months. In addition, groundwater levels recorded at a well 1.5 km east of the seismogenic fault showed an anomalous rise 2 days before the earthquake, and then a unique 4-cm drop beginning 3 h before the earthquake. The anomalous streamflow increase is attributed to gravity-driven groundwater discharge into the creek through the openings of existing fractures in the steep creek banks crossed by the upstream Shueilikun fault zone, as a result of pre-earthquake crustal buckling. The continued tectonic movement and buckling, together with the downward flow of water in the crust, may have triggered some shallow slow-slip events in the Shueilikun and other nearby fault zones. When these events propagated down-dip to the decollement, where the faults merge with the seismogenic Chelungpu fault, they may have triggered other slow-slip events propagating toward the asperity at the hypocenter and the Chelungpu fault. These events may then have caused the observed groundwater-level anomaly and helped to trigger the earthquake.
NASA Astrophysics Data System (ADS)
Bréda, Nathalie; Badeau, Vincent
2008-09-01
The aim of this paper is to illustrate how some extreme events can affect forest ecosystems. Forest tree response can be analysed using dendroecological methods, as tree-ring widths are strongly controlled by climatic or biotic events. Years with such events induce similar tree responses and are called pointer years. They can result from extreme climatic events like frost, a heat wave, spring waterlogging, drought, or insect damage. Forest tree species show contrasting responses to climatic hazards, depending on their sensitivity to water shortage or temperature hardening, as illustrated by our dendrochronological database. For foresters, a drought or a pest disease is an extreme event if visible and durable symptoms are induced (leaf discolouration, leaf loss, mortality of perennial organs, tree dieback and mortality). These symptoms are shown here, lagging one or several years behind a climatic or biotic event, in forest decline cases in progress since the 2003 drought or attributed to previous severe droughts or defoliations in France. Tree growth or vitality recovery is illustrated, and the functional interpretation of the long-lasting memory of trees is discussed. A coupled approach linking dendrochronology and ecophysiology helps in discussing the vulnerability of forest stands, and suggests management advice to mitigate extreme drought and cope with selective mortality.
NASA Astrophysics Data System (ADS)
Mencin, D.; Hodgkinson, K. M.; Mattioli, G. S.; Johnson, W.; Gottlieb, M. H.; Meertens, C. M.
2016-12-01
Three-component strainmeter data from numerous borehole strainmeters (BSM) along the San Andreas Fault (SAF), including those installed and maintained as part of the EarthScope Plate Boundary Observatory (PBO), demonstrate that the characteristics of creep propagation events with sub-cm slip amplitudes can be quantified at source-to-sensor distances of 10 km. The strainmeters are installed at depths of approximately 100-250 m and record data at 100 samples per second. Noise levels at periods of less than a few minutes are 10⁻¹¹ strain and, for periods of hours to weeks (the band of interest in the search for slow slip events), are of the order of 10⁻⁸ to 10⁻¹⁰ strain. Strainmeters, creepmeters, and tiltmeters have been operated along the San Andreas Fault, observing creep events, for decades. BSM data proximal to the SAF cover a significant temporal portion of the inferred earthquake cycle along this portion of the fault. A single instrument is capable of providing broad-scale constraints on creep event asperity size, location, and depth, and moreover can capture slow slip, coseismic rupture, and afterslip. The synthesis of these BSM data presents a unique opportunity to constrain the partitioning between aseismic and seismic slip on the central SAF. We show that the creepmeters confirm that creep events imaged by the strainmeters, previously catalogued by the authors, are indeed occurring on the SAF and are simultaneously being recorded on local creepmeters. We further show that simple models allow us to loosely constrain the location and depth of a creep event on the fault, even with a single instrument, and to image the accumulation and behavior of surface as well as crustal creep with time.
The 2016 Mihoub (north-central Algeria) earthquake sequence: Seismological and tectonic aspects
NASA Astrophysics Data System (ADS)
Khelif, M. F.; Yelles-Chaouche, A.; Benaissa, Z.; Semmane, F.; Beldjoudi, H.; Haned, A.; Issaadi, A.; Chami, A.; Chimouni, R.; Harbi, A.; Maouche, S.; Dabbouz, G.; Aidi, C.; Kherroubi, A.
2018-06-01
On 28 May 2016 at 23:54 (UTC), an Mw5.4 earthquake occurred in Mihoub village, Algeria, 60 km southeast of Algiers. This earthquake was the largest event in a sequence recorded from 10 April to 15 July 2016. In addition to the permanent national network, a temporary network was installed in the epicentral region after this shock. Recorded event locations allow us to give a general overview of the sequence and reveal the existence of two main fault segments. The first segment, on which the first event in the sequence was located, is near-vertical and trends E-W. The second fault plane, on which the largest event of the sequence was located, dips to the southeast and strikes NE-SW. A total of 46 well-constrained focal mechanisms were calculated. The events located on the E-W-striking fault segment show mainly right-lateral strike-slip (strike N70°E, dip 77° to the SSE, rake 150°). The events located on the NE-SW-striking segment show mainly reverse faulting (strike N60°E, dip 70° to the SE, rake 130°). We calculated the static stress change caused by the first event (Md4.9) of the sequence; the result shows that the fault plane of the largest event in the sequence (Mw5.4) and most of the aftershocks occurred within an area of increased Coulomb stress. Moreover, using the focal mechanisms calculated in this work, we estimated the orientations of the main axes of the local stress tensor ellipsoid. The results confirm previous findings that the general stress field in this area shows orientations aligned NNW-SSE to NW-SE. The 2016 Mihoub earthquake sequence study thus improves our understanding of seismic hazard in north-central Algeria.
Improved FTA methodology and application to subsea pipeline reliability design.
Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan
2014-01-01
An innovative logic tree, Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different thinking approach for risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. Resulting improvements are summarized in comparison table form.
Hubbard, Judith; Shaw, John H.; Dolan, James F.; Pratt, Thomas L.; McAuliffe, Lee J.; Rockwell, Thomas K.
2014-01-01
The Ventura Avenue anticline is one of the fastest uplifting structures in southern California, rising at ∼5 mm/yr. We use well data and seismic reflection profiles to show that the anticline is underlain by the Ventura fault, which extends to seismogenic depth. Fault offset increases with depth, implying that the Ventura Avenue anticline is a fault‐propagation fold. A decrease in the uplift rate since ∼30±10 ka is consistent with the Ventura fault breaking through to the surface at that time and implies that the fault has a recent dip‐slip rate of ∼4.4–6.9 mm/yr. To the west, the Ventura fault and fold trend continues offshore as the Pitas Point fault and its associated hanging wall anticline. The Ventura–Pitas Point fault appears to flatten at about 7.5 km depth to a detachment, called the Sisar decollement, then step down on a blind thrust fault to the north. Other regional faults, including the San Cayetano and Red Mountain faults, link with this system at depth. We suggest that below 7.5 km, these faults may form a nearly continuous surface, posing the threat of large, multisegment earthquakes. Holocene marine terraces on the Ventura Avenue anticline suggest that it grows in discrete events with 5–10 m of uplift, with the latest event having occurred ∼800 years ago (Rockwell, 2011). Uplift this large would require large earthquakes (Mw 7.7–8.1) involving the entire Ventura/Pitas Point system and possibly more structures along strike, such as the San Cayetano fault. Because of the local geography and geology, such events would be associated with significant ground shaking amplification and regional tsunamis.
Risk-informed Maintenance for Non-coherent Systems
NASA Astrophysics Data System (ADS)
Tao, Ye
Probabilistic Safety Assessment (PSA) is a systematic and comprehensive methodology for evaluating the risks associated with a complex engineered technological entity. The information provided by PSA has been increasingly used for regulatory purposes but rarely for informing operation and maintenance activities. As one of the key parts of PSA, Fault Tree Analysis (FTA) models and analyzes the failure processes of engineering and biological systems. Fault trees are composed of logic diagrams that display the state of the system and are constructed using graphical design techniques. Risk Importance Measures (RIMs) are information that can be obtained from both the qualitative and quantitative aspects of FTA. Components within a system can be ranked with respect to each specific criterion defined by each RIM. Through a RIM, a ranking of the components or basic events can be obtained, providing valuable information for risk-informed decision making. Various RIMs have been applied across a range of applications. In order to provide a thorough understanding of RIMs and to interpret their results, they are categorized in this thesis with respect to risk significance (RS) and safety significance (SS), which also ties them to different maintenance activities. When RIMs are used for maintenance purposes, this is called risk-informed maintenance. On the other hand, the majority of work on the FTA method has concentrated on failure logic diagrams restricted to the direct or implied use of AND and OR operators. Such systems are considered coherent systems. However, NOT logic can also contribute to the information produced by PSA. The importance analysis of non-coherent systems remains rather limited, even though the field has received increasing attention over the years. Non-coherent systems introduce difficulties in both the qualitative and quantitative assessment of the fault tree compared with coherent systems. In this thesis, a set of RIMs is analyzed and investigated. The 8 commonly used RIMs (Birnbaum's Measure, Criticality Importance Factor, Fussell-Vesely Measure, Improvement Potential, Conditional Probability, Risk Achievement, Risk Achievement Worth, and Risk Reduction Worth) are extended to non-coherent forms. Both coherent and non-coherent forms are classified into different categories in order to assist different types of maintenance activities. Real systems, such as the Steam Generator Level Control System in a CANDU Nuclear Power Plant (NPP), a Gas Detection System, and the Automatic Power Control System of an experimental nuclear reactor, are presented as case studies to demonstrate the application of the results.
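Several of the coherent-form RIMs named above can be computed directly from the top-event probability by conditioning on one basic event. A minimal sketch for a toy coherent tree TOP = A OR (B AND C), with hypothetical probabilities; the non-coherent extensions developed in the thesis are not shown:

```python
# Four classic risk importance measures for basic event A of a small fault tree.
def top(pa, pb, pc):
    """TOP = A OR (B AND C), independent basic events."""
    return 1.0 - (1.0 - pa) * (1.0 - pb * pc)

pa, pb, pc = 0.01, 0.05, 0.10
q = top(pa, pb, pc)
q_a1 = top(1.0, pb, pc)  # A failed for certain
q_a0 = top(0.0, pb, pc)  # A perfectly reliable

birnbaum = q_a1 - q_a0            # sensitivity of the top event to A
fussell_vesely = (q - q_a0) / q   # fraction of current risk involving A
raw = q_a1 / q                    # Risk Achievement Worth
rrw = q / q_a0                    # Risk Reduction Worth
print(birnbaum, fussell_vesely, raw, rrw)
```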
Testing for Independence between Evolutionary Processes.
Behdenna, Abdelkader; Pothier, Joël; Abby, Sophie S; Lambert, Amaury; Achaz, Guillaume
2016-09-01
Evolutionary events co-occurring along phylogenetic trees usually point to complex adaptive phenomena, sometimes implicating epistasis. While a number of methods have been developed to account for the co-occurrence of events on the same internal or external branch of an evolutionary tree, there is a need to account for the larger diversity of possible relative positions of events in a tree. Here we propose a method to quantify to what extent two or more evolutionary events are associated on a phylogenetic tree. The method is applicable to any discrete character, like substitutions within a coding sequence or gains/losses of a biological function. Our method uses a general approach to statistically test for significant associations between events along the tree, which encompasses both events inseparable on the same branch and events genealogically ordered on different branches. It assumes that the phylogeny and the mapping of branches are known without error. We address this problem from the statistical viewpoint by a linear algebra representation of the localization of the evolutionary events on the tree. We compute the full probability distribution of the number of paired events occurring on the same branch or on different branches of the tree, under a null model of independence where each type of event occurs at a constant rate uniformly across the phylogenetic tree. The strengths and weaknesses of the method are assessed via simulations; we then apply the method to explore the loss of cell motility in intracellular pathogens.
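The null model above (each event type falls on a branch with probability proportional to branch length) is easy to approximate by Monte Carlo, even though the paper derives the distribution exactly via its linear algebra representation. A rough sketch with hypothetical branch lengths and an arbitrary observed count, restricted to the same-branch pairing case for brevity:

```python
# Monte Carlo null distribution of cross-type event pairs sharing a branch.
import numpy as np

rng = np.random.default_rng(7)
branch_lengths = np.array([1.0, 0.5, 2.0, 0.3, 1.2])  # toy tree branches
p = branch_lengths / branch_lengths.sum()

def same_branch_pairs(n_type1, n_type2, n_sims=20_000):
    """Null distribution of (type1, type2) pairs landing on the same branch."""
    counts = np.empty(n_sims, dtype=int)
    for s in range(n_sims):
        b1 = rng.choice(len(p), size=n_type1, p=p)
        b2 = rng.choice(len(p), size=n_type2, p=p)
        counts[s] = sum((b1 == b).sum() * (b2 == b).sum() for b in range(len(p)))
    return counts

null = same_branch_pairs(3, 4)
observed = 6  # hypothetical observed number of co-occurring pairs
print("P(X >= observed) ~", (null >= observed).mean())
```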
Multi-Fault Rupture Scenarios in the Brawley Seismic Zone
NASA Astrophysics Data System (ADS)
Kyriakopoulos, C.; Oglesby, D. D.; Rockwell, T. K.; Meltzner, A. J.; Barall, M.
2017-12-01
Dynamic rupture complexity is strongly affected by both the geometric configuration of a network of faults and the pre-stress conditions. Of the two, the geometric configuration is more likely to be anticipated prior to an event. An important factor in the unpredictability of the final rupture pattern of a group of faults is the time-dependent interaction between them. Dynamic rupture models provide a means to investigate this otherwise inscrutable process. The Brawley Seismic Zone in Southern California is an area in which this approach might be important for inferring potential earthquake sizes and rupture patterns. Dynamic modeling can illuminate how the main faults in this area, the Southern San Andreas (SSAF) and Imperial faults, might interact with the intersecting cross faults, and how the cross faults may modulate rupture on the main faults. We perform 3D finite element modeling of potential earthquakes in this zone assuming an extended array of faults (Figure). Our results include a wide range of ruptures and fault behaviors depending on assumptions about nucleation location, geometric setup, pre-stress conditions, and locking depth. For example, in the majority of our models the cross faults do not strongly participate in the rupture process, giving the impression that they are typically neither an aid nor an obstacle to rupture propagation. However, in some cases, particularly when rupture proceeds slowly on the main faults, the cross faults can indeed participate with significant slip, and can even cause rupture termination on one of the main faults. Furthermore, in a complex network of faults we should not preclude the possibility of a large event nucleating on a smaller fault (e.g. a cross fault) and eventually promoting rupture on the main structure. Recent examples include the 2010 Mw 7.1 Darfield (New Zealand) and Mw 7.2 El Mayor-Cucapah (Mexico) earthquakes, where rupture started on a smaller adjacent segment and later cascaded into a larger event. For that reason, we are investigating scenarios of a moderate rupture on a cross fault, and determining the conditions under which the rupture will propagate onto the adjacent SSAF. Our investigation will provide fundamental insights that may help us interpret faulting behaviors in other areas, such as in the complex Mw 7.8 2016 Kaikoura (New Zealand) earthquake.
The Bay Area Earthquake Cycle:A Paleoseismic Perspective
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Seitz, G.; Lienkaemper, J. J.; Dawson, T. E.; Hecker, S.; William, L.; Kelson, K.
2001-12-01
Stress changes produced by the 1906 San Francisco earthquake had a profound effect on Bay Area seismicity, dramatically reducing it in the 20th century. Whether the San Francisco Bay Region (SFBR) is still within, is just emerging from, or is out of the 1906 stress shadow is an issue of strong debate with important implications for earthquake mechanics and seismic hazards. Historically the SFBR has not experienced one complete earthquake cycle--the interval immediately following, then leading up to and repeating, a 1906-type (multi-segment rupture, M7.9) San Andreas event. The historical record of earthquake occurrence in the SFBR appears to be complete at about M5.5 back to 1850 (Bakun, 1999), which is less than half a cycle. For large events (qualitatively placed at M≥7) Toppozada and Borchardt (1998) suggest the record is complete back to 1776, which may represent about half a cycle. During this period only the southern Hayward fault (1868) and the San Andreas fault (1838?, 1906) have produced their expected large events. New paleoseismic data now provide, for the first time, a more complete view of the most recent pre-1906 SFBR earthquake cycle. Focused paleoseismic efforts under the Bay Area Paleoearthquake Experiment (BAPEX) have developed a chronology of the most recent large earthquakes (MRE) on major SFBR faults. The San Andreas (SA), northern Hayward (NH), southern Hayward (SH), Rodgers Creek (RC), and northern Calaveras (NC) faults provide clear paleoseismic evidence for large events post-1600 AD. The San Gregorio (SG) may have also produced a large earthquake after this date. The timing of the MREs, in years AD, follows. The age ranges are 2-sigma radiocarbon intervals; the dates in parentheses are 1-sigma. MRE ages are: a) SA 1600-1670 (1630-1660), NH 1640-1776 (1635-1776); SH 1635-1776 (1685-1676); RC 1670-1776 (1730-1776); NC 1670-1830?; and San Gregorio 1270-1776 but possibly 1640-1776 (1685-1776). Based on present radiocarbon dating, the NH/SH/RC/NC/(SG?) sequence likely occurred subsequent to the penultimate San Andreas event. Although offset data, which reflect M, are limited, observations indicate that the penultimate SA event ruptured essentially the same fault length as 1906 (Schwartz et al, 1998). In addition, measured point-specific slip (RC, 1.8-2.3m; SG, 3.5-5m) and modeled average slip (SH, 1.9m) for the MREs indicate large magnitude earthquakes on the other regional faults. The major observation from the new paleoseismic data is that during a maximum interval of 176 years (1600 to 1776), significant seismic moment was released in the SFBR by large (M≥6.7) surface-faulting earthquakes on the SA, RC, SH, NH, NC and possibly SG faults. This places an upper limit on the duration of San Andreas interaction effects (stress shadow) on the regional fault system. In fact, the interval between the penultimate San Andreas rupture and large earthquakes on other SFBR faults could have been considerably shorter. We are now 95 years out from 1906, and the SFBR Working Group 99 probability time window extends to 2030, an interval of 124 years. The paleoearthquake data allow that within this amount of time following the penultimate San Andreas event one or more large earthquakes may have occurred on Bay Area faults. Longer paleoearthquake chronologies with more precise event dating in the SFBR and other locales provide the exciting potential for defining regional earthquake cycles and modeling long-term fault interactions.
Knowledge Representation Standards and Interchange Formats for Causal Graphs
NASA Technical Reports Server (NTRS)
Throop, David R.; Malin, Jane T.; Fleming, Land
2005-01-01
In many domains, automated reasoning tools must represent graphs of causally linked events. These include fault-tree analysis, probabilistic risk assessment (PRA), planning, procedures, medical reasoning about disease progression, and functional architectures. Each of these fields has its own requirements for the representation of causation, events, actors and conditions. The representations include ontologies of function and cause, data dictionaries for causal dependency, failure and hazard, and interchange formats between some existing tools. In none of the domains has a generally accepted interchange format emerged. The paper makes progress towards interoperability across the wide range of causal analysis methodologies. We survey existing practice and emerging interchange formats in each of these fields. Setting forth a set of terms and concepts that are broadly shared across the domains, we examine the several ways in which current practice represents them. Some phenomena are difficult to represent or to analyze in several domains. These include mode transitions, reachability analysis, positive and negative feedback loops, conditions correlated but not causally linked and bimodal probability distributions. We work through examples and contrast the differing methods for addressing them. We detail recent work in knowledge interchange formats for causal trees in aerospace analysis applications in early design, safety and reliability. Several examples are discussed, with a particular focus on reachability analysis and mode transitions. We generalize the aerospace analysis work across the several other domains. We also recommend features and capabilities for the next generation of causal knowledge representation standards.
Dynamic Dilational Strengthening During Earthquakes in Saturated Gouge-Filled Fault Zones
NASA Astrophysics Data System (ADS)
Sparks, D. W.; Higby, K.
2016-12-01
The effect of fluid pressure in saturated fault zones has been cited as an important factor in the strength and slip stability of faults. Fluid pressure controls the effective normal stress across the fault and therefore controls the fault's strength. In a fault core consisting of granular fault gouge, local transient dilations and compactions occur during slip that dynamically change the fluid pressure. We use a grain-scale numerical model to investigate these fluid effects in fault gouge during an earthquake. We use a coupled finite difference-discrete element model (Goren et al., 2011), in which the pore space is filled with a fluid. Local changes in grain packing generate local deviations in fluid pressure, which can be relieved by fluid flow through the permeable gouge. Fluid pressure gradients exert drag forces on the grains that couple the grain motion and fluid flow. We simulated 39 granular gouge zones that were slowly loaded in shear stress to near the failure point, and then conducted two different simulations starting from each grain packing: one with a high enough mean permeability (>10⁻¹¹ m²) that pressure remains everywhere equilibrated ("fully drained"), and one with a lower permeability (10⁻¹⁴ m²) in which flow is not fast enough to prevent significant pressure variations from developing ("undrained"). The static strength of the fault, the size of the event, and the evolution of slip velocity are not imposed, but arise naturally from the granular packing. In our particular granular model, all fully drained slip events are well modeled by a rapid drop in the frictional resistance of the granular packing from a static value to a dynamic value that remains roughly constant during slip. Undrained events show more complex behavior. In some cases, slip occurs via slow creep with resistance near the static value. When rapid slip events do occur, the dynamic resistance is typically larger than in drained events, and highly variable. Frictional resistance is not correlated with the mean fluid pressure in the layer, but is instead controlled by local regions undergoing dilational strengthening. We find that, in the absence of pressure-generating effects like thermal pressurization or fluid-releasing reactions, the overall effect of the fluid is to strengthen the fault.
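The effective-stress control described in the abstract above reduces to the standard Coulomb relation τ = μ(σn − p): a local dilation lowers pore pressure p and strengthens the gouge, while compaction raises p and weakens it. A one-function sketch with illustrative values (not parameters from the simulations):

```python
# Frictional strength under the effective normal stress sigma_n - p.
def frictional_strength(mu, sigma_n_mpa, pore_pressure_mpa):
    """Coulomb strength tau = mu * (sigma_n - p), in MPa."""
    return mu * (sigma_n_mpa - pore_pressure_mpa)

print(frictional_strength(0.6, 10.0, 4.0))  # ambient pore pressure: 3.6 MPa
print(frictional_strength(0.6, 10.0, 1.0))  # local dilation lowered p: 5.4 MPa
```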
The 2013 Balochistan earthquake: An extraordinary or completely ordinary event?
NASA Astrophysics Data System (ADS)
Zhou, Yu; Elliott, John R.; Parsons, Barry; Walker, Richard T.
2015-08-01
The 2013 Balochistan earthquake, a predominantly strike-slip event, occurred on the arcuate Hoshab fault in the eastern Makran linking an area of mainly left-lateral shear in the east to one of shortening in the west. The difficulty of reconciling predominantly strike-slip motion with this shortening has led to a wide range of unconventional kinematic and dynamic models. Here we determine the vertical component of motion on the fault using a 1 m resolution elevation model derived from postearthquake Pleiades satellite imagery. We find a constant local ratio of vertical to horizontal slip through multiple past earthquakes, suggesting the kinematic style of the Hoshab fault has remained constant throughout the late Quaternary. We also find evidence for active faulting on a series of nearby, subparallel faults, showing that failure in large, distributed and rare earthquakes is the likely method of faulting across the eastern Makran, reconciling geodetic and long-term records of strain accumulation.
Rupture preparation process controlled by surface roughness on meter-scale laboratory fault
NASA Astrophysics Data System (ADS)
Yamashita, Futoshi; Fukuyama, Eiichi; Xu, Shiqing; Mizoguchi, Kazuo; Kawakata, Hironori; Takizawa, Shigeru
2018-05-01
We investigate the effect of fault surface roughness on rupture preparation characteristics using meter-scale metagabbro specimens. We repeatedly conducted the experiments with the same pair of rock specimens to make the fault surface rough. We obtained three experimental results under the same experimental conditions (6.7 MPa of normal stress and 0.01 mm/s of loading rate) but at different roughness conditions (smooth, moderately roughened, and heavily roughened). During each experiment, we observed many stick-slip events preceded by precursory slow slip. We investigated when and where slow slip initiated by using the strain gauge data processed by the Kalman filter algorithm. The observed rupture preparation processes on the smooth fault (i.e. the first experiment among the three) showed high repeatability of the spatiotemporal distributions of slow slip initiation. Local stress measurements revealed that slow slip initiated around the region where the ratio of shear to normal stress (τ/σ) was the highest as expected from finite element method (FEM) modeling. However, the exact location of slow slip initiation was where τ/σ became locally minimum, probably due to the frictional heterogeneity. In the experiment on the moderately roughened fault, some irregular events were observed, though the basic characteristics of other regular events were similar to those on the smooth fault. Local stress data revealed that the spatiotemporal characteristics of slow slip initiation and the resulting τ/σ drop for irregular events were different from those for regular ones even under similar stress conditions. On the heavily roughened fault, the location of slow slip initiation was not consistent with τ/σ anymore because of the highly heterogeneous static friction on the fault, which also decreased the repeatability of spatiotemporal distributions of slow slip initiation. These results suggest that fault surface roughness strongly controls the rupture preparation process, and generally increases its complexity with the degree of roughness.
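As a rough illustration of the kind of Kalman-filter processing of strain gauge records described above (the abstract does not give the filter design), here is a minimal scalar random-walk Kalman smoother applied to a synthetic strain record; the noise variances and the onset threshold are assumed placeholders.

```python
import numpy as np

def kalman_1d(z, q=1e-6, r=1e-2):
    """Scalar random-walk Kalman filter: state = smoothed strain.
    q: process-noise variance, r: measurement-noise variance (assumed)."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                  # predict (random-walk state model)
        gain = p / (p + r)         # Kalman gain
        x = x + gain * (zk - x)    # update with measurement zk
        p = (1.0 - gain) * p
        out[k] = x
    return out

# Synthetic strain record: a slow-slip ramp buried in sensor noise.
t = np.linspace(0, 10, 2000)
strain = 1e-5 * np.clip(t - 6, 0, None) + 2e-6 * np.random.randn(t.size)
smooth = kalman_1d(strain)
onset = t[np.argmax(smooth > 5e-6)]    # crude slow-slip onset pick
print(f"estimated onset: {onset:.2f} s")
```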
Improving Multiple Fault Diagnosability using Possible Conflicts
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2012-01-01
Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
NASA Astrophysics Data System (ADS)
Wei, Z.; He, H.
2016-12-01
A fault scarp is a distinctive tectonic landform produced by surface-rupturing earthquakes. The morphology of a fault scarp in unconsolidated sediment evolves in a predictable way that can be described by a time-dependent diffusion model. As a result, the investigation of fault scarps is a prevalent technique for studying fault activity, geomorphic evolution, and the recurrence of faulting events. In addition to yielding cumulative displacement, gradient changes, i.e. slope breaks, in the morphology of fault scarps can indicate multiple rupture events along an active fault. In this study, we extracted a large set of densely spaced topographic profiles across fault scarps from a LiDAR-derived DEM to detect subtle changes in scarp geometry at the Dushanzi thrust fault in the northern Tianshan, China. Several slope breaks can be identified in the topographic profiles, which may represent repeated ruptures of the investigated fault. The number of paleo-earthquakes derived from our analysis is 3-4, in good agreement with the results from the paleoseismological trenches. Statistical analysis shows that the height of scarps with one slope break is 0.75±0.12 m (mean value ±1 standard deviation), representing the last incremental displacement during earthquakes; the height of scarps with two slope breaks is 1.86±0.32 m, and the height of scarps with three to four slope breaks is 6.45±1.44 m. Our approach enables us to obtain paleo-earthquake information from geomorphological analysis of fault scarps, and to assess the multiple-rupture history of a complex fault system.
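The diffusion model referred to above treats scarp height h(x, t) as obeying ∂h/∂t = κ ∂²h/∂x². A minimal explicit finite-difference sketch of this degradation process follows; the diffusivity κ and grid parameters are illustrative assumptions, not values from the study.

```python
import numpy as np

def degrade_scarp(h, kappa, dx=0.5, years=1000.0, dt=0.05):
    """Evolve a scarp profile h(x) under linear diffusion dh/dt = k d2h/dx2."""
    assert kappa * dt / dx**2 <= 0.5, "explicit scheme stability limit"
    h = h.copy()
    for _ in range(int(years / dt)):
        h[1:-1] += kappa * dt / dx**2 * (h[2:] - 2 * h[1:-1] + h[:-2])
    return h

x = np.arange(0, 50, 0.5)
fresh = np.where(x < 25, 0.0, 2.0)       # 2 m vertical scarp, initially sharp
old = degrade_scarp(fresh, kappa=1e-3)   # kappa in m^2/yr (assumed)
print("max slope after 1000 yr:", np.max(np.diff(old)) / 0.5)
```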
The effect of roughness on the nucleation and propagation of shear rupture on small faults
NASA Astrophysics Data System (ADS)
Tal, Y.; Hager, B. H.
2016-12-01
Faults are rough at all scales and can be described as self-affine fractals. This deviation from planarity results in geometric asperities and a locally heterogeneous stress field, which affect the nucleation and propagation of shear rupture. We study this effect numerically and aim to understand the relative effects of different fault geometries, remote stresses, and medium and fault properties, focusing on small earthquakes, in which realistic geometry and friction-law parameters can be incorporated in the model. Our numerical approach includes three main features. First, to enable slip that is large relative to the size of the elements near the fault, as well as variation of normal stress during slip, we implement slip-weakening and rate- and state-friction laws in the Mortar Finite Element Method, in which non-matching meshes are allowed across the fault and the contacts are continuously updated. Second, we refine the mesh near the fault using hanging nodes, thereby enabling accurate representation of the fault geometry. Finally, using a variable time step size, we gradually increase the remote stress and let the rupture nucleate spontaneously. This procedure involves a quasi-static backward Euler scheme for the inter-seismic stages and a dynamic implicit Newmark scheme for the co-seismic stages. In general, under the same range of external loads, rougher faults experience more events but with smaller slips, stress drops, and slip rates, with the roughest faults experiencing only slow-slip aseismic events. Moreover, the roughness complicates the nucleation process, with asymmetric expansion of the rupture and larger nucleation length. In the propagation phase of the seismic events, the roughness results in larger breakdown zones.
NASA Astrophysics Data System (ADS)
Benavente, Carlos; Zerathe, Swann; Audin, Laurence; Hall, Sarah R.; Robert, Xavier; Delgado, Fabrizio; Carcaillet, Julien; Team, Aster
2017-09-01
Our understanding of the style and rate of Quaternary tectonic deformation in the forearc of the Central Andes is hampered by a lack of field observations and constraints on neotectonic structures. Here we present a detailed analysis of the Purgatorio fault, a recently recognized active fault located in the forearc of southern Peru. Based on field and remote sensing analysis (Pléiades DEM), we define the Purgatorio fault as a subvertical structure trending NW-SE to W-E along its 60 km length, connecting, at its eastern end, to the crustal Incapuquio Fault System. The Purgatorio fault accommodates right-lateral transpressional deformation, as shown by the numerous lateral and vertical plurimetric offsets recorded along strike. In particular, a scarp with 5 m of cumulative throw is preserved and displays cobbles that are cut and covered by slickensides. Cosmogenic radionuclide exposure dating (10Be) of quartzite cobbles along the vertical fault scarp yields young exposure ages that can be bracketed between 0 and 6 ka, depending on the inheritance model that is applied. Our preferred scenario, which takes into account our geomorphic observations, implies at least two distinct rupture events, associated with 3 m and 2 m of vertical offset, respectively. These two events plausibly occurred during the last thousand years. Nevertheless, an interpretation invoking more tectonic events along the fault cannot be ruled out. This work affirms crustal deformation along active faults in the Andean forearc of southern Peru during the last thousand years.
NASA Technical Reports Server (NTRS)
Prassinos, Peter G.; Stamatelatos, Michael G.; Young, Jonathan; Smith, Curtis
2010-01-01
Managed by NASA's Office of Safety and Mission Assurance, a pilot probabilistic risk analysis (PRA) of the NASA Crew Exploration Vehicle (CEV) was performed in early 2006. The PRA methods used follow the general guidance provided in the NASA PRA Procedures Guide for NASA Managers and Practitioners. Phased-mission-based event trees and fault trees are used to model a lunar sortie mission of the CEV, involving the following phases: launch of a cargo vessel and a crew vessel; rendezvous of these two vessels in low Earth orbit; transit to the moon; lunar surface activities; ascent from the lunar surface; and return to Earth. The analysis is based upon assumptions, preliminary system diagrams, and failure data that may involve large uncertainties or may lack formal validation. Furthermore, some of the data used were based upon expert judgment or extrapolated from similar components/systems. This paper includes a discussion of the system-level models and provides an overview of the analysis results used to identify insights into CEV risk drivers, and trade and sensitivity studies. Lastly, the PRA model was used to determine changes in risk as the system configurations or key parameters are modified.
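The phased-mission logic can be illustrated with a short sketch: the mission succeeds only if every phase succeeds in sequence, with each phase's failure probability notionally supplied by its own fault tree. All numbers below are placeholders, not CEV results.

```python
# Minimal phased-mission sketch: each phase must succeed in order, and each
# phase's failure probability would come from its own fault tree.
phases = {
    "launch": 1e-3, "rendezvous": 5e-4, "transit": 2e-4,
    "lunar_surface": 8e-4, "ascent": 1e-3, "earth_return": 6e-4,
}  # placeholder per-phase failure probabilities, NOT actual CEV values

loss_of_mission = 0.0
survive = 1.0
for phase, p_fail in phases.items():
    loss_of_mission += survive * p_fail   # first failure occurs in this phase
    survive *= (1.0 - p_fail)

print(f"P(loss of mission) = {loss_of_mission:.2e}, P(success) = {survive:.4f}")
```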
Time-dependent seismic hazard analysis for the Greater Tehran and surrounding areas
NASA Astrophysics Data System (ADS)
Jalalalhosseini, Seyed Mostafa; Zafarani, Hamid; Zare, Mehdi
2018-01-01
This study presents a time-dependent approach to seismic hazard in Tehran and surrounding areas. Hazard is evaluated by combining background seismic activity with larger earthquakes that may emanate from fault segments. Using available historical and paleoseismological data or empirical relations, the recurrence times and maximum magnitudes of characteristic earthquakes on the major faults have been explored. The Brownian passage time (BPT) distribution has been used to calculate equivalent fictitious seismicity rates for major faults in the region. To include ground motion uncertainty, a logic tree and five ground motion prediction equations have been selected based on their applicability in the region. Finally, hazard maps have been presented.
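For reference, the BPT density used in such time-dependent hazard models is f(t; μ, α) = sqrt(μ / (2π α² t³)) · exp(−(t − μ)² / (2 μ α² t)), with mean recurrence μ and aperiodicity α. A minimal sketch of a conditional-probability calculation with it follows; μ, α, and the elapsed time are illustrative, not values from this study.

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian passage time density: mean recurrence mu, aperiodicity alpha."""
    return np.sqrt(mu / (2 * np.pi * alpha**2 * t**3)) * \
           np.exp(-((t - mu) ** 2) / (2 * mu * alpha**2 * t))

# Conditional probability of an event in the next 50 yr given 150 quiet yr.
mu, alpha, elapsed, window = 300.0, 0.5, 150.0, 50.0   # illustrative values
t = np.linspace(1e-3, 10 * mu, 200_000)
pdf = bpt_pdf(t, mu, alpha)
in_window = (t >= elapsed) & (t <= elapsed + window)
survived = t >= elapsed
p = np.trapz(pdf[in_window], t[in_window]) / np.trapz(pdf[survived], t[survived])
print(f"P(event in next {window:.0f} yr | quiet for {elapsed:.0f} yr) = {p:.3f}")
```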
Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frédéric; ...
2015-03-01
We conducted three-dimensional coupled fluid-flow and geomechanical modeling of fault activation and seismicity associated with hydraulic fracturing stimulation of a shale-gas reservoir. We simulated a case in which a horizontal injection well intersects a steeply dipping fault, with hydraulic fracturing channeled within the fault, during a 3-hour hydraulic fracturing stage. Consistent with field observations, the simulation results show that shale-gas hydraulic fracturing along faults does not likely induce seismic events that could be felt on the ground surface, but rather results in numerous small microseismic events, as well as aseismic deformations along with the fracture propagation. The calculated seismic moment magnitudes ranged from about -2.0 to 0.5, except for one case assuming a very brittle fault with low residual shear strength, for which the magnitude was 2.3, an event that would likely go unnoticed or might be barely felt by humans at its epicenter. The calculated moment magnitudes showed a dependency on injection depth and fault dip. We attribute such dependency to variation in shear stress on the fault plane and associated variation in stress drop upon reactivation. Our simulations showed that at the end of the 3-hour injection, the rupture zone associated with tensile and shear failure extended to a maximum radius of about 200 m from the injection well. The results of this modeling study for steeply dipping faults at 1000 to 2500 m depth are in agreement with earlier studies and field observations showing that it is very unlikely that activation of a fault by shale-gas hydraulic fracturing at great depth (thousands of meters) could cause felt seismicity or create a new flow path (through fault rupture) that could reach shallow groundwater resources.
Aagaard, Brad T.; Anderson, G.; Hudnut, K.W.
2004-01-01
We use three-dimensional dynamic (spontaneous) rupture models to investigate the nearly simultaneous ruptures of the Susitna Glacier thrust fault and the Denali strike-slip fault. With the 1957 Mw 8.3 Gobi-Altay, Mongolia, earthquake as the only other well-documented case of significant, nearly simultaneous rupture of both thrust and strike-slip faults, this feature of the 2002 Denali fault earthquake provides a unique opportunity to investigate the mechanisms responsible for development of these large, complex events. We find that the geometry of the faults and the orientation of the regional stress field caused slip on the Susitna Glacier fault to load the Denali fault. Several different stress orientations with oblique right-lateral motion on the Susitna Glacier fault replicate the triggering of rupture on the Denali fault about 10 sec after the rupture nucleates on the Susitna Glacier fault. However, generating slip directions compatible with measured surface offsets and kinematic source inversions requires perturbing the stress orientation from that determined with focal mechanisms of regional events. Adjusting the vertical component of the principal stress tensor for the regional stress field so that it is more consistent with a mixture of strike-slip and reverse faulting significantly improves the fit of the slip-rake angles to the data. Rotating the maximum horizontal compressive stress direction westward appears to improve the fit even further.
Earthquake Hazard Assessment Based on Geological Data: An Approach from the Crystalline Terrain of Peninsular India
NASA Astrophysics Data System (ADS)
John, B.
2009-04-01
Peninsular India was long considered seismically stable, but the recent earthquakes at Latur (1993), Jabalpur (1997), and Bhuj (2001) suggest that this region is among the active Stable Continental Regions (SCRs) of the world, where recurrence intervals are of the order of tens of thousands of years. In such areas, earthquakes may happen at unexpected locations, devoid of any previous seismicity or dramatic geomorphic features, and even moderate earthquakes will lead to heavy loss of life and property in the present scenario. It is therefore imperative to map suspected areas to identify active faults and evaluate their activity, which is a vital input to seismic hazard assessment of SCR areas. The region around Wadakkanchery, Kerala, South India has been experiencing microseismic activity since 1989. Subsequent studies by the author identified a 30 km long WNW-ESE trending reverse fault, dipping south (45°), that has influenced the drainage system of the area. Macroscopic and microscopic studies of the fault rocks from the exposures near Desamangalam show an episodic nature of faulting. Dislocations of pegmatitic veins across the fault indicate a cumulative dip displacement of 2.1 m in the reverse direction. A minimum of four episodes of faulting were identified on this fault based on the cross-cutting relations of different structural elements and on the mineralogic changes of different generations of gouge zones, suggesting an average displacement of 52 cm in each event. A cyclic nature of faulting is identified in this fault zone, in which the inter-seismic period is characterized by gouge induration and fracture sealing aided by the prevailing fluids. Available empirical relations connecting magnitude with displacement and rupture length show that each event might have produced an earthquake of magnitude ≥ 6.0, which could be damaging for an area like peninsular India. Electron Spin Resonance dating of fault gouge indicates a major event around 430 ka. In the present stress regime this fault can be considered seismically active, because its orientation is favorable for reactivation.
NASA Astrophysics Data System (ADS)
Xue, Lian; Bürgmann, Roland; Shelly, David R.; Johnson, Christopher W.; Taira, Taka'aki
2018-05-01
Earthquake swarms represent a sudden increase in seismicity that may indicate a heterogeneous fault zone, the involvement of crustal fluids and/or slow fault slip. Swarms sometimes precede major earthquake ruptures. An earthquake swarm occurred in October 2015 near San Ramon, California, in an extensional right step-over region between the northern Calaveras Fault and the Concord-Mt. Diablo fault zone, which has hosted ten major swarms since 1970. The 2015 San Ramon swarm is examined here from 11 October through 18 November using template matching analysis. The relocated seismicity catalog contains ∼4000 events with magnitudes between -0.2
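Template matching, as used in studies like this, slides a waveform of a known event along the continuous record and flags windows whose normalized cross-correlation exceeds a threshold. A minimal single-channel sketch follows; the threshold and the synthetic data are assumptions.

```python
import numpy as np

def match_template(trace, template, threshold=0.8):
    """Indices where normalized cross-correlation with `template` exceeds
    `threshold` (single channel; multichannel stacking omitted)."""
    n = len(template)
    tpl = (template - template.mean()) / (template.std() * n)
    hits = []
    for i in range(len(trace) - n):
        win = trace[i:i + n]
        s = win.std()
        if s > 0 and np.dot(tpl, (win - win.mean()) / s) > threshold:
            hits.append(i)
    return hits

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 6 * np.pi, 200)) * np.hanning(200)
trace = 0.3 * rng.standard_normal(20_000)
trace[5000:5200] += template              # bury one repeat of the template
print(match_template(trace, template))    # detections cluster near 5000
```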
NASA Astrophysics Data System (ADS)
Laó-Dávila, Daniel A.; Anderson, Thomas H.
2009-12-01
Faults and shear zones recorded in the Monte del Estado and Río Guanajibo serpentinite masses in southwestern Puerto Rico show previously unrecognized southwestward tectonic transport. The orientations of planar and linear structures and the sense of slip along faults and shear zones determined by offset rock layers, drag folds in foliations, and steps in slickensided surfaces and/or S-C fabrics from 1846 shear planes studied at more than 300 stations reveal two predominant groups of faults: 1) northwesterly-striking thrust faults and easterly-striking left-lateral faults and, 2) northwesterly-striking right-lateral faults and easterly-striking thrust faults. Shortening and extension (P and T) axes calculated for geographic domains within the serpentinite reveal early north-trending shortening followed by southwestward-directed movement during which older structures were re-activated. The SW-directed shortening is attributed to transpression that accompanied Late Eocene left-lateral shearing of the serpentinite. A third, younger, group comprising fewer faults consists of northwesterly-striking left-lateral faults and north-directed thrusts that also may be related to the latest transpressional deformation within Puerto Rico. Deformational events in Puerto Rico correlate to tectonic events along the Caribbean-North American plate boundary.
NASA Technical Reports Server (NTRS)
Panontin, Tina; Carvalho, Robert; Keller, Richard
2004-01-01
Contents include the following: Overview of the Application; Input Data; Analytical Process; Tool's Output; and Application of the Results of the Analysis. The tool enables the first element through a Web-based application that can be accessed by distributed teams to store and retrieve any type of digital investigation material in a secure environment. The second is accomplished by making the relationships between information explicit through the use of a semantic network, a structure that literally allows an investigator or team to "connect the dots." The third element, the significance of the correlated information, is established through causality and consistency tests using a number of different methods embedded within the tool, including fault trees, event sequences, and other accident models. Finally, the evidence gathered and structured within the tool can be directly, electronically archived to preserve the evidence and the investigative reasoning.
Fault detection and fault tolerance in robotics
NASA Technical Reports Server (NTRS)
Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.
1992-01-01
Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
NASA Astrophysics Data System (ADS)
Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang
2017-10-01
Based on long-term vibration monitoring of the No. 2 oil-immersed flat wave reactor in the ±500 kV converter station in East Mongolia, vibration signals in the normal state and in the loose-core fault state were recorded. Through time-frequency analysis of the signals, the vibration characteristics of the loose-core fault were obtained, and a fault diagnosis method based on the dual-tree complex wavelet transform (DT-CWT) and support vector machine (SVM) was proposed. The vibration signals were analyzed by DT-CWT, and the energy entropy of the vibration signals was taken as the feature vector; the support vector machine was used to train and test the feature vector, realizing accurate identification of the loose-core fault of the flat wave reactor. Across many groups of normal and loose-core fault vibration signals, the diagnostic accuracy reached 97.36%. The effectiveness and accuracy of the method for fault diagnosis of the flat wave reactor core are thus verified.
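A rough sketch of the feature-extraction and classification pipeline follows; it substitutes an ordinary discrete wavelet transform (PyWavelets) for the DT-CWT and uses synthetic vibration signals, so it illustrates the energy-entropy-plus-SVM idea rather than reproducing the paper's method.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def energy_entropy_features(signal, wavelet="db4", level=5):
    """Per-sub-band energy entropy terms (plain DWT stands in for DT-CWT)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    p = energies / energies.sum()
    return -p * np.log(p + 1e-12)

rng = np.random.default_rng(1)
def vibration(fault):
    """Toy reactor vibration: a loose core adds a higher-frequency component."""
    t = np.linspace(0, 1, 1024)
    x = np.sin(2 * np.pi * 100 * t) + 0.2 * rng.standard_normal(t.size)
    return x + (0.8 * np.sin(2 * np.pi * 317 * t) if fault else 0.0)

X = np.array([energy_entropy_features(vibration(k % 2 == 1)) for k in range(200)])
y = np.array([k % 2 for k in range(200)])
clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```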
The role of bed-parallel slip in the development of complex normal fault zones
NASA Astrophysics Data System (ADS)
Delogkos, Efstratios; Childs, Conrad; Manzocchi, Tom; Walsh, John J.; Pavlides, Spyros
2017-04-01
Normal faults exposed in Kardia lignite mine, Ptolemais Basin, NW Greece formed at the same time as bed-parallel slip-surfaces, so that while the normal faults grew they were intermittently offset by bed-parallel slip. Following offset by a bed-parallel slip-surface, further fault growth is accommodated by reactivation on one or both of the offset fault segments. Where one fault is reactivated the site of bed-parallel slip is a bypassed asperity. Where both faults are reactivated, they propagate past each other to form a volume between overlapping fault segments that displays many of the characteristics of relay zones, including elevated strains and transfer of displacement between segments. Unlike conventional relay zones, however, these structures contain either a repeated or a missing section of stratigraphy which has a thickness equal to the throw of the fault at the time of the bed-parallel slip event, and the displacement profiles along the relay-bounding fault segments have discrete steps at their intersections with bed-parallel slip-surfaces. With further increase in displacement, the overlapping fault segments connect to form a fault-bound lens. Conventional relay zones form during initial fault propagation, but with coeval bed-parallel slip, relay-like structures can form later in the growth of a fault. Geometrical restoration of cross-sections through selected faults shows that repeated bed-parallel slip events during fault growth can lead to complex internal fault zone structure that masks its origin. Bed-parallel slip, in this case, is attributed to flexural-slip arising from hanging-wall rollover associated with a basin-bounding fault outside the study area.
Fault diagnosis of helical gearbox using acoustic signal and wavelets
NASA Astrophysics Data System (ADS)
Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.
2017-05-01
Efficient transmission of power in machines is essential, and gears are an appropriate choice for this task. Faults in gears result in loss of energy and money. Condition monitoring and fault diagnosis are carried out by analysis of the acoustic and vibration signals, which are generally considered unwanted by-products. This study proposes the use of machine learning algorithms for condition monitoring of a helical gearbox using the sound signals produced by the gearbox. Artificial faults were created and the resulting signals were captured by a microphone. An extensive study of different wavelet transformations for feature extraction from the acoustic signals was carried out, followed by wavelet selection and feature selection using a J48 decision tree; feature classification was then performed using the K-star algorithm. A classification accuracy of 100% was obtained in the study.
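A sketch of the selection-then-classification stage follows, with scikit-learn stand-ins: a CART decision tree in place of the J48 algorithm for feature selection, and k-nearest neighbours in place of the instance-based K-star classifier. The feature matrix is a random placeholder, not gearbox data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Suppose each row of X holds wavelet features of one acoustic record
# (random placeholders here; real features would come from the gearbox).
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 24))
y = rng.integers(0, 4, 300)          # 4 gear conditions (placeholder)
X[:, 3] += y                          # make one feature informative

# Decision-tree importances stand in for J48-based feature selection.
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
keep = np.argsort(tree.feature_importances_)[-5:]   # top 5 features

# k-NN stands in for the instance-based K-star classifier.
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:240][:, keep], y[:240])
print("held-out accuracy:", knn.score(X[240:][:, keep], y[240:]))
```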
Inferring patterns in mitochondrial DNA sequences through hypercube independent spanning trees.
Silva, Eduardo Sant Ana da; Pedrini, Helio
2016-03-01
Given a graph G, a set of spanning trees rooted at a vertex r of G is said to be vertex/edge independent if, for each vertex v of G, v≠r, the paths from r to v in any pair of trees are vertex/edge disjoint. Independent spanning trees (ISTs) provide a number of advantages in data broadcasting due to their fault-tolerant properties. For this reason, some studies have addressed the issue by providing mechanisms for constructing independent spanning trees efficiently. In this work, we investigate how to construct independent spanning trees on hypercubes, generated from spanning binomial trees, and how to use them to predict mitochondrial DNA sequence parts through paths on the hypercube. The prediction works both for inferring mitochondrial DNA sequences composed of six bases and for inferring anomalies that probably should not belong to the mitochondrial DNA standard.
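Whatever construction is used, the independence property itself is straightforward to verify programmatically. A minimal checker for vertex-independence, demonstrated on two spanning trees of the 2-cube, follows; the parent-map encoding is an implementation choice, not the paper's.

```python
from itertools import combinations

def path_to_root(parent, v, root):
    """Vertices on the tree path from v to root (inclusive)."""
    path = [v]
    while v != root:
        v = parent[v]
        path.append(v)
    return path

def vertex_independent(trees, root, n_vertices):
    """Check that for every vertex, the paths to the root in any two trees
    share no internal vertices (the definition used in the abstract)."""
    for v in range(n_vertices):
        if v == root:
            continue
        for ta, tb in combinations(trees, 2):
            ia = set(path_to_root(ta, v, root)[1:-1])  # internal vertices only
            ib = set(path_to_root(tb, v, root)[1:-1])
            if ia & ib:
                return False
    return True

# Two spanning trees of the 2-cube (vertices 0..3), each given as a parent
# map rooted at 0; they route every vertex along disjoint sides of the square.
t1 = {1: 0, 3: 1, 2: 0}   # 3 -> 1 -> 0
t2 = {2: 0, 3: 2, 1: 0}   # 3 -> 2 -> 0
print(vertex_independent([t1, t2], root=0, n_vertices=4))  # True
```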
NASA Astrophysics Data System (ADS)
Sainvil, A. K.; Schmidt, D. A.; Nuyen, C.
2017-12-01
The goal of this study is to explore how slow slip events on the southern Cascadia Subduction Zone respond to nearby, offshore earthquakes by examining GPS and tremor data. At intermediate depths on the plate interface (~40 km), transient fault slip is observed in the form of Episodic Tremor and Slip (ETS) events. These ETS events occur regularly (every ~10 months) and have a longer duration than normal earthquakes. Researchers have been documenting slow slip events through data obtained by continuously running GPS stations in the Pacific Northwest. Some studies have proposed that pore fluid may play a role in these ETS events by lowering the effective stress on the fault. The interaction of earthquakes and ETS can provide constraints on the strength of the fault and the level of stress needed to alter ETS behavior. Earthquakes can trigger ETS events, but the connection between these events and earthquake activity is less understood. We originally hypothesized that ETS events would be affected by earthquakes in southern Cascadia, resulting in a shift in the recurrence interval of ETS events. ETS events were cataloged using GPS time series provided by PANGA, in conjunction with tremor positions, in southern Cascadia for stations YBHB and DDSN from 1997 to 2017. We looked for evidence of change from three offshore earthquakes that occurred near the Mendocino Triple Junction with moment magnitudes of 7.2 in 2005, 6.5 in 2010, and 6.8 in 2014. Our results showed that the recurrence interval of ETS for stations YBHB and DDSN was not altered by the three earthquake events. Future work is needed to explore whether this lack of interaction is explained by the non-optimal orientation of the receiver fault relative to the earthquake focal mechanisms.
Nouri.Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-01-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant by using the fault tree analysis (FTA) method. The results of the analysis for a 200 h operating interval show that the probability of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department is 73, 64, and 95 percent, respectively, and the conveyor belt subsystem was found to be the most failure-prone subsystem. Finally, maintenance is proposed as a method to control and prevent the occurrence of failures. PMID:26779433
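The arithmetic behind such FTA results can be sketched briefly: with constant failure rates, each subsystem's 200 h failure probability is 1 − e^(−λt), and an OR gate over independent subsystems combines them. The rates below are back-calculated to roughly match the reported 73% and 64% figures; the top-event value comes out near, but not exactly at, the reported 95%, which presumably also reflects subsystems not modeled here.

```python
import math

def p_fail(rate_per_hour, hours):
    """Failure probability over a mission time, constant failure rate."""
    return 1.0 - math.exp(-rate_per_hour * hours)

# Illustrative rates chosen to roughly reproduce the reported 200 h figures.
p_crushing = p_fail(0.00655, 200)   # ~0.73
p_conveyor = p_fail(0.00511, 200)   # ~0.64

# OR gate: the department fails if either independent subsystem fails.
p_top = 1.0 - (1.0 - p_crushing) * (1.0 - p_conveyor)
print(f"crushing={p_crushing:.2f} conveyor={p_conveyor:.2f} top={p_top:.2f}")
```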
Towards generating ECSS-compliant fault tree analysis results via ConcertoFLA
NASA Astrophysics Data System (ADS)
Gallina, B.; Haider, Z.; Carlsson, A.
2018-05-01
Attitude Control Systems (ACSs) maintain the orientation of a satellite in three-dimensional space. ACSs need to be engineered in compliance with ECSS standards and need to ensure a certain degree of dependability. Thus, dependability analysis is conducted at various levels and by using ECSS-compliant techniques. Fault Tree Analysis (FTA) is one of these techniques. FTA is being automated within various Model Driven Engineering (MDE)-based methodologies. The tool-supported CHESS methodology is one of them. This methodology incorporates ConcertoFLA, a dependability analysis technique enabling failure behavior analysis and thus the generation of FTA results. ConcertoFLA, however, like other such techniques, still belongs to the academic research niche. To promote this technique within the space industry, we apply it to an ACS and discuss its multi-faceted potential in the context of ECSS-compliant engineering.
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
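One way to combine fault-tree and Monte Carlo features, in the spirit of (but not necessarily identical to) the approach described, is to weight conditional Monte Carlo estimates by fault-tree branch probabilities instead of waiting for rare joint failures in a naive simulation. A hedged sketch, with entirely hypothetical probabilities and a placeholder conditional model:

```python
import numpy as np
rng = np.random.default_rng(3)

# Fault-tree-style failure branches with known probabilities, and a
# placeholder for the conditional accident probability of each branch.
branches = {"transponder": 1e-4, "pilot_visual": 5e-4, "conflict_det": 2e-4}

def p_accident_given(branch, n):
    """Placeholder conditional model; a real study would run trajectory
    simulations here (all numbers hypothetical)."""
    base = {"transponder": 0.02, "pilot_visual": 0.005, "conflict_det": 0.01}
    return np.mean(rng.random(n) < base[branch])

# Accelerated estimate: weight each conditional Monte Carlo run by its
# branch probability rather than sampling the rare joint events directly.
risk = sum(p * p_accident_given(b, 100_000) for b, p in branches.items())
print(f"estimated accident risk per operation: {risk:.2e}")
```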
Dynamic Modelling of Fault Slip Induced by Stress Waves due to Stope Production Blasts
NASA Astrophysics Data System (ADS)
Sainoki, Atsushi; Mitri, Hani S.
2016-01-01
Seismic events can take place due to the interaction of stress waves induced by stope production blasts with faults located in close proximity to stopes. The occurrence of such seismic events needs to be controlled to ensure the safety of mine operators and underground mine workings. This paper presents the results of a dynamic numerical modelling study of fault slip induced by stress waves resulting from stope production blasts. First, a numerical model with a single blast hole is calibrated using a charge weight scaling law to determine the blast pressure and the damping coefficient of the rockmass. Subsequently, a numerical model of a typical Canadian metal mine encompassing a fault parallel to a tabular ore deposit is constructed, and the stope extraction sequence is simulated with static analyses until the fault exhibits slip burst conditions. At that point, the dynamic analysis begins by applying the calibrated blast pressure to the stope wall in the form of velocities generated by the blast holes. The dynamic analysis shows that the stress waves reflected at the fault cause a drop in the normal stresses acting on the fault, which produces a reduction in shear stresses while resulting in fault slip. The influence of blast sequencing on the behaviour of the fault is also examined for several types of blast sequences. Comparison of the simulation results indicates that performing simultaneous blasts symmetrically induces the same level of seismic events as separate blasts, although seismic energy is released more rapidly when blasts are performed symmetrically. On the other hand, when nine blast holes are blasted simultaneously, a large seismic event is induced compared to the other two blast sequences. It is concluded that separate blasts might be employed under the adopted geological conditions. The developed methodology and procedure for arriving at an ideal blast sequence can be applied to other mines where faults are found in the vicinity of stopes.
Computer simulation of earthquakes
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1976-01-01
Two computer simulation models of earthquakes were studied for the dependence of the pattern of events on the model assumptions and input parameters. Both models represent the seismically active region by mechanical blocks which are connected to one another and to a driving plate. The blocks slide on a friction surface. The first model employed elastic forces and time-independent friction to simulate main shock events. The size, length, and time and place of event occurrence were influenced strongly by the magnitude and degree of homogeneity in the elastic and friction parameters of the fault region. Periodically recurring similar events were frequently observed in simulations with near-homogeneous parameters along the fault, whereas seismic gaps were a common feature of simulations employing large variations in the fault parameters. The second model incorporated viscoelastic forces and time-dependent friction to account for aftershock sequences. The periods between aftershock events increased with time, and the aftershock region was confined to that which moved in the main event.
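A minimal slider-block sketch in the spirit of the first model (blocks driven by a plate, time-independent static/dynamic friction, elastic stress transfer to neighbors) is given below; all stiffness and friction values are illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(4)

n, k_plate, k_spring = 64, 0.1, 0.4
static = 1.0 + 0.1 * rng.random(n)   # heterogeneous static friction thresholds
dynamic = 0.6                        # time-independent dynamic friction level
stress = rng.random(n)
sizes = []                           # number of block slips per loading step

for step in range(2000):
    stress += k_plate                # slow, uniform loading by the driving plate
    slips = 0
    while np.any(stress > static):   # a slip may load neighbors past failure
        i = int(np.argmax(stress - static))
        drop = stress[i] - dynamic
        stress[i] = dynamic
        stress[(i - 1) % n] += k_spring * drop   # elastic transfer to neighbors
        stress[(i + 1) % n] += k_spring * drop
        slips += 1
    sizes.append(slips)

events = [s for s in sizes if s > 0]
print(f"{len(events)} events; largest involved {max(events)} block slips")
```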
NASA Astrophysics Data System (ADS)
Console, R.; Vannoli, P.; Carluccio, R.
2016-12-01
The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small magnitude events. Events nucleated in one fault are allowed to expand into neighboring faults, even belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model upon which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems that are recognized in the central Apennines region, for a total of 24 fault systems.
The application of our simulation algorithm reproduces typical features of the time, space and magnitude behavior of the seismicity, which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes. Lastly, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of peak ground acceleration (PGA) over the territory under investigation.
A New Correlation of Large Earthquakes Along the Southern San Andreas Fault
NASA Astrophysics Data System (ADS)
Scharer, K. M.; Weldon, R. J.; Biasi, G. P.
2010-12-01
There are now three sites on the southern San Andreas fault (SSAF) with records of 10 or more dated ground-rupturing earthquakes (Frazier Mountain, Wrightwood and Pallett Creek) and at least seven other sites with 3-5 dated events. Numerous sites have related information, including geomorphic offsets caused by one to a few earthquakes, a known amount of slip spanning a specific interval of time or number of earthquakes, or the number (but not necessarily the exact ages) of earthquakes in an interval of time. We use this information to construct a record of recent large earthquakes on the SSAF. Strongly overlapping C-14 age ranges, especially between closely spaced sites like Pallett Creek and Wrightwood on the Mojave segment and Thousand Palms, Indio, Coachella and Salt Creek on the southernmost 100 km of the fault, and overlap between the more distant Frazier Mountain and Bidart Fan sites on the northernmost part of the fault, suggest that the paleoseismic data are robust and can be explained by a relatively small number of events that span substantial portions of the fault. This is consistent with the extent of rupture of the two historic events (1857 was ~300 km long and 1812 was 100-200 km long); slip-per-event data that average 3-5 m at most sites; and the long historical hiatus since 1857. While some sites have smaller offsets for individual events, correlation between sites suggests that many small offsets occur near the ends of long ruptures. While the long event series on the Mojave segment is quasi-periodic, individual intervals range over about an order of magnitude, from a few decades up to ~200 years. This wide range of intervals and the apparent anti-slip-predictable behavior of ruptures (short intervals are not followed by small events) suggest weak clustering, or periods of time spanning multiple intervals when strain release is higher or lower than average. These properties defy simple hazard analysis but need to be understood to properly forecast hazard along the fault.
Nonlinear dynamic failure process of tunnel-fault system in response to strong seismic event
NASA Astrophysics Data System (ADS)
Yang, Zhihua; Lan, Hengxing; Zhang, Yongshuang; Gao, Xing; Li, Langping
2013-03-01
Strong earthquakes and faults have a significant effect on the stability of underground tunnel structures. This study used a 3-dimensional discrete element model and real records of ground motion from the Wenchuan earthquake to investigate the dynamic response of a tunnel-fault system. The typical tunnel-fault system was composed of one planned railway tunnel and one seismically active fault. The discrete numerical model was carefully calibrated by comparing field survey and numerical results for ground motion. It was then used to examine detailed quantitative information on the dynamic response characteristics of the tunnel-fault system, including stress distribution, strain, vibration velocity and the tunnel failure process. The intense tunnel-fault interaction during seismic loading induces dramatic stress redistribution and stress concentration at the intersection of the tunnel and the fault. The tunnel-fault system behavior is characterized by a complicated nonlinear dynamic failure process in response to a real strong seismic event. It can be qualitatively divided into 5 main stages in terms of stress, strain and rupturing behavior: (1) strain localization, (2) rupture initiation, (3) rupture acceleration, (4) spontaneous rupture growth and (5) stabilization. This study provides insight into the stability estimation of underground tunnel structures under the combined effect of strong earthquakes and faults.
Using certification trails to achieve software fault tolerance
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, they are accepted as correct, otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized and illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can run concurrently with the first and act as a monitor are discussed. The certification trail approach is compared to other approaches to fault tolerance. Because of space limitations we have omitted examples of our technique applied to the Huffman tree and convex hull problems; these can be found in the full version of this paper.
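A minimal sketch of the certification-trail idea applied to the minimum spanning tree problem follows. Here the trail is the sorted edge order produced by the first Kruskal run, which lets the second run skip sorting while still detecting a corrupt trail; this is one natural trail format, not necessarily the paper's exact scheme.

```python
class DSU:
    """Disjoint-set union with path halving, for Kruskal's algorithm."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.p[ra] = rb
        return True

def mst_phase1(n, edges):
    """First run: full Kruskal; the certification trail is the sorted order."""
    trail = sorted(range(len(edges)), key=lambda i: edges[i][2])
    dsu, tree = DSU(n), []
    for i in trail:
        u, v, _ = edges[i]
        if dsu.union(u, v):
            tree.append(i)
    return tree, trail

def mst_phase2(n, edges, trail):
    """Second run: consumes the trail instead of sorting, but must flag an
    error if the trail is not a valid sorted permutation of the edges."""
    weights = [edges[i][2] for i in trail]
    if set(trail) != set(range(len(edges))) or any(
            weights[i] > weights[i + 1] for i in range(len(weights) - 1)):
        raise ValueError("corrupt certification trail")
    dsu, tree = DSU(n), []
    for i in trail:
        u, v, _ = edges[i]
        if dsu.union(u, v):
            tree.append(i)
    return tree

edges = [(0, 1, 4), (1, 2, 1), (0, 2, 3), (2, 3, 2)]
tree1, trail = mst_phase1(4, edges)
assert tree1 == mst_phase2(4, edges, trail)  # the two results must agree
```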
Constraining fault constitutive behavior with slip and stress heterogeneity
Aagaard, Brad T.; Heaton, T.H.
2008-01-01
We study how enforcing self-consistency in the statistical properties of the preshear and postshear stress on a fault can be used to constrain fault constitutive behavior beyond that required to produce a desired spatial and temporal evolution of slip in a single event. We explore features of rupture dynamics that (1) lead to slip heterogeneity in earthquake ruptures and (2) maintain these conditions following rupture, so that the stress field is compatible with the generation of aftershocks and facilitates heterogeneous slip in subsequent events. Our three-dimensional finite element simulations of magnitude 7 events on a vertical, planar strike-slip fault show that the conditions that lead to slip heterogeneity remain in place after large events when the dynamic stress drop (initial shear stress) and breakdown work (fracture energy) are spatially heterogeneous. In these models the breakdown work is on the order of MJ/m², which is comparable to the radiated energy. The conditions producing slip heterogeneity also tend to produce narrower slip pulses, independent of any slip-rate dependence in the fault constitutive model. An alternative mechanism for generating these confined slip pulses appears to be fault constitutive models with a stronger rate dependence, which also makes them difficult to implement in numerical models. We hypothesize that self-consistent ruptures could also be produced by very narrow slip pulses propagating in a self-sustaining heterogeneous stress field with breakdown work comparable to fracture energy estimates of kJ/m².
An experimental overview of the seismic cycle
NASA Astrophysics Data System (ADS)
Spagnuolo, E.; Violay, M.; Passelegue, F. X.; Nielsen, S. B.; Di Toro, G.
2017-12-01
Earthquake nucleation is the last stage of the inter-seismic cycle, during which the fault surface evolves through the interplay of friction, healing, stress perturbations and strain events. Slip stability under rate- and state-friction has been extensively discussed in terms of loading point velocity and equivalent fault stiffness, but fault evolution towards seismic runaway under complex loading histories (e.g. slow variations of tectonic stress, stress transfer from impulsive nearby seismic events) has not yet been fully investigated. Nevertheless, short-term earthquake forecasting is based precisely on a relation between seismic productivity and loading history that remains, to date, largely unresolved. To this end we propose a novel experimental approach which makes use of closed-loop control of the shear stress, a nominally infinite equivalent slip, and transducers for continuous monitoring of acoustic emissions. This experimental setup allows us to study the stress dependency and temporal evolution of spontaneous slip events occurring on a pre-existing fault subjected to different loading histories. The experimental fault has an initial roughness which mimics a population of randomly distributed asperities, used here as a proxy for patches which are either far from or close to failure on an extended fault. Our observations suggest that an increase of shear stress may trigger either spontaneous slow slip (creep) or short-lived stick-slip bursts, eventually leading to a fast slip instability (seismic runaway) when slip rates are larger than a few cm/s. The event type and the slip rate are regulated at first order by the background shear stress, whereas the ultimate strength of the entire fault is dominated by the number of asperities close to failure under a stress step. The extrapolation of these results to natural conditions might explain the plethora of events that often characterize seismic sequences. This experimental approach also helps define a scaling relation between loading rate and cumulated slip which is relevant to the definition of a recurrence model for the seismic cycle.
Major and micro seismo-volcanic crises in the Asal Rift, Djibouti
NASA Astrophysics Data System (ADS)
Peltzer, G.; Doubre, C.; Tomic, J.
2009-05-01
The Asal-Ghoubbet Rift is located on the eastern branch of the Afar triple junction between the Arabia, Somalia, and Nubia tectonic plates. The last major seismo-volcanic crisis on this segment occurred in November 1978, involving two earthquakes of mb = 5+, a basaltic fissure eruption, the development of many open fissures across the rift, and up to 80 cm of vertical slip on the bordering faults. Geodetic leveling revealed ~2 m of horizontal opening of the rift accompanied by ~70 cm of subsidence of the inner floor, consistent with models of the elastic deformation produced by the injection of magma into a system of two dykes. InSAR data acquired at 24-day intervals during the last 12 years by the Canadian Radarsat satellite over the Asal Rift show that the two main faults activated in 1978 continue to slip, with periods of steady creep at rates of 0.3-1.3 mm/yr interrupted by sudden slip events of a few millimeters in 2000 and 2003. The slip events are coincident with bursts of micro-earthquakes distributed around and over the Fieale volcanic center in the eastern part of the Asal Rift. In both cases (the 1978 crisis and the micro-slip events), the observed geodetic moment released by fault slip exceeds the total seismic moment released by earthquakes over the same period by a few orders of magnitude. Aseismic fault slip is likely the faults' response to a changing stress field associated with a volcanic process rather than a consequence of dry friction on the faults. Sustained injection of magma (1978 crisis) and/or crustal fluids (micro-slip events) into dykes and fissures is a plausible mechanism to control fluid pressure in the basal parts of faults and trigger aseismic slip. In this respect, the micro-events observed by InSAR during a 12-year period of low activity in the rift and the 1978 seismo-volcanic episode are of the same nature.
The South Sandwich "Forgotten" Subduction Zone and Tsunami Hazard in the South Atlantic
NASA Astrophysics Data System (ADS)
Okal, E. A.; Hartnady, C. J. H.; Synolakis, C. E.
2009-04-01
While no large interplate thrust earthquakes are known at the "forgotten" South Sandwich subduction zone, historical catalogues include a number of events with reported magnitudes of 7 or more. A detailed seismological study of the largest event (27 June 1929; M (G&R) = 8.3) is presented. The earthquake relocates 80 km north of the northwestern corner of the arc, and its mechanism, inverted using the PDFM method, features normal faulting on a steeply dipping fault plane (phi, delta, lambda = 71, 70, 272 deg., respectively). The seismic moment of 1.7×10²⁸ dyn·cm supports Gutenberg and Richter's estimate and is 28 times the largest shallow CMT in the region. This event is interpreted as representing a lateral tear in the South Atlantic plate, comparable to similar earthquakes in Samoa and Loyalty, deemed "STEP faults" by Govers and Wortel [2005]. Hydrodynamic simulations were performed using the MOST method [Titov and Synolakis, 1997]. Computed deep-water tsunami amplitudes of 30 cm and 20 cm were found off the coast of Brazil and along the Gulf of Guinea (Ivory Coast, Ghana), respectively. The 1929 moment was assigned to the geometries of other known earthquakes in the region, namely outer-rise normal faulting events at the center of the arc and its southern extremity, and an interplate thrust fault at the southern corner, where the youngest lithosphere is subducted. Tsunami hydrodynamic simulations of these scenarios revealed strong focusing of tsunami wave energy by the SAR, the SWIOR and the Agulhas Rise toward Ghana, southern Mozambique and certain parts of the coast of South Africa. This study documents the potential tsunami hazard to South Atlantic shorelines from earthquakes in this region, principally normal faulting events.
NASA Astrophysics Data System (ADS)
Elliott, A. J.; Walker, R. T.; Parsons, B.; Ren, Z.; Ainscoe, E. A.; Abdrakhmatov, K.; Mackenzie, D.; Arrowsmith, R.; Gruetzner, C.
2016-12-01
In regions of the planet with long historical records, known past seismic events can be attributed to specific fault sources through the identification and measurement of single-event scarps in high-resolution imagery and topography. The level of detail captured by modern remote sensing is now sufficient to map and measure complete earthquake ruptures that were originally only sparsely mapped or overlooked entirely. We can thus extend the record of mapped earthquake surface ruptures into the preinstrumental period and capture the wealth of information preserved in the numerous historical earthquake ruptures throughout regions like Central Asia. We investigate two major late 19th and early 20th century earthquakes that are well located macroseismically but whose fault sources had proved enigmatic in the absence of detailed imagery and topography. We use high-resolution topographic models derived from photogrammetry of satellite, low-altitude, and ground-based optical imagery to map and measure the coseismic scarps of the 1889 M8.3 Chilik (Kazakhstan) and 1932 M7.6 Changma (China) earthquakes. Measurement of the scarps on the combined imagery and topography reveals the extent and slip distribution of coseismic rupture in each of these events, showing that both earthquakes involved multiple faults with variable kinematics. We use a 1-m elevation model of the Changma fault derived from Pleiades satellite imagery to map the changing kinematics of the 1932 rupture along strike. For the 1889 Chilik earthquake we use 1.5-m SPOT-6 satellite imagery to produce a regional elevation model of the fault ruptures, from which we identify three distinct, intersecting fault systems that each have >20 km of fresh, single-event scarps. Along sections of each of these faults we construct high-resolution (330 points per square meter) elevation models using quadcopter- and helikite-mounted cameras. From the detailed topography we measure single-event oblique offsets of 6-10 m, consistent with the large inferred magnitude of the 1889 Chilik event. High-resolution photogrammetric topography offers a low-cost, effective way to thoroughly map rupture traces and measure coseismic displacements for past fault ruptures, extending our record of coseismic displacements into a past rich with formerly sparsely documented ruptures.
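As an illustration of how a scarp offset can be extracted from such photogrammetric topography, the sketch below fits straight lines to the surfaces on either side of a scarp along a single profile and reports their vertical separation at the scarp midpoint. The function name, synthetic profile, and scarp bounds are illustrative assumptions, not the authors' actual workflow.

```python
import numpy as np

def vertical_separation(x, z, scarp_start, scarp_end):
    """Estimate the vertical offset of a fault scarp from a profile.

    Fits straight lines to the surfaces on either side of the scarp
    (bounded by scarp_start/scarp_end in profile coordinates), projects
    both to the scarp midpoint, and returns their separation (throw).
    """
    upper = x < scarp_start
    lower = x > scarp_end
    m_u, b_u = np.polyfit(x[upper], z[upper], 1)
    m_l, b_l = np.polyfit(x[lower], z[lower], 1)
    x_mid = 0.5 * (scarp_start + scarp_end)
    return (m_u * x_mid + b_u) - (m_l * x_mid + b_l)

# Synthetic 1-m-resolution profile: a 6 m scarp at x=50 m on a gentle slope
x = np.arange(0.0, 100.0, 1.0)
z = 0.05 * x + np.where(x < 50, 6.0, 0.0) + np.random.normal(0, 0.1, x.size)
print(vertical_separation(x, z, 45.0, 55.0))  # ~6 m
```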
NASA Astrophysics Data System (ADS)
Solaro, G.; Bonano, M.; Boncio, P.; Brozzetti, F.; Castaldo, R.; Casu, F.; Cirillo, D.; Cheloni, D.; De Luca, C.; De Nardis, R.; De Novellis, V.; Ferrarini, F.; Lanari, R.; Lavecchia, G.; Manunta, M.; Manzo, M.; Pepe, A.; Pepe, S.; Tizzani, P.; Zinno, I.
2017-12-01
The 2016 Central Italy seismic sequence started on 24th August with a MW 6.1 event, in which the intra-Apennine WSW-dipping Vettore-Gorzano extensional fault system released a destructive earthquake, causing about 300 casualties and extensive damage to the town of Amatrice and its surroundings. We generated several interferograms using ALOS and Sentinel-1A/B constellation data acquired on both ascending and descending orbits, which show that the displacement field is dominated by two main subsiding lobes of about 20 cm on the fault hanging-wall. Inverting the interferograms with the Okada analytical approach yields a model with two sources, associated with the mainshock and the most energetic aftershock. Through finite element numerical modelling that jointly exploits DInSAR deformation measurements and structural-geological data, we reconstruct the 3D source of the 2016 Amatrice normal-faulting earthquake, which fits the mainshock well. The inversion shows that the co-seismic displacement was partitioned on two distinct en echelon fault planes, which merge into a single WSW-dipping surface at the mainshock hypocentral depth (8 km). Slip peaks were higher along the southern half of the Vettore fault, lower along the northern half of the Gorzano fault, and null in the relay zone between the two faults; field evidence of co-seismic surface rupture is consistent with the reconstructed scenario. The subsequent seismic sequence was characterized by numerous aftershocks located southeast and northwest of the epicenter, which decreased in frequency and magnitude until the end of October, when a MW 5.9 event occurred on 26th October about 25 km NW of the previous mainshock. Then, on 30th October, a third large event of magnitude MW 6.5 nucleated below the town of Norcia, striking the area between the two preceding events and filling the gap between the previous ruptures. In this case too, we exploit a large dataset of DInSAR and GPS measurements to investigate the ground displacement field and to determine, using elastic dislocation modelling, the geometries and slip distributions of the causative normal fault segments.
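Once the fault geometry is fixed and elastic Green's functions are computed (for example with an Okada dislocation code), the slip inversion step described above reduces, in its linear form, to damped least squares. The sketch below is a generic illustration under that assumption; `G`, `d`, `L`, and `alpha` are assumed inputs, not the authors' implementation.

```python
import numpy as np

def invert_slip(G, d, L, alpha):
    """Damped linear inversion of d = G s for slip s on fault patches.

    G     : (n_obs, n_patch) elastic Green's functions (precomputed)
    d     : (n_obs,) line-of-sight DInSAR / GPS displacements
    L     : (n_patch, n_patch) Laplacian smoothing operator
    alpha : smoothing weight, typically chosen from a trade-off curve
    """
    A = np.vstack([G, alpha * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    slip, *_ = np.linalg.lstsq(A, b, rcond=None)
    return slip
```

In practice a non-negativity constraint on slip (e.g. scipy.optimize.nnls) is often preferred over plain least squares to suppress oscillating back-slip.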
Geotechnical aspects of the 2016 MW 6.2, MW 6.0, and MW 7.0 Kumamoto earthquakes
Kayen, Robert E.; Dashti, Shideh; Kokusho, T.; Hazarika, H.; Franke, Kevin; Oettle, N. K.; Wham, Brad; Ramirez Calderon, Jenny; Briggs, Dallin; Guillies, Samantha; Cheng, Katherine; Tanoue, Yutaka; Takematsu, Katsuji; Matsumoto, Daisuke; Morinaga, Takayuki; Furuichi, Hideo; Kitano, Yuuta; Tajiri, Masanori; Chaudhary, Babloo; Nishimura, Kengo; Chu, Chu
2017-01-01
The 2016 Kumamoto earthquakes are a series of events that began with a moment magnitude 6.2 foreshock on the Hinagu Fault on April 14, 2016, followed by a second foreshock of moment magnitude 6.0 on the Hinagu Fault on April 15, 2016, and a larger moment magnitude 7.0 mainshock on the Futagawa Fault on April 16, 2016, beneath Kumamoto City, Kumamoto Prefecture on Kyushu, Japan. These events are the strongest earthquakes recorded in Kyushu during the modern instrumental era. The earthquakes resulted in substantial damage to infrastructure, buildings, the cultural heritage of Kumamoto Castle, roads and highways, slopes, and river embankments due to earthquake-induced landsliding and debris flows. Surface fault rupture produced offset and damage to roads, buildings, river levees, and an agricultural dam. Surprisingly, given the extremely intense earthquake motions, liquefaction occurred only in a few districts of Kumamoto City and in the port areas, indicating that the volcanic soils were less susceptible to liquefaction than expected given the intensity of shaking, a significant finding from this event.
NASA Astrophysics Data System (ADS)
Andrade, V.; Rajendran, K.
2010-12-01
The response of subduction zones to large earthquakes varies along their strike, both during the interseismic and post-seismic periods. The December 26, 2004 earthquake nucleated at 3° N latitude and its rupture propagated northward along the Andaman-Sumatra subduction zone, terminating at 15° N. Rupture speed was estimated at about 2.0 km per second in the northern part, under the Andaman region, and 2.5-2.7 km per second under southern Nicobar and North Sumatra. We have examined the pre- and post-2004 seismicity to understand the stress transfer processes within the subducting plate in the Andaman (10°-15° N) and Nicobar (5°-10° N) segments. The seismicity pattern in these segments shows distinctive characteristics associated with the outer rise, accretionary prism and the spreading ridge, all of which are relatively better developed in the Andaman segment. The Ninety East Ridge and the Sumatra Fault System are significant tectonic features in the Nicobar segment. The pre-2004 seismicity in both segments conforms to steady-state conditions, wherein large earthquakes are fewer and compressive stresses dominate along the plate interface. Among the pre-2004 great earthquakes are the 1881 Nicobar and 1941 Andaman events; the former is considered to be a shallow thrust event that generated a small tsunami. Studies in other subduction zones suggest that large outer-rise tensional events follow great plate-boundary-breaking earthquakes due to the up-dip transfer of stresses within the subducting plate. The seismicity of the Andaman segment (1977-2004) concurs with steady-state stress conditions, where earthquakes occur dominantly by thrust faulting. The post-2004 seismicity shows up-dip migration along the plate interface, with a dominance of shallow normal faulting, including a few outer-rise events and some deeper (>100 km) strike-slip faulting events within the subducting plate. The September 13, 2002, Mw 6.5 thrust faulting earthquake at Diglipur (depth: 21 km) and the August 10, 2009, Mw 7.5 normal faulting earthquake near Coco Island (depth: 22 km), within the northern terminus of the 2004 rupture, are cited as examples of the alternating pre- and post-earthquake stress conditions. The major pre- and post-2004 clusters were associated with the Andaman Spreading Ridge (ASR). In the Nicobar segment, the most recent earthquake, on June 12, 2010, Mw 7.5 (focal depth: 35 km), occurred very close to the plate boundary, through left-lateral strike-slip faulting. Unlike its northern and southern counterparts, this segment features no active volcanoes; this part of the plate boundary has generated several right-lateral strike-slip earthquakes, mostly on the Sumatra Fault System. The left-lateral strike-slip faulting associated with the June 12 event, on a nearly N-S oriented fault plane consistent with the trend of the Ninety East Ridge, together with the occasional left-lateral earthquakes prior to the 2004 mega-thrust event, suggests the involvement of the Ninety East Ridge in the subduction process.
NASA Astrophysics Data System (ADS)
Okumura, K.; Kondo, H.; Toda, S.; Takada, K.; Kinoshita, H.
2006-12-01
Ten years have passed since the first official assessment of the long-term seismic risks of the Itoigawa-Shizuoka tectonic line active fault system (ISTL) in 1996. The disaster caused by the 1995 Kobe (Hyogo-ken-Nanbu) earthquake urged the Japanese government to initiate a national project to assess the long-term seismic risks of on-shore active faults using geologic information. The ISTL was the first target among the 98 significant faults, and the probability of a M7 to M8 event turned out to be the highest among them. After 10 years of continued effort to understand the ISTL, the assessment is now ready for revision. Fault mapping and segmentation: the most active segment, the Gofukuji fault (~1 cm/yr left-lateral strike slip, R=500~800 yrs), had been mapped for less than 10 km, while adjacent segments were much less active; such large slip on so short a segment was contradictory. However, detailed topographic study including Lidar survey revealed the length of the Gofukuji fault to be 25 km or more. A high slip rate with frequent earthquakes may be restricted to the Gofukuji fault, whereas the 1996 assessment modeled a frequent >100 km rupture scenario. The geometry of the fault is controversial, especially on the left-lateral strike-slip section of the ISTL. There are two models: a high-angle Middle ISTL, and a low-angle Middle ISTL with slip partitioning. However, all geomorphic and shallow geologic data support high-angle, almost pure strike slip on the faults of the Middle ISTL. CRIEPI's 3-dimensional trenching at several sites, as well as previous results, clearly demonstrated repeated pure strike-slip offsets during the past few events. In the Middle ISTL, there is no evidence of recent activity on the pre-existing low-angle thrust faults that are inferred to be active from shallow seismic surveys. The separation of high (~3000 m) mountain ranges from the low (<1000 m) basin floor requires a significant dip-slip component, but the basin-fill sediments and the geology of the range do not require vertical separation along the Gofukuji fault. The key issue for the time-dependent assessment of the Northern ISTL (east-dipping reverse faults) was the lack of reliable time constraints on past earthquakes. To solve this problem, we carried out an intensive geoslicer and boring survey of buried faults at Kisaki. Along a 35-m-long transect, we collected 150 m of complete cores from 9 geoslicer and 5 all-core boring holes, one of the most intensive surveys of a buried fault scarp below the water table. About 20 m of vertical offset of a 6000-year-old buried A-horizon is now underlain by a series of flood deposits, point bars and over-bank sediments that intercalate 2 or 3 faulting events. The precise timing and offset of each event recorded in the section will provide critical evidence for the synchroneity of earthquakes in the Northern ISTL and the Middle ISTL. The magnitude of the coming event on the ISTL is the most important but most uncertain parameter of the 1996 assessment; the structural and paleoseismological information will provide better constraints on that earthquake.
NASA Astrophysics Data System (ADS)
Pezzo, Giuseppe; Merryman Boncori, John Peter; Atzori, Simone; Antonioli, Andrea; Salvi, Stefano
2014-05-01
We use Synthetic Aperture Radar Differential Interferometry (DInSAR) and Multi-Aperture Interferometry (MAI) to constrain the sources of the three largest events of the 2008 Baluchistan (western Pakistan) seismic sequence, namely two Mw 6.4 events only 12 hours apart and an Mw 5.7 event that occurred 40 days later. The sequence took place in the Quetta Syntaxis, the most seismically active region of Baluchistan, tectonically located between the colliding Indian Plate and the Afghan block of the Eurasian Plate. Elastic dislocation modelling of the surface displacements, derived from ascending and descending ENVISAT ASAR acquisitions, yields slip distributions with peak values of 80 cm and 70 cm for the two main events on a pair of near-vertical strike-slip faults, and values up to 50 cm for the largest aftershock on a NE-SW strike-slip fault. The MAI measurements, with their high sensitivity to the north-south motion component, are crucial in this area for resolving the fault plane ambiguity of the moment tensors. We also studied the relationships between the largest earthquakes of the sequence by means of the Coulomb Failure Function, verifying the agreement of our source modelling with the stress variations induced by the October 28 earthquake on the October 29 fault plane, and with the stress variations induced by the two mainshocks on the December 09 fault plane. Our results provide insight into the deformation style of the Quetta Syntaxis, suggesting that right-lateral slip released at intermediate depths on large NW-trending fault planes is compatible with contemporaneous left-lateral activation of NE-SW minor faults at shallower depths, in agreement with a bookshelf deformation mechanism.
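For context, the Coulomb Failure Function change used in stress-transfer studies of this kind is commonly written as (a standard formulation, added here for the reader rather than quoted from the abstract):

```latex
\Delta \mathrm{CFF} = \Delta\tau + \mu' \,\Delta\sigma_n
```

where $\Delta\tau$ is the shear stress change resolved in the slip direction of the receiver fault, $\Delta\sigma_n$ the normal stress change (positive for unclamping), and $\mu'$ an effective friction coefficient, often taken near 0.4 to account for pore pressure; positive $\Delta \mathrm{CFF}$ brings the receiver fault closer to failure.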
Shelly, David R.; Ellsworth, William L.; Hill, David P.
2016-01-01
An extended earthquake swarm occurred beneath southeastern Long Valley Caldera between May and November 2014, culminating in three magnitude 3.5 earthquakes and 1145 cataloged events on 26 September alone. The swarm produced the most prolific seismicity in the caldera since a major unrest episode in 1997-1998. To gain insight into the physics controlling swarm evolution, we used large-scale cross-correlation between waveforms of cataloged earthquakes and continuous data, producing precise locations for 8494 events, more than 2.5 times the routine catalog. We also estimated magnitudes for 18,634 events (~5.5 times the routine catalog), using a principal component fit to measure waveform amplitudes relative to cataloged events. This expanded and relocated catalog reveals multiple episodes of pronounced hypocenter expansion and migration on a collection of neighboring faults. Given the rapid migration and alignment of hypocenters on narrow faults, we infer that activity was initiated and sustained by an evolving fluid pressure transient with a low-viscosity fluid, likely composed primarily of water and CO2 exsolved from underlying magma. Although both updip and downdip migration were observed within the swarm, downdip activity ceased shortly after activation, while updip activity persisted for weeks at moderate levels. Strongly migrating, single-fault episodes within the larger swarm exhibited a higher proportion of larger earthquakes (lower Gutenberg-Richter b value), which may have been facilitated by fluid pressure confined in two dimensions within the fault zone. In contrast, the later swarm activity occurred on an increasingly diffuse collection of smaller faults, with a much higher b value.
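A minimal sketch of the matched-filter detection idea used above: slide a template event waveform through continuous data and declare a detection where the normalized correlation exceeds a noise-based threshold. The brute-force loop, function name, and the 8×MAD threshold are illustrative assumptions; the authors' implementation additionally uses multi-channel stacking, relocation, and FFT-based correlation.

```python
import numpy as np

def matched_filter(template, data, threshold=8.0):
    """Flag samples where the normalized cross-correlation between a
    template waveform and continuous data exceeds `threshold` times
    the median absolute deviation (MAD) of the correlation trace."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.empty(len(data) - n + 1)
    for i in range(len(cc)):
        w = data[i:i + n]
        s = w.std()
        cc[i] = (t * (w - w.mean())).sum() / (n * s) if s > 0 else 0.0
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.where(np.abs(cc) > threshold * mad)[0], cc
```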
5000 yr of paleoseismicity along the southern Dead Sea fault
NASA Astrophysics Data System (ADS)
Klinger, Y.; Le Béon, M.; Al-Qaryouti, M.
2015-07-01
The 1000-km-long left-lateral Dead Sea fault is a major tectonic structure of the eastern Mediterranean basin, bounding the Arabian Plate to the west. The fault is located in a region with an exceptionally long and rich historical record, making it possible to document historical seismicity catalogues with an unprecedented level of detail. However, while the earthquake time series is well documented, the location and lateral extent of past earthquakes often remain difficult to establish from historical testimonies alone. We excavated a palaeoseismic trench at a site located in a kilometre-size extensional jog south of the Dead Sea, in the Wadi Araba. Based on the stratigraphy exposed in the trench, we present evidence for nine earthquakes that produced surface ruptures during a time period spanning 5000 yr. An abundance of datable material allows us to tie the five most recent events to historical earthquakes with little ambiguity, and to constrain the possible locations of these historical earthquakes. The events identified at our site are the 1458 C.E., 1212 C.E. and 1068 C.E. earthquakes, one event during the 8th century seismic crisis, and the 363 C.E. earthquake. Four other events are also identified, whose correlation with historical events remains more speculative. The magnitude of earthquakes is difficult to assess based on evidence at one site only. The deformation observed in the excavation, however, allows us to discriminate between two classes of events that produced vertical deformation differing by one order of magnitude in amplitude, suggesting that we can distinguish earthquakes that started or stopped at our site from earthquakes that potentially ruptured most of the Wadi Araba fault. The time distribution of earthquakes during the past 5000 yr is uneven. The early period shows little activity, with return intervals of ~500 yr or longer. It is followed by a ~1500-yr-long period with more frequent events, about every 200 yr. Then, for the past ~550 yr, the fault has switched back to a quieter mode, with no significant earthquake along the entire southern part of the Dead Sea fault between the Dead Sea and the Gulf of Aqaba. We computed the Coefficient of Variation for our site and three other sites along the Dead Sea fault south of Lebanon to compare the time distribution of earthquakes at different locations along the fault. With one exception, at a site located next to Lake Tiberias, the sites consistently show some temporal clustering at the scale of a few thousand years.
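The Coefficient of Variation mentioned above is simply the standard deviation of inter-event times divided by their mean. The sketch below computes it for the five historically tied events at this site; note that the 8th-century event is assigned a nominal 750 C.E. date purely for illustration.

```python
import numpy as np

def recurrence_cov(event_ages):
    """Coefficient of Variation of inter-event times: CoV = std / mean.
    CoV << 1 suggests quasi-periodic recurrence, CoV ~ 1 Poisson-like
    behaviour, and CoV >> 1 temporal clustering of earthquakes."""
    intervals = np.diff(np.sort(event_ages))
    return intervals.std(ddof=1) / intervals.mean()

# Ages (yr C.E.) of the five historically tied Wadi Araba events;
# the "8th century" event is nominally placed at 750 C.E. (assumption)
print(recurrence_cov([363, 750, 1068, 1212, 1458]))  # ~0.4
```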
STRIDE: Species Tree Root Inference from Gene Duplication Events.
Emms, David M; Kelly, Steven
2017-12-01
The correct interpretation of any phylogenetic tree is dependent on that tree being correctly rooted. We present STRIDE, a fast, effective, and outgroup-free method for identification of gene duplication events and species tree root inference in large-scale molecular phylogenetic analyses. STRIDE identifies sets of well-supported in-group gene duplication events from a set of unrooted gene trees, and analyses these events to infer a probability distribution over an unrooted species tree for the location of its root. We show that STRIDE correctly identifies the root of the species tree in multiple large-scale molecular phylogenetic data sets spanning a wide range of timescales and taxonomic groups. We demonstrate that the novel probability model implemented in STRIDE can accurately represent the ambiguity in species tree root assignment for data sets where information is limited. Furthermore, application of STRIDE to outgroup-free inference of the origin of the eukaryotic tree resulted in a root probability distribution that provides additional support for leading hypotheses for the origin of the eukaryotes. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
NASA Technical Reports Server (NTRS)
Shontz, W. D.; Records, R. M.; Antonelli, D. R.
1992-01-01
The focus of this project is on alerting pilots to impending events in such a way as to provide the additional time required for the crew to make critical decisions concerning non-normal operations. The project addresses pilots' need for support in diagnosis and trend monitoring of faults as they affect decisions that must be made within the context of the current flight. Monitoring and diagnostic modules developed under the NASA Faultfinder program were restructured and enhanced using input data from an engine model and real engine fault data. Fault scenarios were prepared to support knowledge base development activities on the MONITAUR and DRAPhyS modules of Faultfinder. An analysis of the information requirements for fault management was included in each scenario. A conceptual framework was developed for systematic evaluation of the impact of context variables on pilot action alternatives as a function of event/fault combinations.
Global surface displacement data for assessing variability of displacement at a point on a fault
Hecker, Suzanne; Sickler, Robert; Feigelson, Leah; Abrahamson, Norman; Hassett, Will; Rosa, Carla; Sanquini, Ann
2014-01-01
This report presents a global dataset of site-specific surface-displacement data on faults. We have compiled estimates of successive displacements attributed to individual earthquakes, mainly paleoearthquakes, at sites where two or more events have been documented, as a basis for analyzing inter-event variability in surface displacement on continental faults. An earlier version of this composite dataset was used in a recent study relating the variability of surface displacement at a point to the magnitude-frequency distribution of earthquakes on faults, and to hazard from fault rupture (Hecker and others, 2013). The purpose of this follow-on report is to provide potential data users with an updated comprehensive dataset, largely complete through 2010 for studies in English-language publications, as well as in some unpublished reports and abstract volumes.
NASA Astrophysics Data System (ADS)
He, Zhongtai
2017-04-01
The two eastern segments of the Sertengshan piedmont fault have moved considerably during the Holocene, and several paleoseismic events have occurred along the fault since 30 ka BP. Paleoearthquake studies have been advanced by excavating new trenches and combining the results with the findings of previous studies. Comprehensive analyses of the trenches revealed that 6 paleoseismic events have occurred on the Kuoluebulong segment since approximately 30 ka BP, within the following successive time periods: 19.01-37.56 ka, 18.73 ka, 15.03-15.86 ka, 10.96 ka, 5.77-6.48 ka and 2.32 ka BP. The analyses also revealed that 6 paleoseismic events have occurred on the Dashetai segment since approximately 30 ka BP, with successive occurrence times of 29.07 ka, 19.12-28.23 ka, 13.92-15.22 ka, 9.38-9.83 ka, 6.08-8.36 ka and 3.59 ka BP. The results indicate quasi-periodic recurrence on the two segments, with a mean recurrence interval of approximately 4000 years. The consistent timing of the 6 events between the two segments suggests that the segments might conform to a cascade rupturing model. The latest event on the Kuoluebulong segment of the Sertengshan piedmont fault is the historical M8 earthquake of November 11, 7 BC, which was recorded in a large number of Chinese historical texts.
Machine Learning of Fault Friction
NASA Astrophysics Data System (ADS)
Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.
2017-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprising fault blocks surrounding fault gouge made of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity, and the raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick-slip and slow-slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is ultimately to scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774; Rouet-Leduc, B. et al., Friction laws derived from the acoustic emissions of a laboratory fault by machine learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field; Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, tectonic tremor, and the brittle-to-ductile transition zone: what mechanisms control the diversity of slow and fast earthquakes?
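The workflow above, regressing shear stress on statistical features of windowed AE data with a tree ensemble, can be sketched as follows. The synthetic features and target here are stand-ins; the real study extracts features such as variance and kurtosis from AE windows and trains on the early part of an experiment before testing on later, unseen data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))    # per-window AE features (synthetic stand-in)
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=2000)  # "shear stress"

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:1500], y[:1500])            # train on the early part of the run
print(model.score(X[1500:], y[1500:]))   # R^2 on unseen later data
print(model.feature_importances_)        # which AE features carry the signal
```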
NASA Astrophysics Data System (ADS)
Griffith, W. A.; Nielsen, S.; di Toro, G.; Pollard, D. D.; Pennacchioni, G.
2007-12-01
We estimate the coseismic static stress drop on small exhumed strike-slip faults in the Mt. Abbot quadrangle of the central Sierra Nevada (California). The sub-vertical strike-slip faults cut ~85 Ma granodiorite, were exhumed from 7-10 km depth, and were chosen because they are exposed along their entire lengths, ranging from 8 to 13 m. Net slip is estimated using offset aplite dikes and shallowly plunging slickenlines on the fault surfaces. The faults show a record of progressive strain localization: slip initially nucleated on joints and accumulated from ductile shearing (quartz-bearing mylonites) to brittle slipping (epidote-bearing cataclasites). Thin (< 1 mm) pseudotachylytes associated with the cataclasites have been identified along some faults, suggesting that brittle slip may have been seismic. The brittle contribution to slip may be distinguished from the ductile shearing because epidote-filled, rhombohedral dilational jogs opened at bends and step-overs during brittle slip, are distributed periodically along the length of the faults. We argue that brittle slip occurred along the measured fault lengths in single slip events based on several pieces of evidence. 1) Epidote crystals are randomly oriented and undeformed within dilational jogs, indicating they did not grow during aseismic slip and were not broken after initial opening and precipitation. 2) Opening-mode splay cracks are concentrated near fault tips rather than the fault center, suggesting that the reactivated faults ruptured all at once rather than in smaller slip patches. 3) The fact that the opening lengths of the dilational jogs vary systematically along the fault traces suggests that brittle reactivation occurred in a single slip event along the entire fault rather than in multiple slip events. This unique combination of factors distinguishes this study from previous attempts to estimate stress drop from exhumed faults because we can constrain the coseismic rupture length and slip. The static stress drop is calculated for a circular fault using the length of the mapped faults and their slip distributions as well as the shear modulus of the host granodiorite measured in the laboratory. Calculations yield stress drops on the order of 100-200 MPa, one to two orders of magnitude larger than typical seismological estimates. The studied seismic ruptures occurred along small, deep-seated faults (10 km depth), and, given the fault mineral filling (quartz-bearing mylonites) these were "strong" faults. Our estimates are consistent with static stress drops estimated by Nadeau and Johnson (1998) for small repeated earthquakes.
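The circular-fault estimate referenced above follows the standard Eshelby (1957) relation, $\Delta\sigma = (7\pi/16)\,\mu\,\bar{u}/r$. A quick numerical check with assumed values (the slip and radius below are illustrative, not taken from the abstract):

```python
import math

# Static stress drop for a circular crack (Eshelby, 1957):
#   delta_sigma = (7 * pi / 16) * mu * (mean_slip / radius)
mu = 30e9          # shear modulus of granodiorite, Pa (assumed)
mean_slip = 0.02   # mean coseismic slip, m (illustrative)
radius = 5.0       # fault radius, m (half of a ~10 m fault length)
delta_sigma = (7 * math.pi / 16) * mu * mean_slip / radius
print(delta_sigma / 1e6, "MPa")  # ~165 MPa, within the reported 100-200 MPa
```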
Persistent fine-scale fault structures control rupture development in Parkfield, CA.
NASA Astrophysics Data System (ADS)
Perrin, C.; Waldhauser, F.; Scholz, C. H.
2016-12-01
We investigate the fine-scale geometry and structure of the San Andreas Fault (SAF) near Parkfield, CA, and their role in the development of the 1966 and 2004 M6 earthquakes. Both events broke the fault mainly unilaterally, with similar rupture lengths (~30 km) but in opposite directions. Seismic slip occurred in a narrow zone between 5 and 10 km depth, as outlined by the concentration of aftershocks along the edge of the slip area. The across-fault distribution of the 2004 aftershocks shows a rapid decrease in event density away from the fault core. The damage zone is narrower in the Parkfield section (a few hundred meters) than in the creeping section (~1 km). We observe a similar but broader distribution during the interseismic periods. This implies that stress accumulates in a volume around the fault during interseismic periods, whereas coseismic deformation is more localized on the mature SAF. Large aftershocks are concentrated at both rupture tips, which are characterized by strong heterogeneities in the fault structure at the surface and at depth: (i) in the south, near Gold Hill-Cholame, a large releasing bend (>25°) separates the Parkfield section from the southern section of the SAF; (ii) in the north, at Middle Mountain, the surface fault trace goes through an ancient restraining step-over connecting the Parkfield and creeping sections. Fine-scale analysis of the 2004 aftershocks reveals a change in fault dip and local variations of fault strike (up to 25°) beneath Middle Mountain, in good agreement with focal mechanisms, which show oblique normal and reverse faulting. We observe these variations during the interseismic periods both before and after the 2004 event, suggesting that the structural heterogeneities have persisted through at least two earthquake cycles. These heterogeneities act as barriers to the propagation of moderate-size earthquake ruptures at Parkfield, but also as stress concentrations where rupture initiates.
The repetition of large-earthquake ruptures.
Sieh, K
1996-01-01
This survey of well-documented repeated fault rupture confirms that some faults have exhibited a "characteristic" behavior during repeated large earthquakes--that is, the magnitude, distribution, and style of slip on the fault has repeated during two or more consecutive events. In two cases faults exhibit slip functions that vary little from earthquake to earthquake. In one other well-documented case, however, fault lengths contrast markedly for two consecutive ruptures, but the amount of offset at individual sites was similar. Adjacent individual patches, 10 km or more in length, failed singly during one event and in tandem during the other. More complex cases of repetition may also represent the failure of several distinct patches. The faults of the 1992 Landers earthquake provide an instructive example of such complexity. Together, these examples suggest that large earthquakes commonly result from the failure of one or more patches, each characterized by a slip function that is roughly invariant through consecutive earthquake cycles. The persistence of these slip-patches through two or more large earthquakes indicates that some quasi-invariant physical property controls the pattern and magnitude of slip. These data seem incompatible with theoretical models that produce slip distributions that are highly variable in consecutive large events. PMID:11607662
Dynamic rupture modeling of thrust faults with parallel surface traces.
NASA Astrophysics Data System (ADS)
Peshette, P.; Lozos, J.; Yule, D.
2017-12-01
Fold and thrust belts (such as those found in the Himalaya or California Transverse Ranges) consist of many neighboring thrust faults in a variety of geometries. Active thrusts within these belts individually contribute to regional seismic hazard, but further investigation is needed regarding the possibility of multi-fault rupture in a single event. Past analyses of historic thrust surface traces suggest that rupture within a single event can jump up to 12 km. There is also observational precedent for long distance triggering between subparallel thrusts (e.g. the 1997 Harnai, Pakistan events, separated by 50 km). However, previous modeling studies find a maximum jumping rupture distance between thrust faults of merely 200 m. Here, we present a new dynamic rupture modeling parameter study that attempts to reconcile these differences and determine which geometrical and stress conditions promote jumping rupture. We use a community verified 3D finite element method to model rupture on pairs of thrust faults with parallel surface traces. We vary stress drop and fault strength to determine which conditions produce jumping rupture at different dip angles and different separations between surface traces. This parameter study may help to understand the likelihood of jumping rupture in real-world thrust systems, and may thereby improve earthquake hazard assessment.
Armenia-Georgia Trans-Boundary Fault: An Example of International Cooperation in the Caucasus
NASA Astrophysics Data System (ADS)
Karakhanyan, A.; Avagyan, A.; Avanesyan, M.; Elashvili, M.; Godoladze, T.; Javakishvili, Z.; Korzhenkov, A.; Philip, S.; Vergino, E. S.
2012-12-01
Studies of a trans-boundary active fault that cuts across the border between Armenia and Georgia in the area of the Javakheti volcanic highland have been conducted since 2007, under the ISTC 1418 and NATO SfP 983284 projects. The Javakheti Fault is oriented north-northwest and consists of individual segments displaying a clear left-stepping trend. The fault mechanism is right-lateral strike-slip with a normal-fault component. The fault has formed distinct scarps, deforming young volcanic and glacial sediments. The maximum displacements are recorded in the central part of the fault and range up to 150-200 m of normal-fault slip and 700-900 m of right-lateral strike slip. On both flanks, the fault scarps have a younger appearance, and displacements there decrease to tens of meters. The fault length is 80 km, suggesting a maximum magnitude of 7.3 according to the Wells and Coppersmith (1994) relation. Many minor earthquakes and a few stronger events (1088, Mw=6.4; 1899, Mw=6.4; 1912, Mw=6.4; and 1925, Mw=5.6) are associated with the fault. In 2011/2012, we conducted paleoseismological and archeoseismological studies of the fault. Paleoseismological trenches were excavated in the central part of the fault and on its northern and southern flanks. The trenches record at least three strong ancient earthquakes; radiocarbon age estimates for these events are pending. The Javakheti Fault may pose considerable seismic hazard for the trans-boundary areas of Armenia and Georgia, as its northern flank is located at a distance of 15 km from the Baku-Ceyhan pipeline.
2000-06-01
real-time operating system and design of a human-computer interface (HCI) for a triple modular redundant (TMR) fault-tolerant microprocessor for use in space-based applications. One disadvantage of using COTS hardware components is their susceptibility to the radiation effects present in the space environment and, specifically, to radiation-induced single-event upsets (SEUs). In the event of an SEU, a fault-tolerant system can mitigate the effects of the upset and continue to process from the last known correct system state. The TMR basic hardware
Intraplate seismicity along the Gedi Fault in Kachchh rift basin of western India
NASA Astrophysics Data System (ADS)
Joshi, Vishwa; Rastogi, B. K.; Kumar, Santosh
2017-11-01
The Kachchh rift basin is located on the western continental margin of India and has a history of large to moderate intraplate earthquakes of M ≥ 5. During the past two centuries, two large earthquakes of Mw 7.8 (1819) and Mw 7.7 (2001) have occurred in the Kachchh region, the latter with an epicenter near Bhuj. The aftershock activity of the 2001 Bhuj earthquake is still ongoing, with migration of seismicity. Initially, epicenters migrated towards the east and northeast within the Kachchh region, but since 2007 activity has also migrated to the south. The triggered faults are mostly within 100 km, and some up to 200 km, of the epicentral area of the mainshock. Most of these faults trend E-W, and some are transverse. Some faults generate earthquakes down to Moho depth, whereas others are active only within the upper crust. The Gedi Fault, situated about 50 km northeast of the 2001 mainshock epicenter, triggered the largest of these earthquakes, of Mw 5.6, in 2006. We have carried out detailed seismological studies to evaluate the seismic potential of the Gedi Fault. We relocated 331 earthquakes with HypoDD to reduce location errors, and used the relocated events to estimate the b value, p value, and fractal correlation dimension Dc of the fault zone. The present study indicates that all the events along the Gedi Fault are shallow, with focal depths of less than 20 km. The estimated b value shows that the Gedi aftershock sequence can be classified as a Mogi type 2 sequence, and the p value suggests a relatively slow decay of aftershocks. The fault plane solutions of selected events of Mw > 3.5 are examined, and the activity of the Gedi Fault is assessed from the results of active fault studies as well as GPS and InSAR results. All these results are critically examined to evaluate the material properties and seismic potential of the Gedi Fault, which may be useful for seismic hazard assessment in the region.
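The b value cited above is conventionally estimated with the Aki-Utsu maximum-likelihood formula. The sketch below is a generic illustration of that estimator (not the authors' code); the completeness magnitude and bin width are assumed inputs.

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Maximum-likelihood b value (Aki, 1965; Utsu, 1966) for events at
    or above the completeness magnitude m_c, with Utsu's correction for
    magnitude binning dm. Larger b means relatively fewer big events."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Example: a synthetic Gutenberg-Richter catalog with b = 1
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
print(b_value_mle(mags, m_c=2.0))  # ~1.0
```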
NASA Astrophysics Data System (ADS)
Baratin, Laura-May; Chamberlain, Calum J.; Townend, John; Savage, Martha K.
2018-02-01
Characterising the seismicity associated with slow deformation in the vicinity of the Alpine Fault may provide constraints on the stresses acting on a major transpressive margin prior to an anticipated great (≥M8) earthquake. Here, we use recently detected tremor and low-frequency earthquakes (LFEs) to examine how slow tectonic deformation is loading the Alpine Fault late in its typical ∼300-yr seismic cycle. We analyse a continuous seismic dataset recorded between 2009 and 2016 using a network of 10-13 short-period seismometers, the Southern Alps Microearthquake Borehole Array. Fourteen primary LFE templates are used in an iterative matched-filter and stacking routine, allowing the detection of similar signals corresponding to LFE families sharing common locations. This yields an 8-yr catalogue containing 10,000 LFEs that are combined for each of the 14 LFE families using phase-weighted stacking to produce signals with the highest possible signal-to-noise ratios. We show that LFEs occur almost continuously during the 8-yr study period and highlight two types of LFE distributions: (1) discrete behaviour with an inter-event time exceeding 2 min; (2) burst-like behaviour with an inter-event time below 2 min. We interpret the discrete events as small-scale frequent deformation on the deep extent of the Alpine Fault and LFE bursts (corresponding in most cases to known episodes of tremor or large regional earthquakes) as brief periods of increased slip activity indicative of slow slip. We compute improved non-linear earthquake locations using a 3-D velocity model. LFEs occur below the seismogenic zone at depths of 17-42 km, on or near the hypothesised deep extent of the Alpine Fault. The first estimates of LFE focal mechanisms associated with continental faulting, in conjunction with recurrence intervals, are consistent with quasi-continuous shear faulting on the deep extent of the Alpine Fault.
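The phase-weighted stacking step named above (Schimmel & Paulssen, 1997) down-weights samples where the instantaneous phases of the aligned traces disagree. A minimal sketch, assuming the LFE waveforms are already aligned in a (n_traces, n_samples) array:

```python
import numpy as np
from scipy.signal import hilbert

def phase_weighted_stack(traces, nu=2):
    """Phase-weighted stack of aligned waveforms.

    The linear stack is scaled by the phase coherence of the traces,
    computed from the analytic signal: 1 for perfectly coherent
    arrivals, near 0 for incoherent noise. `nu` sharpens the weight.
    """
    analytic = hilbert(traces, axis=1)
    phasors = analytic / np.abs(analytic)           # unit phasors e^{i*phi}
    coherence = np.abs(phasors.mean(axis=0)) ** nu  # 0 (random) .. 1 (coherent)
    return traces.mean(axis=0) * coherence
```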
Detachment Fault Behavior Revealed by Micro-Seismicity at 13°N, Mid-Atlantic Ridge
NASA Astrophysics Data System (ADS)
Parnell-Turner, R. E.; Sohn, R. A.; MacLeod, C. J.; Peirce, C.; Reston, T. J.; Searle, R. C.
2016-12-01
Under certain tectono-magmatic conditions, crustal accretion and extension at slow-spreading mid-ocean ridges is accommodated by low-angle detachment faults. While it is now generally accepted that oceanic detachments initiate on steeply dipping faults that rotate to low-angles at shallow depths, many details of their kinematics remain unknown. Debate has continued between a "continuous" model, where a single, undulating detachment surface underlies an entire ridge segment, and a "discrete" (or discontinuous) model, where detachments are spatially restricted and ephemeral. Here we present results from a passive microearthquake study of detachment faulting at the 13°N region of the Mid-Atlantic Ridge. This study is one component of a joint US-UK seismic study to constrain the sub-surface structure and 3-dimensional geometry of oceanic detachment faults. We detected over 300,000 microearthquakes during a 6-month deployment of 25 ocean bottom seismographs. Events are concentrated in two 1-2 km wide ridge-parallel bands, located between the prominent corrugated detachment fault surface at 13°20'N and the present-day spreading axis, separated by a 1-km wide patch of reduced seismicity. These two bands are 7-8 km in length parallel to the ridge and are clearly limited in spatial extent to the north and south. Events closest to the axis are generally at depths of 6-8 km, while those nearest to the oceanic detachment fault are shallower, at 4-6 km. There is an overall trend of deepening seismicity northwards, with events occurring progressively deeper by 4 km over an along-axis length of 8 km. Events are typically very small, and range in local magnitude from ML -1 to 3. Focal mechanisms indicate two modes of deformation, with extension nearest to the axis and compression at shallower depths near to the detachment fault termination.
NASA Astrophysics Data System (ADS)
Castro, J.; Martin-Rojas, I.; Medina-Cascales, I.; García-Tortosa, F. J.; Alfaro, P.; Insua-Arévalo, J. M.
2018-06-01
This paper on the Baza Fault provides the first palaeoseismic data from trenches in the central sector of the Betic Cordillera (S Spain), one of the most tectonically active areas of the Iberian Peninsula. From the palaeoseismological data we constructed time-stratigraphic OxCal models that yield probability density functions (PDFs) for the timing of individual palaeoseismic events. We analysed PDF overlap to quantitatively correlate the trench walls and site events into a single earthquake chronology, and assembled a surface-rupturing history of the Baza Fault for the last ca. 45,000 years. We postulate six alternative surface-rupturing histories comprising 8-9 fault-wide earthquakes. We calculated fault-wide earthquake recurrence intervals using Monte Carlo simulation, which yielded a 4750-5150 yr recurrence interval. Finally, we compared our results with those obtained from empirical relationships. Our results provide a basis for future analyses of other active normal faults in this region, and will be essential for improving earthquake-probability assessments in Spain, where palaeoseismic data are scarce.
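A Monte Carlo recurrence estimate of the kind described can be sketched as follows: draw one age per event from each event-timing PDF, average the resulting intervals, and repeat. The input format and function name are assumptions standing in for the authors' OxCal-based workflow.

```python
import numpy as np

def recurrence_mc(event_pdfs, n_draws=20_000, rng=None):
    """Monte Carlo mean recurrence interval from per-event age PDFs.

    event_pdfs: list of (ages, probs) pairs, one per earthquake, e.g.
    discretized posterior distributions exported from OxCal (assumed).
    Each draw picks one age per event, sorts them, and averages the
    inter-event intervals; returns the mean and a 95% range.
    """
    rng = rng or np.random.default_rng(0)
    means = np.empty(n_draws)
    for i in range(n_draws):
        draws = np.sort([rng.choice(a, p=p) for a, p in event_pdfs])
        means[i] = np.diff(draws).mean()
    return means.mean(), np.percentile(means, [2.5, 97.5])
```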
Nickel-Hydrogen Battery Fault Clearing at Low State of Charge
NASA Technical Reports Server (NTRS)
Lurie, C.
1997-01-01
Fault clearing currents were achieved and maintained at discharge rates from C/2 to C/3 at high and low states of charge. The fault-clearing plateau voltage is a strong function of discharge current and of the voltage prior to the fault-clearing event, and a weak function of state of charge. Voltage performance for the range of conditions reported is summarized.
On fault evidence for a large earthquake in the late fifteenth century, Eastern Kunlun fault, China
NASA Astrophysics Data System (ADS)
Junlong, Zhang
2017-11-01
The EW-trending Kunlun Fault System (KFS) is one of the major left-lateral strike-slip faults on the Tibetan Plateau, forming the northern boundary of the Bayan Har block. Heretofore, no evidence had been provided for the most recent event (MRE) on the 70-km-long eastern section of the KFS. The study area lies north of the Zoige Basin (northwest Sichuan province), and the fault trace was characterized by field mapping. Several trenches were excavated and revealed evidence of repeated events in the late Holocene. The fault zone is characterized by a distinct 30-60-cm-thick clay fault gouge layer juxtaposing hanging wall bedrock over unconsolidated late Holocene footwall colluvium and alluvium. The fault zone, hanging wall, and footwall are conformably overlain by undeformed post-MRE deposits. Samples of charred organic material were obtained from the top of the faulted sediments and the base of the unfaulted sediments. Modeling of the sample ages yielded a calibrated 2σ radiocarbon age for the earthquake of A.D. 1489 ± 82. Combined with the historical earthquake record, the MRE is dated to A.D. 1488. Based on the more than 50-km-long surface rupture, the magnitude of this event is nearly Mw 7.0. Our data suggest that a 200-km-long seismic gap can be further divided into the Luocha and Maqu sections. For the last 1000 years the Maqu section has been inactive; hence it is likely that the end of its seismic cycle is approaching and that there is a potentially significant seismic hazard in eastern Tibet.
Choy, G.L.; Boatwright, J.
2004-01-01
Displacement, velocity, and velocity-squared records of P and SH body waves recorded at teleseismic distances are analyzed to determine the rupture characteristics of the Denali fault, Alaska, earthquake of 3 November 2002 (MW 7.9, Me 8.1). Three episodes of rupture can be identified from broadband (~0.1-5.0 Hz) waveforms. The Denali fault earthquake started as a MW 7.3 thrust event. Subsequent right-lateral strike-slip rupture events with centroid depths of 9 km occurred about 22 and 49 sec later. The teleseismic P waves are dominated by energy at intermediate frequencies (0.1-1 Hz) radiated by the thrust event, while the SH waves are dominated by energy at lower frequencies (0.05-0.2 Hz) radiated by the strike-slip events. The strike-slip events exhibit strong directivity in the teleseismic SH waves. Correcting the recorded P-wave acceleration spectra for the effect of the free surface yields an estimate of 2.8 × 10^15 N m for the energy radiated by the thrust event. Correcting the recorded SH-wave acceleration spectra similarly yields an estimate of 3.3 × 10^16 N m for the energy radiated by the two strike-slip events. The average rupture velocity for the strike-slip rupture process is 1.1β-1.2β, where β is the shear-wave speed. The strike-slip events were located 90 and 188 km east of the epicenter. The rupture length over which significant or resolvable energy is radiated is thus far shorter than the 340-km fault length over which surface displacements were observed. However, the seismic moment released by these three events, 4 × 10^20 N m, was approximately half the seismic moment determined from very low-frequency analyses of the earthquake. The difference in seismic moment can be reasonably attributed to slip on fault segments that did not radiate significant or coherent seismic energy. These results suggest that very large and great strike-slip earthquakes can generate stress pulses that rapidly produce substantial slip with negligible stress drop and little discernible radiated energy on fault segments distant from the initial point of nucleation. The existence of this energy-deficient rupture mode has important implications for the evaluation of the seismic hazard of very large strike-slip earthquakes.
Seismic potential of weak, near-surface faults revealed at plate tectonic slip rates
Ikari, Matt J.; Kopf, Achim J.
2017-01-01
The near-surface areas of major faults commonly contain weak, phyllosilicate minerals, which, based on laboratory friction measurements, are assumed to creep stably. However, it is now known that shallow faults can experience tens of meters of earthquake slip and also host slow and transient slip events. Laboratory experiments are generally performed at least two orders of magnitude faster than plate tectonic speeds, which are the natural driving conditions for major faults; the absence of experimental data for natural driving rates represents a critical knowledge gap. We use laboratory friction experiments on natural fault zone samples at driving rates of centimeters per year to demonstrate that there is abundant evidence of unstable slip behavior that was not previously predicted. Specifically, weak clay-rich fault samples generate slow slip events (SSEs) and have frictional properties favorable for earthquake rupture. Our work explains growing field observations of shallow SSE and surface-breaking earthquake slip, and predicts that such phenomena should be more widely expected. PMID:29202027
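The stability framing behind statements such as "frictional properties favorable for earthquake rupture" is usually the rate-and-state (Dieterich-Ruina) friction law; a standard form, given here for context rather than quoted from the paper, is:

```latex
\mu = \mu_0 + a\,\ln\!\frac{V}{V_0} + b\,\ln\!\frac{V_0\,\theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}
```

where $V$ is slip velocity, $\theta$ a state variable, and $D_c$ a characteristic slip distance. Velocity-weakening behavior, $a - b < 0$, permits stick-slip instability, while $a - b > 0$ favors stable creep; slow slip events are commonly interpreted as occurring near that boundary.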
Real-Time Fault Classification for Plasma Processes
Yang, Ryan; Chen, Rongshun
2011-01-01
Plasma process tools, which usually cost several million US dollars, are often used in the semiconductor etching process. If the plasma process is halted by a process fault, productivity is reduced and cost increases. In order to maximize product/wafer yield and tool productivity, timely and effective fault detection is required in a plasma reactor. The classification of fault events can help users quickly identify fault processes and thus save downtime of the plasma tool. In this work, optical emission spectroscopy (OES) is employed as the metrology sensor for in-situ process monitoring. The matching-rate indicator from our previous work (Yang, R.; Chen, R.S. Sensors 2010, 10, 5703–5723), split into twelve match rates by spectrum band, is used to detect the fault process. Based on the match data, a real-time classification of plasma faults is achieved by a novel method developed in this study. Experiments were conducted to validate the novel fault classification. From the experimental results, we may conclude that the proposed method is feasible, inasmuch as the overall accuracy rate of the classification for fault event shifts is 27 out of 28, or about 96.4%. PMID:22164001
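One simple way to turn twelve band-wise match rates into a fault class is a nearest-centroid rule, shown below as a hypothetical stand-in for the paper's classifier (the function name, distance metric, and data layout are all assumptions):

```python
import numpy as np

def classify_fault(match_rates, centroids):
    """Assign a fault event to the class whose stored centroid of
    twelve band-wise OES match rates is nearest in Euclidean distance.

    match_rates: (12,) match-rate vector for the current process run
    centroids:   dict {fault_label: (12,) mean match-rate vector}
    """
    match_rates = np.asarray(match_rates, dtype=float)
    dists = {label: np.linalg.norm(match_rates - np.asarray(c))
             for label, c in centroids.items()}
    return min(dists, key=dists.get)  # label with the smallest distance
```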
1983-04-01
Diagnostic/fault-isolation devices and operation of the cannibalization point are discussed. The use of diagnostic software based on a "fault tree" representation of the M65 TMS to bridge the gap in diagnostics capability was demonstrated in 1980. The IFF (identification friend or foe) subsystem has much lower reliability than TSQ-73-peculiar hardware; thus, as in other examples, reported readiness does not reflect actual capability.
AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment
2014-10-01
Preliminary System Safety Assessment (PSSA), System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs). The report also covers the fault tree analysis generator, mapping to the OpenFTA file format and to a generic XML format, AADL-to-FTA mapping rules, and related issues.
Gold, Ryan D.; Reitman, Nadine G.; Briggs, Richard; Barnhart, William; Hayes, Gavin; Wilson, Earl M.
2015-01-01
The 24 September 2013 Mw7.7 Balochistan, Pakistan earthquake ruptured a ~ 200 km-long stretch of the Hoshab fault in southern Pakistan and produced the second-largest lateral surface displacement observed for a continental strike-slip earthquake. We remotely measured surface deformation associated with this event using high-resolution (0.5 m) pre- and post-event satellite optical imagery. We document left lateral, near-field, on-fault offsets (< 10 m from fault) using 309 laterally offset piercing points, such as streams, terrace risers, and roads. Peak near-field displacement is 13.6 +2.5/−3.4 m. We characterize off-fault deformation by measuring medium- (< 350 m from fault) and far-field (> 350 m from fault) displacement using manual (259 measurements) and automated image cross-correlation methods, respectively. Off-fault peak lateral displacement values are ~ 15 m and exceed on-fault displacement magnitudes for ~ 85% of the rupture length. Our observations suggest that for this rupture, coseismic surface displacement typically increases with distance away from the surface trace of the fault; however, nearly 100% of total surface displacement occurs within a few hundred meters of the primary fault trace. Furthermore, off-fault displacement accounts for, on average, 28% of the total displacement but exhibits a highly heterogeneous along-strike pattern. The best agreement between near-field and far-field displacements generally corresponds to the narrowest fault zone widths. Our analysis demonstrates significant and heterogeneous mismatches between on- and off-fault coseismic deformation, and we conclude that this phenomenon should be considered in hazard models based on geologically determined on-fault slip rates.
Unsupervised Learning — A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To process massive fault data promptly and provide accurate diagnosis results automatically, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used in these studies. These methods can detect rolling bearing failures effectively, but achieving good detection results often requires a large number of training samples. Against this background, a novel clustering method is proposed in this paper. The method is able to find the correct number of clusters automatically, and its effectiveness is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types of small samples, while the diagnosis results remain relatively accurate even for massive samples.
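The abstract does not name the clustering algorithm, so the following is only a minimal sketch of the general idea of finding the number of clusters automatically; it uses k-means with silhouette scoring from scikit-learn rather than the paper's (unspecified) method, and the feature matrix is a synthetic placeholder, not bearing data.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_faults(X, k_max=10, seed=0):
    # Try k = 2..k_max and keep the partition with the best silhouette score.
    best_k, best_score, best_labels = None, -1.0, None
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        score = silhouette_score(X, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels

# Three synthetic "fault type" clusters in a 4-dimensional feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 4)) for c in (0.0, 2.0, 4.0)])
k, labels = cluster_faults(X)
print(k)  # expected: 3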
NASA Astrophysics Data System (ADS)
Ratzov, G.; Cattaneo, A.; Babonneau, N.; Yelles, K.; Bracene, R.; Deverchere, J.
2012-12-01
It is commonly assumed that stress buildup along a given fault is proportional to the time elapsed since the previous earthquake. Although the resulting 'seismic gap' hypothesis works well for moderate-magnitude earthquakes (Mw 4-5), large events (Mw>6) are hardly predictable and show great variation in recurrence intervals. Models based on stress transfer and interactions between faults argue that an earthquake may promote or delay the occurrence of subsequent earthquakes on adjacent faults by raising or lowering the level of static stress. The Algerian margin is a Cenozoic passive margin presently inverted within the slow convergence between the Africa and Eurasia plates (~3-6 mm/yr). The western margin experienced two large earthquakes, in 1954 (Orléansville, M 6.7) and 1980 (El Asnam, M 7.3), supporting an interaction between the two faults. To obtain meaningful statistics of large-earthquake recurrence intervals over numerous seismic cycles, we conducted a submarine paleoseismicity investigation based on turbidite chronostratigraphy. As evidenced on the Cascadia subduction zone, synchronous turbidites accumulated over a large area and originating from independent sources are likely triggered by an earthquake. To test the method on a slowly convergent margin, we analyze turbidites from three sediment cores collected during the Maradja (2003) and Prisme (2007) cruises off the 1954-1980 source areas. We use X-ray radioscopy, XRF major-element counts, magnetic susceptibility, and grain-size distribution to accurately discriminate turbidites from hemipelagites. We date turbidites by calculating hemipelagic sedimentation rates obtained from radiocarbon ages, and interpolate the rates between turbidites. Finally, the ages of events are compared with the only paleoseismic study available on land (El Asnam fault). Fourteen possible seismic events are identified by the counting and correlation of turbidites over the last 8 ka. Most events correlate with the paleoseismic record of the El Asnam fault, but uncorrelated events suggest that other faults were active. Only the 1954 event (not the 1980 one) triggered a turbidity current, implying that the sediment buffer on the continental shelf could not be reloaded in 26 years and thus setting a minimum time resolution for our method. The new paleoseismic catalog shows a recurrence interval of 300-700 years for most events, but also a long interval of >1200 years without any major earthquake. This result suggests that the level of static stress may have dropped drastically as a result of three main events occurring within the 800 years prior to the quiescence period.
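As a concrete illustration of the dating step described above (interpolating hemipelagic sedimentation rates between radiocarbon tie points), here is a minimal sketch with invented depths and ages; the actual age model is more involved.

import numpy as np

# Radiocarbon tie points: (hemipelagic depth in cm, calibrated age in yr BP).
# Depths are measured with turbidite beds stripped out, so only hemipelagic
# accumulation counts toward the sedimentation rate.
tie_depths = np.array([10.0, 120.0, 260.0])
tie_ages = np.array([500.0, 2900.0, 6800.0])

# Turbidite (event) horizons expressed as hemipelagic depth.
event_depths = np.array([35.0, 80.0, 150.0, 220.0])

# Linear interpolation of age between tie points is equivalent to assuming a
# piecewise-constant hemipelagic sedimentation rate between dated horizons.
event_ages = np.interp(event_depths, tie_depths, tie_ages)
print(event_ages)  # estimated event ages in yr BP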
Fault Analysis on Bevel Gear Teeth Surface Damage of Aeroengine
NASA Astrophysics Data System (ADS)
Cheng, Li; Chen, Lishun; Li, Silu; Liang, Tao
2017-12-01
To investigate tooth-surface damage on an aero-engine bevel gear, a fault tree of the damage was drawn from the logical relations among possible causes, and the candidate causes were analyzed. Scanning electron microscopy, energy-spectrum analysis, metallographic examination, hardness measurement, and other analysis methods were used to investigate the spalled gear tooth. The results showed that the material composition, metallographic structure, micro-hardness, and carburization depth of the faulty bevel gear met technical requirements. Contact-fatigue spalling caused the tooth-surface damage, and the main cause was the small interference fit between the accessory-gearbox mounting hole and the driving bevel gear bearing seat. Improvement measures were proposed and subsequently verified to be effective.
NASA Astrophysics Data System (ADS)
Rudzinski, Lukasz; Lizurek, Grzegorz; Plesiewicz, Beata
2014-05-01
On 19 March 2013, a tremor shook the surface of the town of Polkowice, where the "Rudna" mine is located. This ML=4.2 event was the third most powerful seismic event recorded in the Legnica-Głogów Copper District (LGCD). Residents of the area reported that the tremors felt stronger and lasted longer than any others felt in the last several years. The event was studied using two different networks: the underground network of the "Rudna" mine and a local surface network run by IGF PAS (the LUMINEOS network). The first is composed of 32 vertical seismometers at the mining level, except for 5 sensors placed in elevator shafts; seismometer depths vary from 300 down to 1000 meters below the surface. The seismometers in this network are vertical short-period Willmore MkII and MkIII sensors with a frequency band from 1 Hz to 100 Hz. At the beginning of 2013, the local surface network of the Institute of Geophysics, Polish Academy of Sciences (IGF PAS), with the acronym LUMINEOS, was installed under an agreement with KGHM SA and "Rudna" mine officials. At the time of the 19 March 2013 event, this network was composed of 4 short-period one-second triaxial seismometers (LE-3D/1s, manufactured by Lennartz electronic). Analyses of spectral parameters of the records from the in-mine seismic system and the surface LUMINEOS network, along with the record of broadband station KSP, were carried out. The location of the event was close to the Rudna Główna fault zone, and the nodal plane orientations determined with two different approaches were almost parallel to the strike of the fault. Mechanism solutions were also obtained in the form of full moment tensor inversion from P-wave amplitude pulses of the underground records and waveform inversion of the surface network seismograms. The final results of the seismic analysis, along with a macroseismic survey and the effects observed in the destroyed part of the mining panel, indicate that the mechanism of the event was thrust faulting on an inactive tectonic fault. The results confirm that fault zones are areas of higher risk, even in the case of carefully conducted mining operations.
Episodic slow slip events in the Japan subduction zone before the 2011 Tohoku-Oki earthquake
NASA Astrophysics Data System (ADS)
Ito, Y.; Hino, R.; Kido, M.; Fujimoto, H.; Osada, Y.; Inazu, D.; Ohta, Y.; Iinuma, T.; Ohzono, M.; Mishina, M.; Miura, S.; Suzuki, K.; Tsuji, T.; Ashi, J.
2012-12-01
We describe two transient slow slip events that occurred before the 2011 Tohoku-Oki earthquake. The first transient crustal deformation, which occurred over a period of a week in November 2008, was recorded simultaneously using ocean-bottom pressure gauges and an on-shore volumetric strainmeter; this deformation has been interpreted as being an M6.8 episodic slow slip event. The second had a duration exceeding 1 month and was observed in February 2011, just before the 2011 Tohoku-Oki earthquake; the moment magnitude of this event reached 7.0. The two events preceded interplate earthquakes of magnitudes M6.1 (December 2008) and M7.3 (March 9, 2011), respectively; the latter is the largest foreshock of the 2011 Tohoku-Oki earthquake. Our findings indicate that these slow slip events induced increases in shear stress, which in turn triggered the interplate earthquakes. The slow slip event source area on the fault is also located within the downdip portion of the huge-coseismic-slip area of the 2011 earthquake. This demonstrates episodic slow slip and seismic behavior occurring on the same portions of the megathrust fault, suggesting that faults that undergo slip in slow slip events can also rupture seismically.
NASA Astrophysics Data System (ADS)
Reitman, N. G.; Briggs, R.; Gold, R. D.; DuRoss, C. B.
2015-12-01
Post-earthquake, field-based assessments of surface displacement commonly underestimate offsets observed with remote sensing techniques (e.g., InSAR, image cross-correlation) because they fail to capture the total deformation field. Modern earthquakes are readily characterized by comparing pre- and post-event remote sensing data, but historical earthquakes often lack pre-event data. To overcome this challenge, we use historical aerial photographs to derive pre-event digital surface models (DSMs), which we compare to modern, post-event DSMs. Our case study focuses on resolving on- and off-fault deformation along the Lost River fault that accompanied the 1983 M6.9 Borah Peak, Idaho, normal-faulting earthquake. We use 343 aerial images from 1952-1966 and vertical control points selected from National Geodetic Survey benchmarks measured prior to 1983 to construct a pre-event point cloud (average ~ 0.25 pts/m2) and corresponding DSM. The post-event point cloud (average ~ 1 pt/m2) and corresponding DSM are derived from WorldView 1 and 2 scenes processed with NASA's Ames Stereo Pipeline. The point clouds and DSMs are coregistered using vertical control points, an iterative closest point algorithm, and a DSM coregistration algorithm. Preliminary results of differencing the coregistered DSMs reveal a signal spanning the surface rupture that is consistent with tectonic displacement. Ongoing work is focused on quantifying the significance of this signal and error analysis. We expect this technique to yield a more complete understanding of on- and off-fault deformation patterns associated with the Borah Peak earthquake along the Lost River fault and to help improve assessments of surface deformation for other historical ruptures.
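As a hedged, minimal sketch of the final differencing step described above (the photogrammetry and coregistration are the hard parts and are omitted here), the following subtracts a pre-event DSM grid from a post-event grid and summarizes vertical change on either side of the fault trace; the grids and fault position are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(4)
ny, nx = 200, 200
pre = rng.normal(scale=0.2, size=(ny, nx))         # pre-event DSM (m), already coregistered
post = pre + rng.normal(scale=0.2, size=(ny, nx))  # post-event DSM (m)
post[:, 100:] -= 1.5                               # synthetic 1.5 m down-to-the-east throw

diff = post - pre                                  # vertical-change map
west, east = diff[:, :100], diff[:, 100:]
print(np.median(east) - np.median(west))           # apparent throw across the trace, ~ -1.5 m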
Aftershocks of the 2014 South Napa, California, Earthquake: Complex faulting on secondary faults
Hardebeck, Jeanne L.; Shelly, David R.
2016-01-01
We investigate the aftershock sequence of the 2014 MW6.0 South Napa, California, earthquake. Low-magnitude aftershocks missing from the network catalog are detected by applying a matched-filter approach to continuous seismic data, with the catalog earthquakes serving as the waveform templates. We measure precise differential arrival times between events, which we use for double-difference event relocation in a 3D seismic velocity model. Most aftershocks are deeper than the mainshock slip, and most occur west of the mapped surface rupture. While the mainshock coseismic and postseismic slip appears to have occurred on the near-vertical, strike-slip West Napa fault, many of the aftershocks occur in a complex zone of secondary faulting. Earthquake locations in the main aftershock zone, near the mainshock hypocenter, delineate multiple dipping secondary faults. Composite focal mechanisms indicate strike-slip and oblique-reverse faulting on the secondary features. The secondary faults were moved towards failure by Coulomb stress changes from the mainshock slip. Clusters of aftershocks north and south of the main aftershock zone exhibit vertical strike-slip faulting more consistent with the West Napa Fault. The northern aftershocks correspond to the area of largest mainshock coseismic slip, while the main aftershock zone is adjacent to the fault area that has primarily slipped postseismically. Unlike most creeping faults, the zone of postseismic slip does not appear to contain embedded stick-slip patches that would have produced on-fault aftershocks. The lack of stick-slip patches along this portion of the fault may contribute to the low productivity of the South Napa aftershock sequence.
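As a hedged illustration of the matched-filter step described above (real implementations typically use multi-channel data and optimized cross-correlation libraries such as those in ObsPy; this sketch is single-channel and synthetic):

import numpy as np

def matched_filter(data, template, threshold=0.8):
    # Slide the template along the data; report windows whose normalized
    # cross-correlation exceeds the threshold.
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    detections = []
    for i in range(len(data) - n + 1):
        w = data[i:i + n]
        s = w.std()
        if s == 0:
            continue
        cc = np.sum(t * (w - w.mean())) / s   # correlation coefficient in [-1, 1]
        if cc > threshold:
            detections.append((i, cc))
    return detections

rng = np.random.default_rng(1)
template = np.sin(2 * np.pi * np.arange(100) / 20) * np.hanning(100)
data = rng.normal(scale=0.2, size=2000)
data[700:800] += template                  # a low-amplitude event buried in noise
print(matched_filter(data, template))      # detections expected near sample 700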
NASA Astrophysics Data System (ADS)
Gold, P. O.; Cowgill, E.; Kreylos, O.
2010-12-01
Measurements derived from high-resolution terrestrial LiDAR (t-LiDAR) surveys of landforms displaced during the 16 December 1954 Mw 6.8 Dixie Valley earthquake in central Nevada confirm the absence of historical strike slip north of latitude 39.5°N. This conclusion has implications for the effect of stress changes on the spatial and temporal evolution of the central Nevada seismic belt. The Dixie Valley fault is a low-angle, east-dipping, range-bounding normal fault located in the central-northern reach of the central Nevada seismic belt (CNSB), a ~N-S trending group of historical ruptures that may represent a migration of northwest-trending right-lateral Pacific-North American plate motion into central Nevada. Migration of a component of right slip eastward from the eastern California shear zone/Walker Lane to the CNSB is supported by the pronounced right-lateral motion observed in most of the CNSB earthquakes south of the Dixie Valley fault and by GPS data spanning the CNSB. Such eastward migration and northward propagation of right slip into the CNSB predict a component of lateral slip on the Dixie Valley fault. However, landform offsets have previously been reported to indicate purely normal slip in the 1954 Dixie Valley event. To check the direction of motion during the Dixie Valley earthquake using higher-precision methods than previously employed, we collected t-LiDAR data to quantify displacements of two well-preserved debris flow chutes separated along strike by ~10 km and located where the local fault strike diverges by >10° from the regional strike. Our highest-confidence measurements yield a horizontal slip vector azimuth of ~107° at both sites, orthogonal to the average regional fault strike of ~17°, and we find no compelling evidence for regional lateral motion in our other measurements. This result indicates that continued northward propagation of right-lateral slip from its diffuse termination at the northern end of the 1954 Fairview Peak event (4 minutes before the Dixie Valley event) and the Rainbow Mountain-Stillwater events six months earlier must be accommodated by some other mechanism. We see several options for the spatial and temporal evolution of right-slip propagation into the northern CNSB: 1) lateral motion may be accommodated to the east by faults opposite the Dixie Valley fault along the base of the Clan Alpine Range, or to the west by faults at the western base of the Stillwater Range; diffuse faults to the SW and SE of the Dixie Valley fault that also ruptured in 1954 accommodated right slip and could represent a westward and/or eastward migration of lateral motion; 2) right-lateral motion may activate an as yet unrecognized fault within Dixie Valley; or 3) the Dixie Valley fault may be reactivated with a greater component of lateral slip in response to changes in stress, a phenomenon that has been recognized on the Borrego Fault in northern Mexico between the penultimate event and the recent 4 April 2010 El Mayor-Cucapah earthquake.
NASA Astrophysics Data System (ADS)
Yang, Z.; Yehya, A.; Rice, J. R.; Yin, J.
2017-12-01
Earthquakes can be induced by human activity involving fluid injection, e.g., wastewater disposal from hydrocarbon production. The occurrence of such events is thought to be mainly due to the increase in pore pressure, which reduces the effective normal stress and hence the strength of a nearby fault. Changes in subsurface stress around suitably oriented faults at near-critical stress states may also contribute. We focus on improving the modeling and prediction of the hydro-mechanical response to fluid injection, considering full poroelastic effects and not solely changes in pore pressure in a rigid host. Thus we address the changes in porosity and permeability of the medium due to changes in local volumetric strains. Our results also focus on including the effects of fault architecture (a low-permeability fault core and higher-permeability bordering damage zones) on pressure diffusion and the fault's poroelastic response. Field studies of faults have provided a generally common description of the size of their bordering damage zones and how they evolve along the direction of propagation. Empirical laws, derived from a large number of such observations, describe their fracture density, width, permeability, etc. We use those laws and related data to construct our study cases. We show that the existence of high-permeability damage zones facilitates pore-pressure diffusion and, in some cases, results in a sharp increase in pore pressure at levels much deeper than the injection wells, because these regions act as conduits for fluid pressure changes. This eventually results in higher seismicity rates. By better understanding the mechanisms of nucleation of injection-induced seismicity, and better predicting the hydro-mechanical response of faults, we can assess methodologies and injection strategies that avoid the risk of high-magnitude seismic events. Microseismic events occurring after the start of injection are very important indications of when injection should be stopped and of how to avoid major events. Our work contributes to the assessment and mitigation of seismic hazard and risk, and our long-term target question is: how to not make an earthquake?
NASA Astrophysics Data System (ADS)
Higgins, N.; Lapusta, N.
2014-12-01
Many large earthquakes on natural faults are preceded by smaller events, often termed foreshocks, that occur close in time and space to the larger event that follows. Understanding the origin of such events is important for understanding earthquake physics. Unique laboratory experiments of earthquake nucleation in a meter-scale slab of granite (McLaskey and Kilgore, 2013; McLaskey et al., 2014) demonstrate that sample-scale nucleation processes are also accompanied by much smaller seismic events. One potential explanation for these foreshocks is that they occur on small asperities - or bumps - on the fault interface, which may also be the locations of smaller critical nucleation size. We explore this possibility through 3D numerical simulations of a heterogeneous 2D fault embedded in a homogeneous elastic half-space, in an attempt to qualitatively reproduce the laboratory observations of foreshocks. In our model, the simulated fault interface is governed by rate-and-state friction with laboratory-relevant frictional properties, fault loading, and fault size. To create favorable locations for foreshocks, the fault surface heterogeneity is represented as patches of increased normal stress, decreased characteristic slip distance L, or both. Our simulation results indicate that one can create a rate-and-state model of the experimental observations. Models with a combination of higher normal stress and lower L at the patches are closest to matching the laboratory observations of foreshocks in moment magnitude, source size, and stress drop. In particular, we find that, when the local compression is increased, foreshocks can occur on patches that are smaller than theoretical critical nucleation size estimates. The additional inclusion of lower L for these patches helps to keep stress drops within the range observed in experiments, and is compatible with the asperity model of foreshock sources, since one would expect more compressed spots to be smoother (and hence have lower L). In this heterogeneous rate-and-state fault model, the foreshocks interact with each other and with the overall nucleation process through their postseismic slip. The interplay amongst foreshocks, and between foreshocks and the larger-scale nucleation process, is a topic of our future work.
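For reference, a common Dieterich-Ruina rate-and-state formulation of the kind invoked above (the aging form of the state-evolution law; the study's exact regularization is not stated here) is

\tau = \sigma \left[ \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{L} \right], \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{L},

where V is slip rate, \theta the state variable, and L the characteristic slip distance. Because the critical nucleation size scales roughly as h^{*} \propto \mu' L / \left[(b-a)\,\sigma\right], raising the local normal stress \sigma or lowering L at a patch shrinks its nucleation size, consistent with the abstract's finding that more-compressed, smoother patches can host foreshocks.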
Fumal, T.E.; Rymer, M.J.; Seitz, G.G.
2002-01-01
Paleoseismic investigations across the Mission Creek strand of the San Andreas fault at Thousand Palms Oasis indicate that four and probably five surface-rupturing earthquakes occurred during the past 1200 years. Calendar age estimates for these earthquakes are based on a chronological model that incorporates radiocarbon dates from 18 in situ burn layers and stratigraphic ordering constraints. These five earthquakes occurred in about A.D. 825 (770-890) (mean, 95% range), A.D. 982 (840-1150), A.D. 1231 (1170-1290), A.D. 1502 (1450-1555), and after a date in the range of A.D. 1520-1680. The most recent surface-rupturing earthquake at Thousand Palms is likely the same as the A.D. 1676 ± 35 event at Indio reported by Sieh and Williams (1990). Each of the past five earthquakes recorded on the San Andreas fault in the Coachella Valley strongly overlaps in time with an event at the Wrightwood paleoseismic site, about 120 km northwest of Thousand Palms Oasis. Correlation of events between these two sites suggests that at least the southernmost 200 km of the San Andreas fault zone may have ruptured in each earthquake. The average repeat time for surface-rupturing earthquakes on the San Andreas fault in the Coachella Valley is 215 ± 25 years, whereas the elapsed time since the most recent event is 326 ± 35 years. This suggests the southernmost San Andreas fault zone likely is very near failure. The Thousand Palms Oasis site is underlain by a series of six channels cut and filled since about A.D. 800 that cross the fault at high angles. A channel margin about 900 years old is offset right-laterally 2.0 ± 0.5 m, indicating a slip rate of 4 ± 2 mm/yr. This slip rate is low relative to geodetic and other geologic slip rate estimates (26 ± 2 mm/yr and about 23-35 mm/yr, respectively) on the southernmost San Andreas fault zone, possibly because (1) the site is located in a small step-over in the fault trace and so the rate may not be representative of the Mission Creek fault, (2) slip is partitioned northward from the San Andreas fault into the eastern California shear zone, and/or (3) slip is partitioned onto the Banning strand of the San Andreas fault zone.
Goal-Function Tree Modeling for Systems Engineering and Fault Management
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Breckenridge, Jonathan T.
2013-01-01
This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides the means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT, and its ability to represent both nominal and off-nominal goals, enable it to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and the definition of system failure scenarios.
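As a toy illustration only (not NASA's GFT formalism), the state-variable idea can be made concrete as a tree whose nodes name the state variables they constrain, with a simple check that a decomposition leaves no parent state variable unaddressed:

from dataclasses import dataclass, field

@dataclass
class GFTNode:
    name: str
    state_vars: set                      # state variables this goal/function constrains
    children: list = field(default_factory=list)

    def uncovered(self):
        # State variables of this node not constrained by any child.
        if not self.children:
            return set()
        covered = set().union(*(c.state_vars for c in self.children))
        return self.state_vars - covered

ascent = GFTNode("Achieve orbit", {"altitude", "velocity", "attitude"}, [
    GFTNode("Control thrust", {"velocity", "altitude"}),
    GFTNode("Control vehicle attitude", {"attitude"}),
])
print(ascent.uncovered())                # set() -> the decomposition is complete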
Risk management of PPP project in the preparation stage based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Xing, Yuanzhi; Guan, Qiuling
2017-03-01
Risk management of PPP (Public Private Partnership) projects can improve the level of risk control between government departments and private investors, supporting more beneficial decisions, reducing investment losses, and achieving mutual benefit. This paper therefore takes the risks of the PPP project preparation stage as its research object, identifying and confirming four types of risk. Fault tree analysis (FTA) is used to evaluate the risk factors belonging to the different parts and to quantify the degree of risk impact on the basis of the risk identification. In addition, the importance ordering of the risk factors for the PPP project preparation stage is determined by calculating the structural importance of each unit. The results show that the accuracy of government decision-making, the rationality of private investors' fund allocation, and the instability of market returns are the main factors generating shared risk in the project.
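The structural-importance calculation mentioned above has a standard reading (Birnbaum's structural measure): for each basic event, count the fraction of states of the remaining events in which toggling that event toggles the top event. A minimal sketch with an invented toy tree, not the paper's PPP risk tree:

from itertools import product

EVENTS = ["gov_decision", "funds_alloc", "market_return", "legal_change"]

def top(s):
    # Toy top event: (gov_decision AND funds_alloc) OR market_return
    #                OR (legal_change AND funds_alloc)
    return (s["gov_decision"] and s["funds_alloc"]) or s["market_return"] \
           or (s["legal_change"] and s["funds_alloc"])

def structural_importance(event):
    others = [e for e in EVENTS if e != event]
    critical = 0
    for bits in product([False, True], repeat=len(others)):
        s = dict(zip(others, bits))
        s[event] = False
        low = top(s)
        s[event] = True
        if top(s) != low:                # the event is critical in this state
            critical += 1
    return critical / 2 ** len(others)

for e in EVENTS:
    print(e, structural_importance(e))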
Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1994-01-01
The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
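On the probabilistic side, the core computation in such an analysis is the top-event probability of a fault tree with independent basic events. A hedged sketch by exhaustive enumeration; the gate layout and failure probabilities are invented, not the IME model:

from itertools import product

P = {"turbopump": 1e-3, "injector": 5e-4, "controller": 2e-4,
     "valve_a": 1e-3, "valve_b": 1e-3}

def top(s):
    # Engine failure = turbopump OR injector OR controller OR (valve_a AND valve_b)
    return s["turbopump"] or s["injector"] or s["controller"] \
           or (s["valve_a"] and s["valve_b"])

names = list(P)
p_top = 0.0
for bits in product([False, True], repeat=len(names)):
    s = dict(zip(names, bits))
    if top(s):
        pr = 1.0
        for n in names:
            pr *= P[n] if s[n] else 1.0 - P[n]
        p_top += pr
print(p_top)  # ~1.7e-3 for these made-up numbers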
Enterprise architecture availability analysis using fault trees and stakeholder interviews
NASA Astrophysics Data System (ADS)
Närman, Per; Franke, Ulrik; König, Johan; Buschle, Markus; Ekstedt, Mathias
2014-01-01
The availability of enterprise information systems is a key concern for many organisations. This article describes a method for availability analysis based on Fault Tree Analysis and constructs from the ArchiMate enterprise architecture (EA) language. To test the quality of the method, several case studies within the banking and electrical utility industries were performed. Input data were collected through stakeholder interviews. The results from the case studies were compared with availability figures derived from log data to determine the accuracy of the method's predictions. In the five cases where accurate log data were available, the yearly downtime estimates were within eight hours of the actual downtimes. The cost of performing the analysis was low; no case study required more than 20 man-hours of work, making the method ideal for practitioners with an interest in obtaining rapid availability estimates of their enterprise information systems.
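The availability arithmetic behind such a method reduces to composing component availabilities through the tree and converting the result to expected downtime. A hedged sketch with an invented architecture, not one of the article's case studies:

HOURS_PER_YEAR = 8766.0                   # average year, including leap days

def avail_series(*a):
    # All components required: system failure is an OR of component failures.
    p = 1.0
    for x in a:
        p *= x
    return p

def avail_parallel(*a):
    # Redundant components: system failure is an AND of component failures.
    q = 1.0
    for x in a:
        q *= 1.0 - x
    return 1.0 - q

app = avail_series(0.999, 0.998)          # application server and database
net = avail_parallel(0.995, 0.995)        # redundant network paths
total = avail_series(app, net)
print((1.0 - total) * HOURS_PER_YEAR)     # expected yearly downtime, ~26 hours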
Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A
2001-10-12
As it is conventionally done, strategies for incorporating accident-prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps, risk assessment and hazard reduction (or safety) measures, are not linked interactively in the existing methodologies. This prevents a quantitative assessment of the impact of safety measures on risk control. We have attempted to develop a methodology in which the risk assessment steps are interactively linked with the implementation of safety measures. The resultant system indicates the extent of risk reduction achieved by each successive safety measure. Based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA), it also indicates whether a given unit can ever be made 'safe'. The application of the methodology is illustrated with a case study.
NASA Astrophysics Data System (ADS)
Reches, Ze'ev; Dieterich, James H.
1983-05-01
The dependence of the number of sets of faults and their orientation on the intermediate strain axis is investigated through polyaxial tests, reported here, and theoretical analysis, reported in an accompanying paper. In the experiments, cubic samples of Berea sandstone, Sierra-White and Westerly granites, and Candoro and Solnhofen limestones were loaded on their three pairs of faces by three independent, mutually perpendicular presses at room temperature. Two of the presses were servo-controlled and applied constant displacement rates throughout the experiment. Most samples display three or four sets of faults in orthorhombic symmetry. These faults form in several yielding events that follow a stage of elastic deformation. In many experiments, the maximum and the intermediate compressive stresses interchange orientations during the yielding events, where the corresponding strains are constant. The final stage of most experiments is characterized by slip along the faults.
On-line early fault detection and diagnosis of municipal solid waste incinerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao Jinsong; Huang Jianchao; Sun Wei
A fault detection and diagnosis framework is proposed in this paper for early fault detection and diagnosis (FDD) of municipal solid waste incinerators (MSWIs) in order to improve the safety and continuity of production. In this framework, principal component analysis (PCA), one of the multivariate statistical technologies, is used for detecting abnormal events, while rule-based reasoning performs the fault diagnosis and consequence prediction, and also generates recommendations for fault mitigation once an abnormal event is detected. A software package, SWIFT, is developed based on the proposed framework and has been applied in an actual industrial MSWI. The application shows that automated real-time abnormal situation management (ASM) of the MSWI can be achieved by using SWIFT, with an industrially acceptable low rate of wrong diagnosis, resulting in improved process continuity and environmental performance of the MSWI.
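A hedged sketch of the PCA detection step described above: fit a PCA model on normal-operation data and flag new samples whose Hotelling T² or squared prediction error (SPE/Q) statistic exceeds an empirical control limit. All data, dimensions, and limits here are synthetic placeholders, not SWIFT's internals.

import numpy as np

def fit_pca(X_normal, n_comp=3):
    # Standardize, then take principal directions of normal-operation data.
    mu, sd = X_normal.mean(axis=0), X_normal.std(axis=0)
    Z = (X_normal - mu) / sd
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_comp].T                          # loading matrix
    lam = (S[:n_comp] ** 2) / (len(Z) - 1)     # retained component variances
    return mu, sd, P, lam

def statistics(x, mu, sd, P, lam):
    z = (x - mu) / sd
    t = P.T @ z                                # scores
    t2 = float(np.sum(t * t / lam))            # Hotelling T^2
    resid = z - P @ t
    spe = float(resid @ resid)                 # squared prediction error (Q)
    return t2, spe

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))                  # stand-in "normal operation" history
model = fit_pca(X)
t2s, spes = zip(*(statistics(x, *model) for x in X))
t2_lim, spe_lim = np.percentile(t2s, 99), np.percentile(spes, 99)
x_fault = X[0].copy()
x_fault[0] += 6.0                              # injected sensor fault
print(statistics(x_fault, *model), (t2_lim, spe_lim))  # fault should breach a limit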
NASA Astrophysics Data System (ADS)
Biasi, G. P.; Clark, K.; Berryman, K. R.; Cochran, U. A.; Prior, C.
2010-12-01
The Hokuri Creek paleoseismic site on the Alpine Fault in south Westland, New Zealand, has yielded a remarkable history of fault activity spanning the past ~7000 years. Evidence for earthquake occurrence and timing has been developed primarily from natural exposures created after a geologically major incision event a few hundred years ago. Prior to this event, the elevation of the spillway of Hokuri Creek into its previous drainage was controlled by NE translation of a shutter ridge during earthquakes. Each event raised the base level for sediment accumulation upstream by decimetres to perhaps a metre. Each increase in base level is associated with a period of accumulation principally of clean fine silts and rock flour. With infilling and time, the wetlands re-establish and sedimentation transitions to a slower and more organic-rich phase (Clark et al., this meeting). At least 18 such cycles have been identified at the site. Carbonaceous material is abundant in almost all layers. Much of the dating is done on macrofossils - individual beech tree leaves, reeds, and similar fragile features. Reworking is considered unlikely owing to the fragility of the samples. All dates were developed by the Rafter Radiocarbon Laboratory of the National Isotope Centre at GNS. δ13C was measured and used to correct for fractionation. Dating earthquakes at the Hokuri Creek site presents some special challenges. Individual stratigraphic sections around the site expose different time intervals. The Main Section series provides the most complete single section, with over 5000 years represented. Nearby auxiliary exposures cover nearly 1500 years more. Date series from individual exposures tend to be internally very consistent with stratigraphic ordering, but by virtue of their spatial separation, correlations between sections are more difficult. We find, however, that the distinctive layering and the typical 2-4 centuries between primary silt layers provide a way to cross-correlate sections at the site. Within a series of dates from a section, ordering with the intrinsic precision of the dates indicates an uncertainty at event horizons on the order of 50 years, while the transitions from peat to silt indicating an earthquake are separated by several times this amount. The effect is to create a stair-stepping date sequence that often allows us to link sections and improve dating resolution in both. The combined section provides clear evidence for at least 18 earthquake-induced cycles, implying an event recurrence of about 390 years as a simple average. Internal evidence and close examination of the date sequences provide preliminary indications that as many as 22 earthquakes could be represented at Hokuri Creek, corresponding to a recurrence interval of ~320 years. Both sequences indicate a middle interval, from 3800 to 1000 BC, in which recurrence intervals are resolvably longer than average. Variability in recurrence is relatively small: few intervals are even >1.5x the average. This indicates that large earthquakes on the Alpine Fault of the South Island, New Zealand, are best fit by a time-predictable model.
Tidal triggering of earthquakes suggests poroelastic behavior on the San Andreas Fault
Delorey, Andrew A.; van der Elst, Nicholas J.; Johnson, Paul Allan
2016-12-28
Tidal triggering of earthquakes is hypothesized to provide quantitative information regarding the fault's stress state and poroelastic properties, and may be significant for our understanding of seismic hazard. To date, studies of regional or global earthquake catalogs have had only modest success in identifying tidal triggering. We posit that the smallest events that may provide additional evidence of triggering go unidentified, and we therefore developed a technique to improve the identification of very small magnitude events. We identify events by applying a method known as inter-station seismic coherence, in which we prioritize detection and discrimination over characterization. Here we show tidal triggering of earthquakes on the San Andreas Fault. We find that the complex interaction of semi-diurnal and fortnightly tidal periods exposes both stress threshold and critical state behavior. Lastly, our findings reveal earthquake nucleation processes and pore pressure conditions – properties of faults that are difficult to measure, yet extremely important for characterizing earthquake physics and seismic hazards.
Wang, Chao Saul; Fu, Zhong-Chuan; Chen, Hong-Song; Wang, Dong-Sheng
2014-01-01
As semiconductor technology scales into the nanometer regime, intermittent faults have become an increasing threat. This paper focuses on the effects of intermittent faults on NET versus REG on the one hand, and the implications for dependability strategy on the other. First, the vulnerability characteristics of representative units in OpenSPARC T2 are revealed, and in particular the highly sensitive modules are identified. Second, an architecture-level dependability enhancement strategy is proposed, showing that events such as core/strand running status and core-memory interface events can serve as detectable symptoms; a simple watchdog can be deployed to detect application running status (the IEXE event). The SDC (silent data corruption) rate is then evaluated, demonstrating the strategy's potential. Third and last, the effects of the traditional protection schemes in the target CMT against intermittent faults are quantitatively studied in terms of the contribution of each trap type, demonstrating the necessity of taking this factor into account in the strategy.
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; LaBel, Kenneth; Kim, Hak
2014-01-01
An informative session on SRAM FPGA basics. It presents a framework for fault injection techniques applied to Xilinx Field Programmable Gate Arrays (FPGAs), introduces an overlooked time component which illustrates that fault injection is impractical for most real designs as a stand-alone characterization tool, and demonstrates procedures that benefit from fault injection error analysis.
Moran, Michael J.; Wilson, Jon W.; Beard, L. Sue
2015-11-03
Several major faults, including the Salt Cedar Fault and the Palm Tree Fault, play an important role in the movement of groundwater. Groundwater may move along these faults and discharge where faults intersect volcanic breccias or fractured rock. Vertical movement of groundwater along faults is suggested as a mechanism for the introduction of the heat energy present in groundwater from many of the springs. Groundwater altitudes in the study area indicate a potential for flow from Eldorado Valley to Black Canyon, although current interpretations of the geology of this area do not favor such flow. If groundwater from Eldorado Valley discharges at springs in Black Canyon, then the development of groundwater resources in Eldorado Valley could result in a decrease in discharge from the springs. Geology and structure indicate that it is not likely that groundwater can move between Detrital Valley and Black Canyon. Thus, the development of groundwater resources in Detrital Valley may not result in a decrease in discharge from springs in Black Canyon.
Risk Acceptance Personality Paradigm: How We View What We Don't Know We Don't Know
NASA Technical Reports Server (NTRS)
Massie, Michael J.; Morris, A. Terry
2011-01-01
The purpose of integrated hazard analyses, probabilistic risk assessments, failure modes and effects analyses, fault trees and many other similar tools is to give managers of a program some idea of the risks associated with their program. All risk tools establish a set of undesired events and then try to evaluate the risk to the program by assessing the severity of the undesired event and the likelihood of that event occurring. Some tools provide qualitative results, some provide quantitative results and some do both. However, in the end the program manager and his/her team must decide which risks are acceptable and which are not. Even with a wide array of analysis tools available, risk acceptance is often a controversial and difficult decision making process. And yet, today's space exploration programs are moving toward more risk based design approaches. Thus, risk identification and good risk assessment is becoming even more vital to the engineering development process. This paper explores how known and unknown information influences risk-based decisions by looking at how the various parts of our personalities are affected by what they know and what they don't know. This paper then offers some criteria for consideration when making risk-based decisions.
NASA Astrophysics Data System (ADS)
Rosalia, Shindy; Widiyantoro, Sri; Nugraha, Andri Dian; Ash Shiddiqi, Hasbi; Supendi, Pepen; Wandono
2017-04-01
West Java, part of the Sunda Arc, has relatively high seismicity due to subduction activity and faulting. The first step of a tomographic study to infer the geometry of the structure beneath West Java is precise earthquake hypocenter determination. In this study, we used earthquake waveform data from the regional Meteorological, Climatological, and Geophysical Agency (BMKG) network, spanning stations from South Sumatra to central Java. We repicked P and S arrival times from about 800 events in the period from April 2009 to December 2015 and selected events with an azimuthal gap < 210° and more than 8 phase picks. The non-linear method employed in this study used the oct-tree sampling algorithm from the NonLinLoc program to determine the earthquake hypocenters. The resulting hypocenter locations show better clustering of earthquakes that correlates well with the geological structure in the study region. We also compared our results with the BMKG catalog and found that the average hypocenter location difference is about 12 km in the latitude direction and 9.5 km in the longitude direction, while the average focal depth difference is about 19.5 km. In future studies, we will conduct tomographic imaging to invert for the 3-D seismic velocity structure beneath the western part of Java.
Ruleman, Chester A.; Larsen, Mort; Stickney, Michael C.
2014-01-01
The catastrophic Hebgen Lake earthquake of 18 August 1959 (MW 7.3) led many geoscientists to develop new methods to better understand active tectonics in extensional tectonic regimes that address seismic hazards. The Madison Range fault system and adjacent Hebgen Lake–Red Canyon fault system provide an intermountain active tectonic analog for regional analyses of extensional crustal deformation. The Madison Range fault system comprises fault zones (~100 km in length) that have multiple salients and embayments marked by preexisting structures exposed in the footwall. Quaternary tectonic activity rates differ along the length of the fault system, with less displacement to the north. Within the Hebgen Lake basin, the 1959 earthquake is the latest slip event in the Hebgen Lake–Red Canyon fault system and southern Madison Range fault system. Geomorphic and paleoseismic investigations indicate previous faulting events on both fault systems. Surficial geologic mapping and historic seismicity support a coseismic structural linkage between the Madison Range and Hebgen Lake–Red Canyon fault systems. On this trip, we will look at Quaternary surface ruptures that characterize prehistoric earthquake magnitudes. The one-day field trip begins and ends in Bozeman, and includes an overview of the active tectonics within the Madison Valley and Hebgen Lake basin, southwestern Montana. We will also review geologic evidence, which includes new geologic maps and geomorphic analyses that demonstrate preexisting structural controls on surface rupture patterns along the Madison Range and Hebgen Lake–Red Canyon fault systems.
Variability of recurrence interval for New Zealand surface-rupturing paleoearthquakes
NASA Astrophysics Data System (ADS)
Nicol, A.; Robinson, R., Jr.; Van Dissen, R. J.; Harvison, A.
2015-12-01
Recurrence interval (RI) for successive earthquakes on individual faults is recorded by paleoseismic datasets for surface-rupturing earthquakes, which in New Zealand have magnitudes of >Mw ~6 to 7.2 depending on the thickness of the brittle crust. The New Zealand faults examined have mean RI of ~130 to 8500 yr, with an upper bound censored by the sample duration (<30 kyr) and an inverse relationship to fault slip rate. Frequency histograms, probability density functions (PDFs), and coefficient of variation (CoV = standard deviation/arithmetic mean) values have been used to quantify RI variability for geological and simulated earthquakes on >100 New Zealand active faults. RI for individual faults can vary by more than an order of magnitude. CoV of RI for paleoearthquake data comprising 4-10 events ranges from ~0.2 to 1 with a mean of 0.6±0.2. These values are generally comparable to simulated earthquakes (>100 events per fault) and suggest that RI ranges from quasi-periodic (e.g., ~0.2-0.5) to random (e.g., ~1.0). Comparison of earthquake simulations and paleoearthquake data indicates that the mean and CoV of RI can be strongly influenced by sampling artefacts, including the magnitude of completeness, the dimensionality of spatial sampling, and the duration of the sample period. Despite these sampling issues, RI for the best of the geological data (i.e., >6 events) and the earthquake simulations are described by log-normal or Weibull distributions with long recurrence tails (~3 times the mean), and they provide a basis for quantifying real RI variability (rather than sampling artefacts). Our analysis indicates that the CoV of RI is negatively related to fault slip rate. These data are consistent with the notion that fault interaction and associated stress perturbations arising from slip on larger faults are more likely to advance or retard future slip on smaller faults than vice versa.
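The CoV statistic used above is simple to compute; as a worked example with invented event ages, not one of the New Zealand fault records:

import numpy as np

event_ages = np.array([9100, 8000, 7400, 6100, 5600, 4200, 3500, 2100, 900])  # yr BP, oldest first
ri = -np.diff(event_ages)              # recurrence intervals between successive events
cov = ri.std(ddof=1) / ri.mean()       # CoV = standard deviation / arithmetic mean
print(ri.mean(), round(cov, 2))        # ~0.2-0.5 is quasi-periodic, ~1.0 is close to random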
NASA Astrophysics Data System (ADS)
Villalobos, A.
2015-12-01
On 21 April 2007, a Mw = 6.2 earthquake hit the Aysén region, an area of low seismicity in southern Chile. This event was the main shock of a sequence of earthquakes that were felt from January 10, beginning with a small earthquake of magnitude ML < 3, through February 2008 as recurrent aftershocks. The area is characterized by the presence of the Liquiñe-Ofqui Fault System (LOFS), a neotectonic feature and the main seismotectonic element of southern Chile. In this research we use improved sub-aqueous paleoseismological techniques together with geomorphological evidence to constrain the seismogenic source of this event as crustal in origin. We establish that the Punta Cola Fault, a dextral-reverse structure that exhibits in seismic profiles a complex fault zone with a distinct positive flower geometry, is responsible for the main shock. This fault caused vertical offsets that reached the seafloor, generating fault scarps in a mass-movement deposit triggered by the same earthquake. Following this idea, a model of surface rupture is proposed for the structure. Further evidence that this crustal phenomenon is not an isolated event in time is presented by paleoseismological trench-like mappings in sub-bottom profiles.
Dynamic Triggering of Seismic Events and Their Relation to Slow Slip in Interior Alaska
NASA Astrophysics Data System (ADS)
Sims, N. E.; Holtkamp, S. G.
2017-12-01
We conduct a search for dynamically triggered events in the Minto Flats Fault Zone (MFFZ), a left-lateral strike-slip zone expressed as multiple, partially overlapping faults, in central Alaska. We focus on the MFFZ because we have observed slow slip processes (earthquake swarms and Very Low Frequency Earthquakes) and interaction between earthquake swarms and larger main-shock (MS) events in this area before. We utilize the Alaska Earthquake Center catalog to identify potential earthquake swarms and dynamically triggered foreshock and mainshock events along the fault zone. We find 30 swarms occurring in the last two decades, five of which we classify as foreshock (FS) swarms due to their close proximity in both time and space to MS events. Many of the earthquake swarms cluster around 15-20 km depth, which is near the seismic-aseismic transition along this fault zone. Additionally, we observe instances of large teleseismic events such as the M8.6 2012 Sumatra earthquake and M7.4 2012 Guatemala earthquake triggering seismic events within the MFFZ, with the Sumatra earthquake triggering a mainshock event that was preceded by an ongoing earthquake swarm and the Guatemala event triggering earthquake swarms that subsequently transition into a larger mainshock event. In both cases an earthquake swarm transitioned into a mainshock-aftershock event and activity continued for several days after the teleseismic waves had passed, lending some evidence to delayed dynamic triggering of seismic events. We hypothesize that large dynamic transient strain associated with the passage of teleseismic surface waves is triggering slow slip processes near the base of the seismogenic zone. These triggered aseismic transient events result in earthquake swarms, which sometimes lead to the nucleation of larger earthquakes. We utilize network matched filtering to build more robust catalogs of swarm earthquake families in this region to search for additional swarm-like or triggered activity in response to teleseismic surface waves, and to test dynamic triggering hypotheses.
Earthquakes and aseismic creep associated with growing fault-related folds
NASA Astrophysics Data System (ADS)
Burke, C. C.; Johnson, K. M.
2017-12-01
Blind thrust faults overlain by growing anticlinal folds pose a seismic risk to many urban centers in the world. A large body of research has focused on using fold and growth strata geometry to infer the rate of slip on the causative fault and the distribution of off-fault deformation. However, because we have had few recorded large earthquakes on blind faults underlying folds, it remains unclear how much of the folding occurs during large earthquakes or during the interseismic period accommodated by aseismic creep. Numerous kinematic and mechanical models as well as field observations demonstrate that flexural slip between sedimentary layering is an important mechanism of fault-related folding. In this study, we run boundary element models of flexural-slip fault-related folding to examine the extent to which energy is released seismically or aseismically throughout the evolution of the fold and fault. We assume a fault imbedded in viscoelastic mechanical layering under frictional contact. We assign depth-dependent frictional properties and adopt a rate-state friction formulation to simulate slip over time. We find that in many cases, a large percentage (greater than 50%) of fold growth is accomplished by aseismic creep at bedding and fault contacts. The largest earthquakes tend to occur on the fault, but a significant portion of the seismicity is distributed across bedding contacts through the fold. We are currently working to quantify these results using a large number of simulations with various fold and fault geometries. Result outputs include location, duration, and magnitude of events. As more simulations are completed, these results from different fold and fault geometries will provide insight into how much folding occurs from these slip events. Generalizations from these simulations can be compared with observations of active fault-related folds and used in the future to inform seismic hazard studies.
NASA Astrophysics Data System (ADS)
Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali
2017-07-01
The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis, but fault severity assessment is a challenging task. The wavelet transform (WT), as a multi-resolution analysis tool, is able to trade off time and frequency information in the signals and serves as a de-noising method. The continuous wavelet transform (CWT) scaling function gives different resolutions to discretely sampled signals, such as very fine resolution at lower scales but coarser resolution at higher scales; however, the computational cost increases because many signal resolutions must be produced. The discrete wavelet transform (DWT) has a lower computational cost, as its dilation function allows signals to be decomposed through a tree of low-pass and high-pass filters without further analysis of the high-frequency components. In this paper, a method for bearing fault identification is presented by combining the CWT and the DWT with envelope analysis for bearing fault diagnosis. The experimental data were sampled by Case Western Reserve University. The analysis results showed that the proposed method is effective in bearing fault detection, identifying the exact fault location, and assessing severity, especially for inner race and outer race faults.
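A hedged sketch of the DWT-plus-envelope part of such a pipeline, using PyWavelets and SciPy; the signal, sampling rate, wavelet choice, and fault frequency are synthetic stand-ins for the Case Western Reserve data:

import numpy as np
import pywt
from scipy.signal import hilbert

fs = 12000.0
t = np.arange(0, 1.0, 1.0 / fs)
bpfo = 107.0                                   # assumed outer-race fault frequency (Hz)
bursts = np.sin(2 * np.pi * 4000 * t) * (np.sin(2 * np.pi * bpfo * t) > 0.9)
x = bursts + 0.3 * np.random.default_rng(3).normal(size=t.size)

# DWT: keep only the detail level containing the resonance band (~3-6 kHz here).
coeffs = pywt.wavedec(x, "db8", level=4)       # [cA4, cD4, cD3, cD2, cD1]
keep = [np.zeros_like(c) for c in coeffs]
keep[4] = coeffs[4]                            # cD1 covers fs/4..fs/2 = 3-6 kHz
band = pywt.waverec(keep, "db8")[: t.size]

# Envelope spectrum: the fault impacts repeat at bpfo, so its line should stand out.
env = np.abs(hilbert(band))
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, 1.0 / fs)
low = freqs < 500.0
print(freqs[low][np.argmax(spec[low])])        # expect a peak near 107 Hz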
NASA Astrophysics Data System (ADS)
Clark, K.; Berryman, K. R.; Cochran, U. A.; Bartholomew, T.; Turner, G. M.
2010-12-01
At Hokuri Creek, in south Westland, New Zealand, an 18 m thickness of Holocene sediments has accumulated against the upthrown side of the Alpine Fault. Recent fluvial incision has created numerous exposures of this sedimentary sequence. At a decimetre to metre scale there are two dominant types of sedimentary units: clastic-dominated, grey silt packages, and organic-dominated, light brown peaty-silt units. These units represent repeated alternations of the paleoenvironment due to fault rupture over the past 7000 years. We have located the event horizons within the sedimentary sequence, identified evidence to support earthquake-driven paleoenvironmental change (rather than climatic variability), and developed a model of paleoenvironmental changes over a typical seismic cycle. To quantitatively characterise the sediments we use high-resolution photography, x-ray imaging, magnetic susceptibility, and total carbon analysis. To understand the depositional environment we used diatom and pollen studies. The organic-rich units have very low magnetic susceptibility and density values, with high greyscale and high total carbon values. Diatoms indicate these units represent stable wetland environments with standing water and predominantly in-situ organic material deposition. The clastic-rich units are characterised by higher magnetic susceptibility and density values, with low greyscale and total carbon. The clastic-rich units represent environments of flowing water and deep pond settings that received predominantly catchment-derived silt and sand. The event horizon is located at the upper contact of the organic-rich horizons. It marks a drastic change in hydrologic regime as fault rupture changed the stream base level and there was a synchronous influx of clastic sediment as the catchment responded to earthquake shaking. During the interseismic period the flowing-water environment gradually stabilised and returned to an organic-rich wetland. Such cycles were repeated 18 times at Hokuri Creek. Evidence that fault rupture was responsible for the cyclical paleoenvironmental changes at Hokuri Creek includes the following: the average time period for each organic- and clastic-rich couplet to be deposited approximately equals the long-term average Alpine Fault recurrence interval; the most recent events recorded at Hokuri correlate with an earthquake dated in paleoseismic trenches 100 km along strike; fault rupture is the only mechanism that can create accommodation space for 18 m of sediment to accumulate; and the sedimentary units can be traced from the outcrop to the fault trace and show tectonic deformation. The record of 18 fault rupture events at Hokuri Creek is one of the longest records of surface ruptures on a major plate boundary fault. High-resolution dating and statistical treatment of the radiocarbon data (Biasi et al., this meeting) have resulted in major advances in understanding the long-term behaviour of the Alpine Fault (Berryman et al., this meeting).
Experimental evaluation of the certification-trail method
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. A comprehensive attempt to assess experimentally the performance and overall value of the method is reported. The method is applied to algorithms for the following problems: Huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which the certification-trail approach allows significantly faster overall program execution than a basic time-redundancy approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed; in both cases, the algorithm that performs answer validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis that enable comparison between the certification-trail method and the time-redundancy approach were presented. The analysis reveals some substantial and sometimes surprising advantages for the certification-trail method. Finally, the work our group performed on the design and implementation of fault-injection testbeds for experimental analysis of the certification-trail technique is discussed. This work employs two distinct methodologies: software fault injection (modification of instruction, data, and stack segments of programs on a Sun Sparcstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed to logical zero/one values). Our results indicate the viability of the certification-trail technique, and the tools developed are believed to provide a solid base for additional exploration.
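To make the certification-trail idea concrete, here is a minimal Python sketch for the sorting case: the first execution emits the answer plus a trail (the sorting permutation), and an independent checker validates the answer in linear time instead of re-sorting. This is a generic illustration of the technique, not the paper's implementation or its fault-injection harness.

```python
# Sketch of a certification trail for sorting: the checker never sorts,
# so a fault in either execution is caught by disagreement.

def sort_with_trail(xs):
    """First execution: sort and emit the permutation as the trail."""
    trail = sorted(range(len(xs)), key=lambda i: xs[i])
    answer = [xs[i] for i in trail]
    return answer, trail

def check_with_trail(xs, answer, trail):
    """Second execution: validate the answer in O(n) using the trail."""
    n = len(xs)
    if len(answer) != n or len(trail) != n:
        return False
    seen = [False] * n
    for i in trail:
        if not (0 <= i < n) or seen[i]:
            return False           # trail is not a permutation
        seen[i] = True
    for k in range(n):
        if answer[k] != xs[trail[k]]:
            return False           # answer inconsistent with the trail
    return all(answer[k] <= answer[k + 1] for k in range(n - 1))

data = [5, 2, 9, 1]
ans, tr = sort_with_trail(data)
assert check_with_trail(data, ans, tr)
```

The asymmetry is the point: validation is cheaper than recomputation, which is where the speedup over plain time redundancy comes from.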
NASA Astrophysics Data System (ADS)
SaïD, Aymen; Baby, Patrice; Chardon, Dominique; Ouali, Jamel
2011-12-01
Structural analysis of the southern Tunisian Atlas was carried out using field observations, seismic interpretation, and cross-section balancing. It shows a mix of thick-skinned and thin-skinned tectonics, with lateral variations in regional structural geometry and amounts of shortening controlled by NW-SE oblique ramps and tear faults. It confirms the role of the Late Triassic-Early Jurassic rifting inheritance in the structuring of the active foreland fold-and-thrust belt of the southern Tunisian Atlas, in particular in the development of NW-SE oblique structures such as the Gafsa fault. The Late Triassic-Early Jurassic structural pattern is characterized by a family of first-order NW-SE-trending normal faults dipping to the east and by second-order E-W-trending normal faults limiting a complex system of grabens and horsts. These faults were inverted during two contractional tectonic events. The first event occurred between the middle Turonian and the late Maastrichtian and can be correlated with the onset of convergence between Africa and Eurasia. The second event, corresponding to the principal shortening event in the southern Atlas, started in the Serravallian-Tortonian and is still active. During the Neogene, the southern Atlas foreland fold-and-thrust belt propagated on the evaporitic décollement level infilling the Late Triassic-Early Jurassic rift. The major Eocene "Atlas event," described in hinterland domains and in eastern Tunisia, did not significantly deform the southern Tunisian Atlas, which at that time corresponded to a broad backbulge depozone.
Using Low-Frequency Earthquake Families on the San Andreas Fault as Deep Creepmeters
NASA Astrophysics Data System (ADS)
Thomas, A. M.; Beeler, N. M.; Bletery, Q.; Burgmann, R.; Shelly, D. R.
2018-01-01
The central section of the San Andreas Fault hosts tectonic tremor and low-frequency earthquakes (LFEs) similar to those in subduction zone environments. LFEs are often interpreted as persistent regions that repeatedly fail during aseismic shear of the surrounding fault, allowing them to be used as creepmeters. We test this idea by using the recurrence intervals of individual LFEs within LFE families to estimate the timing, duration, recurrence interval, slip, and slip rate associated with inferred slow slip events. We formalize the definition of a creepmeter and determine whether this definition is consistent with our observations. We find that episodic families reflect surrounding creep over the interevent time, while continuous families and the short-time-scale bursts that occur as part of the episodic families do not. However, when these families are evaluated on time scales longer than the interevent time, such events can also be used to meter slip. A straightforward interpretation of episodic families is that they define sections of the fault where slip is distinctly episodic, occurring in well-defined slow slip events that slip at 16 times the long-term rate. In contrast, the frequent short-term bursts of the continuous and short-time-scale episodic families likely do not represent individual creep events but rather are persistent asperities driven to failure by quasi-continuous creep on the surrounding fault. Finally, we find that the moment-duration scaling of our inferred creep events is inconsistent with the proposed linear moment-duration scaling; however, caution must be exercised when attempting to determine scaling with incomplete knowledge of scale.
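A hedged sketch of the creepmeter logic follows: if each LFE in a family represents a roughly constant slip increment, then cumulative slip is the event count times that increment, and long interevent gaps separate episodic bursts. The catalog times, slip-per-event, and gap threshold below are placeholders, not values from this study.

```python
# Sketch: metering fault creep from an LFE family's event times.
# All numbers are illustrative assumptions.
import numpy as np

times = np.array([0.0, 0.1, 0.3, 30.2, 30.4, 30.5, 61.0, 61.2])  # days
slip_per_event_mm = 0.5            # assumed fixed slip increment per LFE

# Gaps longer than a threshold separate episodic slow-slip bursts.
gaps = np.diff(times)
threshold = 10.0                   # days, assumed
burst_edges = np.flatnonzero(gaps > threshold) + 1
bursts = np.split(times, burst_edges)

for b in bursts:
    slip = len(b) * slip_per_event_mm
    duration = max(b[-1] - b[0], 1e-6)
    print(f"burst: {len(b)} LFEs, {slip:.1f} mm slip over {duration:.2f} d, "
          f"rate {slip / duration:.2f} mm/d")

# Evaluated over the whole catalog, the family meters the long-term rate:
total_rate = len(times) * slip_per_event_mm / (times[-1] - times[0])
print(f"long-term metered rate: {total_rate:.3f} mm/d")
```

The contrast the abstract draws falls out of this picture: burst-averaged rates far exceed the catalog-averaged rate for episodic families, while only the long window recovers the surrounding creep rate.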
Implications of a localized zone of seismic activity near the Inner Piedmont-Blue Ridge boundary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Douglas, S.; Powell, C.
1994-03-01
A small but distinct cluster of earthquake activity is located in Henderson County, NC, near the boundary of the Inner Piedmont and Blue Ridge physiographic provinces. Over twenty events have occurred within the cluster since 1776, and four had body-wave magnitudes exceeding 3.0. The average focal depth for instrumentally recorded events is 7.7 km. Epicenters plot within the Inner Piedmont, roughly 13 km from the surface expression of the Brevard fault zone. The reason for sustained earthquake activity in Henderson County is not known, but the close spatial association of the events with the Brevard fault suggests a causal relationship. The Brevard zone dips steeply to the SE, and the events could be associated with the fault at depth. An even more intriguing possibility is that the events are associated with the intersection of the Brevard zone and the décollement; this possibility is compatible with the available information concerning the depth to the décollement and the dip of the Brevard zone. An association of seismic activity with the Brevard zone at depth is supported by the presence of another small cluster of activity located in Rutherford County, NC. This cluster lies in the Inner Piedmont, roughly 30 km NE of the Henderson cluster and 16 km from the Brevard fault zone. Association of seismic activity with known faults is very rare in the eastern US and has implications for tectonic models and hazard evaluation. Additional research must be conducted to determine whether the activity is in fact associated with the Brevard zone.
Magmatic dyking and recharge in the Asal Rift, Republic of Djibouti
NASA Astrophysics Data System (ADS)
Peltzer, G.; Harrington, J.; Doubre, C.; Tomic, J.
2012-12-01
The Asal Rift, Republic of Djibouti, was the locus of a major magmatic event in 1978 and appears to have maintained sustained activity in the three decades following the event. We compare the dyking event of 1978 with the magmatic activity occurring in the rift during the 1997-2008 period. We use historical air photos and satellite images to quantify the horizontal opening on the major faults activated in 1978. These observations are combined with ground-based geodetic data acquired between 1973 and 1979 across the rift to constrain a kinematic model of the 1978 rifting event, including bordering faults and mid-crustal dykes under the Asal Rift and the Ghoubbet Gulf. The model indicates that extension was concentrated between the surface and a depth of 3 km in the crust, resulting in the opening of faults, dykes, and fissures between the two main faults, E and gamma, and that the structure located under the Asal Rift, below 3 km, deflated. These results suggest that, during the 1978 event, magmatic fluids transferred from a mid-crustal reservoir to the shallow structures, injecting dykes, filling faults and fissures, and reaching the surface in the Ardoukoba fissural eruption. Surface deformation observed by InSAR during the 1997-2008 decade reveals slow yet sustained inflation and extension across the Asal Rift combined with continuous subsidence of the rift inner floor. Modeling shows that these observations cannot be explained by visco-elastic relaxation, a process that mostly vanishes 20 to 30 years after the 1978 event. However, the InSAR observations over this decade are well explained by a kinematic model in which an inflating body is present at mid-crustal depth, approximately under the Fieale caldera, and shallow faults accommodate both horizontal opening and down-dip slip. The total geometric moment rate, or inflation rate, due to the opening of the mid-crustal structure and the deeper parts of the opening faults is 3 × 10⁶ m³/yr. Such a volume change per year corresponds to 1-2% of the total volume of magma estimated to have been mobilized during the 1978 seismo-magmatic event. The comparison of the 1978 dyking and post-dyking models of the rift suggests that the source of the magma injected during the 1978 event lies at mid-crustal depth under the Fieale caldera and appears to be recharging at a sustained rate more than 20 years after the event. Whether this rate is transient or long-term will determine the time of the next magma injection into the shallow crust; at the current rate, however, the 1978 volume would be replenished in 50-100 years.
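The 50-100 year replenishment estimate follows directly from the quoted numbers; a quick back-of-envelope check, using only the abstract's 3 × 10⁶ m³/yr rate and the 1-2% figure:

```python
# Back-of-envelope check of the recharge time quoted above.
# Inputs are the abstract's own numbers, nothing else.
rate = 3e6                         # inflation rate, m^3 per year
for frac in (0.01, 0.02):          # rate is said to be 1-2% of 1978 volume
    total_1978 = rate / frac       # implied 1978 volume: 1.5-3.0e8 m^3
    print(f"fraction {frac:.0%}: 1978 volume {total_1978:.1e} m^3, "
          f"replenished in {total_1978 / rate:.0f} yr")
```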
NASA Astrophysics Data System (ADS)
Wechsler, N.; Rockwell, T. K.; Klinger, Y.; Agnon, A.; Marco, S.
2012-12-01
Models used to forecast future seismicity make fundamental assumptions about the behavior of faults and fault systems in the long term, but in many cases this long-term behavior is inferred from short-term and perhaps non-representative observations. The question arises: how long a record is long enough to represent actual fault behavior, both in terms of earthquake recurrence and of moment release (i.e., slip rate)? We test earthquake recurrence and slip models via high-resolution three-dimensional trenching of the Beteiha (Bet-Zayda) site on the Dead Sea Transform (DST) in northern Israel. We extend the earthquake history of this simple plate-boundary fault to establish the slip rate for the past 3-4 kyr, to determine the amount of slip per event, and to study the fundamental behavior, thereby testing competing rupture models (characteristic, slip-patch, slip-loading, and Gutenberg-Richter type distributions). To this end we opened more than 900 m of trenches, mapped 8 buried channels, and dated more than 80 radiocarbon samples. By mapping buried channels offset by the DST on both sides of the fault, we obtained an estimate of displacement for each. Coupled with fault-crossing trenches to determine the event history, we construct an earthquake and slip history for the fault for the past 2 kyr. We observe evidence for a total of 9-10 surface-rupturing earthquakes with varying offset amounts; 6-7 events occurred in the 1st millennium CE, compared to just 2-3 in the 2nd millennium CE. From our observations it is clear that the fault is not behaving in a periodic fashion. A 4 kyr old buried channel yields a slip rate of 3.5-4 mm/yr, consistent with GPS rates for this segment. Yet in spite of the apparent agreement between GPS, Pleistocene-to-present slip rate, and the lifetime rate of the DST, the past 800-1000 years appear deficient in strain release. Thus, in terms of moment release, most of the fault has remained locked and is accumulating elastic strain. In contrast, the preceding 1200 years or so experienced a spate of earthquake activity, with large events along the Jordan Valley segment alone in 31 BCE and 363, 749, and 1033 CE. Thus, the return period appears to vary by a factor of two to four during the historical period in the Jordan Valley as well as at our site. The Beteiha site seems to be affected by both its southern and northern neighboring segments, and there is tentative evidence that earthquakes nucleating in the Jordan Valley (e.g., 749 CE) can rupture through the Galilee step-over to the south of Beteiha, or trigger a smaller event on the Jordan Gorge segment, in which case the historical record will tend to amalgamate any evidence for it into one large event. We offer a model of earthquake slip for this segment in which the overall slip rate remains constant, yet earthquakes of differing sizes can occur, depending on the segment from which they originate and the time since the last large event. The rate of earthquake production in this model does not produce a time-predictable pattern over a period of 2 kyr, and the slip rate varies between the 1st and 2nd millennia CE as a result of the interplay between coalescing fault segments to the north.
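As a consistency check on the quoted rates, the implied channel offset and mean recurrence can be back-calculated from the abstract's own numbers; the offsets below are inferred for illustration, not measured values from the study:

```python
# Back-calculation from the quoted figures: a 4 kyr channel offset
# consistent with 3.5-4 mm/yr, and the mean recurrence implied by
# 9-10 surface ruptures in ~2 kyr.
age_yr = 4000.0
for offset_m in (14.0, 16.0):      # inferred offsets, not measurements
    print(f"offset {offset_m:.0f} m -> {offset_m / age_yr * 1000:.1f} mm/yr")

for n_events in (9, 10):
    print(f"{n_events} events / 2000 yr -> {2000 / n_events:.0f} yr mean recurrence")
```

The factor-of-two-to-four variation in return period the abstract describes is exactly the departure of the observed interevent times from this ~200-220 yr mean.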