A diagnosis system using object-oriented fault tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common Lisp using Flavors.
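The backward-reasoning step described above can be sketched in miniature. This is a toy illustration of backward chaining over an AND/OR fault tree, not FTDS itself; the gate encoding and event names are invented:

```python
# A fault tree is nested tuples: ("AND", [children]) or ("OR", [children]);
# leaves are basic-event names (strings).

def cause_sets(node, ruled_out):
    """Return candidate sets of basic events that would cause `node`,
    excluding any set containing an event known not to have occurred."""
    if isinstance(node, str):                      # basic event (leaf)
        return [] if node in ruled_out else [{node}]
    kind, children = node
    if kind == "OR":                               # any one child suffices
        out = []
        for c in children:
            out.extend(cause_sets(c, ruled_out))
        return out
    if kind == "AND":                              # all children required
        out = [set()]
        for c in children:
            out = [s | t for s in out for t in cause_sets(c, ruled_out)]
        return out
    raise ValueError(kind)

tree = ("OR", [("AND", ["pump_A_fails", "pump_B_fails"]), "bus_short"])
# Knowing pump_B did NOT fail rules out the AND branch entirely:
print(cause_sets(tree, ruled_out={"pump_B_fails"}))   # [{'bus_short'}]
```

Real diagnosis additionally propagates temporal constraints and ranks candidates heuristically, which this sketch omits.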
Using Fault Trees to Advance Understanding of Diagnostic Errors.
Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep
2017-11-01
Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
Integrated Approach To Design And Analysis Of Systems
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1993-01-01
Object-oriented fault-tree representation unifies evaluation of reliability and diagnosis of faults. Programming/fault tree described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). Augmented fault tree object contains more information than fault tree object used in quantitative analysis of reliability. Additional information needed to diagnose faults in system represented by fault tree.
[The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].
Liu, Hongbin
2015-11-01
This paper presents the traditional fault tree analysis method and gives detailed instructions for applying it to medical equipment maintenance. Significant changes are made when the traditional method is carried over to medical equipment maintenance: the logic symbols, logic analysis, and calculations are abandoned, along with the method's complicated procedures, and only the intuitive, practical fault tree diagram is kept. The diagram itself also differs: the fault tree is no longer a logic tree but a thinking tree for troubleshooting, the definition of the tree's nodes is different, and the composition of its branches is different as well.
NASA Technical Reports Server (NTRS)
Lee, Charles; Alena, Richard L.; Robinson, Peter
2004-01-01
Starting from an ISS fault tree example, we migrated to decision trees and present a method for converting fault trees to decision trees. The method shows that visualizing the root cause of a fault becomes easier and that tree manipulation becomes more programmatic via available decision tree programs. The decision tree visualization presents the diagnosis in a straightforward, easily understood format. For ISS real-time fault diagnosis, the status of the systems can be shown by running the signals through the trees and observing where they stop. A further advantage of decision trees is that they can learn fault patterns and predict future faults from historical data. The learning is not limited to static data sets: by accumulating real-time data, the decision trees can acquire and store fault patterns and recognize them when they recur.
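The fault-tree-to-decision-tree conversion described above can be sketched as follows. This is a minimal illustration, assuming a boolean AND/OR fault tree and an arbitrary fixed ordering of basic events; the event names are invented:

```python
def evaluate(node, assign):
    """Evaluate an AND/OR fault tree under a truth assignment to basic events."""
    if isinstance(node, str):
        return assign[node]
    kind, kids = node
    vals = [evaluate(k, assign) for k in kids]
    return all(vals) if kind == "AND" else any(vals)

def to_decision_tree(ft, events, assign=None):
    """Convert a fault tree into a nested-dict decision tree that tests
    basic events one at a time; leaves say whether the top event occurs.
    Branches on which the answer no longer depends are pruned."""
    assign = assign or {}
    if len(assign) == len(events):
        return evaluate(ft, assign)
    e = events[len(assign)]
    yes = to_decision_tree(ft, events, {**assign, e: True})
    no = to_decision_tree(ft, events, {**assign, e: False})
    return yes if yes == no else {"test": e, True: yes, False: no}

ft = ("OR", ["valve_stuck", ("AND", ["sensor_a", "sensor_b"])])
dt = to_decision_tree(ft, ["valve_stuck", "sensor_a", "sensor_b"])
# The root tests valve_stuck; its True branch prunes straight to "top event occurs".
```

Running live signals through such a tree is then just a walk from the root, testing one signal per node, which is the real-time diagnostic use the abstract describes.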
Automatic translation of digraph to fault-tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.
1992-01-01
The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
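The cut-set computation mentioned above can be sketched for the simple acyclic case (the translator's real work, breaking digraph cycles, is not shown). The gate encoding and event names are invented:

```python
def cut_sets(node):
    """Top-down (MOCUS-style) expansion of an AND/OR gate tree into
    minimal cut sets; leaves are basic-event names."""
    if isinstance(node, str):
        return [frozenset([node])]
    kind, kids = node
    if kind == "OR":                       # OR splits into alternative paths
        sets = [s for k in kids for s in cut_sets(k)]
    else:                                  # AND crosses the alternatives
        sets = [frozenset()]
        for k in kids:
            sets = [s | t for s in sets for t in cut_sets(k)]
    # absorption: keep only sets not strictly containing another set
    return [s for s in sets if not any(t < s for t in sets)]

tree = ("OR", ["power_loss", ("AND", ["pump1", "pump2"])])
# two minimal cut sets: {power_loss} and {pump1, pump2}
print(cut_sets(tree))
```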
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
Fault-Tree Compiler (FTC) program is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
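Under the usual assumption of statistically independent basic events, probability propagation through the five gate types can be sketched as follows. This is an illustrative reimplementation, not FTC's actual algorithm, and it is only exact when no basic event is repeated in the tree:

```python
from itertools import product

def top_event_prob(node, p):
    """Top-event probability for a fault tree with independent basic events.
    Leaves are names looked up in `p`; gate nodes are tuples whose last
    element is the child list: ("AND", kids), ("OR", kids), ("XOR", kids),
    ("INVERT", kids), or ("MOFN", m, kids)."""
    if isinstance(node, str):
        return p[node]
    kind, kids = node[0], node[-1]
    q = [top_event_prob(k, p) for k in kids]
    if kind == "AND":
        out = 1.0
        for x in q: out *= x
        return out
    if kind == "OR":
        out = 1.0
        for x in q: out *= 1 - x
        return 1 - out
    if kind == "XOR":            # exactly one of two inputs occurs
        a, b = q
        return a * (1 - b) + b * (1 - a)
    if kind == "INVERT":
        return 1 - q[0]
    if kind == "MOFN":           # at least m of the n inputs occur
        m = node[1]
        total = 0.0
        for bits in product([0, 1], repeat=len(q)):
            if sum(bits) >= m:
                term = 1.0
                for b, x in zip(bits, q):
                    term *= x if b else 1 - x
                total += term
        return total
    raise ValueError(kind)
```

For example, `top_event_prob(("OR", [("AND", ["a", "b"]), "c"]), {"a": 0.1, "b": 0.2, "c": 0.05})` gives 1 - 0.98 * 0.95 = 0.069.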
Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance
NASA Astrophysics Data System (ADS)
Wang, Jian; Yang, Zhenwei; Kang, Mei
2018-01-01
This paper attempts to apply the fault tree analysis method to corrective maintenance of grid communication systems. Through the establishment of a fault tree model of a typical system, combined with engineering experience, fault tree analysis theory is used to analyze the model, covering structural functions, probability importance, and related measures. The results show that fault tree analysis enables fast fault localization and effective repair of the system. The analysis method also offers guidance for researching and upgrading the reliability of the system.
Fault Tree in the Trenches, A Success Story
NASA Technical Reports Server (NTRS)
Long, R. Allen; Goodson, Amanda (Technical Monitor)
2000-01-01
Getting caught up in the explanation of Fault Tree Analysis (FTA) minutiae is easy. In fact, most FTA literature tends to address FTA concepts and methodology. Yet there seem to be few articles addressing actual design changes resulting from the successful application of fault tree analysis. This paper demonstrates how fault tree analysis was used to identify and solve a potentially catastrophic mechanical problem at a rocket motor manufacturer. While developing the fault tree given in this example, the analyst was told by several organizations that the piece of equipment in question had been evaluated by several committees and organizations, and that the analyst was wasting his time. The fault tree/cutset analysis resulted in a joint redesign of the control system by the tool engineering group and the fault tree analyst, as well as bragging rights for the analyst. (That the fault tree found problems where other engineering reviews had failed was not lost on the other engineering groups.) Even more interesting was that this was the analyst's first fault tree, which further demonstrates how effective fault tree analysis can be in guiding (i.e., forcing) the analyst to take a methodical approach in evaluating complex systems.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1992-01-01
FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
Tutorial: Advanced fault tree applications using HARP
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.
1993-01-01
Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
Technology transfer by means of fault tree synthesis
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Since Fault Tree Analysis (FTA) attempts to model and analyze failure processes of engineering systems, it is a common technique for good industrial practice. By contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a Knowledge Base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the later stages of an industrial process with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage, where the fault has been detected, (ii) the formation of an interface made of input faults that might occur upstream, (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance, and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus proving the functionality of the developed methodology.
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models of the system either from mathematical formulations or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts conducting extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data fed back from the system, and decisions are made against threshold values by using fault trees. Since these decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time, capturing the contents of fault trees as the initial state of the trees.
Fault trees and sequence dependencies
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.
1990-01-01
One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
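A priority-AND gate, one common way to capture sequence dependence, can be sketched over recorded failure times, together with a small Monte Carlo check of an ordering probability. This is illustrative only; the gates in the paper are solved via a Markov model, not simulation:

```python
import random

def pand(times):
    """Priority-AND gate: the output event occurs only if every input has
    occurred, and in left-to-right order; returns the output failure time,
    or None if the gate does not fire."""
    t_prev = float("-inf")
    for t in times:
        if t is None or t < t_prev:    # an input never failed, or out of order
            return None
        t_prev = t
    return t_prev

def p_a_before_b(lam_a, lam_b, n=100_000, seed=0):
    """Monte Carlo estimate of P(A fails before B) for exponential
    lifetimes; analytically this equals lam_a / (lam_a + lam_b)."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(lam_a) < rng.expovariate(lam_b)
               for _ in range(n))
    return hits / n

print(pand([1.0, 3.5]))   # 3.5  (inputs failed in order)
print(pand([3.5, 1.0]))   # None (wrong order: gate does not fire)
```

The ordering sensitivity is exactly what static AND gates cannot express, which is why a Markov (state-based) solution is needed for such models.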
McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L
The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.
A dynamic fault tree model of a propulsion system
NASA Technical Reports Server (NTRS)
Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila
2006-01-01
We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing
2017-01-14
In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the staff generally lack professional knowledge and pay a low degree of attention in these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logic structure of fault symptoms and faults. Second, rule extraction from the fault trees avoids duplication and redundancy. Third, the fuzzy neural network is applied to train the relationship mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while one symptom-to-two faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.
Reliability computation using fault tree analysis
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
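One standard way to handle a basic failure that appears in more than one fault path is inclusion-exclusion over the minimal cut sets. Whether this matches Chelson's exact derivation is not claimed here; it simply illustrates the problem the abstract raises, with invented event names:

```python
from itertools import combinations

def prob_from_cut_sets(cut_sets, p):
    """Exact top-event probability via inclusion-exclusion over minimal
    cut sets; correctly handles a basic event shared by several paths,
    assuming the basic events themselves are independent."""
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, r):
            union = set().union(*combo)        # shared events counted once
            term = 1.0
            for e in union:
                term *= p[e]
            total += term if r % 2 else -term  # alternate signs
    return total

# Top event = A AND (B OR C): cut sets {A,B} and {A,C} share event A.
print(prob_from_cut_sets([{"A", "B"}, {"A", "C"}],
                         {"A": 0.5, "B": 0.5, "C": 0.5}))   # 0.375
```

Naively multiplying and summing gate-by-gate would double-count A; inclusion-exclusion removes the overlap term p(A)p(B)p(C).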
Object-oriented fault tree evaluation program for quantitative analyses
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1988-01-01
Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
NASA Technical Reports Server (NTRS)
Martensen, Anna L.; Butler, Ricky W.
1987-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double precision floating point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
The Fault Tree Compiler (FTC): Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1989-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m OF n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double precision floating point arithmetic) to a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
Systems Theoretic Process Analysis Applied to an Offshore Supply Vessel Dynamic Positioning System
2016-06-01
Identifies additional safety issues that were either not identified or inadequately mitigated through the use of Fault Tree Analysis and Failure Modes and Effects Analysis techniques; the report includes a Fault Tree Analysis comparison.
An overview of the phase-modular fault tree approach to phased mission system analysis
NASA Technical Reports Server (NTRS)
Meshkat, L.; Xing, L.; Donohue, S. K.; Ou, Y.
2003-01-01
In this paper we look at how fault tree analysis (FTA), a primary means of performing reliability analysis of phased mission systems (PMS), can meet this challenge, by presenting an overview of the modular approach to solving fault trees that represent PMS.
Try Fault Tree Analysis, a Step-by-Step Way to Improve Organization Development.
ERIC Educational Resources Information Center
Spitzer, Dean
1980-01-01
Fault Tree Analysis, a systems safety engineering technology used to analyze organizational systems, is described. Explains the use of logic gates to represent the relationship between failure events, qualitative analysis, quantitative analysis, and effective use of Fault Tree Analysis. (CT)
Fault Tree Analysis: A Research Tool for Educational Planning. Technical Report No. 1.
ERIC Educational Resources Information Center
Alameda County School Dept., Hayward, CA. PACE Center.
This ESEA Title III report describes fault tree analysis and assesses its applicability to education. Fault tree analysis is an operations research tool which is designed to increase the probability of success in any system by analyzing the most likely modes of failure that could occur. A graphic portrayal, which has the form of a tree, is…
Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.
Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C
2015-06-01
An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and aiming at more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, that is, ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The present developed fault tree is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers to develop more refined and adequate control activities against FMD outbreaks and to find optimum strategies for rapid control. Further application using the current tree will be one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach of intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component that affects the system reliability. Here, weakest t-norm based intuitionistic fuzzy fault tree analysis is presented to calculate the fault interval of system components by integrating experts' knowledge and experience in terms of providing the possibility of failure of bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets, and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The result of the proposed method is compared with existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
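The scalar weakest t-norm itself is simple to state; the paper's contribution applies T(ω)-based arithmetic to triangular intuitionistic fuzzy numbers, which is more involved. A sketch of the scalar operator:

```python
def t_drastic(a, b):
    """Weakest (drastic) t-norm T_w on [0, 1]:
    T_w(a, b) = b if a == 1, a if b == 1, and 0 otherwise."""
    if a == 1.0:
        return b
    if b == 1.0:
        return a
    return 0.0

def t_min(a, b):
    """Strongest t-norm for comparison: the minimum."""
    return min(a, b)
```

For every t-norm T and all a, b in [0, 1], T_w(a, b) ≤ T(a, b) ≤ min(a, b); the weakest t-norm therefore accumulates the least fuzziness through repeated arithmetic operations, which is the motivation for using it in fuzzy fault-tree calculations.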
Software For Fault-Tree Diagnosis Of A System
NASA Technical Reports Server (NTRS)
Iverson, Dave; Patterson-Hine, Ann; Liao, Jack
1993-01-01
Fault Tree Diagnosis System (FTDS) computer program is automated-diagnostic-system program identifying likely causes of specified failure on basis of information represented in system-reliability mathematical models known as fault trees. Is modified implementation of failure-cause-identification phase of Narayanan's and Viswanadham's methodology for acquisition of knowledge and reasoning in analyzing failures of systems. Knowledge base of if/then rules replaced with object-oriented fault-tree representation. Enhancement yields more-efficient identification of causes of failures and enables dynamic updating of knowledge base. Written in C language, C++, and Common LISP.
Fault tree models for fault tolerant hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Tuazon, Jezus O.
1991-01-01
Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
Product Support Manager Guidebook
2011-04-01
The support package is being developed using supportability analysis concepts such as Failure Mode, Effects, and Criticality Analysis (FMECA), Fault Tree Analysis (FTA), Level of Repair Analysis (LORA), Condition Based Maintenance Plus (CBM+), Maintenance Task Analysis (MTA), and Failure Reporting and Corrective Action System (FRACAS).
MIRAP, microcomputer reliability analysis program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jehee, J.N.T.
1989-01-01
A program for a microcomputer is outlined that can determine minimal cut sets from a specified fault tree logic. The speed and memory limitations of the microcomputers on which the program is implemented (Atari ST and IBM) are addressed by reducing the fault tree's size and by storing the cut set data on disk. Extensive, well-proven fault tree restructuring techniques, such as the identification of sibling events and of independent gate events, reduce the fault tree's size but do not alter its logic. New methods are used for the Boolean reduction of the fault tree logic. Special criteria for combining events under 'AND' and 'OR' logic avoid the creation of many subsuming cut sets which would all cancel out due to existing cut sets. Figures and tables illustrate these methods. 4 refs., 5 tabs.
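The cancellation of subsuming cut sets mentioned above is the Boolean absorption law. A minimal sketch of the reduction step (not MIRAP's implementation):

```python
def minimize_cut_sets(sets):
    """Boolean absorption (A + A*B = A): drop every cut set that strictly
    contains another, leaving only the minimal cut sets."""
    sets = [frozenset(s) for s in sets]
    return [s for s in sets if not any(t < s for t in sets)]

# {'a','b'} is absorbed by {'a'}; the other two sets survive.
print(minimize_cut_sets([{"a"}, {"a", "b"}, {"b", "c"}]))
```

MIRAP's special combination criteria avoid ever generating such subsumed sets; the sketch above instead removes them after the fact, which is simpler but more wasteful.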
The FTA Method And A Possibility Of Its Application In The Area Of Road Freight Transport
NASA Astrophysics Data System (ADS)
Poliaková, Adela
2015-06-01
The Fault Tree process utilizes logic diagrams to portray and analyse potentially hazardous events. Three basic symbols (logic gates) are adequate for diagramming any fault tree. However, additional recently developed symbols can be used to reduce the time and effort required for analysis. A fault tree is a graphical representation of the relationship between certain specific events and the ultimate undesired event (2). This paper provides a basic description of the Fault Tree Analysis method and a practical view of its possible application to quality improvement in a road freight transport company.
Fault Tree Analysis: Its Implications for Use in Education.
ERIC Educational Resources Information Center
Barker, Bruce O.
This study introduces the concept of Fault Tree Analysis as a systems tool and examines the implications of Fault Tree Analysis (FTA) as a technique for isolating failure modes in educational systems. A definition of FTA and discussion of its history, as it relates to education, are provided. The step by step process for implementation and use of…
Preventing medical errors by designing benign failures.
Grout, John R
2003-07-01
One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not guarantee that the changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
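The scald-valve example can be sketched as two Boolean fault trees. The function names and conditions below are hypothetical simplifications of the scenario, not the article's actual model:

```python
# Hypothetical Boolean sketch of the scald-valve example: without the
# valve, hot water plus a bathing patient causes the undesirable event;
# the scald valve adds an AND condition (water only flows when it is not
# too hot), converting the failure into the benign "no water" event.

def outcome_without_valve(water_too_hot, patient_bathing):
    scalded = water_too_hot and patient_bathing
    return "patient scalded" if scalded else "ok"

def outcome_with_scald_valve(water_too_hot, patient_bathing):
    water_flows = not water_too_hot          # valve shuts off on over-temperature
    if water_too_hot and patient_bathing and water_flows:
        return "patient scalded"             # now logically unreachable
    if water_too_hot and not water_flows:
        return "no water"                    # benign failure
    return "ok"

print(outcome_without_valve(True, True))     # patient scalded
print(outcome_with_scald_valve(True, True))  # no water
```

The change to the process logic is visible directly in the code: the cause of the undesirable failure now causes only the benign one.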
Fault Tree Analysis Application for Safety and Reliability
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies root causes in system components, but when software is identified as a root cause, the tools do not build trees into the software component. No commercial software tools have been built specifically for the development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
ERIC Educational Resources Information Center
Barker, Bruce O.; Petersen, Paul D.
This paper explores the fault-tree analysis approach to isolating failure modes within a system. Fault tree analysis investigates potentially undesirable events and then looks for failures in sequence that would lead to their occurrence. Relationships among these events are symbolized by AND or OR logic gates, AND used when single events must coexist to…
Evidential Networks for Fault Tree Analysis with Imprecise Knowledge
NASA Astrophysics Data System (ADS)
Yang, Jianping; Huang, Hong-Zhong; Liu, Yu; Li, Yan-Feng
2012-06-01
Fault tree analysis (FTA), as one of the powerful tools in reliability engineering, has been widely used to enhance system quality attributes. In most fault tree analyses, precise values are adopted to represent the probabilities of occurrence of events. Due to the lack of sufficient data or the imprecision of existing data at the early stage of product design, it is often difficult to accurately estimate the failure rates of individual events or the probabilities of occurrence of the events. Therefore, such imprecision and uncertainty need to be taken into account in reliability analysis. In this paper, evidential networks (EN) are employed to quantify and propagate the aforementioned uncertainty and imprecision in fault tree analysis. The detailed processes for converting fault tree (FT) logic gates to EN are described. The figures of the logic gates and the converted equivalent EN, together with the associated truth tables and the conditional belief mass tables, are also presented in this work. A new epistemic importance measure is proposed to describe the effect of an event's degree of ignorance. The fault tree of an aircraft engine damaged by oil filter plugs is presented to demonstrate the proposed method.
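The flavor of propagating imprecision can be sketched with simple probability intervals pushed through AND/OR gates under an independence assumption. This is only a stand-in for, and much weaker than, the paper's belief-mass machinery; the component names and bounds are invented:

```python
# Simplified sketch: propagate probability *intervals* [lower, upper]
# through AND/OR gates assuming independent events. This stands in for,
# but does not reproduce, the evidential-network belief-mass approach;
# the component names and bounds are invented for illustration.

def and_gate(a, b):
    (la, ua), (lb, ub) = a, b
    return (la * lb, ua * ub)

def or_gate(a, b):
    (la, ua), (lb, ub) = a, b
    return (1 - (1 - la) * (1 - lb), 1 - (1 - ua) * (1 - ub))

pump   = (0.01, 0.03)    # imprecise failure probabilities
valve  = (0.02, 0.05)
backup = (0.001, 0.004)
# top event: (pump fails OR valve fails) AND backup fails
low, high = and_gate(or_gate(pump, valve), backup)
print(f"top event probability in [{low:.2e}, {high:.2e}]")
```

The width of the output interval shows how input imprecision accumulates at the top event instead of being hidden behind a single point value.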
Object-oriented fault tree models applied to system diagnosis
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.
Probabilistic fault tree analysis of a radiation treatment system.
Ekaette, Edidiong; Lee, Robert C; Cooke, David L; Iftody, Sandra; Craighead, Peter
2007-12-01
Inappropriate administration of radiation for cancer treatment can result in severe consequences such as premature death or appreciably impaired quality of life. There has been little study of vulnerable treatment process components and their contribution to the risk of radiation treatment (RT). In this article, we describe the application of probabilistic fault tree methods to assess the probability of radiation misadministration to patients at a large cancer treatment center. We conducted a systematic analysis of the RT process that identified four process domains: Assessment, Preparation, Treatment, and Follow-up. For the Preparation domain, we analyzed possible incident scenarios via fault trees. For each task, we also identified existing quality control measures. To populate the fault trees we used subjective probabilities from experts and compared results with incident report data. Both the fault tree and the incident report analysis revealed simulation tasks to be most prone to incidents, and the treatment prescription task to be least prone to incidents. The probability of a Preparation domain incident was estimated to be in the range of 0.1-0.7% based on incident reports, which is comparable to the mean value of 0.4% from the fault tree analysis using probabilities from the expert elicitation exercise. In conclusion, an analysis of part of the RT system using a fault tree populated with subjective probabilities from experts was useful in identifying vulnerable components of the system, and provided quantitative data for risk management.
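The fault-tree aggregation behind a domain-level estimate can be sketched as an OR gate over task-level incident probabilities. The task names and probabilities below are hypothetical, not the study's elicited values; they are chosen merely to land in the reported 0.1-0.7% range:

```python
# Sketch of combining task-level incident probabilities under an OR gate
# (a domain incident occurs if any task fails), assuming independence.
# Task names and probabilities are invented for illustration.

def or_gate(probabilities):
    p_no_incident = 1.0
    for p in probabilities:
        p_no_incident *= (1.0 - p)
    return 1.0 - p_no_incident

preparation_tasks = {
    "simulation":             0.002,   # most incident-prone in the study
    "treatment_planning":     0.001,
    "plan_check":             0.0005,
    "treatment_prescription": 0.0005,  # least incident-prone
}
p_incident = or_gate(preparation_tasks.values())
print(f"P(Preparation incident) ~ {p_incident:.2%}")
```

With these illustrative inputs the OR-gate estimate comes out near 0.4%, comparable to the mean value the abstract reports from expert elicitation.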
Reconfigurable tree architectures using subtree oriented fault tolerance
NASA Technical Reports Server (NTRS)
Lowrie, Matthew B.
1987-01-01
An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures for both single-chip and multichip tree implementations, while reducing redundancy in terms of both spare processors and links. The VLSI layout is O(n) for binary trees and is directly extensible to N-ary trees and to fault tolerance through performance degradation.
Secure Embedded System Design Methodologies for Military Cryptographic Systems
2016-03-31
Fault-Tree Analysis (FTA); Built-In Self-Test (BIST) Introduction Secure access-control systems restrict operations to authorized users via methods...failures in the individual software/processor elements, the question of exactly how unlikely is difficult to answer. Fault-Tree Analysis (FTA) has a...Collins of Sandia National Laboratories for years of sharing his extensive knowledge of Fail-Safe Design Assurance and Fault-Tree Analysis
Rymer, M.J.
2000-01-01
The Coachella Valley area was strongly shaken by the 1992 Joshua Tree (23 April) and Landers (28 June) earthquakes, and both events caused triggered slip on active faults within the area. Triggered slip associated with the Joshua Tree earthquake was on a newly recognized fault, the East Wide Canyon fault, near the southwestern edge of the Little San Bernardino Mountains. Slip associated with the Landers earthquake formed along the San Andreas fault in the southeastern Coachella Valley. Surface fractures formed along the East Wide Canyon fault in association with the Joshua Tree earthquake. The fractures extended discontinuously over a 1.5-km stretch of the fault, near its southern end. Sense of slip was consistently right-oblique, west side down, similar to the long-term style of faulting. Measured offset values were small, with right-lateral and vertical components of slip ranging from 1 to 6 mm and 1 to 4 mm, respectively. This is the first documented historic slip on the East Wide Canyon fault, which was first mapped only months before the Joshua Tree earthquake. Surface slip associated with the Joshua Tree earthquake most likely developed as triggered slip given its 5 km distance from the Joshua Tree epicenter and aftershocks. As revealed in a trench investigation, slip formed in an area with only a thin (<3 m thick) veneer of alluvium in contrast to earlier documented triggered slip events in this region, all in the deep basins of the Salton Trough. A paleoseismic trench study in an area of 1992 surface slip revealed evidence of two and possibly three surface faulting events on the East Wide Canyon fault during the late Quaternary, probably latest Pleistocene (first event) and mid- to late Holocene (second two events). About two months after the Joshua Tree earthquake, the Landers earthquake then triggered slip on many faults, including the San Andreas fault in the southeastern Coachella Valley. 
Surface fractures associated with this event formed discontinuous breaks over a 54-km-long stretch of the fault, from the Indio Hills southeastward to Durmid Hill. Sense of slip was right-lateral; only locally was there a minor (~1 mm) vertical component of slip. Measured dextral displacement values ranged from 1 to 20 mm, with the largest amounts found in the Mecca Hills where large slip values have been measured following past triggered-slip events.
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Bolster, Diogo; Sanchez-Vila, Xavier; Nowak, Wolfgang
2011-05-01
Assessing health risk in hydrological systems is an interdisciplinary field. It relies on the expertise in the fields of hydrology and public health and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties and variabilities present in hydrological, physiological, and human behavioral parameters. Despite significant theoretical advancements in stochastic hydrology, there is still a dire need to further propagate these concepts to practical problems and to society in general. Following a recent line of work, we use fault trees to address the task of probabilistic risk analysis and to support related decision and management problems. Fault trees allow us to decompose the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural divide and conquer approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance, and stage of analysis. Three differences are highlighted in this paper when compared to previous works: (1) The fault tree proposed here accounts for the uncertainty in both hydrological and health components, (2) system failure within the fault tree is defined in terms of risk being above a threshold value, whereas previous studies that used fault trees used auxiliary events such as exceedance of critical concentration levels, and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
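The threshold-based failure definition in point (2) can be sketched with a toy Monte Carlo model: each fault tree module is reduced to one uncertain factor, risk is their product, and "system failure" is the event that risk exceeds a threshold. The distributions, module names, and threshold below are invented for illustration:

```python
import random

# Toy sketch of the threshold-based failure definition: each fault tree
# module contributes one uncertain factor, risk is their product, and the
# system "fails" when risk exceeds a threshold. Distributions, modules,
# and the threshold are invented, not the paper's groundwater setting.

def sample_risk(rng):
    concentration = rng.lognormvariate(0.0, 0.5)   # hydrological module
    intake_rate   = rng.uniform(1.0, 2.0)          # behavioral module
    potency       = rng.lognormvariate(-3.0, 0.3)  # health-effects module
    return concentration * intake_rate * potency

def p_failure(threshold, n=20000, seed=7):
    """Monte Carlo estimate of P(risk > threshold)."""
    rng = random.Random(seed)
    return sum(sample_risk(rng) > threshold for _ in range(n)) / n

print(p_failure(threshold=0.2))
```

Each module's internal complexity can be refined independently, which is the divide-and-conquer property the abstract emphasizes.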
Planning effectiveness may grow on fault trees.
Chow, C W; Haddad, K; Mannino, B
1991-10-01
The first step of a strategic planning process--identifying and analyzing threats and opportunities--requires subjective judgments. By using an analytical tool known as a fault tree, healthcare administrators can reduce the unreliability of subjective decision making by creating a logical structure for problem solving and decision making. A case study of 11 healthcare administrators showed that an analysis technique called prospective hindsight can add to a fault tree's ability to improve a strategic planning process.
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Fault Tree Analysis (FTA) can be used for technology transfer when the relevant problem (called the 'top event' in FTA) is solved in a technology centre and the results are diffused to interested parties (usually Small and Medium Enterprises - SMEs) that lack the proper equipment and the required know-how to solve the problem on their own. Nevertheless, there is a significant drawback in this procedure: the information usually provided by the SMEs to the technology centre, about production conditions and corresponding quality characteristics of the product, and (sometimes) the relevant expertise in the Knowledge Base of this centre may be inadequate to form a complete fault tree. Since such cases are quite frequent in practice, we have developed a methodology for transforming an incomplete fault tree into an Ishikawa diagram, which is more flexible and less strict in establishing causal chains, because it uses a surface phenomenological level with a limited number of fault categories. On the other hand, such an Ishikawa diagram can be extended to simulate a fault tree as relevant knowledge increases. An implementation of this transformation, referring to the anodization of aluminium, is presented.
A systematic risk management approach employed on the CloudSat project
NASA Technical Reports Server (NTRS)
Basilio, R. R.; Plourde, K. S.; Lam, T.
2000-01-01
The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.
Fault Tree Analysis: A Bibliography
NASA Technical Reports Server (NTRS)
2000-01-01
Fault tree analysis is a top-down approach to the identification of process hazards. It is one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis, risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarrack, A.G.
The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems, were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support system failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the "Facility Commitments" section. The purpose of the "Assumptions" section is to clarify the basis for fault tree modeling, and it is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).
Graphical fault tree analysis for fatal falls in the construction industry.
Chi, Chia-Fen; Lin, Syuan-Zih; Dewi, Ratna Sari
2014-11-01
The current study applied a fault tree analysis to represent the causal relationships among events and causes that contributed to fatal falls in the construction industry. Four hundred and eleven work-related fatalities in the Taiwanese construction industry were analyzed in terms of age, gender, experience, falling site, falling height, company size, and the causes for each fatality. Given that most fatal accidents involve multiple events, the current study coded up to a maximum of three causes for each fall fatality. After the Boolean algebra and minimal cut set analyses, accident causes associated with each falling site can be presented as a fault tree to provide an overview of the basic causes, which could trigger fall fatalities in the construction industry. Graphical icons were designed for each falling site along with the associated accident causes to illustrate the fault tree in a graphical manner. A graphical fault tree can improve inter-disciplinary discussion of risk management and the communication of accident causation to first line supervisors. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fault Tree Analysis for an Inspection Robot in a Nuclear Power Plant
NASA Astrophysics Data System (ADS)
Ferguson, Thomas A.; Lu, Lixuan
2017-09-01
The life extension of current nuclear reactors has led to an increasing demand for inspection and maintenance of critical reactor components that are too expensive to replace. To reduce the exposure dosage to workers, robotics has become an attractive alternative as a preventative safety tool in nuclear power plants. It is crucial to understand the reliability of these robots in order to increase the veracity of and confidence in their results. This study presents a Fault Tree (FT) analysis of a coolant outlet pipe snake-arm inspection robot in a nuclear power plant. Fault trees were constructed for a qualitative analysis to determine the reliability of the robot. Insight into the applicability of fault tree methods for inspection robotics in the nuclear industry is gained through this investigation.
Interim reliability evaluation program, Browns Ferry fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, M.E.
1981-01-01
An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation programs, simplifying the recording and displaying of events, yet maintaining the system of identifying faults. The level of investigation is not changed. The analytical thought process inherent in the conventional method is not compromised. But the abbreviated method takes less time, and the fault modes are much more visible.
Object-Oriented Algorithm For Evaluation Of Fault Trees
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1992-01-01
Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
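The call-reduction idea can be sketched as gate and event objects that cache their result, so a subtree shared by repeated events is solved only once per failure scenario. A hypothetical sketch, not the paper's algorithm:

```python
# Sketch of the object-oriented idea: each gate/event is an object, and a
# node caches its result so a shared (repeated-event) subtree is evaluated
# only once per failure scenario. Illustrative, not the paper's algorithm.

class Node:
    def __init__(self, kind, children=(), name=None):
        self.kind, self.children, self.name = kind, tuple(children), name
        self.calls = 0   # counts how often this node is actually solved

    def evaluate(self, failed, cache=None):
        cache = {} if cache is None else cache   # one cache per scenario
        if id(self) not in cache:
            self.calls += 1
            if self.kind == "basic":
                cache[id(self)] = self.name in failed
            elif self.kind == "and":
                cache[id(self)] = all(c.evaluate(failed, cache) for c in self.children)
            else:  # "or"
                cache[id(self)] = any(c.evaluate(failed, cache) for c in self.children)
        return cache[id(self)]

shared = Node("and", [Node("basic", name="A"), Node("basic", name="B")])
top = Node("or", [shared, Node("and", [shared, Node("basic", name="C")])])
print(top.evaluate({"A"}))   # False: B has not failed
print(shared.calls)          # 1: the repeated subtree was solved once, then reused
```

The same object appearing twice in the tree is the object-oriented analogue of a repeated event, and the cache is what reduces the number of calls.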
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
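How sojourn-time dependence can shrink time-to-failure uncertainty is illustrated below with an invented two-stage model, deliberately extreme, and not the study's four-element system:

```python
import random

# Toy illustration (not the study's four-element model): two sequential
# degradation stages. In the "dependent" variant, a long first sojourn
# implies a short second one, so the variance of the total time to
# failure shrinks even though each stage alone is just as uncertain.

def time_to_failure(rng, correlated):
    t1 = rng.uniform(1.0, 9.0)
    t2 = 10.0 - t1 if correlated else rng.uniform(1.0, 9.0)
    return t1 + t2

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(42)
independent = [time_to_failure(rng, correlated=False) for _ in range(5000)]
dependent = [time_to_failure(rng, correlated=True) for _ in range(5000)]
print(variance(dependent) < variance(independent))  # True
```

In the correlated variant the total is pinned near 10, so its variance collapses; a semi-Markov model can create milder versions of exactly this effect.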
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
Locating hardware faults in a data communications network of a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-01-12
Hardware fault location in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running the same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
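The isolation rule above can be sketched with a toy model in which a test tree rooted at a node exercises only that node's outgoing links; the data layout and test model are invented:

```python
# Toy sketch (data layout invented) of the isolation rule described above:
# a test suite over a node's test tree exercises the links from that node
# to its children; a link is implicated when the parent's test tree fails
# while every child's test tree passes.

def run_suite(root, children_of, broken_link):
    """A test tree rooted at `root` passes unless it exercises the broken link."""
    return all((root, c) != broken_link for c in children_of.get(root, []))

def locate_faulty_parent(children_of, broken_link):
    for parent, kids in children_of.items():
        parent_fails = not run_suite(parent, children_of, broken_link)
        kids_pass = all(run_suite(k, children_of, broken_link) for k in kids)
        if parent_fails and kids_pass:
            return parent   # the defective link leaves this node
    return None

tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
print(locate_faulty_parent(tree, broken_link=(1, 4)))  # 1
```

Because the child test trees exclude the parent-to-child links, a parent-fails/children-pass pattern pins the defect to one of those links.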
Khan, F I; Abbasi, S A
2000-07-10
Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has been a powerful technique used traditionally in identifying hazards in nuclear installations and power industries. As the systematic articulation of the fault tree is associated with assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named AS-II), which makes the application of FTA simpler, quicker, and cheaper, thus opening up the possibility of its wider use in risk assessment in the chemical process industries. Based on this methodology, we have developed a computer-automated tool. The details are presented in this paper.
DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarity between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimal cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimal cut set for a node is any cut set with the property that, if any of the failures in the set were removed, the occurrence of the remaining failures would not cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failures. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree.
A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal root node. A subtree is created for each of the inputs to the digraph terminal node, and the roots of those subtrees are added as children of the top node of the fault tree. Every node in the digraph upstream of the terminal node will be visited and converted. During the conversion process, the algorithm keeps track of the path from the digraph terminal node to the current digraph node. If a node is visited twice, the program has found a cycle in the digraph. This cycle is broken by finding the minimal cut sets of the twice-visited digraph node and forming those cut sets into subtrees. Another implementation of the algorithm resolves loops by building a subtree based on the digraph minimal cut set calculation. It does not reduce the subtree to minimal cut set form. This second implementation produces larger fault trees, but runs much faster than the version using minimal cut sets, since it does not spend time reducing the subtrees to minimal cut sets. The fault trees produced by DG TO FT will contain OR gates, AND gates, Basic Event nodes, and NOP gates. The results of a translation can be output as a text object description of the fault tree similar to the text digraph input format. The translator can also output a LISP language formatted file and an augmented LISP file which can be used by the FTDS (ARC-13019) diagnosis system, available from COSMIC, which performs diagnostic reasoning using the fault tree as a knowledge base. DG TO FT is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. DG TO FT is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette.
It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is provided on the distribution medium. DG TO FT was developed in 1992. Sun, and SunOS are trademarks of Sun Microsystems, Inc. DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc. System 7 is a trademark of Apple Computers Inc. Microsoft Word is a trademark of Microsoft Corporation.
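The path-tracking cycle check described in the DG TO FT abstract can be sketched as a recursive walk. The tuple encoding and node names below are invented, and a cycle is merely marked here, whereas the real tool breaks it via the node's minimal cut sets:

```python
# Sketch of the path-tracking traversal described above. The translator
# walks upstream from the chosen terminal node, carrying the current path;
# revisiting a node on that path reveals a cycle. Here a cycle is merely
# marked ("cycle", node); DG TO FT instead breaks it using the node's
# minimal cut sets. Node names and the tuple encoding are invented.

def to_fault_tree(node, digraph, path=()):
    """digraph maps node -> (gate, inputs); missing nodes are basic events."""
    if node in path:
        return ("cycle", node)
    gate, inputs = digraph.get(node, ("basic", []))
    if gate == "basic":
        return ("basic", node)
    return (gate, node, [to_fault_tree(i, digraph, path + (node,)) for i in inputs])

digraph = {
    "TOP": ("or", ["A", "G1"]),
    "G1": ("and", ["C", "TOP"]),   # feedback edge: G1 depends on TOP
}
tree = to_fault_tree("TOP", digraph)
print(tree)
```

Passing the path as an argument rather than using a global visited set is what distinguishes a genuine cycle from a node legitimately reachable along two different paths.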
Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed, or are under development, to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
Fault diagnosis of power transformer based on fault-tree analysis (FTA)
NASA Astrophysics Data System (ADS)
Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu
2017-05-01
Power transformers are important equipment in power plants and substations and serve as a key hub linking power distribution and transmission in power systems. Their performance directly affects the reliability and stability of the power system. This paper classifies power transformer faults into five categories according to fault type and, along the time dimension, into three stages. Criteria based on routine dissolved gas analysis (DGA) and infrared diagnostics are used to establish the running state of the transformer. Finally, according to the needs of power transformer fault diagnosis, a power transformer fault tree is constructed by stepwise refinement from the general to the specific.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure-space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph; digraphs allow any pattern of interconnection between nodes, including loops. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and to identify potential single-point failures in a modeled system. The fault tree minimal cut set code reads a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or only those up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. 
The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets are output as a text file. CUTSETS includes a utility program that converts the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860), available from COSMIC, are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc. DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
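The recursive top-down parse described above can be sketched as a short Python routine. This is a simplified illustration, not the CUTSETS code itself; the dictionary encoding of the tree and all names are our assumptions.

```python
from itertools import product

def minimal_cut_sets(node, tree):
    """Minimal cut sets of a fault tree node, as frozensets of basic events.

    tree maps gate name -> ("AND" | "OR", [children]); any name not in
    tree is a basic event. OR gates union their children's cut sets;
    AND gates combine one cut set from each child. Cut sets that are
    supersets of other cut sets are then discarded, leaving only the
    minimal ones.
    """
    if node not in tree:                       # basic event
        return [frozenset([node])]
    gate, children = tree[node]
    child_sets = [minimal_cut_sets(c, tree) for c in children]
    if gate == "OR":
        sets = [s for cs in child_sets for s in cs]
    else:                                      # AND
        sets = [frozenset().union(*combo) for combo in product(*child_sets)]
    uniq = set(sets)
    return [s for s in uniq if not any(t < s for t in uniq)]

ft = {"TOP": ("AND", ["G1", "G2"]),
      "G1": ("OR", ["A", "B"]),
      "G2": ("OR", ["A", "C"])}
mcs = minimal_cut_sets("TOP", ft)   # {A} absorbs {A,B} and {A,C}
```

Here the single-point failure A shows up as a one-element minimal cut set, which is exactly the kind of result the abstract says these codes are used to find.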
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis, and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
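The notion of probabilistic importance used by codes like IMPORTANCE can be illustrated with the Birnbaum measure, the sensitivity of the top-event probability to one basic event. The sketch below is ours, not the IMPORTANCE code: it assumes independent basic events and a tree with no repeated events, and the tree encoding and numbers are hypothetical.

```python
def top_probability(node, tree, p):
    """Exact top-event probability for a coherent fault tree with
    independent basic events and no repeated events."""
    if node not in tree:                 # basic event
        return p[node]
    gate, children = tree[node]
    qs = [top_probability(c, tree, p) for c in children]
    out = 1.0
    if gate == "AND":                    # all children must fail
        for q in qs:
            out *= q
        return out
    for q in qs:                         # OR: 1 - prod(1 - q_i)
        out *= 1.0 - q
    return 1.0 - out

def birnbaum(event, tree, p, top="TOP"):
    """Birnbaum importance: dP(top)/dp_i = P(top | p_i=1) - P(top | p_i=0)."""
    return (top_probability(top, tree, {**p, event: 1.0})
            - top_probability(top, tree, {**p, event: 0.0}))

ft = {"TOP": ("OR", ["G1", "C"]), "G1": ("AND", ["A", "B"])}
p = {"A": 0.1, "B": 0.2, "C": 0.05}
```

Ranking events by such a measure is what lets an analyst spot the weak points of a design: here C, which reaches the top through a bare OR gate, dominates A and B, which must coincide.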
Fire safety in transit systems fault tree analysis
DOT National Transportation Integrated Search
1981-09-01
Fire safety countermeasures applicable to transit vehicles are identified and evaluated. This document contains fault trees which illustrate the sequences of events which may lead to a transit-fire related casualty. A description of the basis for the...
System Analysis by Mapping a Fault-tree into a Bayesian-network
NASA Astrophysics Data System (ADS)
Sheng, B.; Deng, C.; Wang, Y. H.; Tang, L. H.
2018-05-01
In view of the limitations of fault tree analysis in reliability assessment, the Bayesian Network (BN) has been studied as an alternative technology. After a brief introduction to the method for mapping a Fault Tree (FT) into an equivalent BN, equations used to calculate the structure importance degree, the probability importance degree, and the critical importance degree are presented, and their correctness is proved mathematically. Using an aircraft landing gear's FT, an equivalent BN is developed and analysed. The results show that richer and more accurate information is obtained through the BN method than through the FT, which demonstrates that the BN is a superior technique in both reliability assessment and fault diagnosis.
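The mapping idea can be made concrete for a single gate: each fault tree gate becomes a BN node whose conditional probability table (CPT) is deterministic, and the top-event probability follows by enumerating the joint distribution. The sketch below is our illustration of that standard construction, not the paper's landing-gear model.

```python
from itertools import product

def bn_and_top_prob(p_a, p_b):
    """TOP = AND(A, B) mapped to a Bayesian network: root nodes A and B
    keep their failure probabilities, and the gate node gets the
    deterministic CPT P(G=1 | a, b) = 1 iff a = b = 1. The marginal
    P(G=1) is computed by summing over the joint distribution."""
    total = 0.0
    for a, b in product((0, 1), repeat=2):
        weight = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
        cpt = 1.0 if (a == 1 and b == 1) else 0.0   # deterministic gate
        total += weight * cpt
    return total
```

For an AND gate this reproduces the fault tree result exactly; the added value of the BN comes from replacing deterministic CPTs with probabilistic ones and from posterior (diagnostic) queries, which the FT cannot express.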
Reset Tree-Based Optical Fault Detection
Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon
2013-01-01
In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267
Fault tree applications within the safety program of Idaho Nuclear Corporation
NASA Technical Reports Server (NTRS)
Vesely, W. E.
1971-01-01
Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure or accident causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.
Development of a methodology for assessing the safety of embedded software systems
NASA Technical Reports Server (NTRS)
Garrett, C. J.; Guarro, S. B.; Apostolakis, G. E.
1993-01-01
A Dynamic Flowgraph Methodology (DFM) based on an integrated approach to modeling and analyzing the behavior of software-driven embedded systems for assessing and verifying reliability and safety is discussed. DFM is based on an extension of the Logic Flowgraph Methodology to incorporate state transition models. System models which express the logic of the system in terms of causal relationships between physical variables and temporal characteristics of software modules are analyzed to determine how a certain state can be reached. This is done by developing timed fault trees which take the form of logical combinations of static trees relating the system parameters at different points in time. The resulting information concerning the hardware and software states can be used to eliminate unsafe execution paths and identify testing criteria for safety-critical software functions.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
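The setting can be made concrete with a toy case where the closed form is exact: for TOP = AND(E1, E2), the top-event probability is a product of lognormal basic-event probabilities and is therefore itself lognormal, with median exp(mu1 + mu2). The Monte Carlo check below is our own sketch (the parameter values are arbitrary), not the article's method.

```python
import math
import random

def mc_median_top(mu, sigma, n=20000, seed=1):
    """Monte Carlo median of P(top) for TOP = AND(E1, E2) when each
    basic-event probability is lognormal(mu_i, sigma_i). The product
    of independent lognormals is lognormal, so the exact median is
    exp(mu[0] + mu[1]); sampling should land close to it."""
    rng = random.Random(seed)
    draws = sorted(math.exp(rng.gauss(mu[0], sigma[0]))
                   * math.exp(rng.gauss(mu[1], sigma[1]))
                   for _ in range(n))
    return draws[n // 2]

median_mc = mc_median_top([-6.0, -5.0], [0.7, 0.5])
median_exact = math.exp(-11.0)
```

For general gate structures the product form no longer holds exactly, which is precisely the gap the article's lognormal approximation addresses.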
Fault Tree Analysis as a Planning and Management Tool: A Case Study
ERIC Educational Resources Information Center
Witkin, Belle Ruth
1977-01-01
Fault Tree Analysis is an operations research technique used to analyse the most probable modes of failure in a system, so that the system can be redesigned or monitored more closely to increase its likelihood of success. (Author)
NASA Astrophysics Data System (ADS)
Rodak, C. M.; McHugh, R.; Wei, X.
2016-12-01
The development and combination of horizontal drilling and hydraulic fracturing have unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating potential risks posed by directional hydraulic fracturing activities. These studies, balanced with the potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantification of the overall risk posed by hydraulic fracturing at the system level is rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is demonstrated with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity- and quality-based impacts and sub-divided based on the stage of the hydraulic fracturing process in which the data are relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The structure of the fault tree described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic. This project shows the utility of fault tree analysis for the visual communication of the potential risks of hydraulic fracturing activities on groundwater resources.
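The Boolean reduction mentioned above can be sketched as follows: if the top event (contamination or over-exploitation) is an OR over independent stages, and each stage an AND of its required basic events, the whole tree collapses to one expression. The stage names and probabilities below are hypothetical placeholders, not the study's data.

```python
def stage_prob(events):
    """AND gate: all basic events in a stage must occur together."""
    out = 1.0
    for p in events:
        out *= p
    return out

def failure_prob(stages):
    """OR gate over independent stages: 1 - prod(1 - P_stage)."""
    out = 1.0
    for events in stages:
        out *= 1.0 - stage_prob(events)
    return 1.0 - out

# Hypothetical stages: an on-site spill (spill occurs AND reaches the
# aquifer), a well-casing leak, and an induced fracture pathway.
p_fail = failure_prob([[0.05, 0.10], [0.02], [0.01]])
```

The point of the structure is that regional data only ever changes the leaf probabilities; the risk equation itself stays fixed.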
Fault Tree Analysis: An Emerging Methodology for Instructional Science.
ERIC Educational Resources Information Center
Wood, R. Kent; And Others
1979-01-01
Describes Fault Tree Analysis, a tool for systems analysis which attempts to identify possible modes of failure in systems to increase the probability of success. The article defines the technique and presents the steps of FTA construction, focusing on its application to education. (RAO)
Program listing for fault tree analysis of JPL technical report 32-1542
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
The computer program listing for the MAIN program and those subroutines unique to the fault tree analysis is described. Some subroutines are used for analyzing the reliability block diagram. The program is written in FORTRAN 5 and runs on a UNIVAC 1108.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question. 
The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees, primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program produces several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values. None of the results can be interpreted as absolute risk values, since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The risk reduction ratio (RRR) for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event. 
Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The risk increase ratio (RIR) for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
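The risk reduction and risk increase calculations can be sketched with a toy tree. The two-gate example, event names, and probabilities below are our assumptions, and SAPHIRE's actual cut-set-based computation differs in detail; the sketch only shows the definitions of the two ratios.

```python
def p_top(node, tree, p):
    """Top-event probability for independent, non-repeated basic events."""
    if node not in tree:
        return p[node]
    gate, kids = tree[node]
    qs = [p_top(k, tree, p) for k in kids]
    prod = 1.0
    if gate == "AND":
        for q in qs:
            prod *= q
        return prod
    for q in qs:                 # OR gate
        prod *= 1.0 - q
    return 1.0 - prod

def importance_ratios(event, tree, p, top="TOP"):
    """RRR = P_base / P(event made perfect);
    RIR = P(event assumed failed) / P_base."""
    base = p_top(top, tree, p)
    rrr = base / p_top(top, tree, {**p, event: 0.0})
    rir = p_top(top, tree, {**p, event: 1.0}) / base
    return rrr, rir

ft = {"TOP": ("OR", ["G1", "D"]), "G1": ("AND", ["A", "B"])}
p = {"A": 0.2, "B": 0.1, "D": 0.05}
rrr_a, rir_a = importance_ratios("A", ft, p)
```

Because both ratios divide by the same baseline, they rank basic events consistently whether the underlying probabilities are read as relative or absolute risk, which is why MSET can work with relative values.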
Fault Diagnosis from Raw Sensor Data Using Deep Neural Networks Considering Temporal Coherence.
Zhang, Ran; Peng, Zhen; Wu, Lifeng; Yao, Beibei; Guan, Yong
2017-03-09
Intelligent condition monitoring and fault diagnosis by analyzing sensor data can assure the safety of machinery. Conventional fault diagnosis and classification methods usually apply pretreatments to decrease noise and extract time domain or frequency domain features from raw time series sensor data, and then use classifiers to make the diagnosis. However, these conventional approaches rely on expertise for feature selection and do not consider the temporal coherence of time series data. This paper proposes a fault diagnosis model based on Deep Neural Networks (DNN). The model can directly recognize raw time series sensor data without feature selection or signal processing, and it takes advantage of the temporal coherence of the data. First, raw time series training data collected by sensors are used to train the DNN until its cost function reaches a minimum; second, test data are used to measure the classification accuracy of the DNN on local time series data; finally, fault diagnosis considering temporal coherence with preceding time series data is implemented. Experimental results show that the classification accuracy on bearing faults can reach 100%. The proposed fault diagnosis approach is effective in recognizing the type of bearing faults.
Direct evaluation of fault trees using object-oriented programming techniques
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1989-01-01
Object-oriented programming techniques are used in an algorithm for the direct evaluation of fault trees. The algorithm combines a simple bottom-up procedure for trees without repeated events with a top-down recursive procedure for trees with repeated events. The object-oriented approach results in a dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. An example is presented in which the results of the algorithm implemented with conventional techniques are compared with those of the object-oriented approach.
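The difficulty with repeated events, and the standard remedy, can be shown in a few lines. This is a sketch of pivotal decomposition under our own toy tree and probabilities, not the paper's object-oriented algorithm.

```python
def evaluate(node, tree, p):
    """Naive bottom-up evaluation: exact only when no basic event
    appears more than once in the tree."""
    if node not in tree:
        return p[node]
    gate, kids = tree[node]
    qs = [evaluate(k, tree, p) for k in kids]
    prod = 1.0
    if gate == "AND":
        for q in qs:
            prod *= q
        return prod
    for q in qs:                       # OR gate
        prod *= 1.0 - q
    return 1.0 - prod

def evaluate_pivoting(node, tree, p, repeated):
    """Condition on the repeated event r (pivotal decomposition):
    P(top) = p_r * P(top | r = 1) + (1 - p_r) * P(top | r = 0)."""
    pr = p[repeated]
    return (pr * evaluate(node, tree, {**p, repeated: 1.0})
            + (1.0 - pr) * evaluate(node, tree, {**p, repeated: 0.0}))

# R appears under both AND gates, so naive bottom-up overcounts.
ft = {"TOP": ("OR", ["G1", "G2"]),
      "G1": ("AND", ["R", "B"]),
      "G2": ("AND", ["R", "C"])}
p = {"R": 0.5, "B": 0.5, "C": 0.5}
naive = evaluate("TOP", ft, p)            # treats the two Rs as independent
exact = evaluate_pivoting("TOP", ft, p, "R")
```

The naive pass overestimates the top-event probability because it treats the two occurrences of R as independent; conditioning on R restores the exact answer, at the cost of the extra recursive calls the paper's algorithm works to minimize.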
NASA Astrophysics Data System (ADS)
Guns, K. A.; Bennett, R. A.; Blisniuk, K.
2017-12-01
To better evaluate the distribution and transfer of strain and slip along the Southern San Andreas Fault (SSAF) zone in the northern Coachella Valley in southern California, we integrate geological and geodetic observations to test whether strain is being transferred away from the SSAF system towards the Eastern California Shear Zone through microblock rotation of the Eastern Transverse Ranges (ETR). The faults of the ETR consist of five east-west trending left-lateral strike-slip faults that have measured cumulative offsets of up to 20 km and as low as 1 km. Existing kinematic and block models yield a variety of slip rate estimates, from as low as zero to as high as 7 mm/yr, suggesting a gap in our understanding of the role these faults play in the larger system. To determine whether present-day block rotation along these faults is contributing to strain transfer in the region, we are applying 10Be surface exposure dating methods to observed offset channel and alluvial fan deposits in order to estimate fault slip rates along two faults in the ETR. We present observations of offset geomorphic landforms using field mapping and LiDAR data at three sites along the Blue Cut Fault and one site along the Smoke Tree Wash Fault in Joshua Tree National Park which indicate recent Quaternary fault activity. Initial results of site mapping and clast count analyses reveal at least three stages of offset, including potential Holocene offsets, for one site along the Blue Cut Fault, while preliminary 10Be geochronology is in progress. This geologic slip rate data, combined with our new geodetic surface velocity field derived from updated campaign-based GPS measurements within Joshua Tree National Park, will allow us to construct a suite of elastic fault block models to elucidate rates of strain transfer away from the SSAF and how that strain transfer may be affecting the length of the interseismic period along the SSAF.
FAULT TREE ANALYSIS FOR EXPOSURE TO REFRIGERANTS USED FOR AUTOMOTIVE AIR CONDITIONING IN THE U.S.
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servic...
A Fault Tree Approach to Analysis of Organizational Communication Systems.
ERIC Educational Resources Information Center
Witkin, Belle Ruth; Stephens, Kent G.
Fault Tree Analysis (FTA) is a method of examing communication in an organization by focusing on: (1) the complex interrelationships in human systems, particularly in communication systems; (2) interactions across subsystems and system boundaries; and (3) the need to select and "prioritize" channels which will eliminate noise in the…
Applying fault tree analysis to the prevention of wrong-site surgery.
Abecassis, Zachary A; McElroy, Lisa M; Patel, Ronak M; Khorzad, Rebeca; Carroll, Charles; Mehrotra, Sanjay
2015-01-01
Wrong-site surgery (WSS) is a rare event that occurs to hundreds of patients each year. Despite national implementation of the Universal Protocol over the past decade, development of effective interventions remains a challenge. We performed a systematic review of the literature reporting root causes of WSS and used the results to perform a fault tree analysis to assess the reliability of the system in preventing WSS and identifying high-priority targets for interventions aimed at reducing WSS. Process components where a single error could result in WSS were labeled with OR gates; process aspects reinforced by verification were labeled with AND gates. The overall redundancy of the system was evaluated based on prevalence of AND gates and OR gates. In total, 37 studies described risk factors for WSS. The fault tree contains 35 faults, most of which fall into five main categories. Despite the Universal Protocol mandating patient verification, surgical site signing, and a brief time-out, a large proportion of the process relies on human transcription and verification. Fault tree analysis provides a standardized perspective of errors or faults within the system of surgical scheduling and site confirmation. It can be adapted by institutions or specialties to lead to more targeted interventions to increase redundancy and reliability within the preoperative process. Copyright © 2015 Elsevier Inc. All rights reserved.
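The redundancy argument can be made quantitative: a step guarded only by an OR gate fails when any single human error occurs, while adding an independent verification turns it into an AND gate. The error probabilities below are hypothetical, chosen only to show the size of the effect, and are not taken from the review.

```python
# Single-point step: either of two transcription errors alone causes
# the fault (OR gate, no redundancy).
p_err1, p_err2 = 0.01, 0.01
p_or = 1 - (1 - p_err1) * (1 - p_err2)

# Verified step: the error AND a failed independent check must
# coincide for the fault to propagate (AND gate with redundancy).
p_check_fails = 0.05
p_and = p_err1 * p_check_fails
```

Even a fairly unreliable independent check (failing 5% of the time) cuts the step's contribution by more than an order of magnitude, which is why the analysis prioritizes converting OR-gated process components into AND-gated ones.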
NASA Technical Reports Server (NTRS)
Bennett, Richard A.; Reilinger, Robert E.; Rodi, William; Li, Yingping; Toksoz, M. Nafi; Hudnut, Ken
1995-01-01
Coseismic surface deformation associated with the M(sub w) 6.1, April 23, 1992, Joshua Tree earthquake is well represented by estimates of geodetic monument displacements at 20 locations independently derived from Global Positioning System and trilateration measurements. The rms signal to noise ratio for these inferred displacements is 1.8, with near-fault displacement estimates exceeding 40 mm. In order to determine the long-wavelength distribution of slip over the plane of rupture, a Tikhonov regularization operator is applied to these estimates which minimizes stress variability subject to purely right-lateral slip and zero surface slip constraints. The resulting slip distribution yields a geodetic moment estimate of 1.7 x 10(exp 18) N m with corresponding maximum slip around 0.8 m and compares well with independent and complementary information including seismic moment and source time function estimates and mainshock and aftershock locations. From empirical Green's functions analyses, a rupture duration of 5 s is obtained, which implies a rupture radius of 6-8 km. Most of the inferred slip lies to the north of the hypocenter, consistent with northward rupture propagation. Stress drop estimates are in the range of 2-4 MPa. In addition, predicted Coulomb stress increases correlate remarkably well with the distribution of aftershock hypocenters; most of the aftershocks occur in areas for which the mainshock rupture produced stress increases larger than about 0.1 MPa. In contrast, predicted stress changes are near zero at the hypocenter of the M(sub w) 7.3, June 28, 1992, Landers earthquake, which nucleated about 20 km beyond the northernmost edge of the Joshua Tree rupture. Based on aftershock migrations and the predicted static stress field, we speculate that redistribution of Joshua Tree-induced stress perturbations played a role in the spatio-temporal development of the earthquake sequence culminating in the Landers event.
Langenheim, Victoria E.; Rymer, Michael J.; Catchings, Rufus D.; Goldman, Mark R.; Watt, Janet T.; Powell, Robert E.; Matti, Jonathan C.
2016-03-02
We describe high-resolution gravity and seismic refraction surveys acquired to determine the thickness of valley-fill deposits and to delineate geologic structures that might influence groundwater flow beneath the Smoke Tree Wash area in Joshua Tree National Park. These surveys identified a sedimentary basin that is fault-controlled. A profile across the Smoke Tree Wash fault zone reveals low gravity values and seismic velocities that coincide with a mapped strand of the Smoke Tree Wash fault. Modeling of the gravity data reveals a basin about 2–2.5 km long and 1 km wide that is roughly centered on this mapped strand, and bounded by inferred faults. According to the gravity model the deepest part of the basin is about 270 m, but this area coincides with low velocities that are not characteristic of typical basement complex rocks. Most likely, the density contrast assumed in the inversion is too high or the uncharacteristically low velocities represent highly fractured or weathered basement rocks, or both. A longer seismic profile extending onto basement outcrops would help differentiate which scenario is more accurate. The seismic velocities also determine the depth to water table along the profile to be about 40–60 m, consistent with water levels measured in water wells near the northern end of the profile.
A Fault Tree Approach to Needs Assessment -- An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
A "failsafe" technology is presented based on a new unified theory of needs assessment. Basically the paper discusses fault tree analysis as a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur and then suggesting high priority avoidance strategies for those…
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu
2016-05-01
Recently, China has frequently experienced large-scale, severe and persistent haze pollution due to surging urbanization and industrialization and a rapid growth in the number of motor vehicles and in energy consumption. Vehicle emissions from the consumption of large quantities of fossil fuels are undoubtedly a critical factor in haze pollution. This work focuses on the causation mechanism of haze pollution related to vehicle emissions for Guangzhou city by employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system of "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are discussed and identified by using this deductive FTA method. The qualitative and quantitative assessments of the fault tree system are carried out based on the structure, probability and critical importance degree analysis of the risk factors. The study may provide a new, simple and effective tool/strategy for the causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sanchez-Vila, X.; de Barros, F.; Bolster, D.; Nowak, W.
2010-12-01
Assessing the potential risk of hydro(geo)logical supply systems to human population is an interdisciplinary field. It relies on the expertise in fields as distant as hydrogeology, medicine, or anthropology, and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties in hydrological, physiological and human behavioral parameters. We propose the use of fault trees to address the task of probabilistic risk analysis (PRA) and to support related management decisions. Fault trees allow decomposing the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural “Divide and Conquer” approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance and stage of analysis. The separation in modules allows for a true inter- and multi-disciplinary approach. This presentation highlights the three novel features of our work: (1) we define failure in terms of risk being above a threshold value, whereas previous studies used auxiliary events such as exceedance of critical concentration levels, (2) we plot an integrated fault tree that handles uncertainty in both hydrological and health components in a unified way, and (3) we introduce a new form of stochastic fault tree that allows to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
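Defining failure as "risk above a threshold" (novel feature 1 above) lends itself to a simple Monte Carlo sketch. The three factors and their distributions below are purely illustrative stand-ins for the hydrological, behavioral and physiological modules, not the authors' model:

```python
import random

def p_risk_exceeds(threshold, n_trials=50_000, seed=1):
    """Monte Carlo estimate of P(health risk > threshold).

    Risk is the product of three uncertain factors; every distribution
    and parameter here is a hypothetical placeholder."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_trials):
        conc = rng.lognormvariate(-4.0, 1.0)    # contaminant concentration (hypothetical)
        intake = rng.uniform(1.0, 3.0) / 70.0   # intake per kg body weight (hypothetical)
        slope = rng.lognormvariate(-3.0, 0.5)   # dose-response slope factor (hypothetical)
        if conc * intake * slope > threshold:
            exceed += 1
    return exceed / n_trials
```

Because the factors are sampled jointly, correlated (non-independent) subsystems can be accommodated by sampling them from a joint distribution, which is the flexibility the stochastic fault tree is after.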
A fuzzy decision tree for fault classification.
Zio, Enrico; Baraldi, Piero; Popescu, Irina C
2008-02-01
In plant accident management, the control room operators are required to identify the causes of the accident, based on the different patterns of evolution that the monitored process variables develop. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
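The linguistic if-then rules described above can be sketched with triangular membership functions and min (AND) inference. The two rules, the fault labels, and all numeric ranges below are invented for illustration; in the work above such rules are inferred from clustered, preclassified signal data:

```python
def tri(x, a, b, c):
    # triangular fuzzy membership with support [a, c] and peak at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(temperature, flow):
    # hypothetical rules: "IF temperature is high AND flow is low THEN pump
    # degradation"; antecedents combined with min, the usual fuzzy AND
    rules = {
        "pump degradation": min(tri(temperature, 60, 80, 100), tri(flow, 0, 10, 20)),
        "normal operation": min(tri(temperature, 20, 40, 60), tri(flow, 15, 30, 45)),
    }
    return max(rules, key=rules.get), rules
```

The returned rule activations are what makes the classification transparent: an operator can read off which linguistic rule fired and how strongly.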
FTC - THE FAULT-TREE COMPILER (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits of accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree, such as a component failure rate or a specific event probability, by allowing the user to vary one failure rate or failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle.
Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
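For independent basic events, the five FTC gate types reduce to elementary probability algebra. This is only a sketch of the gate semantics, not FTC's actual solution technique (which also carries the rigorous error bounds described above); INVERT is the complement 1 - p:

```python
from itertools import combinations

def p_and(ps):
    # AND gate: all inputs fail
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(ps):
    # OR gate: at least one input fails (complement of "none fail")
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

def p_xor(p1, p2):
    # EXCLUSIVE OR gate: exactly one of two inputs fails
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)

def p_m_of_n(ps, m):
    # M OF N gate: at least m of the n inputs fail; sums over all
    # qualifying subsets, so it also handles unequal probabilities
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for idx in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= ps[i] if i in idx else (1.0 - ps[i])
            total += term
    return total
```

Nesting these functions bottom-up through the tree yields the top-event probability for any tree of independent basic events.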
FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits of accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree, such as a component failure rate or a specific event probability, by allowing the user to vary one failure rate or failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle.
Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis Smith; James Knudsen
As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation, or in this case the failure, of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and how they are linked into the event tree.
The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude their use on real-world problems.
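A stripped-down numerical sketch of the N-out-of-M success criteria chained across phases is shown below. It assumes identical, independent thruster assemblies and, crucially, independence between phases, which is exactly the simplification the benchmark's "dynamic" character challenges (the same hardware persists from phase to phase); the probabilities and criteria are hypothetical:

```python
from math import comb

def k_of_n_reliability(p, k, n):
    # probability that at least k of n identical, independent units survive
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

def mission_reliability(phases):
    # phases: list of (unit_survival_prob, k_required, n_units) tuples;
    # multiplying phase reliabilities ignores shared hardware between
    # phases, so this is an approximation, not the benchmark solution
    r = 1.0
    for p, k, n in phases:
        r *= k_of_n_reliability(p, k, n)
    return r
```

For example, a mission needing 3 of 5 assemblies in one phase and 2 of 5 in the next would be `mission_reliability([(0.95, 3, 5), (0.95, 2, 5)])` under these assumptions.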
A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…
An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution
NASA Astrophysics Data System (ADS)
Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan
2013-04-01
The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies, including smoothed seismicity approaches. Smoothed seismicity represents an alternative concept for expressing the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subduction zones. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: The first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density). The second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude, assuming that (1) the occurrence of past seismicity is a good proxy to forecast the occurrence of future seismicity and (2) future large-magnitude events are more likely to occur in the vicinity of known faults.
Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts and provides rates of events in the magnitude range of 5 <= m <= 8.5 for the entire region of interest and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
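The per-bin earthquake rates implied by a truncated Gutenberg-Richter law, as used above, can be tabulated directly from the a- and b-values. This is a generic sketch of the rate calculation only (the SHARE model additionally distributes these rates in space via the two kernel-smoothed densities):

```python
def truncated_gr_rates(a, b, m_min, m_max, dm=0.1):
    # incremental event rates per magnitude bin [m, m+dm) from a
    # Gutenberg-Richter law N(>=m) = 10**(a - b*m), truncated at m_max;
    # the truncation constant cancels in each bin difference
    rates = []
    m = m_min
    while m < m_max - 1e-9:
        n_lo = 10.0 ** (a - b * m)
        n_hi = 10.0 ** (a - b * min(m + dm, m_max))
        rates.append((round(m, 2), n_lo - n_hi))
        m += dm
    return rates
```

The bin rates telescope, so the total rate over [m_min, m_max) equals 10**(a - b*m_min) - 10**(a - b*m_max), a useful consistency check.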
The engine fuel system fault analysis
NASA Astrophysics Data System (ADS)
Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei
2017-05-01
To improve the reliability of the engine fuel system, typical fault factors of the system were analyzed from the points of view of structure and function. The fault characteristics were obtained by building the fuel system fault tree. By applying the failure mode and effects analysis (FMEA) method, several factors of the key component, the fuel regulator, were obtained, including the fault modes, fault causes, and fault influences. All of this lays the foundation for the subsequent development of a fault diagnosis system.
Fault tree analysis: NiH2 aerospace cells for LEO mission
NASA Technical Reports Server (NTRS)
Klein, Glenn C.; Rash, Donald E., Jr.
1992-01-01
The Fault Tree Analysis (FTA) is one of several reliability analyses or assessments applied to battery cells to be utilized in typical Electric Power Subsystems for spacecraft in low Earth orbit missions. FTA is generally the process of reviewing and analytically examining a system or equipment in such a way as to emphasize the lower level fault occurrences which directly or indirectly contribute to the major fault or top level event. This qualitative FTA addresses the potential of occurrence for five specific top level events: hydrogen leakage through either discrete leakage paths or through pressure vessel rupture; and four distinct modes of performance degradation - high charge voltage, suppressed discharge voltage, loss of capacity, and high pressure.
Modular techniques for dynamic fault-tree analysis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Dugan, Joanne B.
1992-01-01
It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the reliability and soft errors of a system-on-chip, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
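The quantities named above (failure rate, unavailability, MTTF) are related by the standard constant-failure-rate (exponential) model. A minimal sketch of those textbook relations, not of the Isograph tool's calculations:

```python
from math import exp

def reliability(lam, t):
    # probability the component survives to time t: R(t) = exp(-lambda * t)
    return exp(-lam * t)

def mttf(lam):
    # mean time to failure for a constant hazard rate lambda
    return 1.0 / lam

def steady_state_unavailability(lam, mu):
    # long-run fraction of time a repairable unit is down,
    # with failure rate lambda and repair rate mu: lambda / (lambda + mu)
    return lam / (lam + mu)
```

For example, a block with a failure rate of 1e-3 per hour has an MTTF of 1000 hours under this model.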
Learning from examples - Generation and evaluation of decision trees for software resource analysis
NASA Technical Reports Server (NTRS)
Selby, Richard W.; Porter, Adam A.
1988-01-01
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
Eble, C.F.; Greb, S.F.; Williams, D.A.; Hower, J.C.
1999-01-01
Eight bench-column samples of the Western Kentucky Number 4 coal bed, collected from an area along the southern margin of the Western Kentucky Coal Field, were analyzed palynologically, petrographically, and geochemically to document both temporal and spatial variability among these parameters. The Western Kentucky Number 4 coal occurs near the top of the Tradewater Formation, is of Early Desmoinesian age, and is correlative with the lower part of the Allegheny Formation of the Appalachian Basin, and Late Bolsovian strata of western Europe. Palynologically, the coal is co-dominated by spores that were produced by lycopod trees (Lycospora and Granasporites medius) and tree ferns. Thin-walled tree fern spores (Punctatisporites minutus, P. minutus, P. rotundus) are more abundant than thick-walled forms (Laevigatosporites globosus, P. granifer). Calamitean spores (Calamospora and Laevigatosporites spp.) are locally abundant as is cordaitean pollen (Florinites). Small fern (Granulatisporites) and small lycopod spores (Densosporites, Cirratriradites, Endosporites and Anacanthotriletes spinosus) are present, but occur in minor amounts. Temporal changes in palynomorph composition occur, but are not uniform between columns. Spatial variability among columns is also evident. Petrographically, the coal is dominated by vitrinite macerals, with telinite and telocollinite generally occurring more commonly than desmocollinite and gelocollinite. Basal benches typically contain high percentages of vitrinite; middle benches usually contain higher percentages of liptinite and inertinite. In about half the studied columns, the terminal coal benches show a slight increase in vitrinite. In the study area, the petrography of the Western Kentucky Number 4 coal is more uniform than the palynology. Ash yields and total sulfur contents are temporally uniform in some columns, but variable in others. 
In the latter case, higher percentages of ash and sulfur occur at the base of the bed and decrease up to the middle of the bed. The terminal benches of these columns often, but not always, show slight increases in ash or sulfur. Both syngenetic and epigenetic forms of sulfur are present in the Western Kentucky Number 4 coal. The high vitrinite contents and moderate to high sulfur contents suggest that the Western Kentucky Number 4 paleomire was mainly planar and rheotrophic throughout its developmental history. Groundwaters carrying dissolved solutes may have helped neutralize the normally acidic interstitial peat waters allowing for the production of sulfide minerals. Several of the columns with high sulfur contents at the base of the bed occur in faulted areas. The faults could have promoted the flow of groundwaters through the peat, providing an increased dissolved load for acid mitigation and sulfide formation. The concentration of sulfur at the base of the bed may be a function of the peat/underclay contact enhancing sulfide formation. The clay layer may also have acted as an impermeable boundary for downward moving groundwaters, causing mainly lateral, rather than vertical movement along the base of the coal bed.
NASA Astrophysics Data System (ADS)
Hampel, Andrea; Hetzel, Ralf
2013-04-01
The friction coefficient is a key parameter for the slip evolution of faults, but how temporal changes in friction affect fault slip is still poorly known. By using three-dimensional numerical models with a thrust fault that is alternately locked and released, we show that variations in the friction coefficient affect both coseismic and long-term fault slip (Hampel and Hetzel, 2012). Decreasing the friction coefficient by 5% while keeping the duration of the interseismic phase constant leads to a four-fold increase in coseismic slip, whereas a 5% increase nearly suppresses slip. A gradual decrease or increase of friction over several earthquake cycles (1-5% per earthquake) considerably alters the cumulative fault slip. In nature, the slip deficit (surplus) resulting from variations in the friction coefficient would presumably be compensated by a longer (shorter) interseismic phase, but the magnitude of the changes required for compensation renders variations of the friction coefficient of >5% unlikely. Reference: Hampel, A., R. Hetzel (2012) Temporal variation in fault friction and its effects on the slip evolution of a thrust fault over several earthquake cycles. Terra Nova, 24, 357-362, doi: 10.1111/j.1365-3121.2012.01073.x.
Decision tree and PCA-based fault diagnosis of rotating machinery
NASA Astrophysics Data System (ADS)
Sun, Weixiang; Chen, Jin; Li, Jiaqing
2007-04-01
After analysing the flaws of conventional fault diagnosis methods, data mining technology is introduced to the fault diagnosis field, and a new method based on the C4.5 decision tree and principal component analysis (PCA) is proposed. In this method, PCA is used to reduce features after data collection, preprocessing and feature extraction. Then, C4.5 is trained on the samples to generate a decision tree model with diagnosis knowledge. Finally, the tree model is used to perform diagnosis analysis. To validate the proposed method, six kinds of running states (normal or without any defect, unbalance, rotor radial rub, oil whirl, shaft crack, and a simultaneous state of unbalance and radial rub) are simulated on a Bently Rotor Kit RK4 to compare the C4.5- and PCA-based method with a back-propagation neural network (BPNN). The result shows that the C4.5- and PCA-based diagnosis method has higher accuracy and needs less training time than BPNN.
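At the heart of C4.5's tree growing is an entropy-based split criterion. A minimal sketch of information gain for a numeric threshold split is shown below (C4.5 proper uses the gain ratio, a normalized variant, and many other refinements; the tiny dataset in the usage note is invented):

```python
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy of a class-label list, in bits
    if not labels:
        return 0.0
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(values, labels, threshold):
    # information gained by splitting samples at value <= threshold
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder
```

For instance, splitting feature values [1, 2, 3, 4] with labels [0, 0, 1, 1] at threshold 2.5 separates the classes perfectly, so the gain equals the full 1 bit of label entropy.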
On the implementation of faults in finite-element glacial isostatic adjustment models
NASA Astrophysics Data System (ADS)
Steffen, Rebekka; Wu, Patrick; Steffen, Holger; Eaton, David W.
2014-01-01
Stresses induced in the crust and mantle by continental-scale ice sheets during glaciation have triggered earthquakes along pre-existing faults, commencing near the end of the deglaciation. In order to get a better understanding of the relationship between glacial loading/unloading and fault movement due to the spatio-temporal evolution of stresses, a commonly used model for glacial isostatic adjustment (GIA) is extended by including a fault structure. Solving this problem is enabled by development of a workflow involving three cascaded finite-element simulations. Each step has identical lithospheric and mantle structure and properties, but evolving stress conditions along the fault. The purpose of the first simulation is to compute the spatio-temporal evolution of rebound stress when the fault is tied together. An ice load with a parabolic profile and simple ice history is applied to represent glacial loading of the Laurentide Ice Sheet. The results of the first step describe the evolution of the stress and displacement induced by the rebound process. The second step in the procedure augments the results of the first, by computing the spatio-temporal evolution of total stress (i.e. rebound stress plus tectonic background stress and overburden pressure) and displacement with reaction forces that can hold the model in equilibrium. The background stress is estimated by assuming that the fault is in frictional equilibrium before glaciation. The third step simulates fault movement induced by the spatio-temporal evolution of total stress by evaluating fault stability in a subroutine. If the fault remains stable, no movement occurs; in case of fault instability, the fault displacement is computed. We show an example of fault motion along a 45°-dipping fault at the ice-sheet centre for a two-dimensional model. Stable conditions along the fault are found during glaciation and the initial part of deglaciation. 
Before deglaciation ends, the fault starts to move, and fault offsets of up to 22 m are obtained. A fault scarp of 19.74 m at the surface is determined. The fault is stable in the following time steps, with a high stress accumulation at the fault tip. Along the upper part of the fault, GIA stresses are released in one earthquake.
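The stability check in the third simulation step rests on the Mohr-Coulomb failure criterion. A minimal sketch of that criterion only, not of the authors' finite-element subroutine (the stress values in the test are arbitrary):

```python
def coulomb_failure_stress(shear_stress, normal_stress, friction, cohesion=0.0):
    # Mohr-Coulomb criterion: a positive value means the resolved shear
    # stress exceeds frictional resistance on the plane, i.e. slip occurs
    return shear_stress - (cohesion + friction * normal_stress)

def fault_is_stable(shear_stress, normal_stress, friction, cohesion=0.0):
    # stable while shear stress stays at or below the frictional strength
    return coulomb_failure_stress(shear_stress, normal_stress, friction, cohesion) <= 0.0
```

In the GIA workflow above, glacial loading and unloading perturb both the shear and effective normal stress terms, which is why stability flips from stable to unstable late in deglaciation.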
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)
1995-01-01
A system for data compression utilizing a systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched cases. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed, with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.
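A binary tree-searched VQ encodes a vector in O(log N) distance comparisons instead of the N comparisons of a full search, descending to whichever child codeword is nearer. The sequential sketch below shows that search logic only; the disclosed system maps one tree level per systolic processing element, which this sketch does not model:

```python
def btsvq_encode(vector, tree):
    # tree node: (codeword, left_child, right_child); leaves have no children.
    # Returns the bit path taken and the leaf codeword selected.
    path = []
    node = tree
    while node[1] is not None:
        left, right = node[1], node[2]
        dl = sum((a - b) ** 2 for a, b in zip(vector, left[0]))
        dr = sum((a - b) ** 2 for a, b in zip(vector, right[0]))
        node, bit = (left, 0) if dl <= dr else (right, 1)
        path.append(bit)
    return path, node[0]
```

The bit path itself is the transmitted code: a depth-d tree yields a d-bit index per input vector.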
Fault tree analysis for system modeling in case of intentional EMI
NASA Astrophysics Data System (ADS)
Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.
2011-08-01
The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other hand increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically, since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is the fault tree analysis (FTA). The FTA is used to determine the system failure probability and also the main contributors to its failure. In this paper the fault tree analysis is introduced and a possible application of the method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
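Identifying the "main contributors" to system failure, as described above, is commonly done with importance measures over the minimal cut sets. A generic sketch (not this paper's network model) using the Birnbaum importance, i.e. the sensitivity of the top-event probability to a basic event; the exact OR-over-cut-sets formula below assumes independent basic events that appear in at most one cut set:

```python
def top_event_probability(cut_sets, p):
    # top event = OR over minimal cut sets; each cut set = AND of basic events
    prod = 1.0
    for cs in cut_sets:
        q = 1.0
        for event in cs:
            q *= p[event]
        prod *= 1.0 - q
    return 1.0 - prod

def birnbaum_importance(cut_sets, p, event):
    # dP(top)/dp_event, computed as P(top | event=1) - P(top | event=0)
    hi = dict(p, **{event: 1.0})
    lo = dict(p, **{event: 0.0})
    return top_event_probability(cut_sets, hi) - top_event_probability(cut_sets, lo)
```

Ranking events by this importance directly answers which component most strongly drives the IEMI-induced failure probability.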
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of the several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is modeled as a truncated normal distribution characterized by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of the logic tree, is used in order to capture the uncertainty in seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. The branches of the logic tree, analyzed through the Monte Carlo approach, are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying each of the fault parameters while fixing the others.
However, in this study we do not investigate the sensitivity of mean hazard results to the choice of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient-of-variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
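The truncated-normal sampling of one logic tree branch can be sketched as follows. The fault length statistics and the magnitude-length scaling constants here are illustrative assumptions, not values from the study:

```python
# Monte Carlo sampling of an uncertain fault parameter (fault length)
# drawn from a truncated normal, mapped to a characteristic magnitude.
# All numeric values are illustrative.
import math
import random

def truncated_normal(mean, sd, lo, hi, rng):
    """Rejection-sample a normal value truncated to [lo, hi]."""
    while True:
        x = rng.gauss(mean, sd)
        if lo <= x <= hi:
            return x

def sample_mmax(rng, n=200):
    """Draw n characteristic magnitudes from an uncertain fault length."""
    mags = []
    for _ in range(n):
        # fault length (km): one truncated-normal branch of the logic tree
        length_km = truncated_normal(30.0, 5.0, 15.0, 45.0, rng)
        # illustrative magnitude-length scaling: M = a + b * log10(L)
        mags.append(5.08 + 1.16 * math.log10(length_km))
    return mags

rng = random.Random(42)
mags = sample_mmax(rng)
print(min(mags), max(mags))
```

Repeating such draws for every branch (length, width, dip, slip rate, Mmax) and re-running the hazard integral per draw is what produces the uncertainty and sensitivity maps described above.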
Li, Jia; Wang, Deming; Huang, Zonghou
2017-01-01
Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents. PMID:28793348
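The point-wise propagation of trapezoidal fuzzy probabilities through an OR gate can be sketched as follows. This simplification uses plain (not intuitionistic) trapezoidal numbers, a simple average for defuzzification, and invented event probabilities:

```python
# Trapezoidal fuzzy probabilities (a, b, c, d) pushed through an OR gate
# point-wise via P = 1 - prod(1 - p_i). Values are illustrative.

def fuzzy_or(events):
    """OR-gate over trapezoidal fuzzy probabilities (4-tuples)."""
    out = [1.0, 1.0, 1.0, 1.0]
    for ev in events:
        for k in range(4):
            out[k] *= (1.0 - ev[k])
    return tuple(1.0 - v for v in out)

def defuzzify(t):
    """Simple average defuzzification of a trapezoid."""
    return sum(t) / 4.0

# Hypothetical basic events of a dust-explosion subtree:
ignition_source = (0.01, 0.02, 0.03, 0.04)
dust_cloud      = (0.05, 0.06, 0.07, 0.08)
top = fuzzy_or([ignition_source, dust_cloud])
print(top, round(defuzzify(top), 6))
```

The same point-wise rule with multiplication instead of complement-products gives the AND gate, which is how a full fuzzy fault tree is evaluated bottom-up.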
Shi, Lei; Shuai, Jian; Xu, Kui
2014-08-15
Fire and explosion accidents of steel oil storage tanks (FEASOST) occur occasionally during the petroleum and chemical industry production and storage processes and often have devastating impact on lives, the environment and property. To contribute towards the development of a quantitative approach for assessing the occurrence probability of FEASOST, a fault tree of FEASOST is constructed that identifies various potential causes. Traditional fault tree analysis (FTA) can achieve quantitative evaluation if the failure data of all of the basic events (BEs) are available, which is almost impossible due to the lack of detailed data, as well as other uncertainties. This paper makes an attempt to perform FTA of FEASOST by a hybrid application between an expert elicitation based improved analysis hierarchy process (AHP) and fuzzy set theory, and the occurrence possibility of FEASOST is estimated for an oil depot in China. A comparison between statistical data and calculated data using fuzzy fault tree analysis (FFTA) based on traditional and improved AHP is also made. Sensitivity and importance analysis has been performed to identify the most crucial BEs leading to FEASOST that will provide insights into how managers should focus effective mitigation. Copyright © 2014 Elsevier B.V. All rights reserved.
Wang, Hetang; Li, Jia; Wang, Deming; Huang, Zonghou
2017-01-01
Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents.
Unraveling the dynamics of magmatic CO2 degassing at Mammoth Mountain, California
NASA Astrophysics Data System (ADS)
Peiffer, Loïc; Wanner, Christoph; Lewicki, Jennifer L.
2018-02-01
The accumulation of magmatic CO2 beneath low-permeability barriers may lead to the formation of CO2-rich gas reservoirs within volcanic systems. Such accumulation is often evidenced by high surface CO2 emissions that fluctuate over time. The temporal variability in surface degassing is believed in part to reflect a complex interplay between deep magmatic degassing and the permeability of degassing pathways. A better understanding of the dynamics of CO2 degassing is required to improve monitoring and hazards mitigation in these systems. Owing to the availability of long-term records of CO2 emission rates and seismicity, Mammoth Mountain in California constitutes an ideal site towards such predictive understanding. Mammoth Mountain is characterized by intense soil CO2 degassing (up to ∼1000 t d^-1) and tree kill areas that resulted from leakage of CO2 from a CO2-rich gas reservoir located in the upper ∼4 km. The release of CO2-rich fluids from deeper basaltic intrusions towards the reservoir induces seismicity and potentially reactivates faults connecting the reservoir to the surface. While this conceptual model is well accepted, there is still a debate about whether temporally variable surface CO2 fluxes directly reflect degassing of intrusions or variations in fault permeability. Here, we report the first large-scale numerical model of fluid and heat transport for Mammoth Mountain. We discuss processes (i) leading to the initial formation of the CO2-rich gas reservoir prior to the occurrence of high surface CO2 degassing rates and (ii) controlling current CO2 degassing at the surface. Although the modeling settings are site-specific, the key mechanisms discussed in this study are likely at play at other volcanic systems hosting CO2-rich gas reservoirs.
In particular, our model results illustrate the role of convection in stripping a CO2-rich gas phase from a rising hydrothermal fluid and leading to an accumulation of a large mass of CO2 (∼10^7-10^8 t) in a shallow gas reservoir. Moreover, we show that both short-lived (months to years) and long-lived (hundreds of years) events of magmatic fluid injection can lead to critical pressures within the reservoir and potentially trigger fault reactivation. Our sensitivity analysis suggests that observed temporal fluctuations in surface degassing are only indirectly controlled by variations in magmatic degassing and are mainly the result of temporally variable fault permeability. Finally, we suggest that long-term CO2 emission monitoring, seismic tomography and coupled thermal-hydraulic-mechanical modeling are important for CO2-related hazard mitigation.
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
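HARP's pipeline ends in solving a Markov chain for reliability. As a minimal stand-in for HARP's solver (not its actual method), the sketch below integrates the three-state chain of a two-unit parallel system and checks it against the closed form R(t) = 2e^(-λt) - e^(-2λt); the failure rate is illustrative:

```python
# Forward-Euler integration of a small Markov reliability model:
# states are "both units up", "one up", "system failed" (absorbing),
# each unit failing at rate lam. Values are illustrative.
import math

def reliability(lam, t, steps=100000):
    p2, p1, p0 = 1.0, 0.0, 0.0        # both up / one up / failed
    dt = t / steps
    for _ in range(steps):
        d2 = -2 * lam * p2            # either unit can fail first
        d1 = 2 * lam * p2 - lam * p1  # inflow from p2, outflow to p0
        d0 = lam * p1
        p2 += d2 * dt
        p1 += d1 * dt
        p0 += d0 * dt
    return p2 + p1                    # probability the system is still up

lam, t = 0.001, 1000.0
num = reliability(lam, t)
ana = 2 * math.exp(-lam * t) - math.exp(-2 * lam * t)
print(round(num, 4), round(ana, 4))
```

Sequence dependencies enter such models simply as extra states and transitions, which is why the Markov form can capture behavior a static fault tree cannot.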
Fault tree analysis for urban flooding.
ten Veldhuis, J A E; Clemens, F H L R; van Gelder, P H A J M
2009-01-01
Traditional methods to evaluate flood risk generally focus on heavy storm events as the principal cause of flooding. Conversely, fault tree analysis is a technique that aims at modelling all potential causes of flooding. It quantifies both overall flood probability and relative contributions of individual causes of flooding. This paper presents a fault model for urban flooding and an application to the case of Haarlem, a city of 147,000 inhabitants. Data from a complaint register, rainfall gauges and hydrodynamic model calculations are used to quantify probabilities of basic events in the fault tree. This results in a flood probability of 0.78/week for Haarlem. It is shown that gully pot blockages contribute to 79% of flood incidents, whereas storm events contribute only 5%. This implies that for this case more efficient gully pot cleaning is a more effective strategy to reduce flood probability than enlarging drainage system capacity. Whether this is also the most cost-effective strategy can only be decided after risk assessment has been complemented with a quantification of consequences of both types of events. To do this will be the next step in this study.
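The quantification step, overall flood frequency and per-cause shares from basic-event frequencies, can be sketched as below. The per-week frequencies are invented for illustration; the paper derives its values from complaint registers, rain gauges and hydrodynamic model calculations:

```python
# Overall flood frequency and relative cause contributions from
# basic-event frequencies (illustrative values, not the Haarlem data).

causes = {                          # incidents per week (illustrative)
    "gully_pot_blockage": 0.60,
    "sewer_blockage":     0.08,
    "pump_failure":       0.04,
    "storm_exceedance":   0.04,
}

total = sum(causes.values())        # overall flood frequency per week
shares = {name: freq / total for name, freq in causes.items()}
print(f"flood frequency: {total:.2f}/week")
for name, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {100 * share:.0f}%")
```

Ranking the shares is what supports the paper's management conclusion: the dominant contributor, not the most dramatic one, is where mitigation effort pays off first.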
NASA Astrophysics Data System (ADS)
Koji, Yusuke; Kitamura, Yoshinobu; Kato, Yoshikiyo; Tsutsui, Yoshio; Mizoguchi, Riichiro
In conceptual design, it is important to develop functional structures that reflect the rich experience embodied in knowledge from previous design failures. In particular, if a designer learns the possible abnormal behaviors from a previous design failure, he or she can add a function that prevents such abnormal behaviors and faults. To do this, sharing knowledge about possible faulty phenomena and how to cope with them is a crucial issue. In practice, part of such knowledge is described in FMEA (Failure Mode and Effect Analysis) sheets, in function structure models for systematic design, and in fault trees for FTA (Fault Tree Analysis).
Failure analysis of energy storage spring in automobile composite brake chamber
NASA Astrophysics Data System (ADS)
Luo, Zai; Wei, Qing; Hu, Xiaofeng
2015-02-01
This paper takes the energy storage spring of the parking brake cavity, part of the automobile composite brake chamber, as its research object, and constructs a fault tree model of energy-storage-spring-induced parking brake failure based on the fault tree analysis method. Next, the parking brake failure model of the energy storage spring is established by analyzing the working principle of the composite brake chamber. Finally, the working load and push rod stroke data measured by a comprehensive valve test bed are used to validate the failure model. The experimental results show that the failure model can distinguish whether the energy storage spring is faulty.
A fast bottom-up algorithm for computing the cut sets of noncoherent fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corynen, G.C.
1987-11-01
An efficient procedure for finding the cut sets of large fault trees has been developed. Designed to address coherent or noncoherent systems, dependent events, and shared or common-cause events, the method - called SHORTCUT - is based on a fast algorithm for transforming a noncoherent tree into a quasi-coherent tree (COHERE), and on a new algorithm for reducing cut sets (SUBSET). To assure sufficient clarity and precision, the procedure is discussed in the language of simple sets, which is also developed in this report. Although the new method has not yet been fully implemented on the computer, we report theoretical worst-case estimates of its computational complexity. 12 refs., 10 figs.
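A bottom-up cut-set computation with subset absorption, in the spirit of the COHERE/SUBSET division of labor (restricted here to coherent trees, with no negation handling), might look like:

```python
# Bottom-up minimal cut sets with subset absorption. A sketch of the
# generic technique, not SHORTCUT itself; coherent trees only.

def cut_sets(node):
    """Return minimal cut sets (frozensets of basic events) of a node."""
    if isinstance(node, str):
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                   # union of the children's cut sets
        sets = [cs for group in child_sets for cs in group]
    else:                              # AND: cross-product union of sets
        sets = [frozenset()]
        for group in child_sets:
            sets = [a | b for a in sets for b in group]
    return absorb(sets)

def absorb(sets):
    """Drop any cut set that contains another (the reduction step)."""
    minimal = []
    for s in sorted(set(sets), key=len):
        if not any(m <= s for m in minimal):
            minimal.append(s)
    return minimal

tree = ("OR", ["A", ("AND", ["A", "B"]), ("AND", ["B", "C"])])
print(sorted(sorted(s) for s in cut_sets(tree)))  # [['A'], ['B', 'C']]
```

Absorbing after every gate, rather than once at the top, is what keeps the intermediate set lists from exploding on large trees.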
Electromagnetic Compatibility (EMC) in Microelectronics.
1983-02-01
Recoverable fragments of the scanned report include two references: "Fault Tree Analysis", System Safety Symposium, June 8-9, 1965, Seattle: The Boeing Company; and Fussell, J.B., "Fault Tree Analysis-Concepts and..." The surviving contents outline a procedure for assessing EMC in microelectronics: 2.1 Background; 2.2 The Probabilistic Nature of EMC; 2.3 The Probabilistic Approach; 2.4 The Compatibility Factor; 3 Applying Probabilistic...
A graphical language for reliability model generation
NASA Technical Reports Server (NTRS)
Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.
1990-01-01
A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
A Hybrid Spatio-Temporal Data Indexing Method for Trajectory Databases
Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting
2014-01-01
In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors continuously producing huge volumes of trajectory data that has been used in many fields such as location-based services or location intelligence. Trajectory data has grown massively and become semantically complex, which poses a great challenge for spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising spatio-temporal R-tree, B*-tree and Hash table. To improve the index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into spatio-temporal R-tree as leaf nodes. Hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in some aspects such as generation efficiency, query performance and query type. PMID:25051028
A hybrid spatio-temporal data indexing method for trajectory databases.
Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting
2014-07-21
In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors continuously producing huge volumes of trajectory data that has been used in many fields such as location-based services or location intelligence. Trajectory data has grown massively and become semantically complex, which poses a great challenge for spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising spatio-temporal R-tree, B*-tree and Hash table. To improve the index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into spatio-temporal R-tree as leaf nodes. Hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in some aspects such as generation efficiency, query performance and query type.
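The node-grouping idea behind HBSTR-tree ingestion, packing consecutive trajectory points into one leaf node until a spatio-temporal gap is exceeded, can be sketched as follows; the thresholds and track data are illustrative, and the R-tree, B*-tree and hash-table stages are omitted:

```python
# Group consecutive trajectory points into candidate leaf nodes by a
# spatio-temporal gap criterion. Thresholds and data are illustrative.

def group_points(points, max_gap_t=60.0, max_gap_d=100.0):
    """points: list of (t, x, y); returns a list of node point-lists."""
    nodes, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        dt = cur[0] - prev[0]
        dd = ((cur[1] - prev[1]) ** 2 + (cur[2] - prev[2]) ** 2) ** 0.5
        if dt > max_gap_t or dd > max_gap_d:
            nodes.append(current)     # gap too large: close this node
            current = []
        current.append(cur)
    nodes.append(current)
    return nodes

track = [(0, 0, 0), (10, 5, 5), (20, 8, 9), (500, 8, 9), (510, 10, 10)]
nodes = group_points(track)
print([len(n) for n in nodes])  # the 480 s gap splits the trajectory
```

Inserting whole nodes instead of points is what cuts the R-tree insertion frequency that the paper identifies as the generation-efficiency bottleneck.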
NASA Astrophysics Data System (ADS)
Davarpanah, A.; Babaie, H. A.
2012-12-01
The interaction of the thermally induced stress field of the Yellowstone hotspot (YHS) with existing Basin and Range (BR) fault blocks, over the past 17 m.y., has produced a new, spatially and temporally variable system of normal faults around the Snake River Plain (SRP) in Idaho and the Wyoming-Montana area. Data about the traces of these new cross faults (CF) and older BR normal faults were acquired from a combination of satellite imagery, DEMs, and USGS geological maps and databases at scales of 1:24,000, 1:100,000, 1:250,000, 1:1,000,000, and 1:2,500,000, and classified based on their azimuth in ArcGIS 10. The box-counting fractal dimension (Db) of the BR fault traces, determined applying the Benoit software, and the anisotropy intensity (ellipticity) of the fractal dimensions, measured with the modified Cantor dust method applying the AMOCADO software, were measured in two large spatial domains (I and II). The Db and anisotropy of the cross faults were studied in five temporal domains (T1-T5) classified based on the geologic age of successive eruptive centers (12 Ma to recent) of the YHS along the eastern SRP. The fractal anisotropy of the CF system in each temporal domain was also spatially determined in the southern part (domain S1), central part (domain S2), and northern part (domain S3) of the SRP. Line (fault trace) density maps for the BR and CF polylines reveal a higher linear density (trace length per unit area) for the BR traces in spatial domain I, and a higher linear density of the CF traces around the present Yellowstone National Park (S1T5), where most of the seismically active faults are located. Our spatio-temporal analysis reveals that the fractal dimension of the BR system in domain I (Db=1.423) is greater than that in domain II (Db=1.307).
It also shows that the anisotropy of the fractal dimension in domain I is less eccentric (axial ratio: 1.242) than that in domain II (1.355), probably reflecting the greater variation in the trend of the BR system in domain I. The CF system in the S1T5 domain has the highest fractal dimension (Db=1.37) and the lowest anisotropy eccentricity (1.23) among the five temporal domains. These values positively correlate with the observed maxima on the fault trace density maps. The major axis of the anisotropy ellipses is consistently perpendicular to the average trend of the normal fault system in each domain, and therefore approximates the orientation of extension for normal faulting in each domain. This gives a NE-SW and NW-SE extension direction for the BR system in domains I and II, respectively. The observed NE-SW orientation of the major axes of the anisotropy ellipses in the youngest T4 and T5 temporal domains, perpendicular to the mean trend of the normal faults in these domains, suggests extension along the NE-SW direction for cross faulting in these areas. The spatial trajectories (form lines) of the minor axes of the anisotropy ellipses, and the mean trend of fault traces in the T4 and T5 temporal domains, define a large parabolic pattern about the axis of the eastern SRP, with its apex at the Yellowstone plateau.
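The box-counting estimate of Db used in such studies can be sketched as follows: count occupied boxes N(s) at several box sizes s and take the least-squares slope of log N versus log(1/s). The trace here is a straight line, so the estimate should come out near 1 (this is a generic sketch, not the Benoit software's implementation):

```python
# Box-counting fractal dimension of a set of fault-trace points.
# The straight-line "trace" is illustrative test data.
import math

def box_count(points, size):
    """Number of grid boxes of the given size occupied by the points."""
    return len({(int(x / size), int(y / size)) for x, y in points})

def box_dimension(points, sizes):
    """Least-squares slope of log N(s) against log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

trace = [(i / 1000.0, i / 1000.0) for i in range(1000)]  # straight line
db = box_dimension(trace, sizes=[0.1, 0.05, 0.025, 0.0125])
print(round(db, 2))
```

A more space-filling, branching fault network occupies relatively more boxes at small sizes and therefore yields a slope, and a Db, above 1, as in the values reported for domains I and II.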
NASA Astrophysics Data System (ADS)
Wu, Jianing; Yan, Shaoze; Xie, Liyang
2011-12-01
To address the impact of solar array anomalies, it is important to analyze solar array reliability. This paper establishes fault tree analysis (FTA) and fuzzy reasoning Petri net (FRPN) models of a solar array mechanical system and analyzes reliability to find mechanisms of solar array faults. The indices of final truth degree (FTD) and cosine matching function (CMF) are employed to resolve the issue of how to evaluate the importance and influence of different faults. Thus, an improved reliability analysis method is developed by means of sorting FTD and CMF. An example is analyzed using the proposed method. The analysis results show that the harsh thermal environment and impact caused by particles in space are the most vital causes of solar array faults. Furthermore, other fault modes and the corresponding improvement methods are discussed. The results reported in this paper could be useful for spacecraft designers, particularly in the process of redesigning the solar array and scheduling its reliability growth plan.
Seera, Manjeevan; Lim, Chee Peng; Ishak, Dahaman; Singh, Harapajan
2012-01-01
In this paper, a novel approach to detect and classify comprehensive fault conditions of induction motors using a hybrid fuzzy min-max (FMM) neural network and classification and regression tree (CART) is proposed. The hybrid model, known as FMM-CART, exploits the advantages of both FMM and CART for undertaking data classification and rule extraction problems. A series of real experiments is conducted, whereby the motor current signature analysis method is applied to form a database comprising stator current signatures under different motor conditions. The signal harmonics from the power spectral density are extracted as discriminative input features for fault detection and classification with FMM-CART. A comprehensive list of induction motor fault conditions, viz., broken rotor bars, unbalanced voltages, stator winding faults, and eccentricity problems, has been successfully classified using FMM-CART with good accuracy rates. The results are comparable, if not better, than those reported in the literature. Useful explanatory rules in the form of a decision tree are also elicited from FMM-CART to analyze and understand different fault conditions of induction motors.
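The FMM half of the hybrid relies on hyperbox membership functions; a sketch of Simpson's formulation follows, with illustrative hyperbox bounds and sensitivity, and with the CART stage omitted:

```python
# Fuzzy min-max hyperbox membership (Simpson's ramp formulation).
# The hyperbox, sensitivity gamma and test points are illustrative.

def membership(x, v, w, gamma=4.0):
    """Membership of point x in hyperbox [v, w], averaged over dims."""
    n = len(x)
    total = 0.0
    for i in range(n):
        # penalty for exceeding the max point w, ramped by gamma
        total += max(0.0, 1.0 - max(0.0, gamma * min(1.0, x[i] - w[i])))
        # penalty for falling below the min point v
        total += max(0.0, 1.0 - max(0.0, gamma * min(1.0, v[i] - x[i])))
    return total / (2 * n)

box_v, box_w = [0.2, 0.2], [0.4, 0.4]        # a "healthy motor" hyperbox
print(membership([0.3, 0.3], box_v, box_w))  # inside the box
print(membership([0.5, 0.3], box_v, box_w))  # slightly outside
```

Points inside a class hyperbox score 1 and membership decays linearly outside it; classification picks the hyperbox with the highest score, after which CART can refine decisions and extract rules.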
Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan
2014-03-01
Although reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in fault diagnosis of an induction motor when appropriate pre-processing is not performed. Therefore, a new boundary-analysis-based feature extraction method in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of only one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bars and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset. © 2013 ISA. Published by ISA. All rights reserved.
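The reconstructed-phase-space step, time-delay embedding of a single phase current signal, can be sketched as follows; the delay and embedding dimension are illustrative choices, and the image-conversion and fuzzy-tree stages are omitted:

```python
# Time-delay embedding of a 1-D signal into a reconstructed phase space.
# Delay and dimension are illustrative; a synthetic current stands in
# for a measured phase current.
import math

def embed(signal, delay, dim):
    """Vectors (s[t], s[t+d], ..., s[t+(dim-1)*d]) for all valid t."""
    n = len(signal) - (dim - 1) * delay
    return [tuple(signal[t + k * delay] for k in range(dim))
            for t in range(n)]

current = [math.sin(2 * math.pi * i / 50) for i in range(200)]
vectors = embed(current, delay=12, dim=2)
print(len(vectors), vectors[0])
```

For a healthy sinusoidal current the 2-D embedding traces a closed loop; fault harmonics distort that loop's boundary, which is what the boundary-detection stage then quantifies.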
NASA Astrophysics Data System (ADS)
Sun, Y.; Luo, G.
2017-12-01
Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for us to investigate these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore effects of topographic loading and viscosity of middle-lower crust and upper mantle on model results. Model results show that earthquakes and fault interactions increase Coulomb stress on the neighboring faults or segments, accelerating the future earthquakes in this region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults, which are far apart, may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model synthetic seismic catalog and paleoseismic data, we analyze probability of earthquake migration between major faults in northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.
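The Coulomb stress transfer invoked here is commonly summarized as ΔCFS = Δτ + μ'Δσn (shear stress change plus effective friction times normal stress change, unclamping positive); a sketch with illustrative stress values:

```python
# Coulomb failure stress change on a receiver fault. The effective
# friction coefficient and stress changes (MPa) are illustrative.

def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Positive dCFS moves the receiver fault closer to failure (MPa)."""
    return d_tau + mu_eff * d_sigma_n

# A hypothetical mainshock loads a neighboring fault segment:
d_cfs = coulomb_stress_change(d_tau=0.05, d_sigma_n=0.02)
print(round(d_cfs, 3))  # 0.058 MPa of added Coulomb stress
```

Summing such increments over successive earthquakes, as the finite-element model does in full tensor form, is what advances neighboring segments toward the critical state and produces the clustering described above.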
Fault detection in digital and analog circuits using an i(DD) temporal analysis technique
NASA Technical Reports Server (NTRS)
Beasley, J.; Magallanes, D.; Vridhagiri, A.; Ramamurthy, Hema; Deyong, Mark
1993-01-01
An i(DD) temporal analysis technique is presented which detects defects (faults) and fabrication variations in both digital and analog ICs by pulsing the power supply rails and analyzing the temporal data obtained from the resulting transient rail currents. A simple bias voltage is required on all inputs to excite the defects. Data from hardware tests supporting this technique are presented.
The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters
NASA Technical Reports Server (NTRS)
Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)
1998-01-01
We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while the price of tree uplinks is expensive.
3D Modelling of Seismically Active Parts of Underground Faults via Seismic Data Mining
NASA Astrophysics Data System (ADS)
Frantzeskakis, Theofanis; Konstantaras, Anthony
2015-04-01
During the last few years rapid steps have been taken towards drilling for oil in the western Mediterranean sea. Since most of the countries in the region benefit mainly from tourism and considering that the Mediterranean is a closed sea only replenishing its water once every ninety years careful measures are being taken to ensure safe drilling. In that concept this research work attempts to derive a three dimensional model of the seismically active parts of the underlying underground faults in areas of petroleum interest. For that purpose seismic spatio-temporal clustering has been applied to seismic data to identify potential distinct seismic regions in the area of interest. Results have been coalesced with two dimensional maps of underground faults from past surveys and seismic epicentres, having followed careful reallocation processing, have been used to provide information regarding the vertical extent of multiple underground faults in the region of interest. The end product is a three dimensional map of the possible underground location and extent of the seismically active parts of underground faults. Indexing terms: underground faults modelling, seismic data mining, 3D visualisation, active seismic source mapping, seismic hazard evaluation, dangerous phenomena modelling Acknowledgment This research work is supported by the ESPA Operational Programme, Education and Life Long Learning, Students Practical Placement Initiative. References [1] Alves, T.M., Kokinou, E. and Zodiatis, G.: 'A three-step model to assess shoreline and offshore susceptibility to oil spills: The South Aegean (Crete) as an analogue for confined marine basins', Marine Pollution Bulletin, In Press, 2014 [2] Ciappa, A., Costabile, S.: 'Oil spill hazard assessment using a reverse trajectory method for the Egadi marine protected area (Central Mediterranean Sea)', Marine Pollution Bulletin, vol. 84 (1-2), pp. 
44-55, 2014 [3] Ganas, A., Karastathis, V., Moshou, A., Valkaniotis, S., Mouzakiotis, E. and Papathanassiou, G.: 'Aftershock relocation and frequency-size distribution, stress inversion and seismotectonic setting of the 7 August 2013 M=5.4 earthquake in Kallidromon Mountain, central Greece', Tectonophysics, vol. 617, pp. 101-113, 2014 [4] Maravelakis, E., Bilalis, N., Mantzorou, I., Konstantaras, A. and Antoniadis, A.: '3D modelling of the oldest olive tree of the world', International Journal Of Computational Engineering Research, vol. 2 (2), pp. 340-347, 2012 [5] Konstantaras, A., Katsifarakis, E., Maravelakis, E., Skounakis, E., Kokkinos, E. and Karapidakis, E.: 'Intelligent spatial-clustering of seismicity in the vicinity of the Hellenic seismic arc', Earth Science Research, vol. 1 (2), pp. 1-10, 2012 [6] Georgoulas, G., Konstantaras, A., Katsifarakis, E., Stylios, C., Maravelakis, E. and Vachtsevanos, G.: '"Seismic-mass" density-based algorithm for spatio-temporal clustering', Expert Systems with Applications, vol. 40 (10), pp. 4183-4189, 2013 [7] Konstantaras, A.: 'Classification of Distinct Seismic Regions and Regional Temporal Modelling of Seismicity in the Vicinity of the Hellenic Seismic Arc', IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 99, pp. 1-7, 2013
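The seismic spatio-temporal clustering step can be sketched with a simple threshold-linking rule (a much-simplified stand-in for the density-based algorithms cited above; the catalogue values are invented):

```python
# Threshold-linking spatio-temporal clustering of epicentres: an event
# joins the cluster of the first earlier event within both a distance
# and a time window. Thresholds and events are illustrative.

def cluster(events, max_km=50.0, max_days=30.0):
    """events: list of (day, x_km, y_km); returns cluster labels."""
    labels = [-1] * len(events)
    next_label = 0
    for i, (ti, xi, yi) in enumerate(events):
        for j in range(i):
            tj, xj, yj = events[j]
            d = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
            if abs(ti - tj) <= max_days and d <= max_km:
                labels[i] = labels[j]
                break
        if labels[i] == -1:          # no nearby earlier event: new cluster
            labels[i] = next_label
            next_label += 1
    return labels

quakes = [(0, 0, 0), (5, 10, 10), (8, 20, 5), (200, 300, 300), (210, 310, 305)]
print(cluster(quakes))  # two distinct seismic regions
```

Once distinct clusters are separated, relocated hypocentre depths within each cluster are what constrain the vertical extent of the seismically active fault parts in the 3D model.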
Unraveling the dynamics of magmatic CO2 degassing at Mammoth Mountain, California
Pfeiffer, Loic; Wanner, Christoph; Lewicki, Jennifer L.
2018-01-01
The accumulation of magmatic CO2 beneath low-permeability barriers may lead to the formation of CO2-rich gas reservoirs within volcanic systems. Such accumulation is often evidenced by high surface CO2 emissions that fluctuate over time. The temporal variability in surface degassing is believed to reflect, in part, a complex interplay between deep magmatic degassing and the permeability of degassing pathways. A better understanding of the dynamics of CO2 degassing is required to improve monitoring and hazard mitigation in these systems. Owing to the availability of long-term records of CO2 emission rates and seismicity, Mammoth Mountain in California constitutes an ideal site for developing such a predictive understanding. Mammoth Mountain is characterized by intense soil CO2 degassing (up to ∼1000 t d−1) and tree-kill areas that resulted from leakage of CO2 from a CO2-rich gas reservoir located in the upper ∼4 km. The release of CO2-rich fluids from deeper basaltic intrusions towards the reservoir induces seismicity and potentially reactivates faults connecting the reservoir to the surface. While this conceptual model is well accepted, there is still debate over whether temporally variable surface CO2 fluxes directly reflect degassing of intrusions or variations in fault permeability. Here, we report the first large-scale numerical model of fluid and heat transport for Mammoth Mountain. We discuss the processes (i) leading to the initial formation of the CO2-rich gas reservoir prior to the occurrence of high surface CO2 degassing rates and (ii) controlling current CO2 degassing at the surface. Although the modeling settings are site-specific, the key mechanisms discussed in this study are likely at play at other volcanic systems hosting CO2-rich gas reservoirs.
In particular, our model results illustrate the role of convection in stripping a CO2-rich gas phase from a rising hydrothermal fluid, leading to the accumulation of a large mass of CO2 (∼10^7–10^8 t) in a shallow gas reservoir. Moreover, we show that both short-lived (months to years) and long-lived (hundreds of years) events of magmatic fluid injection can lead to critical pressures within the reservoir and potentially trigger fault reactivation. Our sensitivity analysis suggests that the observed temporal fluctuations in surface degassing are only indirectly controlled by variations in magmatic degassing and are mainly the result of temporally variable fault permeability. Finally, we suggest that long-term CO2 emission monitoring, seismic tomography, and coupled thermal–hydraulic–mechanical modeling are important for CO2-related hazard mitigation.
The 1992 Landers earthquake sequence; seismological observations
Hauksson, Egill; Jones, Lucile M.; Hutton, Kate; Eberhart-Phillips, Donna
1993-01-01
The 1992 Landers earthquake sequence (MW 6.1, 7.3, 6.2) began on April 23 with the MW6.1 Joshua Tree preshock and forms the most substantial earthquake sequence to occur in California in the last 40 years. This sequence ruptured almost 100 km of both surficial and concealed faults and caused aftershocks over an area 100 km wide by 180 km long. The faulting was predominantly strike-slip, and the three main events in the sequence had unilateral rupture to the north, away from the San Andreas fault. The MW6.1 Joshua Tree preshock, at 33°58′N and 116°19′W at 0451 UT on April 23, was preceded by a tightly clustered foreshock sequence (M≤4.6) beginning 2 hours before the mainshock and was followed by a large aftershock sequence with more than 6000 aftershocks. The aftershocks extended along a northerly trend from about 10 km north of the San Andreas fault, northwest of Indio, to the east-striking Pinto Mountain fault. The Mw7.3 Landers mainshock occurred at 34°13′N and 116°26′W at 1158 UT on June 28, 1992, and was preceded for 12 hours by 25 small M≤3 earthquakes at the mainshock epicenter. The distribution of more than 20,000 aftershocks, analyzed in this study, and short-period focal mechanisms illuminate a complex sequence of faulting. The aftershocks extend 60 km to the north of the mainshock epicenter along a system of at least five different surficial faults, and 40 km to the south, crossing the Pinto Mountain fault through the Joshua Tree aftershock zone towards the San Andreas fault near Indio. The rupture initiated in the depth range of 3–6 km, similar to previous M∼5 earthquakes in the region, although the maximum depth of aftershocks is about 15 km. The mainshock focal mechanism showed right-lateral strike-slip faulting with a strike of N10°W on an almost vertical fault. The rupture formed an arclike zone well defined by both surficial faulting and aftershocks, with more westerly faulting to the north.
This change in strike is accomplished by jumping across dilational jogs connecting surficial faults with strikes rotated progressively to the west. A 20-km-long linear cluster of aftershocks occurred 10–20 km north of Barstow, or 30–40 km north of the end of the mainshock rupture. The most prominent off-fault aftershock cluster occurred 30 km to the west of the Landers mainshock. The largest aftershock, the Mw6.2 Big Bear aftershock, was within this cluster, occurring at 34°10′N and 116°49′W at 1505 UT on June 28. It exhibited left-lateral strike-slip faulting on a northeast-striking and steeply dipping plane. The Big Bear aftershocks form a linear trend extending 20 km to the northeast, with a scattered distribution to the north. The Landers mainshock occurred near the southernmost extent of the Eastern California Shear Zone, an 80-km-wide, more than 400-km-long zone of deformation. This zone extends into the Death Valley region and accommodates about 10 to 20% of the plate motion between the Pacific and North American plates. The Joshua Tree preshock, its aftershocks, and the Landers aftershocks form a previously missing link that connects the Eastern California Shear Zone to the southern San Andreas fault.
NASA Astrophysics Data System (ADS)
Sayab, Mohammad; Khan, Muhammad Asif
2010-10-01
Detailed rupture-fracture analyses of some well-studied earthquakes have revealed that the geometrical arrangement of secondary faults and fractures can be used as a geological tool to understand the temporal evolution of slip produced during the mainshock. The surface rupture of the October 8, 2005 Mw 7.6 Kashmir earthquake, NW Himalaya, provides an opportunity to study a complex network of secondary fractures developed on the hanging wall of the fault scarp. The main fault scarp is clearly thrust-type, the rupture length is ~75 ± 5 km, and the overall trend of the rupture is NW-SE. We present the results of our detailed structural mapping of secondary faults and fractures at 1:100 scale on the hanging wall of the southern end of the rupture in the vicinity of the Sar Pain. Secondary ruptures can be broadly classified into two main types: (1) normal faults and (2) right-lateral strike-slip 'Riedel' fractures. The secondary normal faults are NW-SE striking, with a maximum of 3.3 m of vertical displacement and 2.5 m of horizontal displacement. The estimated total horizontal extension across the secondary normal faults is 3.1-3.5%. We propose that bending-moment and coseismic stress relaxation can explain the formation of secondary normal faults on the hanging wall of the thrust fault. The strike-slip 'Riedel' fractures form distinct sets of tension (T) and shear fractures (R', R, Y) with right-lateral displacement. Field observations revealed that the 'Riedel' fractures (T) cut the secondary normal faults. In addition, there is kinematic incompatibility and a magnitude mismatch between the secondary normal faults and the strike-slip 'Riedel' fractures. The cross-cutting relationships, together with the geometric and magnitude incoherence, imply a temporal evolution of slip from dip-slip to strike-slip during the mainshock faulting. This interpretation is consistent with the thrust fault plane solution with a minor right-lateral strike-slip component.
Reliability analysis of the solar array based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Jianing, Wu; Shaoze, Yan
2011-07-01
The solar array is an important device used on spacecraft, which influences the quality of in-orbit operation of the spacecraft and even the launch. This paper analyzes the reliability of the mechanical system and identifies the most critical subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra, and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most critical links between the solar arrays. By analyzing the structural importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is important for fault prevention. Furthermore, recommendations for improving reliability associated with damage limitation are discussed, which can be used in the redesign of the solar array and in reliability growth planning.
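The gate logic behind obtaining a top-event probability by Boolean algebra, as this abstract describes, can be sketched in a few lines. This is a minimal illustration, not the paper's actual tree: the component names (hinge, locking spring, seal) echo the abstract, but the gate structure and all probabilities are assumed values.

```python
# Hypothetical fault tree for a solar-array deployment chain. Assumed logic:
# the top event "deployment failure" occurs if either hinge fails (OR gate),
# and a hinge fails only if both its locking spring and its seal fail (AND).

def and_gate(*probs):
    """Probability that ALL independent basic events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """Probability that AT LEAST ONE independent basic event occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Basic-event probabilities (illustrative, not from the paper)
p_spring = 1e-3   # insufficient locking-spring torque
p_seal   = 5e-4   # seal fault
p_hinge  = and_gate(p_spring, p_seal)   # one hinge fails
p_top    = or_gate(p_hinge, p_hinge)    # either of two identical hinges fails

print(f"P(hinge failure) = {p_hinge:.2e}")
print(f"P(top event)     = {p_top:.2e}")
```

The same two gate functions compose to evaluate any coherent fault tree once its logical expression is written out.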
Fault tree safety analysis of a large Li/SOCl2 spacecraft battery
NASA Technical Reports Server (NTRS)
Uy, O. Manuel; Maurer, R. H.
1987-01-01
The results of the safety fault tree analysis of the eight-module, 576 F-cell Li/SOCl2 battery on the spacecraft, and in the integration and test environment on the ground prior to launch, are presented. The analysis showed that with the right combination of blocking diodes, electrical fuses, thermal fuses, thermal switches, cell balance, cell vents, and battery module vents, the probability of a single cell or a 72-cell module exploding can be reduced to 0.000001, essentially the probability of explosion for unexplained reasons.
NASA Astrophysics Data System (ADS)
Arai, H.; Ando, R.; Aoki, Y.
2017-12-01
The 2016 Kumamoto earthquake sequence struck SW Japan from April 14th to 16th and included two M6-class foreshocks and the mainshock (Mw 7.0). Importantly, the detailed surface displacement caused solely by the two foreshocks could be captured by a SAR observation isolated from the mainshock deformation. The foreshocks ruptured the previously mapped Hinagu fault, and their hypocentral locations and the aftershock distribution indicate the involvement of two different subparallel faults. We therefore assumed that the 1st and 2nd foreshocks respectively ruptured each of the subparallel faults (faults A and B). One of the interesting points of this earthquake is that the two major foreshocks had a temporal gap of 2.5 hours even though faults A and B are quite close to each other. This suggests that the stress perturbation due to the 1st foreshock was not large enough to trigger the 2nd one immediately, but was large enough to bring about the subsequent earthquake after a delay. We aim to reproduce the foreshock sequence, such as the rupture jumping across the subparallel faults, using dynamic rupture simulations. We employed a spatiotemporal boundary integral equation method accelerated by the Fast Domain Partitioning Method (Ando, 2016, GJI), since this method allows us to construct a complex fault geometry in 3D media. Our model has two faults and a free ground surface. We conducted rupture simulations with various sets of parameters to identify the optimal condition describing the observation. Our simulation results are roughly categorized into 3 cases with regard to the criticality for rupture jumping. In case 1 (supercritical), faults A and B ruptured consecutively without any temporal gap. In case 2 (nearly critical), the rupture on fault B started with a temporal gap after fault A finished rupturing, which is what we expected as a reproduction.
In case 3 (subcritical), only fault A ruptured and its rupture did not transfer to fault B. We succeeded in reproducing rupture jumping across the two faults with a temporal gap due to nucleation, by taking into account a velocity-strengthening (direct) effect. With a detailed analysis of case 2, we can strictly constrain the ranges of parameters, and this gives us deeper insight into the physics underlying the delayed foreshock activity.
NASA Astrophysics Data System (ADS)
Qu, F.; Lu, Z.; Kim, J. W.
2017-12-01
Growth faults are common and continue to evolve throughout the unconsolidated sediments of the Greater Houston (GH) region in Texas. The presence of faults can induce localized surface displacements, aggravate localized subsidence, and disrupt the integrity of groundwater flow. Property damage due to fault creep has become more evident during the past few years over the GH area, underscoring the necessity of further study of these faults. Interferometric synthetic aperture radar (InSAR) has been proven effective in mapping creep along and/or across faults. However, extracting a short-wavelength, small-amplitude creep signal (about 10-20 mm/year) from long time span interferograms is extremely difficult, especially in agricultural or vegetated areas. This paper aims to map and monitor the latest rate, extent, and temporal evolution of faulting at the highest spatial density over the GH region using an improved multi-temporal InSAR (MTI) technique. The method, with maximized usable signal and correlation, has the ability to identify and monitor active faults and to provide an accurate and detailed image of the faults. In this study, two neighboring ALOS tracks and Sentinel-1A datasets are used. Many zones of steep phase gradients and/or discontinuities have been recognized in the long-term velocity maps from both ALOS (2007-2011) and Sentinel-1A (2015-2017) imagery. Not only the positions of previously known faults but also new fault traces that have not been mapped by other techniques are imaged by our MTI technique. Fault damage and visible ground cracking were evident at most locations in our field survey. The newly activated faults, or faults that have migrated from earlier locations, are part of the Big Barn Fault and Conroe fault system, trending from southwest to northeast between Hockley and Conroe. The area of subsidence over GH is also shrinking and migrating toward the northeast (Montgomery County) after 2000.
Continuous groundwater withdrawal from the Jasper aquifer has formed new water-level decline cones over Montgomery County that closely reflect the intensity of new fault activity; this new fault activation, or fault migration, appears to be related to excessive groundwater exploitation from Montgomery County aquifers.
NASA Astrophysics Data System (ADS)
Zhao, P.; Peng, Z.
2008-12-01
We systematically identify repeating earthquakes and investigate spatio-temporal variations of fault zone properties associated with the 2004 Mw6.0 Parkfield earthquake along the Parkfield section of the San Andreas fault, and the 1984 Mw6.2 Morgan Hill earthquake along the central Calaveras fault. The procedure for identifying repeating earthquakes is based on overlap of the source regions and waveform similarity, and is briefly described as follows. First, we estimate the source radius of each event based on a circular crack model and a nominal stress drop of 3 MPa. Next, we compute the inter-hypocentral distance for events listed in the relocated catalog of Thurber et al. (2006) around Parkfield, and of Schaff et al. (2002) along the Calaveras fault. Then, we group all events into 'initial' clusters by requiring the separation distance between each event pair to be less than the source radius of the larger event, and their magnitude difference to be less than 1. Next, we calculate the correlation coefficients between every event pair within each 'initial' cluster using a 3-s time window around the direct P waves for all available stations. The median value of the correlation coefficients is used as a measure of similarity between each event pair. We drop an event if its median similarity to the remaining events in that cluster is less than 0.9. After identifying repeating clusters in both regions, our next step is to apply a sliding-window waveform cross-correlation technique (Niu et al., 2003; Peng and Ben-Zion, 2006) to calculate the delay time and decorrelation index for each repeating cluster. By measuring temporal changes in the waveforms of repeating clusters at different locations and depths, we hope to obtain a better constraint on spatio-temporal variations of fault zone properties and near-surface layers associated with the occurrence of major earthquakes.
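The grouping step described above (source radius from a circular crack model, pairing by separation distance and magnitude difference) can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the Hanks-Kanamori moment-magnitude relation is assumed for the moment, the catalog events are invented, and the waveform-correlation filter (median cross-correlation ≥ 0.9) is omitted because it needs actual seismograms.

```python
import math

def source_radius_m(mag, stress_drop_pa=3e6):
    """Circular crack model: M0 = (16/7) * stress_drop * r^3."""
    m0 = 10.0 ** (1.5 * mag + 9.1)   # seismic moment in N*m (Hanks-Kanamori)
    return (7.0 * m0 / (16.0 * stress_drop_pa)) ** (1.0 / 3.0)

def overlapping(a, b):
    """'Initial cluster' pair criterion: hypocentral separation less than the
    source radius of the larger event, and magnitude difference < 1."""
    (xa, ya, za, ma), (xb, yb, zb, mb) = a, b
    d = math.dist((xa, ya, za), (xb, yb, zb))
    return d < source_radius_m(max(ma, mb)) and abs(ma - mb) < 1.0

def initial_clusters(events):
    """Single-linkage grouping of (x, y, z in metres, magnitude) events."""
    labels = list(range(len(events)))
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            if overlapping(events[i], events[j]):
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    clusters = {}
    for idx, lab in enumerate(labels):
        clusters.setdefault(lab, []).append(idx)
    return [c for c in clusters.values() if len(c) > 1]

events = [(0.0, 0.0, 5000.0, 2.0),
          (10.0, 0.0, 5000.0, 2.1),    # within the ~60 m source radius: repeats
          (5000.0, 0.0, 5000.0, 2.0)]  # far away: not grouped
print(initial_clusters(events))   # → [[0, 1]]
```

In a real workflow the surviving clusters would then be screened with the 3-s P-wave window correlation described in the abstract.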
NASA Astrophysics Data System (ADS)
LI, Y.; Yang, S. H.
2017-05-01
The Antarctic astronomical telescopes operate year-round at the unattended South Pole and can be serviced only once per year. Due to the complexity of their optical, mechanical, and electrical systems, the telescopes are difficult to maintain and require multi-skilled expedition teams, which means exceptional attention to reliability is essential for the Antarctic telescopes. Based on the fault mechanisms and fault modes of the main-axis control system of the equatorial Antarctic astronomical telescope AST3-3 (Antarctic Schmidt Telescopes 3-3), this article introduces the method of fault tree analysis, and we obtain the importance degree of the top event from the structural importance degrees of the bottom events. From these results, hidden problems and weak links can be effectively identified, which indicates directions for improving the stability of the system and optimizing its design.
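The structural importance measure mentioned in this abstract ranks bottom events purely by the tree's logic, before any probabilities are known. A minimal sketch, with an invented three-event tree rather than the AST3-3 control-system tree, is:

```python
from itertools import product

def structural_importance(phi, n):
    """Structural importance of each of n basic events for a structure
    function phi(state) -> 1 if the top event occurs (1 = event occurred).
    I_i = fraction of states of the OTHER events in which event i is
    critical, i.e. its occurrence alone toggles the top event."""
    imp = []
    for i in range(n):
        critical = 0
        for state in product((0, 1), repeat=n - 1):
            s = list(state)
            s_fail = s[:i] + [1] + s[i:]   # event i occurred
            s_ok   = s[:i] + [0] + s[i:]   # event i did not occur
            if phi(s_fail) == 1 and phi(s_ok) == 0:
                critical += 1
        imp.append(critical / 2 ** (n - 1))
    return imp

# Illustrative tree: top event = A OR (B AND C)
phi = lambda s: 1 if s[0] == 1 or (s[1] == 1 and s[2] == 1) else 0
print(structural_importance(phi, 3))   # → [0.75, 0.25, 0.25]
```

Event A, sitting directly under the OR gate, dominates: it is critical in three of the four states of (B, C).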
Fault tree analysis of most common rolling bearing tribological failures
NASA Astrophysics Data System (ADS)
Vencl, Aleksandar; Gašić, Vlada; Stojanović, Blaža
2017-02-01
Wear as a tribological process has a major influence on the reliability and life of rolling bearings. Field examinations of bearing failures due to wear indicate possible causes and point to the measures necessary for wear reduction or elimination. Wear itself is a very complex process initiated by the action of different mechanisms, and can be manifested as different wear types which are often related. However, the dominant type of wear can be approximately determined. The paper presents a classification of the most common bearing damages according to the dominant wear type, i.e. abrasive wear, adhesive wear, surface fatigue wear, erosive wear, fretting wear, and corrosive wear. The wear types are correlated with the terms used in the ISO 15243 standard. Each wear type is illustrated with an appropriate photograph, and for each wear type an appropriate description of causes and manifestations is presented. The possible causes of rolling bearing failure are used for fault tree analysis (FTA), performed to determine the root causes of bearing failures. The constructed fault tree diagram for rolling bearing failure can be a useful tool for maintenance engineers.
Renjith, V R; Madhu, G; Nayagam, V Lakshmana Gomathi; Bhasi, A B
2010-11-15
The hazards associated with major accident hazard (MAH) industries are fire, explosion, and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards related to chemical industries. Fault tree analysis (FTA) is an established technique in hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from the storage and filling facility of a chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and a proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation. Copyright © 2010 Elsevier B.V. All rights reserved.
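The core idea of fuzzy FTA, propagating expert-elicited probabilities expressed as fuzzy numbers through the gates, can be sketched with triangular fuzzy numbers. This is a simplified illustration, not the paper's TDFFTA method: the basic events and all (low, mode, high) estimates are invented, and componentwise arithmetic is exact here only because both gate formulas are monotone increasing in each argument.

```python
# Each basic-event probability is a triangular fuzzy number (low, mode, high),
# as might be elicited from experts. Gate operations are applied componentwise.

def fuzzy_and(*events):
    out = [1.0, 1.0, 1.0]
    for e in events:
        out = [a * b for a, b in zip(out, e)]
    return tuple(out)

def fuzzy_or(*events):
    out = [1.0, 1.0, 1.0]
    for e in events:
        out = [a * (1.0 - b) for a, b in zip(out, e)]
    return tuple(1.0 - c for c in out)

# Assumed expert estimates for hypothetical chlorine-release basic events
valve_leak   = (1e-4, 5e-4, 1e-3)
gasket_fail  = (2e-4, 4e-4, 8e-4)
relief_stuck = (1e-3, 2e-3, 5e-3)

# Hypothetical top event: (valve leak OR gasket failure) AND relief valve stuck
top = fuzzy_and(fuzzy_or(valve_leak, gasket_fail), relief_stuck)
print(top)
```

The spread between the low and high components of `top` carries the elicitation uncertainty through to the top event instead of collapsing it to a single point estimate.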
Orogen-scale uplift in the central Italian Apennines drives episodic behaviour of earthquake faults
Cowie, P. A.; Phillips, R. J.; Roberts, G. P.; McCaffrey, K.; Zijerveld, L. J. J.; Gregory, L. C.; Faure Walker, J.; Wedmore, L. N. J.; Dunai, T. J.; Binnie, S. A.; Freeman, S. P. H. T.; Wilcken, K.; Shanks, R. P.; Huismans, R. S.; Papanikolaou, I.; Michetti, A. M.; Wilkinson, M.
2017-01-01
Many areas of the Earth's crust deform by distributed extensional faulting and complex fault interactions are often observed. Geodetic data generally indicate a simpler picture of continuum deformation over decades but relating this behaviour to earthquake occurrence over centuries, given numerous potentially active faults, remains a global problem in hazard assessment. We address this challenge for an array of seismogenic faults in the central Italian Apennines, where crustal extension and devastating earthquakes occur in response to regional surface uplift. We constrain fault slip-rates since ~18 ka using variations in cosmogenic 36Cl measured on bedrock scarps, mapped using LiDAR and ground penetrating radar, and compare these rates to those inferred from geodesy. The 36Cl data reveal that individual faults typically accumulate meters of displacement relatively rapidly over several thousand years, separated by similar length time intervals when slip-rates are much lower, and activity shifts between faults across strike. Our rates agree with continuum deformation rates when averaged over long spatial or temporal scales (10^4 yr; 10^2 km) but over shorter timescales most of the deformation may be accommodated by <30% of the across-strike fault array. We attribute the shifts in activity to temporal variations in the mechanical work of faulting. PMID:28322311
Development and validation of techniques for improving software dependability
NASA Technical Reports Server (NTRS)
Knight, John C.
1992-01-01
A collection of document abstracts are presented on the topic of improving software dependability through NASA grant NAG-1-1123. Specific topics include: modeling of error detection; software inspection; test cases; Magnetic Stereotaxis System safety specifications and fault trees; and injection of synthetic faults into software.
Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie
2006-01-01
A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk-driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exists. The approach is scalable, allowing inclusion of additional information as detailed data becomes available.
The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
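The Monte Carlo, phase-by-phase evaluation described above can be illustrated with a toy mission model. This is not the SAFE tool: the phase names, durations, and exponential failure rates are all invented for the sketch, and a real analysis would draw rates from heritage reliability data.

```python
import random

random.seed(42)

# (phase name, duration in minutes, failure rates per minute of the
# subsystems that must survive that phase) -- illustrative values only
PHASES = [
    ("ascent",  10.0, [1e-4, 5e-5]),   # e.g. engines, avionics
    ("orbit", 1000.0, [1e-6]),         # e.g. power system
]

def one_mission():
    """Sample one mission: fail if any subsystem's exponential failure
    time falls inside its phase duration."""
    for _, duration, rates in PHASES:
        for lam in rates:
            if random.expovariate(lam) < duration:
                return False
    return True

N = 100_000
successes = sum(one_mission() for _ in range(N))
print(f"Estimated mission reliability: {successes / N:.4f}")
```

For this static example the analytic answer is exp(-(1e-4 + 5e-5) * 10 - 1e-6 * 1000) ≈ 0.9975; the value of the sampling approach is that time-dependent rates, phase-dependent configurations, and dynamic failure interactions can be added inside `one_mission` without rebuilding a static tree.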
Spatial and temporal seismic velocity changes on Kyushu Island during the 2016 Kumamoto earthquake
Nimiya, Hiro; Ikeda, Tatsunori; Tsuji, Takeshi
2017-01-01
Monitoring of earthquake faults and volcanoes contributes to our understanding of their dynamic mechanisms and to our ability to predict future earthquakes and volcanic activity. We report here on spatial and temporal variations of seismic velocity around the seismogenic fault of the 2016 Kumamoto earthquake [moment magnitude (Mw) 7.0] based on ambient seismic noise. Seismic velocity near the rupture faults and Aso volcano decreased during the earthquake. The velocity reduction near the faults may have been due to formation damage, a change in stress state, and an increase in pore pressure. Further, we mapped the post-earthquake fault-healing process. The largest seismic velocity reduction observed at Aso volcano during the earthquake was likely caused by pressurized volcanic fluids, and the large increase in seismic velocity at the volcano’s magma body observed ~3 months after the earthquake may have been a response to depressurization caused by the eruption. This study demonstrates the usefulness of continuous monitoring of faults and volcanoes. PMID:29202026
Spatiotemporal earthquake clusters along the North Anatolian fault zone offshore Istanbul
Bulut, Fatih; Ellsworth, William L.; Bohnhoff, Marco; Aktar, Mustafa; Dresen, Georg
2011-01-01
We investigate earthquakes with similar waveforms in order to characterize spatiotemporal microseismicity clusters within the North Anatolian fault zone (NAFZ) in northwest Turkey along the transition between the 1999 İzmit rupture zone and the Marmara Sea seismic gap. Earthquakes within distinct activity clusters are relocated with cross-correlation derived relative travel times using the double difference method. The spatiotemporal distribution of microearthquakes within individual clusters is resolved with relative location accuracy comparable to or better than the source size. High-precision relative hypocenters define the geometry of individual fault patches, permitting a better understanding of fault kinematics and their role in local-scale seismotectonics along the region of interest. Temporal seismic sequences observed in the eastern Sea of Marmara region suggest progressive failure of mostly nonoverlapping areas on adjacent fault patches and systematic migration of microearthquakes within clusters during the progressive failure of neighboring fault patches. The temporal distributions of magnitudes as well as the number of events follow swarmlike behavior rather than a mainshock/aftershock pattern.
Survey of critical failure events in on-chip interconnect by fault tree analysis
NASA Astrophysics Data System (ADS)
Yokogawa, Shinji; Kunii, Kyousuke
2018-07-01
In this paper, a framework based on reliability physics is proposed for adopting fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the possibilities of failure on basic events, critical issues of on-chip interconnect reliability will be evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting the on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The moisture impact is evaluated as an external event.
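The minimal cut sets central to this analysis can be found by a top-down, MOCUS-style expansion of the tree. A minimal sketch follows; the interconnect failure events named here ("EM" for electromigration, "SIV" for stress-induced voiding, "MOIST" for moisture ingress, "CRACK" for barrier crack) are hypothetical placeholders, not the paper's actual tree.

```python
def cut_sets(node):
    """node is either a basic event (str) or a gate ('AND'|'OR', [children])."""
    if isinstance(node, str):
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":
        # OR: union of the children's cut sets
        return [cs for sets in child_sets for cs in sets]
    # AND: cartesian combination of one cut set per child
    combined = [frozenset()]
    for sets in child_sets:
        combined = [a | b for a in combined for b in sets]
    return combined

def minimize(sets):
    """Keep only minimal cut sets: drop any set with a proper subset present."""
    return sorted(
        (s for s in set(sets) if not any(o < s for o in sets)),
        key=lambda s: (len(s), sorted(s)))

tree = ("OR", ["EM",
               ("AND", ["MOIST", "CRACK"]),
               ("AND", ["MOIST", "CRACK", "SIV"])])   # non-minimal, gets pruned

for cs in minimize(cut_sets(tree)):
    print(sorted(cs))
# → ['EM']
#   ['CRACK', 'MOIST']
```

Ranking the surviving minimal cut sets by order (number of basic events) and by elicited event likelihood is what yields the "minimal cut sets with high risk priority" the abstract refers to.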
NASA Astrophysics Data System (ADS)
Ma, Q.; Su, Y.; Tao, S.; Guo, Q.
2016-12-01
Trees in the Sierra Nevada (SN) forests are experiencing rapid changes due to human disturbances and climatic change. Improved monitoring of tree growth and understanding of how tree growth responds to different impact factors, such as tree competition, forest density, and topographic and hydrologic conditions, are urgently needed in tree growth modeling. Traditional tree growth modeling relied mainly on field surveys, which are highly time-consuming and labor-intensive. The Airborne Light Detection and Ranging System (ALS) is increasingly used in forest surveys due to its high efficiency and accuracy in three-dimensional tree structure delineation and terrain characterization. This study successfully detected individual tree growth in height (ΔH), crown area (ΔA), and crown volume (ΔV) over a five-year period (2007-2012) using bi-temporal ALS data in two conifer forest areas of the SN. We further analyzed their responses to original tree size, competition indices, forest structure indices, and topographic environmental parameters at the individual-tree and forest-stand scales. Our results indicate that ΔH was strongly sensitive to the topographic wetness index, whereas ΔA and ΔV were highly responsive to forest density and original tree size. These ALS-based findings for ΔH were consistent with field measurements. Our study demonstrates the promising potential of using bi-temporal ALS data in forest growth measurement and analysis. A more comprehensive study over a longer temporal period and a wider range of forest stands would give better insights into tree growth in the SN, and provide useful guidance for forest growth monitoring, modeling, and management.
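Once individual trees have been segmented from each ALS acquisition, computing the per-tree growth metrics (ΔH, ΔA, ΔV) reduces to matching trees between epochs and differencing their attributes. The sketch below is illustrative: the tree records, IDs, and the 1 m matching tolerance are invented, and real pipelines must first segment trees from the point cloud.

```python
import math

# Hypothetical per-tree records: (x, y) position in metres, height H (m),
# crown area A (m^2), crown volume V (m^3)
trees_2007 = {1: (100.0, 200.0, 20.1, 12.0, 95.0),
              2: (130.0, 240.0, 15.5,  8.0, 40.0)}
trees_2012 = {1: (100.2, 199.9, 21.4, 13.1, 104.0),
              2: (130.1, 240.1, 16.6,  8.9, 47.5)}

def growth(t0, t1, max_match_dist=1.0):
    """Difference matched trees between two epochs; reject pairs whose
    positions drift beyond the tolerance (likely mismatch or removal)."""
    out = {}
    for tid, (x0, y0, h0, a0, v0) in t0.items():
        if tid not in t1:
            continue
        x1, y1, h1, a1, v1 = t1[tid]
        if math.hypot(x1 - x0, y1 - y0) > max_match_dist:
            continue
        out[tid] = {"dH": h1 - h0, "dA": a1 - a0, "dV": v1 - v0}
    return out

for tid, g in growth(trees_2007, trees_2012).items():
    print(tid, g)
```

The resulting ΔH/ΔA/ΔV table is what would then be regressed against competition indices, stand density, and topographic wetness.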
Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-07-12
As a typical example of a large and complex mechanical system, rotating machinery is prone to diverse mechanical faults. Among these, one of the most prominent causes of malfunction arises in gear transmission chains. Although fault signatures can be collected via vibration signals, they are often submerged in overwhelming interfering content, so identifying the critical fault characteristic signal is far from an easy task. To improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire multiscale signal features, and a convolutional neural network (CNN) is utilized to automatically recognize fault features from the multiscale signal features. Experimental results on gear fault recognition show the feasibility and effectiveness of the proposed method, especially for weak gear fault features.
The impact of lake level variation on seismicity around the XianNvShan fault in the Three Gorges area
NASA Astrophysics Data System (ADS)
Liao, W.; Li, J.; Zhang, L.
2017-12-01
Since the impounding of the Three Gorges Project in 2003, more than 10,000 earthquakes have been recorded by the digital telemetry seismic network. Most of them occurred around the GaoQiao fault and the northern segment of the XianNvShan fault. In March 2014, M4.3 and M4.7 earthquakes occurred on the northern segment of the XianNvShan fault. To study the relationship between seismicity around the XianNvShan fault and lake level variation, we deployed 5 temporary seismic stations in this area from 2015 to 2016. More than 3000 earthquakes recorded during the temporary monitoring period were relocated using waveform cross-correlation and the double-difference method. The depth of most earthquakes ranges from 5 to 7 km, and the variation in depth is clearly related to the fluctuation of the water level.
NASA Astrophysics Data System (ADS)
Budach, Ingmar; Moeck, Inga; Lüschen, Ewald; Wolfgramm, Markus
2018-03-01
The structural evolution of faults in foreland basins is linked to a complex basin history ranging from extension to contraction and inversion tectonics. Faults in the Upper Jurassic of the German Molasse Basin, a Cenozoic Alpine foreland basin, play a significant role in geothermal exploration and are therefore imaged, interpreted and studied using 3D seismic reflection data. Beyond this applied aspect, the analysis of these seismic data helps to better understand the temporal evolution of faults and the respective stress fields. In 2009, a 27 km² 3D seismic reflection survey was conducted around the Unterhaching Gt 2 well, south of Munich. The main focus of this study is an in-depth analysis of a prominent v-shaped fault block structure located at the center of the 3D seismic survey. Two methods were used to study the periodic fault activity and the relative age of the detected faults: (1) horizon flattening and (2) analysis of incremental fault throws. Slip and dilation tendency analyses were then conducted to determine the stresses resolved on the faults in the current stress field. Two kinematic models can explain the structural evolution: one assumes a left-lateral strike-slip fault in a transpressional regime resulting in a positive flower structure; the other incorporates crossing conjugate normal faults within a transtensional regime. The interpreted successive fault formation favors the latter model. The episodic fault activity may enhance fault zone permeability and hence reservoir productivity, implying that the analysis of periodically active faults is an important part of successfully targeting geothermal wells.
Analysis of a hardware and software fault tolerant processor for critical applications
NASA Technical Reports Server (NTRS)
Dugan, Joanne B.
1993-01-01
Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
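The Markov half of the hierarchical fault tree/Markov approach can be illustrated with a minimal sketch, assuming hypothetical rates and coverage (this toy duplex model is not the FTPP analysis itself): covered permanent faults drop the system to simplex operation, while uncovered faults, and any fault in simplex mode, fail it.

```python
# 3-state Markov chain: S2 = both units up, S1 = one unit up after a covered
# permanent fault, F = system failed. Transient faults are assumed masked by
# voting and do not appear as states. Rates and coverage are hypothetical.
LAM = 1e-4   # permanent-fault rate per unit, per hour (assumed)
C   = 0.95   # reconfiguration coverage: P(failed unit bypassed successfully)

def unreliability(t_hours, dt=0.1):
    """P(system failed by t), via forward-Euler integration of the chain."""
    p2, p1, pf = 1.0, 0.0, 0.0
    for _ in range(int(t_hours / dt)):
        d2 = -2 * LAM * p2                       # S2 loses a unit
        d1 = 2 * C * LAM * p2 - LAM * p1         # covered loss -> S1, S1 loss
        df = 2 * (1 - C) * LAM * p2 + LAM * p1   # uncovered loss or S1 loss
        p2 += d2 * dt; p1 += d1 * dt; pf += df * dt
    return pf

print(round(unreliability(1000.0), 5))
```

In the hierarchical scheme, probabilities like this would feed the basic events of an enclosing fault tree.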
Determining preventability of pediatric readmissions using fault tree analysis.
Jonas, Jennifer A; Devon, Erin Pete; Ronan, Jeanine C; Ng, Sonia C; Owusu-McKenzie, Jacqueline Y; Strausbaugh, Janet T; Fieldston, Evan S; Hart, Jessica K
2016-05-01
Previous studies attempting to distinguish preventable from nonpreventable readmissions reported challenges in completing reviews efficiently and consistently. (1) Examine the efficiency and reliability of a Web-based fault tree tool designed to guide physicians through chart reviews to a determination about preventability. (2) Investigate root causes of general pediatrics readmissions and identify the percent that are preventable. General pediatricians from The Children's Hospital of Philadelphia used a Web-based fault tree tool to classify root causes of all general pediatrics 15-day readmissions in 2014. The tool guided reviewers through a logical progression of questions, which resulted in 1 of 18 root causes of readmission, 8 of which were considered potentially preventable. Twenty percent of cases were cross-checked to measure inter-rater reliability. Of the 7252 discharges, 248 were readmitted, for an all-cause general pediatrics 15-day readmission rate of 3.4%. Of those readmissions, 15 (6.0%) were deemed potentially preventable, corresponding to 0.2% of total discharges. The most common cause of potentially preventable readmissions was premature discharge. For the 50 cross-checked cases, both reviews resulted in the same root cause for 44 (86%) of files (κ = 0.79; 95% confidence interval: 0.60-0.98). Completing 1 review using the tool took approximately 20 minutes. The Web-based fault tree tool helped physicians to identify root causes of hospital readmissions and classify them as either preventable or not preventable in an efficient and consistent way. It also confirmed that only a small percentage of general pediatrics 15-day readmissions are potentially preventable. Journal of Hospital Medicine 2016;11:329-335. © 2016 Society of Hospital Medicine.
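The κ statistic quoted above is Cohen's kappa; a minimal sketch of its computation, using hypothetical review labels rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n     # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical cross-check labels ("P" = preventable, "N" = not preventable):
a = ["P", "P", "N", "N", "N", "N", "N", "N", "P", "N"]
b = ["P", "N", "N", "N", "N", "N", "N", "N", "P", "N"]
print(round(cohens_kappa(a, b), 3))
```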
Risk Analysis of Return Support Material on Gas Compressor Platform Project
NASA Astrophysics Data System (ADS)
Silvianita; Aulia, B. U.; Khakim, M. L. N.; Rosyid, Daniel M.
2017-07-01
Fixed-platform projects are often carried out not by a single contractor but by two or more. Cooperation in the construction of fixed platforms frequently departs from plan for several reasons, so good synergy among the contractors is needed to avoid the miscommunication that can cause project problems. One example concerns the support material (sea fastening, skid shoes and shipping supports) used when transporting a jacket structure to its operating site, which often is not returned to the contractor; a systematic method is needed to address this problem. This paper analyses the causes and consequences of unreturned support material on a gas compressor platform project using Fault Tree Analysis (FTA) and Event Tree Analysis (ETA). From the fault tree analysis, the probability of the top event is 0.7783. From the event tree diagram, the contractors' losses range from Rp 350,000,000 to Rp 10,000,000,000.
Mines Systems Safety Improvement Using an Integrated Event Tree and Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Kumar, Ranjan; Ghosh, Achyuta Krishna
2017-04-01
Mine systems such as the ventilation system, strata support system, and flameproof safety equipment are exposed to dynamic operational conditions such as stress, humidity, dust, and temperature, and safety improvement of such systems is best undertaken during the planning and design stage. However, existing safety analysis methods do not explicitly handle accident initiation and progression in mine systems. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine systems design. The approach couples ET and FT modeling with a redundancy allocation technique. In this method, a concept of top hazard probability is introduced for identifying system failure probability, and redundancy is allocated to the system at either the component or the system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal accident scenarios and improve the safety of complex mine systems simultaneously.
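The coupled ET/FT calculation can be sketched as follows, with hypothetical numbers: an initiating-event frequency propagates through barrier branches whose failure probabilities would come from the corresponding fault trees.

```python
# Sketch: methane accumulation (initiating event) passes through two safety
# barriers; each barrier's failure probability would be a fault-tree top event.
# All numbers are hypothetical, not from the paper's case study.
P_INIT = 1e-2           # per year, methane accumulation
P_VENT_FAIL = 5e-2      # fault-tree top event: ventilation system fails
P_IGN_CTRL_FAIL = 1e-1  # fault-tree top event: ignition control fails

def event_tree_outcomes():
    """Annual frequency of each end state of the event tree."""
    return {
        "safe (vented)": P_INIT * (1 - P_VENT_FAIL),
        "gas build-up, no ignition": P_INIT * P_VENT_FAIL * (1 - P_IGN_CTRL_FAIL),
        "explosion": P_INIT * P_VENT_FAIL * P_IGN_CTRL_FAIL,
    }

for name, p in event_tree_outcomes().items():
    print(f"{name}: {p:.2e}")
```

Allocating redundancy to a barrier lowers its fault-tree top probability and, through the product above, the top hazard probability.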
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region, including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-values) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms of the Collaboratory for the Study of Earthquake Predictability (CSEP).
We comparatively tested our model's forecasting skill against the ASM and find statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate for long-term forecasting on timescales of years to decades for the European region.
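The spatial component described above, a weighted sum of earthquake- and fault-based kernel densities, can be sketched in one dimension (the 2-D model with optimized bandwidths is analogous; the locations, bandwidths, and weight below are illustrative):

```python
import math

def gauss_kernel_density(x, points, bandwidth):
    """1-D Gaussian kernel density estimate at x (stand-in for the 2-D case)."""
    return sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2) for p in points) \
        / (len(points) * bandwidth * math.sqrt(2 * math.pi))

def combined_density(x, eq_locs, fault_locs, w, h_eq, h_flt):
    """Weighted sum of earthquake- and fault-based spatial densities."""
    return (w * gauss_kernel_density(x, eq_locs, h_eq)
            + (1 - w) * gauss_kernel_density(x, fault_locs, h_flt))

# Hypothetical epicenters and fault-trace sample points along a 1-D profile:
print(round(combined_density(1.0, [0.0, 2.0], [1.5], 0.8, 0.5, 0.5), 4))
```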
NASA Astrophysics Data System (ADS)
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
The fault diagnosis of planetary gearboxes is crucial to reducing maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on an adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into a binary tree support vector machine (BT-SVM) to accomplish fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
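As background, the standard permutation entropy underlying MHPE can be sketched as follows (this is the basic formulation, not the paper's modified hierarchical variant; the embedding dimension and delay are the usual free parameters):

```python
import math
from collections import Counter

def permutation_entropy(signal, m=3, delay=1):
    """Normalized permutation entropy of order m: 0 = regular, 1 = fully random."""
    patterns = Counter()
    for i in range(len(signal) - (m - 1) * delay):
        window = signal[i:i + m * delay:delay]
        # ordinal pattern = argsort of the window's values
        patterns[tuple(sorted(range(m), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(m))

# A monotone ramp has a single ordinal pattern, so its entropy is 0:
print(permutation_entropy(list(range(100))))
```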
2013-05-01
specifics of the correlation will be explored followed by discussion of new paradigms, the ordered event list (OEL) and the decision tree, that result from… 4.2.1 Brief Overview of the Decision Tree Paradigm… 4.2.2 OEL Explained… Figure 3. A depiction of a notional fault/activation tree.
Monitoring of Microseismicity with Array Techniques in the Peach Tree Valley Region
NASA Astrophysics Data System (ADS)
Garcia-Reyes, J. L.; Clayton, R. W.
2016-12-01
This study is focused on the analysis of microseismicity along the San Andreas Fault in the Peach Tree Valley region. This zone is part of the transition between the locked portion to the south (Parkfield, CA) and the creeping section to the north (Jolivet et al., JGR, 2014). The data for the study come from a 2-week deployment of 116 ZLand nodes in a cross-shaped configuration along (8.2 km) and across (9 km) the fault. We analyze the distribution of microseismicity using a 3D backprojection technique, and we explore the use of Hidden Markov Models to identify different patterns of microseismicity (Hammer et al., GJI, 2013). The goal of the study is to relate the style of seismicity to the mechanical state of the fault. The results show the evolution of seismic activity as well as at least two distinct patterns of seismic signals.
[Impact of water pollution risk in water transfer project based on fault tree analysis].
Liu, Jian-Chang; Zhang, Wei; Wang, Li-Min; Li, Dai-Qing; Fan, Xiu-Ying; Deng, Hong-Bing
2009-09-15
Methods to assess water pollution risk for a medium-sized water transfer project are gradually being explored. An event-nature-proportion method was developed to evaluate the probability of each single event. Fault tree analysis, built on the single-event calculations, was then employed to evaluate the overall water pollution risk to the channel water body. The results indicate that the risk posed to the channel water body by pollutants from towns and villages along the water transfer route is high, with a probability of 0.373, adding pollution to the channel water body at rates of 64.53 mg/L COD, 4.57 mg/L NH4(+)-N and 0.066 mg/L volatile phenol, respectively. Measuring fault probability on the basis of the proportion method proves useful in assessing water pollution risk under substantial uncertainty.
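The step from single-event probabilities to an overall risk figure can be sketched with OR/AND gate combinations under an independence assumption (the probabilities below are illustrative, not the study's):

```python
from functools import reduce

def p_or(probs):
    """Top-event probability of an OR gate over independent basic events."""
    return 1 - reduce(lambda acc, p: acc * (1 - p), probs, 1.0)

def p_and(probs):
    """AND gate: all independent basic events must occur."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

# Hypothetical single-event probabilities for pollutant entry into the channel:
events = [0.20, 0.15, 0.10]
print(round(p_or(events), 4))
```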
Coseismic temporal changes of slip direction: the effect of absolute stress on dynamic rupture
Guatteri, Mariagiovanna; Spudich, P.
1998-01-01
We investigate the dynamics of rupture at low-stress level. We show that one main difference between the dynamics of high- and low-stress events is the amount of coseismic temporal rake rotation occurring at given points on the fault. Curved striations on exposed fault surfaces and earthquake dislocation models derived from ground-motion inversion indicate that the slip direction may change with time at a point on the fault during dynamic rupture. We use a 3D boundary integral method to model temporal rake variations during dynamic rupture propagation assuming a slip-weakening friction law and isotropic friction. The points at which the slip rotates most are characterized by an initial shear stress direction substantially different from the average stress direction over the fault plane. We show that for a given value of stress drop, the level of initial shear stress (i.e., the fractional stress drop) determines the amount of rotation in slip direction. We infer that seismic events that show evidence of temporal rake rotations are characterized by a low initial shear-stress level with spatially variable direction on the fault (possibly due to changes in fault surface geometry) and an almost complete stress drop.Our models motivate a new interpretation of curved and cross-cutting striations and put new constraints on their analysis. The initial rake is in general collinear with the initial stress at the hypocentral zone, supporting the assumptions made in stress-tensor inversion from first-motion analysis. At other points on the fault, especially away from the hypocenter, the initial slip rake may not be collinear with the initial shear stress, contradicting a common assumption of structural geology. On the other hand, the later part of slip in our models is systematically more aligned with the average stress direction than the early slip. 
Our modeling suggests that the length of the straight part of curved striations is usually an upper bound of the slip-weakening distance if this parameter is uniform over the fault plane, and the direction of the late part of slip of curved striations should have more weight in the estimate of initial stress direction.
Petrology and palynology of the No. 5 block coal bed, northeastern Kentucky
Hower, J.C.; Eble, C.F.; Rathbone, R.F.
1994-01-01
The upper Middle Pennsylvanian (middle Westphalian D equivalent) No. 5 Block coal bed (Eastern Kentucky Coal Field of the Central Appalachian Basin) is a low-sulfur, compliance coal resource composed dominantly of dull, inertinite-rich lithotypes. Ash yields tend to be highly variable in the No. 5 Block, as do bed thickness and the frequency of bed splitting. This study describes the petrographic, palynologic and geochemical characteristics of the No. 5 Block coal bed, and reports on some temporal and spatial trends among these parameters in eastern-northeastern Kentucky. Petrographically, the No. 5 Block coal is dominated by dull, often high-ash lithotypes, with inertinite contents commonly exceeding 30% (mmf). The coal thins to the north-northwest, where it tends to be higher in vitrinite and sulfur content. Representatives of large and small lycopsids and ferns (both tree-like and small varieties) dominate the No. 5 Block coal bed palynoflora. Calamite spores and cordaite pollen also occur but are less abundant. Small lycopsid (Densosporites spp. and related crassicingulate genera) and tree fern (e.g. Punctatisporites minutus, Laevigatosporites globosus) spore taxa are most abundant in dull lithotypes. Bright lithotypes contain higher percentages of arboreous lycopsid spores (Lycospora spp.). Regionally, the No. 5 Block coal contains abundant Torispora securis, a tree fern spore specially adapted for desiccation prevention. This, along with the overall high percentages of inertinite macerals, suggests that peat accumulation may have taken place in a seasonally dry (?) paleoclimate. The No. 5 Block coal bed thickens rather dramatically in a NW-SE direction, as does the frequency of coal bed splitting. This phenomenon appears to be related to increased accommodation space in the southeastern portion of the study area, perhaps via penecontemporaneous growth faulting. Maceral and palynomorph variations within the bed correspond with these changes.
Thin coal along the northwestern margin tends to be vitrinite-rich and contains abundant Lycospora, perhaps reflecting relatively stable peat-forming conditions. Thicker coal to the southeast contains more inertinite, high-ash coal layers, and inorganic partings. Spore floras contain more small lycopsid and tree fern components and are temporally variable, perhaps indicating a more unstable peat-forming environment. © 1994.
Viewpoint on ISA TR84.0.02--simplified methods and fault tree analysis.
Summers, A E
2000-01-01
ANSI/ISA-S84.01-1996 and IEC 61508 require the establishment of a safety integrity level for any safety instrumented system or safety related system used to mitigate risk. Each stage of design, operation, maintenance, and testing is judged against this safety integrity level. Quantitative techniques can be used to verify whether the safety integrity level is met. ISA-dTR84.0.02 is a technical report under development by ISA, which discusses how to apply quantitative analysis techniques to safety instrumented systems. This paper discusses two of those techniques: (1) Simplified equations and (2) Fault tree analysis.
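As an illustration of the simplified-equation technique, the common textbook approximations for 1oo1 and 1oo2 voting architectures can be sketched as follows (neglecting common cause, diagnostics, and repair; the failure rate and proof-test interval are hypothetical):

```python
# Simplified average probability of failure on demand (PFDavg), textbook forms:
#   1oo1: PFDavg ~ lambda_DU * TI / 2
#   1oo2: PFDavg ~ (lambda_DU * TI)^2 / 3
LAMBDA_DU = 2e-6   # dangerous undetected failure rate, per hour (assumed)
TI = 8760.0        # proof-test interval: one year, in hours

pfd_1oo1 = LAMBDA_DU * TI / 2
pfd_1oo2 = (LAMBDA_DU * TI) ** 2 / 3

print(f"1oo1 PFDavg: {pfd_1oo1:.2e}")   # falls in the SIL 2 band (1e-3 to 1e-2)
print(f"1oo2 PFDavg: {pfd_1oo2:.2e}")
```

A fault tree of the same architecture reproduces these results for simple systems; the technical report's point is that the two techniques diverge once common cause and repair are modeled.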
TH-EF-BRC-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomadsen, B.
2016-06-15
This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis (process mapping, failure modes and effects analysis, and fault tree analysis) will be introduced with a 5-minute refresher presentation, and each presentation will be followed by a 30-minute small-group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
Estimating earthquake-induced failure probability and downtime of critical facilities.
Porter, Keith; Ramer, Kyle
2012-01-01
Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
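The primary/backup joint-failure idea can be sketched as below; the beta-factor term is one crude way to represent a shared seismic cause, and all numbers are hypothetical, not from the utility case study:

```python
# P(primary and backup both inoperative in the same scenario earthquake).
# With independent fragilities the joint probability is the product; a
# beta-factor common-cause term raises it. Numbers are hypothetical.
P_PRIMARY = 0.05   # P(primary data centre inoperative | scenario)
P_BACKUP  = 0.04   # P(backup inoperative | scenario)
BETA      = 0.10   # assumed fraction of failures attributable to a shared cause

p_indep = P_PRIMARY * P_BACKUP
p_with_cc = ((1 - BETA) ** 2 * P_PRIMARY * P_BACKUP   # independent portions
             + BETA * min(P_PRIMARY, P_BACKUP))       # shared-cause portion
print(f"independent: {p_indep:.4f}, with common cause: {p_with_cc:.4f}")
```

Even a modest shared-cause fraction dominates the product of two small independent probabilities, which is why fault trees for such assessments make common-cause events explicit.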
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Qian, Yu
2016-02-15
Haze weather has become a serious environmental pollution problem in many Chinese cities. One of the most critical factors in the formation of haze weather is exhaust from coal combustion, so it is meaningful to work out the causation mechanism linking urban haze and coal combustion exhausts. Based on these considerations, the fault tree analysis (FTA) approach was employed for the first time to study the causation mechanism of urban haze in Beijing, considering the risk events related to coal combustion exhausts. Using this approach, the fault tree of the urban haze causation system linked to coal combustion exhausts was first established; the risk events were then discussed and identified; the minimal cut sets were determined using Boolean algebra; and finally, structure, probability and critical importance degree analyses of the risk events were completed for qualitative and quantitative assessment. The results show that FTA is an effective and simple tool for causation mechanism analysis and risk management of urban haze in China. Copyright © 2015 Elsevier B.V. All rights reserved.
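The critical importance degree step can be sketched by exact state enumeration over a toy tree (the event names and probabilities are hypothetical, not from the Beijing analysis):

```python
from itertools import product

# Hypothetical haze fault tree: top = E1 OR (E2 AND E3), with
# E1 = heavy coal-combustion exhaust, E2 = stagnant weather, E3 = high humidity.
Q = {"E1": 0.30, "E2": 0.50, "E3": 0.40}   # basic-event probabilities (assumed)

def struct(state):                         # structure function, 1 = event occurs
    return state["E1"] or (state["E2"] and state["E3"])

def top_probability(q, fixed=None):
    """Exact top-event probability, optionally with some events pinned to 0/1."""
    fixed = dict(fixed or {})
    free = [n for n in q if n not in fixed]
    total = 0.0
    for bits in product([0, 1], repeat=len(free)):
        state, pr = dict(fixed), 1.0
        for n, b in zip(free, bits):
            state[n] = b
            pr *= q[n] if b else 1 - q[n]
        total += pr * struct(state)
    return total

q_top = top_probability(Q)
for e in Q:
    birnbaum = top_probability(Q, {e: 1}) - top_probability(Q, {e: 0})
    critical = birnbaum * Q[e] / q_top     # critical importance degree
    print(e, round(critical, 3))
```

Events with the highest critical importance are the ones whose mitigation most reduces the top-event probability.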
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents with impacts on human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in an LNG regasification unit, where failure is caused by a Boiling Liquid Expanding Vapor Explosion (BLEVE) or a jet fire at the storage tank. The failure probability can be determined using Fault Tree Analysis (FTA). In addition, the impact of the generated heat radiation is calculated. Fault trees for BLEVE and jet fire on the storage tank were constructed, yielding failure probabilities of 5.63 × 10⁻¹⁹ for BLEVE and 9.57 × 10⁻³ for jet fire. The failure probability for jet fire is high enough that it needs to be reduced, which was done by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After this customization, the failure probability is reduced to 4.22 × 10⁻⁶.
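The heat-radiation calculation can be sketched with a point-source model, a common first approximation for jet fires; the radiant fraction, heat of combustion, release rate, and distance below are assumptions, not values from the study:

```python
import math

def radiant_flux(mdot_kg_s, r_m, eta=0.2, hc_j_kg=50.0e6):
    """Point-source model: incident radiant flux (W/m^2) at distance r.

    q = eta * mdot * Hc / (4 * pi * r^2), with radiant fraction eta,
    mass release rate mdot, and heat of combustion Hc (values assumed).
    """
    return eta * mdot_kg_s * hc_j_kg / (4 * math.pi * r_m ** 2)

flux = radiant_flux(mdot_kg_s=2.0, r_m=30.0)
print(f"{flux / 1000:.2f} kW/m^2")
```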
Quasi-dynamic earthquake fault systems with rheological heterogeneity
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.
2009-12-01
Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they do not permit physical statements about the seismicity they describe. In contrast, physics-based earthquake fault system models allow for physical reasoning and interpretation of the produced seismicity and system dynamics. Recently, different fault system earthquake simulators based on frictional stick-slip behavior have been used to study the effects of stress heterogeneity, rheological heterogeneity, and geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of the characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. One (1) is governed by rate- and state-dependent friction, allowing three evolutionary stages on independent fault patches; the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze the spatial and temporal clustering of events and the characteristics of the system dynamics by means of the physical parameters of the two approaches.
Spatial and Temporal Variations in Slip Partitioning During Oblique Convergence Experiments
NASA Astrophysics Data System (ADS)
Beyer, J. L.; Cooke, M. L.; Toeneboehn, K.
2017-12-01
Physical experiments of oblique convergence in wet kaolin demonstrate the development of slip partitioning, where two faults accommodate strain via different slip vectors. In these experiments, the second fault forms after the development of the first fault. As one strain component is relieved by one fault, the local stress field then favors the development of a second fault with different slip sense. A suite of physical experiments reveals three styles of slip partitioning development controlled by the convergence angle and presence of a pre-existing fault. In experiments with low convergence angles, strike-slip faults grow prior to reverse faults (Type 1) regardless of whether the fault is precut or not. In experiments with moderate convergence angles, slip partitioning is dominantly controlled by the presence of a pre-existing fault. In all experiments, the primarily reverse fault forms first. Slip partitioning then develops with the initiation of strike-slip along the precut fault (Type 2) or growth of a secondary reverse fault where the first fault is steepest. Subsequently, the slip on the first fault transitions to primarily strike-slip (Type 3). Slip rates and rakes along the slip partitioned faults for both precut and uncut experiments vary temporally, suggesting that faults in these slip-partitioned systems are constantly adapting to the conditions produced by slip along nearby faults in the system. While physical experiments show the evolution of slip partitioning, numerical simulations of the experiments provide information about both the stress and strain fields, which can be used to compute the full work budget, providing insight into the mechanisms that drive slip partitioning. Preliminary simulations of precut experiments show that strain energy density (internal work) can be used to predict fault growth, highlighting where fault growth can reduce off-fault deformation in the physical experiments. 
In numerical simulations of uncut experiments with a first non-planar oblique slip fault, strain energy density is greatest where the first fault is steepest, as less convergence is accommodated along this portion of the fault. The addition of a second slip-partitioning fault to the system decreases external work indicating that these faults increase the mechanical efficiency of the system.
NASA Astrophysics Data System (ADS)
Li, Shuanghong; Cao, Hongliang; Yang, Yupu
2018-02-01
Fault diagnosis is a key process for the reliability and safety of solid oxide fuel cell (SOFC) systems. However, it is difficult to rapidly and accurately identify faults in complicated SOFC systems, especially when simultaneous faults appear. In this research, a data-driven Multi-Label (ML) pattern identification approach is proposed to address the simultaneous fault diagnosis of SOFC systems. The framework of the simultaneous-fault diagnosis primarily includes two components: feature extraction and an ML-SVM classifier. The approach can be trained to diagnose simultaneous SOFC faults, such as fuel leakage and air leakage at different positions in the SOFC system, using simple training data sets consisting only of single faults, without requiring simultaneous-fault data. The experimental results show that the proposed framework can diagnose simultaneous SOFC system faults with high accuracy while requiring little training data and low computational effort. In addition, Fault Inference Tree Analysis (FITA) is employed to identify the correlations among possible faults and their corresponding symptoms at the system component level.
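The core idea, training on single-fault data yet reporting multiple labels, can be illustrated with a toy stand-in for the ML-SVM classifier (nearest-centroid scores with a threshold; the features, classes, and threshold are hypothetical):

```python
# Toy multi-label sketch: fit one score per fault class on single-fault
# feature vectors only, then report every class whose score clears a
# threshold, so a simultaneous fault yields two labels at once.
def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def scores(x, centroids):
    # negative squared distance to each class centroid
    return {c: -sum((a - b) ** 2 for a, b in zip(x, ctr))
            for c, ctr in centroids.items()}

train = {  # single-fault training data only: class -> feature vectors
    "fuel_leak": [[1.0, 0.1], [0.9, 0.0]],
    "air_leak":  [[0.0, 1.0], [0.1, 0.9]],
}
centroids = {c: centroid(v) for c, v in train.items()}

def diagnose(x, thresh=-0.7):
    return sorted(c for c, s in scores(x, centroids).items() if s >= thresh)

print(diagnose([0.95, 0.05]))   # -> ['fuel_leak']
print(diagnose([0.8, 0.8]))     # -> ['air_leak', 'fuel_leak']
```

An ML-SVM replaces the centroid score with per-label SVM decision values, but the thresholded multi-label output works the same way.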
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Haeussler, P. J.; Seitz, G. G.; Dawson, T. E.; Stenner, H. D.; Matmon, A.; Crone, A. J.; Personius, S.; Burns, P. B.; Cadena, A.; Thoms, E.
2005-12-01
Developing accurate rupture histories of long, high-slip-rate strike-slip faults is especially challenging where recurrence is relatively short (hundreds of years), adjacent segments may fail within decades of each other, and uncertainties in dating can be as large as, or larger than, the time between events. The Denali Fault system (DFS) is the major active structure of interior Alaska, but it had received little study since pioneering fault investigations in the early 1970s. Until the summer of 2003, essentially no data existed on the timing or spatial distribution of past ruptures on the DFS. This changed with the occurrence of the M7.9 2002 Denali fault earthquake, which has been a catalyst for present paleoseismic investigations. It provided a well-constrained rupture length and slip distribution. Strike-slip faulting occurred along 290 km of the Denali and Totschunda faults, leaving unruptured ~140 km of the eastern Denali fault, ~180 km of the western Denali fault, and ~70 km of the eastern Totschunda fault. The DFS presents us with a blank canvas on which to fill in a chronology of past earthquakes using modern paleoseismic techniques. Aware of correlation issues with potentially closely timed earthquakes, we have a) investigated 11 paleoseismic sites that allow a variety of dating techniques, b) measured paleo offsets, which provide insight into magnitude and rupture length of past events, at 18 locations, and c) developed late Pleistocene and Holocene slip rates using exposure age dating to constrain long-term fault behavior models.
We are in the process of: 1) radiocarbon-dating peats involved in faulting and liquefaction, and especially short-lived forest floor vegetation that includes outer rings of trees, spruce needles, and blueberry leaves killed and buried during paleoearthquakes; 2) supporting development of a 700-900 year tree-ring time series for precise dating of trees used in event timing; 3) employing 210Pb for constraining the youngest ruptures in sag ponds on the eastern and western Denali fault; and 4) using volcanic ashes in trenches for dating and correlation. Initial results are: 1) Large earthquakes occurred along the 2002 rupture section 350-700 yrb02 (2-sigma, calendar-corrected, years before 2002) with offsets about the same as 2002. The Denali penultimate rupture appears younger (350-570 yrb02) than the Totschunda (580-700 yrb02); 2) The western Denali fault is geomorphically fresh, its MRE likely occurred within the past 250 years, the penultimate event occurred 570-680 yrb02, and slip in each event was 4 m; 3) The eastern Denali MRE post-dates peat dated at 550-680 yrb02, is younger than the penultimate Totschunda event, and could be part of the penultimate Denali fault rupture or a separate earthquake; 4) A 120-km section of the Denali fault between the Nenana glacier and the Delta River may be a zone of overlap for large events and/or capable of producing smaller earthquakes; its western part has fresh scarps with small (1 m) offsets. 2004/2005 field observations show there are longer datable records, with 4-5 events recorded in trenches on the eastern Denali fault and the west end of the 2002 rupture, 2-3 events on the western part of the fault in Denali National Park, and 3-4 events on the Totschunda fault. These records and extensive datable material provide the basis to define the paleoseismic history of DFS earthquake ruptures through multiple and complete earthquake cycles.
Support vector machines-based fault diagnosis for turbo-pump rotor
NASA Astrophysics Data System (ADS)
Yuan, Sheng-Fa; Chu, Fu-Lei
2006-05-01
Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and generalise poorly when fault samples are few. The support vector machine (SVM) is a general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification scheme named the 'one to others' algorithm is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, involves little repeated training, and speeds up both training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
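The 'one to others' binary tree structure can be sketched with any two-class base learner. In the sketch below (the fault classes, priority ordering, and toy data are all invented for illustration), nearest-centroid decisions stand in for the two-class SVMs: each node splits the highest-priority remaining class from all the others, so K classes need only K-1 classifiers:

```python
import numpy as np

class OneToOthersTree:
    """Binary-tree multi-class classifier: node i separates the highest-
    priority remaining fault class from all the others."""
    def __init__(self, priority):
        self.priority = list(priority)   # fault classes, most critical first
        self.centroids = {}

    def fit(self, X, y):
        for c in self.priority:
            self.centroids[c] = X[y == c].mean(axis=0)
        return self

    def predict(self, x):
        remaining = list(self.priority)
        while len(remaining) > 1:
            c, rest = remaining[0], remaining[1:]
            others = np.mean([self.centroids[r] for r in rest], axis=0)
            # stand-in for a two-class SVM: nearest centroid wins
            if np.linalg.norm(x - self.centroids[c]) < np.linalg.norm(x - others):
                return c
            remaining = rest               # descend to the next tree node
        return remaining[0]

# Toy data: three well-separated fault classes (labels are hypothetical).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (30, 2))
               for c in ([0, 0], [5, 0], [0, 5])])
y = np.repeat(["unbalance", "misalignment", "rub"], 30)
clf = OneToOthersTree(["unbalance", "misalignment", "rub"]).fit(X, y)
print(clf.predict(np.array([5.1, -0.2])))
```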
NASA Astrophysics Data System (ADS)
Tsopela, A.; Guglielmi, Y.; Donze, F. V.; De Barros, L.; Henry, P.; Castilla, R.; Gout, C.
2017-12-01
Fluid injections associated with human activities are well known to induce perturbations in the ambient rock mass. In particular, the hydromechanical response of a nearby fault to an increase in pore pressure is of great interest for both permeability and seismicity related problems. We present a field injection experiment conducted in the host rock 4 m away from a fault affecting Toarcian shales (Tournemire massif, France). The site was densely instrumented, and during the test the pressure, displacements and seismicity were recorded in order to capture the hydromechanical response of the surrounding stimulated volume. A numerical model was used that includes the reactivated structure at the injection point interacting with a plane representing the main fault orientation. A number of calculations were performed to estimate the injection characteristics and the state of stress during the test. Using the recorded locations of seismic events, an attempt is made to reproduce the spatio-temporal characteristics of the microseismicity cloud. We introduced heterogeneous frictional properties along the fault plane in the model, which result in flow and rupture channeling effects. Based on the spatio-temporal characteristics of these rupture events, we estimate the resulting hydraulic properties of the fault according to the triggering front concept proposed by Shapiro et al. (2002). The effect of the frictional heterogeneities and the fault orientation on the resulting hydraulic diffusivity is discussed. We have so far observed in our model that, by statistically taking the frictional heterogeneities into account in our analysis, the spatio-temporal characteristics of the rupture events and the recovered hydraulic properties of the fault are in satisfying agreement. References: Shapiro, S. A., Rothert, E., Rath, V., & Rindschwentner, J. (2002). Characterization of fluid transport properties of reservoirs using induced microseismicity.
Geophysics, 67(1), 212-220.
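The triggering front concept cited above reduces to a simple relation: events at the edge of the growing microseismicity cloud satisfy r(t) = sqrt(4*pi*D*t), so the hydraulic diffusivity D can be read off the envelope of a distance-versus-time plot. A numpy sketch (synthetic, noise-free cloud with an assumed diffusivity) shows the recovery:

```python
import numpy as np

D_true = 0.05                               # m^2/s, assumed diffusivity
t = np.linspace(10.0, 3600.0, 200)          # seconds after injection start
r = np.sqrt(4.0 * np.pi * D_true * t)       # triggering-front radius (m)

# Least-squares fit of r^2 = 4*pi*D*t recovers the diffusivity:
D_est = np.sum(r**2 * t) / (4.0 * np.pi * np.sum(t**2))
print(D_est)  # recovers the assumed 0.05 m^2/s
```

With real catalogs the fit is applied to the upper envelope of the event cloud rather than to every event, since most events lie behind the front.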
Deciphering structural and temporal interplays during the architectural development of mango trees.
Dambreville, Anaëlle; Lauri, Pierre-Éric; Trottier, Catherine; Guédon, Yann; Normand, Frédéric
2013-05-01
Plant architecture is commonly defined by the adjacency of organs within the structure and their properties. Few studies consider the effect of endogenous temporal factors, namely phenological factors, on the establishment of plant architecture. This study hypothesized that, in addition to the effect of environmental factors, the observed plant architecture results from both endogenous structural and temporal components, and their interplays. Mango tree, which is characterized by strong phenological asynchronisms within and between trees and by repeated vegetative and reproductive flushes during a growing cycle, was chosen as a plant model. During two consecutive growing cycles, this study described vegetative and reproductive development of 20 trees submitted to the same environmental conditions. Four mango cultivars were considered to assess possible cultivar-specific patterns. Integrative vegetative and reproductive development models incorporating generalized linear models as components were built. These models described the occurrence, intensity, and timing of vegetative and reproductive development at the growth unit scale. This study showed significant interplays between structural and temporal components of plant architectural development at two temporal scales. Within a growing cycle, earliness of bud burst was highly and positively related to earliness of vegetative development and flowering. Between growing cycles, flowering growth units delayed vegetative development compared to growth units that did not flower. These interplays explained how vegetative and reproductive phenological asynchronisms within and between trees were generated and maintained. It is suggested that causation networks involving structural and temporal components may give rise to contrasted tree architectures.
EDNA: Expert fault digraph analysis using CLIPS
NASA Technical Reports Server (NTRS)
Dixit, Vishweshwar V.
1990-01-01
Traditionally, fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate the real system dependencies and hence are easy to develop, validate and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find. Available algorithms tend to be complicated and slow. On the other hand, tree analysis (VGRH, Tayl) is well understood and rooted in a vast body of research and analytical techniques. Tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a (cyclic) digraph into trees (CLP, LP) is a viable approach to blend the advantages of the two representations. Neither digraphs nor trees provide the ability to handle heuristic knowledge. An expert system is essential to capture the engineering knowledge. We propose an approach here, namely expert network analysis, combining the digraph representation with tree algorithms. The models are augmented with probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge. Mixed analysis, in which only some nodes carry probabilities, is possible. The tool provides a graphics interface for input, query, and update. With this combined approach, it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.
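The digraph-to-tree transformation at the heart of this approach can be sketched as a depth-first spanning tree in which back-edges (directed cycles) are cut and recorded for separate handling. This is only a minimal illustration of the idea, not the cited CLP/LP algorithm, and the fault names are hypothetical:

```python
def digraph_to_tree(graph, root):
    """Spanning tree (child lists) of a dependency digraph, plus cut edges."""
    tree = {n: [] for n in graph}
    cut, on_path, visited = [], set(), set()

    def dfs(u):
        visited.add(u)
        on_path.add(u)
        for v in graph.get(u, []):
            if v in on_path:
                cut.append((u, v))        # directed cycle: cut this edge
            elif v not in visited:
                tree[u].append(v)
                dfs(v)
            else:
                cut.append((u, v))        # node already placed in the tree
        on_path.discard(u)

    dfs(root)
    return tree, cut

# A cyclic fault dependency: pump failure <-> loss of coolant.
digraph = {"top": ["pump", "valve"], "pump": ["coolant"],
           "coolant": ["pump"], "valve": []}
print(digraph_to_tree(digraph, "top"))
```

The resulting tree is analyzable with standard fault tree algorithms, while the cut edges remain available to expert rules that reason about the feedback loops.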
NASA Astrophysics Data System (ADS)
Hu, Bingbing; Li, Bing
2016-02-01
It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method originally based on discrete wavelet transform (DWT) has disadvantages such as shift variance and the aliasing effects in engineering application. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis by the merits of DTCWT (nearly shift invariant and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching the single wavelet basis function to the signal being analyzed, which may speed up the signal processing and be employed in on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer ring and shaft coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post spectral analysis, and EMD-based spectral analysis methods.
Locating hardware faults in a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-04-13
Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
Wilkinson, Maxwell W; McCaffrey, Ken J W; Jones, Richard R; Roberts, Gerald P; Holdsworth, Robert E; Gregory, Laura C; Walters, Richard J; Wedmore, Luke; Goodall, Huw; Iezzi, Francesco
2017-07-04
The temporal evolution of slip on surface ruptures during an earthquake is important for assessing fault displacement, defining seismic hazard and for predicting ground motion. However, measurements of near-field surface displacement at high temporal resolution are elusive. We present a novel record of near-field co-seismic displacement, measured with 1-second temporal resolution during the 30th October 2016 Mw 6.6 Vettore earthquake (Central Italy), using low-cost Global Navigation Satellite System (GNSS) receivers located in the footwall and hangingwall of the Mt. Vettore - Mt. Bove fault system, close to new surface ruptures. We observe a clear temporal and spatial link between our near-field record and InSAR, far-field GPS data, regional measurements from the Italian Strong Motion and National Seismic networks, and field measurements of surface ruptures. Comparison of these datasets illustrates that the observed surface ruptures are the propagation of slip from depth on a surface rupturing (i.e. capable) fault array, as a direct and immediate response to the 30th October earthquake. Large near-field displacement ceased within 6-8 seconds of the origin time, implying that shaking induced gravitational processes were not the primary driving mechanism. We demonstrate that low-cost GNSS is an accurate monitoring tool when installed as custom-made, short-baseline networks.
Dong, Shirley Xiaobi; Davies, Stuart J; Ashton, Peter S; Bunyavejchewin, Sarayudh; Supardi, M N Nur; Kassim, Abd Rahman; Tan, Sylvester; Moorcroft, Paul R
2012-10-07
The response of tropical forests to global climate variability and change remains poorly understood. Results from long-term studies of permanent forest plots have reported different, and in some cases opposing trends in tropical forest dynamics. In this study, we examined changes in tree growth rates at four long-term permanent tropical forest research plots in relation to variation in solar radiation, temperature and precipitation. Temporal variation in the stand-level growth rates measured at five-year intervals was found to be positively correlated with variation in incoming solar radiation and negatively related to temporal variation in night-time temperatures. Taken alone, neither solar radiation variability nor the effects of night-time temperatures can account for the observed temporal variation in tree growth rates across sites, but when considered together, these two climate variables account for most of the observed temporal variability in tree growth rates. Further analysis indicates that the stand-level response is primarily driven by the responses of smaller-sized trees (less than 20 cm in diameter). The combined temperature and radiation responses identified in this study provide a potential explanation for the conflicting patterns in tree growth rates found in previous studies.
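The paper's statistical argument, that neither climate variable alone explains growth variation but the two together do, can be sketched with a toy regression. The data below are synthetic (coefficients and noise level invented), built so that growth rises with radiation and falls with night-time temperature:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40                                     # stand-in for census intervals
radiation = rng.normal(0.0, 1.0, n)        # standardised solar radiation
night_temp = rng.normal(0.0, 1.0, n)       # standardised night temperature
growth = 0.6 * radiation - 0.8 * night_temp + rng.normal(0.0, 0.2, n)

def r_squared(X, y):
    X = np.column_stack([X, np.ones(len(y))])    # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_rad = r_squared(radiation[:, None], growth)
r2_temp = r_squared(night_temp[:, None], growth)
r2_both = r_squared(np.column_stack([radiation, night_temp]), growth)
print(round(r2_rad, 2), round(r2_temp, 2), round(r2_both, 2))
```

Either single-predictor fit leaves most variance unexplained, while the two-predictor fit captures nearly all of it, mirroring the qualitative finding above.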
Model authoring system for fail safe analysis
NASA Technical Reports Server (NTRS)
Sikora, Scott E.
1990-01-01
The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, the circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which then are searched to generate either a graphical fault tree analysis or failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
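The effect being quantified, imperfect coverage dominating the benefit of redundancy, can be seen in a minimal duplex Markov sketch. The failure rate, coverage value, and mission time below are illustrative stand-ins, not the F18 figures:

```python
def duplex_unreliability(lam, c, t, steps=20000):
    """Unreliability of a duplex with per-unit failure rate lam and
    coverage c: an uncovered unit failure crashes the system outright."""
    dt = t / steps
    p2, p1, pf = 1.0, 0.0, 0.0        # states: 2 good, 1 good, failed
    for _ in range(steps):
        f2 = 2.0 * lam * p2 * dt      # a unit fails while two are up
        f1 = lam * p1 * dt            # the last unit fails
        p2 -= f2
        p1 += c * f2 - f1             # covered failures degrade gracefully
        pf += (1.0 - c) * f2 + f1     # uncovered failures are fatal
    return pf

# Even 1% uncovered failures noticeably raise the failure probability:
print(duplex_unreliability(1e-4, 1.00, 1000.0))   # redundancy-limited
print(duplex_unreliability(1e-4, 0.99, 1000.0))   # coverage-limited
```

The uncovered-failure path bypasses the redundancy entirely, which is why coverage terms belong in the quantitative model alongside the fault tree or digraph structure.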
Modelling Fault Zone Evolution: Implications for fluid flow.
NASA Astrophysics Data System (ADS)
Moir, H.; Lunn, R. J.; Shipton, Z. K.
2009-04-01
Flow simulation models are of major interest to many industries, including hydrocarbon production, nuclear waste disposal, carbon dioxide sequestration and mining. One of the major uncertainties in these models is in predicting the permeability of faults, principally in the detailed structure of the fault zone. Studying the detailed structure of a fault zone is difficult because of the inaccessible nature of sub-surface faults and also because of their highly complex nature; fault zones show a high degree of spatial and temporal heterogeneity, i.e. the properties of a fault change along its length and also through time. It is well understood that faults influence fluid flow characteristics. They may act as a conduit or a barrier, or even as both by blocking flow across the fault while promoting flow along it. Controls on fault hydraulic properties include cementation, stress field orientation, fault zone components and fault zone geometry. Within brittle rocks, such as granite, fracture networks are limited but provide the dominant pathway for flow within this rock type. Research at the EU's Soultz-sous-Forêts Hot Dry Rock test site [Evans et al., 2005] showed that 95% of flow into the borehole was associated with a single fault zone at 3490 m depth, and that 10 open fractures account for the majority of flow within the zone. These data underline the critical role of faults in deep flow systems and the importance of achieving a predictive understanding of fault hydraulic properties. To improve estimates of fault zone permeability, it is important to understand the underlying hydro-mechanical processes of fault zone formation. In this research, we explore the spatial and temporal evolution of fault zones in brittle rock through development and application of a 2D hydro-mechanical finite element model, MOPEDZ.
The authors have previously presented numerical simulations of the development of fault linkage structures from two or three pre-existing joints, the results of which compare well with features observed in mapped exposures. For these simple simulations from a small number of pre-existing joints, the fault zone evolves in a predictable way: fault linkage is governed by three key factors: the ratio of σ1 (maximum compressive stress) to σ3 (minimum compressive stress), the original geometry of the pre-existing structures (contractional vs. dilational geometries), and the orientation of the principal stress direction (σ1) relative to the pre-existing structures. In this paper we present numerical simulations of the temporal and spatial evolution of fault linkage structures from many pre-existing joints. The initial locations, sizes and orientations of these joints are based on field observations of cooling joints in granite from the Sierra Nevada. We show that the constantly evolving geometry and local stress field perturbations contribute significantly to fault zone evolution. The locations and orientations of linkage structures previously predicted by the simple simulations are consistent with the predicted geometries in the more complex fault zones; however, the exact location at which individual structures form is not easily predicted. Markedly different fault zone geometries are predicted when the pre-existing joints are rotated with respect to the maximum compressive stress. In particular, fault surfaces range from evolving as smooth linear structures to producing complex 'stepped' fault zone geometries. These geometries have a significant effect on simulations of along-fault and across-fault flow.
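The control of joint orientation on reactivation can be illustrated with a classic slip-tendency calculation: the resolved shear over normal stress on a plane, as a function of the angle between σ1 and the plane's normal. The stress magnitudes below are illustrative, not values from the MOPEDZ simulations:

```python
import numpy as np

def slip_tendency(theta_deg, s1, s3):
    """tau / sigma_n on a plane whose normal makes angle theta with sigma-1."""
    t = np.radians(theta_deg)
    sigma_n = 0.5 * (s1 + s3) + 0.5 * (s1 - s3) * np.cos(2.0 * t)
    tau = 0.5 * (s1 - s3) * np.sin(2.0 * t)
    return tau / sigma_n

angles = np.arange(1, 90)
ts = slip_tendency(angles, s1=100.0, s3=40.0)   # illustrative stresses (MPa)
print(angles[np.argmax(ts)])  # the most easily reactivated orientation
```

Pre-existing joints near this optimum orientation reactivate first, while rotating the joint set relative to σ1, as in the simulations above, shifts which structures link and hence the resulting fault zone geometry.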
Chen, Gang; Song, Yongduan; Lewis, Frank L
2016-05-03
This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot arms are conducted and the results confirm the benefits and the effectiveness of the proposed distributed fault-tolerant control algorithms.
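A drastically simplified, first-order sketch can convey the flavor of adaptive bias compensation without fault detection (the topology, gains, and single bias fault are invented; the paper's full Euler-Lagrange scheme is far richer). Followers on a directed spanning tree track the root, and an adaptive estimate cancels a constant actuator bias using only the local tracking error:

```python
# Directed spanning tree with root 0 (the active target system).
parent = {1: 0, 2: 0, 3: 1}
x = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}    # root holds the target state
bias, b_hat = 0.5, 0.0                   # constant actuator fault, estimate
k, gamma, dt = 1.0, 2.0, 0.05            # control gain, adaptation gain, step

for _ in range(4000):
    new = dict(x)
    for i, p in parent.items():
        e = x[p] - x[i]                  # local tracking error
        u = k * e
        if i == 3:                       # faulty actuator adds a bias...
            u += bias - b_hat            # ...partly cancelled by the estimate
            b_hat -= gamma * e * dt      # adaptive law: no fault detection
        new[i] = x[i] + dt * u
    x = new

print(round(x[3], 3), round(b_hat, 3))  # → 1.0 0.5
```

At equilibrium the error forces the estimate to equal the true bias, so the faulty follower tracks the target exactly, which is the qualitative property the paper establishes for the much more general setting.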
Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Ren, C. X.; Bolton, D. C.; Marone, C.; Johnson, P. A.
2017-12-01
Fault friction controls nearly all aspects of fault rupture, yet it can only be measured in the laboratory. Here we describe laboratory experiments where acoustic emissions are recorded from the fault. We find that by applying a machine learning approach known as "extreme gradient boosting trees" to the continuous acoustical signal, the fault friction can be directly inferred, showing that instantaneous characteristics of the acoustic signal are a fingerprint of the frictional state. This machine-learning-based inference leads to a simple law that links the acoustic signal to the friction state and holds for every stress cycle the laboratory fault goes through. The approach uses no measured parameter other than instantaneous statistics of the acoustic signal. This finding may be important for inferring frictional characteristics from seismic waves in the Earth, where fault friction cannot be measured.
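The key result, that windowed statistics of the continuous signal predict the instantaneous frictional state, can be mimicked with a synthetic stand-in. The toy physics below (acoustic variance scaling with 1 - friction) is invented purely for illustration, and a plain least-squares fit replaces the gradient-boosted trees:

```python
import numpy as np

rng = np.random.default_rng(1)
n_win, win = 200, 1024
# Friction slowly cycles, as in the laboratory stress cycles:
friction = 0.5 + 0.1 * np.sin(np.linspace(0.0, 6.0 * np.pi, n_win))
# Assumed toy physics: acoustic variance scales with (1 - friction).
windows = [rng.normal(0.0, np.sqrt(1.0 - mu), win) for mu in friction]

variance = np.array([w.var() for w in windows])   # instantaneous statistic
A = np.column_stack([variance, np.ones(n_win)])
coef, *_ = np.linalg.lstsq(A, friction, rcond=None)
predicted = A @ coef

print(np.corrcoef(predicted, friction)[0, 1])     # close to 1
```

A single instantaneous statistic suffices here because the toy signal was built that way; the paper's contribution is showing that real acoustic emissions carry an analogous fingerprint.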
The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications
NASA Technical Reports Server (NTRS)
Chau, Savio N.; Alkalai, Leon; Tai, Ann T.
2000-01-01
The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.
Earthquake Clustering on Normal Faults: Insight from Rate-and-State Friction Models
NASA Astrophysics Data System (ADS)
Biemiller, J.; Lavier, L. L.; Wallace, L.
2016-12-01
Temporal variations in slip rate on normal faults have been recognized in Hawaii and the Basin and Range. The recurrence intervals of these slip transients range from 2 years on the flanks of Kilauea, Hawaii to 10 kyr timescale earthquake clustering on the Wasatch Fault in the eastern Basin and Range. In addition to these longer recurrence transients in the Basin and Range, recent GPS results there also suggest elevated deformation rate events with recurrence intervals of 2-4 years. These observations suggest that some active normal fault systems are dominated by slip behaviors that fall between the end-members of steady aseismic creep and periodic, purely elastic, seismic-cycle deformation. Recent studies propose that 200 year to 50 kyr timescale supercycles may control the magnitude, timing, and frequency of seismic-cycle earthquakes in subduction zones, where aseismic slip transients are known to play an important role in total deformation. Seismic cycle deformation of normal faults may be similarly influenced by its timing within long-period supercycles. We present numerical models (based on rate-and-state friction) of normal faults such as the Wasatch Fault showing that realistic rate-and-state parameter distributions along an extensional fault zone can give rise to earthquake clusters separated by 500 yr - 5 kyr periods of aseismic slip transients on some portions of the fault. The recurrence intervals of events within each earthquake cluster range from 200 to 400 years. Our results support the importance of stress and strain history as controls on a normal fault's present and future slip behavior and on the characteristics of its current seismic cycle. These models suggest that long- to medium-term fault slip history may influence the temporal distribution, recurrence interval, and earthquake magnitudes for a given normal fault segment.
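The rate-and-state framework underlying these models reduces to two constitutive equations; a minimal sketch (parameter values illustrative, not calibrated to the Wasatch Fault) shows the steady-state form and the velocity-weakening condition a - b < 0 that permits stick-slip events:

```python
import numpy as np

# Rate-and-state friction with the Dieterich aging law:
#   mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),  dtheta/dt = 1 - V*theta/Dc
# At steady state theta = Dc/V, so mu_ss = mu0 + (a - b)*ln(V/V0).
mu0, a, b, V0 = 0.6, 0.010, 0.015, 1e-6   # illustrative values

def mu_ss(V):
    """Steady-state friction coefficient at slip velocity V (m/s)."""
    return mu0 + (a - b) * np.log(V / V0)

# a - b < 0: steady-state friction falls as slip speeds up
# (velocity weakening), the condition for seismic slip.
print(mu_ss(1e-5) < mu_ss(1e-6))  # → True
```

Distributing velocity-weakening (a - b < 0) and velocity-strengthening (a - b > 0) patches along a modeled fault is what lets some portions host earthquake clusters while others slip in aseismic transients.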
Structural Controllability of Temporal Networks with a Single Switching Controller
Yao, Peng; Hou, Bao-Yu; Pan, Yu-Jian; Li, Xiang
2017-01-01
Temporal networks, whose topologies evolve with time, are an important class of complex networks. Temporal trees of a temporal network describe the necessary edges sustaining the network as well as their active time points. By means of a switching controller which properly selects its location over time, temporal trees are used to improve the controllability of the network, so that more nodes are controlled within the limited time. Several switching strategies to efficiently select the location of the controller are designed, and they are verified with synthetic and empirical temporal networks to achieve better control performance. PMID:28107538
Fault linkage and continental breakup
NASA Astrophysics Data System (ADS)
Cresswell, Derren; Lymer, Gaël; Reston, Tim; Stevenson, Carl; Bull, Jonathan; Sawyer, Dale; Morgan, Julia
2017-04-01
The magma-poor rifted margin off the west coast of Galicia (NW Spain) has provided some of the key observations in the development of models describing the final stages of rifting and continental breakup. In 2013, we collected a 68 x 20 km 3D seismic survey across the Galicia margin, NE Atlantic. Processing through to 3D Pre-stack Time Migration (12.5 m bin-size) and 3D depth conversion reveals the key structures, including an underlying detachment fault (the S detachment), and the intra-block and inter-block faults. These data reveal multiple phases of faulting which overlap spatially and temporally and which have thinned the crust to between zero and a few km thickness, producing 'basement windows' where crustal basement has been completely pulled apart and sediments lie directly on the mantle. Two approximately N-S trending fault systems are observed: 1) a margin-proximal system of two linked faults that are the upward extension (breakaway faults) of the S; in the south they form one surface that splays northward to form two faults with an intervening fault block. These faults were thus demonstrably active at one time rather than sequentially. 2) An oceanward relay structure that shows clear along-strike linkage. Faults within the relay trend NE-SW and heavily dissect the basement. The main block-bounding faults can be traced from the S detachment through the basement into, and heavily deforming, the syn-rift sediments where they die out, suggesting that the faults propagated up from the S detachment surface. Analysis of the fault heaves and associated maps at different structural levels shows complementary fault systems. The pattern of faulting suggests a variation in the main tectonic transport direction moving oceanward.
This might be interpreted as a temporal change during sequential faulting; however, the transfer of extension between faults and the lateral variability of fault blocks suggest that many of the faults across the 3D volume were active at least in part simultaneously. Alternatively, extension may have varied in direction spatially if it were a rotation about a pole located to the north.
NASA Astrophysics Data System (ADS)
Zielke, Olaf; Arrowsmith, Ramon
2010-05-01
Slip-rates along individual faults may differ as a function of measurement time scale. Short-term slip-rates may be higher than the long-term rate and vice versa. For example, vertical slip-rates along the Wasatch Fault, Utah are 1.7+/-0.5 mm/yr since 6 ka, <0.6 mm/yr since 130 ka, and 0.5-0.7 mm/yr since 10 Ma (Friedrich et al., 2003). Following conventional earthquake recurrence models like the characteristic earthquake model, this observation implies that the driving strain accumulation rates may have changed over the respective time scales as well. While potential explanations for such slip-rate variations may be found, for example, in the reorganization of plate tectonic motion or mantle flow dynamics, causing changes in the crustal velocity field over long spatial wavelengths, no single geophysical explanation exists. Temporal changes in earthquake rate (i.e., event clustering) due to elastic interactions within a complex fault system may present an alternative explanation that requires neither variations in strain accumulation rate nor changes in fault constitutive behavior for frictional sliding. In the presented study, we explore this scenario and investigate how fault geometric complexity, fault segmentation and fault (segment) interaction affect the seismic behavior and slip-rate along individual faults while keeping tectonic stressing-rate and frictional behavior constant in time. For that, we used FIMozFric--a physics-based numerical earthquake simulator, based on Okada's (1992) formulations for internal displacements and strains due to shear and tensile faults in a half-space. Faults are divided into a large number of equal-sized fault patches which communicate via elastic interaction, allowing implementation of geometrically complex, non-planar faults. Each patch is assigned a static and a dynamic friction coefficient.
The difference between those values is a function of depth--corresponding to the temperature dependence of velocity weakening that is observed in laboratory friction experiments and expressed in the [a-b] term of rate-and-state friction (RSF) theory. Patches in the seismic zone are incrementally loaded during the interseismic phase. An earthquake initiates if shear stress along at least one (seismic) patch exceeds its static frictional strength, and it may grow in size due to elastic interaction with other fault patches (static stress transfer). Aside from investigating slip-rate variations due to the elastic interactions within a fault system with this tool, we want to show how such modeling results can be very useful in exploring the physics underlying the patterns observed in paleoseismology, and that the two approaches (simulation and observation) can be merged, with both making important contributions. Using FIMozFric, we generated synthetic seismic records for a large number of fault geometries and structural scenarios to investigate along-fault slip accumulation patterns and the variability of slip at a point. Our simulations show that fault geometric complexity and the accompanying fault interactions and multi-fault ruptures may cause temporal deviations from the average fault slip-rate, in other words phases of earthquake clustering or relative quiescence. Slip-rates along faults within an interacting fault system may change even when the loading function (stressing rate) remains constant, and the magnitude of slip-rate change appears to be proportional to the magnitude of fault interaction. Thus, spatially isolated and structurally mature faults are expected to experience smaller slip-rate changes than strongly interacting and less mature faults. The magnitude of slip-rate change may serve as a proxy for the magnitude of fault interaction and vice versa.
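FIMozFric itself is not reproduced here, but the patch-based mechanism the abstract describes (constant tectonic loading, static/dynamic friction thresholds, and cascading static stress transfer) can be sketched as a toy model. All parameter values and the nearest-neighbour-only interaction are illustrative assumptions, not the authors' implementation:

```python
import random

def simulate_fault(n_patches=20, steps=2000, load_rate=1.0, tau_static=100.0,
                   tau_dynamic=60.0, coupling=0.2, seed=0):
    """Toy cellular fault model: patches load at a constant tectonic rate,
    fail when stress exceeds static strength, drop to the dynamic level,
    and pass part of the stress drop to their neighbours."""
    rng = random.Random(seed)
    stress = [rng.uniform(0.0, tau_static) for _ in range(n_patches)]
    events = []  # (time step, rupture size in patches)
    for t in range(steps):
        stress = [s + load_rate for s in stress]  # interseismic loading
        failed = set()
        while True:  # cascade via static stress transfer
            over = [i for i, s in enumerate(stress)
                    if s >= tau_static and i not in failed]
            if not over:
                break
            for i in over:
                drop = stress[i] - tau_dynamic
                stress[i] = tau_dynamic
                for j in (i - 1, i + 1):  # nearest-neighbour interaction only
                    if 0 <= j < n_patches:
                        stress[j] += coupling * drop
                failed.add(i)  # each patch slips once per event in this sketch
        if failed:
            events.append((t, len(failed)))
    return events

events = simulate_fault()
```

Multi-patch entries in `events` are the sketch's analogue of the multi-fault ruptures that produce clustering in the abstract's simulations.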
NASA Astrophysics Data System (ADS)
Davarpanah, A.; Babaie, H. A.; Dai, D.
2013-12-01
Two systems of full and half grabens have been forming since the mid-Tertiary through tectonic and thermally induced extensional events in SW Montana and neighboring SE Idaho. The earlier mid-Tertiary Basin and Range (BR) tectonic event formed the NW- and NE-striking mountains around the Snake River Plain (SRP) in Idaho and SW Montana, respectively. Since the mid-Tertiary, partially synchronous with the BR event, diachronous bulging and subsidence due to the thermally induced stress field of the Yellowstone hotspot (YHS) has produced the second system of variably-oriented grabens through faulting across the older BR fault blocks. The track of the migration of the YHS is defined by the presence of six prominent volcanic calderas along the SRP, which become younger toward the present location of the YHS. Graben basins bounded by both the BR faults and the thermally induced cross-fault (CF) systems are now filled with Tertiary-Quaternary clastic sedimentary and volcanic-volcaniclastic rocks. Neogene mafic and felsic lavas that erupted along the SRP and clastic sedimentary units (Sixmile Creek Fm., Ts) deposited in both types of graben basins were classified based on their lithology and age, and mapped in ArcGIS 10 as polygons using a combination of MBMG and USGS databases and geological maps at scales of 1:250,000, 1:100,000, and 1:48,000. The spatio-temporal distributions of the lava polygons were then analyzed applying the Global and Local Moran's I methods to detect any possible spatial or temporal autocorrelation relative to the track of the YHS. The results reveal the spatial autocorrelation of the lithology and age of the Neogene lavas, and suggest a spatio-temporal sequence of eruption of extrusive rocks between Miocene and late Pleistocene along the SRP. The sequence of eruptions, which progressively becomes younger toward Yellowstone National Park, may track the migration of the YHS.
The sub-parallelism of the trend of the SRP with the long axes of the standard deviation ellipses (SDEs), which give the trend of the dispersion of the centroids of lavas erupted at different times, and the spatio-temporally ordered overlap of older lavas by younger ones, which were progressively erupted to the northeast of the older lavas, indicate the spatio-temporal migration of the centers of eruption along the SRP. Prominent graben basins that formed and filled during and after the BR normal faulting event were distinguished from those that formed during and after the cross-faulting event based on cross-cutting relationships and the trend of their long dimension (determined by applying the Dissolve and Minimum Bounding Geometry tools in ArcGIS 10) relative to the linear directional mean (LDM) of the BR and CF sets. The parallelism of the mean trend of the Ts graben fill polygons with the LDM of each of the two BR fault trace sets in the eastern SRP indicates that the Neogene deposition of the Ts is post-BR and pre- to syn-cross-faulting. Cross-fault-bounded graben valleys filled with Ts roughly sub-parallel the mean trend of the CF sets, indicating that they formed after the BR faulting event.
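Global Moran's I, the autocorrelation statistic applied to the lava polygons, has a compact closed form; the sketch below uses a hypothetical line of five polygon centroids with rook-contiguity weights, not the study's data:

```python
def morans_i(values, weights):
    """Global Moran's I: (n / sum(w)) * sum_ij(w_ij * z_i * z_j) / sum_i(z_i^2),
    where z_i are deviations from the mean attribute value."""
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
    den = sum(zi * zi for zi in z)
    return (n / w_sum) * num / den

# Hypothetical lava ages (Ma) decreasing along strike, with rook-style
# contiguity weights for a simple line of five neighbouring polygons:
ages = [16.0, 12.0, 8.0, 4.0, 0.5]
w = [[0.0] * 5 for _ in range(5)]
for i in range(4):
    w[i][i + 1] = w[i + 1][i] = 1.0
val = morans_i(ages, w)
print(round(val, 3))  # -> 0.509, strong positive spatial autocorrelation
```

A positive I near the toy value indicates that similar ages cluster in space, the kind of signal the abstract interprets as hotspot migration.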
Comparison of Observed Spatio-temporal Aftershock Patterns with Earthquake Simulator Results
NASA Astrophysics Data System (ADS)
Kroll, K.; Richards-Dinger, K. B.; Dieterich, J. H.
2013-12-01
Due to the complex nature of faulting in southern California, knowledge of rupture behavior near fault step-overs is of critical importance to properly quantify and mitigate seismic hazards. Estimates of earthquake probability are complicated by the uncertainty that a rupture will stop at or jump a fault step-over, which affects both the magnitude and frequency of occurrence of earthquakes. In recent years, earthquake simulators and dynamic rupture models have begun to address the effects of complex fault geometries on earthquake ground motions and rupture propagation. Early models incorporated vertical faults with highly simplified geometries. Many current studies examine the effects of varied fault geometry, fault step-overs, and fault bends on rupture patterns; however, these works are limited by the small numbers of integrated fault segments and simplified orientations. The previous work of Kroll et al., 2013 on the northern extent of the 2010 El Mayor-Cucapah rupture in the Yuha Desert region uses precise aftershock relocations to show an area of complex conjugate faulting within the step-over region between the Elsinore and Laguna Salada faults. Here, we employ an innovative approach of incorporating this fine-scale fault structure, defined through seismological, geologic, and geodetic means, in the physics-based earthquake simulator RSQSim to explore the effects of fine-scale structures on stress transfer and rupture propagation and to examine the mechanisms that control aftershock activity and local triggering of other large events. We run simulations with primary fault structures in the state of California and northern Baja California and incorporate complex secondary faults in the Yuha Desert region. These models produce aftershock activity that enables comparison between the observed and predicted distributions and allows for examination of the mechanisms that control them.
We investigate how the spatial and temporal distributions of aftershocks are affected by changes to model parameters such as shear and normal stress, rate-and-state frictional properties, fault geometry, and slip rate.
J.M. Rice; C.B. Halpern; J.A. Antos; J.A. Jones
2012-01-01
Tree invasions of grasslands are occurring globally, with profound consequences for ecosystem structure and function. We explore the spatio-temporal dynamics of tree invasion of a montane meadow in the Cascade Mountains of Oregon, where meadow loss is a conservation concern. We examine the early stages of invasion, where extrinsic and intrinsic processes can be clearly...
Fault-zone waves observed at the southern Joshua Tree earthquake rupture zone
Hough, S.E.; Ben-Zion, Y.; Leary, P.
1994-01-01
Waveform and spectral characteristics of several aftershocks of the M 6.1 22 April 1992 Joshua Tree earthquake recorded at stations just north of the Indio Hills in the Coachella Valley can be interpreted in terms of waves propagating within narrow, low-velocity, high-attenuation, vertical zones. Evidence for our interpretation consists of: (1) emergent P arrivals prior to and opposite in polarity to the impulsive direct phase; these arrivals can be modeled as headwaves indicative of a transfault velocity contrast; (2) spectral peaks in the S wave train that can be interpreted as internally reflected, low-velocity fault-zone wave energy; and (3) spatial selectivity of event-station pairs at which these data are observed, suggesting a long, narrow geologic structure. The observed waveforms are modeled using the analytical solution of Ben-Zion and Aki (1990) for a plane-parallel layered fault-zone structure. Synthetic waveform fits to the observed data indicate the presence of NS-trending vertical fault-zone layers characterized by a thickness of 50 to 100 m, a velocity decrease of 10 to 15% relative to the surrounding rock, and a P-wave quality factor in the range 25 to 50.
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
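One widely used probability-to-possibility transformation (attributed to Dubois and Prade: with outcomes ordered by decreasing probability, each outcome's possibility is the tail sum of probabilities from its own rank onward) can be sketched as follows; the basic-event probabilities are invented for illustration:

```python
def prob_to_poss(probs):
    """Transform a discrete probability distribution into its most specific
    consistent possibility distribution: order outcomes by decreasing
    probability and assign each the tail probability sum from its rank on.
    (Ties would need extra care; ignored in this sketch.)"""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    poss = [0.0] * len(probs)
    tail = 1.0
    for i in order:
        poss[i] = tail
        tail -= probs[i]
    return poss

# Invented outcome probabilities for one basic event of a fault tree:
p = [0.5, 0.3, 0.2]
print([round(v, 3) for v in prob_to_poss(p)])  # -> [1.0, 0.5, 0.2]
```

The most likely outcome always receives possibility 1, which is what makes the transform consistent with the original probabilities while remaining maximally specific.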
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before finding the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system that performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base. It then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices.
If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
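The ranking step can be illustrated by the classic probability-to-cost ordering for independent single-fault hypotheses; the component names, probabilities, and costs below are hypothetical, and FTDOTS's actual scoring may differ:

```python
def optimal_test_sequence(hypotheses):
    """Order hypothesized failures by decreasing probability/cost ratio, the
    classic policy that minimizes expected isolation cost when single-fault
    hypotheses are checked one at a time."""
    return sorted(hypotheses, key=lambda h: h[1] / h[2], reverse=True)

# (name, failure probability, isolation cost in minutes) -- all hypothetical:
candidates = [
    ("power supply", 0.10, 5.0),
    ("sensor board", 0.60, 20.0),
    ("cable harness", 0.30, 3.0),
]
ranked = optimal_test_sequence(candidates)
print([name for name, _, _ in ranked])  # cheap-and-likely checks come first
```

Note that the likeliest failure (the sensor board) is not checked first: its high isolation cost pushes it behind the cheap cable-harness check.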
NASA Astrophysics Data System (ADS)
Zarco-Tejada, P. J.; Hornero, A.; Hernández-Clemente, R.; Beck, P. S. A.
2018-03-01
The operational monitoring of forest decline requires the development of remote sensing methods that are sensitive to the spatiotemporal variations of pigment degradation and canopy defoliation. In this context, the red-edge spectral region (RESR) was proposed in the past due to its combined sensitivity to chlorophyll content and leaf area variation. In this study, the temporal dimension of the RESR was evaluated as a function of forest decline using a radiative transfer method with the PROSPECT and 3D FLIGHT models. These models were used to generate synthetic pine stands simulating decline and recovery processes over time and explore the temporal rate of change of the red-edge chlorophyll index (CI) as compared to the trajectories obtained for the structure-related Normalized Difference Vegetation Index (NDVI). The temporal trend method proposed here consisted of using synthetic spectra to calculate the theoretical boundaries of the subspace for healthy and declining pine trees in the temporal domain, defined by CI(t=n)/CI(t=n+1) vs. NDVI(t=n)/NDVI(t=n+1). Within these boundaries, trees undergoing decline and recovery processes showed different trajectories through this subspace. The method was then validated using three high-resolution airborne hyperspectral images acquired at 40 cm resolution and 260 spectral bands of 6.5 nm full-width half-maximum (FWHM) over a forest with widespread tree decline, along with field-based monitoring of chlorosis and defoliation (i.e., 'decline' status) in 663 trees between the years 2015 and 2016. The temporal rate of change of chlorophyll vs. structural indices, based on reflectance spectra extracted from the hyperspectral images, was different for trees undergoing decline, and aligned towards the decline baseline established using the radiative transfer models. By contrast, healthy trees over time aligned towards the theoretically obtained healthy baseline.
The applicability of this temporal trend method to the red-edge bands of the MultiSpectral Imager (MSI) instrument on board Sentinel-2a for operational forest status monitoring was also explored by comparing the temporal rate of change of the Sentinel-2-derived CI over areas with declining and healthy trees. Results demonstrated that the Sentinel-2a red-edge region was sensitive to the temporal dimension of forest condition, as the relationships obtained for pixels in healthy condition deviated from those of pixels undergoing decline.
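The temporal-trajectory idea can be sketched with simple index ratios. The band choices (783 and 705 nm for a red-edge CI, 842 and 665 nm for NDVI, roughly matching Sentinel-2 bands) and all reflectance values are assumptions for illustration, not the paper's exact formulation:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def ci_red_edge(r_783, r_705):
    """One common form of the red-edge chlorophyll index: R783/R705 - 1."""
    return r_783 / r_705 - 1.0

def trajectory(ci_t0, ci_t1, ndvi_t0, ndvi_t1):
    """Point in the temporal subspace CI(t=n)/CI(t=n+1) vs NDVI(t=n)/NDVI(t=n+1).
    A declining tree loses chlorophyll faster than canopy structure, so its
    CI ratio exceeds its NDVI ratio; a healthy tree stays near (1, 1)."""
    return ci_t0 / ci_t1, ndvi_t0 / ndvi_t1

# Hypothetical two-date reflectances for a tree undergoing decline:
ci0 = ci_red_edge(0.45, 0.18)  # year 1: high chlorophyll absorption at 705 nm
ci1 = ci_red_edge(0.40, 0.25)  # year 2: chlorosis raises red-edge reflectance
nd0 = ndvi(0.45, 0.04)
nd1 = ndvi(0.40, 0.05)
ci_ratio, ndvi_ratio = trajectory(ci0, ci1, nd0, nd1)
print(round(ci_ratio, 2), round(ndvi_ratio, 2))
```

The chlorophyll index drops much faster than NDVI between dates, which is the decline signature the paper's baselines are built to detect.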
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
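A common building block of such test-sequencing algorithms is a one-step information-theoretic lookahead: pick the test whose outcome is expected to reduce the entropy of the candidate fault set the most. A minimal single-fault sketch (fault priors and test coverage are invented; the paper's multiple-fault algorithms are far more involved):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def best_test(fault_probs, tests):
    """Greedy one-step lookahead: return the test with maximum expected
    entropy reduction over the candidate fault set.
    fault_probs: {fault: prior probability}; tests: {name: faults detected}."""
    h0 = entropy(fault_probs.values())
    gains = {}
    for name, detects in tests.items():
        p_pos = sum(p for f, p in fault_probs.items() if f in detects)
        expected_h = 0.0
        for branch_p, positive in ((p_pos, True), (1.0 - p_pos, False)):
            if branch_p <= 1e-12:
                continue
            cond = [p / branch_p for f, p in fault_probs.items()
                    if (f in detects) == positive]
            expected_h += branch_p * entropy(cond)
        gains[name] = h0 - expected_h
    return max(gains, key=gains.get)

faults = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}  # uniform priors
tests = {"t1": {"A", "B"}, "t2": {"A"}}  # t1 splits the candidate set evenly
print(best_test(faults, tests))  # -> t1
```

The even split yields a full bit of information, whereas the lopsided test t2 gains only about 0.81 bits; repeated greedy choices of this kind build the diagnostic tree (or digraph).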
MacDonald III, Angus W; Zick, Jennifer L; Chafee, Matthew V; Netoff, Theoden I
2015-01-01
The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time, the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect of psychiatry's syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between-diagnosis co-morbidity.
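The "neither necessary nor sufficient" point maps directly onto AND/OR gate combinations in a fault tree; a minimal numeric sketch with invented probabilities (independence assumed):

```python
def gate_or(ps):
    """OR gate: the event occurs if any independent input occurs."""
    prod = 1.0
    for p in ps:
        prod *= 1.0 - p
    return 1.0 - prod

def gate_and(ps):
    """AND gate: the event occurs only if all independent inputs occur."""
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

# Hypothetical mini-tree: each pathway requires BOTH a basic fault and a
# failed compensating mechanism (so no single cause is sufficient), and the
# downstream failure occurs via EITHER pathway (so no single cause is
# necessary).
path1 = gate_and([0.3, 0.5])
path2 = gate_and([0.2, 0.4])
top = gate_or([path1, path2])
print(round(top, 3))  # -> 0.218
```

The same gate algebra scales to deeper trees, which is what lets fault trees express heterogeneity (multiple pathways to one failure) and co-morbidity (one basic fault feeding several trees).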
Optical fiber-fault surveillance for passive optical networks in S-band operation window
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Chi, Sien
2005-07-01
An S-band (1470 to 1520 nm) fiber laser scheme, which uses multiple fiber Bragg grating (FBG) elements as feedback elements on each passive branch, is proposed and described for in-service fault identification in passive optical networks (PONs). By tuning a wavelength selective filter located within the laser cavity over a gain bandwidth, the fiber-fault of each branch can be monitored without affecting the in-service channels. In our experiment, an S-band four-branch monitoring tree-structured PON system is demonstrated and investigated experimentally.
Sun, Weifang; Yao, Bin; Zeng, Nianyin; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-01-01
As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent sources of malfunction is the gear transmission chain. Although fault signatures can be collected via vibration signals, they are always submerged in overwhelming interfering contents. Therefore, identifying the critical fault's characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal's features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experiment results of the recognition for gear faults show the feasibility and effectiveness of the proposed method, especially for weak gear fault features.
NASA Astrophysics Data System (ADS)
Cowgill, E.; Gold, R. D.; Arrowsmith, R.; Friedrich, A. M.
2015-12-01
In elastic rebound theory, hazard increases as interseismic strain rebuilds after rupture. This model is challenged by the temporal variation in the pacing of major earthquakes that is both predicted by mechanical models and suggested by some long paleoseismic records (e.g., 1-3). However, the extent of such behavior remains unclear due to a lack of long (5-25 ky) records of fault slip. Using Monte Carlo analysis of 11 offset landforms, we determined a 16-ky record of fault slip for the active, left-lateral Altyn Tagh fault, which bounds the NW margin of the Tibetan Plateau. This history reveals a pulse of accelerated slip between 6.4 and 6.0 ka, during which the fault slipped 9 +14/-2 m at a rate of 23 +35/-5 mm/y, or ~3x the 16 ky average of 8.1 +1.2/-0.9 mm/y. These two modes of earthquake behavior suggest temporal variation in the rates of stress storage and release. The simplest explanation for the pulse is a cluster of 2-8 Mw > 7.5 earthquakes. Such supercyclicity has been reported for the Sunda (4) and Cascadia (3) megathrusts, but contrasts with steady slip along the strike-slip Alpine fault (5), for example. A second possibility is that the pulse reflects a single, unusually large rupture. However, this Black Swan event is unlikely: empirical scaling relationships require a Mw 8.2 rupture of the entire 1200-km-long ATF to produce 7 m of average slip. Likewise, Coulomb stress change from rupture on the adjacent North Altyn fault is of modest magnitude and limited overlap with the ATF. Poor temporal correlation between precipitation and the slip pulse argues against climatically modulated changes in surface loading (lakes/ice) or pore-fluid pressure.
"Paleoslip" studies such as this sacrifice the single-event resolution of paleoseismology in exchange for long records that quantify both the timing and magnitude of fault slip averaged over multiple ruptures, and are essential for documenting temporal variations in fault slip as we begin to use calibrated physical models of the earthquake cycle to forecast time-dependent earthquake hazard (e.g., 6,7). 1. Weldon et al., 2004 GSA Today 14, 4; 2. Rockwell et al., 2015, PAGEOPH, 172, 1143; 3. Goldfinger et al., 2013, SRL, 84, 24; 4. Sieh et al., 2008, Science, 322, 1674; 5. Berryman et l., 2012, Science, 336, 1690; 6. Barbot et al., 2012, Science, 336, 707; 7. Field, 2015, BSSA, 105, 544.
Lacustrine Paleoseismology Reveals Earthquake Segmentation of the Alpine Fault, New Zealand
NASA Astrophysics Data System (ADS)
Howarth, J. D.; Fitzsimons, S.; Norris, R.; Langridge, R. M.
2013-12-01
Transform plate boundary faults accommodate high rates of strain and are capable of producing large (Mw>7.0) to great (Mw>8.0) earthquakes that pose significant seismic hazard. The Alpine Fault in New Zealand is one of the longest, straightest and fastest slipping plate boundary transform faults on Earth and produces earthquakes at quasi-periodic intervals. Theoretically, the fault's linearity, isolation from other faults and quasi-periodicity should promote the generation of earthquakes that have similar magnitudes over multiple seismic cycles. We test the hypothesis that the Alpine Fault produces quasi-regular earthquakes that contiguously rupture the southern and central fault segments, using a novel lacustrine paleoseismic proxy to reconstruct spatial and temporal patterns of fault rupture over the last 2000 years. In three lakes located close to the Alpine Fault the last nine earthquakes are recorded as megaturbidites formed by co-seismic subaqueous slope failures, which occur when shaking exceeds Modified Mercalli (MM) VII. When the fault ruptures adjacent to a lake the co-seismic megaturbidites are overlain by stacks of turbidites produced by enhanced fluvial sediment fluxes from earthquake-induced landslides. The turbidite stacks record shaking intensities of MM>IX in the lake catchments and can be used to map the spatial location of fault rupture. The lake records can be dated precisely, facilitating meaningful along strike correlations, and the continuous records allow earthquakes closely spaced in time on adjacent fault segments to be distinguished. The results show that while multi-segment ruptures of the Alpine Fault occurred during most seismic cycles, sequential earthquakes on adjacent segments and single segment ruptures have also occurred. 
The complexity of the fault rupture pattern suggests that the subtle variations in fault geometry, sense of motion and slip rate that have been used to distinguish the central and southern segments of the Alpine Fault can inhibit rupture propagation, producing a soft earthquake segment boundary. The study demonstrates the utility of lakes as paleoseismometers that can be used to reconstruct the spatial and temporal patterns of earthquakes on a fault.
Fault Tolerant Real-Time Systems
1993-09-30
The ART (Advanced Real-Time Technology) Project of Carnegie Mellon University is engaged in wide-ranging research on hard real-time systems. The ... including hardware and software fault tolerance using temporal redundancy and analytic redundancy to permit the construction of real-time systems whose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Dai, Xiaoxiao; Gao, David Wenzhong
An approach of big data characterization for smart grids (SGs) and its applications in fault detection, identification, and causal impact analysis is proposed in this paper, which aims to provide substantial data volume reduction while keeping comprehensive information from synchrophasor measurements in spatial and temporal domains. Specifically, based on secondary voltage control (SVC) and a local SG observation algorithm, a two-layer dynamic optimal synchrophasor measurement devices selection algorithm (OSMDSA) is proposed to determine SVC zones, their corresponding pilot buses, and the optimal synchrophasor measurement devices. Combining the two-layer dynamic OSMDSA and matching pursuit decomposition, the synchrophasor data is completely characterized in the spatial-temporal domain. To demonstrate the effectiveness of the proposed characterization approach, SG situational awareness is investigated based on hidden Markov model based fault detection and identification using the spatial-temporal characteristics generated from the reduced data. To identify the major impact buses, the weighted Granger causality for SGs is proposed to investigate the causal relationship of buses during system disturbance. The IEEE 39-bus system and IEEE 118-bus system are employed to validate and evaluate the proposed approach.
NASA Astrophysics Data System (ADS)
Bayrak, Erdem; Yılmaz, Şeyda; Bayrak, Yusuf
2017-05-01
The temporal and spatial variations of the Gutenberg-Richter parameter (b-value) and fractal dimension (DC) during the period 1900-2010 in Western Anatolia were investigated. The study area is divided into 15 different source zones based on their tectonic and seismotectonic regimes. We calculated the temporal variation of b and DC values in each region using Zmap. The temporal variation of these parameters for the prediction of major earthquakes was calculated. The spatial distribution of these parameters is related to the stress levels of the faults. We observed that b and DC values change before the major earthquakes in the 15 seismic regions. To evaluate the spatial distribution of b and DC values, a 0.50° × 0.50° grid interval was used. The b-values smaller than 0.70 are related to the Aegean Arc and Eskisehir Fault. The highest values are related to the Sultandağı and Sandıklı Faults. The fractal correlation dimension varies from 1.65 to 2.60, which shows that the study area has a higher DC value. The lowest DC values are related to the joining area between the Aegean and Cyprus arcs, the Burdur-Fethiye fault zone. Some have concluded that b-values drop instantly before large shocks. Others suggested that temporally stable low b-value zones identify future large earthquake locations. The results reveal that large earthquakes occur when b decreases and DC increases, suggesting that variation of b and DC can be used as an earthquake precursor. Mapping of b and DC values provides information about the state of stress in the region, i.e., lower b and higher DC values are associated with epicentral areas of large earthquakes.
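The b-value estimation underlying such maps is commonly done with Aki's maximum-likelihood formula; a minimal sketch on a synthetic Gutenberg-Richter catalogue (the magnitude-binning correction is omitted):

```python
import math
import random

def b_value(magnitudes, mc):
    """Aki (1965) maximum-likelihood b-value estimate for events with M >= mc:
    b = log10(e) / (mean(M) - Mc)."""
    mags = [m for m in magnitudes if m >= mc]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - mc)

# Synthetic catalogue drawn from a Gutenberg-Richter law with b = 1:
# magnitude exceedances above Mc are exponential with rate beta = b * ln(10).
rng = random.Random(1)
beta = math.log(10)
catalogue = [2.0 + rng.expovariate(beta) for _ in range(5000)]
b = b_value(catalogue, 2.0)
print(round(b, 2))  # close to the true value of 1.0
```

Applying the same estimator to the events inside each 0.50° grid cell (with a completeness check per cell) produces the kind of spatial b-value map described above.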
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bentz, B.J.; Powell, J.A.; Logan, J.A.
1996-12-01
Colonization of a host tree by the mountain pine beetle (Dendroctonus ponderosae) involves chemical communication that enables a massive aggregation of beetles on a single resource, thereby ensuring host death and subsequent beetle population survival. Beetle populations have evolved a mechanism for termination of colonization on a lodgepole pine tree at optimal beetle densities, with a concomitant switch of attacks to nearby trees. Observations of the daily spatial and temporal attack process of mountain pine beetles (nonepidemic) attacking lodgepole pine suggest that beetles switch attacks to a new host tree before the original focus tree is fully colonized, and that verbenone, an antiaggregating pheromone, may be acting within a tree rather than between trees.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of the fault data. This is especially the case in the low to moderate seismicity regions of Europe, where slow slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in the fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow identifying the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and also provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool will be illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. At the nodes of the logic tree, different options are considered at each step of the fault-related seismic hazard computation. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry/slip rate of faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e.
minimum distance between faults) and a second one that relies on physically-based simulations. The following nodes represents for each rupture scenario different rupture forecast models (i.e; characteristic or Gutenberg-Richter) and for a given rupture forecast, two probability models commonly used in seismic hazard assessment: poissonian or time-dependent. The final node represents an exhaustive set of ground motion prediction equations chosen in order to be compatible with the region. Finally, the expected probability of exceeding a given ground motion level is computed at each sites. Results will be discussed for a few specific localities of the West Corinth Gulf.
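The Poissonian branch of the final probability node can be sketched numerically: under a Poisson model, the probability of exceeding a ground-motion level at least once in t years is 1 - exp(-λt), where λ is the annual exceedance rate from the hazard integration. A minimal illustration (the rate and horizon below are generic, not values from this study):

```python
import math

def poisson_exceedance(annual_rate, t_years):
    """Probability of at least one exceedance in t_years under a Poisson model."""
    return 1.0 - math.exp(-annual_rate * t_years)

# Illustrative numbers: a ground-motion level with a 475-year return period
# has roughly a 10% chance of being exceeded in a 50-year window.
p50 = poisson_exceedance(1.0 / 475.0, 50.0)
```

The time-dependent branch replaces the constant rate λ with a renewal-model hazard rate conditioned on the time since the last rupture.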
Earthquake Clustering in Noisy Viscoelastic Systems
NASA Astrophysics Data System (ADS)
Dicaprio, C. J.; Simons, M.; Williams, C. A.; Kenner, S. J.
2006-12-01
Geologic studies show evidence for temporal clustering of earthquakes on certain fault systems. Since post-seismic deformation may result in a variable loading rate on a fault throughout the inter-seismic period, it is reasonable to expect that the rheology of the non-seismogenic lower crust and mantle lithosphere may play a role in controlling earthquake recurrence times. Previously, the role of lithospheric rheology in the seismic cycle had been studied with a one-dimensional spring-dashpot-slider model (Kenner and Simons [2005]). In this study we use the finite element code PyLith to construct a two-dimensional continuum model of a strike-slip fault in an elastic medium overlying one or more linear Maxwell viscoelastic layers, loaded in the far field by a constant velocity boundary condition. Taking advantage of the linear properties of the model, we use the finite element solution to one earthquake as a spatio-temporal Green's function. Multiple Green's function solutions, scaled by the size of each earthquake, are then summed to form an earthquake sequence. When the shear stress on the fault reaches a predefined yield stress, it is allowed to slip, relieving all accumulated shear stress. Random variation in the fault yield stress from one earthquake to the next results in a temporally clustered earthquake sequence. The amount of clustering depends on a non-dimensional number, W, called the Wallace number. For models with one viscoelastic layer, W is equal to the standard deviation of the earthquake stress drop divided by the viscosity times the tectonic loading rate. This definition of W is modified from the original one used in Kenner and Simons [2005] by using the standard deviation of the stress drop instead of the mean stress drop. We also use a new, more appropriate, metric to measure the amount of temporal clustering of the system. W is the ratio of the viscoelastic relaxation rate of the system to the tectonic loading rate of the system.
For values of W greater than the critical value of about 10, the clustered earthquake behavior arises from rapid reloading of the fault through viscoelastic recycling of stress. A model with multiple viscoelastic layers has more complex clustering behavior than a system with only one viscosity. In this case, multiple clustering modes exist, the size and mean period of which are influenced by the viscosities and relative thicknesses of the viscoelastic layers. Kenner, S.J. and Simons, M. (2005), Temporal clustering of major earthquakes along individual faults due to post-seismic reloading, Geophysical Journal International, 160, 179-194.
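One ingredient of the model above, the randomly varying yield stress, can be sketched in isolation. The toy below deliberately omits the viscoelastic reloading that produces clustering in the full model; it only shows how a stochastic yield stress translates into variable inter-event times, summarized by the coefficient of variation (all parameter values are invented):

```python
import random
import statistics

def simulate_intervals(n_events, mean_yield=1.0, yield_std=0.2,
                       loading_rate=1.0, seed=0):
    """Toy slider: stress accumulates at a constant loading rate from zero and
    an event occurs when it reaches a yield stress drawn anew each cycle.
    Viscoelastic reloading, the key mechanism in the full model, is omitted."""
    rng = random.Random(seed)
    intervals = []
    for _ in range(n_events):
        yield_stress = max(1e-9, rng.gauss(mean_yield, yield_std))
        intervals.append(yield_stress / loading_rate)
    return intervals

intervals = simulate_intervals(1000)
# Coefficient of variation of recurrence times: ~0 for periodic behaviour,
# larger values indicate irregular sequences.
cov = statistics.stdev(intervals) / statistics.mean(intervals)
```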
Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua
2015-01-01
The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. 
This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.
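The elastic-rebound-implied probabilities above come from renewal models conditioned on the open interval since the last rupture. As an illustrative stand-in for the Brownian passage time distribution that WGCEP uses, the sketch below conditions a lognormal recurrence distribution on the elapsed time and compares it with the Poisson probability; all numbers are generic, not UCERF3 values:

```python
import math

def lognormal_cdf(t, mu, sigma):
    if t <= 0.0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def renewal_probability(elapsed, horizon, mean_recurrence, aperiodicity):
    """P(rupture within `horizon` years | no rupture for `elapsed` years),
    using a lognormal recurrence distribution parameterized so its mean is
    `mean_recurrence` and its coefficient of variation is `aperiodicity`."""
    sigma2 = math.log(1.0 + aperiodicity ** 2)
    mu = math.log(mean_recurrence) - 0.5 * sigma2
    sigma = math.sqrt(sigma2)
    f0 = lognormal_cdf(elapsed, mu, sigma)
    f1 = lognormal_cdf(elapsed + horizon, mu, sigma)
    return (f1 - f0) / (1.0 - f0)

def poisson_probability(horizon, mean_recurrence):
    return 1.0 - math.exp(-horizon / mean_recurrence)

# Generic example: 150 years into a nominal 200-year cycle
p_renewal = renewal_probability(150.0, 30.0, 200.0, 0.5)
p_poisson = poisson_probability(30.0, 200.0)
# The ratio p_renewal / p_poisson is the "implied gain" discussed above.
```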
NASA Astrophysics Data System (ADS)
Kwiatek, G.; Orlecka-Sikora, B.; Goebel, T.; Martínez-Garzón, P.; Dresen, G.; Bohnhoff, M.
2017-12-01
In this study we investigate details of the spatial and temporal evolution of the stress field and damage at a pre-existing fault plane in laboratory stick-slip friction experiments performed on a Westerly Granite sample. A specimen of 10 cm height and 4 cm diameter was deformed at a constant strain rate of 3×10-6 s-1 and a confining pressure of 150 MPa. Here we analyze a series of 6 macroscopic slip events occurring on a rough fault during the course of the experiment. Each macroscopic slip was associated with intense femtoseismic acoustic emission (AE) activity recorded using a 16-channel transient recording system. To monitor the spatiotemporal damage evolution and unravel the micromechanical processes governing nucleation and propagation of slip events, we analyzed AE source characteristics (magnitude, seismic moment tensors, focal mechanisms), as well as the statistical properties (b-, c-, d-value) of the femtoseismicity. In addition, the calculated AE focal mechanisms were used to reveal the spatiotemporal evolution of local stress field orientations and stress shape ratio coefficients over the fault plane, as well as additional parameters quantifying the proximity to failure of individual fault patches. The calculated characteristics are used to comprehensively describe the complexity of the spatial and temporal evolution of stress over the fault plane, and the properties of the corresponding seismicity before and after the macroscopic slips. The observed faulting processes and characteristics are discussed in the context of global strain and stress changes, fault maturation, and earthquake stress drop.
Spatio-temporal mapping of plate boundary faults in California using geodetic imaging
Donnellan, Andrea; Arrowsmith, Ramon; DeLong, Stephen B.
2017-01-01
The Pacific–North American plate boundary in California is composed of a 400-km-wide network of faults and zones of distributed deformation. Earthquakes, even large ones, can occur along individual or combinations of faults within the larger plate boundary system. While research often focuses on the primary and secondary faults, holistic study of the plate boundary is required to answer several fundamental questions. How do plate boundary motions partition across California faults? How do faults within the plate boundary interact during earthquakes? What fraction of strain accumulation is relieved aseismically and does this provide limits on fault rupture propagation? Geodetic imaging, broadly defined as measurement of crustal deformation and topography of the Earth’s surface, enables assessment of topographic characteristics and the spatio-temporal behavior of the Earth’s crust. We focus here on crustal deformation observed with continuous Global Positioning System (GPS) data and Interferometric Synthetic Aperture Radar (InSAR) from NASA’s airborne UAVSAR platform, and on high-resolution topography acquired from lidar and Structure from Motion (SfM) methods. Combined, these measurements are used to identify active structures, past ruptures, transient motions, and distribution of deformation. The observations inform estimates of the mechanical and geometric properties of faults. We discuss five areas in California as examples of different fault behavior, fault maturity and times within the earthquake cycle: the M6.0 2014 South Napa earthquake rupture, the San Jacinto fault, the creeping and locked Carrizo sections of the San Andreas fault, the Landers rupture in the Eastern California Shear Zone, and the convergence of the Eastern California Shear Zone and San Andreas fault in southern California. 
These examples indicate that distribution of crustal deformation can be measured using interferometric synthetic aperture radar (InSAR), Global Navigation Satellite System (GNSS), and high-resolution topography and can improve our understanding of tectonic deformation and rupture characteristics within the broad plate boundary zone.
Foreshock and aftershocks in simple earthquake models.
Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R
2015-02-27
Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
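The model family described above can be sketched compactly. The toy below is a nearest-neighbour (rather than long-range, as in the paper) OFC-style automaton with a fraction of stronger "asperity" cells; all parameter values are illustrative, and the sketch only demonstrates the drive/topple/redistribute mechanics, not the paper's foreshock statistics:

```python
import random

def ofc_with_asperities(size=20, steps=1000, alpha=0.2,
                        asperity_frac=0.1, normal_thresh=1.0,
                        asperity_thresh=3.0, seed=1):
    """Minimal dissipative OFC-style cellular automaton. A fraction of cells
    carries a higher failure threshold; returns the avalanche-size sequence."""
    rng = random.Random(seed)
    n = size * size
    thresh = [asperity_thresh if rng.random() < asperity_frac else normal_thresh
              for _ in range(n)]
    stress = [rng.uniform(0.0, 0.5) for _ in range(n)]
    sizes = []
    for _ in range(steps):
        # uniform tectonic drive: raise all cells up to the nearest failure
        gap = min(thresh[i] - stress[i] for i in range(n))
        stress = [s + gap for s in stress]
        failing = [i for i in range(n) if stress[i] >= thresh[i] - 1e-12]
        avalanche = 0
        while failing:
            i = failing.pop()
            if stress[i] < thresh[i] - 1e-12:
                continue  # already relaxed earlier in this avalanche
            avalanche += 1
            transfer = alpha * stress[i]   # alpha < 0.25 => dissipative
            stress[i] = 0.0
            r, c = divmod(i, size)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < size and 0 <= cc < size:
                    j = rr * size + cc
                    stress[j] += transfer
                    if stress[j] >= thresh[j] - 1e-12:
                        failing.append(j)
        sizes.append(avalanche)
    return sizes

sizes = ofc_with_asperities()
```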
Reconstruction of late Holocene climate based on tree growth and mechanistic hierarchical models
Tipton, John; Hooten, Mevin B.; Pederson, Neil; Tingley, Martin; Bishop, Daniel
2016-01-01
Reconstruction of pre-instrumental, late Holocene climate is important for understanding how climate has changed in the past and how climate might change in the future. Statistical prediction of paleoclimate from tree ring widths is challenging because tree ring widths are a one-dimensional summary of annual growth that represents a multi-dimensional set of climatic and biotic influences. We develop a Bayesian hierarchical framework using a nonlinear, biologically motivated tree ring growth model to jointly reconstruct temperature and precipitation in the Hudson Valley, New York. Using a common growth function to describe the response of a tree to climate, we allow for species-specific parameterizations of the growth response. To enable predictive backcasts, we model the climate variables with a vector autoregressive process on an annual timescale coupled with a multivariate conditional autoregressive process that accounts for temporal correlation and cross-correlation between temperature and precipitation on a monthly scale. Our multi-scale temporal model allows for flexibility in the climate response through time at different temporal scales and predicts reasonable climate scenarios given tree ring width data.
The Dallas-Fort Worth Airport Earthquake Sequence: Seismicity Beyond Injection Period
NASA Astrophysics Data System (ADS)
Ogwari, Paul O.; DeShon, Heather R.; Hornbach, Matthew J.
2018-01-01
The 2008 Dallas-Fort Worth Airport earthquakes mark the beginning of seismicity rate changes linked to oil and gas operations in the central United States. We assess the spatial and temporal evolution of the sequence through December 2015 using template-based waveform correlation and relative location methods. We locate 400 earthquakes spanning 2008-2015 along a basement fault mapped as the Airport fault. The sequence exhibits temporally variable b values, and small-magnitude (m < 3.4) earthquakes spread northeast along strike over time. Pore pressure diffusion models indicate that the high-volume brine injection well located within 1 km of the 2008 earthquakes, although only operating from September 2008 to August 2009, contributes most significantly to long-term pressure perturbations, and hence stress changes, along the fault; a second long-operating, low-volume injector located 10 km north causes insufficient pressure changes. High-volume injection for a short time period near a critically stressed fault can induce long-lasting seismicity.
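The abstract does not specify its pore-pressure diffusion formulation, but a common starting point for injection-driven pressure change is the classic line-source (Theis) solution, Δp = (qμ / 4πkh)·E1(r²/4Dt). The sketch below uses the convergent series for the exponential integral E1 and entirely generic parameter values:

```python
import math

EULER_GAMMA = 0.5772156649015329

def exp_integral_e1(x):
    """E1(x) via its convergent series; adequate for the small arguments
    (r^2 / 4Dt << 1) typical of long times or short distances."""
    total = -EULER_GAMMA - math.log(x)
    sign, power, factorial = 1.0, 1.0, 1.0
    for n in range(1, 60):
        power *= x
        factorial *= n
        total += sign * power / (n * factorial)
        sign = -sign
    return total

def pressure_change(q, mu, k, h, diffusivity, r, t):
    """Line-source pressure change at distance r and time t for volumetric
    injection rate q, fluid viscosity mu, permeability k, layer thickness h."""
    return q * mu / (4.0 * math.pi * k * h) * exp_integral_e1(
        r * r / (4.0 * diffusivity * t))
```

Because E1 decays slowly, a short high-volume injection near the fault can dominate the pressure field over a distant low-volume injector, qualitatively consistent with the result above.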
Computing and visualizing time-varying merge trees for high-dimensional data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oesterling, Patrick; Heine, Christian; Weber, Gunther H.
2017-06-03
We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
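The per-timestep building block, a merge (join) tree of a scalar field, can be computed by sweeping vertices from high to low value with union-find; each union of two previously separate superlevel-set components is a saddle event. The tracking and subtree matching across time steps, which is the paper's contribution, is not shown in this minimal sketch:

```python
def merge_events(values, edges):
    """Join-tree merge (saddle) events of a scalar field on a graph, found by
    sweeping vertices from high to low value with union-find."""
    adjacency = {i: [] for i in range(len(values))}
    for a, b in edges:
        adjacency[a].append(b)
        adjacency[b].append(a)
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    events = []
    for v in sorted(range(len(values)), key=lambda i: values[i], reverse=True):
        parent[v] = v
        roots = {find(w) for w in adjacency[v] if w in parent} - {v}
        if len(roots) >= 2:          # v joins previously separate components
            events.append((values[v], len(roots)))
        for r in roots:
            parent[r] = v
    return events

# Two peaks (values 3 and 4) on a path graph merge at the saddle of value 1
events = merge_events([0.0, 3.0, 1.0, 4.0, 0.0], [(0, 1), (1, 2), (2, 3), (3, 4)])
# -> [(1.0, 2)]
```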
Zarco-Tejada, P J; Hornero, A; Hernández-Clemente, R; Beck, P S A
2018-03-01
The operational monitoring of forest decline requires the development of remote sensing methods that are sensitive to the spatiotemporal variations of pigment degradation and canopy defoliation. In this context, the red-edge spectral region (RESR) was proposed in the past due to its combined sensitivity to chlorophyll content and leaf area variation. In this study, the temporal dimension of the RESR was evaluated as a function of forest decline using a radiative transfer method with the PROSPECT and 3D FLIGHT models. These models were used to generate synthetic pine stands simulating decline and recovery processes over time and to explore the temporal rate of change of the red-edge chlorophyll index (CI) as compared to the trajectories obtained for the structure-related Normalized Difference Vegetation Index (NDVI). The temporal trend method proposed here consisted of using synthetic spectra to calculate the theoretical boundaries of the subspace for healthy and declining pine trees in the temporal domain, defined by CI(time=n)/CI(time=n+1) vs. NDVI(time=n)/NDVI(time=n+1). Within these boundaries, trees undergoing decline and recovery processes showed different trajectories through this subspace. The method was then validated using three high-resolution airborne hyperspectral images acquired at 40 cm resolution and 260 spectral bands of 6.5 nm full-width half-maximum (FWHM) over a forest with widespread tree decline, along with field-based monitoring of chlorosis and defoliation (i.e., 'decline' status) in 663 trees between the years 2015 and 2016. The temporal rate of change of chlorophyll vs. structural indices, based on reflectance spectra extracted from the hyperspectral images, was different for trees undergoing decline, and aligned towards the decline baseline established using the radiative transfer models. By contrast, healthy trees over time aligned towards the theoretically obtained healthy baseline.
The applicability of this temporal trend method to the red-edge bands of the MultiSpectral Imager (MSI) instrument on board Sentinel-2a for operational forest status monitoring was also explored by comparing the temporal rate of change of the Sentinel-2-derived CI over areas with declining and healthy trees. Results demonstrated that the Sentinel-2a red-edge region was sensitive to the temporal dimension of forest condition, as the relationships obtained for pixels in healthy condition deviated from those of pixels undergoing decline.
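The subspace coordinates used above are simply ratios of an index at successive dates. A minimal sketch (the series values below are invented, and the baselines that actually classify trees come from the radiative transfer simulations, not shown here):

```python
def trend_point(ci_series, ndvi_series, n):
    """Coordinates of a tree in the temporal-trend subspace: the ratio of
    successive chlorophyll-index values against the ratio of successive
    NDVI values, following the CI(time=n)/CI(time=n+1) notation above."""
    return (ci_series[n] / ci_series[n + 1],
            ndvi_series[n] / ndvi_series[n + 1])

# A declining tree loses chlorophyll faster than canopy structure, so the
# CI ratio between dates outpaces the NDVI ratio.
point = trend_point([0.9, 0.5], [0.8, 0.7], 0)
```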
NASA Astrophysics Data System (ADS)
Levia, D. F.; van Stan, J. T.; Mage, S.; Hauske, P. W.
2009-05-01
Stemflow is a localized point input at the base of trees that can account for more than 10% of the incident gross precipitation in deciduous forests. Despite the fact that stemflow has been documented to be of hydropedological importance, affecting soil moisture patterns, soil erosion, soil chemistry, and the distribution of understory vegetation, our current understanding of the temporal variability of stemflow yield is poor. The aim of the present study, conducted in a beech-yellow poplar forest in northeastern Maryland (39°42'N, 75°50'W), was to better understand the temporal variability of stemflow production from Fagus grandifolia Ehrh. (American beech) and Liriodendron tulipifera L. (yellow poplar) in relation to meteorological conditions and season in order to better assess its importance to canopy-soil interactions. The experimental plot had a stand density of 225 trees/ha, a stand basal area of 36.8 sq. m/ha, a mean dbh of 40.8 cm, and a mean tree height of 27.8 m. The stand leaf area index (LAI) was 5.3. Yellow poplar and beech constitute three-quarters of the stand basal area. Using a high resolution (5 min) sequential stemflow sampling network, consisting of tipping-bucket gauges interfaced with a Campbell CR1000 datalogger, the temporal variability of stemflow yield was examined. Beech produced significantly larger stemflow amounts than yellow poplar. The amount of stemflow produced by individual beech trees in 5 minute intervals reached three liters. Stemflow yield and funneling ratios decreased with increasing rain intensity. Temporal variability of stemflow inputs was affected by the nature of incident gross rainfall, season, tree species, tree size, and bark water storage capacity. Stemflow was greater during the leafless period than the full leaf period. Stemflow yield was greater for larger beech trees and smaller yellow poplar trees, owing to differences in bark water storage capacity.
The findings of this study indicate that stemflow has a detectable effect on soil moisture patterning and the hydraulic conductivity of forest soils.
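The funneling ratio reported above is conventionally defined as the stemflow volume divided by the rain that would have fallen directly on the trunk basal area. A minimal sketch with invented event values (only the 40.8 cm mean dbh comes from the abstract):

```python
def funneling_ratio(stemflow_volume_l, rain_depth_mm, basal_area_m2):
    """Funneling ratio in the usual sense: stemflow volume relative to rain
    intercepted by the trunk basal area alone (1 mm over 1 m^2 = 1 litre).
    Values above 1 indicate the canopy concentrates water toward the stem."""
    return stemflow_volume_l / (rain_depth_mm * basal_area_m2)

# Hypothetical event: 3 L of stemflow from 5 mm of rain for a tree of
# dbh 40.8 cm (basal area ~0.131 m^2)
ratio = funneling_ratio(3.0, 5.0, 0.1307)
```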
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
Failure mode effect analysis and fault tree analysis as a combined methodology in risk management
NASA Astrophysics Data System (ADS)
Wessiani, N. A.; Yoshio, F.
2018-04-01
Many studies have reported the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most of these studies choose only one of the two methods for their risk management methodology, even though combining the two reduces the drawbacks each method has when implemented separately. This paper aims to combine the methodologies of FMEA and FTA in assessing risk. A case study in a metal company illustrates how the combined methodology can be implemented. In the case study, the combined methodology assesses the internal risks that occur in the production process, so that those internal risks can be mitigated based on their levels of risk.
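The paper does not give its quantification formulas, but the standard FTA gate algebra (assuming independent basic events) and the FMEA risk priority number can be sketched as follows, with invented probabilities and scores:

```python
def gate_and(probs):
    """FTA AND gate: probability that all independent basic events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def gate_or(probs):
    """FTA OR gate: probability that at least one independent input occurs."""
    p = 1.0
    for q in probs:
        p *= 1.0 - q
    return 1.0 - p

def rpn(severity, occurrence, detection):
    """FMEA risk priority number; each factor is conventionally scored 1-10."""
    return severity * occurrence * detection

# Toy fault tree: top event = OR(AND(a, b), c), with invented probabilities
top = gate_or([gate_and([0.1, 0.2]), 0.05])
```

In a combined methodology, FMEA typically supplies the failure modes and their ratings, which then become basic events in the fault tree.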
Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.
2009-01-01
The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
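C4.5 chooses splits by gain ratio (information gain normalized by split information). The sketch below implements just that criterion for categorical features; the feature names and rows are hypothetical stand-ins, and real C4.5 additionally handles continuous thresholds, missing values, and pruning:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, feature):
    """C4.5-style gain ratio of a categorical feature; rows are dicts."""
    base = entropy(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[feature], []).append(label)
    n = len(labels)
    info = sum(len(p) / n * entropy(p) for p in partitions.values())
    split_info = -sum((len(p) / n) * math.log2(len(p) / n)
                      for p in partitions.values())
    gain = base - info
    return gain / split_info if split_info > 0 else 0.0

# Hypothetical sensor snapshots: 'pressure' separates leak from nominal
# perfectly, 'temperature' carries no information about the class.
rows = [
    {'pressure': 'low',  'temperature': 'hot'},
    {'pressure': 'low',  'temperature': 'cold'},
    {'pressure': 'high', 'temperature': 'hot'},
    {'pressure': 'high', 'temperature': 'cold'},
]
labels = ['leak', 'leak', 'nominal', 'nominal']
```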
The Origin of High-angle Dip-slip Earthquakes at Geothermal Fields in California
NASA Astrophysics Data System (ADS)
Barbour, A. J.; Schoenball, M.; Martínez-Garzón, P.; Kwiatek, G.
2016-12-01
We examine the source mechanisms of earthquakes occurring in three California geothermal fields: The Geysers, Salton Sea, and Coso. We find source mechanisms ranging from strike slip faulting, consistent with the tectonic settings, to dip slip with unusually steep dip angles which are inconsistent with local structures. For example, we identify a fault zone in the Salton Sea Geothermal Field imaged using precisely-relocated hypocenters with a dip angle of 60° yet double-couple focal mechanisms indicate higher-angle dip-slip on ≥75° dipping planes. We observe considerable temporal variability in the distribution of source mechanisms. For example, at the Salton Sea we find that the number of high angle dip-slip events increased after 1989, when net-extraction rates were highest. There is a concurrent decline in strike-slip and strike-slip-normal faulting, the mechanisms expected from regional tectonics. These unusual focal mechanisms and their spatio-temporal patterns are enigmatic in terms of our understanding of faulting in geothermal regions. While near-vertical fault planes are expected to slip in a strike-slip sense, and dip slip is expected to occur on moderately dipping faults, we observe dip slip on near-vertical fault planes. However, for plausible stress states and accounting for geothermal production, the resolved fault planes should be stable. We systematically analyze the source mechanisms of these earthquakes using full moment tensor inversion to understand the constraints imposed by assuming a double-couple source. Applied to The Geysers field, we find a significant reduction in the number of high-angle dip-slip mechanisms using the full moment tensor. The remaining mechanisms displaying high-angle dip-slip could be consistent with faults accommodating subsidence and compaction associated with volumetric strain changes in the geothermal reservoir.
Shallow soil CO2 flow along the San Andreas and Calaveras Faults, California
Lewicki, J.L.; Evans, William C.; Hilley, G.E.; Sorey, M.L.; Rogie, J.D.; Brantley, S.L.
2003-01-01
We evaluate a comprehensive soil CO2 survey along the San Andreas fault (SAF) in Parkfield, and the Calaveras fault (CF) in Hollister, California, in the context of spatial and temporal variability, origin, and transport of CO2 in fractured terrain. CO2 efflux was measured within grids with portable instrumentation and continuously with meteorological parameters at a fixed station, in both faulted and unfaulted areas. Spatial and temporal variability of surface CO2 effluxes was observed to be higher at faulted SAF and CF sites, relative to comparable background areas. However, δ13C (-23.3 to -16.4‰) and Δ14C (75.5 to 94.4‰) values of soil CO2 in both faulted and unfaulted areas are indicative of biogenic CO2, even though CO2 effluxes in faulted areas reached values as high as 428 g m-2 d-1. Profiles of soil CO2 concentration as a function of depth were measured at multiple sites within the SAF and CF grids and repeatedly at two locations at the SAF grid. Many of these profiles suggest a surprisingly high component of advective CO2 flow. Spectral and correlation analysis of SAF CO2 efflux and meteorological parameter time series indicates that effects of wind speed variations on atmospheric air flow through fractures modulate surface efflux of biogenic CO2. The resulting areal patterns in CO2 effluxes could be erroneously attributed to a deep gas source in the absence of isotopic data, a problem that must be addressed in fault zone soil gas studies.
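The correlation analysis between efflux and meteorological time series can be sketched with a lagged Pearson correlation; the series below are synthetic, purely to show that a known delay is recovered (the study's actual analysis also used spectral methods, not shown):

```python
import math

def pearson(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((u - mean_a) * (v - mean_b) for u, v in zip(a, b))
    sd_a = math.sqrt(sum((u - mean_a) ** 2 for u in a))
    sd_b = math.sqrt(sum((v - mean_b) ** 2 for v in b))
    return cov / (sd_a * sd_b)

def best_lag(x, y, max_lag):
    """Lag (in samples) at which y, shifted forward, best correlates with x."""
    scores = {k: pearson(x[:len(x) - k], y[k:]) for k in range(max_lag + 1)}
    return max(scores, key=scores.get)

# Synthetic check: a response delayed by two samples is recovered at lag 2
x = [math.sin(0.3 * t) for t in range(200)]   # e.g. wind speed proxy
y = [0.0, 0.0] + x[:-2]                       # e.g. delayed efflux response
```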
Slip history and dynamic implications of the 1999 Chi-Chi, Taiwan, earthquake
Ji, C.; Helmberger, D.V.; Wald, D.J.; Ma, K.-F.
2003-01-01
We investigate the rupture process of the 1999 Chi-Chi, Taiwan, earthquake using extensive near-source observations, including three-component velocity waveforms at 36 strong motion stations and 119 GPS measurements. A three-plane fault geometry derived from our previous inversion using only static data [Ji et al., 2001] is applied. The slip amplitude, rake angle, rupture initiation time, and risetime function are inverted simultaneously with a recently developed finite fault inverse method that combines a wavelet transform approach with a simulated annealing algorithm [Ji et al., 2002b]. The inversion results are validated by the forward prediction of an independent data set, the teleseismic P and SH ground velocities, with notable agreement. The results show that the total seismic moment release of this earthquake is 2.7 × 10^20 N m and that most of the slip occurred in a triangular-shaped asperity involving two fault segments, which is consistent with our previous static inversion. The rupture front propagates with an average rupture velocity of ~2.0 km s-1, and the average slip duration (risetime) is 7.2 s. Several interesting observations related to the temporal evolution of the Chi-Chi earthquake are also investigated, including (1) the strong effect of the sinuous fault plane of the Chelungpu fault on spatial and temporal variations in slip history, (2) the intersection of fault 1 and fault 2 not being a strong impediment to the rupture propagation, and (3) the observation that the peak slip velocity near the surface is, in general, higher than on the deeper portion of the fault plane, as predicted by dynamic modeling.
MacDonald III, Angus W.; Zick, Jennifer L.; Chafee, Matthew V.; Netoff, Theoden I.
2016-01-01
The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry’s standard neo-Kraepelinian (DSM) perspective. At the same time the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect for psychiatry’s syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between diagnosis co-morbidity. PMID:26779007
Geology of Joshua Tree National Park geodatabase
Powell, Robert E.; Matti, Jonathan C.; Cossette, Pamela M.
2015-09-16
The database in this Open-File Report describes the geology of Joshua Tree National Park and was completed in support of the National Cooperative Geologic Mapping Program of the U.S. Geological Survey (USGS) and in cooperation with the National Park Service (NPS). The geologic observations and interpretations represented in the database are relevant to both the ongoing scientific interests of the USGS in southern California and the management requirements of NPS, specifically of Joshua Tree National Park (JOTR). Joshua Tree National Park is situated within the eastern part of California’s Transverse Ranges province and straddles the transition between the Mojave and Sonoran deserts. The geologically diverse terrain that underlies JOTR reveals a rich and varied geologic evolution, one that spans nearly two billion years of Earth history. The Park’s landscape is the current expression of this evolution, its varied landforms reflecting the differing origins of underlying rock types and their differing responses to subsequent geologic events. Crystalline basement in the Park consists of Proterozoic plutonic and metamorphic rocks intruded by a composite Mesozoic batholith of Triassic through Late Cretaceous plutons arrayed in northwest-trending lithodemic belts. The basement was exhumed during the Cenozoic and underwent differential deep weathering beneath a low-relief erosion surface, with the deepest weathering profiles forming on quartz-rich, biotite-bearing granitoid rocks. Disruption of the basement terrain by faults of the San Andreas system began ca. 20 Ma and the JOTR sinistral domain, preceded by basalt eruptions, began perhaps as early as ca. 7 Ma, but no later than 5 Ma. Uplift of the mountain blocks during this interval led to erosional stripping of the thick zones of weathered quartz-rich granitoid rocks to form etchplains dotted by bouldery tors—the iconic landscape of the Park.
The stripped debris filled basins along the fault zones. Mountain ranges and basins in the Park exhibit an east-west physiographic grain controlled by left-lateral fault zones that form a sinistral domain within the broad zone of dextral shear along the transform boundary between the North American and Pacific plates. Geologic and geophysical evidence reveals that movement on the sinistral fault zones has produced left steps along the zones, leading to the development of sub-basins beneath Pinto Basin and Shavers and Chuckwalla Valleys. The sinistral fault zones connect the Mojave Desert dextral faults of the Eastern California Shear Zone, to the north and east, with the Coachella Valley strands of the southern San Andreas Fault Zone, to the west. Quaternary surficial deposits accumulated in alluvial washes and playas and lakes along the valley floors; in alluvial fans, washes, and sheet wash aprons along piedmonts flanking the mountain ranges; and in eolian dunes and sand sheets that span the transition from valley floor to piedmont slope. Sequences of Quaternary pediments are planed into piedmonts flanking valley-floor and upland basins, each pediment in turn overlain by successively younger residual and alluvial surficial deposits.
NASA Astrophysics Data System (ADS)
Kissling, W. M.; Villamor, P.; Ellis, S. M.; Rae, A.
2018-05-01
Present-day geothermal activity on the margins of the Ngakuru graben and evidence of fossil hydrothermal activity in the central graben suggest that a graben-wide system of permeable intersecting faults acts as the principal conduit for fluid flow to the surface. We have developed numerical models of fluid and heat flow in a regional-scale 2-D cross-section of the Ngakuru Graben. The models incorporate simplified representations of two 'end-member' fault architectures (one symmetric at depth, the other highly asymmetric) which are consistent with the surface locations and dips of the Ngakuru graben faults. The models are used to explore controls on buoyancy-driven convective fluid flow which could explain the differences between the past and present hydrothermal systems associated with these faults. The models show that the surface flows from the faults are strongly controlled by the fault permeability, the fault system architecture and the location of the heat source with respect to the faults in the graben. In particular, fault intersections at depth allow exchange of fluid between faults, and the location of the heat source on the footwall of normal faults can facilitate upflow along those faults. These controls give rise to two distinct fluid flow regimes in the fault network. The first, a regular flow regime, is characterised by a nearly unchanging pattern of fluid flow vectors within the fault network as the fault permeability evolves. In the second, complex flow regime, the surface flows depend strongly on fault permeability, and can fluctuate in an erratic manner. The direction of flow within faults can reverse in both regimes as fault permeability changes. Both flow regimes provide insights into the differences between the present-day and fossil geothermal systems in the Ngakuru graben. 
Hydrothermal upflow along the Paeroa fault seems to have occurred, possibly continuously, for tens of thousands of years, while upflow on other faults in the graben has switched on and off during the same period. An asymmetric graben architecture, with the Paeroa as the major boundary fault, facilitates the predominant upflow along this fault. Upflow on the axial faults is more difficult to explain with this modelling; it occurs most easily with an asymmetric graben architecture and heat sources close to the graben axis (which could be associated with remnant heat from recent eruptions of the Okataina Volcanic Centre). Temporal changes in upflow can also be associated with acceleration and deceleration of fault activity, if activity is taken as a proxy for fault permeability. Other explanations for temporal variations in hydrothermal activity not explored here are different permeabilities on different faults, and varying permeability along fault strike.
Improved FTA methodology and application to subsea pipeline reliability design.
Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan
2014-01-01
An innovative logic tree, the Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It represents a different way of thinking about risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. The resulting improvements are summarized in a comparison table.
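For readers unfamiliar with the quantitative step that FET aims to simplify, a minimal sketch of conventional FTA gate arithmetic follows, assuming independent basic events; the gate structure and probabilities are illustrative inventions, not taken from the paper:

```python
def or_gate(probs):
    # P(at least one input event occurs) = 1 - product of complements
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

def and_gate(probs):
    # P(all input events occur) = product of the probabilities
    prob = 1.0
    for p in probs:
        prob *= p
    return prob

# Hypothetical pipeline failure tree:
# top event = corrosion OR (overpressure AND relief-valve stuck)
corrosion, overpressure, valve_stuck = 0.02, 0.05, 0.10
top = or_gate([corrosion, and_gate([overpressure, valve_stuck])])
print(round(top, 4))  # 0.0249
```

In a real tree the same two gate formulas are applied bottom-up over the minimal cut sets, which is where the calculation burden that FET reduces comes from.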
Spatial and temporal distribution of trunk-injected imidacloprid in apple tree canopies.
Aćimović, Srđan G; VanWoerkom, Anthony H; Reeb, Pablo D; Vandervoort, Christine; Garavaglia, Thomas; Cregg, Bert M; Wise, John C
2014-11-01
Pesticide use in orchards creates drift-driven pesticide losses which contaminate the environment. Trunk injection of pesticides as a target-precise delivery system could greatly reduce pesticide losses. However, pesticide efficiency after trunk injection is associated with the underinvestigated spatial and temporal distribution of the pesticide within the tree crown. This study quantified the spatial and temporal distribution of trunk-injected imidacloprid within apple crowns after trunk injection using one, two, four or eight injection ports per tree. The spatial uniformity of imidacloprid distribution in apple crowns significantly increased with more injection ports. Four ports allowed uniform spatial distribution of imidacloprid in the crown. Uniform and non-uniform spatial distributions were established early and lasted throughout the experiment. The temporal distribution of imidacloprid was significantly non-uniform. Upper and lower crown positions did not significantly differ in compound concentration. Crown concentration patterns indicated that imidacloprid transport in the trunk occurred through radial diffusion and vertical uptake with a spiral pattern. By showing where and when a trunk-injected compound is distributed in the apple tree canopy, this study addresses a key knowledge gap in terms of explaining the efficiency of the compound in the crown. These findings allow the improvement of target-precise pesticide delivery for more sustainable tree-based agriculture. © 2014 Society of Chemical Industry.
"HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes
NASA Astrophysics Data System (ADS)
Carlson, J. M.; Hillers, G.; Archuleta, R. J.
2006-12-01
We apply the concept of "Highly Optimized Tolerance" (HOT) to the investigation of spatio-temporal seismicity evolution, in particular mechanisms associated with the largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible in a conceptually comparable way for the development, evolution and maintenance of younger fault structures and primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy, whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and the associated seismic response patterns within the HOT concept, and outline fundamental differences between this novel interpretation and more orthodox viewpoints such as the criticality concept.
The discussion is flanked by numerical simulations of a 2D fault model, where we investigate different feedback mechanisms and their effect on seismicity evolution. We introduce an approach to estimate the state of a fault and thus its capability of generating a large (system-wide) event assuming likely heterogeneous distributions of hypocenters and stresses, respectively.
Gold, Ryan; dePolo, Craig; Briggs, Richard W.; Crone, Anthony
2013-01-01
The extent to which faults exhibit temporally varying slip rates has important consequences for models of fault mechanics and probabilistic seismic hazard. Here, we explore the temporal behavior of the dextral‐slip Warm Springs Valley fault system, which is part of a network of closely spaced (10–20 km) faults in the northern Walker Lane (California–Nevada border). We develop a late Quaternary slip record for the fault using Quaternary mapping and high‐resolution topographic data from airborne Light Detection and Ranging (LiDAR). The faulted Fort Sage alluvial fan (40.06° N, 119.99° W) is dextrally displaced 98+42/-43 m, and we estimate the age of the alluvial fan to be 41.4+10.0/-4.8 to 55.7±9.2 ka, based on a terrestrial cosmogenic 10Be depth profile and 36Cl analyses on basalt boulders, respectively. The displacement and age constraints for the fan yield a slip rate of 1.8 +0.8/-0.8 mm/yr to 2.4 +1.2/-1.1 mm/yr (2σ) along the northern Warm Springs Valley fault system for the past 41.4–55.7 ka. In contrast to this longer‐term slip rate, shorelines associated with the Sehoo highstand of Lake Lahontan (~15.8 ka) adjacent to the Fort Sage fan are dextrally faulted at most 3 m, which limits the maximum post‐15.8 ka slip rate to 0.2 mm/yr. These relations indicate that the post‐Lahontan slip rate on the fault is only about one‐tenth the longer‐term (41–56 ka) average slip rate. This apparent slip‐rate variation may be related to co‐dependent interaction with the nearby Honey Lake fault system, which shows evidence of an accelerated period of mid‐Holocene earthquakes.
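The rate arithmetic above can be reproduced directly, since a slip rate in mm/yr is numerically meters of displacement divided by age in ka; the values below are the central estimates quoted in the abstract (displacement and age uncertainties are ignored in this sketch):

```python
def slip_rate_mm_per_yr(displacement_m, age_ka):
    # 1 m per 1000 yr = 1 mm/yr, so m/ka is numerically mm/yr
    return displacement_m / age_ka

fast = slip_rate_mm_per_yr(98.0, 41.4)          # younger age bound for the fan
slow = slip_rate_mm_per_yr(98.0, 55.7)          # older age bound for the fan
post_lahontan = slip_rate_mm_per_yr(3.0, 15.8)  # faulted Lahontan shorelines
print(round(fast, 2), round(slow, 2), round(post_lahontan, 2))  # 2.37 1.76 0.19
```

The central values bracket the quoted 1.8–2.4 mm/yr range, and the shoreline offset gives the ~0.2 mm/yr ceiling cited for the post-Lahontan interval.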
Morphologic dating of fault scarps using airborne laser swath mapping (ALSM) data
Hilley, G.E.; Delong, S.; Prentice, C.; Blisniuk, K.; Arrowsmith, J.R.
2010-01-01
Models of fault scarp morphology have been previously used to infer the relative age of different fault scarps in a fault zone using labor-intensive ground surveying. We present a method for automatically extracting scarp morphologic ages within high-resolution digital topography. Scarp degradation is modeled as a diffusive mass transport process in the across-scarp direction. The second derivative of the modeled degraded fault scarp was normalized to yield the best-fitting (in a least-squared sense) scarp height at each point, and the signal-to-noise ratio identified those areas containing scarp-like topography. We applied this method to three areas along the San Andreas Fault and found correspondence between the mapped geometry of the fault and that extracted by our analysis. This suggests that the spatial distribution of scarp ages may be revealed by such an analysis, allowing the recent temporal development of a fault zone to be imaged along its length.
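A minimal sketch of the diffusive degradation model underlying this kind of morphologic dating: an explicit finite-difference solution of dz/dt = κ ∂²z/∂x² applied to a step-shaped scarp profile (grid spacing, κ, and time step are arbitrary illustrative values):

```python
# Scarp degradation as 1-D diffusion, solved with an explicit scheme.
# Stability requires kappa*dt/dx**2 <= 0.5.
def degrade_scarp(profile, kappa, dx, dt, steps):
    z = list(profile)
    for _ in range(steps):
        z = [z[0]] + [
            z[i] + kappa * dt / dx**2 * (z[i+1] - 2.0*z[i] + z[i-1])
            for i in range(1, len(z) - 1)
        ] + [z[-1]]
    return z

# Initial step-shaped scarp, 1 m high
profile = [0.0] * 20 + [1.0] * 20
aged = degrade_scarp(profile, kappa=1.0, dx=1.0, dt=0.2, steps=500)

# The maximum slope decays as the product kappa*t grows
max_slope0 = max(abs(profile[i+1] - profile[i]) for i in range(len(profile) - 1))
max_slope1 = max(abs(aged[i+1] - aged[i]) for i in range(len(aged) - 1))
print(max_slope1 < max_slope0)  # True
```

In the authors' approach the best-fitting κt at each point along the scarp, normalized by the local scarp height, serves as the morphologic age estimate.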
Scarpino, Samuel V.; Jansen, Patrick A.; Garzon-Lopez, Carol X.; Winkelhagen, Annemarie J. S.; Bohlman, Stephanie A.; Walsh, Peter D.
2010-01-01
Background The movement patterns of wild animals depend crucially on the spatial and temporal availability of resources in their habitat. To date, most attempts to model this relationship were forced to rely on simplified assumptions about the spatiotemporal distribution of food resources. Here we demonstrate how advances in statistics permit the combination of sparse ground sampling with remote sensing imagery to generate biological relevant, spatially and temporally explicit distributions of food resources. We illustrate our procedure by creating a detailed simulation model of fruit production patterns for Dipteryx oleifera, a keystone tree species, on Barro Colorado Island (BCI), Panama. Methodology and Principal Findings Aerial photographs providing GPS positions for large, canopy trees, the complete census of a 50-ha and 25-ha area, diameter at breast height data from haphazardly sampled trees and long-term phenology data from six trees were used to fit 1) a point process model of tree spatial distribution and 2) a generalized linear mixed-effect model of temporal variation of fruit production. The fitted parameters from these models are then used to create a stochastic simulation model which incorporates spatio-temporal variations of D. oleifera fruit availability on BCI. Conclusions and Significance We present a framework that can provide a statistical characterization of the habitat that can be included in agent-based models of animal movements. When environmental heterogeneity cannot be exhaustively mapped, this approach can be a powerful alternative. The results of our model on the spatio-temporal variation in D. oleifera fruit availability will be used to understand behavioral and movement patterns of several species on BCI. PMID:21124927
NASA Astrophysics Data System (ADS)
Lesparre, Nolwenn; Cabrera, Justo; Courbet, Christelle
2015-04-01
We explore the capacity of electrical resistivity tomography and muon density imaging to detect spatio-temporal variations of the medium surrounding a regional fault crossing the underground platform of Tournemire (Aveyron, France). The studied Cernon fault is sub-vertical, perpendicularly intersects the Tournemire tunnel, and extends to the surface. The fault separates clay and limestone layers of the Dogger from limestone layers of the Lias. The Cernon fault is about ten meters thick and drives water from an aquifer circulating at the top of the Dogger clay layer into the tunnel. An experiment combining electrical resistivity imaging and muon density imaging was set up, taking advantage of the tunnel's presence. A specific array of electrodes, adapted to the characterization of the fault, was installed. Electrodes were placed along the tunnel as well as at the surface above it on both sides of the fault, in order to acquire data in transmission across the massif and thus better cover the sounded medium. Electrical resistivity is particularly sensitive to the presence of water in the medium and thus carries information on the main water flow paths and on pore-space saturation. At the same time, a muon sensor was placed in the tunnel beneath the fault region to detect muons arriving from the sky after crossing the rock medium. Since the muon flux is attenuated as a function of the quantity of matter crossed, muon flux measurements supply information on the average density of the medium along the muon paths. The sensor has 961 viewing angles, so measurements from a single station allow a comparison of temporal variations of the muon flux along the fault as well as in the surrounding medium. As the water saturation of the porous medium fluctuates through time, the medium density may indeed show measurable variations, as gravimetric studies have demonstrated.
During the experiment, heavy rainfall events occurred, causing variations of the medium properties that affect both density and electrical resistivity. Using data sets acquired before and after a major rainfall event, we show how muon density and electrical resistivity imaging can complement each other in characterizing variations of the medium properties. The development of such innovative experiments for hydrogeophysical studies thus offers the prospect of new information on fluid dynamics in the subsurface.
The temporal distribution and carbon storage of large oak wood in streams and floodplain deposits
Richard P. Guyette; Daniel C. Dey; Michael C. Stambaugh
2008-01-01
We used tree-ring dating and 14C dating to document the temporal distribution and carbon storage of oak (Quercus spp.) wood in trees recruited and buried by streams and floodplains in northern Missouri, USA. Frequency distributions indicated that oak wood has been accumulating in Midwest streams continually since at least the...
Time-dependent seismic hazard analysis for the Greater Tehran and surrounding areas
NASA Astrophysics Data System (ADS)
Jalalalhosseini, Seyed Mostafa; Zafarani, Hamid; Zare, Mehdi
2018-01-01
This study presents a time-dependent approach to seismic hazard in Tehran and surrounding areas. Hazard is evaluated by combining background seismic activity with the larger earthquakes that may emanate from fault segments. Using available historical and paleoseismological data or empirical relations, the recurrence times and maximum magnitudes of characteristic earthquakes on the major faults have been explored. The Brownian passage time (BPT) distribution has been used to calculate an equivalent fictitious seismicity rate for the major faults in the region. To include ground motion uncertainty, a logic tree and five ground motion prediction equations were selected based on their applicability in the region. Finally, hazard maps are presented.
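The BPT density used in such renewal models is the inverse-Gaussian form parameterized by a mean recurrence time μ and an aperiodicity α. A quick numerical check that it is a proper density, with hypothetical parameter values (the paper's fault-specific values are not given in the abstract):

```python
import math

def bpt_pdf(t, mu, alpha):
    # Brownian passage time (inverse Gaussian) density with mean mu
    # and aperiodicity (coefficient of variation) alpha
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
        math.exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t))

mu, alpha = 150.0, 0.5  # e.g. a 150-year mean recurrence, moderate aperiodicity
dt = 0.05
total = sum(bpt_pdf(dt * k, mu, alpha) * dt for k in range(1, int(3000 / dt)))
print(abs(total - 1.0) < 0.01)  # the density integrates to ~1
```

Conditioning this density on the time already elapsed since the last characteristic earthquake is what makes the resulting hazard time-dependent.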
LIDAR Helps Identify Source of 1872 Earthquake Near Chelan, Washington
NASA Astrophysics Data System (ADS)
Sherrod, B. L.; Blakely, R. J.; Weaver, C. S.
2015-12-01
One of the largest historic earthquakes in the Pacific Northwest occurred on 15 December 1872 (M6.5-7) near the south end of Lake Chelan in north-central Washington State. Lack of recognized surface deformation suggested that the earthquake occurred on a blind, perhaps deep, fault. New LiDAR data show landslides and a ~6 km long, NW-side-up scarp in Spencer Canyon, ~30 km south of Lake Chelan. Two landslides in Spencer Canyon impounded small ponds. An historical account indicated that dead trees were visible in one pond in AD1884. Wood from a snag in the pond yielded a calibrated age of AD1670-1940. Tree ring counts show that the oldest living trees on each landslide are 130 and 128 years old. The larger of the two landslides obliterated the scarp and thus, post-dates the last scarp-forming event. Two trenches across the scarp exposed a NW-dipping thrust fault. One trench exposed alluvial fan deposits, Mazama ash, and scarp colluvium cut by a single thrust fault. Three charcoal samples from a colluvium buried during the last fault displacement had calibrated ages between AD1680 and AD1940. The second trench exposed gneiss thrust over colluvium during at least two, and possibly three fault displacements. The younger of two charcoal samples collected from a colluvium below gneiss had a calibrated age of AD1665- AD1905. For an historical constraint, we assume that the lack of felt reports for large earthquakes in the period between 1872 and today indicates that no large earthquakes capable of rupturing the ground surface occurred in the region after the 1872 earthquake; thus the last displacement on the Spencer Canyon scarp cannot post-date the 1872 earthquake. Modeling of the age data suggests that the last displacement occurred between AD1840 and AD1890. These data, combined with the historical record, indicate that this fault is the source of the 1872 earthquake. 
Analyses of aeromagnetic data reveal lithologic contacts beneath the scarp that form an ENE-striking, curvilinear zone ~2.5 km wide and ~55 km long. This zone coincides with monoclines mapped in Mesozoic bedrock and Miocene flood basalts. This study resolves the uncertainty regarding the source of the 1872 earthquake and provides important information for seismic hazard analyses of major infrastructure projects in Washington and British Columbia.
Fault detection and fault tolerance in robotics
NASA Technical Reports Server (NTRS)
Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.
1992-01-01
Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
NASA Astrophysics Data System (ADS)
Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang
2017-10-01
Based on long-term vibration monitoring of the No. 2 oil-immersed flat wave reactor in the ±500 kV converter station in East Mongolia, vibration signals in the normal state and in the core loose fault state were recorded. Through time-frequency analysis of the signals, the vibration characteristics of the core loose fault were obtained, and a fault diagnosis method based on the dual-tree complex wavelet transform (DT-CWT) and a support vector machine (SVM) is proposed. The vibration signals were analyzed by DT-CWT, and the energy entropy of the vibration signals was taken as the feature vector; the support vector machine was used to train and test on the feature vectors, realizing accurate identification of the core loose fault of the flat wave reactor. Across many groups of normal and core-loose-fault vibration signals, the diagnostic accuracy reached 97.36%. The effectiveness and accuracy of the method for fault diagnosis of the flat wave reactor core are thus verified.
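As a rough sketch of the feature-extraction idea only, the example below substitutes a hand-rolled Haar wavelet decomposition for the dual-tree complex wavelet and omits the SVM stage entirely; it illustrates how sub-band energy entropy separates a smooth signal from an impulsive, fault-like one (all signals are synthetic):

```python
import math

def haar_dwt(signal):
    # One level of the Haar transform: approximation and detail coefficients
    a = [(signal[i] + signal[i+1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i+1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    return a, d

def energy_entropy(signal, levels=3):
    # Decompose, then take the Shannon entropy of normalized sub-band energies
    bands = []
    a = list(signal)
    for _ in range(levels):
        a, d = haar_dwt(a)
        bands.append(sum(x * x for x in d))
    bands.append(sum(x * x for x in a))
    total = sum(bands) or 1.0
    ps = [e / total for e in bands if e > 0]
    return -sum(p * math.log(p) for p in ps)

# A smooth signal concentrates energy in one band (low entropy); an impulsive,
# fault-like signal spreads energy across bands (higher entropy).
smooth = [math.sin(2 * math.pi * i / 64) for i in range(256)]
impulsive = [1.0 if i % 32 == 0 else 0.0 for i in range(256)]
print(energy_entropy(smooth) < energy_entropy(impulsive))  # True
```

In the paper's pipeline, this kind of per-band energy entropy vector is what gets fed to the SVM classifier.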
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system, and computer-readable medium extend the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, and supports all common PRA analysis functions, including cut sets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
Bacques, Guillaume; de Michele, Marcello; Raucoules, Daniel; Aochi, Hideo; Rolandone, Frédérique
2018-04-16
This study focuses on the shallow deformation that occurred during the 5 years following the Parkfield earthquake (28/09/2004, Mw 6, San Andreas Fault, California). We use Synthetic Aperture Radar interferometry (InSAR) to provide precise measurements of transient deformations after the Parkfield earthquake between 2005 and 2010. We propose a method to combine both ERS2 and ENVISAT interferograms to increase the temporal data sampling. Firstly, we combine 5 years of available Synthetic Aperture Radar (SAR) acquisitions including both ERS-2 and Envisat. Secondly, we stack selected interferograms (both from ERS2 and Envisat) for measuring the temporal evolution of the ground velocities at given time intervals. Thanks to its high spatial resolution, InSAR could provide new insights on the surface fault motion behavior over the 5 years following the Parkfield earthquake. As a complement to previous studies in this area, our results suggest that shallow transient deformations affected the Creeping-Parkfield-Cholame sections of the San Andreas Fault after the 2004 Mw6 Parkfield earthquake.
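A toy illustration of the stacking step, assuming each interferogram contributes a line-of-sight displacement over its time span and that the stacked mean velocity is the ratio of the sums (the numbers are invented; real stacking also handles atmospheric noise and unwrapping errors):

```python
def stacked_velocity(displacements_mm, spans_yr):
    # Time-span-weighted mean velocity: v = sum(d_i) / sum(dt_i),
    # so longer interferograms carry proportionally more weight
    return sum(displacements_mm) / sum(spans_yr)

# Three hypothetical interferograms: displacement (mm) over span (years)
print(stacked_velocity([10.0, 21.0, 14.0], [1.0, 2.0, 1.5]))  # 10.0 mm/yr
```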
Fault diagnosis of helical gearbox using acoustic signal and wavelets
NASA Astrophysics Data System (ADS)
Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.
2017-05-01
The efficient transmission of power in machines is needed, and gears are an appropriate choice. Faults in gears result in loss of energy and money. Monitoring and fault diagnosis are done by analysis of acoustic and vibration signals, which are generally considered unwanted by-products. This study proposes the use of a machine learning algorithm for condition monitoring of a helical gearbox using the sound signals produced by the gearbox. Artificial faults were created and the resulting signals were captured by a microphone. An extensive study of different wavelet transformations for feature extraction from the acoustic signals was carried out, followed by wavelet selection and feature selection using the J48 decision tree; feature classification was performed using the K-star algorithm. A classification accuracy of 100% was obtained in the study.
Inferring patterns in mitochondrial DNA sequences through hypercube independent spanning trees.
Silva, Eduardo Sant Ana da; Pedrini, Helio
2016-03-01
Given a graph G, a set of spanning trees rooted at a vertex r of G is said to be vertex/edge independent if, for each vertex v of G, v≠r, the paths from r to v in any pair of trees are vertex/edge disjoint. Independent spanning trees (ISTs) provide a number of advantages in data broadcasting due to their fault-tolerant properties. For this reason, some studies have addressed the issue by providing mechanisms for constructing independent spanning trees efficiently. In this work, we investigate how to construct independent spanning trees on hypercubes, generated from spanning binomial trees, and how to use them to predict mitochondrial DNA sequence parts through paths on the hypercube. The prediction works both for inferring mitochondrial DNA sequences comprised of six bases and for inferring anomalies that probably should not belong to the mitochondrial DNA standard. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-01-01
Evaluating and analyzing risk in the mining industry is a new approach to improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probabilities of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department are 73, 64, and 95 percent respectively, and the conveyor belt subsystem was found to be the subsystem most prone to failure. Finally, maintenance is proposed as a method to control and prevent the occurrence of failures. PMID:26779433
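The series-system arithmetic behind figures like these can be checked directly. Below, the two quoted subsystem probabilities are combined in series (either failure fails the department), and the constant failure rates that an exponential model would imply over the 200 h interval are backed out; the exponential assumption is ours, not stated in the abstract:

```python
import math

# Subsystem failure probabilities over the 200 h interval, from the abstract
p_crushing, p_conveyor = 0.73, 0.64

# Series (OR) combination of the two listed subsystems
p_series = 1.0 - (1.0 - p_crushing) * (1.0 - p_conveyor)
print(round(p_series, 3))  # 0.903; the reported 95% evidently reflects further contributions

# Constant failure rates implied by p = 1 - exp(-lambda * t)
t = 200.0
lam_crushing = -math.log(1.0 - p_crushing) / t  # per hour
lam_conveyor = -math.log(1.0 - p_conveyor) / t  # per hour
print(round(lam_crushing, 4), round(lam_conveyor, 4))  # 0.0065 0.0051
```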
Towards generating ECSS-compliant fault tree analysis results via ConcertoFLA
NASA Astrophysics Data System (ADS)
Gallina, B.; Haider, Z.; Carlsson, A.
2018-05-01
Attitude Control Systems (ACSs) maintain the orientation of a satellite in three-dimensional space. ACSs need to be engineered in compliance with ECSS standards and must ensure a certain degree of dependability. Thus, dependability analysis is conducted at various levels and with ECSS-compliant techniques. Fault Tree Analysis (FTA) is one of these techniques. FTA is being automated within various Model Driven Engineering (MDE)-based methodologies. The tool-supported CHESS methodology is one of them. This methodology incorporates ConcertoFLA, a dependability analysis technique enabling failure behavior analysis and thus FTA-results generation. ConcertoFLA, however, like other such techniques, still belongs to the academic research niche. To promote this technique within the space industry, we apply it to an ACS and discuss its multi-faceted potential in the context of ECSS-compliant engineering.
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised by large capital outlays, long implementation durations, and a high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have mostly focused on meeting project budget and schedule objectives, the proposed approach addresses the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and to quantify the impact of critical component failures or critical risk events in the implementation process.
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
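A baseline standard Monte Carlo estimator of a fault-tree top event, of the kind the accelerated approach speeds up, might look as follows; the event structure and component probabilities are invented for illustration and are not taken from the safety analysis:

```python
import random

# Hypothetical component failure probabilities per encounter
P_TRANSPONDER = 0.01
P_VISUAL = 0.1
P_CONFLICT_DETECT = 0.05

def trial(rng):
    # Top event requires conflict-detection failure AND
    # (transponder failure OR failed pilot visual avoidance)
    cd = rng.random() < P_CONFLICT_DETECT
    tr = rng.random() < P_TRANSPONDER
    vi = rng.random() < P_VISUAL
    return cd and (tr or vi)

rng = random.Random(42)
n = 200_000
estimate = sum(trial(rng) for _ in range(n)) / n

# For independent events the fault-tree value is available in closed form,
# which lets us check the sampling error of the brute-force estimator
analytic = P_CONFLICT_DETECT * (1.0 - (1.0 - P_TRANSPONDER) * (1.0 - P_VISUAL))
print(abs(estimate - analytic) < 0.002)  # True: estimate close to the tree value
```

The slow convergence of this brute-force loop for rare top events is exactly the cost that hybrid fault-tree/Monte Carlo acceleration addresses.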
Logic flowgraph methodology - A tool for modeling embedded systems
NASA Technical Reports Server (NTRS)
Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.
1991-01-01
The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of using such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.
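A toy illustration (not LFM itself) of the backtracking idea: starting from a chosen top-level state, walk a causality model backwards to enumerate the combinations of lower-level events that can produce it. The gates and event names are hypothetical:

```python
# Causality model: each derived event maps to a gate type and its inputs.
# Events absent from the model are treated as basic (leaf) events.
MODEL = {
    "system_hang":   ("OR",  ["watchdog_dead", "deadlock"]),
    "deadlock":      ("AND", ["task_a_blocked", "task_b_blocked"]),
    "watchdog_dead": ("OR",  ["hw_timer_fault", "isr_masked"]),
}

def cut_sets(event):
    """Backtrack recursively; return the combinations of basic events
    that suffice to cause `event`."""
    if event not in MODEL:
        return [frozenset([event])]
    gate, inputs = MODEL[event]
    if gate == "OR":
        # any input alone suffices
        return [c for e in inputs for c in cut_sets(e)]
    # AND: cross-product of the inputs' cut sets
    result = [frozenset()]
    for e in inputs:
        result = [a | b for a in result for b in cut_sets(e)]
    return result

print(cut_sets("system_hang"))
```

Each returned set corresponds to one branch of the fault tree rooted at the chosen top event.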
Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-04-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probabilities of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department are 73, 64, and 95 percent, respectively, and the conveyor belt subsystem is found to be the most failure-prone. Finally, maintenance is proposed as a method to control and prevent the occurrence of failures.
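Under the common assumption of exponentially distributed times to failure, the abstract's numbers can be checked in a few lines. The subsystem probabilities are taken from the abstract; the constant failure rate and the independence of the two subsystems are our assumptions (the OR-gate combination of the two quoted subsystems alone gives roughly 90%, so the reported 95% for the department presumably includes additional basic events):

```python
import math

T = 200.0            # operating interval, hours (from the abstract)
p_crushing = 0.73    # failure probability over T (from the abstract)
p_conveyor = 0.64

def rate_from_prob(p, t):
    """Invert P = 1 - exp(-lambda * t) for the failure rate lambda."""
    return -math.log(1.0 - p) / t

lam_crushing = rate_from_prob(p_crushing, T)
lam_conveyor = rate_from_prob(p_conveyor, T)

# OR-gate (series) combination: the department fails if either
# subsystem fails; independence assumed.
p_department = 1.0 - (1.0 - p_crushing) * (1.0 - p_conveyor)
print(lam_crushing, lam_conveyor, p_department)
```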
Risk assessment techniques with applicability in marine engineering
NASA Astrophysics Data System (ADS)
Rudenko, E.; Panaitescu, F. V.; Panaitescu, M.
2015-11-01
Nowadays risk management is a carefully planned process. The task of risk management is organically woven into the general problem of increasing the efficiency of a business. Passive attitudes to risk and mere awareness of its existence are being replaced by active management techniques. Risk assessment is one of the most important stages of risk management, since managing a risk first requires analyzing and evaluating it. There are many definitions of this notion, but in the general case risk assessment refers to the systematic process of identifying the factors and types of risk and assessing them quantitatively; that is, risk analysis methodology combines mutually complementary quantitative and qualitative approaches. Purpose of the work: In this paper we consider fault tree analysis (FTA) as a risk assessment technique. The objectives are: understand the purpose of FTA, understand and apply the rules of Boolean algebra, analyse a simple system using FTA, and review the advantages and disadvantages of FTA. Research and methodology: The main purpose is to help identify potential causes of system failures before the failures actually occur, and to evaluate the probability of the top event. The steps of the analysis are: examination of the system from top to bottom, the use of symbols to represent events, the use of mathematical tools for critical areas, and the use of fault tree logic diagrams to identify the causes of the top event. Results: The study yields critical areas, fault tree logic diagrams, and the probability of the top event. These results can be used for risk assessment analyses.
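The Boolean-algebra step named in the objectives can be made concrete: reduce a gate expression to minimal cut sets with the absorption law (A + A·B = A), then evaluate the top-event probability. The event names and probabilities are hypothetical, and basic events are assumed independent:

```python
from itertools import product

def minimize(cut_sets):
    """Absorption law: drop any cut set that contains another cut set."""
    sets = [frozenset(c) for c in cut_sets]
    return [c for c in sets if not any(o < c for o in sets)]

def top_probability(cut_sets, probs):
    """Exact P(top) by enumerating all basic-event outcomes
    (fine for small trees; real tools use cut-set approximations)."""
    events = sorted({e for c in cut_sets for e in c})
    total = 0.0
    for states in product([False, True], repeat=len(events)):
        s = dict(zip(events, states))
        if any(all(s[e] for e in c) for c in cut_sets):
            w = 1.0
            for e in events:
                w *= probs[e] if s[e] else (1.0 - probs[e])
            total += w
    return total

raw = [{"pump"}, {"pump", "valve"}, {"valve", "sensor"}]
mcs = minimize(raw)                  # absorption drops {pump, valve}
p = top_probability(mcs, {"pump": 0.1, "valve": 0.2, "sensor": 0.05})
print(mcs, p)
```

Here the top event reduces to "pump OR (valve AND sensor)", and enumeration gives the same answer as the closed-form gate formulas.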
Gold, R.D.; Cowgill, E.; Arrowsmith, J.R.; Chen, X.; Sharp, W.D.; Cooper, K.M.; Wang, X.-F.
2011-01-01
The active, left-lateral Altyn Tagh fault defines the northwestern margin of the Tibetan Plateau in western China. To clarify late Quaternary temporal and spatial variations in slip rate along the central portion of this fault system (85??-90??E), we have more than doubled the number of dated offset markers along the central Altyn Tagh fault. In particular, we determined offset-age relations for seven left-laterally faulted terrace risers at three sites (Kelutelage, Yukuang, and Keke Qiapu) spanning a 140-km-long fault reach by integrating surficial geologic mapping, topographic surveys (total station and tripod-light detection and ranging [T-LiDAR]), and geochronology (radiocarbon dating of organic samples, 230Th/U dating of pedogenic carbonate coatings on buried clasts, and terrestrial cosmogenic radionuclide exposure age dating applied to quartz-rich gravels). At Kelutelage, which is the westernmost site (37.72??N, 86.67??E), two faulted terrace risers are offset 58 ?? 3 m and 48 ?? 4 m, and formed at 6.2-6.1 ka and 5.9-3.7 ka, respectively. At the Yukuang site (38.00??N, 87.87??E), four faulted terrace risers are offset 92 ?? 12 m, 68 ?? 6 m, 55 ?? 13 m, and 59 ?? 9 m and formed at 24.2-9.5 ka, 6.4-5.0 ka, 5.1-3.9 ka, and 24.2-6.4 ka, respectively. At the easternmost site, Keke Qiapu (38.08??N, 88.12??E), a faulted terrace riser is offset 33 ?? 6 m and has an age of 17.1-2.2 ka. The displacement-age relationships derived from these markers can be satisfied by an approximately uniform slip rate of 8-12 mm/yr. However, additional analysis is required to test how much temporal variability in slip rate is permitted by this data set. ?? 2011 Geological Society of America.
Using certification trails to achieve software fault tolerance
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, they are accepted as correct; otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized and illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach was compared to other approaches to fault tolerance. Because of space limitations we have omitted examples of our technique applied to the Huffman tree and convex hull problems. These can be found in the full version of this paper.
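A minimal sketch of the certification-trail idea for the minimum spanning tree, loosely following the outline above (the paper's actual trail construction may differ): the first program sorts the edges and leaves the sorted order as the trail; the second verifies the trail in linear time and re-runs Kruskal's algorithm without sorting, raising an error on any bad trail rather than returning a wrong answer:

```python
def kruskal(n, sorted_edges):
    """Kruskal's MST given edges already sorted by weight."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    weight = 0
    for w, u, v in sorted_edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            weight += w
    return weight

def phase1(n, edges):
    trail = sorted(edges)                   # the expensive O(E log E) step
    return kruskal(n, trail), trail

def phase2(n, edges, trail):
    """Re-solve using the trail; must detect any incorrect trail.
    A duplicate-free edge list is assumed for the permutation check."""
    if len(trail) != len(edges) or set(trail) != set(edges):
        raise ValueError("trail is not a permutation of the edges")
    if any(trail[i][0] > trail[i + 1][0] for i in range(len(trail) - 1)):
        raise ValueError("trail is not sorted")
    return kruskal(n, trail)                # no sort needed: O(E a(E))

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]   # (weight, u, v)
w1, trail = phase1(4, edges)
w2 = phase2(4, edges, trail)
print(w1, w2)
```

The key property from the abstract holds: a corrupted trail makes phase 2 raise an error rather than emit a wrong tree.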
NASA Astrophysics Data System (ADS)
Öztürk, S.
2018-03-01
The Eastern Anatolian Region of Turkey is one of the most seismically and tectonically active regions due to the frequent occurrence of earthquakes. Thus, the main goal of this study is to analyze the regional and temporal characteristics of seismicity in Eastern Anatolia in terms of the seismotectonic b-value, the fractal dimension Dc-value, the precursory seismic quiescence Z-value, and their interrelationships. This study also seeks to obtain a reliable empirical relation between the b and Dc-values and to evaluate the temporal changes of these parameters as they relate to the earthquake potential of the region. A more up-to-date relation of Dc = 2.55 - 0.39b is found, with a very strong negative correlation coefficient (r = -0.95), by using the orthogonal regression method. The b-values less than 1.0 and the Dc-values greater than 2.2 are observed in the Northeast Anatolian Fault Zone, the Aşkale, Erzurum, Iğdır and Çaldıran Faults, the Doğubeyazıt Fault Zone, around the Genç Fault, the western part of the Bitlis-Zagros Thrust Zone, the Pülümür and Karakoçan Faults, and the Sancak-Uzunpınar Fault Zone. In addition, regions having small b-values and large Z-values are found around the Genç, Pülümür and Karakoçan Faults as well as the Sancak-Uzunpınar Fault Zone. Remarkably, the combinations of these seismotectonic parameters could reveal the earthquake hazard potential in the Eastern Anatolian Region of Turkey, thus creating increased interest in these anomaly regions.
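Orthogonal (total least squares) regression of the kind used to derive the Dc-b relation can be sketched via the principal axis of the 2x2 covariance matrix. The data below are synthetic, generated around the abstract's relation, not the study's catalog (note the closed-form slope divides by the covariance, so it assumes the two variables are actually correlated):

```python
import math, random

def orthogonal_fit(xs, ys):
    """Slope/intercept of the principal axis of the sample covariance;
    equivalent to total least squares with equal error variances."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # eigenvector slope of [[sxx, sxy], [sxy, syy]]; requires sxy != 0
    slope = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) \
            / (2 * sxy)
    return slope, my - slope * mx

rng = random.Random(42)
b_vals = [0.6 + 0.9 * rng.random() for _ in range(300)]
dc_vals = [2.55 - 0.39 * b + rng.gauss(0, 0.01) for b in b_vals]
slope, intercept = orthogonal_fit(b_vals, dc_vals)
print(slope, intercept)
```

Unlike ordinary least squares, this fit treats b and Dc symmetrically, which is why orthogonal regression is the appropriate choice when both variables carry estimation error.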
The Role of Deep Creep in the Timing of Large Earthquakes
NASA Astrophysics Data System (ADS)
Sammis, C. G.; Smith, S. W.
2012-12-01
The observed temporal clustering of the world's largest earthquakes has been largely discounted for two reasons: a) it is consistent with Poisson clustering, and b) no physical mechanism leading to such clustering has been proposed. This lack of a mechanism arises primarily because the static stress transfer mechanism, commonly used to explain aftershocks and the clustering of large events on localized fault networks, does not work at global distances. However, there is recent observational evidence that the surface waves from large earthquakes trigger non-volcanic tremor at the base of fault zones at global distances. Based on these observations, we develop a simple non-linear coupled oscillator model that shows how the triggering of such tremor can lead to the synchronization of large earthquakes on a global scale. A basic assumption of the model is that induced tremor is a proxy for deep creep that advances the seismic cycle of the fault. We support this hypothesis by demonstrating that the 2010 Maule, Chile and 2011 Fukushima, Japan earthquakes, which have been shown to induce tremor on the Parkfield segment of the San Andreas Fault, also produce changes in off-fault seismicity that are spatially and temporally consistent with episodes of deep creep on the fault. The observed spatial pattern can be simulated using an Okada dislocation model for deep creep (below 20 km) on the fault plane, in which the slip rate decreases from north to south, consistent with surface creep measurements, and deepens south of the "Parkfield asperity", as indicated by recent tremor locations. The model predicts that the off-fault events should have reverse mechanisms, consistent with the observed topography.
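A minimal Kuramoto-style sketch of the synchronization mechanism invoked here, with two phase oscillators standing in for two fault seismic cycles: weak coupling pulls slightly different natural frequencies into phase lock. The parameters are illustrative; this is not the authors' model:

```python
import math

def integrate(omega1, omega2, k, steps=20000, dt=0.01):
    """Euler integration of two sinusoidally coupled phase oscillators;
    returns the absolute wrapped phase difference at the end."""
    th1, th2 = 0.0, 2.0                  # initial phases (radians)
    for _ in range(steps):
        d1 = omega1 + k * math.sin(th2 - th1)
        d2 = omega2 + k * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    diff = (th1 - th2 + math.pi) % (2 * math.pi) - math.pi
    return abs(diff)

uncoupled = integrate(1.00, 1.05, k=0.0)
coupled = integrate(1.00, 1.05, k=0.2)
print(uncoupled, coupled)
```

With coupling strength 2k exceeding the frequency mismatch, the pair locks near a small constant phase offset; with k = 0 the phases drift apart, which is the qualitative distinction the model exploits.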
Bodin, Paul; Bilham, Roger; Behr, Jeff; Gomberg, Joan; Hudnut, Kenneth W.
1994-01-01
Five out of six functioning creepmeters on southern California faults recorded slip triggered at the time of some or all of the three largest events of the 1992 Landers earthquake sequence. Digital creep data indicate that dextral slip was triggered within 1 min of each mainshock and that maximum slip velocities occurred 2 to 3 min later. The duration of triggered slip events ranged from a few hours to several weeks. We note that triggered slip occurs commonly on faults that exhibit fault creep. To account for the observation that slip can be triggered repeatedly on a fault, we propose that the amplitude of triggered slip may be proportional to the depth of slip in the creep event and to the available near-surface tectonic strain that would otherwise eventually be released as fault creep. We advance the notion that seismic surface waves, perhaps amplified by sediments, generate transient local conditions that favor the release of tectonic strain to varying depths. Synthetic strain seismograms are presented that suggest increased pore pressure during periods of fault-normal contraction may be responsible for triggered slip, since maximum dextral shear strain transients correspond to times of maximum fault-normal contraction.
NASA Astrophysics Data System (ADS)
Tsuji, T.; Ikeda, T.; Nimiya, H.
2017-12-01
We report spatio-temporal variations of seismic velocity around the seismogenic faults in western Japan. We mainly focus on the seismic velocity variation during (1) the 2016 Off-Mie earthquake in the Nankai subduction zone (Mw 5.8) and (2) the 2016 Kumamoto earthquake in Kyushu Island (Mw 7.0). We applied seismic interferometry and surface wave analysis to the ambient noise data recorded by Hi-net and DONET seismometers of the National Research Institute for Earth Science and Disaster Resilience (NIED). Seismic velocity near the rupture faults and the volcano decreased during the earthquakes. For example, we observed velocity reduction around the seismogenic Futagawa-Hinagu fault system and Mt. Aso in the 2016 Kumamoto earthquake. We also identified a velocity increase after the eruptions of Mt. Aso. During the 2016 Off-Mie earthquake, we observed seismic velocity variation in the Nankai accretionary prism. After the earthquakes, the seismic velocity gradually returned to its pre-earthquake value. The velocity recovery process (healing process) is caused by several mechanisms, such as pore pressure reduction, strain change, and crack sealing. By comparing the velocity variations obtained in different geologic settings (volcano, seismogenic fault, unconsolidated sediment), we discuss the mechanism of seismic velocity variation as well as the post-seismic fault healing process.
Parsons, Tom
2007-01-01
The power law distribution of earthquake magnitudes and frequencies is a fundamental scaling relationship used for forecasting. However, can its slope (b value) be used on individual faults as a stress indicator? Some have concluded that b values drop just before large shocks. Others suggested that temporally stable low b value zones identify future large-earthquake locations. This study assesses the frequency of b value anomalies portending M ≥ 4.0 shocks versus how often they do not. I investigated M ≥ 4.0 Calaveras fault earthquakes because there have been 25 over the 37-year duration of the instrumental catalog on the most active southern half of the fault. With that relatively large sample, I conducted retrospective time and space earthquake forecasts. I calculated temporal b value changes in 5-km-radius cylindrical volumes of crust that were significant at 90% confidence, but these changes were poor forecasters of M ≥ 4.0 earthquakes. M ≥ 4.0 events were as likely to happen at times of high b values as they were at low ones. However, I could not rule out a hypothesis that spatial b value anomalies portend M ≥ 4.0 events; of 20 M ≥ 4 shocks that could be studied, 6 to 8 (depending on calculation method) occurred where b values were significantly less than the spatial mean, 1 to 2 happened above the mean, and 10 to 13 occurred within 90% confidence intervals of the mean and were thus inconclusive. Thus spatial b value variation might be a useful forecast tool, but resolution is poor, even on seismically active faults.
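Temporal b-value studies of this kind typically rest on Aki's (1965) maximum-likelihood estimator with the Shi and Bolt (1982) standard error; a sketch on a synthetic Gutenberg-Richter catalog (b = 1 by construction, not Calaveras data):

```python
import math, random

def b_value(mags, mc):
    """Aki maximum-likelihood b-value with Shi & Bolt standard error.
    `mc` is the catalog completeness magnitude."""
    n = len(mags)
    mean = sum(mags) / n
    b = math.log10(math.e) / (mean - mc)
    var = sum((m - mean) ** 2 for m in mags) / (n * (n - 1))
    sigma_b = 2.30 * b * b * math.sqrt(var)
    return b, sigma_b

rng = random.Random(7)
mc = 2.0
beta = 1.0 * math.log(10)            # b = 1 in natural-log form
# Gutenberg-Richter magnitudes above mc are exponentially distributed
mags = [mc - math.log(1.0 - rng.random()) / beta for _ in range(5000)]
b, sigma = b_value(mags, mc)
print(b, sigma)
```

Comparing b estimates from two volumes or time windows against their combined standard errors is the usual basis for the significance tests described in the abstract.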
Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Bilham, Roger; Ponce, David A.; Boatwright, John; Caskey, S. John
2012-01-01
The Hayward fault (HF) in California exhibits large (Mw 6.5–7.1) earthquakes with short recurrence times (161±65 yr), probably kept short by a 26%–78% aseismic release rate (including postseismic). Its interseismic release rate varies locally over time, as we infer from many decades of surface creep data. Earliest estimates of creep rate, primarily from infrequent surveys of offset cultural features, revealed distinct spatial variation in rates along the fault, but no detectable temporal variation. Since the 1989 Mw 6.9 Loma Prieta earthquake (LPE), monitoring on 32 alinement arrays and 5 creepmeters has greatly improved the spatial and temporal resolution of creep rate. We now identify significant temporal variations, mostly associated with local and regional earthquakes. The largest rate change was a 6‐yr cessation of creep along a 5‐km length near the south end of the HF, attributed to a regional stress drop from the LPE, ending in 1996 with a 2‐cm creep event. North of there near Union City starting in 1991, rates apparently increased by 25% above pre‐LPE levels on a 16‐km‐long reach of the fault. Near Oakland in 2007 an Mw 4.2 earthquake initiated a 1–2 cm creep event extending 10–15 km along the fault. Using new better‐constrained long‐term creep rates, we updated earlier estimates of depth to locking along the HF. The locking depths outline a single, ∼50‐km‐long locked or retarded patch with the potential for an Mw∼6.8 event equaling the 1868 HF earthquake. We propose that this inferred patch regulates the size and frequency of large earthquakes on HF.
Temporal Association Between Nonfatal Self-Directed Violence and Tree and Grass Pollen Counts.
Jeon-Slaughter, Haekyung; Claassen, Cynthia A; Khan, David A; Mihalakos, Perry; Lee, Kevin B; Brown, E Sherwood
2016-09-01
Prior research suggests a possible association between pollen and suicide. No studies have examined the relationship between pollen and attempted suicide. This study examines the temporal association between airborne pollen counts and nonfatal suicidal and nonsuicidal self-directed violence (SDV) requiring an emergency department visit. Data on daily emergency department visits due to nonfatal SDV as identified by ICD-9 diagnosis criteria were extracted from emergency department medical records of Parkland Memorial Hospital in Dallas, Texas, between January 2000 and December 2003. Concurrent daily airborne tree, grass, and ragweed pollen data from the city of Dallas were extracted from the National Allergy Bureau online database. The data were analyzed using the time series method of generalized autoregressive conditional heteroskedasticity. There were statistically significant and positive temporal associations between tree pollen counts and the number of nonfatal SDV events among women (P = .04) and between grass pollen counts and number of nonfatal SDV events among both men (P = .03) and women (P < .0001). There was no significant temporal association found between ragweed pollen counts and number of nonfatal SDV events. The study findings suggest that an increase in nonfatal SDV is associated with changes in tree and grass pollen counts. This is the first study that has examined an association between seasonal variation in tree and grass pollen levels and nonfatal SDV event data. The study also used a narrowly defined geographic area and temporal window. The findings suggest that pollen count may be a factor influencing seasonal patterns in suicidal behavior. © Copyright 2016 Physicians Postgraduate Press, Inc.
NASA Astrophysics Data System (ADS)
Urata, Yumi; Kuge, Keiko; Kase, Yuko
2008-11-01
To understand the role of fluid in earthquake rupture processes, we investigated the effects of thermal pressurization on the spatial variation of dynamic rupture by computing spontaneous rupture propagation on a rectangular fault. We found that thermal pressurization can cause heterogeneity of rupture even on a fault with uniform properties. On drained faults, tractions drop linearly with increasing slip in the same way everywhere. However, by changing the drained condition to an undrained one, the slip-weakening curves become non-linear and depend on location for faults with small shear zone thickness w, and the dynamic frictional stresses vary spatially and temporally. Consequently, the super-shear transition fault length decreases for small w, and the final slip distribution can have some peaks regardless of w, especially on undrained faults. These effects should be taken into account when determining dynamic rupture parameters and modeling earthquake cycles when the presence of fluid is suggested in the source regions.
NASA Astrophysics Data System (ADS)
Khoshmanesh, M.; Shirzaei, M.
2017-12-01
Recent seismic and geodetic observations indicate that interseismic creep rate varies in both time and space. The spatial extent of creep determines the earthquake potential, while its temporal evolution, known as slow slip events (SSEs), may trigger earthquakes. Although the conditions promoting fault creep are well established, the mechanism initiating self-sustaining and sometimes cyclic creep events is enigmatic. Here we investigate a time series of 19 years of surface deformation measured by radar interferometry between 1992 and 2011 along the Central San Andreas Fault (CSAF) to constrain the temporal evolution of creep. We show that the creep rate along the CSAF has a sporadic behavior, quantified with a Gumbel-like probability distribution characterized by a longer tail toward extreme positive rates, which is a signature of burst-like creep dynamics. Defining creep avalanches as clusters of isolated creep with rates exceeding the shearing rate of the tectonic plates, we investigate the statistical properties of their size and length. We show that, similar to the frequency-magnitude distribution of seismic events, the distribution of potency estimated for creep avalanches along the CSAF follows a power law, dictated by the distribution of their along-strike lengths. We further show that an ensemble of concurrent creep avalanches, which aseismically rupture isolated fault compartments, forms the semi-periodic SSEs observed along the CSAF. Using a rate-and-state friction model, we show that normal stress is temporally variable on the fault, and we support this using seismic observations. We propose that, through a self-sustaining fault-valve behavior, compaction-induced elevation of pore pressure within hydraulically isolated fault compartments, followed by frictional dilation, is the cause of the observed episodic SSEs.
We further suggest that the 2004 Parkfield Mw 6 earthquake may have been triggered by an SSE on the adjacent creeping segment, which increased the Coulomb failure stress by up to 0.45 bar/yr. While creeping segments have been suggested to act as barriers and arrest rupture, our study implies that SSEs in these zones may trigger seismic events on adjacent locked parts.
W. Beltran; Joseph Wunderle Jr.
2014-01-01
The seasonal dynamics of foliage arthropod populations are poorly studied in tropical dry forests despite the importance of these studies for understanding arthropod population responses to environmental change. We monitored the abundance, temporal distributions, and body size of arthropods in five naturalized alien and one native tree species to characterize arthropod...
Machine Learning of Fault Friction
NASA Astrophysics Data System (ADS)
Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.
2017-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprising fault blocks surrounding fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load), and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick-slip and slow-slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774. Rouet-Leduc, B.
et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field. Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
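A pure-Python sketch of the gradient-boosted-trees idea described above: boosting depth-1 regression trees (stumps) to map a statistical feature of the acoustic signal to shear stress. The data are synthetic, and the feature choice (AE variance over a window) is only an assumption for illustration; the authors' pipeline uses full Random Forest and boosted-tree ensembles:

```python
import random

def fit_stump(xs, ys):
    """Best single-split regression stump by squared error."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for k in range(1, len(xs)):
        thr = xs[order[k]]
        left = [ys[i] for i in order[:k]]
        right = [ys[i] for i in order[k:]]
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - lmean) ** 2 for y in left)
               + sum((y - rmean) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, thr, lmean, rmean)
    _, thr, lmean, rmean = best
    return lambda x: lmean if x < thr else rmean

def boost(xs, ys, rounds=50, lr=0.3):
    """Gradient boosting for squared loss: each stump fits the residual."""
    stumps, residual = [], list(ys)
    for _ in range(rounds):
        s = fit_stump(xs, residual)
        stumps.append(s)
        residual = [r - lr * s(x) for x, r in zip(xs, residual)]
    return lambda x: sum(lr * s(x) for s in stumps)

rng = random.Random(3)
ae_var = [rng.uniform(0.0, 1.0) for _ in range(200)]       # AE feature
stress = [2.0 + 3.0 * v + rng.gauss(0, 0.1) for v in ae_var]
model = boost(ae_var, stress)
mse = sum((model(x) - y) ** 2 for x, y in zip(ae_var, stress)) / 200
print(mse)
```

Feature importances in real libraries come from how often (and how profitably) a feature is chosen at splits like the one in `fit_stump`, which is how the "important features linked to fault friction" in the abstract are identified.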
Vergara, Pablo M.; Soto, Gerardo E.; Rodewald, Amanda D.; Meneses, Luis O.; Pérez-Hernández, Christian G.
2016-01-01
Theoretical models predict that animals should make foraging decisions after assessing the quality of available habitat, but most models fail to consider the spatio-temporal scales at which animals perceive habitat availability. We tested three foraging strategies that explain how Magellanic woodpeckers (Campephilus magellanicus) assess the relative quality of trees: 1) Woodpeckers with local knowledge select trees based on the available trees in the immediate vicinity. 2) Woodpeckers lacking local knowledge select trees based on their availability at previously visited locations. 3) Woodpeckers using information from long-term memory select trees based on knowledge about trees available within the entire landscape. We observed foraging woodpeckers and used a Brownian Bridge Movement Model to identify trees available to woodpeckers along foraging routes. Woodpeckers selected trees with a later decay stage than available trees. Selection models indicated that preferences of Magellanic woodpeckers were based on clusters of trees near the most recently visited trees, thus suggesting that woodpeckers use visual cues from neighboring trees. In a second analysis, Cox’s proportional hazards models showed that woodpeckers used information consolidated across broader spatial scales to adjust tree residence times. Specifically, woodpeckers spent more time at trees with larger diameters and in a more advanced stage of decay than trees available along their routes. These results suggest that Magellanic woodpeckers make foraging decisions based on the relative quality of trees that they perceive and memorize information at different spatio-temporal scales. PMID:27416115
NASA Astrophysics Data System (ADS)
Gold, P. O.; Cowgill, E.; Kreylos, O.
2010-12-01
Measurements derived from high-resolution terrestrial LiDAR (t-LiDAR) surveys of landforms displaced during the 16 December 1954 Mw 6.8 Dixie Valley earthquake in central Nevada confirm the absence of historical strike slip north of latitude 39.5°N. This conclusion has implications for the effect of stress changes on the spatial and temporal evolution of the central Nevada seismic belt. The Dixie Valley fault is a low-angle, east-dipping, range-bounding normal fault located in the central-northern reach of the central Nevada seismic belt (CNSB), a ~N-S trending group of historical ruptures that may represent a migration of northwest-trending right-lateral Pacific-North American plate motion into central Nevada. Migration of a component of right slip eastward from the eastern California shear zone/Walker Lane to the CNSB is supported by the presence of pronounced right-lateral motion observed in most of the CNSB earthquakes south of the Dixie Valley fault and by GPS data spanning the CNSB. Such eastward migration and northward propagation of right slip into the CNSB predicts a component of lateral slip on the Dixie Valley fault. However, landform offsets have previously been reported to indicate only purely normal slip in the 1954 Dixie Valley event. To check the direction of motion during the Dixie Valley earthquake using higher precision methods than previously employed, we collected t-LiDAR data to quantify displacements of two well-preserved debris flow chutes separated along strike by ~10 km and at locations where the local fault strike diverges by >10° from the regional strike. Our highest-confidence measurements yield a horizontal slip vector azimuth of ~107° at both sites, orthogonal to the average regional fault strike of ~17°, and we find no compelling evidence for regional lateral motion in our other measurements.
This result indicates that continued northward propagation of right-lateral slip from its diffuse termination at the northern end of the 1954 Fairview Peak event (4 minutes before the Dixie Valley event) and the Rainbow Mountain-Stillwater events six months earlier must be accommodated by some other mechanism. We see several options for the spatial and temporal evolution of right-slip propagation into the northern CNSB. 1) Lateral motion may be accommodated to the east by faults opposite the Dixie Valley fault along the base of the Clan Alpine range, or to the west by faults at the western base of the Stillwater range; diffuse faults to the SW and SE of the Dixie Valley fault that also ruptured in 1954 accommodated right slip and could represent a westward and/or eastward migration of lateral motion. 2) Right-lateral motion may activate an as yet unrecognized fault within Dixie Valley. 3) The Dixie Valley fault may be reactivated with a greater component of lateral slip in response to changes in stress, a phenomenon that has been recognized on the Borrego fault in northern Mexico between the penultimate event and the recent 4 April 2010 El Mayor-Cucapah earthquake.
Investigating the creeping section of the San Andreas Fault using ALOS PALSAR interferometry
NASA Astrophysics Data System (ADS)
Agram, P. S.; Wortham, C.; Zebker, H. A.
2010-12-01
In recent years, time-series InSAR techniques have been used to study the temporal characteristics of various geophysical phenomena that produce surface deformation, including earthquakes and magma migration in volcanoes. Conventional InSAR and time-series InSAR techniques have also been used successfully to study aseismic creep across faults in urban areas such as the northern Hayward Fault in California [1-3]. However, application of these methods to studying time-dependent creep across the Central San Andreas Fault using the C-band ERS and Envisat radar satellites has met with limited success. While these techniques estimate the average long-term far-field deformation rates reliably, creep measurement close to the fault (< 3-4 km) is virtually impossible due to heavy decorrelation at C-band (6 cm wavelength). Shanker and Zebker (2009) [4] used the Persistent Scatterer (PS) time-series InSAR technique to estimate a time-dependent non-uniform creep signal across a section of the creeping segment of the San Andreas Fault. However, the identified PS network was too sparse spatially (about 1 per sq. km) to study the temporal characteristics of deformation in areas close to the fault. In this work, we use L-band (24 cm wavelength) SAR data from the PALSAR instrument on board the ALOS satellite, launched by the Japanese Aerospace Exploration Agency (JAXA) in 2006, to study the temporal characteristics of creep across the Central San Andreas Fault. The longer wavelength at L-band improves observed correlation over the entire scene, which significantly increases the ground-area coverage of estimated deformation in each interferogram, but at the cost of decreased sensitivity of the interferometric phase to surface deformation. However, noise levels in our deformation estimates can be decreased by combining information from multiple SAR acquisitions using time-series InSAR techniques.
We analyze 13 SAR acquisitions spanning the period from March 2007 to December 2009 using the Small Baseline Subset (SBAS) time-series InSAR technique [3]. We present detailed comparisons of the estimated time series of fault creep as a function of position along the fault, including the locked section around Parkfield, CA. We also present comparisons between the InSAR time series and GPS network observations in the Parkfield region. Over these three years of observation, the average fault creep is estimated to be 35 mm/yr. References: [1] Bürgmann, R., E. Fielding, and J. Sukhatme, Slip along the Hayward fault, California, estimated from space-based synthetic aperture radar interferometry, Geology, 26, 559-562, 1998. [2] Ferretti, A., C. Prati, and F. Rocca, Permanent Scatterers in SAR Interferometry, IEEE Trans. Geosci. Remote Sens., 39, 8-20, 2001. [3] Lanari, R., F. Casu, M. Manzo, and P. Lundgren, Application of the SBAS D-InSAR technique to fault creep: A case study of the Hayward Fault, California, Remote Sensing of Environment, 109(1), 20-28, 2007. [4] Shanker, A. P., and H. Zebker, Edgelist phase unwrapping algorithm for time-series InSAR, J. Opt. Soc. Am. A, 37(4), 2010.
1983-04-01
tolerances or spacing ... diagnostic/fault isolation devices ... operation of cannibalization point ... Why sustain materiel ... with diagnostic software based on a "fault tree" representation of the M65 ... to bridge the gap in diagnostics capability was demonstrated in 1980 and ... IFF (identification friend or foe), which has much lower reliability than TSQ-73-peculiar hardware. Thus, as in other examples, reported readiness does not reflect
AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment
2014-10-01
Analysis Generator 27 3.2.3 Mapping to OpenFTA Format File 27 3.2.4 Mapping to Generic XML Format 28 3.2.5 AADL and FTA Mapping Rules 28 3.2.6 Issues ... PSSA), System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs). The
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used. These methods can detect rolling bearing failures effectively, but achieving good detection results often requires a large number of training samples. Motivated by this, a novel clustering method is proposed in this paper. The method is able to find the correct number of clusters automatically, and its effectiveness is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect fault types from small samples, while also achieving relatively high accuracy on massive samples.
Fault Analysis on Bevel Gear Teeth Surface Damage of Aeroengine
NASA Astrophysics Data System (ADS)
Cheng, Li; Chen, Lishun; Li, Silu; Liang, Tao
2017-12-01
To address the problem of bevel gear tooth surface damage in an aero-engine, a fault tree for the damage was constructed from the logical relations among possible events and the candidate causes were analyzed. Scanning electron microscopy, energy spectrum analysis, metallographic examination, hardness measurement, and other techniques were used to investigate the spalled gear tooth. The results showed that the material composition, metallographic structure, micro-hardness, and carburization depth of the faulty bevel gear met the technical requirements. Contact fatigue spalling caused the tooth surface damage, and the main cause was the small interference fit between the accessory gearbox mounting hole and the driving bevel gear bearing seat. Improvement measures were proposed and subsequently verified to be effective.
NASA Astrophysics Data System (ADS)
Moyer, P. A.; Boettcher, M. S.; McGuire, J. J.; Collins, J. A.
2017-12-01
During the last five seismic cycles on the Gofar transform fault on the East Pacific Rise, the largest earthquakes (6.0 ≤ Mw ≤ 6.2) have repeatedly ruptured the same fault segment (rupture asperity), while intervening fault segments host swarms of microearthquakes. Previous studies on Gofar have shown that these segments of low (≤10%) seismic coupling contain diffuse zones of seismicity and P-wave velocity reduction compared with the rupture asperity, suggesting that heterogeneous fault properties control earthquake behavior. We investigate the role that systematic differences in material properties play in earthquake rupture along Gofar using waveforms from ocean bottom seismometers that recorded the end of the 2008 Mw 6.0 seismic cycle. We determine stress drop for 117 earthquakes (2.4 ≤ Mw ≤ 4.2) that occurred in and between rupture asperities, using corner frequencies derived with an empirical Green's function spectral ratio method and seismic moments obtained by fitting the omega-square source model to the low-frequency amplitude of the earthquake spectra. We find stress drops from 0.03 to 2.7 MPa with significant spatial variation, including a factor-of-2 higher average stress drop in the rupture asperity compared to fault segments with low seismic coupling. We interpret an inverse correlation between stress drop and P-wave velocity reduction as the effect of damage on earthquake rupture. Earthquakes with higher stress drops occur in the more intact crust of the rupture asperity, while earthquakes with lower stress drops occur in regions of low seismic coupling and reflect lower-strength, highly fractured fault zone material. We also observe a temporal control on stress drop consistent with log-time healing following the Mw 6.0 mainshock, suggesting a decrease in stress drop as a result of fault zone damage caused by the large earthquake.
Revised seismic hazard map for the Kyrgyz Republic
NASA Astrophysics Data System (ADS)
Fleming, Kevin; Ullah, Shahid; Parolai, Stefano; Walker, Richard; Pittore, Massimiliano; Free, Matthew; Fourniadis, Yannis; Villiani, Manuela; Sousa, Luis; Ormukov, Cholponbek; Moldobekov, Bolot; Takeuchi, Ko
2017-04-01
As part of a seismic risk study sponsored by the World Bank, a revised seismic hazard map for the Kyrgyz Republic has been produced using the OpenQuake engine developed by the Global Earthquake Model Foundation (GEM). In this project, an earthquake catalogue spanning the period from 250 BCE to 2014 was compiled and processed through spatial and temporal declustering tools. The territory of the Kyrgyz Republic was divided into 31 area sources defined on the basis of local seismicity, covering in total an area extending 200 km beyond the border. The results are presented in terms of Peak Ground Acceleration (PGA). In addition, macroseismic intensity estimates, making use of recent intensity prediction equations, were also provided, given that this measure is still widely used in Central Asia. In order to accommodate the associated epistemic uncertainty, three ground motion prediction equations were used in a logic tree structure. A set of representative earthquake scenarios was further identified based on historical data and the nature of the considered faults. The resulting hazard map, as expected, follows the country's seismicity, with the highest levels of hazard in the northeast, south and southwest of the country, and an elevated region around the centre. When considering PGA, the hazard is slightly greater for major urban centres than in previous works (e.g., Abdrakhmatov et al., 2003), although the macroseismic intensity estimates are lower than in previous studies, e.g., Ulomov (1999). For the scenario assessments, the examples that most affect the urban centres assessed are the Issyk Ata fault (in particular for Bishkek), the Chilik and Kemin faults (in particular Balykchy and Karakol), the Ferghana Valley fault system (in particular Osh, Jalal-Abad and Uzgen), the Oinik Djar fault (Naryn) and the central and western Talas-Ferghana fault (Talas).
Finally, while site effects (in particular, those dependent on the uppermost geological structure) have an obvious effect on the final hazard level, this is still not fully accounted for, even though a nationwide first-order Vs30 model (i.e., from the USGS) is available. Abdrakhmatov, K., Havenith, H.-B., Delvaux, D., Jongsmans, D. and Trefois, P. (2003) Probabilistic PGA and Arias Intensity maps of Kyrgyzstan (Central Asia), Journal of Seismology, 7, 203-220. Ulomov, V.I., The GSHAP Region 7 working group (1999) Seismic hazard of Northern Eurasia, Annali di Geofisica, 42, 1012-1038.
Goal-Function Tree Modeling for Systems Engineering and Fault Management
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Breckenridge, Jonathan T.
2013-01-01
This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT and its ability to represent both nominal and off-nominal goals enable it to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and the definition of system failure scenarios.
Risk management of PPP project in the preparation stage based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Xing, Yuanzhi; Guan, Qiuling
2017-03-01
Risk management of a PPP (Public Private Partnership) project can improve the level of risk control between government departments and private investors, leading to more beneficial decisions, reduced investment losses and mutual benefit. This paper therefore takes the risks of the PPP project preparation stage as its research object, identifying and confirming four types of risk. Fault tree analysis (FTA) is used to evaluate the risk factors belonging to different parts and to quantify the degree of influence of each risk on the basis of the risk identification. In addition, the order of importance of the risk factors is determined by calculating structural importance for the preparation stage. The results show that the accuracy of government decision-making, the rationality of private investors' fund allocation and the instability of market returns are the main factors generating shared risk on the project.
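The gate arithmetic behind this kind of quantitative FTA can be sketched in a few lines: assuming statistically independent basic events, an AND gate multiplies probabilities, while an OR gate works on complements. The tree shape and the probabilities below are hypothetical illustrations, not values from the paper.

```python
# Minimal quantitative fault-tree evaluation (hypothetical example).
# Assumes statistically independent basic events.

def p_and(*ps):
    """Probability that all input events occur (AND gate)."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """Probability that at least one input event occurs (OR gate)."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical preparation-stage fault tree: the top event occurs if
# decision risk A occurs, or if funding risks B and C occur together.
p_top = p_or(0.10, p_and(0.20, 0.30))
print(round(p_top, 4))  # -> 0.154
```

Ranking risk factors then amounts to recomputing `p_top` with each basic-event probability perturbed and comparing the changes.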
Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1994-01-01
The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
Enterprise architecture availability analysis using fault trees and stakeholder interviews
NASA Astrophysics Data System (ADS)
Närman, Per; Franke, Ulrik; König, Johan; Buschle, Markus; Ekstedt, Mathias
2014-01-01
The availability of enterprise information systems is a key concern for many organisations. This article describes a method for availability analysis based on Fault Tree Analysis and constructs from the ArchiMate enterprise architecture (EA) language. To test the quality of the method, several case studies within the banking and electrical utility industries were performed. Input data were collected through stakeholder interviews. The results from the case studies were compared with logged availability data to determine the accuracy of the method's predictions. In the five cases where accurate log data were available, the yearly downtime estimates were within eight hours of the actual downtimes. The cost of performing the analysis was low; no case study required more than 20 man-hours of work, making the method ideal for practitioners with an interest in obtaining rapid availability estimates of their enterprise information systems.
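The core calculation in such an availability analysis can be sketched as follows: each component's steady-state availability follows from its mean time to failure and mean time to repair, and a series structure (where every component must be up) multiplies availabilities. The component figures below are assumptions for illustration, not data from the case studies.

```python
# Yearly downtime estimate for a series system (hypothetical figures).
# Steady-state availability of one component: A = MTTF / (MTTF + MTTR).
# In a series structure every component must be up, so availabilities multiply.

HOURS_PER_YEAR = 8760.0

def availability(mttf_h, mttr_h):
    return mttf_h / (mttf_h + mttr_h)

def series_availability(components):
    a = 1.0
    for mttf, mttr in components:
        a *= availability(mttf, mttr)
    return a

# Two hypothetical components: an application server and a database,
# each failing about once per 1000 h and taking 2 h to repair.
components = [(1000.0, 2.0), (1000.0, 2.0)]
a_sys = series_availability(components)
downtime_h = (1.0 - a_sys) * HOURS_PER_YEAR
print(round(downtime_h, 1))  # expected yearly downtime in hours (~35 h)
```

Redundant (parallel) structures would instead multiply unavailabilities, which is where the fault-tree OR/AND distinction enters.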
Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A
2001-10-12
As conventionally done, strategies for incorporating accident-prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps, risk assessment and hazard reduction (or safety) measures, are not linked interactively in the existing methodologies. This prevents a quantitative assessment of the impacts of safety measures on risk control. We have made an attempt to develop a methodology in which the risk assessment steps are interactively linked with the implementation of safety measures. The resultant system tells us the extent to which risk is reduced by each successive safety measure. It also indicates, based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA), whether a given unit can ever be made 'safe'. The application of the methodology is illustrated with a case study.
Uncertainty analysis in fault tree models with dependent basic events.
Pedroni, Nicola; Zio, Enrico
2013-06-01
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects of objective and epistemic dependences on the TE probability. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present). © 2012 Society for Risk Analysis.
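The Fréchet bounds mentioned in the abstract give best- and worst-case gate probabilities when the dependence between two events is unknown. A minimal interval-propagation sketch (the basic-event probabilities are hypothetical, and the six-BE tree of the paper is reduced to single gates):

```python
# Propagating Fréchet bounds through fault-tree gates when the dependence
# between basic events is unknown. Probabilities are hypothetical.

def and_bounds(a, b):
    """Bounds on P(A and B), given intervals a = (lo, hi), b = (lo, hi)."""
    return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

def or_bounds(a, b):
    """Bounds on P(A or B), given intervals a = (lo, hi), b = (lo, hi)."""
    return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

be1 = (0.1, 0.1)   # point-valued basic-event probabilities
be2 = (0.2, 0.2)

# AND spans "mutually exclusive" (0.0) up to "fully nested" (0.1);
# OR spans the complementary extremes.
print(and_bounds(be1, be2))                               # (0.0, 0.1)
print(tuple(round(v, 3) for v in or_bounds(be1, be2)))    # (0.2, 0.3)
```

Nesting these interval gates over a whole tree yields the kind of TE probability envelope the paper compares against the independent-events point estimate.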
A fault tree model to assess probability of contaminant discharge from shipwrecks.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I
2014-11-15
Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Qualitative Importance Measures of Systems Components - A New Approach and Its Applications
NASA Astrophysics Data System (ADS)
Chybowski, Leszek; Gawdzińska, Katarzyna; Wiśnicki, Bogusz
2016-12-01
The paper presents an improved methodology for analysing the qualitative importance of components in the functional and reliability structures of a system. We present basic importance measures, i.e. Birnbaum's structural measure, the order of the smallest minimal cut set, the repetition count of the i-th event in the fault tree, and the streams measure. A subsystem of circulation pumps and fuel heaters in the main engine fuel supply system of a container vessel illustrates the qualitative importance analysis. We constructed a functional model and a fault tree, which we analysed using qualitative measures. Additionally, we compared the calculated measures and introduced corrected measures as a tool for improving the analysis. We proposed scaled measures and a common measure taking into account the location of the component in the reliability and functional structures. Finally, we proposed areas where the measures could be applied.
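Birnbaum's structural measure, the first measure listed above, can be computed by brute force for small systems: a component is critical in a state vector of the other components if toggling it toggles the top event. The three-component structure function below is a hypothetical example, not the vessel subsystem from the paper.

```python
# Birnbaum's structural importance by exhaustive enumeration.
# Hypothetical three-component system: TOP = x3 OR (x1 AND x2),
# where x_i = 1 means basic event i has occurred.
from itertools import product

def phi(x1, x2, x3):
    """Structure function: 1 if the top event occurs."""
    return 1 if x3 or (x1 and x2) else 0

def structural_importance(i, n=3):
    """Fraction of states of the other components in which component i is critical."""
    critical = 0
    for states in product((0, 1), repeat=n - 1):
        up = list(states)
        up.insert(i, 1)       # basic event i occurs
        down = list(states)
        down.insert(i, 0)     # basic event i does not occur
        if phi(*up) != phi(*down):
            critical += 1
    return critical / 2 ** (n - 1)

print([structural_importance(i) for i in range(3)])  # -> [0.25, 0.25, 0.75]
```

The ranking agrees with the smallest-minimal-cut-set measure here: {x3} is an order-1 minimal cut set, so x3 dominates the components of the order-2 cut set {x1, x2}.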
Schwartz, D.P.; Pantosti, D.; Okumura, K.; Powers, T.J.; Hamilton, J.C.
1998-01-01
Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. 
Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that lower slip sections of large events such as 1906 must fill in on a periodic basis with smaller and more frequent earthquakes.
Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids.
Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong
2017-04-28
Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability.
Moran, Michael J.; Wilson, Jon W.; Beard, L. Sue
2015-11-03
Several major faults, including the Salt Cedar Fault and the Palm Tree Fault, play an important role in the movement of groundwater. Groundwater may move along these faults and discharge where faults intersect volcanic breccias or fractured rock. Vertical movement of groundwater along faults is suggested as a mechanism for the introduction of heat energy present in groundwater from many of the springs. Groundwater altitudes in the study area indicate a potential for flow from Eldorado Valley to Black Canyon although current interpretations of the geology of this area do not favor such flow. If groundwater from Eldorado Valley discharges at springs in Black Canyon then the development of groundwater resources in Eldorado Valley could result in a decrease in discharge from the springs. Geology and structure indicate that it is not likely that groundwater can move between Detrital Valley and Black Canyon. Thus, the development of groundwater resources in Detrital Valley may not result in a decrease in discharge from springs in Black Canyon.
NASA Astrophysics Data System (ADS)
Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali
2017-07-01
The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis, but assessing fault severity is a challenging task. The Wavelet Transform (WT), as a multiresolution analysis tool, is able to compromise between the time and frequency information in a signal and also serves as a de-noising method. The CWT scaling function gives different resolutions to discretely sampled signals, with very fine resolution at lower scales but coarser resolution at higher scales; however, the computational cost increases because different signal resolutions must be produced. The DWT has a lower computational cost, as the dilation function allows the signal to be decomposed through a tree of low- and high-pass filters without further analysis of the high-frequency components. In this paper, a method for bearing fault identification is presented that combines the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT) with envelope analysis for bearing fault diagnosis. The experimental data were provided by Case Western Reserve University. The analysis results showed that the proposed method is effective in detecting bearing faults and identifying the exact fault location and severity, especially for inner race and outer race faults.
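As an illustration of the envelope-analysis step in such a scheme, the sketch below recovers a 20 Hz modulation (standing in for a bearing fault frequency) from a simulated amplitude-modulated carrier by rectification, smoothing, and an FFT, and shows the one-level Haar split that starts the DWT filter tree. All signal parameters are hypothetical; this is not the paper's pipeline.

```python
# Envelope analysis of a simulated amplitude-modulated vibration signal,
# plus a one-level Haar DWT split (all parameters are hypothetical).
import numpy as np

fs, n = 2048, 2048                       # 1 s of data -> 1 Hz FFT resolution
t = np.arange(n) / fs
fault_hz, carrier_hz = 20.0, 256.0       # modulation ("fault") and carrier
x = (1.0 + 0.5 * np.cos(2 * np.pi * fault_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

# One-level Haar DWT: low-pass (approximation) and high-pass (detail) branches
# of the filter tree mentioned in the abstract.
approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)

# Envelope detection: rectify, then smooth with a 16-sample moving average,
# whose spectral nulls (multiples of 128 Hz) suppress the rectified-carrier
# harmonics at 512 Hz and above.
envelope = np.convolve(np.abs(x), np.ones(16) / 16.0, mode="same")

# Envelope spectrum: the dominant non-DC line sits at the modulation frequency.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(n, 1.0 / fs)
print(freqs[1 + np.argmax(spectrum[1:])])  # -> 20.0
```

In a real diagnosis the envelope spectrum peak would be compared against the bearing's characteristic defect frequencies (inner race, outer race, rolling element) computed from its geometry.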
Biophysical control of whole tree transpiration under an urban environment in Northern China
Lixin Chen; Zhiqiang Zhang; Zhandong Li; Jianwu Tang; Peter Caldwell; et al
2011-01-01
Urban reforestation in China has led to increasing debate about the impact of urban trees and forests on water resources. Although transpiration is the largest water flux leaving terrestrial ecosystems, little is known regarding whole tree transpiration in urban environments. In this study, we quantified urban tree transpiration at various temporal scales and examined...
Experimental evaluation of the certification-trail method
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. A comprehensive attempt to assess the performance and overall value of the method experimentally is reported. The method is applied to algorithms for the following problems: Huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which the certification-trail approach allows significantly faster overall program execution than a basic time-redundancy approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed. In both cases, the algorithm that performs answer validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis enabling comparison between the certification-trail method and the time-redundancy approach are presented. The analysis reveals some substantial and sometimes surprising advantages for the certification-trail method. Finally, the work our group performed on the design and implementation of fault-injection testbeds for experimental analysis of the certification-trail technique is discussed. This work employs two distinct methodologies: software fault injection (modification of instruction, data, and stack segments of programs on a Sun Sparcstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed at logical zero/one values). Our results indicate the viability of the certification-trail technique, and we believe the tools developed provide a solid base for additional exploration.
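The certification-trail idea can be illustrated on sorting, one of the problems listed above: the primary computation emits its answer together with a trail (here, a permutation of indices), and an independent, much simpler checker validates the answer in linear time. This example is mine, not code from the paper.

```python
# Certification-trail sketch for sorting (illustrative, not the paper's code):
# the primary computation outputs the answer plus a trail (a permutation),
# and an independent, simpler checker validates the answer in O(n).

def sort_with_trail(data):
    """Primary computation: sorted output plus the index permutation used."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    return [data[i] for i in trail], trail

def check_with_trail(data, output, trail):
    """Checker: accept only if output is data reordered by a valid permutation
    and is non-decreasing. Runs in linear time, unlike re-sorting."""
    n = len(data)
    if len(output) != n or len(trail) != n:
        return False
    seen = [False] * n
    for i in trail:                      # trail must be a permutation of 0..n-1
        if not 0 <= i < n or seen[i]:
            return False
        seen[i] = True
    if any(output[k] != data[trail[k]] for k in range(n)):
        return False                     # output must match data via the trail
    return all(output[k] <= output[k + 1] for k in range(n - 1))

data = [3, 1, 2]
out, trail = sort_with_trail(data)
print(check_with_trail(data, out, trail))        # True for a correct run
print(check_with_trail(data, [1, 2, 2], trail))  # False for a corrupted answer
```

A transient fault that corrupts either the output or the trail is caught by the cheap checker, which is the source of the speedup over running the full computation twice.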
Investigation of Fuel Oil/Lube Oil Spray Fires On Board Vessels. Volume 3.
1998-11-01
U.S. Coast Guard Research and Development Center, 1082 Shennecossett Road, Groton, CT 06340-6096, Report No. CG-D-01-99, III, Investigation of Fuel ... refinery). Developed the technical and mathematical specifications for BRAVO™ 2.0, a state-of-the-art Windows program for performing event tree and fault tree analyses. Also managed the development of and prepared the technical specifications for QRA ROOTS™, a Windows program for storing, searching ...
1992-01-01
boost plenum which houses the camshaft. The compressed mixture is metered by a throttle to the intake valves of the engine. The engine is constructed from ... difficulties associated with a time-tagged fault tree. In particular, recent work indicates that the multi-layer perceptron architecture can give good FDI ... Abstract: In the past decade, wastepaper recycling has gained wider acceptance. Depletion of tree stocks, waste water treatment demands and
Interim reliability evaluation program, Browns Ferry 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1981-01-01
Probabilistic risk analysis techniques, i.e., event tree and fault tree analysis, were utilized to provide a risk assessment of the Browns Ferry Nuclear Plant Unit 1. Browns Ferry 1 is a General Electric boiling water reactor of the BWR 4 product line with a Mark 1 (drywell and torus) containment. Within the guidelines of the IREP Procedure and Schedule Guide, dominant accident sequences that contribute to public health and safety risks were identified and grouped according to release categories.
[Research on monitoring land subsidence in Beijing plain area using PS-InSAR technology].
Gu, Zhao-Qin; Gong, Hui-Li; Zhang, You-Quan; Lu, Xue-Hui; Wang, Sa; Wang, Rong; Liu, Huan-Huan
2014-07-01
In the present paper, the authors use the permanent scatterers synthetic aperture radar interferometry (PS-InSAR) technique and 29 Envisat acquisitions from 2003 to 2009 to monitor and analyze the spatial-temporal distribution and mechanism characteristics of land subsidence in the Beijing plain area. The results show that subsidence bowls have merged together in the Beijing plain area, covering the Chaoyang, Changping, Shunyi and Tongzhou areas, and that the range of subsidence has an eastward trend. The most serious regional subsidence is mainly distributed around the Quaternary depression in the Beijing plain area. The PS-InSAR results also show a new subsidence bowl in Pinggu. Moreover, the spatial and temporal distribution of deformation is controlled mainly by faults such as the Liangxiang-Shunyi fault, the Huangzhuang-Gaoliying fault, and the Nankou-Sunhe fault. Subsidence and groundwater levels in the study area show a good correlation: subsidence exhibits a seasonal upward trend from November to March and a seasonal downward trend from March to June, following changes in groundwater levels. The contribution of land subsidence is also influenced by the stress-strain behavior of the aquitards.
NASA Technical Reports Server (NTRS)
Rundle, John B.
1988-01-01
The idea that earthquakes represent a fluctuation about the long-term motion of plates is expressed mathematically through the fluctuation hypothesis, under which all physical quantities that pertain to the occurrence of earthquakes are required to depend on the difference between the present state of slip on the fault and its long-term average. It is shown that under certain circumstances the model fault dynamics undergo a sudden transition from a spatially ordered, temporally disordered state to a spatially disordered, temporally ordered state, and that the latter states are stable for long intervals of time. For long enough faults, the dynamics are evidently chaotic. The methods developed are then used to construct a detailed model for earthquake dynamics in southern California. The result is a set of slip-time histories for all the major faults, which are similar to data obtained by geological trenching studies. Although there is an element of periodicity to the events, the patterns shift, change and evolve with time. Time scales for pattern evolution seem to be of the order of a thousand years, for average recurrence intervals of about a hundred years.
Cost-effectiveness analysis of risk-reduction measures to reach water safety targets.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof; Pettersson, Thomas J R
2011-01-01
Identifying the most suitable risk-reduction measures in drinking water systems requires a thorough analysis of possible alternatives. In addition to the effects on the risk level, the economic aspects of the risk-reduction alternatives are commonly considered important. Drinking water supplies are complex systems, and to avoid sub-optimisation of risk-reduction measures the entire system from source to tap needs to be considered. There is a lack of methods for quantification of water supply risk reduction in an economic context for entire drinking water systems. The aim of this paper is to present a novel approach that combines risk assessment with economic analysis to evaluate risk-reduction measures based on a source-to-tap approach. The approach combines a probabilistic and dynamic fault tree method with cost-effectiveness analysis (CEA). The developed approach comprises the following main parts: (1) quantification of the risk reduction of alternatives using a probabilistic fault tree model of the entire system; (2) combination of the modelling results with CEA; and (3) evaluation of the alternatives with respect to the risk reduction, the probability of not reaching water safety targets and the cost-effectiveness. The fault tree method and CEA enable comparison of risk-reduction measures in the same quantitative unit and consider costs and uncertainties. The approach provides a structured and thorough analysis of risk-reduction measures that facilitates transparency and long-term planning of drinking water systems in order to avoid sub-optimisation of available resources for risk reduction. Copyright © 2010 Elsevier Ltd. All rights reserved.
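The core of the approach described above, propagating basic-event probabilities through a fault tree and ranking risk-reduction alternatives by cost-effectiveness, can be sketched as follows. This is a minimal illustration assuming a static tree with independent events; the gate structure, probabilities, and costs are invented placeholders, not values from the study (whose fault tree model is probabilistic and dynamic):

```python
# Hedged sketch: a static fault tree with independent basic events,
# combined with a cost-effectiveness ratio per alternative.

def p_or(probs):
    """Probability that at least one independent input event occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def p_and(probs):
    """Probability that all independent input events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Illustrative top event "water safety target not met": it occurs if the
# source barrier fails (OR) or if both treatment steps fail (AND).
def top_event(p_source, p_uv, p_chlorine):
    return p_or([p_source, p_and([p_uv, p_chlorine])])

baseline = top_event(0.02, 0.10, 0.05)

# Hypothetical risk-reduction alternatives: (name, annual cost, new top-event probability)
alternatives = [
    ("improved source protection", 50_000, top_event(0.005, 0.10, 0.05)),
    ("redundant UV unit",          30_000, top_event(0.02, 0.02, 0.05)),
]

for name, cost, p_new in alternatives:
    risk_reduction = baseline - p_new
    cer = cost / risk_reduction  # cost per unit of risk reduced (lower = better)
    print(f"{name}: risk {baseline:.4f} -> {p_new:.4f}, CER = {cer:,.0f}")
```

Ranking by the cost-effectiveness ratio (CER) mirrors step (3) of the approach: an alternative with a smaller CER buys more risk reduction per unit of cost.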
NASA Astrophysics Data System (ADS)
Gu, Hai-Ou; Xiao, Yilin; Santosh, M.; Li, Wang-Ye; Yang, Xiaoyong; Pack, Andreas; Hou, Zhenhui
2013-09-01
The Mesozoic tectonics in East China is characterized by significant lithospheric thinning of the North China Craton, large-scale strike-slip movement along the Tan-Lu fault, and regional magmatism with associated metallogeny. Here we address the possible connections between these three events through a systematic investigation of the geochemistry, zircon geochronology and whole rock oxygen isotopes of the Mesozoic magmatic rocks distributed along the Tan-Lu fault in the Shandong province. The characteristic spatial and temporal distributions of high-Mg adakitic rocks along the Tan-Lu fault with emplacement ages of 134-128 Ma suggest a strong structural control for the emplacement of these intrusions, with magma generation possibly associated with the subduction of the Pacific plate in the early Cretaceous. The low-Mg adakitic rocks (127-120 Ma) in the Su-Lu orogenic belt were formed later than the high-Mg adakitic rocks, whereas in the Dabie orogenic belt, most of the low-Mg adakitic rocks (143-129 Ma) were generated earlier than the high-Mg adakitic rocks. Based on available data, we suggest that the large scale strike-slip tectonics of the Tan-Lu fault in the Mesozoic initiated cratonic destruction at the south-eastern margin of the North China Craton, significantly affecting the lower continental crust within areas near the fault. This process resulted in crustal fragments sinking into the asthenosphere and reacting with peridotites, which increased the Mg# of the adakitic melts, generating the high-Mg adakitic rocks. The gravitationally unstable lower continental crust below the Tan-Lu fault in the Su-Lu orogenic belt triggered larger volume delamination of the lower continental crust or foundering of the root.
NASA Astrophysics Data System (ADS)
Brothers, Daniel Stephen
Five studies along the Pacific-North America (PA-NA) plate boundary offer new insights into continental margin processes, the development of the PA-NA tectonic margin and regional earthquake hazards. This research is based on the collection and analysis of several new marine geophysical and geological datasets. Two studies used seismic CHIRP surveys and sediment coring in Fallen Leaf Lake (FLL) and Lake Tahoe to constrain tectonic and geomorphic processes in the lakes, as well as the slip rate and earthquake history along the West Tahoe-Dollar Point Fault. CHIRP profiles image vertically offset and folded strata that record deformation associated with the most recent event (MRE). Radiocarbon dating of organic material extracted from piston cores constrains the age of the MRE to between 4.1--4.5 k.y. B.P. Offset of Tioga-aged glacial deposits yields a slip rate of 0.4--0.8 mm/yr. An ancillary study in FLL determined that submerged, in situ pine trees dating to between 900-1250 AD are related to a medieval megadrought in the Lake Tahoe Basin. The timing and severity of this event match medieval megadroughts observed in the western United States and in Europe. CHIRP profiles acquired in the Salton Sea, California, provide new insights into the processes that control pull-apart basin development and earthquake hazards along the southernmost San Andreas Fault. Differential subsidence (>10 mm/yr) in the southern sea suggests the existence of northwest-dipping basin-bounding faults near the southern shoreline. In contrast to previous models, the rapid subsidence and fault architecture observed in the southern part of the sea are consistent with experimental models for pull-apart basins. Geophysical surveys imaged more than 15 faults oriented ~N15°E, some of which have produced up to 10 events in the last 2-3 kyr. 
Potentially two of the last five events on the southern San Andreas Fault (SAF) were synchronous with rupture on offshore faults, and ruptures on three offshore faults appear to be synchronous with Colorado River diversions into the basin. The final study used coincident wide-angle seismic refraction and multichannel seismic reflection surveys that spanned the width of the southern Baja California (BC) peninsula. The data provide insight into the spatial and temporal evolution of the capture of the BC microplate by the Pacific Plate. Seismic reflection profiles constrain the upper crustal structure and deformation history along fault zones on the western Baja margin and in the Gulf of California. Stratal divergence in two transtensional basins along the Magdalena Shelf records the onset of extension across the Tosco-Abreojos and Santa Margarita faults. We define an upper bound of 12 Ma on the age of the pre-rift sediments and an age of ~8 Ma for the onset of extension. Tomographic imaging reveals a very heterogeneous upper crust and a narrow, high-velocity zone that extends ~40 km east of the paleotrench and is interpreted to be remnant oceanic crust.
Chronology of volcanism and rift basin propagation - Rungwe volcanic province, East Africa
NASA Technical Reports Server (NTRS)
Ebinger, C. J.; Deino, A. L.; Drake, R. E.; Tesha, A. L.
1989-01-01
The spatial and temporal development of along-axis segmentation in youthful continental rifts was investigated using the results of field, remote sensing, and K-Ar geochronology studies conducted in four (Rukwa, Songwe, Usangu, and Karonga) rift basins within the Rungwe volcanic province in East Africa. Results indicated that the Rukwa and Karonga border fault segments formed between 7.25 and 5 m.y. ago, the Usangu border fault segment developed between 3 and 2 m.y. ago, and subsidence along the Songwe border fault segment had occurred by 0.5 Ma. It is shown that individual basins developed diachronously, each following a similar sequence: (1) initial border fault development; (2) asymmetric basin subsidence/flank uplift and the development of monoclines opposite the border faults; and (3) continued subsidence and tilting along intrabasinal faults with flexural upwarping of the rift flanks, enhancing basinal asymmetries.
CARE3MENU- A CARE III USER FRIENDLY INTERFACE
NASA Technical Reports Server (NTRS)
Pierce, J. L.
1994-01-01
CARE3MENU generates an input file for the CARE III program. CARE III is used for reliability prediction of complex, redundant, fault-tolerant systems including digital computers, aircraft, nuclear and chemical control systems. The CARE III input file often becomes complicated and is not easily formatted with a text editor. CARE3MENU provides an easy, interactive method of creating an input file by automatically formatting a set of user-supplied inputs for the CARE III system. CARE3MENU provides detailed on-line help for most of its screen formats. The reliability model input process is divided into sections using menu-driven screen displays. Each stage, or set of identical modules comprising the model, must be identified and described in terms of number of modules, minimum number of modules for stage operation, and critical fault threshold. The fault handling and fault occurrence models are detailed in several screens by parameters such as transition rates, propagation and detection densities, Weibull or exponential characteristics, and model accuracy. The system fault tree and critical pairs fault tree screens are used to define the governing logic and to identify modules affected by component failures. Additional CARE3MENU screens prompt the user for output options and run time control values such as mission time and truncation values. There are fourteen major screens, many with default values and HELP options. The documentation includes: (1) a user's guide with several examples of CARE III models, the dialog required to input them to CARE3MENU, and the output files created; and (2) a maintenance manual for assistance in changing the HELP files and modifying any of the menu formats or contents. CARE3MENU is written in FORTRAN 77 for interactive execution and has been implemented on a DEC VAX series computer operating under VMS. This program was developed in 1985.
Dissecting the space-time structure of tree-ring datasets using the partial triadic analysis.
Rossi, Jean-Pierre; Nardin, Maxime; Godefroid, Martin; Ruiz-Diaz, Manuela; Sergent, Anne-Sophie; Martinez-Meier, Alejandro; Pâques, Luc; Rozenberg, Philippe
2014-01-01
Tree-ring datasets are used in a variety of circumstances, including archeology, climatology, forest ecology, and wood technology. These data are based on microdensity profiles and consist of a set of tree-ring descriptors, such as ring width or early/latewood density, measured for a set of individual trees. Because successive rings correspond to successive years, the resulting dataset is a ring variables × trees × time datacube. Multivariate statistical analyses, such as principal component analysis, have been widely used for extracting worthwhile information from ring datasets, but they typically address two-way matrices, such as ring variables × trees or ring variables × time. Here, we explore the potential of the partial triadic analysis (PTA), a multivariate method dedicated to the analysis of three-way datasets, to apprehend the space-time structure of tree-ring datasets. We analyzed a set of 11 tree-ring descriptors measured in 149 georeferenced individuals of European larch (Larix decidua Miller) during the period of 1967-2007. The processing of densitometry profiles led to a set of ring descriptors for each tree and for each year from 1967-2007. The resulting three-way data table was subjected to two distinct analyses in order to explore i) the temporal evolution of spatial structures and ii) the spatial structure of temporal dynamics. We report the presence of a spatial structure common to the different years, highlighting the inter-individual variability of the ring descriptors at the stand scale. We found a temporal trajectory common to the trees that could be separated into a high and low frequency signal, corresponding to inter-annual variations possibly related to defoliation events and a long-term trend possibly related to climate change. We conclude that PTA is a powerful tool to unravel and hierarchize the different sources of variation within tree-ring datasets.
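The compromise-building core of a partial triadic analysis can be sketched in a few lines. This is a bare-bones illustration on synthetic data (centred and scaled yearly tables, a cosine-similarity interstructure, PCA of the compromise), not the authors' implementation; the dimensions simply echo the 149 trees, 11 descriptors and 41 years of the study:

```python
import numpy as np

# Hedged sketch of PTA on a years x trees x descriptors datacube.
rng = np.random.default_rng(0)
n_trees, n_desc, n_years = 149, 11, 41
cube = rng.normal(size=(n_years, n_trees, n_desc))  # synthetic stand-in data

# Centre and scale each year's trees x descriptors table
tables = [(X - X.mean(0)) / X.std(0) for X in cube]

# Interstructure: cosine similarity between vectorised yearly tables
V = np.array([X.ravel() for X in tables])
norms = np.linalg.norm(V, axis=1)
S = (V @ V.T) / np.outer(norms, norms)

# Year weights = leading eigenvector of the interstructure matrix
w = np.linalg.eigh(S)[1][:, -1]          # eigh: eigenvalues ascending
w = np.abs(w) / np.abs(w).sum()          # fix sign, normalise to sum 1

# Compromise: weighted mean table, then PCA for the common spatial structure
compromise = sum(wk * Xk for wk, Xk in zip(w, tables))
U, s, Vt = np.linalg.svd(compromise, full_matrices=False)
scores = U[:, :2] * s[:2]                # tree coordinates on the first two axes
print(scores.shape)                      # one row per tree
```

The `scores` matrix corresponds to the structure "common to the different years" discussed above; projecting each yearly table onto the compromise axes would give the intrastructure (the temporal trajectories of the trees).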
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario
2015-04-01
The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components, exploiting the physical knowledge of the SOFC system as a whole. This phase consists of the Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at the system component level. The main outcome of the FTA is an inferential isolation tool (Fault Signature Matrix - FSM), which univocally links the faults to the symptoms detected during system monitoring. In this work the FTA is considered as a starting point to develop an improved FSM. Making use of a model-based investigation, a fault-to-symptoms dependency study is performed. To this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated, one for the stack and four occurring at balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
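The isolation step that an FSM enables can be sketched as follows. The faults, symptoms, and signature entries below are invented placeholders for illustration, not the SOFC fault set of the paper; the point is only that each fault maps to a unique symptom pattern, so matching the observed pattern isolates the fault:

```python
import numpy as np

# Hedged sketch of fault isolation with a Fault Signature Matrix (FSM).
faults = ["stack degradation", "air blower fault", "fuel leak",
          "reformer fault", "heat exchanger fouling"]
symptoms = ["stack voltage low", "air flow low", "fuel utilisation high",
            "outlet temperature high"]

# FSM[i, j] = 1 if fault j is expected to produce symptom i.
# Columns are distinct, so each fault has a unique signature.
FSM = np.array([
    [1, 1, 1, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 1, 1],
])

def isolate(observed):
    """Return the faults whose signature matches the observed symptom vector."""
    obs = np.asarray(observed)
    return [f for j, f in enumerate(faults) if np.array_equal(FSM[:, j], obs)]

# Symptoms detected during monitoring: stack voltage low + air flow low
print(isolate([1, 1, 0, 0]))  # -> ['air blower fault']
```

The symptom thresholds discussed in the abstract would sit upstream of this step: each monitored variable is binarised into a symptom (0/1) only when its deviation exceeds the threshold, which is what makes the isolation robust to small model errors.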
Observations from old forests underestimate climate change effects on tree mortality.
Luo, Yong; Chen, Han Y H
2013-01-01
Understanding climate change-associated tree mortality is central to linking climate change impacts and forest structure and function. However, whether temporal increases in tree mortality are attributed to climate change or stand developmental processes remains uncertain. Furthermore, interpreting the climate change-associated tree mortality estimated from old forests for regional forests rests on an un-tested assumption that the effects of climate change are the same for young and old forests. Here we disentangle the effects of climate change and stand developmental processes on tree mortality. We show that both climate change and forest development processes influence temporal mortality increases, climate change-associated increases are significantly higher in young than old forests, and higher increases in younger forests are a result of their higher sensitivity to regional warming and drought. We anticipate our analysis to be a starting point for more comprehensive examinations of how forest ecosystems might respond to climate change.
Spatial-temporal variation of low-frequency earthquake bursts near Parkfield, California
Wu, Chunquan; Guyer, Robert; Shelly, David R.; Trugman, D.; Frank, William; Gomberg, Joan S.; Johnson, P.
2015-01-01
Tectonic tremor (TT) and low-frequency earthquakes (LFEs) have been found in the deeper crust of various tectonic environments globally in the last decade. The spatial-temporal behaviour of LFEs provides insight into deep fault zone processes. In this study, we examine recurrence times from a 12-yr catalogue of 88 LFE families with ∼730 000 LFEs in the vicinity of the Parkfield section of the San Andreas Fault (SAF) in central California. We apply an automatic burst detection algorithm to the LFE recurrence times to identify the clustering behaviour of LFEs (LFE bursts) in each family. We find that the burst behaviours of the northern and southern LFE groups differ. Generally, the northern group has longer burst durations but fewer LFEs per burst, while the southern group has shorter burst durations but more LFEs per burst. The southern group LFE bursts are generally more correlated than those of the northern group, suggesting more coherent deep fault slip and relatively simpler deep fault structure beneath the locked section of the SAF. We also find that the 2004 Parkfield earthquake clearly increased the number of LFEs per burst and the average burst duration for both the northern and the southern groups, with a relatively larger effect on the northern group. This could be due to the weakness of the northern part of the fault, or the northwesterly rupture direction of the Parkfield earthquake.
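A minimal inter-event-time burst detector of the kind described, grouping consecutive LFEs whose recurrence times fall below a gap threshold, might look like the sketch below. The algorithm and the synthetic event times are illustrative assumptions; the study's automatic detector may differ in its criteria:

```python
# Hedged sketch: group events into bursts by thresholding inter-event times.

def detect_bursts(event_times, max_gap):
    """Split a sorted list of event times into bursts: runs of events
    whose successive gaps are all <= max_gap."""
    bursts, current = [], [event_times[0]]
    for t_prev, t in zip(event_times, event_times[1:]):
        if t - t_prev <= max_gap:
            current.append(t)
        else:
            bursts.append(current)
            current = [t]
    bursts.append(current)
    return bursts

# Synthetic event times in hours; gaps <= 1 h join events into one burst
times = [0.0, 0.2, 0.5, 10.0, 10.1, 10.3, 10.9, 40.0]
bursts = detect_bursts(times, max_gap=1.0)

# Per-burst statistics of the kind compared between the LFE groups
for b in bursts:
    print(f"{len(b)} events, duration {b[-1] - b[0]:.1f} h")
```

Statistics such as the number of LFEs per burst and the burst duration, compared here between the northern and southern families, fall out directly from the detected groups.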
Chad M. Hoffman; Rodman Linn; Russell Parsons; Carolyn Sieg; Judith Winterkamp
2015-01-01
Patches of live, dead, and dying trees resulting from bark beetle-caused mortality alter spatial and temporal variability in the canopy and surface fuel complex through changes in the foliar moisture content of attacked trees and through the redistribution of canopy fuels. The resulting heterogeneous fuels complexes alter within-canopy wind flow, wind fluctuations, and...
Spatial and Temporal Variation of in-situ Stress in and around Active Fault zones in Central Japan
NASA Astrophysics Data System (ADS)
Ikeda, R.; Omura, K.; Matsuda, T.; Iio, Y.
2002-12-01
In the "Active Fault Zone Drilling Project in Japan," we have compared the relationship between the stress concentration state and the heterogeneous strength of earthquake fault zones under different conditions. The Nojima fault, which ruptured the surface in the 1995 Great Kobe earthquake (M=7.2), and the Neodani fault, which ruptured in the 1891 Nobi earthquake (M=8.0), have been drilled through their fault fracture zones. A similar drilling experiment and related research were conducted on the Atera fault, parts of which appear to have been dislocated by the 1586 Tensho earthquake (M=7.9). A deep borehole is a reliable tool for understanding the overall fault structure and its constituent materials directly. Additionally, the stress states in and around the fault fracture zones were obtained from in-situ stress measurements by the hydraulic fracturing method. Important phenomena such as a rapid stress drop in the fault fracture zones were observed in the Neodani well (1,300 m deep) and the Nojima well (1,800 m) of the fault zone drillings, as well as in the Ashio well (2,000 m) in the focal area. In the Atera fault project, we have conducted integrated investigations by surface geophysical survey and drilling around the Atera fault. Four boreholes (400 m to 600 m deep) were located on a line crossing the fracture zone of the Atera fault. We noted that the stress magnitude decreases closer to the center of the fracture zone. Furthermore, the orientation of the maximum horizontal compressive stress was almost opposite to the fault slip direction. These results support the idea that the differential stress is extremely small in narrow zones adjoining fracture zones. We also noted that the frictional strength of the crust adjacent to the faults is high and that the level of shear stress there is principally controlled by the frictional strength of rock. We argue that the stress state observed at these sites can exist only if the faults are quite "weak." 
Regarding the temporal variation of stresses, crustal stress was recorded from 1978 until just before the Kobe earthquake in and around the area where the earthquake later occurred. These data show that the tectonic stress gradually increased prior to the earthquake. After the earthquake, the same boreholes were used again to obtain new measurements, from which we were able to determine that there was a definite drop in the crustal stress in the area and a change in the direction of the principal stresses. Continuous monitoring is essential to estimate the absolute stress magnitudes that initiate earthquakes and control their propagation.
TU-AB-BRD-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunscombe, P.
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program. 
Learning Objectives: Learn how to design a process map for a radiotherapy process. Learn how to perform failure modes and effects analysis for a given process. Learn what fault trees are about. Learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Dunscombe: Director, TreatSafely, LLC, and the Center for the Assessment of Radiological Sciences; consultant to the IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
A 100-year average recurrence interval for the San Andreas fault at Wrightwood, California
Fumal, T.E.; Pezzopane, S.K.; Weldon, R.J.; Schwartz, D.P.
1993-01-01
Evidence for five large earthquakes during the past five centuries along the San Andreas fault zone 70 kilometers northeast of Los Angeles, California, indicates that the average recurrence interval and the temporal variability are significantly smaller than previously thought. Rapid sedimentation during the past 5000 years in a 150-meter-wide structural depression has produced a greater than 21-meter-thick sequence of debris flow and stream deposits interbedded with more than 50 datable peat layers. Fault scarps, colluvial wedges, fissure infills, upward termination of ruptures, and tilted and folded deposits above listric faults provide evidence for large earthquakes that occurred in A.D. 1857, 1812, and about 1700, 1610, and 1470.
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is exclusively based on active faults and geodynamic finite element input models; seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slow-deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and geometric and kinematic parameters, together with estimates of slip rate. By default, in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the study region; in this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and final expected peak ground accelerations. We investigated both the source-model and the earthquake-model uncertainties by varying the main active fault and earthquake rate calculation parameters, constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2°-spaced grid considering 648 branches of the logic tree and the mean value at the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results, and to what extent. 
The results of this comparison show that the deformation model, with its internal variability, together with the choice of the ground motion prediction equations (GMPEs), are the most influential parameters; both have a significant effect on the hazard results. Good knowledge of the existence of active faults and of their geometric and activity characteristics is therefore of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
Tree biology and dendrochemistry
Kevin T. Smith; Walter C. Shortle
1996-01-01
Dendrochemistry, the interpretation of elemental analysis of dated tree rings, can provide a temporal record of environmental change. Using the dendrochemical record requires an understanding of tree biology. In this review, we pose four questions concerning assumptions that underlie recent dendrochemical research: 1) Does the chemical composition of the wood directly...
Temporal slip rate variability in the Lower Rhine Embayment, Northwest Europe
NASA Astrophysics Data System (ADS)
Gold, Ryan; Kuebler, Simon; Friedrich, Anke
2016-04-01
Low strain regions may be characterized by long periods of seismic quiescence, punctuated by periods of clustered earthquake activity. This type of non-periodic recurrence behavior challenges accurate seismic hazard analysis. The Lower Rhine Embayment in the Germany-Belgium-Netherlands border region presents a unique opportunity to characterize the long-term record of faulting and evaluate the periodicity of earthquake occurrence in a low strain region. The Lower Rhine Embayment is covered by a high-resolution record of Quaternary terraces associated with the Rhine and Maas (Meuse) Rivers and their tributaries. These terraces are cut by numerous NW-trending faults and record cumulative displacements that exceed 100 m in numerous locations. In this study, we exploit this rich record of faulted fluvial terraces and find convincing evidence for temporally varying rates of Quaternary fault movement across the Lower Rhine Embayment. First, using the top and base of the Main Terrace, respectively, we document a significant increase in vertical fault slip rates since 700 ka compared to the average slip rate since the start of the Quaternary. Increases in slip rate exceed 500% along many of the faults, including the Swist/Erft, Stockheim, Viersen, Sandgewand, and Kirspenich fault systems. This increase in fault slip rate corresponds to a regional period of increased tectonic uplift of the Rhenish Massif, increased volcanism in the Eifel, and incision of the Rhine River. In a second and related analysis, we synthesize terrace offset and age information from the Feldbiss fault system along the western boundary of the Lower Rhine Embayment, which transects a flight of Quaternary terraces associated with the Maas river. This analysis reveals evidence for secular variation in slip rate. In particular, we identify two periods of higher slip rate (800-400 ka and 130-100 ka) during which the fault slip rate exceeds the longer-term average of 0.04-0.05 mm/yr by as much as a factor of two. 
These results show that in the Lower Rhine Embayment low-strain region, the tempo of strain release (and therefore earthquakes) is non-steady. This variable slip behavior should be incorporated into future efforts to characterize seismic hazard across the region.
A fault is born: The Landers-Mojave earthquake line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nur, A.; Ron, H.
1993-04-01
The epicenter and the southern portion of the 1992 Landers earthquake fell on an approximately N-S earthquake line, defined both by epicentral locations and by the rupture directions of four previous M>5 earthquakes in the Mojave: the 1947 Manix, 1975 Galway Lake, 1979 Homestead Valley, and 1992 Joshua Tree events. Another M 5.2 earthquake epicenter in 1965 fell on this line where it intersects the Calico fault. In contrast, the northern part of the Landers rupture followed the NW-SE trending Camp Rock and parallel faults, exhibiting an apparently unusual rupture kink. The block tectonic model (Ron et al., 1984), combining fault kinematics and mechanics, explains both the alignment of the events and their ruptures (Nur et al., 1986, 1989), as well as the Landers kink (Nur et al., 1992). Accordingly, the now NW-oriented faults have rotated into their present direction away from the direction of maximum shortening, close to becoming locked, whereas a new fault set, optimally oriented relative to the direction of shortening, is developing to accommodate current crustal deformation. The Mojave-Landers line may thus be a new fault in formation. During the transition of faulting from the old, well-developed and weak but poorly oriented faults to the strong but favorably oriented new ones, both can slip simultaneously, giving rise to kinks such as Landers.
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.
2017-12-01
Measurements of the strains and earthquakes induced by fluid extraction from a subsurface reservoir reveal a transient, exponential-like increase in seismicity relative to the volume of fluids extracted. If the frictional strength of these reactivating faults is heterogeneously and randomly distributed, then progressive failures of the weakest fault patches account in a general manner for this initial exponential-like trend. Allowing for the observable elastic and geometric heterogeneity of the reservoir, the spatiotemporal evolution of induced seismicity over 5 years is predictable without significant bias using a statistical physics model of poroelastic reservoir deformations inducing extreme threshold frictional failures of previously inactive faults. This model is used to forecast the temporal and spatial probability density of earthquakes within the Groningen natural gas reservoir, conditional on future gas production plans. Probabilistic seismic hazard and risk assessments based on these forecasts inform the current gas production policy and building strengthening plans.
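The mechanism invoked above, progressive failure of the weakest patches of a heterogeneously distributed strength population as the induced load grows, can be illustrated with a toy simulation. The lognormal strength distribution and the assumption that induced stress grows in proportion to extracted volume are illustrative choices, not the study's calibrated model:

```python
import random

# Hedged sketch: extreme-threshold failures of randomly distributed
# fault-patch strengths produce an exponential-like onset of seismicity.
random.seed(1)
n_patches = 100_000

# Assumed wide lognormal spread of patch strengths (illustrative only)
strengths = [random.lognormvariate(mu=2.0, sigma=0.5) for _ in range(n_patches)]

# Induced stress taken as proportional to cumulative extracted volume
counts = []
for load in [1.0, 2.0, 3.0, 4.0, 5.0]:
    failed = sum(1 for s in strengths if s <= load)
    counts.append(failed)
    print(f"load {load:.1f}: {failed} patches failed")
```

Because each load increment reaches further into the tail of the strength distribution, the cumulative failure count grows by roughly an order of magnitude per step at low loads, which is the exponential-like trend in seismicity relative to extracted volume that the model explains.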
An Application of the Geo-Semantic Micro-services in Seamless Data-Model Integration
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Liu, R.; Hu, Y.; Marini, L.; Peckham, S. D.; Hsu, L.
2016-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus in which fault blocks surround a fault gouge of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774. Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field. Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
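The decision-tree workflow described above can be sketched in miniature: extract a statistical feature from windows of a synthetic AE signal, then fit a depth-1 regression tree (a stump, a toy stand-in for the Random Forest and Gradient Boosted Trees used in the study). All data and names here are illustrative, not taken from the experiment.

```python
import statistics

def window_features(signal, width):
    """Split a continuous AE signal into non-overlapping windows and
    compute a simple statistical feature (variance) for each window."""
    return [statistics.pvariance(signal[i:i + width])
            for i in range(0, len(signal) - width + 1, width)]

def fit_stump(x, y):
    """Depth-1 regression tree: choose the split that minimises the
    summed squared error of the two leaf means."""
    best = None
    for s in sorted(set(x))[1:]:
        left = [yi for xi, yi in zip(x, y) if xi < s]
        right = [yi for xi, yi in zip(x, y) if xi >= s]
        ml, mr = statistics.fmean(left), statistics.fmean(right)
        sse = (sum((yi - ml) ** 2 for yi in left)
               + sum((yi - mr) ** 2 for yi in right))
        if best is None or sse < best[0]:
            best = (sse, s, ml, mr)
    _, split, left_mean, right_mean = best
    return lambda xi: left_mean if xi < split else right_mean

# Quiet windows vs. noisy windows: variance tracks acoustic energy.
signal = [0.0, 0.1, -0.1, 0.2, 1.0, -1.0, 1.2, -0.8]
feats = window_features(signal, 4)
```

Training pairs (window variance, measured shear stress) then drive the stump, mirroring the paper's supervised setup in which AE statistics are mapped to recorded friction.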
Arjan J. H. Meddens; Jeffrey A. Hicke; Lee A. Vierling; Andrew T. Hudak
2013-01-01
Bark beetles cause significant tree mortality in coniferous forests across North America. Mapping beetle-caused tree mortality is therefore important for gauging impacts to forest ecosystems and assessing trends. Remote sensing offers the potential for accurate, repeatable estimates of tree mortality in outbreak areas. With the advancement of multi-temporal disturbance...
The Central Italy Seismic Sequence (2016): Spatial Patterns and Dynamic Fingerprints
NASA Astrophysics Data System (ADS)
Suteanu, Cristian; Liucci, Luisa; Melelli, Laura
2018-01-01
The paper investigates spatio-temporal aspects of the seismic sequence that started in Central Italy (Amatrice, Lazio region) in August 2016, causing hundreds of fatalities and producing major damage to settlements. On one hand, scaling properties of the landscape topography are identified and related to geomorphological processes, supporting the identification of preferential spatial directions in tectonic activity and confirming the role of past tectonic periods and ongoing processes in driving the geomorphological evolution of the area. On the other hand, relations between the spatio-temporal evolution of the sequence and the seismogenic fault systems are studied. The dynamic fingerprints of seismicity are established with the help of events thread analysis (ETA), which characterizes anisotropy in spatio-temporal earthquake patterns. ETA confirms that the direction of the seismogenic normal fault, oriented (N)NW-(S)SE, is characterized by persistent seismic activity. More importantly, it also highlights the role in the stress transfer of the pre-existing compressive structures, Neogenic thrust and transpressive regional fronts trending (N)NE-(S)SW. Both the fractal features of the topographic surface and the dynamic fingerprint of the recent seismic sequence point to the hypothesis of an active interaction between the Quaternary fault systems and the pre-existing compressional structures.
Spatiotemporal analysis of Quaternary normal faults in the Northern Rocky Mountains, USA
NASA Astrophysics Data System (ADS)
Davarpanah, A.; Babaie, H. A.; Reed, P.
2010-12-01
The mid-Tertiary Basin-and-Range extensional tectonic event developed most of the normal faults that bound the ranges in the northern Rocky Mountains within Montana, Wyoming, and Idaho. The interaction of the thermally induced stress field of the Yellowstone hot spot with the existing Basin-and-Range fault blocks, during the last 15 my, has produced a new, spatially and temporally variable system of normal faults in these areas. The orientation and spatial distribution of the trace of these hot-spot induced normal faults, relative to earlier Basin-and-Range faults, have significant implications for the effect of the temporally varying and spatially propagating thermal dome on the growth of new hot spot related normal faults and reactivation of existing Basin-and-Range faults. Digitally enhanced LANDSAT 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 4 and 5 Thematic Mapper (TM) bands, with spatial resolution of 30 m, combined with analytical GIS and geological techniques helped in determining and analyzing the lineaments and traces of the Quaternary, thermally-induced normal faults in the study area. Applying the color composite (CC) image enhancement technique, the combination of bands 3, 2 and 1 of the ETM+ and TM images was chosen as the best statistical choice to create a color composite for lineament identification. The spatiotemporal analysis of the Quaternary normal faults produces significant information on the structural style, timing, spatial variation, spatial density, and frequency of the faults. The seismic Quaternary normal faults, in the whole study area, are divided, based on their age, into four specific sets, which from oldest to youngest include: Quaternary (>1.6 Ma), middle and late Quaternary (>750 ka), latest Quaternary (>15 ka), and the last 150 years. 
A density map for the Quaternary faults reveals that most active faults are near the current Yellowstone National Park (YNP) area, where most seismically active faults of the past 1.6 my are located. The GIS-based autocorrelation method, applied to the trace orientation, length, frequency, and spatial distribution of each age-defined fault set, revealed spatial homogeneity within each set. The results of Moran's I and Geary's C show no spatial autocorrelation between the trend of the fault traces and their location. Our results suggest that while lineaments of similar age define a clustered pattern in each domain, the overall distribution pattern of lineaments of different ages appears to be non-uniform (random). The directional distribution analysis reveals a distinct range of variation for fault traces of different ages (i.e., some displaying elliptical behavior). Among the Quaternary normal fault sets, the youngest lineament set (i.e., the last 150 years) defines the greatest ellipticity (eccentricity) and the least variation in lineament distribution. The frequency rose diagram for the entire set of Quaternary normal faults shows four major modes (around 360°, 330°, 300°, and 270°) and two minor modes (around 235° and 205°).
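The global Moran's I statistic used in the autocorrelation analysis above can be written out directly. This is a minimal sketch: the weight structure here is a simple one-dimensional adjacency chain, not the fault-trace weights used in the study.

```python
def morans_i(values, weights):
    """Global Moran's I:
    I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2,
    where weights is a dict {(i, j): w_ij} over pairs with i != j."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_total = sum(weights.values())
    num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
    den = sum(d * d for d in dev)
    return (n / w_total) * num / den

# Symmetric nearest-neighbour weights on a chain of 6 sites.
chain = {(i, j): 1.0 for i in range(6) for j in range(6) if abs(i - j) == 1}
```

Values clustered along the chain yield positive I, while alternating (dispersed) values yield negative I, which is the clustered-versus-random distinction the abstract draws for fault lineaments.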
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.
Risk Analysis Methods for Deepwater Port Oil Transfer Systems
DOT National Transportation Integrated Search
1976-06-01
This report deals with the risk analysis methodology for oil spills from the oil transfer systems in deepwater ports. Failure mode and effect analysis in combination with fault tree analysis are identified as the methods best suited for the assessment...
NASA Astrophysics Data System (ADS)
Nadeau, R. M.; Traer, M.; Guilhem, A.
2005-12-01
Seismic indicators of fault zone deformation can complement geodetic measurements by providing information on aseismic transient deformation: 1) from deep within the fault zone, 2) on a regional scale, 3) with intermediate temporal resolution (weeks to months), and 4) spanning over two decades (1984 to early 2005), including pre-GPS and pre-InSAR coverage. Along the San Andreas Fault (SAF) in central California, two types of seismic indicators are proving particularly useful for providing information on deep fault zone deformation. The first, characteristically repeating microearthquakes, provides long-term coverage (decades) of the evolution of aseismic fault slip rates at seismogenic depths along a large (~175 km) stretch of the SAF between the rupture zones of the ~M8 1906 San Francisco and 1857 Fort Tejon earthquakes. In Cascadia and Japan, the second type of seismic indicator, nonvolcanic tremor, has shown a remarkable correlation between its activity rates and GPS and tiltmeter measurements of transient deformation in the deep (sub-seismogenic) fault zone. This correlation suggests that tremor rate changes and deep transient deformation are intimately related and that deformation associated with the tremor activity may be stressing the seismogenic zone in both areas. Along the SAF, nonvolcanic tremors have only recently been discovered (i.e., in the Parkfield-Cholame area), and knowledge of their full spatial extent is still relatively limited. Nonetheless, the observed temporal correlation between earthquake and tremor activity in this area is consistent with a model in which sub-seismogenic deformation and seismogenic zone stress changes are closely related. We present observations of deep aseismic transient deformation associated with the 28 September 2004 M6 Parkfield earthquake from both repeating earthquake and nonvolcanic tremor data.
Also presented are updated deep fault slip rate estimates from repeating earthquakes in the San Juan Bautista area, with an assessment of their significance for previously reported quasi-periodic slip rate pulses and small-to-moderate magnitude (> M3.5) earthquake occurrence in the area.
A-Priori Rupture Models for Northern California Type-A Faults
Wills, Chris J.; Weldon, Ray J.; Field, Edward H.
2008-01-01
This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003, referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event or dates of previous earthquakes. As a result, the WGCEP recommends that the faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon, and Concord-Green Valley) be modeled as Type B faults, to be consistent with similarly poorly-known faults statewide.
As a result, the modified segmented models discussed here concern only the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive effort by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-priori models. We have modified the WGCEP-2002 models where necessary to match data that were not available or not used by that WGCEP, and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This appendix presents the minor changes needed to accommodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters, and earthquake probabilities in the WG-2002 report.
NASA Astrophysics Data System (ADS)
Calderoni, G.
2015-12-01
We investigate the variability of Brune stress drop in the normal fault system activated by the Mw 6.1 L'Aquila earthquake in the complex tectonic setting of the central Apennines. We re-analyze the dataset used by Calderoni et al. [2013], augmented by additional earthquakes and additional records from stations at closer distances. We refine the EGF method used by Calderoni et al. [2013], applying more restrictive criteria in the selection of the EGF events and removing outliers based on statistical criteria. We focus on spatio-temporal variations in the Paganica fault before the mainshock. Using 51 earthquakes (9 foreshocks, the mainshock, and 42 aftershocks), we show that, after the Mw 4.1 largest foreshock of 30 March 2009, the Brune stress drop goes down to its lowest values (0.4 MPa). This largest foreshock was indicated as a marker for the onset of temporal variations in the efficiency of fault-zone guided waves (Calderoni et al., 2015) and in other independent seismic parameters such as the b value [Papadopoulos et al., 2010; Sugan et al., 2014] and the P-to-S wave velocity ratio [Di Luccio et al., 2010; Lucente et al., 2010]. The low values of stress drop after the Mw 4.1 foreshock are consistent with the increase of pore pressure invoked by other authors to explain the increase of the Vp/Vs ratio and the decrease of Vs in the damaged fault zone. In contrast, immediate foreshocks occurring a few hours before the mainshock, very close to its nucleation, are characterized by the highest values observed for foreshocks (≈5 MPa). These high stress drop foreshocks are located in the fault patch where a low b value anomaly indicates highly stressed rock before the mainshock rupture [Sugan et al., 2014]. These results add further support to previous observations before major earthquakes suggesting that stress drop variations can provide insight into the preparatory phase of impending earthquakes.
Charles H. (Hobie) Perry; Kevin J. Horn; R. Quinn Thomas; Linda H. Pardo; Erica A.H. Smithwick; Doug Baldwin; Gregory B. Lawrence; Scott W. Bailey; Sabine Braun; Christopher M. Clark; Mark Fenn; Annika Nordin; Jennifer N. Phelan; Paul G. Schaberg; Sam St. Clair; Richard Warby; Shaun Watmough; Steven S. Perakis
2015-01-01
The abundance of temporally and spatially consistent Forest Inventory and Analysis data facilitates hierarchical/multilevel analysis to investigate factors affecting tree growth, scaling from plot-level to continental scales. Herein we use FIA tree and soil inventories in conjunction with various spatial climate and soils data to estimate species-specific responses of...
Langbein, J.O.; Linker, M.F.; McGarr, A.; Slater, L.E.
1982-01-01
Two-color laser ranging measurements during a 15-month period over a geodetic network spanning the San Andreas fault near Palmdale, California, indicate that the crust expands and contracts aseismically in episodes as short as 2 weeks. Shear strain parallel to the fault has accumulated monotonically since November 1980, but at a variable rate. Improvements in measurement precision and temporal resolution over those of previous geodetic studies near Palmdale have resulted in the definition of a time history of crustal deformation that is much more complex than formerly realized. Copyright © 1982 AAAS.
Methodology for Designing Fault-Protection Software
NASA Technical Reports Server (NTRS)
Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin
2006-01-01
A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery, and has been successfully implemented in the Deep Impact spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notions of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion, categorized as no-opinion, acceptable, or unacceptable. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of a Symptom and its mapping to an Alarm (aka Fault). A Local Response is distinguished from an FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized, step-by-step fashion, relegating more system-level responses to later tiers. Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, a MaxRetry gate, hardware availability, hazardous versus ordinary fault status, and many other priority gates. This methodology is systematic and logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a "top-down" fault-tree analysis and a "bottom-up" functional failure-modes-and-effects analysis. Via this process, the mitigation and recovery strategies per Fault Containment Region scope the FP architecture (in width versus depth).
Kivlin, Stephanie N; Hawkes, Christine V
2016-01-01
The high diversity of tree species has traditionally been considered an important controller of belowground processes in tropical rainforests. However, soil water availability and resources are also primary regulators of soil bacteria in many ecosystems. Separating the effects of these biotic and abiotic factors in the tropics is challenging because of their high spatial and temporal heterogeneity. To determine the drivers of tropical soil bacteria, we examined tree species effects using experimental tree monocultures and secondary forests at La Selva Biological Station in Costa Rica. A randomized block design captured spatial variation and we sampled at four dates across two years to assess temporal variation. We measured bacteria richness, phylogenetic diversity, community composition, biomass, and functional potential. All bacteria parameters varied significantly across dates. In addition, bacteria richness and phylogenetic diversity were affected by the interaction of vegetation type and date, whereas bacteria community composition was affected by the interaction of vegetation type and block. Shifts in bacteria community richness and composition were unrelated to shifts in enzyme function, suggesting physiological overlap among taxa. Based on the observed temporal and spatial heterogeneity, our understanding of tropical soil bacteria will benefit from additional work to determine the optimal temporal and spatial scales for sampling. Understanding spatial and temporal variation will facilitate prediction of how tropical soil microbes will respond to future environmental change. PMID:27391450
Breaking down barriers in cooperative fault management: Temporal and functional information displays
NASA Technical Reports Server (NTRS)
Potter, Scott S.; Woods, David D.
1994-01-01
At the highest level, the fundamental question addressed by this research is how to aid human operators engaged in dynamic fault management. In dynamic fault management there is some underlying dynamic process (an engineered or physiological process referred to as the monitored process - MP) whose state changes over time and whose behavior must be monitored and controlled. In these types of applications (dynamic, real-time systems), a vast array of sensor data is available to provide information on the state of the MP. Faults disturb the MP and diagnosis must be performed in parallel with responses to maintain process integrity and to correct the underlying problem. These situations frequently involve time pressure, multiple interacting goals, high consequences of failure, and multiple interleaved tasks.
Fault-tolerant continuous flow systems modelling
NASA Astrophysics Data System (ADS)
Tolbi, B.; Tebbikh, H.; Alla, H.
2017-01-01
This paper presents a structural modelling of faults with hybrid Petri nets (HPNs) for the analysis of a particular class of hybrid dynamic systems: continuous flow systems. HPNs are first used for the behavioural description of continuous flow systems without faults. Then, fault modelling is considered using a structural method, without having to rebuild the model from scratch. A translation method, given in a hierarchical way, derives a hybrid automaton (HA) from an elementary HPN. This translation preserves the behavioural semantics (timed bisimilarity) and reflects the temporal behaviour by giving each model a semantics in terms of timed transition systems. Thus, the approach combines the modelling power of HPNs with the analysis capability of HA. A simple example is used to illustrate the ideas.
Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.
2003-01-01
Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding of the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.
Automated Generation of Fault Management Artifacts from a Simple System Model
NASA Technical Reports Server (NTRS)
Kennedy, Andrew K.; Day, John C.
2013-01-01
Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
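The idea of querying a system model to emit FMEA rows can be illustrated with a toy component graph. The component names, failure modes, and "feeds" relation below are invented for illustration and are not taken from the SMAP model or its SysML representation.

```python
# Toy system model: each component lists its failure modes and the
# single downstream component its output feeds (None = system boundary).
MODEL = {
    "battery": {"modes": ["cell short", "open circuit"], "feeds": "power_bus"},
    "power_bus": {"modes": ["overvoltage"], "feeds": "flight_computer"},
    "flight_computer": {"modes": ["processor halt"], "feeds": None},
}

def downstream(model, component):
    """Follow 'feeds' links to list everything a failure can propagate to."""
    chain = []
    nxt = model[component]["feeds"]
    while nxt is not None:
        chain.append(nxt)
        nxt = model[nxt]["feeds"]
    return chain

def generate_fmea(model):
    """Emit one FMEA row per (component, failure mode) pair, with the
    propagation path queried from the model rather than written by hand."""
    rows = []
    for comp in model:
        for mode in model[comp]["modes"]:
            path = downstream(model, comp)
            rows.append({
                "item": comp,
                "failure_mode": mode,
                "system_effect": " -> ".join(path) if path else "local only",
            })
    return rows
```

The point of the sketch is the same as the paper's: the effects column is derived by traversing model relationships, so completeness depends on the model rather than on an engineer's intuition.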
Naive Bayes Bearing Fault Diagnosis Based on Enhanced Independence of Data
Zhang, Nannan; Wu, Lifeng; Yang, Jing; Guan, Yong
2018-01-01
The bearing is the key component of rotating machinery, and its performance directly determines the reliability and safety of the system. Data-based bearing fault diagnosis has become a research hotspot. Naive Bayes (NB), which is based on an independence presumption, is widely used in fault diagnosis. However, bearing data are not completely independent, which reduces the performance of NB algorithms. In order to solve this problem, we propose an NB bearing fault diagnosis method based on enhanced independence of the data. The method treats the data vector from two aspects: the attribute features and the sample dimension. After this processing, the limitation that the independence hypothesis places on NB classification is reduced. First, we effectively extract the statistical characteristics of the original bearing signal. Then, the Decision Tree algorithm is used to select the important features of the time domain signal, and low-correlation features are retained. Next, the Selective Support Vector Machine (SSVM) is used to prune the dimension data and remove redundant vectors. Finally, we use NB to diagnose the fault with the low-correlation data. The experimental results show that the independence enhancement of the data is effective for bearing fault diagnosis. PMID:29401730
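The Gaussian form of the Naive Bayes classifier at the core of such a method can be sketched in a few lines. This is a minimal stand-alone version; the two-feature "bearing" data below are synthetic and the class labels are invented, not taken from the paper's experiments.

```python
import math
import statistics

def fit_gnb(X, y):
    """Gaussian Naive Bayes: per class, store the prior plus a per-feature
    (mean, stdev) pair, assuming features are conditionally independent."""
    params = {}
    for c in sorted(set(y)):
        rows = [x for x, yi in zip(X, y) if yi == c]
        params[c] = {
            "prior": len(rows) / len(X),
            "stats": [(statistics.fmean(col), statistics.pstdev(col) or 1e-9)
                      for col in zip(*rows)],
        }
    return params

def predict_gnb(params, x):
    """Pick the class maximizing log prior + sum of Gaussian log-likelihoods."""
    def log_post(c):
        p = params[c]
        lp = math.log(p["prior"])
        for xi, (mu, sd) in zip(x, p["stats"]):
            lp += -math.log(sd * math.sqrt(2 * math.pi)) \
                  - (xi - mu) ** 2 / (2 * sd ** 2)
        return lp
    return max(params, key=log_post)

# Synthetic features, e.g. (RMS amplitude, kurtosis) per vibration window.
X = [[0.12, 1.0], [0.10, 1.1], [0.08, 0.9],   # healthy
     [0.90, 5.0], [0.85, 4.9], [0.95, 5.2]]   # inner-race fault
y = ["healthy"] * 3 + ["inner_race"] * 3
```

The independence assumption shows up as the simple sum over features in `log_post`; the paper's contribution is precisely the preprocessing that makes that sum a better approximation for correlated bearing data.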
NASA Astrophysics Data System (ADS)
Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu
2016-01-01
This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS) and an improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of the time series over a range of scales based on fuzzy entropy. Besides, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically perform fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize different categories and severities of rolling bearing faults.
Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T
2018-03-05
Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
NASA Astrophysics Data System (ADS)
Krechowicz, Maria
2017-10-01
Nowadays, one of the characteristic features of the construction industry is the increased complexity of a growing number of projects. Almost every construction project is unique, with its own project-specific purpose, structural complexity, owner's expectations, ground conditions unique to its location, and its own dynamics. Failure costs and costs resulting from unforeseen problems in complex construction projects are very high. Project complexity drivers create many vulnerabilities that threaten the successful completion of a number of projects. This paper discusses the process of effective risk management in complex construction projects in which renewable energy sources were used, using the example of the realization phase of the ENERGIS teaching-laboratory building, from the point of view of DORBUD S.A., its general contractor. The paper suggests a new approach to risk management for complex construction projects in which renewable energy sources are applied. The risk management process was divided into six stages: gathering information; identification of the top critical project risks resulting from project complexity; construction of a fault tree for each top critical risk; logical analysis of the fault tree; quantitative risk assessment applying fuzzy logic; and development of a risk response strategy. A new methodology for the qualitative and quantitative assessment of top critical risks in complex construction projects was developed. Risk assessment was carried out by applying fuzzy fault tree analysis to one top critical risk as an example. Applying fuzzy set theory to the proposed model decreased uncertainty and eliminated the problem of obtaining crisp values for basic-event probabilities, a common difficulty in expert risk assessment, with the objective of giving an exact risk score for each unwanted event's probability.
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation in the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment failures, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed for system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the reliability problem was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator error, physical damage, and design problems. The analytical methods used were minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into the causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
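The minimal cut set calculation mentioned above can be illustrated with a small hedged sketch: assuming independent basic events, the top-event probability is bounded from above by summing the probabilities of the minimal cut sets (the rare-event approximation). The event names and probabilities below are invented for illustration and are not the plant's actual data:

```python
# Hypothetical illustration: upper-bound probability of a fault tree's top
# event from its minimal cut sets, assuming independent basic events.
def cut_set_probability(cut_set, p):
    """Probability that every basic event in one minimal cut set occurs."""
    prob = 1.0
    for event in cut_set:
        prob *= p[event]
    return prob

def top_event_upper_bound(cut_sets, p):
    """Rare-event (sum) upper bound on the top-event probability."""
    return sum(cut_set_probability(cs, p) for cs in cut_sets)

# Invented basic-event probabilities (e.g. operator error, pump failure).
p = {"operator_error": 0.02, "pump_failure": 0.01, "sensor_fault": 0.005}
cut_sets = [{"operator_error"}, {"pump_failure", "sensor_fault"}]
print(top_event_upper_bound(cut_sets, p))  # 0.02 + 0.01*0.005 = 0.02005
```

The single-event cut set dominates the bound here, which mirrors the study's finding that human error alone drives much of the top-event probability.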
Jetter, J J; Forte, R; Rubenstein, R
2001-02-01
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servicing. The number of refrigerant exposures of service technicians was estimated to be 135,000 per year. Exposures of vehicle occupants can occur when refrigerant enters passenger compartments due to sudden leaks in air-conditioning systems, leaks following servicing, or leaks caused by collisions. The total number of exposures of vehicle occupants was estimated to be 3,600 per year. The largest number of exposures of vehicle occupants was estimated for leaks caused by collisions, and the second largest number of exposures was estimated for leaks following servicing. Estimates used in the fault tree analysis were based on a survey of automotive air-conditioning service shops, the best available data from the literature, and the engineering judgement of the authors and expert reviewers from the Society of Automotive Engineers Interior Climate Control Standards Committee. Exposure concentrations and durations were estimated and compared with toxicity data for refrigerants currently used in automotive air conditioners. Uncertainty was high for the estimated numbers of exposures, exposure concentrations, and exposure durations. Uncertainty could be reduced in the future by conducting more extensive surveys, measurements of refrigerant concentrations, and exposure monitoring. Nevertheless, the analysis indicated that the risk of exposure of service technicians and vehicle occupants is significant, and it is recommended that no refrigerant that is substantially more toxic than currently available substitutes be accepted for use in vehicle air-conditioning systems, absent a means of mitigating exposure.
Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof
2009-04-01
Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
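The Monte Carlo approach described above can be sketched in miniature. The following is an illustrative model only, assuming a simple OR gate over three subsystems (source, treatment, distribution) with invented uniform uncertainty ranges standing in for the study's expert-judgement distributions:

```python
# Minimal sketch (assumed structure, invented numbers): Monte Carlo estimate
# of a fault tree's top-event probability with uncertain basic-event
# probabilities, in the spirit of the study's simulations.
import random

def simulate_top_event(n_trials, seed=42):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        # Uncertain basic-event probabilities drawn from uniform ranges,
        # stand-ins for expert-judgement distributions.
        p_source = rng.uniform(0.001, 0.01)   # source water failure
        p_treat = rng.uniform(0.002, 0.02)    # treatment failure
        p_dist = rng.uniform(0.001, 0.005)    # distribution failure
        # OR gate: delivery fails if any subsystem fails this trial.
        if (rng.random() < p_source or rng.random() < p_treat
                or rng.random() < p_dist):
            failures += 1
    return failures / n_trials

estimate = simulate_top_event(100_000)
print(round(estimate, 3))  # roughly 0.02 (mean OR-gate probability)
```

Multiplying such a failure probability by downtime per failure and the number of affected customers gives a Customer Minutes Lost figure of the kind the study uses as its risk measure.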
NASA Astrophysics Data System (ADS)
Philibosian, B.; Meltzner, A. J.; Sieh, K.
2017-12-01
Understanding earthquake cycle processes is key to both seismic hazard and fault mechanics. A concept that has come into focus recently is that rupture segmentation and cyclicity can be complex, and that simple models of periodically repeating similar earthquakes are inadequate. The term "supercycle" has been used to describe repeating longer periods of strain accumulation that involve multiple fault ruptures. However, this term has become broadly applied, lumping together several distinct phenomena that likely have disparate underlying causes. Earthquake recurrence patterns have often been described as "clustered," but this term is also imprecise. It is necessary to develop a terminology framework that consistently and meaningfully describes all types of behavior that are observed. We divide earthquake cycle patterns into four major classes, each having different implications for seismic hazard and fault mechanics: 1) quasi-periodic similar ruptures, 2) temporally clustered similar ruptures, 3) temporally clustered complementary ruptures, also known as rupture cascades, in which neighboring fault patches fail sequentially, and 4) superimposed cycles in which neighboring fault patches have cycles with different recurrence intervals, but may occasionally rupture together. Rupture segmentation is classified as persistent, frequent, or transient depending on how reliably ruptures terminate in a given area. We discuss the paleoseismic and historical evidence currently available for each of these types of behavior on subduction zone megathrust faults worldwide. Due to the unique level of paleoseismic and paleogeodetic detail provided by the coral microatoll technique, the Sumatran Sunda megathrust provides one of the most complete records over multiple seismic cycles. Most subduction zones with sufficient data exhibit examples of persistent and frequent segmentation, with cycle patterns 1, 3, and 4 on different segments. 
Pattern 2 is generally confined to overlap zones between segments. This catalog of seismic cycle observations provides a basis for exploring and modeling root causes of rupture segmentation and cycle behavior. Researchers should expect to discover similar behavior styles on other megathrust faults and perhaps major crustal faults around the world.
Naghibi, Seyed Amir; Pourghasemi, Hamid Reza; Dixon, Barnali
2016-01-01
Groundwater is considered one of the most valuable fresh water resources. The main objective of this study was to produce groundwater spring potential maps in the Koohrang Watershed, Chaharmahal-e-Bakhtiari Province, Iran, using three machine learning models: boosted regression tree (BRT), classification and regression tree (CART), and random forest (RF). Thirteen hydrological-geological-physiographical (HGP) factors that influence locations of springs were considered in this research. These factors include slope degree, slope aspect, altitude, topographic wetness index (TWI), slope length (LS), plan curvature, profile curvature, distance to rivers, distance to faults, lithology, land use, drainage density, and fault density. Subsequently, groundwater spring potential was modeled and mapped using the CART, RF, and BRT algorithms. The predicted results from the three models were validated using the receiver operating characteristic (ROC) curve. Of the 864 springs identified, 605 (≈70 %) locations were used for spring potential mapping, while the remaining 259 (≈30 %) were used for model validation. The area under the curve (AUC) for the BRT model was calculated as 0.8103, and the AUCs for the CART and RF models were 0.7870 and 0.7119, respectively. It was therefore concluded that the BRT model produced the best predictions of spring locations, followed by the CART and RF models. Geospatially integrated BRT, CART, and RF methods proved to be useful in generating the spring potential map (SPM) with reasonable accuracy.
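The ROC/AUC validation used to compare the three models can be illustrated with the rank-based formulation of AUC: the probability that a randomly chosen positive location outscores a randomly chosen negative one. The labels (spring/non-spring) and scores below are invented:

```python
# Hedged sketch: rank-based AUC from model scores at validation locations,
# the kind of comparison used to rank the BRT, CART and RF models.
def auc(labels, scores):
    """Probability that a positive example outranks a negative one."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]          # 1 = spring present, 0 = absent
scores = [0.9, 0.5, 0.4, 0.6, 0.8, 0.2]  # invented model outputs
print(auc(labels, scores))  # 8/9 ≈ 0.889
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect ranking, which is why the study's 0.81 for BRT reads as the strongest of the three models.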
NASA Astrophysics Data System (ADS)
Magee, Craig; McDermott, Kenneth G.; Stevenson, Carl T. E.; Jackson, Christopher A.-L.
2014-05-01
Continental rifting is commonly accommodated by the nucleation of normal faults, slip on pre-existing fault surfaces and/or magmatic intrusion. Because crystallised igneous intrusions are pervasive in many rift basins and are commonly more competent (i.e. higher shear strengths and Young's moduli) than the host rock, it is theoretically plausible that they locally intersect and modify the mechanical properties of pre-existing normal faults. We illustrate the influence that crystallised igneous intrusions may have on fault reactivation using a conceptual model and observations from field and subsurface datasets. Our results show that igneous rocks may initially resist failure, and promote the preferential reactivation of favourably-oriented, pre-existing faults that are not spatially-associated with solidified intrusions. Fault segments situated along strike from laterally restricted fault-intrusion intersections may similarly be reactivated. This spatial and temporal control on strain distribution may generate: (1) supra-intrusion folds in the hanging wall; (2) new dip-slip faults adjacent to the igneous body; or (3) sub-vertical, oblique-slip faults oriented parallel to the extension direction. Importantly, stress accumulation within igneous intrusions may eventually initiate failure and further localise strain. The results of our study have important implications for the structural evolution of sedimentary basins and the subsurface migration of hydrocarbons and mineral-bearing fluids.
Foreshock sequences and short-term earthquake predictability on East Pacific Rise transform faults.
McGuire, Jeffrey J; Boettcher, Margaret S; Jordan, Thomas H
2005-03-24
East Pacific Rise transform faults are characterized by high slip rates (more than ten centimetres a year), predominantly aseismic slip and maximum earthquake magnitudes of about 6.5. Using recordings from a hydroacoustic array deployed by the National Oceanic and Atmospheric Administration, we show here that East Pacific Rise transform faults also have a low number of aftershocks and high foreshock rates compared to continental strike-slip faults. The high ratio of foreshocks to aftershocks implies that such transform-fault seismicity cannot be explained by seismic triggering models in which there is no fundamental distinction between foreshocks, mainshocks and aftershocks. The foreshock sequences on East Pacific Rise transform faults can be used to predict (retrospectively) earthquakes of magnitude 5.4 or greater, in narrow spatial and temporal windows and with a high probability gain. The predictability of such transform earthquakes is consistent with a model in which slow slip transients trigger earthquakes, enrich their low-frequency radiation and accommodate much of the aseismic plate motion.
Hong Su An; David W. MacFarlane; Christopher W. Woodall
2012-01-01
Standing dead trees are an important component of forest ecosystems. However, reliable estimates of standing dead tree population parameters can be difficult to obtain due to their low abundance and spatial and temporal variation. After 1999, the Forest Inventory and Analysis (FIA) Program began collecting data for standing dead trees at the Phase 2 stage of sampling....
Streaks, multiplets, and holes: High-resolution spatio-temporal behavior of Parkfield seismicity
Waldhauser, F.; Ellsworth, W.L.; Schaff, D.P.; Cole, A.
2004-01-01
Double-difference locations of ~8000 earthquakes from 1969-2002 on the Parkfield section of the San Andreas Fault reveal detailed fault structures and seismicity that is, although complex, highly organized in both space and time. Distinctive features of the seismicity include: 1) multiple recurrence of earthquakes of the same size at precisely the same location on the fault (multiplets), implying frictional or geometric controls on their location and size; 2) sub-horizontal alignments of hypocenters along the fault plane (streaks), suggestive of rheological transitions within the fault zone and/or stress concentrations between locked and creeping areas; 3) regions devoid of microearthquakes with typical dimensions of 1-5 km (holes), one of which contains the M6 1966 Parkfield earthquake hypocenter. These features represent long-lived structures that persist through many cycles of individual events. Copyright 2004 by the American Geophysical Union.
Risk management of key issues of FPSO
NASA Astrophysics Data System (ADS)
Sun, Liping; Sun, Hai
2012-12-01
Risk analysis of key systems has become a growing topic of late because of the development of offshore structures. Equipment failures of the offloading system and fire accidents were analyzed based on the features of a floating production, storage and offloading (FPSO) unit. Fault tree analysis (FTA) and failure modes and effects analysis (FMEA) methods were examined based on information already researched in modules of Relex Reliability Studio (RRS). Given the shortage of failure cases and statistical data, equipment failures were also analyzed qualitatively by establishing a fault tree and a Boolean structure function, and risk control measures were examined. Failure modes of fire accidents were classified according to the different areas of fire occurrence during the FMEA process, using risk priority number (RPN) methods to evaluate their severity rank. The qualitative FTA gave basic insight into the failure modes of FPSO offloading, and the fire FMEA gave priorities and suggested processes. The research has practical importance for the security analysis of FPSOs.
Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand
2018-05-09
This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), the probabilities of all the basic events (BEs) must be available when the FTA is drawn. In such cases, expert judgment can be used as an alternative when failure data are scarce. The fuzzy analytical hierarchy process is used as a standard technique to give a specific weight to each expert, and fuzzy set theory is employed for aggregating expert opinions. In this way, the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost and benefit), the importance measurement technique and modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
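The aggregation step can be sketched as follows, assuming expert opinions expressed as triangular fuzzy numbers (a, b, c) and expert weights of the kind the fuzzy AHP would provide; all numbers are invented and the defuzzification rule (centroid) is one common choice among several:

```python
# Minimal sketch under stated assumptions: weighted aggregation of expert
# opinions given as triangular fuzzy numbers, then centroid defuzzification
# to obtain a crisp basic-event probability. Numbers are illustrative only.
def aggregate(opinions, weights):
    """Weighted average of triangular fuzzy numbers, component-wise."""
    total = sum(weights)
    return tuple(
        sum(w * o[i] for o, w in zip(opinions, weights)) / total
        for i in range(3)
    )

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3

opinions = [(0.01, 0.02, 0.04), (0.02, 0.03, 0.05)]  # two experts
weights = [0.6, 0.4]  # e.g. from fuzzy AHP expert weighting
agg = aggregate(opinions, weights)
print(tuple(round(v, 3) for v in agg))   # (0.014, 0.024, 0.044)
print(round(defuzzify(agg), 4))          # 0.0273
```

The crisp value then enters the Boolean algebra of the fault tree like any other basic-event probability.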
NASA Astrophysics Data System (ADS)
Guan, Yifeng; Zhao, Jie; Shi, Tengfei; Zhu, Peipei
2016-09-01
In recent years, China's increased interest in environmental protection has led to the promotion of energy-efficient dual fuel (diesel/natural gas) ships on Chinese inland rivers. Natural gas as a ship fuel may pose dangers of fire and explosion if a gas leak occurs. If explosions or fires occur in the engine rooms of a ship, heavy damage and losses will be incurred. In this paper, a fault tree model is presented that considers both fires and explosions in a dual fuel ship; in this model, fire and explosion in the dual fuel engine rooms are the top events. All the basic events along with the minimum cut sets are obtained through the analysis. The primary factors that affect accidents involving fires and explosions are determined by calculating the degree of structure importance of the basic events. According to these results, corresponding measures are proposed to ensure and improve the safety and reliability of Chinese inland dual fuel ships.
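The degree of structure importance mentioned above can be computed by counting the state vectors in which a basic event is critical, i.e. flipping that event flips the top event. A hedged sketch with an invented three-event fault tree (not the paper's actual tree):

```python
# Illustrative sketch: Birnbaum's degree of structure importance for each
# basic event of a fault tree, from its Boolean structure function.
from itertools import product

def structure_importance(phi, n):
    """phi: Boolean structure function over n basic events (0/1 inputs)."""
    importance = []
    for i in range(n):
        critical = 0
        for x in product([0, 1], repeat=n - 1):
            hi = list(x[:i]) + [1] + list(x[i:])  # event i occurs
            lo = list(x[:i]) + [0] + list(x[i:])  # event i does not
            critical += phi(hi) - phi(lo)         # 1 iff i is critical here
        importance.append(critical / 2 ** (n - 1))
    return importance

# Invented example: top event = event 0 OR (event 1 AND event 2).
phi = lambda x: int(x[0] or (x[1] and x[2]))
print(structure_importance(phi, 3))  # [0.75, 0.25, 0.25]
```

Event 0 sits alone under the OR gate and is therefore critical in far more system states, which is exactly the kind of ranking used to prioritize safety measures.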
Kingman, D M; Field, W E
2005-11-01
Findings reported by researchers at Illinois State University and Purdue University indicated that since 1980, an average of eight individuals per year have become engulfed and died in farm grain bins in the U.S. and Canada and that all these deaths are significant because they are believed to be preventable. During a recent effort to develop intervention strategies and recommendations for an ASAE farm grain bin safety standard, fault tree analysis (FTA) was utilized to identify contributing factors to engulfments in grain stored in on-farm grain bins. FTA diagrams provided a spatial perspective of the circumstances that occurred prior to engulfment incidents, a perspective never before presented in other hazard analyses. The FTA also demonstrated relationships and interrelationships of the contributing factors. FTA is a useful tool that should be applied more often in agricultural incident investigations to assist in the more complete understanding of the problem studied.
Fault tree analysis for data-loss in long-term monitoring networks.
Dirksen, J; ten Veldhuis, J A E; Schilperoort, R P S
2009-01-01
Prevention of data-loss is an important aspect of both the design and the operational phase of monitoring networks, since data-loss can seriously limit the intended information yield. In the literature, limited attention has been paid to the origin of unreliable or doubtful data from monitoring networks. A better understanding of the causes of data-loss points to effective solutions for increasing data yield. This paper introduces FTA as a diagnostic tool to systematically deduce the causes of data-loss in long-term monitoring networks in urban drainage systems. To illustrate the effectiveness of FTA, a fault tree is developed for a monitoring network and FTA is applied to analyze the data yield of a UV/VIS submersible spectrophotometer. Although some of the causes of data-loss cannot be recovered because the historical database of metadata has been updated infrequently, the example shows that FTA is still a powerful tool for analyzing the causes of data-loss and provides useful information on effective data-loss prevention.
Accurate reliability analysis method for quantum-dot cellular automata circuits
NASA Astrophysics Data System (ADS)
Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo
2015-10-01
Probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of the novel field-coupled nanoelectronic device called quantum-dot cellular automata (QCA). It is difficult to get accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices according to different input signals. After that, the binary decision diagram (BDD) is used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly and the crucial components of a circuit can be found precisely based on the importance values (IVs) of components. This method thus contributes to the construction of reliable QCA circuits.
Precursory changes in seismic velocity for the spectrum of earthquake failure modes
Scuderi, M.M.; Marone, C.; Tinti, E.; Di Stefano, G.; Collettini, C.
2016-01-01
Temporal changes in seismic velocity during the earthquake cycle have the potential to illuminate physical processes associated with fault weakening and connections between the range of fault slip behaviors, including slow earthquakes, tremor and low frequency earthquakes. Laboratory and theoretical studies predict changes in seismic velocity prior to earthquake failure; however, tectonic faults fail in a spectrum of modes and little is known about precursors for those modes. Here we show that precursory changes of wave speed occur in laboratory faults for the complete spectrum of failure modes observed for tectonic faults. We systematically altered the stiffness of the loading system to reproduce the transition from slow to fast stick-slip and monitored ultrasonic wave speed during frictional sliding. We find systematic variations of elastic properties during the seismic cycle for both slow and fast earthquakes, indicating similar physical mechanisms during rupture nucleation. Our data show that accelerated fault creep causes reduction of seismic velocity and elastic moduli during the preparatory phase preceding failure, which suggests that real time monitoring of active faults may be a means to detect earthquake precursors. PMID:27597879
Modeling vertebrate diversity in Oregon using satellite imagery
NASA Astrophysics Data System (ADS)
Cablk, Mary Elizabeth
Vertebrate diversity was modeled for the state of Oregon using a parametric approach to regression tree analysis. This exploratory data analysis effectively modeled the non-linear relationships between vertebrate richness and phenology, terrain, and climate. Phenology was derived from time-series NOAA-AVHRR satellite imagery for the year 1992 using two methods: principal component analysis and derivation of EROS Data Center greenness metrics. These two measures of spatial and temporal vegetation condition incorporated the critical temporal element in this analysis. The first three principal components were shown to contain spatial and temporal information about the landscape and discriminated phenologically distinct regions in Oregon. Principal components 2 and 3, 6 greenness metrics, elevation, slope, aspect, annual precipitation, and annual seasonal temperature difference were investigated as correlates to amphibians, birds, all vertebrates, reptiles, and mammals. The variation explained by the regression tree for each taxon was: amphibians (91%), birds (67%), all vertebrates (66%), reptiles (57%), and mammals (55%). Spatial statistics were used to quantify the pattern of each taxon and assess the validity of the resulting predictions from the regression tree models. Regression tree analysis was relatively robust against spatial autocorrelation in the response data, and graphical results indicated the models were well fit to the data.
Bukata, Andrew R; Kyser, T Kurtis
2007-02-15
Increasing anthropogenic pollution from urban centers and fossil fuel combustion can impact the carbon and nitrogen cycles in forests. To assess the impact of twentieth century anthropogenic pollution on the carbon and nitrogen cycles of forested systems, variations in the carbon and nitrogen isotopic compositions of tree-rings were measured. Individual annual growth rings in trees from six sites across Ontario and one in New Brunswick, Canada were used to develop site chronologies of tree-ring delta 15N and delta 13C values. Tree-ring delta 15N values were approximately 0.5 per mil higher than, and correlated with, those of contemporaneous foliar samples from the same tree, but not with delta 15N values of soil samples. Temporal trends in the carbon and nitrogen isotopic compositions of these tree-rings are consistent with increasing anthropogenic influence on both the carbon and nitrogen cycles since 1945. Tree-ring delta 13C values and delta 15N values are correlated at both remote and urban-proximal sites, with delta 15N values decreasing since 1945 and converging on 1 per mil at urban-proximal sites, and decreasing but not converging on a single delta 15N value at remote sites. These results indicate that temporal trends in tree-ring nitrogen and carbon isotopic compositions record the regional extent of pollution.
NASA Astrophysics Data System (ADS)
Walker, R. T.; Fattahi, M.; Mousavi, Z.; Pathier, E.; Sloan, R. A.; Talebian, M.; Thomas, A. L.; Walpersdorf, A.
2014-12-01
The Doruneh left-lateral strike-slip fault of NE Iran has a prominent expression in the landscape, showing that the fault is active in the late Quaternary. Existing estimates of its slip-rate vary, however, which has led to suggestions that it may exhibit temporal changes in activity. Using high-resolution optical satellite imagery we make reconstructions of displacement across four alluvial fans that cross the Doruneh fault, and determine the ages of these fans using luminescence dating, combined with U-series dating of pedogenic carbonates in one case. The four fans, which vary in age from 10-100 kyr, yield estimates of slip rate of ~2-3 mm/yr. We compare the average slip-rate measurements to the rate of accumulation of strain across the Doruneh fault using GPS and InSAR measurements, and find that the slip-rate is likely to have remained constant - within the uncertainty of our measurements - over the last ~100 ka. The slip-rate that we measure is consistent with the E-W left-lateral Doruneh fault accommodating N-S right-lateral faulting by 'bookshelf' faulting, with clockwise rotation about a vertical axis, in a similar manner to the Eastern California Shear Zone.
NASA Astrophysics Data System (ADS)
Lin, X.; Dreger, D.; Ge, H.; Xu, P.; Wu, M.; Chiang, A.; Zhao, G.; Yuan, H.
2018-03-01
Following the mainshock of the 2008 M8 Wenchuan Earthquake, more than 300 ML ≥ 4.0 aftershocks occurred between 12 May 2008 and 8 September 2010. We analyzed the broadband waveforms for these events and found 160 events with sufficient signal-to-noise levels to invert for seismic moment tensors. Considering the length of the activated fault and the distances to the recording stations, four velocity models were employed to account for variability in crustal structure. The moment tensor solutions show considerable variation, with a mixture of mainly reverse and strike-slip mechanisms and a small number of normal and ambiguous events. We analyzed the spatial and temporal distribution of the aftershocks and their mechanism types to characterize the structure and the deformation occurring in the Longmen Shan fold and thrust belt. Our results suggest that the stress field is very complex in the Longmen Shan fault zone. The moment tensors show both spatial segmentation into two major categories, thrust and strike-slip, and a temporal pattern in which the majority of the aftershocks gradually migrated toward thrust-type events. The variability of aftershock mechanisms is a strong indication of significant tectonic release and stress reorganization that activated numerous small faults in the system.
Life stage, not climate change, explains observed tree range shifts.
Máliš, František; Kopecký, Martin; Petřík, Petr; Vladovič, Jozef; Merganič, Ján; Vida, Tomáš
2016-05-01
Ongoing climate change is expected to shift tree species distribution and therefore affect forest biodiversity and ecosystem services. To assess and project tree distributional shifts, researchers may compare the distribution of juvenile and adult trees under the assumption that differences between tree life stages reflect distributional shifts triggered by climate change. However, the distribution of tree life stages could differ within the lifespan of trees, therefore, we hypothesize that currently observed distributional differences could represent shifts over ontogeny as opposed to climatically driven changes. Here, we test this hypothesis with data from 1435 plots resurveyed after more than three decades across the Western Carpathians. We compared seedling, sapling and adult distribution of 12 tree species along elevation, temperature and precipitation gradients. We analyzed (i) temporal shifts between the surveys and (ii) distributional differences between tree life stages within both surveys. Despite climate warming, tree species distribution of any life stage did not shift directionally upward along elevation between the surveys. Temporal elevational shifts were species specific and an order of magnitude lower than differences among tree life stages within the surveys. Our results show that the observed range shifts among tree life stages are more consistent with ontogenetic differences in the species' environmental requirements than with responses to recent climate change. The distribution of seedlings substantially differed from saplings and adults, while the distribution of saplings did not differ from adults, indicating a critical transition between seedling and sapling tree life stages. Future research has to take ontogenetic differences among life stages into account as we found that distributional differences recently observed worldwide may not reflect climate change but rather the different environmental requirements of tree life stages. 
© 2016 John Wiley & Sons Ltd.
Life-stage, not climate change, explains observed tree range shifts
Máliš, František; Kopecký, Martin; Petřík, Petr; Vladovič, Jozef; Merganič, Ján; Vida, Tomáš
2017-01-01
PMID:26725258
Relationships between tree height and carbon isotope discrimination
Nate G. McDowell; Barbara J. Bond; Lee T. Dickman; Michael G. Ryan; David Whitehead
2011-01-01
Understanding how tree size impacts leaf- and crown-level gas exchange is essential to predicting forest yields and carbon and water budgets. The stable carbon isotope ratio of organic matter has been used to examine the relationship of gas exchange to tree size for a host of species because it carries a temporally integrated signature of foliar photosynthesis and...
Juan Guerra-Hernández; Eduardo González-Ferreiro; Vicente Monleon; Sonia Faias; Margarida Tomé; Ramón Díaz-Varela
2017-01-01
High spatial resolution imagery provided by unmanned aerial vehicles (UAVs) can yield accurate and efficient estimation of tree dimensions and canopy structural variables at the local scale. We flew a low-cost, lightweight UAV over an experimental Pinus pinea L. plantation (290 trees distributed over 16 ha with different fertirrigation treatments)...
Smith, Merryn G; Miller, Rebecca E; Arndt, Stefan K; Kasel, Sabine; Bennett, Lauren T
2018-04-01
Non-structural carbohydrates (NSCs) form a fundamental yet poorly quantified carbon pool in trees. Studies of NSC seasonality in forest trees have seldom measured whole-tree NSC stocks and allocation among organs, and are not representative of all tree functional types. Non-structural carbohydrate research has primarily focussed on broadleaf deciduous and coniferous evergreen trees with distinct growing seasons, while broadleaf evergreen trees remain under-studied despite their different growth phenology. We measured whole-tree NSC allocation and temporal variation in Eucalyptus obliqua L'Hér., a broadleaf evergreen tree species typically occurring in mixed-age temperate forests, which has year-round growth and the capacity to resprout after fire. Our overarching objective was to improve the empirical basis for understanding the functional importance of NSC allocation and stock changes at the tree- and organ-level in this tree functional type. Starch was the principal storage carbohydrate and was primarily stored in the stem and roots of young (14-year-old) trees rather than the lignotuber, which did not appear to be a specialized starch storage organ. Whole-tree NSC stocks were depleted during spring and summer due to significant decreases in starch mass in the roots and stem, seemingly to support root and crown growth but potentially exacerbated by water stress in summer. Seasonality of stem NSCs differed between young and mature trees, and was not synchronized with stem basal area increments in mature trees. Our results suggest that the relative magnitude of seasonal NSC stock changes could vary with tree growth stage, and that the main drivers of NSC fluctuations in broadleaf evergreen trees in temperate biomes could be periodic disturbances such as summer drought and fire, rather than growth phenology. 
These results have implications for understanding post-fire tree recovery via resprouting, and for incorporating NSC pools into carbon models of mixed-age forests.
Seismicity near Palmdale, California, and its relation to strain changes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sauber, J.; McNally, K.; Pechmann, J.C.
We evaluate the relationships between the spatio-temporal patterns and faulting mechanisms of small earthquakes and the recent temporal changes in horizontal strain observed along the 'big bend' portion of the San Andreas fault near Palmdale, California. Microearthquake activity along the entire big bend of the San Andreas fault increased in November 1976, concurrent with the initiation of an earthquake swarm at Juniper Hills. This activity then decreased abruptly to the northwest and southeast of Juniper Hills during the beginning of 1979. This drop in seismic activity occurred around the time that crustal dilatation was observed on the U.S. Geological Survey Palmdale trilateration network. Focal mechanisms from the study region are predominantly thrust. There are two time periods when the mechanisms are closer to strike slip than to thrust. The first period (December 1976 to February 1977) corresponds to the beginning of the Juniper Hills swarm. The second period (November 1978 to April 1979) approximately coincides with a change in trend of the strain data from uniaxial N-S compression to dilatation.
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2012 CFR
2012-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2014 CFR
2014-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix D to Part 236 - Independent Review of Verification and Validation
Code of Federal Regulations, 2010 CFR
2010-10-01
... standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure Mode and Effects... for each product vulnerability cited by the reviewer; (4) Identification of any documentation or... not properly followed; (6) Identification of the software verification and validation procedures, as...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
Toward a Model-Based Approach for Flight System Fault Protection
NASA Technical Reports Server (NTRS)
Day, John; Meakin, Peter; Murray, Alex
2012-01-01
Use SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. Use the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the FP domain; this extends the UML/SysML languages to contain our FP concepts. Use UML/SysML, along with our profile, to capture FP concepts and relationships in the model. Generate typical FP engineering products (the FMECA, Fault Tree, MRD, V&V Matrices).
Conversion of Questionnaire Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
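The conversion and propagation described in this record can be sketched as follows. The rating-to-probability mapping and the small fault tree below are hypothetical illustrations (not the NUREG/CR-1278 values), assuming independent basic events:

```python
# Hypothetical log-scale mapping from adjectival rating to basic-event
# failure probability; real values would come from handbooks such as
# NUREG/CR-1278, not from this illustrative table.
RATING_TO_P = {
    "perfect": 1e-5,
    "well": 1e-4,
    "adequate": 1e-3,
    "needs improvement": 1e-2,
    "not performed": 1.0,   # task in a state of failure
}

def p_or(probs):
    """OR gate: the event occurs if any independent input event occurs."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(probs):
    """AND gate: the event occurs only if all independent input events occur."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Toy MC&A fault tree: the top event occurs if accounting fails OR if
# both physical-control tasks fail together.
accounting = RATING_TO_P["adequate"]
control = p_and([RATING_TO_P["needs improvement"], RATING_TO_P["well"]])
top = p_or([accounting, control])
print(f"top event probability = {top:.6g}")
```

The gate formulas are the standard ones for independent basic events; a real PRA would also handle common-cause failures and minimal cut sets.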
Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from the disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% of disk-space savings when compared with the existing techniques, while the isosurface extraction time is nearly optimal.
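The core idea, coalescing each cell's per-timestep (min, max) extremes over a span of timesteps when the temporal variation is small, can be sketched as a simplified binary temporal tree. The tolerance criterion and node layout below are illustrative assumptions, not the paper's exact structure:

```python
# Each cell has per-timestep (min, max) scalar extremes. A node covering a
# span of timesteps stores a cell once, with its coalesced interval, if the
# interval does not grow by more than `tol` relative to the widest single
# timestep; otherwise the cell is pushed down to children with shorter spans.

def coalesce(intervals):
    """Union of (min, max) intervals over a time span."""
    return (min(lo for lo, _ in intervals), max(hi for _, hi in intervals))

def build(cells, t0, t1, tol):
    """cells: {cell_id: [(min, max) per timestep]}. Returns a tree node."""
    node = {"span": (t0, t1), "cells": {}, "children": []}
    defer = {}
    for cid, per_t in cells.items():
        span_iv = coalesce(per_t[t0:t1 + 1])
        width0 = max(hi - lo for lo, hi in per_t[t0:t1 + 1])
        if (span_iv[1] - span_iv[0]) - width0 <= tol or t0 == t1:
            node["cells"][cid] = span_iv        # coherent enough: store once
        else:
            defer[cid] = per_t                  # too much variation: split span
    if defer:
        mid = (t0 + t1) // 2
        node["children"] = [build(defer, t0, mid, tol),
                            build(defer, mid + 1, t1, tol)]
    return node

def candidates(node, t, isovalue):
    """Cells whose stored interval at time t may contain the isovalue."""
    out = {cid for cid, (lo, hi) in node["cells"].items() if lo <= isovalue <= hi}
    for child in node["children"]:
        if child["span"][0] <= t <= child["span"][1]:
            out |= candidates(child, t, isovalue)
    return out

# Cell "a" is temporally coherent; cell "b" oscillates and gets pushed down.
tree = build({"a": [(0, 1)] * 4,
              "b": [(0, 1), (5, 6), (0, 1), (5, 6)]}, 0, 3, tol=0.5)
print(candidates(tree, 1, 5.5))   # only cell "b" can contain 5.5 at t=1
```

Storage savings come from the coherent cells: "a" is stored once at the root instead of once per timestep, while queries still descend only the branch covering the requested timestep.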
Qualitative and temporal reasoning in engine behavior analysis
NASA Technical Reports Server (NTRS)
Dietz, W. E.; Stamps, M. E.; Ali, M.
1987-01-01
Numerical simulation models, engine experts, and experimental data are used to generate qualitative and temporal representations of abnormal engine behavior. Engine parameters monitored during operation are used to generate qualitative and temporal representations of actual engine behavior. Similarities between the representations of failure scenarios and the actual engine behavior are used to diagnose fault conditions which have already occurred, or are about to occur; to increase the surveillance by the monitoring system of relevant engine parameters; and to predict likely future engine behavior.
Preliminary Isostatic Gravity Map of Joshua Tree National Park and Vicinity, Southern California
Langenheim, V.E.; Biehler, Shawn; McPhee, D.K.; McCabe, C.A.; Watt, J.T.; Anderson, M.L.; Chuchel, B.A.; Stoffer, P.
2007-01-01
This isostatic residual gravity map is part of an effort to map the three-dimensional distribution of rocks in Joshua Tree National Park, southern California. This map will serve as a basis for modeling the shape of basins beneath the Park and in adjacent valleys and also for determining the location and geometry of faults within the area. Local spatial variations in the Earth's gravity field, after accounting for variations caused by elevation, terrain, and deep crustal structure, reflect the distribution of densities in the mid- to upper crust. Densities often can be related to rock type, and abrupt spatial changes in density commonly mark lithologic or structural boundaries. High-density basement rocks exposed within the Eastern Transverse Ranges include crystalline rocks that range in age from Proterozoic to Mesozoic and these rocks are generally present in the mountainous areas of the quadrangle. Alluvial sediments, usually located in the valleys, and Tertiary sedimentary rocks are characterized by low densities. However, with increasing depth of burial and age, the densities of these rocks may become indistinguishable from those of basement rocks. Tertiary volcanic rocks are characterized by a wide range of densities, but, on average, are less dense than the pre-Cenozoic basement rocks. Basalt within the Park is as dense as crystalline basement, but is generally thin (less than 100 m thick; e.g., Powell, 2003). Isostatic residual gravity values within the map area range from about 44 mGal over Coachella Valley to about 8 mGal between the Mecca Hills and the Orocopia Mountains. Steep linear gravity gradients are coincident with the traces of several Quaternary strike-slip faults, most notably along the San Andreas Fault bounding the east side of Coachella Valley and east-west-striking, left-lateral faults, such as the Pinto Mountain, Blue Cut, and Chiriaco Faults (Fig. 1). 
Gravity gradients also define concealed basin-bounding faults, such as those beneath the Chuckwalla Valley (e.g. Rotstein and others, 1976). These gradients result from juxtaposing dense basement rocks against thick Cenozoic sedimentary rocks.
Sharp, R.V.
1989-01-01
The M6.2 Elmore Desert Ranch earthquake of 24 November 1987 was associated spatially and probably temporally with left-lateral surface rupture on many northeast-trending faults in and near the Superstition Hills in western Imperial Valley. Three curving discontinuous principal zones of rupture among these breaks extended northeastward from near the Superstition Hills fault zone as far as 9km; the maximum observed surface slip, 12.5cm, was on the northern of the three, the Elmore Ranch fault, at a point near the epicenter. Twelve hours after the Elmore Ranch earthquake, the M6.6 Superstition Hills earthquake occurred near the northwest end of the right-lateral Superstition Hills fault zone. We measured displacements over 339 days at as many as 296 sites along the Superstition Hills fault zone, and repeated measurements at 49 sites provided sufficient data to fit with a simple power law. The overall distributions of right-lateral displacement at 1 day and the estimated final slip are nearly symmetrical about the midpoint of the surface rupture. The average estimated final right-lateral slip for the Superstition Hills fault zone is ~54cm. The average left-lateral slip for the conjugate faults trending northeastward is ~23cm. The southernmost ruptured member of the Superstition Hills fault zone, newly named the Wienert fault, extends the known length of the zone by about 4km. -from Authors
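The simple power-law fit of the repeated afterslip measurements mentioned above can be done by linear least squares in log space. The data and the amplitude/exponent below are synthetic illustrations, not the study's measured values:

```python
import math

def fit_power_law(t, d):
    """Fit d = a * t**b by least squares on log(d) = log(a) + b*log(t)."""
    n = len(t)
    x = [math.log(ti) for ti in t]
    y = [math.log(di) for di in d]
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic afterslip record: days since the earthquake vs cumulative slip (cm),
# generated noise-free from an assumed power law for illustration.
days = [1, 7, 30, 90, 180, 339]
slip = [12.0 * t ** 0.35 for t in days]
a, b = fit_power_law(days, slip)
print(a, b)   # recovers a ≈ 12.0, b ≈ 0.35
```

With real survey data the log-space fit would be weighted by measurement uncertainty, and the estimated final slip follows from extrapolating the fitted curve.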
NASA Astrophysics Data System (ADS)
Matzka, J.; Maher, B. A.
We report here the novel use of rapid and non-destructive magnetic measurements to investigate the spatial and temporal pattern of urban dust loadings on leaves of roadside trees. More than 600 leaves were collected from birch trees and their remanent magnetization (IRM 300 mT) determined and normalized for the leaf area. The results show that this normalised 2-D magnetization is dominantly controlled by the tree's distance to the road. The magnetic analyses enabled detailed mapping of the spatial and temporal variations of vehicle-derived particulates. Higher 2-D magnetizations, indicating higher magnetic dust loadings, were measured for leaves collected adjacent to uphill road sections than for those next to downhill sections. This suggests that vehicle emissions, rather than friction wear or resuspended road dust, are the major source of the roadside magnetic particles. Additional magnetic analyses suggest that the particle size of the magnetic grains dominantly falls in the range classified for airborne particulate matter as PM2.5 (<2.5 μm), a particle size hazardous to health due to its capacity to be respired deeply into the lungs. Thus, the leaf magnetizations relate directly to release into the atmosphere of harmful vehicle combustion products. For leaves from individual trees, magnetization values fall significantly from high values proximal to the roadside to lower values at the distal side, confirming the ability of trees to reduce aerosol concentrations in the atmosphere. Magnetic analysis of leaves over days and weeks shows that rainfall produces a net decrease in the leaf magnetic loadings.
Tree-, stand- and site-specific controls on landscape-scale patterns of transpiration
NASA Astrophysics Data System (ADS)
Kathrin Hassler, Sibylle; Weiler, Markus; Blume, Theresa
2018-01-01
Transpiration is a key process in the hydrological cycle, and a sound understanding and quantification of transpiration and its spatial variability is essential for management decisions as well as for improving the parameterisation and evaluation of hydrological and soil-vegetation-atmosphere transfer models. For individual trees, transpiration is commonly estimated by measuring sap flow. Besides evaporative demand and water availability, tree-specific characteristics such as species, size or social status control sap flow amounts of individual trees. Within forest stands, properties such as species composition, basal area or stand density additionally affect sap flow, for example via competition mechanisms. Finally, sap flow patterns might also be influenced by landscape-scale characteristics such as geology and soils, slope position or aspect because they affect water and energy availability; however, little is known about the dynamic interplay of these controls. We studied the relative importance of various tree-, stand- and site-specific characteristics with multiple linear regression models to explain the variability of sap velocity measurements in 61 beech and oak trees, located at 24 sites across a 290 km² catchment in Luxembourg. For each of 132 consecutive days of the growing season of 2014 we modelled the daily sap velocity and derived sap flow patterns of these 61 trees, and we determined the importance of the different controls. Results indicate that a combination of mainly tree- and site-specific factors controls sap velocity patterns in the landscape, namely tree species, tree diameter, geology and aspect. For sap flow we included only the stand- and site-specific predictors in the models to ensure variable independence. Of those, geology and aspect were most important. Compared to these predictors, spatial variability of atmospheric demand and soil moisture explains only a small fraction of the variability in the daily datasets.
However, the temporal dynamics of the explanatory power of the tree-specific characteristics, especially species, are correlated to the temporal dynamics of potential evaporation. We conclude that transpiration estimates on the landscape scale would benefit from not only consideration of hydro-meteorological drivers, but also tree, stand and site characteristics in order to improve the spatial and temporal representation of transpiration for hydrological and soil-vegetation-atmosphere transfer models.
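The modelling step described above, a daily multiple linear regression of sap velocity on tree- and site-specific predictors, can be sketched roughly as follows. The data are synthetic and the predictor coding (dummy-coded species, geology, aspect) is an assumption for illustration, not the study's actual dataset:

```python
import numpy as np

# Synthetic stand-in for one day of the dataset: one row per tree, with
# dummy-coded tree- and site-specific predictors. Coefficients and units
# are illustrative only.
rng = np.random.default_rng(42)
n = 61
X = np.column_stack([
    np.ones(n),                 # intercept
    rng.integers(0, 2, n),      # species: 0 = beech, 1 = oak
    rng.uniform(10, 80, n),     # tree diameter (cm)
    rng.integers(0, 2, n),      # geology class (dummy-coded)
    rng.integers(0, 2, n),      # aspect: 1 = south-facing
]).astype(float)
true_beta = np.array([5.0, -1.2, 0.05, 2.0, 0.8])
y = X @ true_beta + rng.normal(0, 0.5, n)   # daily sap velocity (cm/h)

# Ordinary least squares fit for one day; the study repeats this per day
# and tracks each predictor's explanatory power through the season.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(beta.round(2), round(r2, 2))
```

Repeating the fit for each of the 132 days and recording the coefficients and partial R² per predictor would reproduce the kind of temporal-importance analysis the abstract describes.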
Kinematics of polygonal fault systems: observations from the northern North Sea
NASA Astrophysics Data System (ADS)
Wrona, Thilo; Magee, Craig; Jackson, Christopher A.-L.; Huuse, Mads; Taylor, Kevin G.
2017-12-01
Layer-bound, low-displacement normal faults, arranged into a broadly polygonal pattern, are common in many sedimentary basins. Despite having constrained their gross geometry, we have a relatively poor understanding of the processes controlling the nucleation and growth (i.e. the kinematics) of polygonal fault systems. In this study we use high-resolution 3-D seismic reflection and borehole data from the northern North Sea to undertake a detailed kinematic analysis of faults forming part of a seismically well-imaged polygonal fault system hosted within the up to 1000 m thick, Early Palaeocene-to-Middle Miocene mudstones of the Hordaland Group. Growth strata and displacement-depth profiles indicate faulting commenced during the Eocene to early Oligocene, with reactivation possibly occurring in the late Oligocene to middle Miocene. Mapping the position of displacement maxima on 137 polygonal faults suggests that the majority (64%) nucleated in the lower 500 m of the Hordaland Group. The uniform distribution of polygonal fault strikes in the area indicates that nucleation and growth were not driven by gravity or far-field tectonic extension as has previously been suggested. Instead, fault growth was likely facilitated by low coefficients of residual friction on existing slip surfaces, and probably involved significant layer-parallel contraction (strains of 0.01-0.19) of the host strata. To summarize, our kinematic analysis provides new insights into the spatial and temporal evolution of polygonal fault systems.
NASA Astrophysics Data System (ADS)
West, D. P., Jr.; Hussey, A. M., II
2015-12-01
It has long been recognized that Paleozoic stratified rocks in some regions of central New England are dominated by relatively flat structural features (e.g., recumbent folds, shallow dipping foliation) while other areas are dominated by near vertical upright structures. The northern Casco Bay region of coastal Maine (Brunswick 7.5' quadrangle and adjacent areas) provides an excellent venue for studying the relationships between these two structural regimes as they are in close proximity due to juxtaposition by high angle faulting associated with the Norumbega fault system. Stratified rocks exposed west of the Flying Point fault in northern Casco Bay are dominated by moderately east dipping foliation (ave. = 025°, 37°), moderate northeast plunging mineral lineations, and recumbent to gently inclined minor folds. In stark contrast, immediately east of the Flying Point fault, stratified rocks are dominated by steep east dipping foliation (ave. = 014°, 73°), subhorizontal mineral lineations, and upright to steeply inclined minor folds. The structural differences correspond directly to differences in the thermal histories preserved in these rocks as revealed by earlier thermochronological studies. Rocks in the zone of upright structures east of the Flying Point fault were last subjected to high grade metamorphic conditions and granitic plutonism in the Late Devonian and were relatively cold (<300 °C) by Late Carboniferous time. In contrast, flat lying rocks west of the Flying Point fault were over 500 °C in the Early Permian and Permian pegmatites are common. Geochronological studies north of the study area have revealed that the two distinctly different structural styles are not the product of strain partitioning during the same deformational episode, but rather they represent two temporally and kinematically distinct deformational events.
Swanson (1999) originally suggested that the flat structures west of the Flying Point fault are consistent with an episode of northwest-directed thrusting, and our findings are consistent with this interpretation. However, this flat phase of deformation significantly post-dates the older upright structures preserved to the east, and thus models for the structural evolution of the region must integrate both the kinematic and temporal differences in this deformation.
Periodical cicadas use light for oviposition site selection.
Yang, Louie H
2006-12-07
Organisms use incomplete information from local experience to assess the suitability of potential habitat sites over a wide range of spatial and temporal scales. Although ecologists have long recognized the importance of spatial scales in habitat selection, few studies have investigated the temporal scales of habitat selection. In particular, cues in the immediate environment may commonly provide indirect information about future habitat quality. In periodical cicadas (Magicicada spp.), oviposition site selection represents a very long-term habitat choice. Adult female cicadas insert eggs into tree branches during a few weeks in the summer of emergence, but their oviposition choices determine the underground habitats of root-feeding nymphs over the following 13 or 17 years. Here, field experiments are used to show that female cicadas use the local light environment of host trees during the summer of emergence to select long-term host trees. Light environments may also influence oviposition microsite selection within hosts, suggesting a potential behavioural mechanism for associating solar cues with host trees. In contrast, experimental nutrient enrichment of host trees did not influence cicada oviposition densities. These findings suggest that the light environments around host trees may provide a robust predictor of host tree quality in the near future. This habitat selection may influence the spatial distribution of several cicada-mediated ecological processes in eastern North American forests.
Periodical cicadas use light for oviposition site selection
Yang, Louie H
2006-01-01
PMID:17015354
Quality-based Multimodal Classification Using Tree-Structured Sparsity
2014-03-08
Bahrampour, Soheil (Pennsylvania State University, soheil@psu.edu); Ray, Asok (Pennsylvania State University, axr2@psu.edu); Nasrabadi, Nasser M. (Army Research Laboratory)
Assessing Institutional Ineffectiveness: A Strategy for Improvement.
ERIC Educational Resources Information Center
Cameron, Kim S.
1984-01-01
Based on the theory that institutional change and improvement are motivated more by knowledge of problems than by knowledge of successes, a fault tree analysis technique using Boolean logic for assessing institutional ineffectiveness by determining weaknesses in the system is presented. Advantages and disadvantages of focusing on weakness rather…
Low-Temperature Thermochronology for Unraveling Thermal Processes and Dating of Fault Zones
NASA Astrophysics Data System (ADS)
Tagami, T.
2016-12-01
Thermal signatures as well as timing of fault motions can be constrained by thermochronological analyses of fault-zone rocks (e.g., Tagami, 2012). Fault-zone materials suitable for such analyses are produced by tectonic and geochemical processes, such as (1) mechanical fragmentation of host rocks, grain-size reduction of fragments and recrystallization of grains to form mica and clay minerals, (2) secondary heating/melting of host rocks by frictional fault motions, and (3) mineral vein formation as a consequence of fluid advection associated with fault motions. The geothermal structure of fault zones is primarily controlled by the following three factors: (a) the regional geothermal structure around the fault zone, which reflects the background thermo-tectonic history of the studied province, (b) frictional heating of wall rocks by fault motions and resultant heat transfer into surrounding rocks, and (c) thermal influences of hot fluid advection in and around the fault zone. Thermochronological methods widely applied in fault zones are the K-Ar (40Ar/39Ar), fission-track (FT), and U-Th methods. In addition, OSL, TL, ESR, and (U-Th)/He methods are applied in some fault zones in order to extract temporal information related to low-temperature and/or very recent fault activities. Here I briefly review the thermal sensitivity of individual thermochronological systems, which basically controls the response of each method to faulting processes. Then, the thermal sensitivity of FTs is highlighted, with a particular focus on the thermal processes characteristic of fault zones, i.e., flash and hydrothermal heating. On this basis, representative examples as well as key issues, including sampling strategy, are presented for thermochronologic analysis of fault-zone materials, such as fault gouges, pseudotachylytes, and mylonites, along with their geological, geomorphological, and seismological implications.
Finally, thermochronologic analyses of the Nojima fault are reviewed as an example of a multidisciplinary investigation of an active seismogenic fault system. References: T. Tagami, 2012. Thermochronological investigation of fault zones. Tectonophys., 538-540, 67-85, doi:10.1016/j.tecto.2012.01.032.
Horton, J. Wright; Shah, Anjana K.; McNamara, Daniel E.; Snyder, Stephen L.; Carter, Aina M
2015-01-01
Deployment of temporary seismic stations after the 2011 Mineral, Virginia (USA), earthquake produced a well-recorded aftershock sequence. The majority of aftershocks are in a tabular cluster that delineates the previously unknown Quail fault zone. Quail fault zone aftershocks range from ~3 to 8 km in depth and are in a 1-km-thick zone striking ~036° and dipping ~50°SE, consistent with a 028°, 50°SE main-shock nodal plane having mostly reverse slip. This cluster extends ~10 km along strike. The Quail fault zone projects to the surface in gneiss of the Ordovician Chopawamsic Formation just southeast of the Ordovician–Silurian Ellisville Granodiorite pluton tail. The following three clusters of shallow (<3 km) aftershocks illuminate other faults. (1) An elongate cluster of early aftershocks, ~10 km east of the Quail fault zone, extends 8 km from Fredericks Hall, strikes ~035°–039°, and appears to be roughly vertical. The Fredericks Hall fault may be a strand or splay of the older Lakeside fault zone, which to the south spans a width of several kilometers. (2) A cluster of later aftershocks ~3 km northeast of Cuckoo delineates a fault near the eastern contact of the Ordovician Quantico Formation. (3) An elongate cluster of late aftershocks ~1 km northwest of the Quail fault zone aftershock cluster delineates the northwest fault (described herein), which is temporally distinct, dips more steeply, and has a more northeastward strike. Some aftershock-illuminated faults coincide with preexisting units or structures evident from radiometric anomalies, suggesting tectonic inheritance or reactivation.
Managing Risk to Ensure a Successful Cassini/Huygens Saturn Orbit Insertion (SOI)
NASA Technical Reports Server (NTRS)
Witkowski, Mona M.; Huh, Shin M.; Burt, John B.; Webster, Julie L.
2004-01-01
I. Design: a) S/C designed to be largely single-fault tolerant; b) Operate in flight-demonstrated envelope, with margin; c) Strict compliance with requirements & flight rules. II. Test: a) Baseline, fault & stress testing using flight system testbeds (H/W & S/W); b) In-flight checkout & demos to remove first-time events. III. Failure Analysis: a) Critical-event-driven fault tree analysis; b) Risk mitigation & development of contingencies. IV. Residual Risks: a) Accepted pre-launch waivers to single-point failures; b) Unavoidable risks (e.g., natural disaster). V. Mission Assurance: a) Strict process for characterization of variances (ISAs, PFRs & Waivers); b) Full-time Mission Assurance Manager reports to Program Manager: 1) Independent assessment of compliance with institutional standards; 2) Oversight & risk assessment of ISAs, PFRs & Waivers; 3) Risk Management Process facilitator.
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Mustafa Sacit; Flanagan, George F.
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
NASA Astrophysics Data System (ADS)
Whitetree, A.; Van Stan, J. T., II; Wagner, S.; Guillemette, F.; Lewis, J.; Silva, L.; Stubbins, A.
2017-12-01
Studies on the fate and transport of dissolved organic matter (DOM) along the rainfall-to-discharge flow pathway typically begin in streams or soils, neglecting the initial enrichment of rainfall with DOM during contact with plant canopies. However, rain water can gather significant amounts of tree-derived DOM (tree-DOM) when it drains from the canopy, as throughfall, and down the stem, as stemflow. We examined the temporal variability of event-scale tree-DOM concentrations, yield, and optical (light absorbance and fluorescence) characteristics from an epiphyte-laden Quercus virginiana-Juniperus virginiana forest on Skidaway Island, Savannah, Georgia (USA). All tree-DOM fluxes were highly enriched compared to rainfall and epiphytes further increased concentrations. Stemflow DOC concentrations were greater than throughfall across study species, yet larger throughfall water yields produced greater DOC yields versus stemflow. Tree-DOM optical characteristics indicate it is aromatic-rich with FDOM dominated by humic-like fluorescence, containing 10-20% protein-like (tryptophan-like) fluorescence. Storm size was the only storm condition that strongly correlated with tree-DOM concentration and flux; however, throughfall and stemflow optical characteristics varied little across a wide range of storm conditions (from low magnitude events to intense tropical storms). Annual tree-DOM yields from the study forest (0.8-46 g-C m-2 yr-1) compared well to other yields along the rainfall-to-discharge flow pathway, exceeding DOM yields from some river watersheds.
NASA Astrophysics Data System (ADS)
Anders, Mark H.; Geissman, John Wm.; Piety, Lucille A.; Sullivan, J. Timothy
1989-02-01
The Intermountain and Idaho seismic belts within Idaho, Wyoming, and Montana form an unusual parabolic pattern about the axis of the aseismic eastern Snake River Plain (SRP). This pattern is also reflected in the distribution of latest Quaternary normal faults. Several late Cenozoic normal faults that trend perpendicular to the axis of the eastern SRP extend from the aseismic region to the region of latest Quaternary faulting and seismicity. A study of the late Miocene to Holocene displacement history of one of these, the Grand Valley fault system in southeastern Idaho and western Wyoming, indicates that a locus of high displacement rates has migrated away from the eastern SRP to its present location in southern Star Valley in western Wyoming. In Swan Valley, the studied area closest to the eastern SRP, isotopic ages and paleomagnetic data for over 300 samples from 47 sites on well-exposed late Cenozoic volcanic rocks (the tuff of Spring Creek, the tuff of Heise, the Huckleberry Ridge tuff, the Pine Creek Basalt, and an older tuff thought to be the tuff of Cosgrove Road) are used to demonstrate differences in the displacement rate on the Grand Valley fault over the last ˜10 m.y. Tectonic tilts for these volcanic rocks are estimated by comparing the results of paleomagnetic analyses in Swan Valley to similar analyses of samples from undeformed volcanic rocks outside of Swan Valley. Basin geometry and tilt axes are established using seismic reflection profiles and field mapping. Combining these data with the tilt data makes it possible to calculate displacement rates during discrete temporal intervals. An average displacement rate of ˜1.8 mm/yr is calculated for the Grand Valley fault in Swan Valley between 4.4 and 2.0 Ma. In the subsequent 2.0-m.y. interval the rate dropped 2 orders of magnitude to ˜0.014 mm/yr; during the preceding 5.5-m.y. interval the displacement rate is ˜0.15 mm/yr, or about 1 order of magnitude less than the rate between 4.4 and 2.0 Ma.
Mapping of fault scarps and unfaulted deposits along the Grand Valley fault system shows that latest Quaternary fault scarps are restricted to the portion farthest from the eastern SRP, the southern part of the Star Valley fault. Surface displacements estimated from scarp profiles and deposit ages estimated from soil development suggest a latest Quaternary displacement rate of 0.6-1.2 mm/yr for the southern portion of the Star Valley fault. Morphologic evidence suggests that this displacement rate persisted on the Star Valley fault throughout most of the Quaternary. The latest Quaternary displacement rate calculated for the southern portion of the Star Valley fault is similar to the rate calculated for Swan Valley during the interval from 2.0 to 4.4 Ma. This similarity, together with evidence for a low Quaternary displacement rate on the fault system in Swan Valley, suggests that the location of the highest displacement rate has migrated away from the eastern SRP. Other normal faults in southeastern Idaho, northwestern Wyoming, and southwestern Montana, while less well described than the Grand Valley fault system, exhibit a similar outward migrating pattern of increased fault activity followed by quiescence. Furthermore, a temporal and spatial relationship between fault activity and the 3.5 cm/yr northeastward track of the Yellowstone hotspot is observable on the Grand Valley fault system and on other north-northwest trending late Cenozoic faults that border the eastern SRP. The temporal and spatial relationship of Miocene to present high displacement rates for other circumeastern SRP faults and the observable outwardly migrating pattern of fault activity suggest that a similar parabolic distribution of seismicity and high displacement rates was symmetrically positioned about the former position of the hotspot. 
Moreover, the tandem migration of the hotspot and the parabolic distribution of increased fault activity and seismicity are closely followed by a parabolic-shaped "collapse shadow," or region of fault inactivity and aseismicity. We suggest that the outwardly migrating pattern of increased fault activity (active region) results from reduced integrated lithospheric strength caused by thermal effects of the hotspot. Conversely, the outwardly propagating quiescent region is the result of a reduction or "collapse" of crustal extension rates caused by increased integrated lithospheric strength. Lithospheric strength in this region is increased by addition of mafic materials at the base of the crust and at midcrustal levels. Although the strength of the mantle portion of the lithosphere is reduced, the increased strength of the crust results in a total integrated increase in lithospheric strength. Paradoxically, the surface heat flow data suggest that the region within the interior parabola has a higher heat flow (after accounting for the cooling effects of the eastern SRP aquifer) than the adjacent regions, yet the interior region exhibits significantly lower extension rates. It appears that in this region the surface heat flow is not a good predictor of rates of lithospheric extension.
Active, capable, and potentially active faults - a paleoseismic perspective
Machette, M.N.
2000-01-01
Maps of faults (geologically defined source zones) may portray seismic hazards in a wide range of completeness depending on which types of faults are shown. Three fault terms - active, capable, and potential - are used in a variety of ways for different reasons or applications. Nevertheless, to be useful for seismic-hazards analysis, fault maps should encompass a time interval that includes several earthquake cycles. For example, if the common recurrence in an area is 20,000-50,000 years, then maps should include faults that are 50,000-100,000 years old (two to five typical earthquake cycles), thus allowing for temporal variability in slip rate and recurrence intervals. Conversely, in more active areas such as plate boundaries, maps showing faults that are <10,000 years old should include those with at least 2 to as many as 20 paleoearthquakes. For the International Lithosphere Program's Task Group II-2 Project on Major Active Faults of the World, our maps and database will show five age categories and four slip rate categories that allow one to select differing time spans and activity rates for seismic-hazard analysis depending on tectonic regime. The maps are accompanied by a database that describes evidence for Quaternary faulting, geomorphic expression, and paleoseismic parameters (slip rate, recurrence interval and time of most recent surface faulting). These maps and databases provide an inventory of faults that would be defined as active, capable, and potentially active for seismic-hazard assessments.
NASA Astrophysics Data System (ADS)
Zinke, R. W.; Dolan, J. F.; Hatem, A. E.; Van Dissen, R. J.; Langridge, R.; Grenader, J.; McGuire, C. P.; Rhodes, E. J.; Nicol, A.
2016-12-01
Analysis of a large new high-resolution aerial lidar microtopographic data set provides > 500 measured fault offsets from sections of the four primary right-lateral strike-slip faults of the Marlborough Fault System (MFS), in northern South Island, New Zealand. With a shot density of >12 shots/m2 (and locally up to 18 shots/m2) these high-quality data allow us to resolve topographically defined geomorphic offsets with decimeter precision along 250 km of combined fault length. The measured offsets range in size from 2 m to > 100 m, and allow us to constrain displacements in the past one to several surface ruptures along stretches of the Wairau, Awatere, Clarence, and Hope faults. Our results reveal a number of important details of the rupture history of these faults, including: (1) the amount of slip and spatial variability (along and across strike) of strain released in the most recent event along sections of each of the four faults; (2) the consistency of slip throughout the past several ruptures on specific faults; and (3) suggestions of potential linkages and segment boundaries along each fault. The lidar data also facilitate precise measurements of larger offsets that, when combined with age data collected as part of our broader collaborative analyses of incremental fault slip rates and paleoearthquake ages, help to constrain the broader spatial and temporal patterns of strain release across the MFS during Holocene and latest Pleistocene time.
Earthquake and volcano clustering via stress transfer at Yucca Mountain, Nevada
Parsons, T.; Thompson, G.A.; Cogbill, A.H.
2006-01-01
The proposed national high-level nuclear waste repository at Yucca Mountain is close to Quaternary cinder cones and faults with Quaternary slip. Volcano eruption and earthquake frequencies are low, with indications of spatial and temporal clustering, making probabilistic assessments difficult. In an effort to identify the most likely intrusion sites, we based a three-dimensional finite-element model on the expectation that faulting and basalt intrusions are sensitive to the magnitude and orientation of the least principal stress in extensional terranes. We found that in the absence of fault slip, variation in overburden pressure caused a stress state that preferentially favored intrusions at Crater Flat. However, when we allowed central Yucca Mountain faults to slip in the model, we found that magmatic clustering was not favored at Crater Flat or in the central Yucca Mountain block. Instead, we calculated that the stress field was most favorable to intrusions near fault terminations, consistent with the location of the most recent volcanism at Yucca Mountain, the Lathrop Wells cone. We found this linked fault and magmatic system to be mutually reinforcing in the model in that Lathrop Wells feeder dike inflation favored renewed fault slip. © 2006 Geological Society of America.
Global Interactions Analysis of Epileptic ECoG Data
NASA Astrophysics Data System (ADS)
Ortega, Guillermo J.; Sola, Rafael G.; Pastor, Jesús
2007-05-01
Localization of the epileptogenic zone is an important issue in epileptology, even though there is not a unique definition of the epileptic focus. The objective of the present study is to test ultrametric analysis to uncover cortical interactions in human epileptic data. Correlation analysis has been carried out over intraoperative Electro-Corticography (ECoG) data in 2 patients suffering from temporal lobe epilepsy (TLE). Recordings were obtained using a grid of 20 electrodes (5×4) covering the lateral temporal lobe and a strip of either 4 or 8 electrodes at the mesial temporal lobe. Ultrametric analysis was performed in the averaged final correlation matrices. By using the matrix of linear correlation coefficients and the appropriate metric distance between pairs of electrodes time series, we were able to construct Minimum Spanning Trees (MST). The topological connectivity displayed by these trees gives useful and valuable information regarding physiological and pathological information in the temporal lobe of epileptic patients.
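The MST construction described above can be sketched briefly. The distance used below, d_ij = sqrt(2(1 - r_ij)), is one common way to turn a linear correlation coefficient into a metric; the abstract does not state which metric the authors used, and the electrode count and signals here are synthetic placeholders:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
signals = rng.standard_normal((24, 1000))  # 24 electrodes x 1000 samples (synthetic)

r = np.corrcoef(signals)        # matrix of linear correlation coefficients
d = np.sqrt(2.0 * (1.0 - r))    # assumed metric: d_ij = sqrt(2(1 - r_ij))
np.fill_diagonal(d, 0.0)        # no self-edges

# MST of the complete weighted graph over electrodes; returned as a
# sparse matrix whose nonzero entries are the retained edge weights
mst = minimum_spanning_tree(d).toarray()
edges = np.transpose(np.nonzero(mst))  # a tree on N nodes keeps N-1 edges
```

The topology of `edges` (which electrodes connect to which) is what carries the physiological information in the study; strongly correlated electrode pairs have small distances and so tend to be joined directly in the tree.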
Analysis of Ground Displacements in Taipei Area by Using High Resolution X-band SAR Interferometry
NASA Astrophysics Data System (ADS)
Tung, H.; Chen, H. Y.; Hu, J. C.
2014-12-01
Located in the northern part of Taiwan, Taipei is the island's most densely populated city and its political, economic, and cultural center. North of the Taipei basin, the active Tatun volcano group, with the eruptive potential to devastate the entire city, is only 15 km from the capital. Furthermore, the active Shanchiao fault lies along the western margin of the Taipei basin. Better understanding the assessment and mitigation of geological hazards in metropolitan Taipei is therefore both an interesting scientific topic and a matter of strong societal impact. In this study, we use 12 high-resolution X-band SAR images from the new-generation COSMO-SkyMed (CSK) constellation, together with leveling and GPS data, to monitor surface deformation around the Shanchiao fault and the Tatun volcano group. The stripmap mode of CSK SAR images provides a spatial resolution of 3 m x 3 m, one order of magnitude better than previously available satellite SAR data. Furthermore, the more frequent revisits of the same Area of Interest (AOI) by present X-band missions provide massive datasets that mitigate the baseline limitation and temporal decorrelation, improving the temporal resolution of deformation time series. After transferring the GPS vectors and leveling data to the LOS direction by referring to continuous GPS station BANC, the R-squared between PS velocities and GPS velocities is approximately 0.9, which indicates the high reliability of our PSInSAR result. In addition, the well-fitting profiles between leveling data and the PSInSAR result along two leveling routes both demonstrate that the significant deformation gradient occurs mainly along the Shanchiao fault. The severe land-subsidence area is located in the western part of the Taipei basin, just next to the Shanchiao fault, with a maximum SRD rate of 30 mm/yr.
However, Wuku, the area of severe subsidence, is also an industrial area of Taipei, so part of the signal could be attributed to anthropogenic effects. In the future, we will use all available images to monitor the temporal and spatial variation in deformation to better understand the activity of the Shanchiao fault.
NASA Astrophysics Data System (ADS)
Luther, A. L.; Axen, G. J.; Selverstone, J.
2011-12-01
Paleostress analyses from the footwalls of the West Salton and Whipple detachment faults (WSD and WD, respectively), two low-angle normal faults (LANFs), indicate both spatial and temporal stress field changes. LANFs slip at a higher angle to S1 than predicted by Andersonian theory. Hypotheses allowing slip on misoriented faults include a local stress field rotation in the fault zone, low-friction materials, high pore-fluid pressure, and/or dynamic effects. The WSD is part of the dextral-transtensional southern San Andreas fault system, slipped ~10 km from ~8 to 1 Ma, and its footwall exposures reflect only brittle deformation. The WD slipped at least ~40 km from ~25 to ~16 Ma and has a mylonitic footwall overprinted by brittle deformation. Both LANFs were folded during extension. Eighty percent of inversions that fit extension have a steeply plunging S1, consistent with LANF slip at a high angle to S1. These require some weakening mechanism, and the absence of known weak materials along these faults suggests that pore-fluid pressure or dynamic effects are relevant. Most spatial S1 changes occur across minidetachments, faults sub-parallel to the main faults with similar damage zones, which we interpret to have formed early in WD history at the frictional-viscous transition [Selverstone et al., this session]. Their footwalls record a more moderately plunging S1 than their hanging walls. Thus, we infer that older, deeper stress fields were rotated, consistent with a gradual rotation with depth. Alternating stress fields apparently affected many single outcrops and arise from mutually cross-cutting fracture sets that cannot be fit by a single stress field. In places where the alternation is between extensional and shortening fields, the shortening directions are subhorizontal, ~perpendicular to fold axes, and consistent with dextral-oblique slip in the case of the WSD. Commonly, S1 and S3 swap positions. 
In other places, two extensional stress fields differ, with S1 changing from a steep to a moderate angle to the lanf. We hypothesize that alternating stress fields result from earthquake stress drops large enough to allow at least 2 principal stresses to switch orientations. Either the differential stresses are small and similar to hypothesized stress drops or stress drops are larger than suggested by seismic data.
Kendrick, K.J.; Morton, D.M.; Wells, S.G.; Simpson, R.W.
2002-01-01
The San Timoteo badlands is an area of uplift and erosional dissection that has formed as a result of late Quaternary uplift along a restraining bend in the San Jacinto fault, of the San Andreas fault system in southern California. This bend currently is located in a region where late Quaternary deposits and associated surfaces have formed in lower San Timoteo Canyon. We have used morphometric analysis of these surfaces, in conjunction with computer modeling of deformational patterns along the San Jacinto fault, to reconstruct spatial and temporal variations in uplift along the bend. Morphometric techniques used include envelope/subenvelope mapping, a gradient-length index along channels, and denudation values. Age control is determined using a combination of thermoluminescence (TL) and infrared optically stimulated luminescence (IROSL) dating and correlation of soil-development indices. These approaches are combined with an elastic half-space model used to determine the deformation associated with the fault bend. The region of modeled uplift has a similar distribution as that determined by morphometric techniques. Luminescence dates and soil-correlation age estimates generally agree. Based on soil development, surfaces within the study area were stabilized at approximately 300-700 ka for Q3, 43-67 ka for Q2, and 27.5-67 ka for Q1. Luminescence ages (both TL and IROSL) for the formation of the younger two surfaces are 58 to 94 ka for Q2 and 37 to 62 ka for Q1 (ages reported at 1σ uncertainty). Periods of uplift were determined for the surfaces in the study area, resulting in approximate uplift rates of 0.34 to 0.84 m/ka for the past 100 ka and 0.13 to 1.00 m/ka for the past 66 ka. Comparison of these rates of uplift to those generated by the model supports a higher rate of lateral slip along the San Jacinto fault than commonly assumed (greater than 20 mm/yr, as compared to the 8-12 mm/yr commonly cited). 
This higher slip rate supports the proposal that a greater amount of slip has transferred from the San Andreas fault to the San Jacinto fault than generally held. The San Jacinto fault may have accommodated a significant portion of the plate boundary slip during the past 100 ka.
Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSNs) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger, or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach for evaluating permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices, and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements. PMID:22368497
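As a minimal illustration of how a fault tree turns device failure data into a network reliability figure, the sketch below evaluates a top event through an AND gate (redundant routers) and an OR gate (a series communication path). The failure rates, topology, and one-year mission time are hypothetical, basic events are assumed independent, and the paper's automatic tree-generation procedure is not reproduced:

```python
import math

def unrel(lam_per_hour, t_hours):
    # failure probability by time t for a device with constant failure rate
    return 1.0 - math.exp(-lam_per_hour * t_hours)

def and_gate(qs):
    # gate fails only if ALL inputs fail (models redundant devices)
    q = 1.0
    for qi in qs:
        q *= qi
    return q

def or_gate(qs):
    # gate fails if ANY input fails (models a series path)
    ok = 1.0
    for qi in qs:
        ok *= 1.0 - qi
    return 1.0 - ok

t = 8760.0  # one year of operation, in hours (hypothetical mission time)
q_sensor = unrel(2e-6, t)                               # field device
q_routers = and_gate([unrel(5e-6, t), unrel(5e-6, t)])  # two redundant routers
q_sink = unrel(1e-6, t)                                 # gateway/sink

# top event: the sensor-to-sink communication path is down
q_top = or_gate([q_sensor, q_routers, q_sink])
print(f"one-year path reliability: {1.0 - q_top:.4f}")
```

Note how the AND gate makes the redundant-router contribution quadratically small, which is exactly the effect the paper's support for "different levels of redundancy" is meant to quantify.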
Adaptive Hierarchical Voltage Control of a DFIG-Based Wind Power Plant for a Grid Fault
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jinho; Muljadi, Eduard; Park, Jung-Wook
This paper proposes an adaptive hierarchical voltage control scheme of a doubly-fed induction generator (DFIG)-based wind power plant (WPP) that can secure more reserve of reactive power (Q) in the WPP against a grid fault. To achieve this, each DFIG controller employs an adaptive reactive power to voltage (Q-V) characteristic. The proposed adaptive Q-V characteristic is temporally modified depending on the available Q capability of a DFIG; it is dependent on the distance from a DFIG to the point of common coupling (PCC). The proposed characteristic secures more Q reserve in the WPP than the fixed one. Furthermore, it allows DFIGs to promptly inject up to the Q limit, thereby improving the PCC voltage support. To avert an overvoltage after the fault clearance, washout filters are implemented in the WPP and DFIG controllers; they can prevent a surplus Q injection after the fault clearance by eliminating the accumulated values in the proportional-integral controllers of both controllers during the fault. Test results demonstrate that the scheme can improve the voltage support capability during the fault and suppress transient overvoltage after the fault clearance under scenarios of various system and fault conditions; therefore, it helps ensure grid resilience by supporting the voltage stability.
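The washout-filter idea, removing accumulated (DC) controller components so they cannot persist after fault clearance, can be illustrated with a generic discrete-time first-order high-pass filter. The time constant and step size below are hypothetical, and this is a textbook washout sketch, not the paper's actual WPP/DFIG controller implementation:

```python
class Washout:
    """Discrete first-order washout (high-pass): y[k] = a*(y[k-1] + u[k] - u[k-1])."""

    def __init__(self, tw, dt):
        self.a = tw / (tw + dt)  # a -> 1 as the washout time constant grows
        self.y = 0.0
        self.u_prev = 0.0

    def step(self, u):
        # backward-Euler discretization of H(s) = Tw*s / (1 + Tw*s)
        self.y = self.a * (self.y + u - self.u_prev)
        self.u_prev = u
        return self.y

w = Washout(tw=1.0, dt=0.01)  # 1 s time constant, 10 ms step (hypothetical)
outs = [w.step(1.0) for _ in range(2000)]  # sustained step input, e.g. a held fault signal
# the step passes through initially but is washed out over time
```

Because a sustained input decays to zero at the filter output, any value accumulated during the fault stops driving the controller once conditions settle, which mirrors how the paper's washout filters suppress surplus Q injection after fault clearance.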
Oberhuber, Walter
2017-04-01
High-resolution time series of stem radius variations (SRVs) record fluctuations in tree water status and temporal dynamics of radial growth. The focus of this study was to evaluate the influence of tree size (i.e., saplings vs. mature trees) and soil water availability on SRVs. Dendrometers were installed on Pinus sylvestris at an open xeric site and on Picea abies at a dry-mesic site, and the SRVs of co-occurring saplings and mature trees were analyzed during two consecutive years. The results revealed that irrespective of tree size, radial growth in P. sylvestris occurred in April-May, whereas the main growing period of P. abies was April-June (saplings) and May-June (mature trees). Linear relationships between growth-detrended SRVs (SSRVs) of mature trees vs. saplings and climate-SSRV relationships revealed greater use of water reserves by mature P. abies compared with saplings. This suggests that the strikingly depressed growth of saplings compared with mature P. abies was caused by source limitation, i.e., restricted photosynthesis beneath the dense canopy. In contrast, a tree size effect on the annual increment, SSRV, and climate-SSRV relationships was less obvious in P. sylvestris, indicating comparable water status in mature trees and saplings under an open canopy. The results of this study provided evidence that water availability and canopy conditions can explain differences in temporal dynamics of radial growth and use of stem water reserves among mature trees and saplings.
Timescale dependent deformation of orogenic belts?
NASA Astrophysics Data System (ADS)
Hoth, S.; Friedrich, A. M.; Vietor, T.; Hoffmann-Rothe, A.; Kukowski, N.; Oncken, O.
2004-12-01
The principal aim of linking geodetic, paleoseismologic, and geologic estimates of fault slip is to extrapolate the respective rates from one timescale to another and ultimately predict the recurrence interval of large earthquakes, which threaten human habitats. This approach, however, is based on two often implicitly made assumptions: a uniform slip distribution through time and space, and no changes in the boundary conditions during the time interval of interest. Both assumptions are often hard to verify. A recent study, which analysed an exceptionally complete record of seismic slip for the Wasatch and related faults (Basin and Range province) ranging from 10 yr to 10 Myr, suggests that such a link between geodetic and geologic rates might not exist, i.e., that our records of fault displacement may depend on the timescale over which they were measured. This view derives support from results of scaled 2D sandbox experiments, as well as distinct-element numerical simulations, both of which investigated the effect of boundary conditions such as flexure, mechanical stratigraphy, and erosion on the spatio-temporal distribution of deformation within bivergent wedges. We identified three types of processes based on their distinct spatio-temporal distribution of deformation. First, incremental strain and local strain rates are very short-lived, are broadly distributed within the bivergent wedge, and show no temporal pattern. Second, footwall shortcuts and the re-activation of either internal thrusts or of the retro shear-zone are irregularly distributed in time, and thus not predictable either, but last for a longer time interval. Third, the stepwise initiation and propagation of the deformation front is very regular in time, since it depends on the thickness of the incoming layer and on its internal and basal material properties. We consider the propagation of the deformation front as an internal clock of a thrust belt, which is therefore predictable.
A deformation front advance cycle requires the longest timescale. Thus, despite known and constant boundary conditions during the simulations, we found only one regular temporal pattern of deformation in a steadily active bivergent wedge. We therefore propose that the structural inventory of an orogenic belt is hierarchically ordered with respect to accumulated slip, in analogy to the discharge pattern in a drainage network. The deformation front would have the highest order, a branching splay the lowest. Since kinematic boundary conditions control deformation front advance, its timing and the related maximum magnitude of finite strain, i.e., throw on the frontal thrust, are predictable. However, the number of factors controlling the reactivation of faults, such as the degree of strain softening, the orientation of faults, or fluid flow and resulting cementation of faults, increases with increasing distance from the deformation front. Since it is rarely possible to determine the complete network of forces within a wedge, the reactivation of lower-order structures is not predictable in time and space. Two implications for field studies emerge: a change in the propagation of deformation can only be determined if at least two accretion cycles are sampled, and the link between geodetic, paleoseismologic, and geologic fault slip estimates can only be successfully derived if the position of the investigated fault within the hierarchical order has not changed over the time interval of interest.
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. 
The set of programs is as follows: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI-compliant C, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
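The scaled Taylor matrix-exponential idea behind STEM can be illustrated with a toy example: scale the generator down by a power of two, sum the Taylor series where it converges fast, then square the result back up. This is not PAWS/STEM itself, and the Markov model below (a single absorbing failure state, made-up rate) is purely illustrative.

```python
# Toy scaled-Taylor matrix exponential: exp(M) = (exp(M / 2^k))^(2^k).

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(M, terms=20, squarings=10):
    n = len(M)
    s = 2.0 ** squarings
    A = [[m / s for m in row] for row in M]        # scale down
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    P = [row[:] for row in E]
    for k in range(1, terms):                      # Taylor series of exp(A)
        P = mat_mul(P, A)                          # P becomes A^k / k!
        P = [[p / k for p in row] for row in P]
        E = [[E[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    for _ in range(squarings):                     # square back up
        E = mat_mul(E, E)
    return E

# 2-state Markov chain: operational -> failed at rate 1e-3/hr (absorbing),
# evaluated over t = 100 hr.
lam, t = 1e-3, 100.0
Qt = [[-lam * t, lam * t], [0.0, 0.0]]
P = mat_exp(Qt)
print(P[0][1])  # probability of failure by t; analytically 1 - exp(-0.1)
```

Real reliability models are numerically stiff precisely because fault rates and recovery rates differ by many orders of magnitude; production tools like PAWS/STEM handle that with far more care than this sketch.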
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. 
The set of programs is as follows: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI-compliant C, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
NASA Astrophysics Data System (ADS)
Tong, X.; Sandwell, D. T.; Schmidt, D. A.
2018-04-01
We analyzed interferometric synthetic aperture radar (InSAR) data from the ALOS-1/PALSAR-1 satellite to image the interseismic deformation along the Sumatran fault. The InSAR time series analysis reveals up to 20 mm/year of aseismic creep on the Aceh segment of the northern Sumatran fault, a large fraction of the total slip rate across this fault. The aseismic creep extends for 100 km along strike, and its along-strike variation has an inverse "U" shape. An analysis of the moment accumulation rate shows that the central part of the creeping section accumulates moment at approximately 50% of the rate of the surrounding locked segments. An initial analysis of temporal variations suggests that the creep rate may be decelerating with time, perhaps adjusting to a stress perturbation from nearby seismic activity. Our study has implications for the earthquake hazard along the northern Sumatran fault.
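The moment-accumulation comparison above follows directly from the standard seismic moment rate formula, Ṁ₀ = μ · A · (slip deficit rate): a patch creeping at half the long-term rate stores moment at half the rate of a fully locked patch. The sketch below uses invented patch dimensions and rates, not the study's values.

```python
# Back-of-the-envelope moment-accumulation comparison.
# All numbers are illustrative, not from the study.

MU = 3.0e10                # shear modulus, Pa (typical crustal value)
AREA = 100e3 * 15e3        # hypothetical 100 km x 15 km fault patch, m^2

def moment_rate(slip_deficit_mm_yr):
    """Moment accumulation rate in N*m per year."""
    return MU * AREA * slip_deficit_mm_yr * 1e-3   # mm -> m

v_plate = 40.0             # assumed long-term slip rate, mm/yr
v_creep = 20.0             # assumed observed creep rate, mm/yr
locked = moment_rate(v_plate)               # fully locked patch
creeping = moment_rate(v_plate - v_creep)   # creep offsets part of the deficit
print(creeping / locked)   # 0.5: the creeping patch stores half the moment
```

With these assumed rates the creeping section accumulates moment at 50% of the locked rate, mirroring the proportionality the abstract reports.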
van der Pluijm, B.A.; Vrolijk, P.J.; Pevear, D.R.; Hall, C.M.; Solum, J.
2006-01-01
Fault rocks from the classic Rocky Mountain foreland fold-and-thrust belt in south-western Canada were dated by Ar analysis of clay grain-size fractions. Using X-ray diffraction quantification of the detrital and authigenic component of each fraction, these determinations give ages for individual faults in the area (illite age analysis). The resulting ages cluster around 72 and 52 Ma (here called the Rundle and McConnell pulses, respectively), challenging the traditional view of gradual forward progression of faulting and thrust-belt history of the area. The recognition of spatially and temporally restricted deformation episodes offers field support for theoretical models of critically stressed wedges, which result in geologically reasonable strain rates for the area. In addition to regional considerations, this study highlights the potential of direct dating of shallow fault rocks for our understanding of upper-crustal kinematics and regional tectonic analysis of ancient orogens. © 2006 Geological Society of America.
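The illite age analysis mentioned above rests on a mixing argument: each grain-size fraction mixes detrital and authigenic illite, so extrapolating measured age against detrital fraction to 0% detrital estimates the fault (authigenic) age. The sketch below uses invented data and a plain linear fit for clarity; real illite age analysis fits the transformed quantity e^(λt) − 1 rather than raw ages.

```python
# Illite age analysis (IAA) extrapolation sketch with invented data.

def linear_fit(xs, ys):
    """Ordinary least squares: return (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical grain-size fractions: (% detrital illite from XRD, Ar age in Ma)
detrital_pct = [20.0, 45.0, 70.0]
ages_ma = [88.0, 108.0, 128.0]
slope, intercept = linear_fit(detrital_pct, ages_ma)
print(intercept)                 # ~72 Ma: authigenic (fault) age at 0% detrital
print(slope * 100 + intercept)   # ~152 Ma: detrital age at 100%
```

The 0%-detrital intercept is the faulting age; the invented numbers here were chosen to land near the 72 Ma Rundle pulse for continuity with the abstract.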
NASA Astrophysics Data System (ADS)
Materna, Kathryn; Taira, Taka'aki; Bürgmann, Roland
2018-01-01
The Mendocino Triple Junction (MTJ), at the northern terminus of the San Andreas Fault system, is an actively deforming plate boundary region with poorly constrained estimates of seismic coupling on most offshore fault surfaces. Characteristically repeating earthquakes provide spatial and temporal descriptions of aseismic creep at the MTJ, including on the oceanic transform Mendocino Fault Zone (MFZ) as it subducts beneath North America. Using a dataset of earthquakes from 2008 to 2017, we find that the easternmost segment of the MFZ displays creep during this period at about 65% of the long-term slip rate. We also find creep at slower rates on the shallower strike-slip interface between the Pacific plate and the North American accretionary wedge, as well as on a fault that accommodates Gorda subplate internal deformation. After a nearby
A Hierarchical Analysis of Tree Growth and Environmental Drivers Across Eastern US Temperate Forests
NASA Astrophysics Data System (ADS)
Mantooth, J.; Dietze, M.
2014-12-01
Improving predictions of how forests in the eastern United States will respond to future global change requires a better understanding of the drivers of variability in tree growth rates. Current inventory data lack the temporal resolution to characterize interannual variability, while existing growth records lack the extent required to assess spatial scales of variability. Therefore, we established a network of forest inventory plots at ten sites across the eastern US and measured growth in adult trees using increment cores. Sites were chosen to maximize the climate space explored, while within sites, plots were spread across primary environmental gradients to explore landscape-level variability in growth. Using the annual growth record available from tree cores, we explored the responses of trees to multiple environmental covariates over multiple spatial and temporal scales. We hypothesized that growth rates vary among species within and across sites, and that intraspecific growth rates increase with temperature along a species' range. We also hypothesized that trees show synchrony in growth responses to landscape-scale climatic changes. Initial analyses of growth increments indicate that across sites, trees with intermediate shade tolerance, e.g., red oak (Quercus rubra), tend to have the highest growth rates. At the site level, there is evidence for synchrony in response to large-scale climatic events (e.g., prolonged drought and above-average temperatures). However, growth responses to climate at the landscape scale have yet to be detected. Our current analysis uses hierarchical Bayesian state-space modeling to focus on growth responses of adult trees to environmental covariates at multiple spatial and temporal scales. This predictive model of tree growth currently incorporates observed effects at the individual, plot, site, and landscape scales.
Current analysis using this model shows a potential slowing of growth in the past decade at two sites in the northeastern US (Harvard Forest and Bartlett Experimental Forest); however, more work is required to determine the robustness of this trend. Finally, these observations are being incorporated into ecosystem models using the Brown Dog informatics tools and the Predictive Ecosystem Analyzer (PEcAn) data assimilation workflow.
Communications and tracking expert systems study
NASA Technical Reports Server (NTRS)
Leibfried, T. F.; Feagin, Terry; Overland, David
1987-01-01
The original objectives of the study consisted of five broad areas of investigation: criteria and issues for explanation of communication and tracking system anomaly detection, isolation, and recovery; data storage simplification issues for fault detection expert systems; data selection procedures for decision tree pruning and optimization to enhance the abstraction of pertinent information for clear explanation; criteria for establishing levels of explanation suited to needs; and analysis of expert system interaction and modularization. Progress was made in all areas, but to a lesser extent in the criteria for establishing levels of explanation suited to needs. Among the types of expert systems studied were those related to anomaly or fault detection, isolation, and recovery.
[Medical Equipment Maintenance Methods].
Liu, Hongbin
2015-09-01
Because medical equipment is technologically sophisticated and complex, and because its safety and effectiveness must be assured, maintenance work on it is demanding. This paper introduces basic methods of medical instrument maintenance, including fault tree analysis, the node method, and the exclusion method, three important methods in medical equipment maintenance; with them, hardware breakdown maintenance can be carried out easily on instruments for which circuit drawings are available. The paper also introduces methods for handling some special fault conditions, to help avoid unnecessary detours when the same problems are encountered again. Continuous learning is very important for staff newly engaged in this area.
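The exclusion (half-split) idea for circuit troubleshooting can be sketched as a binary search along a signal chain: probe the midpoint, discard the half that tests good, and repeat. The chain length, probe model, and fault position below are invented for illustration; this is not a procedure from the paper.

```python
# Half-split ("exclusion") fault isolation sketch along a signal chain.

def first_bad_stage(chain_ok, n):
    """chain_ok(i) is True if the signal is still good after stage i.
    Returns the index of the first faulty stage in O(log n) probes."""
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if chain_ok(mid):        # signal good here: fault lies downstream
            lo = mid + 1
        else:                    # signal bad here: fault at mid or upstream
            hi = mid
    return lo

FAULTY = 11                      # hypothetical broken stage in a 16-stage chain
probe = lambda i: i < FAULTY     # measurement model: good only before the fault
print(first_bad_stage(probe, 16))  # 11, found in 4 probes instead of 16
```

For a 16-stage chain this isolates the fault in four measurements rather than sixteen, which is why the half-split method is a staple of circuit-level troubleshooting.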
Mori, J.
1996-01-01
Details of the M 4.3 foreshock to the Joshua Tree earthquake were studied using P waves recorded on the Southern California Seismic Network and the Anza network. Deconvolution, using an M 2.4 event as an empirical Green's function, corrected for complicated path and site effects in the seismograms and produced simple far-field displacement pulses that were inverted for a slip distribution. Both possible fault planes, north-south and east-west, for the focal mechanism were tested by a least-squares inversion procedure with a range of rupture velocities. The results showed that the foreshock ruptured the north-south plane, similar to the mainshock. The foreshock initiated a few hundred meters south of the mainshock and ruptured to the north, toward the mainshock hypocenter. The mainshock (M 6.1) initiated near the northern edge of the foreshock rupture 2 hr later. The foreshock had a high stress drop (320 to 800 bars) and broke a small portion of the fault adjacent to the mainshock but was not able to immediately initiate the mainshock rupture.
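The empirical Green's function (EGF) deconvolution described above can be sketched as spectral division with a water level to stabilize small denominators. Everything below is a synthetic toy: the signals are invented, the DFT is a naive implementation, and the water-level scheme is one common choice, not necessarily the study's.

```python
# EGF deconvolution sketch: divide mainshock spectrum by EGF spectrum
# with a water level, recovering the relative source-time function.

import cmath

def dft(x, inverse=False):
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def egf_deconvolve(main, egf, water=0.01):
    M, G = dft(main), dft(egf)
    wl = water * max(abs(g) for g in G)      # water level vs. peak spectrum
    R = [m * g.conjugate() / max(abs(g) ** 2, wl ** 2) for m, g in zip(M, G)]
    return [v.real for v in dft(R, inverse=True)]

# Synthetic check: the "mainshock" is the EGF circularly convolved with a
# two-sample source pulse, so deconvolution should return that pulse.
egf = [0.0, 1.0, 0.5, -0.3, 0.1, 0.0, 0.0, 0.0]
pulse = [1.0, 0.8] + [0.0] * 6
n = len(egf)
main = [sum(egf[k] * pulse[(j - k) % n] for k in range(n)) for j in range(n)]
rstf = egf_deconvolve(main, egf, water=1e-6)
print([round(v, 3) for v in rstf[:2]])  # close to [1.0, 0.8]
```

The recovered relative source-time function is what is then inverted for a slip distribution; real data require tapering, noise handling, and a carefully chosen water level.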
Magma-tectonic Interaction at Laguna del Maule, Chile
NASA Astrophysics Data System (ADS)
Keranen, K. M.; Peterson, D. E.; Miller, C. A.; Garibaldi, N.; Tikoff, B.; Williams-Jones, G.
2016-12-01
The Laguna del Maule Volcanic Field (LdM), Chile, the largest concentration of rhyolite <20 kyr old globally, exhibits crustal deformation at rates higher than any other non-erupting volcano. The interaction of large magmatic systems with faulting is poorly understood; however, the Chaitén rhyolitic system demonstrated that faults can serve as magma pathways during an eruption. We present a complex fault system at LdM in close proximity to the magma reservoir. In March 2016, 18 CHIRP seismic reflection lines were acquired at LdM to identify faults and analyze potential spatial and temporal impacts of the fault system on volcanic activity. We mapped three key horizons on each line, bounding sediment packages between the Holocene onset, 870 ybp, and the present date. Faults were mapped on each line, and offset was calculated across the key horizons. Our results indicate a system of normal-component faults in the northern lake sector, striking subparallel to the mapped Troncoso Fault SW of the lake. These faults correlate with prominent magnetic lineations mapped by boat magnetic data acquired in February 2016, which are interpreted as dykes intruding along faults. We also imaged a vertical fault, interpreted as a strike-slip fault, and a series of normal faults in the SW lake sector near the center of magmatic inflation. Isochron and fault offset maps illuminate areas of growth strata and indicate migration and an increase of fault activity from south to north through time. We identify a domal structure in the SW lake sector, coincident with an area of low magnetization, in the region of maximum deformation from InSAR results. The dome experienced roughly 10 ms TWT (about 10 m) of uplift over the past 16 kyr, which we interpret as magmatic inflation in a shallow magma reservoir. This inflation is isolated to a 1.5 km diameter region in the hanging wall of the primary normal fault system, indicating possible fault-facilitated inflation.
NASA Astrophysics Data System (ADS)
Cabral-Cano, E.; Arciniega-Ceballos, A.; Vergara-Huerta, F.; Chaussard, E.; Wdowinski, S.; DeMets, C.; Salazar-Tlaczani, L.
2013-12-01
Subsidence has been a common occurrence in several cities in central Mexico for the past three decades. This process causes substantial damage to urban infrastructure and housing in several cities, and it is a major factor to be considered when planning urban development, land-use zoning, and hazard mitigation strategies. Since the early 1980s the city of Morelia in central Mexico has experienced subsidence associated with groundwater extraction in excess of natural recharge from rainfall. Previous works have focused on the detection and temporal evolution of the subsidence spatial distribution. The most recent InSAR analysis confirms the permanence of previously detected rapidly subsiding areas, such as the Rio Grande Meander area, and also defines two previously undetected subsidence patches in newly developed suburban sectors: one west of Morelia at the Fraccionamiento Del Bosque, south of Hwy. 15, and another north of Morelia along Gabino Castañeda del Rio Ave. Because subsidence-induced shallow faulting develops where horizontal strain is highly localized, newly developing subsidence areas are particularly prone to faulting and fissuring. Shallow faulting increases groundwater vulnerability because it disrupts discharge hydraulic infrastructure and creates a direct path for transport of surface pollutants into the underlying aquifer. Other sectors in Morelia that have been experiencing subsidence for a longer time have already developed well-defined faults, such as the La Colina, Central Camionera, Torremolinos, and La Paloma faults. Local construction codes in the vicinity of these faults define a very narrow swath along which housing construction is not allowed. In order to better characterize these fault systems and provide better criteria for future municipal construction codes, we have surveyed the La Colina and Torremolinos fault systems in the western sector of Morelia using seismic tomographic techniques.
Our results indicate that the La Colina fault includes secondary faults at depths of up to 4-8 m below the surface and located up to 24 m away from the main fault trace. The Torremolinos fault system includes secondary faults, which are present up to 8 m deep and 12-18 m away from the main fault trace. Even though the InSAR analysis provides an unsurpassed synoptic view, higher temporal resolution observation of fault movement has been pursued using the MOIT continuously operating GPS station, located within 100 m of the La Colina main fault trace. GPS data are also particularly useful for decomposing horizontal and vertical motion in the absence of both ascending and descending SAR data acquisitions. Observations since July 2009 show an overall displacement trend of -39 mm/yr and a total horizontal differential motion of 41.8 mm/yr and -4.7 mm/yr in the latitudinal and longitudinal components, respectively, with respect to the motion observed at the MOGA GPS station located 5.0 km to the SSE in an area not affected by subsidence. In addition to the overall trend, high-amplitude excursions at the MOIT station, with individual residual amplitudes up to 20 mm, 25 mm, and 60 mm in the latitudinal, longitudinal, and vertical components, respectively, are observed. The correlation of these fault-motion excursions with the rainfall records will be analyzed.
Bedrosian, Paul A.; Burgess, Matthew K.; Nishikawa, Tracy
2013-01-01
Within the south-western Mojave Desert, the Joshua Basin Water District is considering applying imported water into infiltration ponds in the Joshua Tree groundwater sub-basin in an attempt to artificially recharge the underlying aquifer. Scarce subsurface hydrogeological data are available near the proposed recharge site; therefore, time-domain electromagnetic (TDEM) data were collected and analysed to characterize the subsurface. TDEM soundings were acquired to estimate the depth to water on either side of the Pinto Mountain Fault, a major east-west trending strike-slip fault that transects the proposed recharge site. While TDEM is a standard technique for groundwater investigations, special care must be taken when acquiring and interpreting TDEM data in a two-dimensional (2D) faulted environment. A subset of the TDEM data consistent with a layered-earth interpretation was identified through a combination of three-dimensional (3D) forward modelling and diffusion time-distance estimates. Inverse modelling indicates an offset in water table elevation of nearly 40 m across the fault. These findings imply that the fault acts as a low-permeability barrier to groundwater flow in the vicinity of the proposed recharge site. Existing production wells on the south side of the fault, together with a thick unsaturated zone and permeable near-surface deposits, suggest the southern half of the study area is suitable for artificial recharge. These results illustrate the effectiveness of targeted TDEM in support of hydrological studies in a heavily faulted desert environment where data are scarce and the cost of obtaining these data by conventional drilling techniques is prohibitive.
Langridge, R.M.; Stenner, Heidi D.; Fumal, T.E.; Christofferson, S.A.; Rockwell, T.K.; Hartleb, R.D.; Bachhuber, J.; Barka, A.A.
2002-01-01
The Mw 7.4 17 August 1999 İzmit earthquake ruptured five major fault segments of the dextral North Anatolian Fault Zone. The 26-km-long, N86°W-trending Sakarya fault segment (SFS) extends from the Sapanca releasing step-over in the west to near the town of Akyazi in the east. The SFS emerges from Lake Sapanca as two distinct fault traces that rejoin to traverse the Adapazari Plain to Akyazi. Offsets were measured across 88 cultural and natural features that cross the fault, such as roads, cornfield rows, rows of trees, walls, rails, field margins, ditches, vehicle ruts, a dike, and ground cracks. The maximum displacement observed for the İzmit earthquake (∼5.1 m) was encountered on this segment. Dextral displacement for the SFS rises from less than 1 m at Lake Sapanca to greater than 5 m near Arifiye, only 3 km away. Average slip decreases uniformly to the east from Arifiye until the fault steps left from Sagir to Kazanci to the N75°W, 6-km-long Akyazi strand, where slip drops to less than 1 m. The Akyazi strand passes eastward into the Akyazi Bend, which consists of a high-angle bend (18°-29°) between the Sakarya and Karadere fault segments, a 6-km gap in surface rupture, and high aftershock energy release. Complex structural geometries exist between the İzmit, Düzce, and 1967 Mudurnu fault segments that have arrested surface ruptures on timescales ranging from 30 sec to 88 days to 32 yr. The largest of these step-overs may have acted as a rupture segmentation boundary in previous earthquake cycles.
NASA Astrophysics Data System (ADS)
Dannowski, A.; Morgan, J. P.; Grevemeyer, I.; Ranero, C. R.
2018-02-01
Crustal structure provides the key to understanding the interplay of magmatism and tectonism as oceanic crust is constructed at mid-ocean ridges (MORs). At slow spreading rates, magmatic processes dominate the central areas of MOR segments, whereas segment ends are highly tectonized. The TAMMAR segment at the Mid-Atlantic Ridge (MAR) between 21°25'N and 22°N is a magmatically active segment. At 4.5 Ma this segment started to propagate south, causing the termination of the transform fault at 21°40'N. This stopped long-lived detachment faulting and caused the migration of the ridge offset to the south. Here a segment center with a high magmatic budget has replaced a transform fault region with limited magma supply. We present results from seismic refraction profiles that mapped the crustal structure across the ridge crest of the TAMMAR segment. The seismic data reveal crustal structure changes at the segment center as a function of melt supply. Seismic Layer 3 underwent profound changes in thickness, thickening rapidly at about 5 Ma. This correlates with the observed "Bull's Eye" gravimetric anomaly in that region. Our observations support a temporal change from thick lithosphere with oceanic core complex formation and transform faulting to thin lithosphere with focused mantle upwelling and segment growth. Temporal changes in crustal construction are connected to variations in the underlying mantle. We propose that there is a link between the neighboring segments at a larger scale within the asthenosphere, forming a long, highly magmatically active macrosegment, here called the TAMMAR-Kane Macrosegment.
Continuous micro-earthquake catalogue of the central Southern Alps, New Zealand
NASA Astrophysics Data System (ADS)
Michailos, Konstantinos; Townend, John; Savage, Martha; Chamberlain, Calum
2017-04-01
The Alpine Fault is one of the most prominent tectonic features in the South Island, New Zealand, and is inferred to be late in its seismic cycle of M 8 earthquakes based on paleoseismological evidence. Despite this, the Alpine Fault displays low levels of contemporary seismic activity, with little documented on-fault seismicity. This low-magnitude seismicity, often below the completeness level of the GeoNet national seismic catalogue, may inform us of changes in fault character along-strike and might be used for rupture simulations and hazard planning. Thus, compiling a micro-earthquake catalogue for the Southern Alps prior to an expected major earthquake is of great interest. Areas of low seismic activity, like the central part of the Alpine Fault, require data recorded over a long duration to reveal temporal and spatial seismicity patterns and provide a better understanding of the processes controlling seismogenesis. The continuity and density of the Southern Alps Microearthquake Borehole Array (SAMBA; deployed in late 2008) allow us to study seismicity in the Southern Alps over a more extended time period than has ever been done previously. Furthermore, by using data from other temporary networks (e.g. WIZARD, ALFA08, DFDP-10) we are able to extend the region covered. To generate a spatially and temporally continuous catalogue of seismicity in New Zealand's central Southern Alps, we used automatic detection and phase-picking methods for both P- and S-wave arrivals (kPick; Rawles and Thurber, 2015). Using almost 8 years of seismic data we obtained about 9,000 preliminary earthquake locations. The seismicity is both clustered and scattered, and a previously observed seismic gap between the Wanganui and Whataroa rivers is also identified.
Certification trails for data structures
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. The applicability of the certification trail technique is significantly generalized. Previously, certification trails had to be customized to each algorithm application; here, trails appropriate to wide classes of algorithms are developed. These certification trails are based on common data-structure operations such as those carried out using balanced binary trees and heaps. Any algorithm using these sets of operations can therefore employ the certification trail method to achieve software fault tolerance. To exemplify the scope of the generalization, constructions of trails for abstract data types such as priority queues and union-find structures are given. These trails are applicable to any data-structure implementation of the abstract data type. It is also shown that these ideas lead naturally to monitors for data-structure operations.
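The certification trail idea for priority queues can be illustrated with a minimal sketch (not the authors' construction): a primary execution sorts with a heap and emits its sequence of extracted minima as the trail; a much simpler secondary checker then validates that trail against the input.

```python
import heapq
from collections import Counter

def heapsort_with_trail(items):
    """Primary execution: sort with a heap and emit a certification
    trail (the sequence of extracted minima, which is also the output)."""
    heap = list(items)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

def check_trail(items, trail):
    """Secondary check, far simpler code than the primary: the trail
    must be nondecreasing and a permutation (multiset-equal) of the input."""
    if any(a > b for a, b in zip(trail, trail[1:])):
        return False
    return Counter(items) == Counter(trail)

data = [5, 1, 4, 1, 3]
trail = heapsort_with_trail(data)
print(check_trail(data, trail))          # True
print(check_trail(data, [1, 3, 1, 4, 5]))  # False: out of order
```

The fault-tolerance benefit is that the checker exercises independent, simpler code, so a bug or transient fault in the primary's heap manipulation is caught rather than silently propagated.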
NASA Technical Reports Server (NTRS)
Braden, W. B.
1992-01-01
This talk discusses the importance of providing a process operator with concise information about a process fault including a root cause diagnosis of the problem, a suggested best action for correcting the fault, and prioritization of the problem set. A decision tree approach is used to illustrate one type of approach for determining the root cause of a problem. Fault detection in several different types of scenarios is addressed, including pump malfunctions and pipeline leaks. The talk stresses the need for a good data rectification strategy and good process models along with a method for presenting the findings to the process operator in a focused and understandable way. A real time expert system is discussed as an effective tool to help provide operators with this type of information. The use of expert systems in the analysis of actual versus predicted results from neural networks and other types of process models is discussed.
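A decision-tree approach to root-cause determination of the kind the talk describes can be sketched as follows. The symptom names, branch structure, and remedies here are invented for illustration; a real system would derive them from process models and rectified plant data.

```python
# Hypothetical root-cause decision tree for a pump fault, encoded as
# nested (symptom, yes-branch, no-branch) tuples. Leaves are strings
# giving the diagnosis and a suggested best action for the operator.
TREE = ("discharge_pressure_low",
        ("suction_pressure_low",
         "Root cause: supply-side blockage. Action: check upstream valve.",
         ("motor_current_high",
          "Root cause: impeller damage. Action: schedule pump overhaul.",
          "Root cause: internal recirculation. Action: inspect wear rings.")),
        "No pump fault indicated by this tree.")

def diagnose(tree, symptoms):
    """Walk the tree using a dict of boolean symptom readings;
    a missing reading is treated as False (symptom absent)."""
    while isinstance(tree, tuple):
        question, yes_branch, no_branch = tree
        tree = yes_branch if symptoms.get(question, False) else no_branch
    return tree

readings = {"discharge_pressure_low": True,
            "suction_pressure_low": False,
            "motor_current_high": True}
print(diagnose(TREE, readings))  # impeller-damage branch
```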
Modeling Off-Nominal Behavior in SysML
NASA Technical Reports Server (NTRS)
Day, John C.; Donahue, Kenneth; Ingham, Michel; Kadesch, Alex; Kennedy, Andrew K.; Post, Ethan
2012-01-01
Specification and development of fault management functionality in systems is performed in an ad hoc way - more of an art than a science. Improvements to system reliability, availability, safety and resilience will be limited without infusion of additional formality into the practice of fault management. Key to the formalization of fault management is a precise representation of off-nominal behavior. Using the upcoming Soil Moisture Active-Passive (SMAP) mission for source material, we have modeled the off-nominal behavior of the SMAP system during its initial spin-up activity, using the System Modeling Language (SysML). In the course of developing these models, we have developed generic patterns for capturing off-nominal behavior in SysML. We show how these patterns provide useful ways of reasoning about the system (e.g., checking for completeness and effectiveness) and allow the automatic generation of typical artifacts (e.g., success trees and FMECAs) used in system analyses.
Using Fuzzy Clustering for Real-time Space Flight Safety
NASA Technical Reports Server (NTRS)
Lee, Charles; Haskell, Richard E.; Hanna, Darrin; Alena, Richard L.
2004-01-01
To ensure space flight safety, it is necessary to monitor myriad sensor readings on the ground and in flight. Since a space shuttle has many sensors, monitoring data and drawing conclusions from the information they contain in real time is challenging. This information can be critical to the success of the mission and the safety of the crew and therefore must be processed with minimal data-processing time. Data analysis algorithms can be used to synthesize sensor readings and compare data associated with normal operation against data containing fault patterns. Detecting abnormal operation during the early stages of the transition from safe to unsafe operation requires a large amount of historical data that can be categorized into different classes (non-risk, risk). Even though the 40-year shuttle flight program has accumulated volumes of historical data, these data don't comprehensively represent all possible fault patterns, since fault patterns are usually unknown before a fault occurs. This paper presents a method that uses a similarity measure between fuzzy clusters to detect possible faults in real time. A clustering technique based on a fuzzy equivalence relation is used to characterize temporal data. Data collected during an initial time period are separated into clusters, which are characterized by their centroids. Clusters formed during subsequent time periods are either merged with an existing cluster or added to the cluster list. The resulting list of cluster centroids, called a cluster group, characterizes the behavior of a particular set of temporal data. The degree to which new clusters formed in a subsequent time period are similar to the cluster group is characterized by a similarity measure, q. This method is applied to downlink data from Columbia flights. The results show that this technique can detect an unexpected fault that was not present in the training data set.
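The merge-or-add centroid bookkeeping described above can be sketched as follows. This is a simplified stand-in: the paper's fuzzy-equivalence-relation clustering is replaced by a plain distance threshold, and the similarity measure q is approximated as the fraction of new centroids falling near the existing cluster group.

```python
import math

def centroid(window):
    """Mean point of a window of equal-dimension sensor readings."""
    n = len(window)
    return tuple(sum(x[i] for x in window) / n for i in range(len(window[0])))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class ClusterGroup:
    """List of cluster centroids accumulated over past time periods."""
    def __init__(self, merge_radius):
        self.centroids = []
        self.merge_radius = merge_radius

    def similarity(self, new_centroids):
        """q-like score: fraction of new centroids near an existing one
        (a stand-in for the paper's fuzzy-relation similarity measure)."""
        if not new_centroids:
            return 1.0
        near = sum(1 for c in new_centroids
                   if any(dist(c, g) <= self.merge_radius for g in self.centroids))
        return near / len(new_centroids)

    def absorb(self, new_centroids):
        """Merge each new centroid with a nearby one, else add it."""
        for c in new_centroids:
            for i, g in enumerate(self.centroids):
                if dist(c, g) <= self.merge_radius:
                    self.centroids[i] = tuple((a + b) / 2 for a, b in zip(g, c))
                    break
            else:
                self.centroids.append(c)

group = ClusterGroup(merge_radius=1.0)
group.absorb([centroid([(0.0, 0.0), (0.2, 0.1)])])  # nominal behaviour
q = group.similarity([(5.0, 5.0)])                   # far-away new cluster
print(q)  # 0.0 -> dissimilar to nominal: flag as a possible fault
```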
David L. Peterson; Darren R. Anderson
1990-01-01
The wood of lodgepole pines and whitebark pines from a high elevation site in the east central Sierra Nevada of California was analyzed for chemical content to determine whether there were any temporal patterns of chemical distribution in tree rings. Cores were taken from 10 trees of each species and divided into 5-year increments for chemical analysis. Correlation...
Fates of live trees retained in forest cutting units, western Cascade Range, Oregon.
P.E. Busby; P. Adler; T.L. Warren; F.J. Swanson
2006-01-01
To assess the fate of live trees retained in dispersed patterns across cutting units in the western Cascade Range of Oregon, we conducted repeat surveys (1993 and 2001) of sites cut as early as 1983. Our objectives in this study are to (1) survey survival and mortality of trees retained at the time of harvest, (2) describe temporal patterns of windthrow and other...
R. Justin DeRose; Shih-Yu Wang; John D. Shaw
2013-01-01
This study introduces a novel tree-ring dataset, with unparalleled spatial density, for use as a climate proxy. Ancillary Douglas fir and pinyon pine tree-ring data collected by the U.S. Forest Service Forest Inventory and Analysis Program (FIA data) were subjected to a series of tests to determine their feasibility as climate proxies. First, temporal coherence between...
Nath, Cheryl D; Dattaraja, H S; Suresh, H S; Joshi, N V; Sukumar, R
2006-12-01
Tree diameter growth is sensitive to environmental fluctuations and tropical dry forests experience high seasonal and inter-annual environmental variation. Tree growth rates in a large permanent plot at Mudumalai, southern India, were examined for the influences of rainfall and three intrinsic factors (size, species and growth form) during three 4-year intervals over the period 1988-2000. Most trees had lowest growth during the second interval when rainfall was lowest, and skewness and kurtosis of growth distributions were reduced during this interval. Tree diameter generally explained less than 10% of growth variation and had less influence on growth than species identity or time interval. Intraspecific variation was high, yet species identity accounted for up to 16% of growth variation in the community. There were no consistent differences between canopy and understory tree growth rates; however, a few subgroups of species may potentially represent canopy and understory growth guilds. Environmentally-induced temporal variations in growth generally did not reduce the odds of subsequent survival. Growth rates appear to be strongly influenced by species identity and environmental variability in the Mudumalai dry forest. Understanding and predicting vegetation dynamics in the dry tropics thus also requires information on temporal variability in local climate.
A spatio-temporal model for probabilistic seismic hazard zonation of Tehran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2013-08-01
A precondition for all disaster management steps, building damage prediction, and construction code development is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site, considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the former incorporates small and medium events, the latter takes into account only the large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area, and the PGA variations are mostly controlled by the shear wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities, especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
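Aggregating area-source and fault-source contributions at a site reduces, under the usual Poisson assumption, to summing annual exceedance rates and converting to a probability over the design life. A minimal sketch with invented rates (the paper's actual time-dependent treatment is more involved):

```python
import math

# Annual rates of exceeding a fixed PGA level at one site, per source.
# All numbers here are invented for illustration.
area_source_rates = [0.0005, 0.0012]   # small/medium events
fault_source_rates = [0.0008]          # large characteristic events

lam = sum(area_source_rates) + sum(fault_source_rates)  # total annual rate
t = 50.0                                                # exposure, years
p_exceed = 1.0 - math.exp(-lam * t)  # Poisson probability of >=1 exceedance
print(f"P(exceedance in {t:.0f} yr) = {p_exceed:.3f}")
```

Repeating this for a grid of PGA levels and interpolating to the rate whose 50-year probability equals 10%, 5%, or 2% yields the map values quoted in the abstract.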
Magma storage in a strike-slip caldera
Saxby, J.; Gottsmann, J.; Cashman, K.; Gutiérrez, E.
2016-01-01
Silicic calderas form during explosive volcanic eruptions when magma withdrawal triggers collapse along bounding faults. The nature of specific interactions between magmatism and tectonism in caldera-forming systems is, however, unclear. Regional stress patterns may control the location and geometry of magma reservoirs, which in turn may control the spatial and temporal development of faults. Here we provide new insight into strike-slip volcano-tectonic relations by analysing Bouguer gravity data from Ilopango caldera, El Salvador, which has a long history of catastrophic explosive eruptions. The observed low gravity beneath the caldera is aligned along the principal horizontal stress orientations of the El Salvador Fault Zone. Data inversion shows that the causative low-density structure extends to ca. 6 km depth, which we interpret as a shallow plumbing system comprising a fractured hydrothermal reservoir overlying a magmatic reservoir with vol% exsolved vapour. Fault-controlled localization of magma constrains potential vent locations for future eruptions. PMID:27447932
An evaluation of a real-time fault diagnosis expert system for aircraft applications
NASA Technical Reports Server (NTRS)
Schutte, Paul C.; Abbott, Kathy H.; Palmer, Michael T.; Ricks, Wendell R.
1987-01-01
A fault monitoring and diagnosis expert system called Faultfinder was conceived and developed to detect and diagnose in-flight failures in an aircraft. Faultfinder is an automated intelligent aid whose purpose is to assist the flight crew in fault monitoring, fault diagnosis, and recovery planning. The present implementation of this concept performs monitoring and diagnosis for a generic aircraft's propulsion and hydraulic subsystems. This implementation is capable of detecting and diagnosing failures of known and unknown (i.e., unforeseeable) type in a real-time environment. Faultfinder uses both rule-based and model-based reasoning strategies which operate on causal, temporal, and qualitative information. A preliminary evaluation is made of the diagnostic concepts implemented in Faultfinder. The evaluation used actual aircraft accident and incident cases which were simulated to assess the effectiveness of Faultfinder in detecting and diagnosing failures. Results of this evaluation, together with a description of the current Faultfinder implementation, are presented.
A.P. Lamb,; L.M. Liberty,; Blakely, Richard J.; Pratt, Thomas L.; Sherrod, B.L.; Van Wijk, K.
2012-01-01
We present evidence that the Seattle fault zone of Washington State extends to the west edge of the Puget Lowland and is kinematically linked to active faults that border the Olympic Massif, including the Saddle Mountain deformation zone. Newly acquired high-resolution seismic reflection and marine magnetic data suggest that the Seattle fault zone extends west beyond the Seattle Basin to form a >100-km-long active fault zone. We provide evidence for a strain transfer zone, expressed as a broad set of faults and folds connecting the Seattle and Saddle Mountain deformation zones near Hood Canal. This connection provides an explanation for the apparent synchroneity of M7 earthquakes on the two fault systems ~1100 yr ago. We redefine the boundary of the Tacoma Basin to include the previously termed Dewatto basin and show that the Tacoma fault, the southern part of which is a backthrust of the Seattle fault zone, links with a previously unidentified fault along the western margin of the Seattle uplift. We model this north-south fault, termed the Dewatto fault, along the western margin of the Seattle uplift as a low-angle thrust that initiated with exhumation of the Olympic Massif and today accommodates north-directed motion. The Tacoma and Dewatto faults likely control both the southern and western boundaries of the Seattle uplift. The inferred strain transfer zone linking the Seattle fault zone and Saddle Mountain deformation zone defines the northern margin of the Tacoma Basin, and the Saddle Mountain deformation zone forms the northwestern boundary of the Tacoma Basin. Our observations and model suggest that the western portions of the Seattle fault zone and Tacoma fault are complex, require temporal variations in principal strain directions, and cannot be modeled as a simple thrust and/or backthrust system.
NASA Astrophysics Data System (ADS)
Paredes, José Matildo; Aguiar, Mariana; Ansa, Andrés; Giordano, Sergio; Ledesma, Mario; Tejada, Silvia
2018-01-01
We use three-dimensional (3D) seismic reflection data to analyze the structural style, fault kinematics, and growth fault mechanisms of non-colinear normal fault systems in the South Flank of the Golfo San Jorge basin, central Patagonia. Pre-existing structural fabrics in the basement of the South Flank show NW-SE and NE-SW oriented faults. They control the location and geometry of wedge-shaped half grabens from the "main synrift phase" infilled with Middle Jurassic volcanic-volcaniclastic rocks and lacustrine units of Late Jurassic to Early Cretaceous age. The NE-striking, basement-involved normal faults resulted in the rapid establishment of fault length, followed by a gradual increase in displacement and minor reactivation during subsequent extensional phases; NW-striking normal faults are characterized by fault segments that propagated laterally during the "main rifting phase" and were subsequently reactivated during successive extensional phases. The Aptian-Campanian Chubut Group is a continental succession up to 4 km thick associated with the "second rifting stage", characterized by propagation and linkage of W-E to WNW-ESE fault segments that increased their length and displacement over several extensional phases, as recognized by detailed measurement of the current throw distribution of selected seismic horizons along fault surfaces. Strain is distributed in an array of sub-parallel normal faults oriented normal to the extension direction. A Late Cretaceous-Paleogene (pre-late Eocene) extensional event is characterized by high-angle, NNW-SSE to NNE-SSW grabens coeval with intraplate alkali basaltic volcanism, evidencing clockwise rotation of the stress field toward a ∼W-E extension direction.
We demonstrate differences in the growth fault mechanisms of non-colinear fault populations, and highlight the importance of following a systematic approach to the analysis of fault geometry and throw distribution in a fault network, in order to understand temporal-spatial variations in the coeval topography, potential structural traps, and the distribution of oil-bearing sandstone reservoirs.
Relocation of the 2012 Ms 7.0 Lushan Earthquake Aftershock Sequences and Its Implications
NASA Astrophysics Data System (ADS)
Fang, L.; Wu, J.; Sun, Z.; Su, J.; Du, W.
2013-12-01
At 08:02 am on 20 April 2013 (Beijing time), an Ms 7.0 earthquake occurred in Lushan County, Sichuan Province. The Lushan earthquake is another devastating earthquake to have occurred in Sichuan Province after the 12 May 2008 Ms 8.0 Wenchuan earthquake. 193 people were killed, 25 people were missing, and more than ten thousand people were injured. Direct economic losses were estimated at more than 80 billion yuan (RMB). The Lushan earthquake occurred in the southern part of the Longmenshan fault zone; the distance between the epicenters of the Lushan and Wenchuan earthquakes is about 87 km. In an effort to maximize observations of the aftershock sequence and study the seismotectonic model, we deployed 35 temporary seismic stations around the source area. The earthquake was followed by a productive aftershock sequence: by the end of 20 July, more than 10,254 aftershocks had been recorded by the temporary seismic network, with magnitudes ranging from ML -0.5 to ML 5.6. We first located the aftershocks using Hypo2000 (Kevin, 2000) and refined the locations with HYPODD (Waldhauser & Ellsworth, 2000). The 1-D velocity model used in relocation is modified from a deep seismic sounding profile near the Lushan earthquake (Wang et al., 2007). The Vp/Vs ratio is set to 1.83 according to a receiver function h-k study. A total of 8,129 events were relocated. The average location errors in the N-S, E-W, and U-D directions are 0.30, 0.29, and 0.59 km, respectively. The relocation results show that the aftershocks spread approximately 35 km in length and 16 km in width. The dominant focal depths range from 10 to 20 km, with a few earthquakes in the shallow crust. Focal depth sections crossing the source area show that the seismogenic fault dips to the northwest, manifesting itself as a listric thrust fault.
The dip angle of the seismogenic fault is approximately 63° in the shallow crust, about 41° near the source of the mainshock, and about 17° at the bottom of the fault. The focal depths of 28 aftershocks with ML ≥ 4.0 were determined using the Himalaya Seismic Array and the sPn phase. The focal depths obtained from the sPn phase are consistent with those from HYPODD, which also reveals a northwest-dipping fault. Since the earthquake did not cause significant surface rupture, the seismogenic structure of the Lushan earthquake remains controversial. On the basis of the aftershock relocation results, we speculate that the seismogenic fault may be a blind thrust fault on the eastern side of the Shuangshi-Dachuan fault. The relocation results also reveal a southeastward-dipping aftershock belt intersecting the seismogenic fault in a Y-shape. We infer that it is a back thrust fault of the kind that often appears in thrust fault systems, and that the Lushan earthquake triggered seismic activity on this back thrust fault.
Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they operate unattended, so it is highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas path of a gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.
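The idea of using entropy to measure the uniformity of exhaust temperatures can be sketched as follows. This is a plausible reading of the approach, not the paper's exact formulation, and the temperature readings are hypothetical.

```python
import math

def uniformity_entropy(temps):
    """Normalized Shannon entropy of exhaust-gas temperatures:
    close to 1.0 when all combustor exits read alike, and lower when
    one section runs anomalously hot or cold."""
    total = sum(temps)
    probs = [t / total for t in temps]
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(temps))  # normalize to [0, 1]

healthy = [510, 512, 509, 511]  # degrees C, hypothetical readings
faulty = [510, 512, 509, 610]   # one hot section
print(uniformity_entropy(healthy) > uniformity_entropy(faulty))  # True
```

A drop in this uniformity score over time flags a developing gas-path fault; a decision tree trained on such features can then turn the pattern into an explicit diagnostic rule.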
Linking definitions, mechanisms, and modeling of drought-induced tree death.
Anderegg, William R L; Berry, Joseph A; Field, Christopher B
2012-12-01
Tree death from drought and heat stress is a critical and uncertain component in forest ecosystem responses to a changing climate. Recent research has illuminated how tree mortality is a complex cascade of changes involving interconnected plant systems over multiple timescales. Explicit consideration of the definitions, dynamics, and temporal and biological scales of tree mortality research can guide experimental and modeling approaches. In this review, we draw on the medical literature concerning human death to propose a water resource-based approach to tree mortality that considers the tree as a complex organism with a distinct growth strategy. This approach provides insight into mortality mechanisms at the tree and landscape scales and presents promising avenues into modeling tree death from drought and temperature stress. Copyright © 2012 Elsevier Ltd. All rights reserved.
Plio-Quaternary stress states in NE Iran: Kopeh Dagh and Allah Dagh-Binalud mountain ranges
NASA Astrophysics Data System (ADS)
Shabanian, Esmaeil; Bellier, Olivier; Abbassi, Mohammad R.; Siame, Lionel; Farbod, Yassaman
2010-01-01
NE Iran, including the Kopeh Dagh and Allah Dagh-Binalud deformation domains, comprises the northeastern boundary of the Arabia-Eurasia collision zone. This study focuses on the evolution of the Plio-Quaternary tectonic regimes of northeast Iran. We present evidence for drastic temporal changes in the stress state by inversion of both geologically and seismically determined fault slip vectors. The inversions of fault kinematics data reveal distinct temporal changes in states of stress during the Plio-Quaternary (since ∼5 Ma). The paleostress state is characterized by a regional transpressional tectonic regime with a mean N140 ± 10°E trending horizontal maximum stress axis (σ1). The youngest (modern) state of stress shows two distinct strike-slip and compressional tectonic regimes with a regional mean of N030 ± 15°E trending horizontal σ1. The change from the paleostress to modern stress states has occurred through an intermediate stress field characterized by a mean regional N trending σ1. The inversion analysis of earthquake focal mechanisms reveals a homogeneous, transpressional tectonic regime with a regional N023 ± 5°E trending σ1. The modern stress state, deduced from the youngest fault kinematics data, is in close agreement with the present-day stress state given by the inversions of earthquake focal mechanisms. According to our data and the deduced results, in northeast Iran, the Arabia-Eurasia convergence is taken up by strike-slip faulting along NE trending left-lateral and NNW trending right-lateral faults, as well as reverse to oblique-slip reverse faulting along NW trending faults. Such a structural assemblage is involved in a mechanically compatible and homogeneous modern stress field. This implies that no strain and/or stress partitioning or systematic block rotations have occurred in the Kopeh Dagh and Allah Dagh-Binalud deformation domains.
The Plio-Quaternary stress changes documented in this paper call into question the extrapolation of the present-day seismic and GPS-derived deformation rates over geological time intervals encompassing tens of millions of years.
NASA Astrophysics Data System (ADS)
Fu, Ching-Chou; Yang, Tsanyao Frank; Chen, Cheng-Hong; Lee, Lou-Chuang; Wu, Yih-Min; Liu, Tsung-Kwei; Walia, Vivek; Kumar, Arvind; Lai, Tzu-Hua
2017-11-01
In this paper, we study (1) the spatial anomalies and (2) the temporal anomalies of soil gas in northern Taiwan. The spatial anomalies of soil gas are related to tectonic faults, while the temporal anomalies of soil gas are associated with pre-earthquake activities. Detailed soil gas sampling was systematically performed, and the analysis of the collected gas species shows that high helium and nitrogen concentrations appear in samples from specific sites, which coincide with the structural setting of the area studied. This analysis indicates the possibility of using these soil gases to determine fault zones in the studied area. Based on the soil gas data, a station (Tapingti) for automatic soil gas monitoring was constructed on an appropriate site at the fault zone. Some anomalous high radon concentrations at certain times can be identified from the dataset, which was generated by the continuous monitoring of soil gas for over a year. Notably, many of these anomalies were observed several hours to a few days before the earthquakes (ML > 3) that occurred in northern Taiwan. By combining the information of epicenters and fault plane solutions of these earthquakes, we find that the shallow earthquakes (<15 km) were mainly strike-slip and normal-type earthquakes, and concentrated within a distance of 30 km to the monitoring site (Group A). The deep earthquakes (>20 km) were mainly thrust-type earthquakes and distributed in greater distances (>45 km) east of the monitoring site (Group B). Such focal mechanisms of earthquakes suggest an extensional and compressional structural domain in the continental crust for Group A and Group B earthquakes, respectively. It is suggested that the pre-earthquake activities associated with the seismicity of Group B may be transmitted along the major decollement in the region below the Tapingti station, leading to the observed soil gas enhancements.
NASA Astrophysics Data System (ADS)
Gülerce, Zeynep; Buğra Soyman, Kadir; Güner, Barış; Kaymakci, Nuretdin
2017-12-01
This contribution provides an updated planar seismic source characterization (SSC) model to be used in the probabilistic seismic hazard assessment (PSHA) for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ) that are critical for the PSHA of Istanbul: segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, central Marmara, and Ganos/Saros segments. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested against the observed seismicity associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b value, maximum magnitude, slip rate, weights of the rupture scenarios) is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect of earthquakes that are not associated with the defined rupture systems on the hazard, a background zone is introduced and the seismicity rates in the background zone are calculated using a smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.
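Handling b-value uncertainty through logic-tree weights, as described above, can be sketched with a truncated Gutenberg-Richter recurrence model. All parameter values here (a, b branches, weights, maximum magnitude) are invented for illustration, not the paper's calibrated values.

```python
# Annual rate of events with magnitude >= m under a truncated
# Gutenberg-Richter model: N(m) = 10^(a - b*m) - 10^(a - b*m_max).
def gr_rate(m, a, b, m_max):
    if m >= m_max:
        return 0.0
    return 10 ** (a - b * m) - 10 ** (a - b * m_max)

# Logic-tree treatment of b-value uncertainty: weighted branches.
branches = [(0.9, 0.3), (1.0, 0.4), (1.1, 0.3)]  # (b, weight), hypothetical
a, m_max, m = 4.0, 7.5, 7.0
rate = sum(w * gr_rate(m, a, b, m_max) for b, w in branches)
print(f"{rate:.5f} events/yr with M >= {m}")
```

The same weighted-sum pattern extends to the other logic-tree branches the abstract lists (maximum magnitude, slip rate, rupture-scenario weights), each contributing a weighted alternative recurrence model.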
NASA Astrophysics Data System (ADS)
Biel, C.; Molina, A.; Aranda, X.; Llorens, P.; Savé, R.
2012-04-01
Tree plantations for wood production have been proposed to mitigate CO2-related climate change. Although these agroforestry systems can help maintain agriculture in areas between rainfed crops and secondary forests, water scarcity in the Mediterranean climate could restrict their growth, and their presence will affect the water balance. Tree plantation management (species, plant density, irrigation, etc.) can hence be used to modify the water balance, improving water availability and buffering the water cycle. Soil water content and meteorological data are widely used in agroforestry systems as indicators of vegetation water use, and consequently to guide water management. However, available information on ecohydrological processes in this kind of ecosystem is scarce. The present work studies how the temporal and spatial variation of soil water content is affected by transpiration and interception loss fluxes in a Mediterranean rainfed cherry tree (Prunus avium) plantation located in Caldes de Montbui (northeast Spain). From May to December 2011, rainfall partitioning, canopy transpiration, soil water content and meteorological parameters were continuously recorded. Rainfall partitioning was measured in 6 trees, with 6 automatic rain recorders for throughfall and 1 for stemflow per tree. Transpiration was monitored in 12 nearby trees by means of heat-pulse sap flow sensors. Soil water content was also measured at three depths under selected trees and at two depths between rows, away from tree cover influence. This work presents the relationships between rainfall partitioning, transpiration and the evolution of soil water content under the tree canopy. The effect of tree cover on soil water content dynamics is also analyzed.
NASA Astrophysics Data System (ADS)
Dygert, Nick; Liang, Yan
2015-06-01
Mantle peridotites from ophiolites are commonly interpreted as having mid-ocean ridge (MOR) or supra-subduction zone (SSZ) affinity. Recently, an REE-in-two-pyroxene thermometer was developed (Liang et al., 2013) that has higher closure temperatures (designated as TREE) than major element based two-pyroxene thermometers for mafic and ultramafic rocks that experienced cooling. The REE-in-two-pyroxene thermometer has the potential to extract meaningful cooling rates from ophiolitic peridotites and thus shed new light on the thermal history of the different tectonic regimes. We calculated TREE for available literature data from abyssal peridotites, subcontinental (SC) peridotites, and ophiolites around the world (Alps, Coast Range, Corsica, New Caledonia, Oman, Othris, Puerto Rico, Russia, and Turkey), and augmented the data with new measurements for peridotites from the Trinity and Josephine ophiolites and the Mariana trench. TREE are compared to major element based thermometers, including the two-pyroxene thermometer of Brey and Köhler (1990) (TBKN). Samples with SC affinity have TREE and TBKN in good agreement. Samples with MOR and SSZ affinity have near-solidus TREE but TBKN hundreds of degrees lower. Closure temperatures for REE and Fe-Mg in pyroxenes were calculated to compare cooling rates among abyssal peridotites, MOR ophiolites, and SSZ ophiolites. Abyssal peridotites appear to cool more rapidly than peridotites from most ophiolites. On average, SSZ ophiolites have lower closure temperatures than abyssal peridotites and many ophiolites with MOR affinity. We propose that these lower temperatures can be attributed to the residence time in the cooling oceanic lithosphere prior to obduction. MOR ophiolites define a continuum spanning cooling rates from SSZ ophiolites to abyssal peridotites. 
Consistently high closure temperatures for abyssal peridotites and the Oman and Corsica ophiolites suggest that hydrothermal circulation and/or rapid cooling events (e.g., normal faulting, unroofing) control the late thermal histories of peridotites from transform faults and slow and fast spreading centers with or without a crustal section.
Charles B. Halpern; Joseph A. Antos; Janine M. Rice; Ryan D. Haugo; Nicole L. Lang
2010-01-01
We combined spatial point pattern analysis, population age structures, and a time-series of stem maps to quantify spatial and temporal patterns of conifer invasion over a 200-yr period in three plots totaling 4 ha. In combination, spatial and temporal patterns of establishment suggest an invasion process shaped by biotic interactions, with facilitation promoting...
NASA Astrophysics Data System (ADS)
Pousse Beltran, Léa.; Pathier, Erwan; Jouanne, François; Vassallo, Riccardo; Reinoza, Carlos; Audemard, Franck; Doin, Marie Pierre; Volat, Matthieu
2016-11-01
In eastern Venezuela, the Caribbean-South American plate boundary follows the El Pilar fault system. Previous studies based on three GPS campaigns (2003-2005-2013) demonstrated that the El Pilar fault accommodates the whole relative displacement between the two tectonic plates (20 mm/yr) and proposed that 50-60% of the slip is aseismic. In order to quantify the possible variations of the aseismic creep in time and space, we conducted an interferometric synthetic aperture radar (InSAR) time series analysis, using the New Small BAseline Subset (NSBAS) method, on 18 images from the Advanced Land Observing Satellite (ALOS-1) spanning the 2007-2011 period. During this 3.5-year period, InSAR observations show that aseismic slip decreases eastward along the fault: the creep rate of the western segment reaches 25.3 ± 9.4 mm/yr on average, compared to 13.4 ± 6.9 mm/yr on average for the eastern segment. This is interpreted, through slip distribution models, as being related to coupled and uncoupled areas between the surface and 20 km depth. InSAR observations also show significant temporal creep rate variations (accelerations) during the considered time span along the western segment. The transient behavior of the creep is not consistent with typical postseismic afterslip following the 1997 Ms 6.8 earthquake. The creep is thus interpreted as persistent aseismic slip during an interseismic period, which has a pulse- or transient-like behavior.
Castagneri, Daniele; Battipaglia, Giovanna; von Arx, Georg; Pacheco, Arturo; Carrer, Marco
2018-04-24
Understanding how climate affects xylem formation is critical for predicting the impact of future conditions on tree growth and functioning in the Mediterranean region, which is expected to face warmer and drier conditions. However, mechanisms of growth response to climate at different temporal scales are still largely unknown, being complicated by separation between spring and autumn xylogenesis (bimodal temporal pattern) in most species such as Mediterranean pines. We investigated wood anatomical characteristics and carbon stable isotope composition in Mediterranean Pinus pinea L. along tree-ring series at intra-ring resolution to assess xylem formation processes and responses to intra-annual climate variability. Xylem anatomy was strongly related to environmental conditions occurring a few months before and during the growing season, but was not affected by summer drought. In particular, the lumen diameter of the first earlywood tracheids was related to winter precipitation, whereas the size of tracheids produced later was influenced by mid-spring precipitation. Diameter of latewood tracheids was associated with precipitation in mid-autumn. In contrast, tree-ring carbon isotope composition was mostly related to climate of the previous seasons. Earlywood was likely formed using both recently and formerly assimilated carbon, while latewood relied mostly on carbon accumulated many months prior to its formation. Our integrated approach provided new evidence on the short-term and carry-over effects of climate on the bimodal temporal xylem formation in P. pinea. Investigations on different variables and time scales are necessary to disentangle the complex climate influence on tree growth processes under Mediterranean conditions.
NASA Astrophysics Data System (ADS)
Hatem, A. E.; Dolan, J. F.; Langridge, R.; Zinke, R. W.; McGuire, C. P.; Rhodes, E. J.; Van Dissen, R. J.
2015-12-01
The Marlborough fault system, which links the Alpine fault with the Hikurangi subduction zone within the complex Australian-Pacific plate boundary zone, partitions strain between the Wairau, Awatere, Clarence and Hope faults. Previous best estimates of dextral strike-slip along the Hope fault are ≤ ~23 ± 4 mm/yr. Those rates, however, are poorly constrained and could be improved using better age determinations in conjunction with measurements of fault offsets using high-resolution imagery. In this study, we use airborne lidar- and field-based mapping together with the subsurface geometry of offset channels at the Hossack site 12 km ESE of Hanmer Springs to more precisely determine stream offsets that were previously identified by McMorran (1991). Specifically, we measured fault offsets of ~10 m, ~75 m, and ~195 m. Together with 65 radiocarbon ages on charcoal, peat, and wood and 25 pending post-IR50-IRSL225 luminescence ages from the channel deposits, these offsets yield three different fault slip rates for the early Holocene, the late Holocene, and the past ca. 500-1,000 years. Using the large number of age determinations, we document in detail the timing of initiation and abandonment of each channel, enhancing the geomorphic interpretation at the Hossack site as channels deform over many earthquake cycles. Our preliminary incremental slip rate results from the Hossack site may indicate temporally variable strain release along the Hope fault. This study is part of a broader effort aimed at determining incremental slip rates and paleo-earthquake ages and displacements from all four main Marlborough faults. Collectively, these data will allow us to determine how the four main Marlborough faults have worked together during the Holocene-late Pleistocene to accommodate plate-boundary deformation in time and space.
NASA Astrophysics Data System (ADS)
La Femina, P.; Weber, J. C.; Geirsson, H.; Latchman, J. L.; Robertson, R. E. A.; Higgins, M.; Miller, K.; Churches, C.; Shaw, K.
2017-12-01
We studied active faults in Trinidad and Tobago in the Caribbean-South American (CA-SA) transform plate boundary zone using episodic GPS (eGPS) data from 19 sites and continuous GPS (cGPS) data from 8 sites, and modeled these data using a series of simple screw dislocation models. Our best-fit model for interseismic (i.e., between major earthquakes) fault slip requires: 12-15 mm/yr of right-lateral movement and very shallow locking (0.2 ± 0.2 km; essentially creep) across the Central Range Fault (CRF); 3.4 +0.3/-0.2 mm/yr across the Soldado Fault in south Trinidad; and 3.5 +0.3/-0.2 mm/yr of dextral shear on fault(s) between Trinidad and Tobago. The upper-crustal faults in Trinidad show very little seismicity (1954-present, from the local network) and do not appear to have generated significant historic earthquakes. However, paleoseismic studies indicate that the CRF ruptured between 2710 and 500 yr B.P. and thus was recently capable of storing elastic strain. Together, these data suggest spatial and/or temporal fault segmentation on the CRF. The CRF marks a physical boundary separating rocks associated with thermogenically generated petroleum and over-pressured fluids in south and central Trinidad from rocks containing only biogenic gas to the north, and a long string of active mud volcanoes aligns with the trace of the Soldado Fault along Trinidad's south coast. Fluid (oil and gas) overpressure, as an alternative or in addition to weak mineral phases in the fault zone, may thus explain the creep on the CRF and the lack of seismicity that we observe.
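The "simple screw dislocation models" mentioned above relate the surface velocity profile across a strike-slip fault to the deep slip rate and the locking depth (the classic Savage-Burford form). A minimal sketch, with parameter values that echo but are not taken exactly from the study:

```python
import numpy as np

def screw_dislocation_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel surface velocity (mm/yr, relative to the fault) at
    distance x (km) from a vertical strike-slip fault that slips freely
    below the locking depth: v(x) = (V / pi) * arctan(x / D)."""
    return (slip_rate_mm_yr / np.pi) * np.arctan2(x_km, locking_depth_km)

# Illustrative values: ~13 mm/yr with ~0.2 km locking depth (essentially
# creep) gives a nearly step-like velocity profile across the fault trace.
v_near = screw_dislocation_velocity(1.0, 13.0, 0.2)   # just one side of the fault
v_far = screw_dislocation_velocity(50.0, 13.0, 0.2)   # approaches +V/2 far field
```

With a very shallow locking depth the profile jumps almost discontinuously at the trace, which is how creeping faults like the CRF appear in campaign GPS velocity fields.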
Pulsed strain release on the Altyn Tagh fault, northwest China
Gold, Ryan D.; Cowgill, Eric; Arrowsmith, J. Ramón; Friedrich, Anke M.
2017-01-01
Earthquake recurrence models assume that major surface-rupturing earthquakes are followed by periods of reduced rupture probability as stress rebuilds. Although purely periodic, time- or slip-predictable rupture models are known to be oversimplifications, a paucity of long records of fault slip clouds understanding of fault behavior and earthquake recurrence over multiple ruptures. Here, we report a 16 kyr history of fault slip—including a pulse of accelerated slip from 6.4 to 6.0 ka—determined using a Monte Carlo analysis of well-dated offset landforms along the central Altyn Tagh strike-slip fault (ATF) in northwest China. This pulse punctuates a median rate of 8.1+1.2/−0.9 mm/a and likely resulted from either a flurry of temporally clustered ∼Mw 7.5 ground-rupturing earthquakes or a single large >Mw 8.2 earthquake. The clustered earthquake scenario implies rapid re-rupture of a fault reach >195 km long and indicates decoupled rates of elastic strain energy accumulation versus dissipation, conceptualized as a crustal stress battery. If the pulse reflects a single event, slip-magnitude scaling implies that it ruptured much of the ATF with slip similar to, or exceeding, the largest documented historical ruptures. Both scenarios indicate fault rupture behavior that deviates from classic time- or slip-predictable models.
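A Monte Carlo slip-rate estimate of the kind described above can be sketched by sampling the offset and the age within their uncertainties and taking percentiles of the resulting rate distribution. The Gaussian error model and the numbers below are illustrative assumptions, not the paper's actual offset measurements or dating PDFs:

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_slip_rate(offset_m, offset_sigma, age_ka, age_sigma, n=100_000):
    """Propagate offset and age uncertainties into a slip-rate distribution
    by Monte Carlo sampling (Gaussian errors assumed for illustration).
    Returns the median rate and a 95% interval, in mm/yr (= m/ka)."""
    offsets = rng.normal(offset_m, offset_sigma, n)
    ages = rng.normal(age_ka, age_sigma, n)
    ok = (offsets > 0) & (ages > 0)          # discard unphysical draws
    rates = offsets[ok] / ages[ok]
    lo, med, hi = np.percentile(rates, [2.5, 50.0, 97.5])
    return med, (lo, hi)

# Illustrative numbers of the same order as the abstract's offsets and rate:
med, ci = monte_carlo_slip_rate(75.0, 5.0, 9.0, 0.5)
```

Because the rate is a ratio of two uncertain quantities, its distribution is asymmetric, which is why slip rates are naturally quoted with unequal plus/minus bounds such as 8.1 +1.2/−0.9 mm/a.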
NASA Astrophysics Data System (ADS)
Tvedt, Anette B. M.; Rotevatn, Atle; Jackson, Christopher A.-L.
2016-10-01
Normal faulting and the deep subsurface flow of salt are key processes controlling the structural development of many salt-bearing sedimentary basins. However, our detailed understanding of the spatial and temporal relationship between normal faulting and salt movement is poor due to a lack of natural examples constraining their geometric and kinematic relationship in three dimensions. To improve our understanding of these processes, we here use 3D seismic reflection and borehole data from the Egersund Basin, offshore Norway, to determine the structure and growth of a normal fault array formed during the birth, growth and decay of an array of salt structures. We show that the fault array and salt structures developed in response to: (i) Late Triassic-to-Middle Jurassic extension, which involved thick-skinned, sub-salt and thin-skinned, supra-salt faulting, with the latter driving reactive diapirism; (ii) Early Cretaceous extensional collapse of the salt walls; and (iii) Jurassic-to-Neogene active and passive diapirism, which was at least partly coeval with, and occurred along-strike from, areas of reactive diapirism and wall collapse. Our study supports physical model predictions, showcasing a three-dimensional example of how protracted, multiphase salt diapirism can influence the structure and growth of normal fault arrays.
Yigit, O.; Nelson, E.P.; Hitzman, M.W.; Hofstra, A.H.
2003-01-01
The Gold Bar district in the southern Roberts Mountains, 48 km northwest of Eureka, Nevada, contains one main deposit (Gold Bar), five satellite deposits, and other resources. Approximately 0.5 Moz of gold have been recovered from a resource of 1,639,000 oz of gold in Carlin-type gold deposits in lower plate, miogeoclinal carbonate rocks below the Roberts Mountains thrust. Host rocks are unit 2 of the Upper Member of the Devonian Denay Formation and the Bartine Member of the McColley Canyon Formation. Spatial and temporal relations between structures and gold mineralization indicate that both pre-Tertiary and Tertiary structures were important controls on gold mineralization. Gold mineralization occurs primarily along high-angle Tertiary normal faults, some of which are reactivated reverse faults of Paleozoic or Mesozoic age. Most deposits are localized at the intersection of northwest- and northeast-striking faults. Alteration includes decalcification, and to a lesser extent, silicification along high-angle faults. Jasperoid (pervasive silicification), which formed along most faults and in some strata-bound zones, accounts for a small portion of the ore in every deposit. In the Gold Canyon deposit, a high-grade jasperoid pipe formed along a Tertiary normal fault which was localized along a zone of overturned fault-propagation folds and thrust faults of Paleozoic or Mesozoic age.
NASA Astrophysics Data System (ADS)
Singh, Jaskaran; Darpe, A. K.; Singh, S. P.
2018-02-01
Local damage in rolling element bearings usually generates periodic impulses in vibration signals. The severity and repetition frequency of these impulses, and the resonance zone they excite, are the key indicators for diagnosing bearing faults. In this paper, a methodology based on the overcomplete rational dilation wavelet transform (ORDWT) is proposed, as it enjoys good shift invariance. ORDWT offers flexibility in partitioning the frequency spectrum to generate a number of subbands (filters) with diverse bandwidths. The selection of the optimal filter that best overlaps with the bearing fault-excited resonance zone is based on the maximization of a proposed impulse detection measure, "temporal energy operated autocorrelated kurtosis". The proposed indicator is robust and consistent in evaluating the impulsiveness of fault signals in the presence of interfering vibrations such as heavy background noise or sporadic shocks unrelated to the fault or normal operation. The structure of the proposed indicator makes it sensitive to fault severity. For enhanced fault classification, an autocorrelation of the energy time series of the signal filtered through the optimal subband is proposed. The proposed methodology is validated on simulated and experimental data. The study shows that its performance is more robust and consistent than that of the original fast kurtogram and the wavelet kurtogram.
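The subband-selection idea can be illustrated with a much simplified stand-in for the proposed indicator: filter the signal into candidate bands, autocorrelate each band's energy series (which emphasizes *periodic* impulses and suppresses one-off shocks), and score each band by kurtosis. The FFT bandpass and the scoring details below are assumptions for illustration, not the paper's ORDWT implementation:

```python
import numpy as np

def kurtosis(x):
    """Raw kurtosis (4th moment over squared variance) of a series."""
    x = x - x.mean()
    return np.mean(x ** 4) / (np.mean(x ** 2) ** 2 + 1e-12)

def best_band(signal, fs, bands):
    """Pick the band whose filtered signal is most periodically impulsive:
    score each band by the kurtosis of the autocorrelation of its energy
    series. Short lags are excluded so that white noise and isolated
    shocks, whose autocorrelation dies off immediately, score low."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    scores = []
    for lo, hi in bands:
        filtered = np.fft.irfft(spectrum * ((freqs >= lo) & (freqs < hi)),
                                n=len(signal))
        energy = filtered ** 2
        e = energy - energy.mean()
        ac = np.fft.irfft(np.abs(np.fft.rfft(e)) ** 2)  # circular autocorr
        scores.append(kurtosis(ac[32: len(ac) // 2]))   # skip short lags
    return bands[int(np.argmax(scores))]
```

Feeding in a signal whose fault impulses repeatedly excite a resonance near 3 kHz, a band containing 3 kHz should win over noise-only bands, mimicking how the paper's indicator steers the ORDWT toward the fault-excited resonance zone.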
Köcher, Paul; Horna, Viviana; Leuschner, Christoph
2013-08-01
The functional role of internal water storage is increasingly well understood in tropical trees and conifers, while temperate broad-leaved trees have only rarely been studied. We examined the magnitude and dynamics of the use of stem water reserves for transpiration in five coexisting temperate broad-leaved trees with largely different morphology and physiology (genera Fagus, Fraxinus, Tilia, Carpinus and Acer). We expected that differences in water storage patterns would mostly reflect species differences in wood anatomy (ring- vs. diffuse-porous) and wood density. Sap flux density was recorded synchronously at five positions along the root-to-branch flow path of mature trees (roots, three stem positions and branches) with high temporal resolution (2 min) and related to stem radius changes recorded with electronic point dendrometers. The daily amount of stored stem water withdrawn for transpiration was estimated by comparing the integrated flow at the stem base and stem top. The temporal coincidence of flows at different positions and apparent time lags were examined by cross-correlation analysis. Our results confirm that internal water stores play an important role in the four diffuse-porous species, with an estimated 5-12 kg day⁻¹ withdrawn on average in 25-28 m tall trees, representing 10-22% of daily transpiration; in contrast, only 0.5-2.0 kg day⁻¹ was withdrawn in ring-porous Fraxinus. Wood density had a large influence on storage; sapwood area (diffuse- vs. ring-porous) may be another influential factor, but its effect was not significant. Across the five species, the length of the time lag between flow at the stem top and stem base was positively related to the size of stem storage. The stem stores were mostly exhausted when the soil matric potential dropped below -0.1 MPa and daily mean vapor pressure deficit exceeded 3-5 hPa.
We conclude that stem storage is an important factor improving the water balance of diffuse-porous temperate broad-leaved trees in moist periods, while it may be of low relevance in dry periods and in ring-porous species.
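Two of the quantities described above, the top-to-base time lag and the daily withdrawal from stem storage, can be sketched directly from paired sap-flux series. The function names, units, rectangle-rule integration and Gaussian error-free inputs are illustrative assumptions, not the study's processing chain:

```python
import numpy as np

def lag_minutes(top_flux, base_flux, step_min=2.0):
    """Lag (minutes) by which basal sap flow trails flow at the stem top,
    from the peak of their cross-correlation. A positive lag means the
    base lags the top, i.e. stored stem water supplies the crown before
    basal (root) flow catches up."""
    a = top_flux - np.mean(top_flux)
    b = base_flux - np.mean(base_flux)
    cc = np.correlate(a, b, mode="full")
    return ((len(b) - 1) - int(np.argmax(cc))) * step_min

def stored_water_used(top_flux, base_flux, step_min=2.0):
    """Withdrawal from stem storage (kg) as the integrated excess of
    stem-top flow over basal flow; fluxes in kg/h, one sample every
    step_min minutes (simple rectangle-rule integration)."""
    return float(np.sum(top_flux - base_flux) * step_min / 60.0)
```

With 2-min sampling as in the study, a lag of a few sample steps between top and base already corresponds to several minutes of storage-buffered flow, which is the signal the cross-correlation analysis extracts.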
Wagner, David L.; Saucedo, George J.; Clahan, Kevin B.; Fleck, Robert J.; Langenheim, Victoria E.; McLaughlin, Robert J.; Sarna-Wojcicki, Andrei M.; Allen, James R.; Deino, Alan L.
2011-01-01
Recent geologic mapping in the northern San Francisco Bay region (California, USA) supported by radiometric dating and tephrochronologic correlations, provides insights into the framework geology, stratigraphy, tectonic evolution, and geologic history of this part of the San Andreas transform plate boundary. There are 25 new and existing radiometric dates that define three temporally distinct volcanic packages along the north margin of San Pablo Bay, i.e., the Burdell Mountain Volcanics (11.1 Ma), the Tolay Volcanics (ca. 10–8 Ma), and the Sonoma Volcanics (ca. 8–2.5 Ma). The Burdell Mountain and the Tolay Volcanics are allochthonous, having been displaced from the Quien Sabe Volcanics and the Berkeley Hills Volcanics, respectively. Two samples from a core of the Tolay Volcanics taken from the Murphy #1 well in the Petaluma oilfield yielded ages of 8.99 ± 0.06 and 9.13 ± 0.06 Ma, demonstrating that volcanic rocks exposed along Tolay Creek near Sears Point previously thought to be a separate unit, the Donnell Ranch volcanics, are part of the Tolay Volcanics. Other new dates reported herein show that volcanic rocks in the Meacham Hill area and extending southwest to the Burdell Mountain fault are also part of the Tolay Volcanics. In the Sonoma volcanic field, strongly bimodal volcanic sequences are intercalated with sediments. In the Mayacmas Mountains a belt of eruptive centers youngs to the north. The youngest of these volcanic centers at Sugarloaf Ridge, which lithologically, chemically, and temporally matches the Napa Valley eruptive center, was apparently displaced 30 km to the northwest by movement along the Carneros and West Napa faults. The older parts of the Sonoma Volcanics have been displaced at least 28 km along the Rodgers Creek fault since ca. 7 Ma. The Petaluma Formation also youngs to the north along the Rodgers Creek–Hayward fault and the Bennett Valley fault. 
The Petaluma basin formed as part of the Contra Costa basin in the Late Miocene and was displaced to its present location along the Rodgers Creek–Hayward and older faults. The Tolay fault, previously thought to be a major dextral fault, is part of a fold-and-thrust belt that does not exhibit lateral displacement.
NASA Astrophysics Data System (ADS)
Wei, M.
2016-12-01
Progress towards a quantitative and predictive understanding of earthquake behavior can be achieved through improved understanding of earthquake cycles. However, it is hindered by the long repeat times (100s to 1000s of years) of the largest earthquakes on most faults. At fast-spreading oceanic transform faults (OTFs), typical repeat times range from 5 to 20 years, making them a unique tectonic environment for studying the earthquake cycle. One important observation on OTFs is the quasi-periodicity and the spatial-temporal clustering of large earthquakes: the same fault segment ruptures repeatedly at a near-constant interval, and nearby segments rupture within a short time period. This has been observed on the Gofar and Discovery faults in the East Pacific Rise. Between 1992 and 2014, five clusters of M6 earthquakes occurred on the Gofar and Discovery fault system with recurrence intervals of 4-6 years. Each cluster consisted of a westward migration of seismicity from the Discovery to the Gofar segment within a 2-year period, providing strong evidence for spatial-temporal clustering of large OTF earthquakes. I simulated earthquake cycles on an oceanic transform fault in the framework of rate-and-state friction, motivated by the observations at the Gofar and Discovery faults. I focus on a model with two seismic segments, each 20 km long and 5 km wide, separated by an aseismic segment 10 km wide. This geometry is based on aftershock locations of the 2008 M6.0 earthquake on Gofar. The repeating large earthquakes on both segments are reproduced with magnitudes similar to those observed. I set the state parameter differently for the two seismic segments so that initially they are not synchronized. Results also show that synchronization of the two seismic patches can be achieved after several earthquake cycles when the effective normal stress or the a-b parameter is smaller than in the surrounding aseismic areas, both of which reduce the resistance to seismic rupture in the VS segment.
These parameter settings likely reflect the alteration of stress and frictional properties by the enhanced hydrothermal activity suggested by McGuire et al. (2012). The seismic coupling ratio of the entire model is about 0.3, not far from the global average of 0.15.
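The rate-and-state ingredients above can be illustrated with the steady-state friction law, in which the sign of a - b separates seismic (velocity-weakening) patches from creeping (velocity-strengthening) ones. The parameter values below are generic laboratory-range assumptions, not those of the Gofar/Discovery model:

```python
import numpy as np

def steady_state_friction(v, mu0=0.6, a=0.008, b=0.012, v0=1e-6):
    """Steady-state rate-and-state friction coefficient:
        mu_ss(V) = mu0 + (a - b) * ln(V / V0).
    With a - b < 0 (velocity weakening, as here) friction drops as slip
    accelerates, permitting stick-slip; a - b > 0 (velocity strengthening)
    favors stable aseismic creep, as in the model's aseismic segment."""
    return mu0 + (a - b) * np.log(v / v0)

# A velocity-weakening patch weakens as sliding speeds up:
mu_slow = steady_state_friction(1e-6)   # at the reference speed: mu = mu0
mu_fast = steady_state_friction(1e-3)   # 1000x faster -> lower friction
```

In a full simulator these laws are coupled to a state-evolution equation and elastic stress transfer between segments; lowering effective normal stress or a - b on a patch lowers its resistance to rupture, which is what promotes the synchronization described above.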
Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zöller, G.
2012-04-01
As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes in earthquake fault systems is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of the generated synthetic earthquake catalogs with respect to simplification (e.g., simple two-fault cases) as well as complication (e.g., hidden faults, geometric complexity, heterogeneities of constitutive parameters).
Strike-slip faulting in the Inner California Borderlands, offshore Southern California.
NASA Astrophysics Data System (ADS)
Bormann, J. M.; Kent, G. M.; Driscoll, N. W.; Harding, A. J.; Sahakian, V. J.; Holmes, J. J.; Klotsko, S.; Kell, A. M.; Wesnousky, S. G.
2015-12-01
In the Inner California Borderlands (ICB), offshore of Southern California, modern dextral strike-slip faulting overprints a prominent system of basins and ridges formed during plate boundary reorganization 30-15 Ma. Geodetic data indicate faults in the ICB accommodate 6-8 mm/yr of Pacific-North American plate boundary deformation; however, the hazard posed by the ICB faults is poorly understood due to unknown fault geometry and loosely constrained slip rates. We present observations from high-resolution and reprocessed legacy 2D multichannel seismic (MCS) reflection datasets and multibeam bathymetry to constrain the modern fault architecture and tectonic evolution of the ICB. We use a sequence stratigraphy approach to identify discrete episodes of deformation in the MCS data and present the results of our mapping in a regional fault model that distinguishes active faults from relict structures. Significant differences exist between our model of modern ICB deformation and existing models. From east to west, the major active faults are the Newport-Inglewood/Rose Canyon, Palos Verdes, San Diego Trough, and San Clemente fault zones. Localized deformation on the continental slope along the San Mateo, San Onofre, and Carlsbad trends results from geometrical complexities in the dextral fault system. Undeformed early to mid-Pleistocene age sediments onlap and overlie deformation associated with the northern Coronado Bank fault (CBF) and the breakaway zone of the purported Oceanside Blind Thrust. Therefore, we interpret the northern CBF to be inactive, and slip rate estimates based on linkage with the Holocene active Palos Verdes fault are unwarranted. In the western ICB, the San Diego Trough fault (SDTF) and San Clemente fault have robust linear geomorphic expression, which suggests that these faults may accommodate a significant portion of modern ICB slip in a westward temporal migration of slip. 
The SDTF offsets young sediments between the US/Mexico border and the eastern margin of Avalon Knoll, where the fault is spatially coincident and potentially linked with the San Pedro Basin fault (SPBF). Kinematic linkage between the SDTF and the SPBF increases the potential rupture length for earthquakes on either fault and may allow events nucleating on the SDTF to propagate much closer to the LA Basin.
Pattern recognition approach to the subsequent event of damaging earthquakes in Italy
NASA Astrophysics Data System (ADS)
Gentili, S.; Di Giovambattista, R.
2017-05-01
In this study, we investigate the occurrence of large aftershocks following the most significant earthquakes that occurred in Italy after 1980. In accordance with previous studies (Vorobieva and Panza, 1993; Vorobieva, 1999), we group clusters associated with mainshocks into two categories: "type A" if, given a mainshock of magnitude M, the strongest subsequent earthquake in the cluster has magnitude ≥ M - 1, or "type B" otherwise. In this paper, we apply a pattern recognition approach using statistical features to forecast the class of the analysed clusters. The classification of the two categories is based on features of the time, space, and magnitude distribution of the aftershocks. Specifically, we analyse the temporal evolution of the radiated energy at different elapsed times after the mainshock, the spatio-temporal evolution of the aftershocks occurring within a few days, and the probability of a strong earthquake. An attempt is made to classify the studied region into smaller seismic zones with a prevalence of type A or type B clusters. We demonstrate that the two types of clusters have distinct preferred geographic locations inside the Italian territory, likely reflecting key properties of the deforming regions, such as different crustal domains and faulting styles. We use decision trees as classifiers of single features to characterize how the features depend on cluster type. The performance of the classification is tested by the Leave-One-Out method. The analysis is performed on different time spans after the mainshock to simulate how classification accuracy depends on the information available, which increases with time after the mainshock.
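The classification-plus-Leave-One-Out scheme can be illustrated with a toy single-feature threshold classifier (a one-node "decision tree"). The feature values below are made up; the actual study uses full decision trees over several aftershock features:

```python
import numpy as np

def fit_stump(x, y):
    """Best threshold t and polarity for a single-feature two-class
    classifier: predict 1 (e.g. 'type A') if polarity * x > polarity * t."""
    best = (0.0, 1, -1.0)
    for t in x:
        for pol in (1, -1):
            acc = np.mean((pol * x > pol * t).astype(int) == y)
            if acc > best[2]:
                best = (t, pol, acc)
    return best[0], best[1]

def loo_accuracy(x, y):
    """Leave-One-Out evaluation: refit on all clusters but one, test on
    the held-out cluster, and average, as in the paper's performance test."""
    hits = 0
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        t, pol = fit_stump(x[mask], y[mask])
        hits += int((pol * x[i] > pol * t) == y[i])
    return hits / len(x)
```

Leave-One-Out is well suited here because the number of significant Italian clusters is small: every cluster serves once as the test case while the rest train the classifier.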
Safety Study of TCAS II for Logic Version 6.04
1992-07-01
used in the fault tree of the 198 tdy. The fu given for Logic and Altimetry effects represent the site averages, and were based upon TCAS RAs always being... comparison with the results of Monte Carlo simulations. Five million iterations were carried out for each of the four cases (eqs. 3, 4, 6, and 7).
Code of Federal Regulations, 2010 CFR
2010-10-01
..., national, or international standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure... cited by the reviewer; (4) Identification of any documentation or information sought by the reviewer...) Identification of the hardware and software verification and validation procedures for the PTC system's safety...
The Two-By-Two Array: An Aid in Conceptualization and Problem Solving
ERIC Educational Resources Information Center
Eberhart, James
2004-01-01
The fields of mathematics, science, and engineering are replete with diagrams of many varieties. They range in nature from the Venn diagrams of symbolic logic to the Periodic Chart of the Elements; and from the fault trees of risk assessment to the flow charts used to describe laboratory procedures, industrial processes, and computer programs. All…
Perone, A; Cocozza, C; Cherubini, P; Bachmann, O; Guillong, M; Lasserre, B; Marchetti, M; Tognetti, R
2018-02-01
Monitoring atmospheric pollution in industrial areas near urban centers is essential to infer past levels of contamination and to evaluate the impact for environmental health and safety. The main aim of this study was to understand if the chemical composition of tree-ring wood can be used for monitoring spatial-temporal variability of pollutants in Terni, Central Italy, one of the most polluted towns in Italy. Tree cores were taken from 32 downy oaks (Quercus pubescens) located at different distances from several pollutant sources, including a large steel factory. Trace element (Cr, Co, Cu, Pb, Hg, Mo, Ni, Tl, W, U, V, and Zn) content in tree-ring wood was determined using high-resolution laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). We hypothesized that the presence of contaminants detected in tree-rings reflected industrial activities over time. The accumulation of contaminants in tree-rings was affected by anthropogenic activities in the period 1958-2009, though signals varied in intensity with the distance of trees from the industrial plant. A stronger limitation of tree growth was observed in the proximity of the industrial plant in comparison with other pollutant sources. Levels of Cr, Ni, Mo, V, U and W increased in tree-ring profiles of trees close to the steel factory, especially during the 1980s and 1990s, corresponding to a peak of pollution in this period, as recorded by air quality monitoring stations. Uranium contents in our tree-rings were difficult to explain, while the higher contents of Cu, Hg, Pb, and Tl could be related to the contaminants released from an incinerator located close to the industrial plant. The accumulation of contaminants in tree-rings reflected the historical variation of environmental pollution in the considered urban context. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tree-, stand- and site-specific controls on landscape-scale patterns of transpiration
NASA Astrophysics Data System (ADS)
Hassler, Sibylle; Weiler, Markus; Blume, Theresa
2017-04-01
Transpiration is a key process in the hydrological cycle and a sound understanding and quantification of transpiration and its spatial variability is essential for management decisions as well as for improving the parameterisation of hydrological and soil-vegetation-atmosphere transfer models. For individual trees, transpiration is commonly estimated by measuring sap flow. Besides evaporative demand and water availability, tree-specific characteristics such as species, size or social status control sap flow amounts of individual trees. Within forest stands, properties such as species composition, basal area or stand density additionally affect sap flow, for example via competition mechanisms. Finally, sap flow patterns might also be influenced by landscape-scale characteristics such as geology, slope position or aspect because they affect water and energy availability; however, little is known about the dynamic interplay of these controls. We studied the relative importance of various tree-, stand- and site-specific characteristics with multiple linear regression models to explain the variability of sap velocity measurements in 61 beech and oak trees, located at 24 sites spread over a 290 km² catchment in Luxembourg. For each of 132 consecutive days of the growing season of 2014 we modelled the daily sap velocities of these 61 trees and determined the importance of the different predictors. Results indicate that a combination of tree-, stand- and site-specific factors controls sap velocity patterns in the landscape, namely tree species, tree diameter, stand density, geology and aspect. Compared to these predictors, spatial variability of atmospheric demand and soil moisture explains only a small fraction of the variability in the daily datasets. However, the temporal dynamics of the explanatory power of the tree-specific characteristics, especially species, are correlated with the temporal dynamics of potential evaporation.
Thus, transpiration estimates at the landscape scale would benefit from not only considering hydro-meteorological drivers, but also including tree, stand and site characteristics in order to improve the spatial representation of transpiration for hydrological and soil-vegetation-atmosphere transfer models.
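The daily multiple-linear-regression step described above can be illustrated with a toy ordinary-least-squares fit. The species dummy, diameters and sap velocities below are fabricated for the example, not taken from the catchment dataset, and the solver is a plain normal-equations implementation.

```python
# Toy version (fabricated numbers, not the study's data) of a daily
# regression: sap velocity modelled from a species dummy and tree
# diameter by ordinary least squares via the normal equations.

def ols(X, y):
    """Solve (X^T X) beta = X^T y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(k)]
         for a in range(k)]
    b = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(k)]
    for c in range(k):            # forward elimination on pivot column c
        piv = A[c][c]
        for r in range(c + 1, k):
            f = A[r][c] / piv
            A[r] = [ar - f * ac for ar, ac in zip(A[r], A[c])]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):  # back substitution
        beta[c] = (b[c] - sum(A[c][j] * beta[j]
                              for j in range(c + 1, k))) / A[c][c]
    return beta

# Columns: intercept, species (0 = beech, 1 = oak), diameter (cm)
X = [[1, 0, 30], [1, 0, 40], [1, 1, 30], [1, 1, 40], [1, 0, 50], [1, 1, 50]]
y = [10.0, 12.0, 14.0, 16.0, 14.0, 18.0]
beta = ols(X, y)
print([round(v, 2) for v in beta])  # [intercept, oak effect, per-cm effect]
```

In the study, a fit like this is repeated for each of the 132 days, and the relative explanatory power of each predictor is tracked through time.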
Detailed seismicity analysis revealing the dynamics of the southern Dead Sea area
NASA Astrophysics Data System (ADS)
Braeuer, B.; Asch, G.; Hofstetter, R.; Haberland, Ch.; Jaser, D.; El-Kelani, R.; Weber, M.
2014-10-01
Within the framework of the international DESIRE (DEad Sea Integrated REsearch) project, a dense temporary local seismological network was operated in the southern Dead Sea area. During 18 recording months, 648 events were detected. Based on an already published tomography study, clustering, focal mechanisms, statistics and the distribution of the microseismicity in relation to the velocity models from the tomography are analysed. The determined b-value of 0.74 implies a relatively high probability of large earthquakes compared with the moderate microseismic activity. The distribution of the seismicity indicates an asymmetric basin with a vertical strike-slip fault forming the eastern boundary of the basin, and an inclined western boundary made up of strike-slip and normal faults. Furthermore, significant differences between the areas north and south of the Bokek fault were observed. South of the Bokek fault, the western boundary is inactive while the entire seismicity occurs on the eastern boundary and below the basin-fill sediments. The largest events occurred here, and their focal mechanisms represent the northwards transform motion of the Arabian plate along the Dead Sea Transform. The vertical extension of the spatial and temporal cluster from February 2007 is interpreted as being related to the locking of the region around the Bokek fault. North of the Bokek fault, similar seismic activity occurs on both boundaries, most notably within the basin-fill sediments, displaying mainly small events with strike-slip mechanisms and normal faulting in an EW direction. Therefore, we suggest that the Bokek fault forms the border between the single transform fault and the pull-apart basin with two active border faults.
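A b-value like the one reported above is commonly obtained with Aki's maximum-likelihood estimator, b = log10(e) / (mean(M) - Mc). The four magnitudes below are synthetic and merely chosen so that the estimate lands near the paper's 0.74; they are not the DESIRE catalogue.

```python
# Hedged sketch: Aki's (1965) maximum-likelihood b-value estimator,
# applied to synthetic magnitudes (not the DESIRE catalogue).
import math

def b_value(mags, mc):
    """b = log10(e) / (mean(M) - Mc), for events with M >= Mc."""
    m = [x for x in mags if x >= mc]
    return math.log10(math.e) / (sum(m) / len(m) - mc)

# Synthetic magnitudes chosen so mean(M) - Mc ~ 0.587, i.e. b ~ 0.74:
mags = [2.1, 2.3, 2.6, 3.348]
print(round(b_value(mags, 2.0), 2))
```

A real application would also need a completeness magnitude Mc estimated from the catalogue, and a magnitude-binning correction for binned data.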
Hybrid information privacy system: integration of chaotic neural network and RSA coding
NASA Astrophysics Data System (ADS)
Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.
2005-03-01
Electronic mail is used worldwide, but most messages are easily compromised by hackers. In this paper, we propose a free, fast and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the RSA public-key algorithm with a specific chaos neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaos neural network series, the so-called spatial-temporal keys. The chaotic typing and initial seed value of the chaos neural network series, encrypted by the RSA algorithm, reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media and wrapped with convolutional error-correction codes for wireless 3rd-generation cellular phones. The message media can be an arbitrary image. Pattern noise has to be considered during transmission, as it could affect or change the spatial-temporal keys. Since any change or modification of the chaotic typing or initial seed value of the chaos neural network series is not acceptable, the RSA codec system must be robust and fault-tolerant over the wireless channel. The robust and fault-tolerant properties of chaos neural networks (CNN) were proved by a field theory of Associative Memory by Szu in 1997. The 1-D chaos-generating nodes from the logistic map, having arbitrary negative slope a = p/q and generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulate the robustness and fault-tolerance properties of CNN under additive noise and pattern noise. We also implement a private version of RSA coding and the chaos encryption process on messages.
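A minimal sketch of the shared-seed idea, assuming a plain logistic-map keystream with XOR combination: this stands in for, and greatly simplifies, the paper's CNN/RSA scheme. It only shows that a receiver holding the same seed reproduces the same key series; such a bare logistic-map cipher is illustrative, not cryptographically secure.

```python
# Illustrative sketch only: a logistic-map keystream showing how a shared
# "initial seed value" lets the receiver reproduce the spatial-temporal
# key series. NOT the paper's CNN/RSA system, and not secure.

def keystream(seed, r, n):
    """Iterate x -> r*x*(1-x) and quantize each state to a byte."""
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_cipher(data, seed, r=3.99):
    ks = keystream(seed, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"hello"
ct = xor_cipher(msg, seed=0.4213)
pt = xor_cipher(ct, seed=0.4213)   # same seed reproduces the keystream
print(pt)
```

In the paper's scheme, the seed and "chaotic typing" are themselves RSA-encrypted and hidden in a watermark; here the seed is simply a shared constant.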
NASA Astrophysics Data System (ADS)
Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen
2014-05-01
We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity trying to explain the small scales, our method uses a bottom-up approach, by an initial sampling of the small scales and then reducing the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical cluster (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Modeling (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. The kernels that become ill-defined (fewer than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging of any pair will lessen the likelihood (calculated over the pdf of the kernels) but in turn will reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, where the matrix element d(i,j) gives the distance between points i and j, we can construct an MDL gain/loss matrix where m(i,j) gives the information gain/loss resulting from the merging of kernels i and j.
Based on this matrix, merge operations resulting in MDL gain are performed in descending order until no gainful merging is possible anymore. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system, and we hope to use the acquired insights for earthquake forecasting.
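The EM and MDL steps can be miniaturized in one dimension as follows. The data, the two-kernel restriction, and the BIC stand-in for the description length are all simplifying assumptions for illustration; this is not the authors' implementation.

```python
# Minimal 1-D sketch (not the authors' code): fit a two-kernel Gaussian
# mixture by EM, then ask, in the spirit of the MDL criterion, whether
# merging the two kernels into one shortens the description length
# (approximated here by BIC = -2 logL + k ln n).
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_2gmm(xs, iters=50):
    """EM for a 2-component 1-D GMM; returns the final log-likelihood."""
    mu, var, w = [min(xs), max(xs)], [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each kernel for each point
        r = []
        for x in xs:
            p = [w[k] * gauss(x, mu[k], var[k]) for k in (0, 1)]
            s = sum(p)
            r.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, variances
        for k in (0, 1):
            nk = sum(ri[k] for ri in r)
            w[k] = nk / len(xs)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, xs)) / nk
            var[k] = max(sum(ri[k] * (x - mu[k]) ** 2
                             for ri, x in zip(r, xs)) / nk, 1e-6)
    return sum(math.log(sum(w[k] * gauss(x, mu[k], var[k]) for k in (0, 1)))
               for x in xs)

def loglik_1g(xs):
    """Log-likelihood of the merged single-Gaussian description."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return sum(math.log(gauss(x, mu, var)) for x in xs)

# Two well-separated synthetic "protoclusters" of hypocentre coordinates:
xs = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
n = len(xs)
bic2 = -2 * em_2gmm(xs) + 5 * math.log(n)    # 2 kernels: 5 free parameters
bic1 = -2 * loglik_1g(xs) + 2 * math.log(n)  # 1 kernel: 2 free parameters
print(bic2 < bic1)  # keeping two kernels is the shorter description
```

The full method repeats this comparison for every kernel pair, filling the m(i,j) matrix and greedily performing the gainful merges.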
Ohashi, Shinta; Kuroda, Katsushi; Takano, Tsutomu; Suzuki, Youki; Fujiwara, Takeshi; Abe, Hisashi; Kagawa, Akira; Sugiyama, Masaki; Kubojima, Yoshitaka; Zhang, Chunhua; Yamamoto, Koichi
2017-11-01
To understand the changes in radiocesium (137Cs) concentrations in stem wood after the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident, we investigated 137Cs concentrations in the bark, sapwood, heartwood, and whole wood of four major tree species at multiple sites with different levels of radiocesium deposition from the FDNPP accident since 2011 (since 2012 at some sites): Japanese cedar at four sites, hinoki cypress and Japanese konara oak at two sites, and Japanese red pine at one site. Our previous report on 137Cs concentrations in bark and whole wood samples collected from 2011 to 2015 suggested that temporal variations differed among sites even within the same species. In the present study, we provide data on bark and whole wood samples from 2016 and separately measured 137Cs concentrations in sapwood and heartwood samples from 2011 to 2016; we further discuss temporal trends in 137Cs concentrations in each part of the tree stems, particularly 137Cs distributions between sapwood and heartwood, in relation to their species and site dependencies. Temporal trends in bark and whole wood samples collected from 2011 to 2016 were consistent with those reported for samples collected from 2011 to 2015. Temporal variations in 137Cs concentrations in bark showed either a decreasing trend or no clear trend, implying that 137Cs deposition in bark is inhomogeneous and that decontamination is relatively slow in some cases. Temporal trends in 137Cs concentrations in sapwood, heartwood, and whole wood differed among species and also among sites within the same species. Relatively common trends within the same species, which were increasing, were observed in cedar heartwood, and in oak sapwood and whole wood.
On the other hand, the ratio of 137Cs concentration in heartwood to that in sapwood (fresh weight basis) commonly increased to more than 2 in cedar, although distinct temporal trends were not found in the other species, for which the ratio was around 1 in cypress and pine and below 0.5 in oak, suggesting that 137Cs transfer from sapwood to heartwood is species dependent. Consequently, the species dependency of 137Cs transfer within the tree appears readily, while that from the environment to the trees can be masked by various factors. Thus, prediction of 137Cs concentrations in stem wood should be carried out carefully, as it still requires investigations at multiple sites with a larger sample size and an understanding of the species-specific 137Cs transfer mechanism. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hoeft, J. S.; Frankel, K. L.
2010-12-01
The eastern California shear zone (ECSZ) and Walker Lane represent an evolving segment of the Pacific-North America plate boundary. Understanding temporal variations in strain accumulation and release along plate boundary structures is critical to assessing how deformation is accommodated throughout the lithosphere. Late Pleistocene displacement along the Lone Mountain fault suggests the Silver Peak-Lone Mountain (SPLM) extensional complex is an important structure in accommodating and transferring strain within the ECSZ and Walker Lane. Using geologic and geomorphic mapping, differential global positioning system surveys, and terrestrial cosmogenic nuclide (TCN) geochronology, we determined rates of extension across the Lone Mountain fault in western Nevada. The Lone Mountain fault displaces the northwestern Lone Mountain and Weepah Hills piedmonts and is the northeastern component of the SPLM extensional complex, a series of down-to-the-northwest normal faults. We mapped seven distinct alluvial fan deposits and dated three of the surfaces using 10Be TCN geochronology, yielding ages of 16.5 ± 1.2 ka, 92 ± 9 ka, and 137 ± 25 ka for the Q3b, Q2c, and Q2b deposits, respectively. The ages were combined with scarp profile measurements across the displaced fans to obtain minimum rates of extension; the Q2b and Q2c surfaces yield an extension rate between 0.1 ± 0.1 and 0.2 ± 0.1 mm/yr and the Q3b surface yields a rate of 0.2 ± 0.1 to 0.4 ± 0.1 mm/yr, depending on the dip of the fault. Active extension on the Lone Mountain fault suggests that it helps partition strain off of the major strike-slip faults in the northern ECSZ and transfers deformation to the east around the Mina Deflection and northward into the Walker Lane. Combining our results with estimates from other faults accommodating dextral shear in the northern ECSZ reveals an apparent discrepancy between short- and long-term rates of strain accumulation and release.
If strain rates have remained constant since the late Pleistocene, this could reflect transient strain accumulation, similar to the Mojave segment of the ECSZ. However, our data also suggest a potential increase in strain rates between ~92 ka and ~17 ka, and possibly to present day, which may also help explain the mismatch between long- and short-term rates of deformation in the region.
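The conversion from scarp offset, fault dip, and surface age to a horizontal extension rate can be sketched as follows. The offset and dip values are illustrative assumptions, not the paper's scarp-profile measurements; only the ~16.5 ka age echoes the Q3b surface.

```python
# Back-of-envelope sketch of how a fault-scarp offset, fault dip, and a
# 10Be surface age combine into a horizontal extension rate.
# Offset and dip below are hypothetical, not the paper's measurements.
import math

def extension_rate(vertical_offset_m, dip_deg, age_ka):
    """Heave = throw / tan(dip); rate in mm/yr (m per ka == mm per yr)."""
    heave = vertical_offset_m / math.tan(math.radians(dip_deg))
    return heave / age_ka

# ~16.5 ka surface with 3 m of vertical separation on a 60-degree fault:
print(round(extension_rate(3.0, 60.0, 16.5), 2))
```

This also shows why the reported rates depend on the assumed fault dip: a shallower dip yields a larger heave for the same vertical separation.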
NASA Astrophysics Data System (ADS)
Shi, J. T.; Han, X. T.; Xie, J. F.; Yao, L.; Huang, L. T.; Li, L.
2013-03-01
A Pulsed High Magnetic Field Facility (PHMFF) has been established at the Wuhan National High Magnetic Field Center (WHMFC), and various protection measures are applied in its control system. In order to improve the reliability and robustness of the control system, a safety analysis of the PHMFF was carried out based on the Fault Tree Analysis (FTA) technique. The function and realization of five protection systems are given: the sequence experiment operation system, safety assistant system, emergency stop system, fault detecting and processing system, and accident isolating protection system. Tests and operation indicate that these measures improve the safety of the facility and ensure the safety of personnel.
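As background, top-event probabilities in a fault tree with independent basic events combine as AND = product of the probabilities and OR = 1 minus the product of the complements. The gate structure and the three basic-event probabilities below are hypothetical, not taken from the WHMFC analysis.

```python
# Generic FTA sketch (not the WHMFC model): top-event probability of a
# small fault tree with independent basic events.
from functools import reduce

def p_and(*ps):
    """AND gate: all inputs must fail."""
    return reduce(lambda a, b: a * b, ps, 1.0)

def p_or(*ps):
    """OR gate: at least one input fails."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), ps, 1.0)

# Hypothetical basic-event probabilities per magnet shot:
p_sensor = 1e-3      # fault-detecting system misses a fault
p_interlock = 1e-3   # emergency-stop interlock fails
p_operator = 1e-2    # sequence operation error
# Top event: (detection AND interlock both fail) OR operator error
p_top = p_or(p_and(p_sensor, p_interlock), p_operator)
print(p_top)
```

The OR of the rare AND branch with the more likely operator error shows why single-point failures usually dominate the top-event probability.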
Garrity, Steven R.; Allen, Craig D.; Brumby, Steven P.; Gangodagamage, Chandana; McDowell, Nate G.; Cai, D. Michael
2013-01-01
Widespread tree mortality events have recently been observed in several biomes. To effectively quantify the severity and extent of these events, tools that allow for rapid assessment at the landscape scale are required. Past studies using high spatial resolution satellite imagery have primarily focused on detecting green, red, and gray tree canopies during and shortly after tree damage or mortality has occurred. However, detecting trees in various stages of death is not always possible due to limited availability of archived satellite imagery. Here we assess the capability of high spatial resolution satellite imagery for tree mortality detection in a southwestern U.S. mixed species woodland using archived satellite images acquired prior to mortality and well after dead trees had dropped their leaves. We developed a multistep classification approach that uses: supervised masking of non-tree image elements; bi-temporal (pre- and post-mortality) differencing of normalized difference vegetation index (NDVI) and red:green ratio (RGI); and unsupervised multivariate clustering of pixels into live and dead tree classes using a Gaussian mixture model. Classification accuracies were improved in a final step by tuning the rules of pixel classification using the posterior probabilities of class membership obtained from the Gaussian mixture model. Classifications were produced for two images acquired post-mortality with overall accuracies of 97.9% and 98.5%, respectively. Classified images were combined with land cover data to characterize the spatiotemporal characteristics of tree mortality across areas with differences in tree species composition. We found that 38% of tree crown area was lost during the drought period between 2002 and 2006. The majority of tree mortality during this period was concentrated in piñon-juniper (Pinus edulis-Juniperus monosperma) woodlands. 
An additional 20% of the tree canopy died or was removed between 2006 and 2011, primarily in areas experiencing wildfire and management activity. Our results demonstrate that unsupervised clustering of bi-temporal NDVI and RGI differences can be used to detect tree mortality resulting from numerous causes and in several forest cover types.
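The bi-temporal differencing step can be sketched on synthetic pixel values (not the study's imagery). For brevity, the pixels are grouped on the NDVI difference alone with a simple 2-means loop instead of the paper's Gaussian mixture model.

```python
# Schematic sketch of bi-temporal differencing with synthetic pixels:
# NDVI drops and red:green ratio rises where a canopy dies. Clustering
# here is plain 2-means on the NDVI difference, not the paper's GMM.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

# (nir, red, green) reflectances pre- and post-mortality per pixel:
pre  = [(0.50, 0.08, 0.10), (0.48, 0.09, 0.11), (0.52, 0.07, 0.10)]
post = [(0.49, 0.08, 0.10), (0.20, 0.20, 0.12), (0.18, 0.22, 0.11)]

features = []
for (n0, r0, g0), (n1, r1, g1) in zip(pre, post):
    d_ndvi = ndvi(n1, r1) - ndvi(n0, r0)   # strongly negative if dead
    d_rgi = (r1 / g1) - (r0 / g0)          # positive when leaves brown
    features.append((d_ndvi, d_rgi))

# 2-means on d_ndvi, seeded at the extremes:
c_live = max(f[0] for f in features)
c_dead = min(f[0] for f in features)
for _ in range(10):
    live = [f for f in features if abs(f[0] - c_live) <= abs(f[0] - c_dead)]
    dead = [f for f in features if abs(f[0] - c_live) > abs(f[0] - c_dead)]
    c_live = sum(f[0] for f in live) / len(live)
    c_dead = sum(f[0] for f in dead) / len(dead)
labels = [abs(f[0] - c_live) > abs(f[0] - c_dead) for f in features]
print(labels)  # True marks pixels assigned to the "dead" cluster
```

In the paper the final assignment rule is further tuned using the posterior probabilities of class membership from the mixture model.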
Patterns of mortality in a montane mixed-conifer forest in San Diego County, California.
Freeman, Mary Pyott; Stow, Douglas A; An, Li
2017-10-01
We examine spatial patterns of conifer tree mortality and their changes over time for the montane mixed-conifer forests of San Diego County. These forest areas have recently experienced extensive tree mortality due to multiple factors. A spatial contextual image processing approach was utilized with high spatial resolution digital airborne imagery to map dead trees for the years 1997, 2000, 2002, and 2005 for three study areas: Palomar, Volcan, and Laguna mountains. Plot-based fieldwork was conducted to further assess mortality patterns. Mean mortality remained static from 1997 to 2002 (4, 2.2, and 4.2 trees/ha for Palomar, Volcan, and Laguna) and then increased by 2005 to 10.3, 9.7, and 5.2 trees/ha, respectively. The increase in mortality between 2002 and 2005 represents the temporal pattern of a discrete disturbance event, attributable to the 2002-2003 drought. Dead trees are significantly clustered for all dates, based on spatial cluster analysis, indicating that they form distinct groups, as opposed to spatially random single dead trees. Other tests indicate no directional shift or spread of mortality over time, but rather an increase in density. While general temporal and spatial mortality processes are uniform across all study areas, the plot-based species and quantity distribution of mortality, and diameter distributions of dead vs. living trees, vary by study area. The results of this study improve our understanding of stand- to landscape-level forest structure and dynamics, particularly by examining them from the multiple perspectives of field and remotely sensed data. © 2017 by the Ecological Society of America.
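One standard way to test for the clustering reported above is the Clark-Evans nearest-neighbour index, R = observed mean nearest-neighbour distance divided by the expected value 1/(2*sqrt(density)); R < 1 indicates clustering. The stem map below is fabricated for illustration, not the San Diego data.

```python
# Hedged illustration (synthetic stem map, not the study's data):
# Clark-Evans index R < 1 indicates clustered dead trees, R ~ 1 random.
import math

def clark_evans(points, area):
    n = len(points)
    nn = []
    for i, (xi, yi) in enumerate(points):
        d = min(math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(points) if j != i)
        nn.append(d)
    r_obs = sum(nn) / n
    r_exp = 1.0 / (2.0 * math.sqrt(n / area))
    return r_obs / r_exp

# Two tight clumps of dead trees in a 100 m x 100 m plot:
clumped = [(10, 10), (10.5, 10), (10, 10.5), (80, 80), (80.5, 80), (80, 80.5)]
r = clark_evans(clumped, 100 * 100)
print(r < 1.0)  # clustered
```

A full analysis would add an edge-effect correction and a significance test, which the simple index above omits.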
NASA Astrophysics Data System (ADS)
Van Stan, John T.; Wagner, Sasha; Guillemette, François; Whitetree, Ansley; Lewis, Julius; Silva, Leticia; Stubbins, Aron
2017-11-01
Studies on the fate and transport of dissolved organic matter (DOM) along the rainfall-to-discharge flow pathway typically begin in streams or soils, neglecting the initial enrichment of rainfall with DOM during contact with plant canopies. However, rain water can gather significant amounts of tree-derived DOM (tree-DOM) when it drains from the canopy, as throughfall, and down the stem, as stemflow. We examined the temporal variability of event-scale tree-DOM concentrations, yield, and optical (light absorbance and fluorescence) characteristics from an epiphyte-laden Quercus virginiana-Juniperus virginiana forest on Skidaway Island, Savannah, Georgia (USA). All tree-DOM fluxes were highly enriched in dissolved organic carbon (DOC) compared to rainfall, and epiphytes further increased concentrations. Stemflow DOC concentrations were greater than throughfall across study species, yet larger throughfall water yields produced greater DOC yields versus stemflow. Tree-DOM optical characteristics indicate it is aromatic-rich with fluorescent DOM dominated by humic-like fluorescence, containing 10-20% protein-like (tryptophan-like) fluorescence. Storm size was the only storm condition that strongly correlated with tree-DOM concentration and flux; however, throughfall and stemflow optical characteristics varied little across a wide range of storm conditions (from low magnitude events to intense tropical storms). Annual tree-DOM yields from the study forest (0.8-46 g C m⁻² yr⁻¹) were similar to other yields from discrete down-gradient fluxes (litter leachates, soil leachates, and stream discharge) along the rainfall-to-discharge flow path.
Dynamics of folding: Impact of fault bend folds on earthquake cycles
NASA Astrophysics Data System (ADS)
Sathiakumar, S.; Barbot, S.; Hubbard, J.
2017-12-01
Earthquakes in subduction zones and subaerial convergent margins are some of the largest in the world. So far, forecasts of future earthquakes have primarily relied on assessing past earthquakes to look for seismic gaps and slip deficits. However, the roles of fault geometry and off-fault plasticity are typically overlooked. We use structural geology (fault-bend folding theory) to inform fault modeling in order to better understand how deformation is accommodated on the geological time scale and through the earthquake cycle. Fault bends in megathrusts, like those proposed for the Nepal Himalaya, will induce folding of the upper plate. This introduces changes in the slip rate on different fault segments, and therefore on the loading rate at the plate interface, profoundly affecting the pattern of earthquake cycles. We develop numerical simulations of slip evolution under rate-and-state friction and show that this effect introduces segmentation of the earthquake cycle. In crustal dynamics, it is challenging to describe the dynamics of fault-bend folds, because the deformation is accommodated by small amounts of slip parallel to bedding planes ("flexural slip"), localized on axial surfaces, i.e., folding axes pinned to fault bends. We use dislocation theory to describe the dynamics of folding along these axial surfaces, using analytic solutions that provide displacement and stress kernels to simulate the temporal evolution of folding and assess the effects of folding on earthquake cycles. Studies of the 2015 Gorkha earthquake, Nepal, have shown that fault geometry can affect earthquake segmentation. Here, we show that in addition to the fault geometry, the actual geology of the rocks in the hanging wall of the fault also affects critical parameters, including the loading rate on parts of the fault, based on fault-bend folding theory.
Because loading velocity controls the recurrence time of earthquakes, these two effects together are likely to have a strong impact on the earthquake cycle.
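The rate-and-state friction framework invoked above is conventionally written in the standard Dieterich-Ruina form with the aging law; the paper's specific parameter choices are not reproduced here:

```latex
\mu(V,\theta) = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\,\theta}{D_c},
```

where \mu is the friction coefficient, V the slip velocity, \theta the state variable, V_0 and \mu_0 reference values, D_c the characteristic slip distance, and a, b material parameters whose sign of (a - b) separates velocity-strengthening from velocity-weakening, potentially seismogenic, fault patches.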
NASA Astrophysics Data System (ADS)
Townend, John; Sutherland, Rupert; Toy, Virginia G.; Doan, Mai-Linh; Célérier, Bernard; Massiot, Cécile; Coussens, Jamie; Jeppson, Tamara; Janku-Capova, Lucie; Remaud, Léa.; Upton, Phaedra; Schmitt, Douglas R.; Pezard, Philippe; Williams, Jack; Allen, Michael John; Baratin, Laura-May; Barth, Nicolas; Becroft, Leeza; Boese, Carolin M.; Boulton, Carolyn; Broderick, Neil; Carpenter, Brett; Chamberlain, Calum J.; Cooper, Alan; Coutts, Ashley; Cox, Simon C.; Craw, Lisa; Eccles, Jennifer D.; Faulkner, Dan; Grieve, Jason; Grochowski, Julia; Gulley, Anton; Hartog, Arthur; Henry, Gilles; Howarth, Jamie; Jacobs, Katrina; Kato, Naoki; Keys, Steven; Kirilova, Martina; Kometani, Yusuke; Langridge, Rob; Lin, Weiren; Little, Tim; Lukacs, Adrienn; Mallyon, Deirdre; Mariani, Elisabetta; Mathewson, Loren; Melosh, Ben; Menzies, Catriona; Moore, Jo; Morales, Luis; Mori, Hiroshi; Niemeijer, André; Nishikawa, Osamu; Nitsch, Olivier; Paris, Jehanne; Prior, David J.; Sauer, Katrina; Savage, Martha K.; Schleicher, Anja; Shigematsu, Norio; Taylor-Offord, Sam; Teagle, Damon; Tobin, Harold; Valdez, Robert; Weaver, Konrad; Wiersberg, Thomas; Zimmer, Martin
2017-12-01
Fault rock assemblages reflect interaction between deformation, stress, temperature, fluid, and chemical regimes on distinct spatial and temporal scales at various positions in the crust. Here we interpret measurements made in the hanging-wall of the Alpine Fault during the second stage of the Deep Fault Drilling Project (DFDP-2). We present observational evidence for extensive fracturing and high hanging-wall hydraulic conductivity (~10⁻⁹ to 10⁻⁷ m/s, corresponding to permeability of ~10⁻¹⁶ to 10⁻¹⁴ m²) extending several hundred meters from the fault's principal slip zone. Mud losses, gas chemistry anomalies, and petrophysical data indicate that a subset of fractures intersected by the borehole are capable of transmitting fluid volumes of several cubic meters on time scales of hours. DFDP-2 observations and other data suggest that this hydrogeologically active portion of the fault zone in the hanging-wall is several kilometers wide in the uppermost crust. This finding is consistent with numerical models of earthquake rupture and off-fault damage. We conclude that the mechanically and hydrogeologically active part of the Alpine Fault is a more dynamic and extensive feature than commonly described in models based on exhumed faults. We propose that the hydrogeologically active damage zone of the Alpine Fault and other large active faults in areas of high topographic relief can be subdivided into an inner zone in which damage is controlled principally by earthquake rupture processes and an outer zone in which damage reflects coseismic shaking, strain accumulation and release on interseismic timescales, and inherited fracturing related to exhumation.
Arbellay, Estelle; Jarvis, Ingrid; Chavardès, Raphaël D; Daniels, Lori D; Stoffel, Markus
2018-05-19
Reconstructions of defoliation by larch bud moth (LBM, Zeiraphera diniana Gn.) based on European larch (Larix decidua Mill.) tree rings have unraveled outbreak patterns over exceptional temporal and spatial scales. In this study, we conducted tree-ring analyses on 105 increment cores of European larch from the Valais Alps, Switzerland. The well-documented history of LBM outbreaks in Valais provided a solid baseline for evaluating the LBM defoliation signal in multiple tree-ring parameters. First, we used tree-ring width measurements along with regional records of LBM outbreaks to reconstruct the occurrence of these events at two sites within the Swiss Alps. Second, we measured earlywood width, latewood width and blue intensity, and compared these parameters with tree-ring width to assess the capacity of each proxy to detect LBM defoliation. A total of six LBM outbreaks were reconstructed for the two sites between AD 1850 and 2000. Growth suppression induced by LBM was, on average, highest in latewood width (59%), followed by total ring width (54%), earlywood width (51%) and blue intensity (26%). We show that latewood width and blue intensity can improve the temporal accuracy of LBM outbreak reconstructions, as both proxies systematically detected LBM defoliation in the first year it occurred and differentiated between defoliation and non-defoliation years. This study introduces blue intensity as a promising new proxy of insect defoliation and encourages its use in conjunction with latewood width.
Turktas, Mine; Inal, Behcet; Okay, Sezer; Erkilic, Emine Gulden; Dundar, Ekrem; Hernandez, Pilar; Dorado, Gabriel; Unver, Turgay
2013-01-01
The olive tree (Olea europaea L.) is widely known for its strong tendency for alternate bearing, which severely affects the fruit yield from year to year. Microarray based gene expression analysis using RNA from olive samples (on-off years leaves and ripe-unripe fruits) is particularly useful to understand the molecular mechanisms influencing the periodicity in the olive tree. Thus, we carried out genome wide transcriptome analyses involving different organs and temporal stages of the olive tree using the NimbleGen Array containing 136,628 oligonucleotide probe sets. Cluster analyses of the genes showed that cDNAs originated from different organs could be sorted into separate groups. The nutritional control had a particularly remarkable impact on the alternate bearing of olive, as shown by the differential expression of transcripts under different temporal phases and organs. Additionally, hormonal control and flowering processes also played important roles in this phenomenon. Our analyses provide further insights into the transcript changes between "on year" and "off year" leaves along with the changes from unripe to ripe fruits, which shed light on the molecular mechanisms underlying the olive tree alternate bearing. These findings have important implications for the breeding and agriculture of the olive tree and other crops showing periodicity. To our knowledge, this is the first study reporting the development and use of an olive array to document the gene expression profiling associated with the alternate bearing in the olive tree.
Nelson, Alan R.; Personius, Stephen F.; Sherrod, Brian L.; Buck, Jason; Bradley, Lee-Ann; Henley, Gary; Liberty, Lee M.; Kelsey, Harvey M.; Witter, Robert C.; Koehler, R.D.; Schermer, Elizabeth R.; Nemser, Eliza S.; Cladouhos, Trenton T.
2008-01-01
As part of the effort to assess seismic hazard in the Puget Sound region, we map fault scarps on Airborne Laser Swath Mapping (ALSM, an application of LiDAR) imagery (with 2.5-m elevation contours on 1:4,000-scale maps) and show field and laboratory data from backhoe trenches across the scarps that are being used to develop a latest Pleistocene and Holocene history of large earthquakes on the Tacoma fault. We supplement previous Tacoma fault paleoseismic studies with data from five trenches on the hanging wall of the fault. In a new trench across the Catfish Lake scarp, broad folding of more tightly folded glacial sediment does not predate 4.3 ka because detrital charcoal of this age was found in stream-channel sand in the trench beneath the crest of the scarp. A post-4.3-ka age for scarp folding is consistent with previously identified uplift across the fault during AD 770-1160. In the trench across the younger of the two Stansberry Lake scarps, six maximum 14C ages on detrital charcoal in pre-faulting B and C soil horizons and three minimum ages on a tree root in post-faulting colluvium limit a single oblique-slip (right-lateral) surface faulting event to AD 410-990. Stratigraphy and sedimentary structures in the trench across the older scarp at the same site show eroded glacial sediments, probably cut by a meltwater channel, with no evidence of post-glacial deformation. At the northeast end of the Sunset Beach scarps, charcoal ages in two trenches across graben-forming scarps give a close maximum age of 1.3 ka for graben formation. The ages that best limit the time of faulting and folding in each of the trenches are consistent with the time of the large regional earthquake in southern Puget Sound about AD 900-930.
Artacho, Pamela; Bonomelli, Claudia
2016-05-01
Factors regulating fine-root growth are poorly understood, particularly in fruit tree species. In this context, the effects of N addition on the temporal and spatial distribution of fine-root growth and on the fine-root turnover were assessed in irrigated sweet cherry trees. The influence of other exogenous and endogenous factors was also examined. The rhizotron technique was used to measure the length-based fine-root growth in trees fertilized at two N rates (0 and 60 kg ha(-1)), and the above-ground growth, leaf net assimilation, and air and soil variables were simultaneously monitored. N fertilization exerted a basal effect throughout the season, changing the magnitude, temporal patterns and spatial distribution of fine-root production and mortality. Specifically, N addition enhanced the total fine-root production by increasing rates and extending the production period. On average, N-fertilized trees had a length-based production that was 110-180% higher than in control trees, depending on growing season. Mortality was proportional to production, but turnover rates were inconsistently affected. Root production and mortality was homogeneously distributed in the soil profile of N-fertilized trees while control trees had 70-80% of the total fine-root production and mortality concentrated below 50 cm depth. Root mortality rates were associated with soil temperature and water content. In contrast, root production rates were primarily under endogenous control, specifically through source-sink relationships, which in turn were affected by N supply through changes in leaf photosynthetic level. Therefore, exogenous and endogenous factors interacted to control the fine-root dynamics of irrigated sweet cherry trees. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Hauglin, Marius; Bollandsås, Ole Martin; Gobakken, Terje; Næsset, Erik
2017-12-08
Monitoring of forest resources through national forest inventory programmes is carried out in many countries. The expected climate changes will affect trees and forests and might cause an expansion of trees into presently treeless areas, such as above the current alpine tree line. There is therefore a need to develop methods that enable these areas to be included in monitoring programmes as well. Airborne laser scanning (ALS) is an established tool in operational forest inventories, and could be a viable option for monitoring tasks. In the present study, we used multi-temporal ALS data with a point density of 8-15 points per m², together with field measurements from single trees in the forest-tundra ecotone along a 1500-km-long transect in Norway. The material comprised 262 small trees with an average height of 1.78 m. The field-measured height growth was derived from height measurements at two points in time. The elapsed time between the two measurements was 4 years. Regression models were then used to model the relationship between ALS-derived variables and tree heights as well as the height growth. Strong relationships between ALS-derived variables and tree heights were found, with R² values of 0.93 and 0.97 for the two points in time. The relationship between the ALS data and the field-derived height growth was weaker, with R² values of 0.36-0.42. A cross-validation gave corresponding results, with root mean square errors of 19 and 11% for the ALS height models and 60% for the model relating ALS data to single-tree height growth.
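The two fit statistics quoted in this abstract (R² and relative root mean square error in percent of the observed mean) can be reproduced for any set of predictions; a minimal Python sketch with made-up tree heights, not the study's data:

```python
import math

def r2_and_relative_rmse(y_true, y_pred):
    """Coefficient of determination and RMSE expressed as a percentage
    of the mean observation (the two statistics quoted in the abstract)."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    rmse = math.sqrt(ss_res / n)
    return 1.0 - ss_res / ss_tot, 100.0 * rmse / mean

# Hypothetical tree heights (m) and model predictions
r2, rrmse = r2_and_relative_rmse([1.0, 2.0, 3.0], [1.1, 1.9, 3.0])
```

With these made-up values the fit gives R² = 0.99 and a relative RMSE of about 4.1%, the same style of reporting used above (e.g., "root mean square errors of 19 and 11%").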
Vegetation optical depth measured by microwave radiometry as an indicator of tree mortality risk
NASA Astrophysics Data System (ADS)
Rao, K.; Anderegg, W.; Sala, A.; Martínez-Vilalta, J.; Konings, A. G.
2017-12-01
Increased drought-related tree mortality has been observed across several regions in recent years. Vast spatial extent and high temporal variability make field monitoring of tree mortality cumbersome and expensive. With global coverage and high temporal revisit, satellite remote sensing offers an unprecedented tool to monitor terrestrial ecosystems and identify areas at risk of large drought-driven tree mortality events. To date, studies that use remote sensing data to monitor tree mortality have focused on external climatic thresholds such as temperature and evapotranspiration. However, this approach fails to consider internal water stress in vegetation - which can vary across trees even for similar climatic conditions due to differences in hydraulic behavior, soil type, etc - and may therefore be a poor basis for measuring mortality events. There is a consensus that xylem hydraulic failure often precedes drought-induced mortality, suggesting depleted canopy water content shortly before onset of mortality. Observations of vegetation optical depth (VOD) derived from passive microwave are proportional to canopy water content. In this study, we propose to use variations in VOD as an indicator of potential tree mortality. Since VOD accounts for intrinsic water stress undergone by vegetation, it is expected to be more accurate than external climatic stress indicators. Analysis of tree mortality events in California, USA observed by airborne detection shows a consistent relationship between mortality and the proposed VOD metric. Although this approach is limited by the kilometer-scale resolution of passive microwave radiometry, our results nevertheless demonstrate that microwave-derived estimates of vegetation water content can be used to study drought-driven tree mortality, and may be a valuable tool for mortality predictions if they can be combined with higher-resolution variables.
Monotone Boolean approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
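The "best possible" monotone bounds described in this abstract admit a compact brute-force illustration: the least monotone increasing function above a Boolean f is the OR of f over all subsets of the input, and the greatest monotone function below f is the AND of f over all supersets. The following Python sketch (the function names and the two-variable XOR example are ours, not from the report) computes both over bitmask truth tables:

```python
def monotone_bounds(f, n):
    """Best monotone-increasing bounds of a Boolean function given as a
    truth table f: {bitmask: 0 or 1} on n variables (bit i = variable i).

    upper(x) = OR  of f(y) over subsets  y of x -> least monotone g >= f
    lower(x) = AND of f(y) over supersets y of x -> greatest monotone h <= f
    """
    size = 1 << n
    upper = {x: int(any(f[y] for y in range(size) if y & x == y))
             for x in range(size)}
    lower = {x: int(all(f[y] for y in range(size) if y & x == x))
             for x in range(size)}
    return upper, lower

# A noncoherent example: f(a, b) = a XOR b, which is not monotone
f = {0b00: 0, 0b01: 1, 0b10: 1, 0b11: 0}
up, lo = monotone_bounds(f, 2)
# up is the truth table of (a OR b); lo is the constant-0 function,
# so the monotone pair (a OR b) and 0 sandwich XOR everywhere
```

Note that the upper bound (a OR b) needs no complements, matching the report's observation that monotone increasing functions are complement-free.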
Rockwell, Thomas K.; Lindvall, Scott; Dawson, Tim; Langridge, Rob; Lettis, William; Klinger, Yann
2002-01-01
Surveys of multiple tree lines within groves of poplar trees, planted in straight lines across the fault prior to the earthquake, show surprisingly large lateral variations. In one grove, slip increases by nearly 1.8 m, or 35% of the maximum measured value, over a lateral distance of nearly 100 m. This and other observations along the 1999 ruptures suggest that the lateral variability of slip observed from displaced geomorphic features in many earthquakes of the past may represent a combination of (1) actual differences in slip at the surface and (2) the difficulty in recognizing distributed nonbrittle deformation.
Using minimal spanning trees to compare the reliability of network topologies
NASA Technical Reports Server (NTRS)
Leister, Karen J.; White, Allan L.; Hayhurst, Kelly J.
1990-01-01
Graph theoretic methods are applied to compute the reliability for several types of networks of moderate size. The graph theory methods used are minimal spanning trees for networks with bi-directional links and the related concept of strongly connected directed graphs for networks with uni-directional links. A comparison is conducted of ring networks and braided networks. The case is covered where just the links fail and the case where both links and nodes fail. Two different failure modes for the links are considered. For one failure mode, the link no longer carries messages. For the other failure mode, the link delivers incorrect messages. There is a description and comparison of link-redundancy versus path-redundancy as methods to achieve reliability. All the computations are carried out by means of a fault tree program.
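For networks of the moderate size studied here, link-failure reliability can also be computed by direct enumeration of link states, a useful cross-check on fault tree results. A small Python sketch (our own illustration for the "link no longer carries messages" failure mode, not the paper's fault tree program):

```python
from itertools import product

def connected(nodes, edges):
    """Depth-first search to test whether the surviving edges connect all nodes."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def reliability(nodes, links, p_up):
    """Probability the network stays connected when each bi-directional
    link independently survives with probability p_up (exact enumeration)."""
    total = 0.0
    for state in product([True, False], repeat=len(links)):
        surviving = [l for l, up in zip(links, state) if up]
        prob = 1.0
        for up in state:
            prob *= p_up if up else (1.0 - p_up)
        if connected(nodes, surviving):
            total += prob
    return total

# 4-node ring: tolerates any single link failure, but never two
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
r = reliability([0, 1, 2, 3], ring, 0.9)
```

A 4-node ring stays connected exactly when at most one link fails, so with p_up = 0.9 the reliability is 0.9⁴ + 4(0.9³)(0.1) = 0.9477, which the enumeration reproduces; a braided topology adds redundant links and raises this figure at the cost of more hardware.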
NASA Technical Reports Server (NTRS)
Patterson, Jonathan D.; Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) being presented by Dr. Stephen B. Johnson, described in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper will describe the core framework used to implement the GFT-based systems engineering process using the Systems Modeling Language (SysML). These two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described within the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required in the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals and requirements, functions, and their linkage to design. As a growing standard for systems engineering, it is important to develop methods to implement GFT in SysML. Proposed Method of Solution: Many of the central concepts of the SysML language are needed to implement a GFT for large complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.
The Ring-Barking Experiment: Analysis of Forest Vitality Using Multi-Temporal Hyperspectral Data
NASA Astrophysics Data System (ADS)
Reichmuth, Anne; Bachmann, Martin; Heiden, Uta; Pinnel, Nicole; Holzwarth, Stefanie; Muller, Andreas; Henning, Lea; Einzmann, Kathrin; Immitzer, Markus; Seitz, Rudolf
2016-08-01
Through new operational optical spaceborne sensors (EnMAP and Sentinel-2) the impact analysis of climate change on forest ecosystems will be fostered. This analysis examines the potential of high spectral, spatial and temporal resolution data for detecting forest vegetation parameters, in particular Chlorophyll and Canopy Water content. The study site is a temperate spruce forest in Germany where in 2013 several trees were Ring-barked for a controlled die-off. During this experiment Ring-barked and Control trees were observed. Twelve airborne hyperspectral HySpex VNIR (Visible/Near Infrared) and SWIR (Shortwave Infrared) datasets with 1 m spatial resolution and 416 spectral bands were acquired during the vegetation periods of 2013 and 2014. Additional laboratory spectral measurements of collected needle samples from Ring-barked and Control trees are available for needle-level analysis. Index analysis of the laboratory measurements and image data are presented in this study.
BOREAS RSS-17 Xylem Flux Density Measurements at the SSA-OBS Site
NASA Technical Reports Server (NTRS)
Zimmerman, Reiner; Way, JoBea; McDonald, Kyle; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor); Smith, David E. (Technical Monitor)
2000-01-01
As part of its efforts to determine environmental and phenological states from radar imagery, the Boreal Ecosystem-Atmosphere Study (BOREAS) Remote Sensing Science (RSS)-17 team collected in situ tree xylem flow measurements for one growing season on five Picea mariana (black spruce) trees. The data were collected to obtain information on the temporal and spatial variability in water uptake by trees in the Southern Study Area-Old Black Spruce (SSA-OBS) stand in the BOREAS SSA. Temporally, the data were collected in 30-minute intervals for 120 days from 31 May 1994 until 27 September 1994. The data are stored in tabular ASCII files. The xylem flux data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coyle, David Robert; Aubrey, Doug Patric; Bentz, Jo-Ann
2010-01-01
1 Abundance and feeding injury of the leafhopper Erythroneura lawsoni Robinson was measured in an intensively-managed American sycamore Platanus occidentalis L. plantation. Trees were planted in spring 2000 in a randomized complete block design, and received one of four annual treatments: (i) fertilization (120 kg N/ha/year); (ii) irrigation (3.0 cm/week); (iii) fertilization + irrigation; or (iv) control (no treatment). 2 Foliar nutrient concentrations were significantly influenced by the treatments; only sulphur and manganese levels were not statistically greater in trees receiving fertilization. 3 Over 116 000 E. lawsoni were captured on sticky traps during the study. Leafhopper abundance was highest on nonfertilized trees for the majority of the season, and was positively correlated with foliar nutrient concentrations. Significant temporal variation in E. lawsoni abundance occurred, suggesting five discrete generations in South Carolina. 4 Significant temporal variation occurred in E. lawsoni foliar injury levels, with the highest injury ratings occurring in late June and August. Foliar injury was negatively correlated with foliar nutrient content, and higher levels of injury occurred more frequently on nonfertilized trees. 5 The results obtained in the present study indicated that increased E. lawsoni abundance occurred on trees that did not receive fertilization. Nonfertilized trees experienced greater foliar injury, suggesting that lower foliar nutrient status may have led to increased levels of compensatory feeding.
Cloud-Free Satellite Image Mosaics with Regression Trees and Histogram Matching.
E.H. Helmer; B. Ruefenacht
2005-01-01
Cloud-free optical satellite imagery simplifies remote sensing, but land-cover phenology limits existing solutions to persistent cloudiness to compositing temporally resolute, spatially coarser imagery. Here, a new strategy for developing cloud-free imagery at finer resolution permits simple automatic change detection. The strategy uses regression trees to predict...
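Histogram matching, one of the two techniques named in this title, can be sketched in a few lines of NumPy. This is a generic quantile-mapping implementation assumed for illustration, not code from the paper:

```python
import numpy as np

def histogram_match(source, reference):
    """Remap source pixel values so their empirical CDF matches the
    reference image's CDF (classic histogram matching, used in image
    mosaicking to normalize radiometry between scenes)."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, take the reference value at that quantile
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)

# Tiny hypothetical example: two 2x2 scenes with identical rank structure
src = np.array([[0, 1], [2, 3]])
ref = np.array([[10, 20], [30, 40]])
out = histogram_match(src, ref)
```

Because the two toy scenes have identical quantile structure, the matched output exactly reproduces the reference values while preserving the source's spatial pattern.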
Risk Assessment for the Southern Pine Beetle
Andrew Birt
2011-01-01
The southern pine beetle (SPB) causes significant damage (tree mortality) to pine forests. Although this tree mortality has characteristic temporal and spatial patterns, the precise location and timing of damage is to some extent unpredictable. Consequently, although forest managers are able to identify stands that are predisposed to SPB damage, they are unable to...
Temporal and spatial aspects of root and stem sucrose metabolism in loblolly pine trees
Shi-Jean S. Sung; Paul P. Kormanik; C.C. Black
1996-01-01
We studied root and stem sucrose metabolism in trees excavated from a 9-year-old artificially regenerated loblolly pine (Pinus taeda L.) plantation. Sucrose synthase (SS) activities in stem and taproot vascular cambial tissues followed similar seasonal patterns until they peaked during September. After September, stem SS activity disappeared...
Radiocarbon content in the annual tree rings during last 150 years and time variation of cosmic rays
NASA Technical Reports Server (NTRS)
Kocharov, G. E.; Metskvarishvili, R. Y.; Tsereteli, S. L.
1985-01-01
The results of high-accuracy measurements of radiocarbon abundance in precisely dated tree rings in the interval 1800 to 1950 are discussed. Radiocarbon content caused by solar activity is established. The temporal dependence of cosmic rays is constructed by use of the radiocarbon abundance data.
Nicholas S. Skowronski; Kenneth L. Clark; Michael Gallagher; Richard A. Birdsey; John L. Hom
2014-01-01
We estimated aboveground tree biomass and change in aboveground tree biomass using repeated airborne laser scanner (ALS) acquisitions and temporally coincident ground observations of forest biomass, for a relatively undisturbed period (2004-2007; Δ07-04), a contrasting period of disturbance (2007-2009; Δ09-07...
Do mesoscale faults in a young fold belt indicate regional or local stress?
NASA Astrophysics Data System (ADS)
Kokado, Akihiro; Yamaji, Atsushi; Sato, Katsushi
2017-04-01
The result of paleostress analyses of mesoscale faults is usually thought of as evidence of a regional stress. On the other hand, the recent advancement of trishear modeling has enabled us to predict the deformation field around fault-propagation folds without the difficulty of assuming paleo-mechanical properties of rocks and sediments. We combined the analysis of observed mesoscale faults and the trishear modeling to understand the significance of regional and local stresses for the formation of mesoscale faults. To this end, we conducted 2D trishear inverse modeling with a curved thrust fault to predict the subsurface structure and strain field of an anticline, which has a more or less horizontal axis and shows a map-scale plane strain perpendicular to the axis, in the active fold belt of the Niigata region, central Japan. The anticline is thought to have been formed by fault-propagation folding under WNW-ESE regional compression. Based on the attitudes of strata and the positions of key tephra beds in Lower Pleistocene soft sediments cropping out at the surface, we obtained (1) a fault-propagation fold with the fault tip at a depth of ca. 4 km as the optimal subsurface structure, and (2) the temporal variation of the deformation field during the folding. We assumed that mesoscale faults were activated along the direction of maximum shear strain on the faults to test whether the fault-slip data collected at the surface were consistent with the deformation in some stage(s) of folding. The Wallace-Bott hypothesis was used to estimate the consistency of faults with the regional stress. As a result, the folding and the regional stress explained 27 and 33 of the 45 observed faults, respectively, with 11 faults consistent with both. Both the folding and the regional stress were inconsistent with the remaining 17 faults, which could be explained by transfer faulting and/or the gravitational spreading of the growing anticline.
The lesson we learnt from this work was that we should pay attention not only to regional but also to local stresses to interpret the results of paleostress analysis in the shallow levels of young orogenic belts.
Zou, Zhengting; Zhang, Jianzhi
2017-07-01
Several authors reported lower frequencies of protein sequence convergence between more distantly related evolutionary lineages and attributed this trend to epistasis, which renders the acceptable amino acids at a site more different and convergence less likely in more divergent lineages. A recent primate study, however, suggested that this trend is at least partially and potentially entirely an artifact of gene tree discordance (GTD). Here, we demonstrate in a genome-wide data set from 17 mammals that the temporal trend remains (1) upon the control of the GTD level, (2) in genes whose genealogies are concordant with the species tree, and (3) for convergent changes, which are extremely unlikely to be caused by GTD. Similar results are observed in a comparable data set of 12 fruit flies in some but not all of these tests. We conclude that, at least in some cases, the temporal decline of convergence is genuine, reflecting an impact of epistasis on protein evolution. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Precipitation and Topography as Drivers of Tree Water Use and Productivity at Multiple Scales
NASA Astrophysics Data System (ADS)
Martin, J. T.; Hu, J.; Looker, N. T.; Jencso, K. G.
2014-12-01
Water is commonly the primary limiting factor for tree growth in semi-arid regions of the Western U.S. and tree productivity can vary drastically across landscapes as a function of water availability. The role of topography as a first order control on soil and ground water has been well studied; however, the strategies trees use to cope with water limitation in different landscape positions and across time remain unclear. As growing seasons progress, the availability of water changes temporally, as water inputs transition from snowmelt to rainfall, and spatially, as divergent positions dry more than convergent ones. We seek to understand how the interaction of these processes dictate where trees access water and which strategies most successfully avert water limitation of growth. We take advantage of clear differences in the isotopic signatures of snow and summer rain to track water utilized by Douglas fir, Ponderosa pine, Subalpine fir, Engelmann spruce, and Western larch in both convergent and divergent landscape positions and across time. We couple these data with evidence of growth limitation inferred from reductions in lateral growth rates observed by continuous dendrometer measurements to link tree water use and productivity. Xylem waters reflect both the precipitation type and soil profile distribution of water used by trees for growth and dendrometer measurements reflect the effects of water limitation through changes in the lateral growth curve as soil moistures decline. Isotope signatures from rain, snow and stream water fell predictably along the local meteoric water line with values from xylem samples falling between those of rain and snow. Trees on southern aspects exhibit more growth limitation in divergent than convergent positions while this effect appears muted or non-existent on northern aspects. Trees in convergent hollow positions rely more on snow water while trees on slopes utilize more rain water. 
Surprisingly, trees at lower elevation rely more on snow water than trees at higher elevation, suggesting that trees in drier, low elevation sites are accessing deeper, older water from snowmelt throughout the growing season. Our research suggests previously under-recognized topographic and hydrologic modulation of tree growth at surprisingly small spatial and temporal scales.
Statistical Methods in Ai: Rare Event Learning Using Associative Rules and Higher-Order Statistics
NASA Astrophysics Data System (ADS)
Iyer, V.; Shetty, S.; Iyengar, S. S.
2015-07-01
Rare event learning has not been actively researched recently due to the unavailability of algorithms which deal with big samples. The research addresses spatio-temporal streams from multi-resolution sensors to find actionable items from the perspective of real-time algorithms. This computing framework is independent of the number of input samples, application domain, and labelled or label-less streams. A sampling overlap algorithm such as Brooks-Iyengar is used for dealing with noisy sensor streams. We extend the existing noise pre-processing algorithms using Data-Cleaning trees. Pre-processing using an ensemble of trees with bagging and multi-target regression showed robustness to random noise and missing data. As spatio-temporal streams are highly statistically correlated, we prove that a temporal window based sampling from sensor data streams converges after n samples using Hoeffding bounds, which can be used for fast prediction of new samples in real time. The Data-Cleaning tree model uses a nonparametric node splitting technique, which can be learned in an iterative way and which scales linearly in memory consumption for any size input stream. The improved task-based ensemble extraction is compared with non-linear computation models using various SVM kernels for speed and accuracy. We show using empirical datasets that the explicit rule learning computation is linear in time and is only dependent on the number of leaves present in the tree ensemble. The use of unpruned trees (t) in our proposed ensemble always yields a minimum number (m) of leaves, keeping pre-processing computation to n × t log m compared to N² for the Gram matrix. We also show that the task-based feature induction yields higher Quality of Data (QoD) in the feature space compared to kernel methods using the Gram matrix.
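The Hoeffding-bound argument for window convergence can be made concrete. For i.i.d. samples bounded in a range R, Hoeffding's inequality gives P(|mean − E| ≥ ε) ≤ 2·exp(−2nε²/R²), so a window of n ≥ R²·ln(2/δ)/(2ε²) samples suffices for ε-accuracy with probability 1 − δ. A minimal sketch (the parameter choices are ours, not the paper's):

```python
import math

def hoeffding_n(eps, delta, value_range=1.0):
    """Samples needed so the window mean is within eps of the true mean
    with probability at least 1 - delta, for values in a bounded range.
    From Hoeffding: P(|mean - E| >= eps) <= 2 * exp(-2 n eps^2 / R^2),
    solved for n."""
    return math.ceil(value_range ** 2 * math.log(2.0 / delta)
                     / (2.0 * eps ** 2))

# e.g. a [0, 1]-valued sensor stream, 5% tolerance, 99% confidence
n = hoeffding_n(eps=0.05, delta=0.01)
```

Note the bound is independent of the stream length and distribution shape, which is what makes the window size fixed and the per-window prediction cost constant.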
Modeling temporal changes of low-frequency earthquake bursts near Parkfield, CA
NASA Astrophysics Data System (ADS)
Wu, C.; Daub, E. G.
2016-12-01
Tectonic tremor and low-frequency earthquakes (LFEs) have been found in the deeper crust of various tectonic environments in the last decade. LFEs are presumed to be caused by failure of deep fault patches during a slow slip event, and the long-term variation in LFE recurrence could provide crucial insight into the deep fault zone processes that may lead to future large earthquakes. However, the physical mechanisms causing the temporal changes of LFE recurrence are still under debate. In this study, we combine observations of long-term changes in LFE burst activity near Parkfield, CA with a brittle and ductile friction (BDF) model, and use the model to constrain the possible physical mechanisms causing the observed long-term changes in LFE burst activity after the 2004 M6 Parkfield earthquake. The BDF model mimics the slipping of deep fault patches with a spring-dragged block slider with both brittle and ductile friction components. We use the BDF model to test possible mechanisms including static stress imposed by the Parkfield earthquake, changes in pore pressure, tectonic force, afterslip, brittle friction strength, and brittle contact failure distance. The simulation results suggest that changes in brittle friction strength and failure distance are more likely to cause the observed changes in LFE bursts than other mechanisms.
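The intuition that raising brittle friction strength stretches recurrence intervals in a spring-slider picture can be shown with a deliberately minimal toy model. This sketch is ours and omits the ductile component and other physics of the authors' BDF formulation:

```python
def stick_slip_times(load_rate, strength, stress_drop, t_end, dt=0.01):
    """Toy stick-slip model: stress on a patch rises at load_rate; when
    it reaches `strength` (the brittle threshold) the patch fails and
    stress drops by stress_drop. Returns the failure times, i.e. an
    LFE-burst-like recurrence sequence."""
    stress, t, events = 0.0, 0.0, []
    while t < t_end:
        stress += load_rate * dt
        if stress >= strength:
            stress -= stress_drop
            events.append(t)
        t += dt
    return events

# Same loading, two different brittle strengths (arbitrary units)
weak = stick_slip_times(load_rate=1.0, strength=1.0, stress_drop=1.0, t_end=20)
strong = stick_slip_times(load_rate=1.0, strength=2.0, stress_drop=1.0, t_end=20)
# The stronger patch takes longer to reach its first failure
```

In this caricature a strength change delays the next failure, the qualitative behavior the abstract invokes when attributing post-2004 changes in LFE bursts to changes in brittle friction strength.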
Secular Variation in Slip (Invited)
NASA Astrophysics Data System (ADS)
Cowgill, E.; Gold, R. D.
2010-12-01
Faults show temporal variations in slip rate at time scales ranging from the hours following a major rupture to the millions of years over which plate boundaries reorganize. One such behavior is secular variation in slip (SVS), which we define as a pulse of accelerated strain release along a single fault that occurs at a frequency that is > 1 order of magnitude longer than the recurrence interval of earthquakes within the pulse. Although numerous mechanical models have been proposed to explain SVS, it has proven much harder to measure long (5-500 kyr) records of fault displacement as a function of time. Such fault-slip histories may be obtained from morphochronologic data, which are measurements of offset and age obtained from faulted landforms. Here we describe slip-history modeling of morphochronologic data and show how this method holds promise for obtaining long records of fault slip. In detail we place SVS in the context of other types of time-varying fault-slip phenomena, explain the importance of measuring fault-slip histories, summarize models proposed to explain SVS, review current approaches for measuring SVS in the geologic record, and illustrate the slip-history modeling approach we advocate here using data from the active, left-slip Altyn Tagh fault in NW Tibet. In addition to SVS, other types of temporal variation in fault slip include post-seismic transients, discrepancies between geologic slip rates and those derived from geodetic and/or paleoseismic data, and single changes in slip rate resulting from plate reorganization. Investigating secular variation in slip is important for advancing understanding of long-term continental deformation, fault mechanics, and seismic risk. Mechanical models producing such behavior include self-driven mode switching, changes in pore-fluid pressure, viscoelasticity, postseismic reloading, and changes in local surface loads (e.g., ice sheets, large lakes, etc.) among others. 
However, a key problem in testing these models is the paucity of long records of fault slip. Paleoseismic data are unlikely to yield such histories because measurements of the slip associated with each event are generally unavailable and long records require large accumulated offsets, which can result in structural duplication or omission of the stratigraphic records of events. In contrast, morphochronologic data capture both the age and offset of individual piercing points, although this approach generally does not resolve individual earthquake events. Because the uncertainties in both age and offset are generally large (5-15%) for individual markers, SVS is best resolved by obtaining suites of such measurements, in which case the errors can be used to reduce the range of slip histories common to all such data points. A suite of such data from the central Altyn Tagh fault reveals a pulse of accelerated strain release in the mid Holocene, with ~20 m of slip being released from ~6.7 to ~5.9 ka at a short-term rate (~28 mm/yr) that is 3 times greater than the average rate (~9 mm/yr). We interpret this pulse to represent a cluster of two to six, Mw > 7.2 earthquakes. To our knowledge, this is the first possible earthquake cluster detected using morphochronologic techniques.
Temporal expression of pecan allergens during nut development
USDA-ARS?s Scientific Manuscript database
Pecan nuts and other tree nuts are among a group of eight foods that most commonly cause food allergy. The growth of pecan nuts is a highly complex process orchestrated by the temporal and spatial expression of specific genes. Three conserved seed-storage proteins from the prolamin and cupin super...
Distributed deformation and block rotation in 3D
NASA Technical Reports Server (NTRS)
Scotti, Oona; Nur, Amos; Estevez, Raul
1990-01-01
The authors address how block rotation and complex distributed deformation in the Earth's shallow crust may be explained within a stationary regional stress field. Distributed deformation is characterized by domains of sub-parallel fault-bounded blocks. In response to the contemporaneous activity of neighboring domains, some domains rotate, as suggested by both structural and paleomagnetic evidence. Rotations within domains are achieved through the contemporaneous slip and rotation of the faults and of the blocks they bound. Thus, in regions of distributed deformation, faults must remain active in spite of their poor orientation in the stress field. The authors developed a model that tracks the orientation of blocks and their bounding faults during rotation in a 3D stress field. In the model, the effective stress magnitudes of the principal stresses (σ1, σ2, and σ3) are controlled by the orientation of fault sets in each domain. Therefore, adjacent fault sets with differing orientations may be active and may display differing faulting styles, and a given set of faults may change its style of motion as it rotates within a stationary stress regime. The style of faulting predicted by the model depends on a dimensionless parameter φ = (σ2 - σ3)/(σ1 - σ3). Thus, the authors present a model for complex distributed deformation and complex offset history requiring neither geographical nor temporal changes in the stress regime. They apply the model to the Western Transverse Range domain of southern California. There, it is mechanically feasible for blocks and faults to have experienced up to 75 degrees of clockwise rotation in a φ = 0.1 strike-slip stress regime. The results of the model suggest that this domain may first have accommodated deformation along preexisting NNE-SSW faults, reactivated as normal faults. After rotation, these same faults became strike-slip in nature.
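The stress-shape parameter φ that controls faulting style in the model reduces to a one-line calculation. A minimal sketch, using hypothetical principal-stress magnitudes chosen only to reproduce the φ = 0.1 value quoted for the Western Transverse Ranges:

```python
# Sketch of the dimensionless stress-shape parameter from the abstract:
# phi = (sigma2 - sigma3) / (sigma1 - sigma3), with sigma1 >= sigma2 >= sigma3.
# The stress magnitudes in the example call are hypothetical.

def stress_shape(sigma1, sigma2, sigma3):
    """Return phi, which is 0 when sigma2 == sigma3 and 1 when
    sigma2 == sigma1; intermediate values span the faulting styles
    the model distinguishes."""
    return (sigma2 - sigma3) / (sigma1 - sigma3)

phi = stress_shape(100.0, 46.0, 40.0)   # illustrative magnitudes (MPa)
print(round(phi, 2))
```

Because φ depends only on stress differences, the same value can arise from many absolute stress states, which is why the model can hold the regional field stationary while fault sets rotate through it.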
Strike-slip faulting, wrinkle ridges, and time variable stress states in the Coprates Region of Mars
NASA Technical Reports Server (NTRS)
Schultz, Richard A.
1990-01-01
The existence of strike-slip faults was recently documented in two locations on Mars. Two clear examples are reviewed located southeast of Valles Marineris and preliminary evidence is presented for more widespread strike-slip deformation elsewhere in Coprates. The first two examples show that strike-slip faulting occurred in a broad zone east of the Coprates Rise spanning approximately 400 km east-west by perhaps 1000 km north-south. The last example suggests that the growth of major wrinkle ridges throughout Coprates may have been influenced by horizontally directed shear stresses and that more than one generation of ridges was produced. Thus, 'compressional' deformation of ridged plains south of Valles Marineris was spatially heterogeneous and a temporal change in stress may have been involved.
NASA Astrophysics Data System (ADS)
Farge, G.; Delbridge, B. G.; Materna, K.; Johnson, C. W.; Chaussard, E.; Jones, C. E.; Burgmann, R.
2016-12-01
Understanding the role of the Hayward/Calaveras fault junction in major earthquake ruptures in the East San Francisco Bay Area is a major challenge in trying to assess the regional seismic hazard. We use updated GPS velocities and surface geodetic measurements from both traditional space-based InSAR and the NASA JPL Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) system to quantify the deep long-term interseismic deformation and shallow temporally variable fault creep. Here, we present a large data set of interseismic deformation over the Hayward/Calaveras fault system, combining far-field deformation from 1992-2011 ERS and Envisat InSAR data, near-field deformation from 2009-2016 UAVSAR data, and 1997-2016 regional GPS measurements from the Bay Area Velocity Unification model (BAVU4) in both the near field and far field. We perform a joint inversion of the data to obtain the long-term slip on deep through-going dislocations and the distribution of shallow creep on a 3D model of the Hayward and Calaveras faults. Spatially adaptive weights are given to each data set in order to account for its importance in constraining slip at different depths. The coherence and resolution of the UAVSAR data allow us to accurately resolve the near-field fault deformation, thus providing stronger constraints on the location of active strands of the southern Hayward and Calaveras faults and their shallow interseismic creep distribution.
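The joint-inversion idea, stacking several geodetic data sets into one weighted least-squares problem so that each constrains slip at the depths it senses best, can be sketched in a few lines. Everything below is a toy illustration: the two-parameter model, design matrices, and weights are invented, not the actual BAVU4/InSAR/UAVSAR operators:

```python
# Sketch of a weighted joint inversion: solve m = argmin sum_i w_i ||G_i m - d_i||^2
# by row-scaling each dataset with sqrt(w_i) and calling ordinary least squares.
import numpy as np

def joint_inversion(G_list, d_list, w_list):
    """Stack per-dataset design matrices, data vectors, and scalar weights,
    then solve the weighted least-squares problem."""
    G = np.vstack(G_list)
    d = np.concatenate(d_list)
    w = np.concatenate([np.full(len(di), wi) for di, wi in zip(d_list, w_list)])
    sw = np.sqrt(w)
    m, *_ = np.linalg.lstsq(G * sw[:, None], d * sw, rcond=None)
    return m

# Hypothetical model vector m = [deep slip rate, shallow creep rate] in mm/yr.
G_gps    = np.array([[1.0, 0.0], [1.0, 0.0]])   # far-field GPS senses deep slip only
G_uavsar = np.array([[1.0, 1.0], [0.0, 1.0]])   # near-field UAVSAR senses both
m_true = np.array([10.0, 3.0])
d_gps, d_uavsar = G_gps @ m_true, G_uavsar @ m_true
m_est = joint_inversion([G_gps, G_uavsar], [d_gps, d_uavsar], [1.0, 2.0])
print(np.round(m_est, 6))
```

Upweighting the near-field data set mimics the role the abstract assigns to UAVSAR coherence: it is the component that pins down the shallow creep parameter.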
Architecture Analysis with AADL: The Speed Regulation Case-Study
2014-11-01
Overview: Functional Hazard Analysis (FHA), a failures inventory with description, classification, etc.; Fault-Tree Analysis (FTA), dependencies between... Julien Delange, Carnegie Mellon University, Pittsburgh, PA 15213.
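The FTA step named in this report evaluates a logical hierarchy of AND/OR gates over basic failure events. A minimal sketch of that evaluation, with hypothetical event names rather than anything from the AADL speed-regulation model:

```python
# Minimal fault-tree evaluation sketch: a node is either a basic event or an
# AND/OR gate over child nodes. Event names below are hypothetical.

def evaluate(node, failed):
    """node: ('event', name) | ('and', children) | ('or', children).
    failed: set of basic event names that have occurred.
    Returns True if the node's failure condition holds."""
    kind, arg = node
    if kind == 'event':
        return arg in failed
    results = [evaluate(child, failed) for child in arg]
    return all(results) if kind == 'and' else any(results)

# Top event fails if the sensor is lost, or if both the primary actuator
# and its backup fail together:
tree = ('or', [('event', 'sensor_loss'),
               ('and', [('event', 'actuator_fault'),
                        ('event', 'backup_fault')])])
print(evaluate(tree, {'actuator_fault'}))                   # single fault tolerated
print(evaluate(tree, {'actuator_fault', 'backup_fault'}))   # combined fault propagates
```

In an AADL workflow the tree itself would be generated from the architecture model's error annexes rather than written by hand; the evaluation logic is the same.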
Journal of Air Transportation, Volume 12, No. 2 (ATRS Special Edition)
NASA Technical Reports Server (NTRS)
Bowen, Brent D. (Editor); Kabashkin, Igor (Editor); Fink, Mary (Editor)
2007-01-01
Topics covered include: Competition and Change in the Long-Haul Markets from Europe; Insights into the Maintenance, Repair, and Overhaul Configurations of European Airlines; Validation of Fault Tree Analysis in Aviation Safety Management; An Investigation into Airline Service Quality Performance between U.S. Legacy Carriers and Their EU Competitors and Partners; and Climate Impact of Aircraft Technology and Design Changes.
Project DAFNE - Drilling Active Faults in Northern Europe
NASA Astrophysics Data System (ADS)
Kukkonen, I. T.; Ask, M. S. V.; Olesen, O.
2012-04-01
We are currently developing a new ICDP project 'Drilling Active Faults in Northern Europe' (DAFNE) which aims at investigating, via scientific drilling, the tectonic and structural characteristics of postglacial (PG) faults in northern Fennoscandia, including their hydrogeology and associated deep biosphere [1, 2]. During the last stages of the Weichselian glaciation (ca. 9,000 - 15,000 years B.P.), reduced ice load and a glacially affected stress field resulted in active faulting in Fennoscandia, with fault scarps up to 160 km long and 30 m high. These postglacial (PG) faults are usually SE dipping, SW-NE oriented thrusts, and represent reactivated, pre-existing crustal discontinuities. Postglacial faulting indicates that glacio-isostatic compensation is not only a gradual viscoelastic phenomenon, but also includes sudden violent earthquakes, possibly larger than other known earthquakes in stable continental regions. The research is anticipated to advance science in neotectonics, hydrogeology and deep biosphere studies, and provide important information for nuclear waste and CO2 disposal, petroleum exploration on the Norwegian continental shelf and studies of mineral resources in PG fault areas.
We expect that multidisciplinary research applying shallow and deep drilling of postglacial faults would provide significant scientific results through generating new data and models, namely: (1) Understanding PG fault genesis and controls of their locations; (2) Deep structure and depth extent of PG faults; (3) Textural, mineralogical and physical alteration of rocks in the PG faults; (4) State of stress and estimates of paleostress of PG faults; (5) Hydrogeology, hydrochemistry and hydraulic properties of PG faults; (6) Dating of tectonic reactivation(s) and temporal evolution of tectonic systems hosting PG faults; (7) Existence/non-existence of deep biosphere in PG faults; (8) Data useful for planning radioactive waste disposal in crystalline bedrock; (9) Data on rock stress changes in the periphery of the inland ice; (10) Stress pattern along the Norwegian continental margin in relation to the bending spreading ridge and Plio-Pleistocene erosion, uplift and sedimentation with implications for fluid migration and sealing properties of petroleum reservoirs; and (11) Data useful in predicting future seismic activity in areas of current deglaciation due to ongoing climatic warming.
NASA Astrophysics Data System (ADS)
Meschis, M.; Roberts, G.; Robertson, J.; Houghton, S.; Briant, R. M.
2017-12-01
Whether slip-rates on active faults, accumulated over multiple seismic events, are constant or vary over timescales of tens to hundreds of millennia is an open question that can be addressed through study of deformed Quaternary palaeoshorelines. It is important to know the answer so that one can judge whether shorter-timescale measurements (e.g. Holocene palaeoseismology or decadal geodesy) are suitable for determining earthquake recurrence intervals for Probabilistic Seismic Hazard Assessment, or more suitable for studying temporal earthquake clustering. We present results from the Vibo Fault and the Capo D'Orlando Fault, which lie within the deforming Calabrian Arc, a region that has experienced damaging seismic events such as the 1908 Messina Strait earthquake (Mw 7) and the 1905 Capo Vaticano earthquake (Mw 7). These normal faults deform uplifted Late Quaternary palaeoshorelines, which outcrop mainly within their hangingwalls, but also partially in their footwalls, showing that regional subduction and mantle-related uplift outpaces local fault-related subsidence. Through (1) field and DEM-based mapping of palaeoshorelines, both up flights of successively higher, older inner edges, and along the strike of the faults, and (2) synchronous correlation of non-uniformly-spaced inner edge elevations with non-uniformly-spaced sea-level highstand ages, we show that slip-rates decrease towards fault tips and that slip-rates have remained constant since 340 ka (given the time resolution we obtain). The slip-rates for the Capo D'Orlando Fault and Vibo Fault are 0.61 mm/yr and 1 mm/yr respectively. We show that the along-strike gradients in slip-rate towards fault tips differ for the two faults, hinting at fault interaction, and also discuss this in terms of other regions of extension, such as the Gulf of Corinth, Greece, where slip-rate has been shown to change through time during the Quaternary.
We make the point that slip-rates may change through time as fault systems grow and fault interaction changes due to geometrical effects.
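The correlation step at the heart of this approach, matching inner-edge elevations to sea-level highstand ages under an assumed constant uplift rate, can be sketched as a small least-squares fit. The ages, highstand sea levels, and elevations below are hypothetical, not the Calabrian Arc measurements:

```python
# Sketch: under a constant uplift rate u (mm/yr == m/kyr), a palaeoshoreline
# formed at a highstand of age a (ka) and sea level s (m, relative to present)
# now sits at elevation  e = u * a + s.  Removing the eustatic term s and
# fitting through the origin recovers u. All numbers below are hypothetical.

def best_fit_rate(ages_ka, elevations_m, highstand_sl_m):
    """Least-squares uplift rate from (age, elevation, highstand sea level)
    triples, with the fit forced through the origin."""
    num = sum(a * (e - s) for a, e, s in zip(ages_ka, elevations_m, highstand_sl_m))
    den = sum(a * a for a in ages_ka)
    return num / den

ages = [76.0, 125.0, 240.0, 340.0]     # highstand ages, ka (non-uniformly spaced)
sl   = [-40.0, 6.0, -20.0, -10.0]      # highstand sea level rel. present, m
true_rate = 1.0                        # mm/yr, used to synthesize elevations
elev = [true_rate * a + s for a, s in zip(ages, sl)]
print(best_fit_rate(ages, elev, sl))
```

Repeating such a fit at many points along strike is what exposes the slip-rate gradients toward the fault tips, and checking that one rate fits all inner edges back to 340 ka is the constancy test the abstract describes.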