NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
The Fault-Tree Compiler (FTC) program is a software tool used to calculate the probability of the top event in a fault tree. Five gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault-tree definition feature, which simplifies the tree-description process and reduces execution time. A set of programs forming the basis for a reliability-analysis workstation has been created: SURE, ASSIST, PAWS/STEM, and the FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C, and FORTRAN 77. Other versions are available upon request.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1992-01-01
FTC, the Fault-Tree Compiler program, is a reliability-analysis software tool used to calculate the probability of the top event of a fault tree. Five different types of gates are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language of FTC is easy to understand and use. The program supports a hierarchical fault-tree definition feature that simplifies the description of the tree and reduces execution time. The solution technique is implemented in FORTRAN, and the user interface in Pascal. Written to run on a DEC VAX computer under the VMS operating system.
NASA Technical Reports Server (NTRS)
Martensen, Anna L.; Butler, Ricky W.
1987-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer that is precise (within the limits of double-precision floating-point arithmetic) to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
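For readers unfamiliar with how the five gate types translate into arithmetic, the sketch below shows the standard probability formulas for each gate under the usual assumption of statistically independent basic events. It illustrates the textbook mapping, not the FTC program itself; the tree encoding and the gate_prob helper are invented for the example, and composing gates this way is exact only when subtrees share no basic events.

```python
from itertools import combinations
from math import prod

def gate_prob(kind, p, m=None):
    """Probability of a gate's output event, given the occurrence
    probabilities p of its statistically independent inputs."""
    if kind == "AND":                    # all inputs occur
        return prod(p)
    if kind == "OR":                     # at least one input occurs
        return 1.0 - prod(1.0 - q for q in p)
    if kind == "XOR":                    # exactly one input occurs
        return sum(q * prod(1.0 - r for j, r in enumerate(p) if j != i)
                   for i, q in enumerate(p))
    if kind == "INVERT":                 # the single input does not occur
        return 1.0 - p[0]
    if kind == "MOFN":                   # at least m of the n inputs occur
        n = len(p)
        return sum(prod(p[i] if i in idx else 1.0 - p[i] for i in range(n))
                   for k in range(m, n + 1)
                   for idx in combinations(range(n), k))
    raise ValueError(kind)

# Top event: OR of an AND gate and a 2-of-3 gate over independent events.
p_and = gate_prob("AND", [1e-3, 2e-3])
p_2of3 = gate_prob("MOFN", [1e-4, 1e-4, 1e-4], m=2)
print(gate_prob("OR", [p_and, p_2of3]))
```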
The Fault Tree Compiler (FTC): Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1989-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer that is precise (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
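The sensitivity-analysis feature described in the abstract amounts to sweeping one failure probability across a range and recording the top-event probability at each point. A minimal sketch of that loop, with an invented two-gate tree (the plotting step, e.g. with matplotlib, is left out):

```python
import numpy as np

def top_event(x):
    """Top event = OR(AND(x, 2e-3), 1e-4); x is the swept basic-event
    probability, the other two probabilities are held fixed."""
    p_and = x * 2e-3
    return 1.0 - (1.0 - p_and) * (1.0 - 1e-4)

xs = np.logspace(-5, -2, 50)          # sweep one failure probability
ys = [top_event(x) for x in xs]
for x, y in list(zip(xs, ys))[::10]:  # print a few sample points
    print(f"p(basic)={x:.1e}  p(top)={y:.3e}")
```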
FTC - THE FAULT-TREE COMPILER (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer that is precise (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits of accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree, such as a component failure rate or a specific event probability, by allowing the user to vary one failure rate or failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle; please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI-compliant C, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format; it is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a 0.25-inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer that is precise (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits of accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree, such as a component failure rate or a specific event probability, by allowing the user to vary one failure rate or failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle; please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI-compliant C, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format; it is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a 0.25-inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
Integrated Approach To Design And Analysis Of Systems
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1993-01-01
An object-oriented fault-tree representation unifies the evaluation of reliability and the diagnosis of faults. The program and fault tree are described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). The augmented fault-tree object contains more information than the fault-tree object used in quantitative reliability analysis; the additional information is needed to diagnose faults in the system represented by the fault tree.
[The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].
Liu, Hongbin
2015-11-01
In this paper, the traditional fault tree analysis method is presented, and detailed instructions are given for its application to medical equipment maintenance. Significant changes are made when the traditional fault tree analysis method is introduced into medical equipment maintenance: the logic symbols, logic analysis, and calculations are abandoned, along with the method's complicated procedures, and only the intuitive, practical fault tree diagram is kept. The fault tree diagram itself also differs: the fault tree is no longer a logic tree but a thinking tree for troubleshooting, the definition of the fault tree's nodes is different, and the composition of the fault tree's branches is also different.
NASA Technical Reports Server (NTRS)
Lee, Charles; Alena, Richard L.; Robinson, Peter
2004-01-01
Starting from an ISS fault-tree example, we present a method for converting fault trees to decision trees. The method shows that the root cause of a fault is easier to visualize and that tree manipulation becomes more programmatic via available decision-tree programs. The decision-tree visualization of a diagnosis is straightforward and easy to understand. For ISS real-time fault diagnosis, the status of the systems can be shown by running the signals through the trees and seeing where they stop. Another advantage of decision trees is that they can learn fault patterns and predict future faults from historical data. The learning is not limited to static data sets: by accumulating real-time data, the decision trees can gain and store fault patterns and recognize them when they recur.
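One concrete way to realize such a conversion is Shannon expansion: repeatedly test one basic event and specialize the fault tree on each outcome, which yields a decision tree whose internal nodes are event tests and whose leaves state whether the top event occurs. The sketch below is an invented illustration of that idea, not the authors' ISS algorithm; it enumerates all events, so it is exponential and suited only to small trees.

```python
# Gates are (op, child, child, ...); leaves are basic-event names.
TREE = ("OR", ("AND", "pump_fail", "valve_stuck"), "power_loss")

def evaluate(node, assign):
    if isinstance(node, str):
        return assign[node]
    op, *kids = node
    vals = [evaluate(k, assign) for k in kids]
    return all(vals) if op == "AND" else any(vals)

def to_decision_tree(node, events, assign=None):
    """Shannon-expand the fault tree over its basic events: each decision
    node tests one event; each leaf is the resulting top-event outcome."""
    assign = assign or {}
    if not events:
        return evaluate(node, assign)
    first, rest = events[0], events[1:]
    return {"test": first,
            "yes": to_decision_tree(node, rest, {**assign, first: True}),
            "no":  to_decision_tree(node, rest, {**assign, first: False})}

dt = to_decision_tree(TREE, ["power_loss", "pump_fail", "valve_stuck"])
print(dt)   # nested dict: follow "yes"/"no" branches to diagnose a fault
```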
Automatic translation of digraph to fault-tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.
1992-01-01
The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance
NASA Astrophysics Data System (ADS)
Wang, Jian; Yang, Zhenwei; Kang, Mei
2018-01-01
This paper applies the fault tree analysis method to corrective maintenance of a grid communication system. Through the establishment of a fault tree model of a typical system, combined with engineering experience, fault tree analysis theory is used to analyze the model, covering the structure function, probability importance, and related measures. The results show that fault tree analysis enables fast fault localization and effective repair of the system. The analysis method also offers guidance for researching and upgrading the reliability of the system.
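The "probability importance" mentioned in the abstract is conventionally the Birnbaum importance: the partial derivative of the top-event probability with respect to one component's failure probability, computable by evaluating the tree with that component forced failed and forced working. A minimal sketch on an invented three-component communication tree:

```python
def top_prob(p):
    """Top event = OR(AND(link1, link2), switch), independent events;
    p maps component name -> failure probability."""
    p_links = p["link1"] * p["link2"]
    return 1.0 - (1.0 - p_links) * (1.0 - p["switch"])

def birnbaum(component, p):
    """Birnbaum importance: P(top | comp failed) - P(top | comp working)."""
    return (top_prob({**p, component: 1.0})
            - top_prob({**p, component: 0.0}))

p = {"link1": 0.01, "link2": 0.02, "switch": 0.001}
for c in p:   # rank components to prioritize corrective maintenance
    print(c, round(birnbaum(c, p), 6))
```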
Map and Data for Quaternary Faults and Fault Systems on the Island of Hawai`i
Cannon, Eric C.; Burgmann, Roland; Crone, Anthony J.; Machette, Michael N.; Dart, Richard L.
2007-01-01
This report and digitally prepared, GIS-based map is one of a series of similar products covering individual states or regions of the United States that show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. It is part of a continuing effort to compile a comprehensive Quaternary fault and fold map and database for the United States, which is supported by the U.S. Geological Survey's (USGS) Earthquake Hazards Program. Guidelines for the compilation of the Quaternary fault and fold maps for the United States were published by Haller and others (1993) at the onset of this project. This compilation of Quaternary surface faulting and folding in Hawai`i is one of several similar state and regional compilations that were planned for the United States. Reports published to date include West Texas (Collins and others, 1996), New Mexico (Machette and others, 1998), Arizona (Pearthree, 1998), Colorado (Widmann and others, 1998), Montana (Stickney and others, 2000), Idaho (Haller and others, 2005), and Washington (Lidke and others, 2003). Reports for other states such as California and Alaska are still in preparation. The primary intention of this compilation is to aid in seismic-hazard evaluations. The report contains detailed information on the location and style of faulting and the time of most recent movement, and it assigns each feature to a slip-rate category (as a proxy for fault activity). It also contains the name and affiliation of the compiler, date of compilation, geographic and other paleoseismologic parameters, as well as an extensive set of references for each feature. The map (plate 1) shows faults, volcanic rift zones, and lineaments that show evidence of Quaternary surface movement related to faulting, including data on the time of most recent movement, sense of movement, slip rate, and continuity of surface expression. This compilation is presented as a digitally prepared map product and catalog of data, both in Adobe Acrobat PDF format. The senior authors (Eric C. Cannon and Roland Burgmann) compiled the fault data as part of ongoing studies of active faulting on the Island of Hawai`i. The USGS is responsible for organizing and integrating the State or regional products under their National Seismic Hazard Mapping project, including the coordination and oversight of contributions from individuals and groups (Michael N. Machette and Anthony J. Crone), database design and management (Kathleen M. Haller), and digitization and analysis of map data (Richard L. Dart). After being released as an Open-File Report, the data in this report will be available online at http://earthquake.usgs.gov/regional/qfaults/, the USGS Quaternary Fault and Fold Database of the United States.
Fault Tree in the Trenches, A Success Story
NASA Technical Reports Server (NTRS)
Long, R. Allen; Goodson, Amanda (Technical Monitor)
2000-01-01
Getting caught up in the explanation of Fault Tree Analysis (FTA) minutiae is easy. In fact, most FTA literature tends to address FTA concepts and methodology, yet there seem to be few articles addressing actual design changes resulting from the successful application of fault tree analysis. This paper demonstrates how fault tree analysis was used to identify and solve a potentially catastrophic mechanical problem at a rocket motor manufacturer. While developing the fault tree given in this example, the analyst was told by several organizations that the piece of equipment in question had already been evaluated by several committees and organizations, and that the analyst was wasting his time. The fault tree/cut-set analysis resulted in a joint redesign of the control system by the tool engineering group and the fault tree analyst, as well as bragging rights for the analyst. (That the fault tree found problems where other engineering reviews had failed was not lost on the other engineering groups.) Even more interesting is that this was the analyst's first fault tree, which further demonstrates how effective fault tree analysis can be in guiding (i.e., forcing) the analyst to take a methodical approach in evaluating complex systems.
Map and data for Quaternary faults and folds in New Mexico
Machette, M.N.; Personius, S.F.; Kelson, K.I.; Haller, K.M.; Dart, R.L.
1998-01-01
The "World Map of Major Active Faults" Task Group is compiling a series of digital maps for the United States and other countries in the Western Hemisphere that show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds; the companion database includes published information on these seismogenic features. The Western Hemisphere effort is sponsored by International Lithosphere Program (ILP) Task Group H-2, whereas the effort to compile a new map and database for the United States is funded by the Earthquake Reduction Program (ERP) through the U.S. Geological Survey. The maps and accompanying databases represent a key contribution to the new Global Seismic Hazards Assessment Program (ILP Task Group II-O) for the International Decade for Natural Disaster Reduction. This compilation, which describes evidence for surface faulting and folding in New Mexico, is the third of many similar State and regional compilations that are planned for the U.S. The compilation for West Texas is available as U.S. Geological Survey Open-File Report 96-002 (Collins and others, 1996 #993) and the compilation for Montana will be released as a Montana Bureau of Mines product (Haller and others, in press #1750).
Tutorial: Advanced fault tree applications using HARP
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.
1993-01-01
Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
Technology transfer by means of fault tree synthesis
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Since Fault Tree Analysis (FTA) attempts to model and analyze failure processes in engineering, it is a common technique in good industrial practice. By contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees, either from dendritic modules built ad hoc or from fault trees already used and stored in a knowledge base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the later stages of an industrial process with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the production stage where the fault has been detected, (ii) the formation of an interface made of input faults that might occur upstream, (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance, and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation referring to 'uneven sealing of Al anodic film' is presented, proving the functionality of the developed methodology.
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems draws on model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models of the systems either from mathematical formulations or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made based on threshold values by using fault trees. Since these decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
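The decision-tree idea can be prototyped with an off-the-shelf learner: generate telemetry, label it with a threshold rule of the kind operational fault trees encode, and fit a tree that can then classify new samples in real time. The sketch below uses scikit-learn on synthetic data; the channel names and thresholds are invented, and this is not the paper's ISS data set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic telemetry: two channels; the hidden labeling rule is a
# threshold combination like those captured in operational fault trees.
X = rng.uniform(0.0, 1.0, size=(500, 2))
y = ((X[:, 0] > 0.8) | ((X[:, 0] > 0.5) & (X[:, 1] > 0.7))).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["pressure", "temperature"]))

# Real-time use: classify a new telemetry sample as nominal (0) or fault (1).
print(clf.predict([[0.9, 0.2]]))
```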
Fault trees and sequence dependencies
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.
1990-01-01
One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
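A cold spare is the classic sequence dependency: the spare accumulates no failure exposure until the primary fails, so the order of events, not just the set of failed components, matters; this is why such gates need a Markov solution. A minimal sketch of the corresponding three-state Markov chain, with invented failure rates, solved via a matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

lam_primary, lam_spare = 1e-3, 2e-3   # failure rates per hour (invented)

# States: 0 = primary up (spare cold), 1 = spare up, 2 = system failed.
# The spare's rate applies only in state 1: the sequence dependency.
Q = np.array([[-lam_primary, lam_primary,      0.0],
              [0.0,          -lam_spare, lam_spare],
              [0.0,           0.0,             0.0]])

t = 1000.0                            # mission time in hours
P = expm(Q * t)                       # state-transition probabilities
print("P(system failed by t) =", P[0, 2])
```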
McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L
The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; 2 or more programs require 4 identified faults. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.
A dynamic fault tree model of a propulsion system
NASA Technical Reports Server (NTRS)
Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila
2006-01-01
We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing
2017-01-14
In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these tough environments, and the staff generally lack professional knowledge and pay a low degree of attention to these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logical structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one symptom-to-one fault, two symptoms-to-two faults, and two symptoms-to-one fault relationships can be rapidly diagnosed with high precision, while the one symptom-to-two faults pattern performs less well but is still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.
Reliability computation using fault tree analysis
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
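The conditional-probability treatment of a repeated basic failure is the law of total probability: condition on the shared event, evaluate the two simplified trees, and recombine, since naive gate-by-gate arithmetic counts the shared event twice. A worked numeric sketch with an invented tree:

```python
# Top event = OR(AND(A, B), AND(A, C)): event A feeds two fault paths.
pA, pB, pC = 0.01, 0.05, 0.02

# Condition on A. Given A, top reduces to OR(B, C); given not-A, top is false.
p_top_given_A = 1.0 - (1.0 - pB) * (1.0 - pC)
p_exact = pA * p_top_given_A + (1.0 - pA) * 0.0
print("exact:", p_exact)

# Naive gate-by-gate arithmetic treats the two AND paths as independent
# and therefore overstates the top-event probability.
p_naive = 1.0 - (1.0 - pA * pB) * (1.0 - pA * pC)
print("naive:", p_naive)
```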
Object-oriented fault tree evaluation program for quantitative analyses
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1988-01-01
Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
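A rough modern analogue of the Flavors-based design: each event is an object holding its own reliability data plus the intermediate result computed during tree reduction, so repeated queries are cheap and a design change simply invalidates the cache. Class and method names below are invented, and the simple gate arithmetic assumes independent, non-repeated basic events.

```python
class Event:
    """A fault-tree node that stores its own reliability data and caches
    the intermediate result of tree reduction, loosely mirroring the
    object-oriented design described in the abstract."""
    def __init__(self, name, prob=None, gate=None, children=()):
        self.name, self.prob = name, prob      # prob set for basic events
        self.gate, self.children = gate, list(children)
        self._cached = None                    # intermediate-result storage

    def probability(self):
        if self._cached is None:
            if self.gate is None:              # basic event
                self._cached = self.prob
            elif self.gate == "AND":
                p = 1.0
                for c in self.children:
                    p *= c.probability()
                self._cached = p
            elif self.gate == "OR":
                q = 1.0
                for c in self.children:
                    q *= 1.0 - c.probability()
                self._cached = 1.0 - q
        return self._cached

    def invalidate(self):                      # call after a design change
        self._cached = None
        for c in self.children:
            c.invalidate()

top = Event("top", gate="OR", children=[
    Event("cooling", gate="AND",
          children=[Event("pump", 1e-3), Event("backup", 5e-3)]),
    Event("control", 1e-4)])
print(top.probability())
```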
Map and database of Quaternary faults in Venezuela and its offshore regions
Audemard, F.A.; Machette, M.N.; Cox, J.W.; Dart, R.L.; Haller, K.M.
2000-01-01
As part of the International Lithosphere Program's "World Map of Major Active Faults," the U.S. Geological Survey is assisting in the compilation of a series of digital maps of Quaternary faults and folds in Western Hemisphere countries. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. They are accompanied by databases that describe these features and document current information on their activity in the Quaternary. The project is a key part of the Global Seismic Hazards Assessment Program (ILP Project II-0) for the International Decade for Natural Disaster Reduction. The project is sponsored by the International Lithosphere Program and funded by the USGS's National Earthquake Hazards Reduction Program. The primary elements of the project are general supervision and interpretation of geologic/tectonic information, data compilation and entry for the fault catalog, database design and management, and digitization and manipulation of data in ARCINFO. For the compilation of data, we engaged experts in Quaternary faulting, neotectonics, paleoseismology, and seismology.
Systems Theoretic Process Analysis Applied to an Offshore Supply Vessel Dynamic Positioning System
2016-06-01
[Only fragments of this record survive: the analysis identified additional safety issues that were either not identified or inadequately mitigated through the use of Fault Tree Analysis and Failure Modes and Effects Analysis; the source document contains sections on hazard analysis techniques, Fault Tree Analysis, and a Fault Tree Analysis comparison.]
An overview of the phase-modular fault tree approach to phased mission system analysis
NASA Technical Reports Server (NTRS)
Meshkat, L.; Xing, L.; Donohue, S. K.; Ou, Y.
2003-01-01
In this paper we look at how fault tree analysis (FTA), a primary means of performing reliability analysis of phased mission systems (PMS), can meet this challenge, by presenting an overview of the phase-modular approach to solving fault trees that represent PMS.
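In a phased mission system the same hardware faces a different fault tree, and different success criteria, in each phase. The toy sketch below combines per-phase top-event probabilities only under an assumption of independence between phases; real PMS analysis cannot generally make that assumption, because components carry their state across phase boundaries, which is exactly what the phase-modular approach is designed to handle.

```python
# Per-phase top-event probabilities (invented; e.g. launch, cruise, landing).
phase_top = [2e-4, 5e-3, 1e-3]

# Mission fails if any phase's top event occurs; independence between
# phases is assumed here purely for illustration.
reliability = 1.0
for p in phase_top:
    reliability *= 1.0 - p
print("mission unreliability =", 1.0 - reliability)
```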
Try Fault Tree Analysis, a Step-by-Step Way to Improve Organization Development.
ERIC Educational Resources Information Center
Spitzer, Dean
1980-01-01
Fault Tree Analysis, a systems safety engineering technology used to analyze organizational systems, is described. Explains the use of logic gates to represent the relationship between failure events, qualitative analysis, quantitative analysis, and effective use of Fault Tree Analysis. (CT)
Fault Tree Analysis: A Research Tool for Educational Planning. Technical Report No. 1.
ERIC Educational Resources Information Center
Alameda County School Dept., Hayward, CA. PACE Center.
This ESEA Title III report describes fault tree analysis and assesses its applicability to education. Fault tree analysis is an operations research tool which is designed to increase the probability of success in any system by analyzing the most likely modes of failure that could occur. A graphic portrayal, which has the form of a tree, is…
Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.
Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C
2015-06-01
An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and are aiming for more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, here ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The fault tree developed here is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers in developing more refined and adequate control activities against FMD outbreaks and in finding optimum strategies for rapid control. Further application of the current tree will be one of the basic measures for FMD control worldwide.
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component affecting it. A weakest-t-norm-based intuitionistic fuzzy fault tree analysis is presented to calculate the fault interval of system components by integrating experts' knowledge and experience, expressed as the possibility of failure of bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets, and arithmetic operations based on Tω (the weakest t-norm) on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In the numerical verification, a malfunction of the "automatic gun" weapon system is presented as an example, and the result of the proposed method is compared with existing reliability analysis approaches.
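The flavor of the computation can be seen with ordinary α-cuts: each expert-elicited failure possibility is a triangular fuzzy number, an α-cut turns it into an interval, and intervals propagate through the gates endpoint by endpoint (valid because the gate formulas are monotone). This generic sketch uses plain fuzzy numbers and product/co-product gates; the paper's actual method additionally uses intuitionistic membership/non-membership pairs and the weakest t-norm Tω, which this sketch does not reproduce.

```python
def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at level alpha."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def and_gate(iv1, iv2):            # interval product (independent events)
    return (iv1[0] * iv2[0], iv1[1] * iv2[1])

def or_gate(iv1, iv2):
    return (1 - (1 - iv1[0]) * (1 - iv2[0]),
            1 - (1 - iv1[1]) * (1 - iv2[1]))

# Expert-elicited failure possibilities for two components (invented numbers).
valve = (0.01, 0.02, 0.04)
sensor = (0.001, 0.005, 0.02)

for alpha in (0.0, 0.5, 1.0):
    iv = or_gate(alpha_cut(valve, alpha), alpha_cut(sensor, alpha))
    print(f"alpha={alpha}: top-event interval {iv}")
```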
Software For Fault-Tree Diagnosis Of A System
NASA Technical Reports Server (NTRS)
Iverson, Dave; Patterson-Hine, Ann; Liao, Jack
1993-01-01
The Fault Tree Diagnosis System (FTDS) computer program is an automated diagnostic program that identifies likely causes of a specified failure on the basis of information represented in system-reliability mathematical models known as fault trees. It is a modified implementation of the failure-cause-identification phase of Narayanan and Viswanadham's methodology for knowledge acquisition and reasoning in analyzing system failures. The knowledge base of if/then rules is replaced with an object-oriented fault-tree representation. This enhancement yields more efficient identification of failure causes and enables dynamic updating of the knowledge base. Written in C, C++, and Common LISP.
Obtaining correct compile results by absorbing mismatches between data types representations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni
Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
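A toy sketch of the scheme's shape: walk the first compiler's AST, rewrite data-representation types through a conversion table, and, where no mapping exists, emit a special error node that stores the original token so unparsing can fall back to language-1 source text. All node kinds and names are invented; the patent's actual implementation is not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                       # e.g. "int64", "string", "add"
    children: list = field(default_factory=list)
    token: str = ""                 # original source text for this node

# Conversion table: data-representation types in language 1 -> language 2.
TYPE_TABLE = {"int64": "long", "float64": "double", "string": "String"}

def convert(node):
    """Convert an AST of compiler 1 into an AST for compiler 2. Unmapped
    types become a special error node that stores the original token, so
    unparsing can reproduce the language-1 source instead of failing."""
    if node.kind in TYPE_TABLE:
        return Node(TYPE_TABLE[node.kind],
                    [convert(c) for c in node.children], node.token)
    if node.kind == "add":          # structural nodes pass through
        return Node("add", [convert(c) for c in node.children], node.token)
    return Node("ERROR", [], node.token)   # special node for error processing

def unparse(node):
    if node.kind == "ERROR":
        return node.token           # emit the stored language-1 token as-is
    if node.kind == "add":
        return f"({unparse(node.children[0])} + {unparse(node.children[1])})"
    return node.token

src = Node("add", [Node("int64", token="x"), Node("decimal128", token="y")])
print(unparse(convert(src)))        # (x + y), with y carried through verbatim
```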
Fault tree models for fault tolerant hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Tuazon, Jezus O.
1991-01-01
Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
Product Support Manager Guidebook
2011-04-01
[Only fragments of this record survive: the guidebook describes a support package being developed using supportability analysis concepts such as Failure Mode, Effects and Criticality Analysis (FMECA), Fault Tree Analysis (FTA), Level of Repair Analysis (LORA), Condition Based Maintenance+ (CBM+), Failure Reporting and Corrective Action System (FRACAS), and Maintenance Task Analysis (MTA).]
MIRAP, microcomputer reliability analysis program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jehee, J.N.T.
1989-01-01
A program for a microcomputer is outlined that can determine minimal cut sets from a specified fault tree logic. The speed and memory limitations of the microcomputers on which the program is implemented (Atari ST and IBM) are addressed by reducing the fault tree's size and by storing the cut set data on disk. Extensive, well-proven fault tree restructuring techniques, such as the identification of sibling events and of independent gate events, reduce the fault tree's size without altering its logic. New methods are used for the Boolean reduction of the fault tree logic. Special criteria for combining events in the 'AND' and 'OR' logic avoid the creation of many subsuming cut sets, which would all cancel out due to existing cut sets. Figures and tables illustrate these methods. 4 refs., 5 tabs.
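Minimal cut sets of the kind this program computes can be found by top-down expansion (an OR gate splits into alternative cut sets, an AND gate forms their cross-product) followed by absorption, which deletes any cut set that contains another: the "subsuming" sets the abstract mentions. A small sketch with an invented tree:

```python
# Gates are (op, child, ...); leaves are basic-event names.
TREE = ("OR",
        ("AND", "A", "B"),
        "C",
        ("AND", "A", "B", "C"))

def cut_sets(node):
    if isinstance(node, str):
        return [frozenset([node])]
    op, *kids = node
    if op == "OR":                    # union of the children's cut sets
        return [cs for k in kids for cs in cut_sets(k)]
    result = [frozenset()]            # AND: cross-product of the children
    for k in kids:
        result = [r | cs for r in result for cs in cut_sets(k)]
    return result

def minimize(sets):
    """Absorption: drop duplicates and any set that contains another set."""
    uniq = set(sets)
    return [s for s in uniq if not any(o < s for o in uniq)]

print(sorted(map(sorted, minimize(cut_sets(TREE)))))
# [['A', 'B'], ['C']]: {A, B, C} is absorbed by both smaller cut sets
```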
The FTA Method And A Possibility Of Its Application In The Area Of Road Freight Transport
NASA Astrophysics Data System (ADS)
Poliaková, Adela
2015-06-01
The Fault Tree process utilizes logic diagrams to portray and analyse potentially hazardous events. Three basic symbols (logic gates) are adequate for diagramming any fault tree; however, additional recently developed symbols can be used to reduce the time and effort required for analysis. A fault tree is a graphical representation of the relationship between certain specific events and the ultimate undesired event (2). This paper gives a basic description of the Fault Tree Analysis method and provides a practical view of its possible application to quality improvement in a road freight transport company.
Fault Tree Analysis: Its Implications for Use in Education.
ERIC Educational Resources Information Center
Barker, Bruce O.
This study introduces the concept of Fault Tree Analysis as a systems tool and examines the implications of Fault Tree Analysis (FTA) as a technique for isolating failure modes in educational systems. A definition of FTA and discussion of its history, as it relates to education, are provided. The step by step process for implementation and use of…
Preventing medical errors by designing benign failures.
Grout, John R
2003-07-01
One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water") Analysis of fault trees does not ensure or guarantee that changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
Fault Tree Analysis Application for Safety and Reliability
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies root causes in system components, but when software is identified as a root cause, it does not build trees into the software component. No commercial software tools have been built specifically for the development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
ERIC Educational Resources Information Center
Barker, Bruce O.; Petersen, Paul D.
This paper explores the fault-tree analysis approach to isolating failure modes within a system. Fault tree analysis investigates potentially undesirable events and then looks for failures in sequence that would lead to their occurring. Relationships among these events are symbolized by AND or OR logic gates, AND used when single events must coexist to…
Evidential Networks for Fault Tree Analysis with Imprecise Knowledge
NASA Astrophysics Data System (ADS)
Yang, Jianping; Huang, Hong-Zhong; Liu, Yu; Li, Yan-Feng
2012-06-01
Fault tree analysis (FTA), one of the powerful tools in reliability engineering, has been widely used to enhance system quality attributes. In most fault tree analyses, precise values are adopted to represent the probabilities of occurrence of events. Due to the lack of sufficient data or the imprecision of existing data at the early stage of product design, it is often difficult to accurately estimate the failure rates of individual events or the probabilities of occurrence of the events. Therefore, such imprecision and uncertainty need to be taken into account in reliability analysis. In this paper, evidential networks (EN) are employed to quantify and propagate the aforementioned uncertainty and imprecision in fault tree analysis. The detailed processes for converting common fault tree (FT) logic gates to EN are described. Figures of the logic gates and the converted equivalent EN, together with the associated truth tables and conditional belief mass tables, are also presented. A new epistemic importance measure is proposed to describe the effect of the degree of ignorance about an event. The fault tree of an aircraft engine damaged by oil filter plugs is presented to demonstrate the proposed method.
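The belief-interval style of computation can be sketched without the full evidential-network machinery: give each basic event a [Bel, Pl] interval on "failed" (the gap is the ignorance mass), propagate the bounds through the gates, and score each event's epistemic importance by how much output ignorance disappears when that event's interval collapses to a point. This is a simplified stand-in for the paper's conditional belief mass tables; the numbers and the importance definition below are illustrative assumptions.

```python
def and_gate(a, b):   # bounds on P(fail) for AND, independent inputs
    return (a[0] * b[0], a[1] * b[1])

def or_gate(a, b):    # bounds on P(fail) for OR, independent inputs
    return (1 - (1 - a[0]) * (1 - b[0]), 1 - (1 - a[1]) * (1 - b[1]))

# [Bel, Pl] intervals for three basic events (invented numbers).
events = {"seal": (0.01, 0.03), "filter": (0.05, 0.05), "pump": (0.002, 0.02)}

def top(ev):
    return or_gate(and_gate(ev["seal"], ev["filter"]), ev["pump"])

base = top(events)
print("top-event belief interval:", base)

# Epistemic importance: output-ignorance reduction when one event's
# interval is collapsed to its midpoint (i.e., its ignorance is resolved).
for name, (lo, hi) in events.items():
    mid = (lo + hi) / 2.0
    collapsed = top({**events, name: (mid, mid)})
    gain = (base[1] - base[0]) - (collapsed[1] - collapsed[0])
    print(name, round(gain, 6))
```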
Object-oriented fault tree models applied to system diagnosis
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.
Probabilistic fault tree analysis of a radiation treatment system.
Ekaette, Edidiong; Lee, Robert C; Cooke, David L; Iftody, Sandra; Craighead, Peter
2007-12-01
Inappropriate administration of radiation for cancer treatment can result in severe consequences such as premature death or appreciably impaired quality of life. There has been little study of vulnerable treatment process components and their contribution to the risk of radiation treatment (RT). In this article, we describe the application of probabilistic fault tree methods to assess the probability of radiation misadministration to patients at a large cancer treatment center. We conducted a systematic analysis of the RT process that identified four process domains: Assessment, Preparation, Treatment, and Follow-up. For the Preparation domain, we analyzed possible incident scenarios via fault trees. For each task, we also identified existing quality control measures. To populate the fault trees we used subjective probabilities from experts and compared results with incident report data. Both the fault tree and the incident report analysis revealed simulation tasks to be most prone to incidents, and the treatment prescription task to be least prone to incidents. The probability of a Preparation domain incident was estimated to be in the range of 0.1-0.7% based on incident reports, which is comparable to the mean value of 0.4% from the fault tree analysis using probabilities from the expert elicitation exercise. In conclusion, an analysis of part of the RT system using a fault tree populated with subjective probabilities from experts was useful in identifying vulnerable components of the system, and provided quantitative data for risk management.
Reconfigurable tree architectures using subtree oriented fault tolerance
NASA Technical Reports Server (NTRS)
Lowrie, Matthew B.
1987-01-01
An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures, for both single-chip and multichip tree implementations, while reducing redundancy in terms of both spare processors and links. VLSI layout is O(n) for binary trees, and the approach is directly extensible to N-ary trees and to fault tolerance through performance degradation.
Secure Embedded System Design Methodologies for Military Cryptographic Systems
2016-03-31
[Only fragments of this record survive: keywords include Fault-Tree Analysis (FTA) and Built-In Self-Test (BIST); secure access-control systems restrict operations to authorized users; because the question of exactly how unlikely failures in the individual software/processor elements are is difficult to answer, Fault-Tree Analysis (FTA) is applied; the authors thank Collins of Sandia National Laboratories for years of sharing his extensive knowledge of Fail-Safe Design Assurance and Fault-Tree Analysis.]
Rymer, M.J.
2000-01-01
The Coachella Valley area was strongly shaken by the 1992 Joshua Tree (23 April) and Landers (28 June) earthquakes, and both events caused triggered slip on active faults within the area. Triggered slip associated with the Joshua Tree earthquake was on a newly recognized fault, the East Wide Canyon fault, near the southwestern edge of the Little San Bernardino Mountains. Slip associated with the Landers earthquake formed along the San Andreas fault in the southeastern Coachella Valley. Surface fractures formed along the East Wide Canyon fault in association with the Joshua Tree earthquake. The fractures extended discontinuously over a 1.5-km stretch of the fault, near its southern end. Sense of slip was consistently right-oblique, west side down, similar to the long-term style of faulting. Measured offset values were small, with right-lateral and vertical components of slip ranging from 1 to 6 mm and 1 to 4 mm, respectively. This is the first documented historic slip on the East Wide Canyon fault, which was first mapped only months before the Joshua Tree earthquake. Surface slip associated with the Joshua Tree earthquake most likely developed as triggered slip given its 5 km distance from the Joshua Tree epicenter and aftershocks. As revealed in a trench investigation, slip formed in an area with only a thin (<3 m thick) veneer of alluvium in contrast to earlier documented triggered slip events in this region, all in the deep basins of the Salton Trough. A paleoseismic trench study in an area of 1992 surface slip revealed evidence of two and possibly three surface faulting events on the East Wide Canyon fault during the late Quaternary, probably latest Pleistocene (first event) and mid- to late Holocene (second two events). About two months after the Joshua Tree earthquake, the Landers earthquake then triggered slip on many faults, including the San Andreas fault in the southeastern Coachella Valley. Surface fractures associated with this event formed discontinuous breaks over a 54-km-long stretch of the fault, from the Indio Hills southeastward to Durmid Hill. Sense of slip was right-lateral; only locally was there a minor (~1 mm) vertical component of slip. Measured dextral displacement values ranged from 1 to 20 mm, with the largest amounts found in the Mecca Hills where large slip values have been measured following past triggered-slip events.
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Bolster, Diogo; Sanchez-Vila, Xavier; Nowak, Wolfgang
2011-05-01
Assessing health risk in hydrological systems is an interdisciplinary field. It relies on expertise in the fields of hydrology and public health and needs powerful translation concepts to support decision making and policy. Reliable health risk estimates need to account for the uncertainties and variabilities present in hydrological, physiological, and human behavioral parameters. Despite significant theoretical advancements in stochastic hydrology, there is still a dire need to propagate these concepts further into practical problems and to society in general. Following a recent line of work, we use fault trees to address the task of probabilistic risk analysis and to support related decision and management problems. Fault trees allow us to decompose the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural divide-and-conquer approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance, and stage of analysis. Three differences are highlighted in this paper when compared to previous works: (1) the fault tree proposed here accounts for the uncertainty in both hydrological and health components, (2) system failure within the fault tree is defined in terms of risk being above a threshold value, whereas previous studies that used fault trees used auxiliary events such as exceedance of critical concentration levels, and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
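To make the module idea concrete, the sketch below computes a top-event probability from a chain of hypothetical risk modules under the classical independence assumption that the paper's stochastic fault tree relaxes. The module names and probability values are invented for illustration, not taken from the study.

```python
# Minimal sketch (not the authors' code): top-event probability for a
# small health-risk fault tree under the classical independence assumption.
# Module names and probabilities are hypothetical.

def p_or(*ps):
    """P(at least one of several independent events occurs)."""
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

def p_and(*ps):
    """P(all of several independent events occur)."""
    q = 1.0
    for p in ps:
        q *= p
    return q

# Hypothetical modules: contaminant release, transport to the well,
# exposure, and adverse health effect given exposure.
p_release   = 0.05
p_transport = 0.30
p_exposure  = 0.60
p_effect    = 0.10

# "System failure" defined as risk above a threshold requires every
# module in this causal chain to occur (an AND gate).
p_risk = p_and(p_release, p_transport, p_exposure, p_effect)
print(f"P(top event) = {p_risk:.2e}")
```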
Planning effectiveness may grow on fault trees.
Chow, C W; Haddad, K; Mannino, B
1991-10-01
The first step of a strategic planning process--identifying and analyzing threats and opportunities--requires subjective judgments. By using an analytical tool known as a fault tree, healthcare administrators can reduce the unreliability of subjective decision making by creating a logical structure for problem solving and decision making. A case study of 11 healthcare administrators showed that an analysis technique called prospective hindsight can add to a fault tree's ability to improve a strategic planning process.
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Fault Tree Analysis (FTA) can be used for technology transfer when the relevant problem (called the 'top event' in FTA) is solved in a technology centre and the results are diffused to interested parties (usually Small and Medium Enterprises, SMEs) that lack the proper equipment and the required know-how to solve the problem on their own. Nevertheless, there is a significant drawback in this procedure: the information usually provided by the SMEs to the technology centre, about production conditions and corresponding quality characteristics of the product, and (sometimes) the relevant expertise in the Knowledge Base of this centre may be inadequate to form a complete fault tree. Since such cases are quite frequent in practice, we have developed a methodology for transforming an incomplete fault tree into an Ishikawa diagram, which is more flexible and less strict in establishing causal chains because it works at a surface phenomenological level with a limited number of fault categories. On the other hand, such an Ishikawa diagram can be extended to simulate a fault tree as relevant knowledge increases. An implementation of this transformation, referring to anodization of aluminium, is presented.
Testing-Based Compiler Validation for Synchronous Languages
NASA Technical Reports Server (NTRS)
Garoche, Pierre-Loic; Howar, Falk; Kahsai, Temesghen; Thirioux, Xavier
2014-01-01
In this paper we present a novel lightweight approach to validate compilers for synchronous languages. Instead of verifying a compiler for all input programs or providing a fixed suite of regression tests, we extend the compiler to generate a test-suite with high behavioral coverage and geared towards discovery of faults for every compiled artifact. We have implemented and evaluated our approach using a compiler from Lustre to C.
A systematic risk management approach employed on the CloudSat project
NASA Technical Reports Server (NTRS)
Basilio, R. R.; Plourde, K. S.; Lam, T.
2000-01-01
The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.
Fault Tree Analysis: A Bibliography
NASA Technical Reports Server (NTRS)
2000-01-01
Fault tree analysis is a top-down approach to the identification of process hazards. It is one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts, fault tree analysis, risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
Preliminary atlas of active shallow tectonic deformation in the Puget Lowland, Washington
Barnett, Elizabeth A.; Haugerud, Ralph A.; Sherrod, Brian L.; Weaver, Craig S.; Pratt, Thomas L.; Blakely, Richard J.
2010-01-01
This atlas presents an up-to-date map compilation of the geological and geophysical observations that underpin interpretations of active, surface-deforming faults in the Puget Lowland, Washington. Shallow lowland faults are mapped where observations of deformation from paleoseismic, seismic-reflection, and potential-field investigations converge. Together, results from these studies strengthen the identification and characterization of regional faults and show that as many as a dozen shallow faults have been active during the Holocene. The suite of maps presented in our atlas identifies sites that have evidence of deformation attributed to these shallow faults. For example, the paleoseismic-investigations map shows where coseismic surface rupture and deformation produced geomorphic scarps and deformed shorelines. Other maps compile results of seismic-reflection and potential-field studies that demonstrate evidence of deformation along suspected fault structures in the subsurface. Summary maps show the fault traces derived from, and draped over, the datasets presented in the preceding maps. Overall, the atlas provides map users with a visual overview of the observations and interpretations that support the existence of active, shallow faults beneath the densely populated Puget Lowland.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarrack, A.G.
The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems, were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support system failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the "Facility Commitments" section. The purpose of the "Assumptions" section is to clarify the basis for fault tree modeling, and it is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).
Graphical fault tree analysis for fatal falls in the construction industry.
Chi, Chia-Fen; Lin, Syuan-Zih; Dewi, Ratna Sari
2014-11-01
The current study applied a fault tree analysis to represent the causal relationships among events and causes that contributed to fatal falls in the construction industry. Four hundred and eleven work-related fatalities in the Taiwanese construction industry were analyzed in terms of age, gender, experience, falling site, falling height, company size, and the causes for each fatality. Given that most fatal accidents involve multiple events, the current study coded up to a maximum of three causes for each fall fatality. After the Boolean algebra and minimal cut set analyses, accident causes associated with each falling site can be presented as a fault tree to provide an overview of the basic causes which could trigger fall fatalities in the construction industry. Graphical icons were designed for each falling site along with the associated accident causes to illustrate the fault tree in a graphical manner. A graphical fault tree can improve inter-disciplinary discussion of risk management and the communication of accident causation to first line supervisors.
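As a hedged illustration of the Boolean algebra and minimal cut set step mentioned above, the sketch below simplifies a small, invented fall-fatality tree to disjunctive normal form with sympy; each resulting conjunct is one minimal combination of causes. The events and tree structure are hypothetical, not the study's coded causes.

```python
# Hedged sketch: deriving minimal cut sets for a small fall-fatality
# fault tree via Boolean simplification. Events and structure invented.
from sympy import symbols
from sympy.logic.boolalg import simplify_logic

no_guardrail, no_harness, fragile_surface, opening = symbols(
    "no_guardrail no_harness fragile_surface opening")

# TOP = fall from an unprotected edge OR fall through a fragile surface
# OR fall through an unguarded opening.
top = (no_guardrail & no_harness) | (fragile_surface & no_harness) | opening

# Reducing to disjunctive normal form exposes the minimal cut sets:
# each conjunct is a minimal combination of causes sufficient for a fatality.
print(simplify_logic(top, form="dnf"))
# -> opening | (no_guardrail & no_harness) | (fragile_surface & no_harness)
```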
Fault Tree Analysis for an Inspection Robot in a Nuclear Power Plant
NASA Astrophysics Data System (ADS)
Ferguson, Thomas A.; Lu, Lixuan
2017-09-01
The life extension of current nuclear reactors has led to an increasing demand for inspection and maintenance of critical reactor components that are too expensive to replace. To reduce the exposure dosage to workers, robotics has become an attractive alternative as a preventative safety tool in nuclear power plants. It is crucial to understand the reliability of these robots in order to increase the veracity of, and confidence in, their results. This study applies Fault Tree (FT) analysis to a coolant-outlet-pipe snake-arm inspection robot in a nuclear power plant. Fault trees were constructed for a qualitative analysis to determine the reliability of the robot. Insight on the applicability of fault tree methods for inspection robotics in the nuclear industry is gained through this investigation.
Interim reliability evaluation program, Browns Ferry fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, M.E.
1981-01-01
An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation Program, simplifying the recording and displaying of events yet maintaining the system of identifying faults. The level of investigation is not changed. The analytical thought process inherent in the conventional method is not compromised. But the abbreviated method takes less time, and the fault modes are much more visible.
Object-Oriented Algorithm For Evaluation Of Fault Trees
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1992-01-01
Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require intensive computer calculations. A computer program has been developed to implement the PFTA.
Using Fault Trees to Advance Understanding of Diagnostic Errors.
Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep
2017-11-01
Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of the diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. The article demonstrates how factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities.
Locating hardware faults in a data communications network of a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-01-12
Hardware fault location in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running the same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
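The localization rule in this abstract lends itself to a short sketch: a link incident on a parent node is suspect exactly when the parent's test tree fails while every child's test tree passes. The tree, the simulated broken link, and the helper functions below are all hypothetical, not the patented implementation.

```python
# Hypothetical tree topology and a single simulated broken link.
tree = {"root": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}
broken_links = {("a", "c")}

def links(node):
    """All parent-child links in the test tree rooted at node."""
    out = []
    for child in tree[node]:
        out.append((node, child))
        out.extend(links(child))
    return out

def suite_passes(node):
    """A test suite on a subtree fails iff it exercises a broken link."""
    return not any(link in broken_links for link in links(node))

def locate(node):
    """Flag a parent whose own test tree fails while all child trees pass."""
    if not suite_passes(node) and all(suite_passes(c) for c in tree[node]):
        print(f"defective link between {node} and one of its children")
    for child in tree[node]:
        locate(child)

locate("root")   # -> defective link between a and one of its children
```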
Khan, F I; Abbasi, S A
2000-07-10
Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has been a powerful technique traditionally used in identifying hazards in nuclear installations and power industries. As the systematic articulation of the fault tree is associated with assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named AS-II), which makes the application of FTA simpler, quicker, and cheaper, thus opening up the possibility of its wider use in risk assessment in chemical process industries. Based on this methodology, we have developed a computer-automated tool. The details are presented in this paper.
DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarity between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimal cut set for a node is a cut set with the property that, if any one of the failures in the set were removed, the remaining failures would no longer cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal root node. A subtree is created for each of the inputs to the digraph terminal node and the roots of those subtrees are added as children of the top node of the fault tree. Every node in the digraph upstream of the terminal node will be visited and converted. During the conversion process, the algorithm keeps track of the path from the digraph terminal node to the current digraph node. If a node is visited twice, then the program has found a cycle in the digraph. This cycle is broken by finding the minimal cut sets of the twice-visited digraph node and forming those cut sets into subtrees. Another implementation of the algorithm resolves loops by building a subtree based on the digraph minimal cut sets calculation. It does not reduce the subtree to minimal cut set form. This second implementation produces larger fault trees, but runs much faster than the version using minimal cut sets since it does not spend time reducing the subtrees to minimal cut sets. The fault trees produced by DG TO FT will contain OR gates, AND gates, Basic Event nodes, and NOP gates. The results of a translation can be output as a text object description of the fault tree similar to the text digraph input format. The translator can also output a LISP language formatted file and an augmented LISP file which can be used by the FTDS (ARC-13019) diagnosis system, available from COSMIC, which performs diagnostic reasoning using the fault tree as a knowledge base. DG TO FT is written in C-language to be machine independent.
It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. DG TO FT is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is provided on the distribution medium. DG TO FT was developed in 1992. Sun, and SunOS are trademarks of Sun Microsystems, Inc. DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc. System 7 is a trademark of Apple Computers Inc. Microsoft Word is a trademark of Microsoft Corporation.
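The backward traversal described above can be sketched compactly. Below, a tiny hypothetical digraph is converted into a logically equivalent tree by walking upstream from a terminal node; where DG TO FT breaks a cycle by computing the minimal cut sets of the twice-visited node, this simplified sketch merely prunes the revisited node from the current path.

```python
# Hypothetical digraph: inputs per node; A and B form a cycle.
digraph = {"T": ["A", "B"], "A": ["B"], "B": ["A"]}

def to_fault_tree(node, path=()):
    """OR-gate tree: a node fails via its own basic event or any input's failure.
    NOTE: DG TO FT breaks cycles using minimal cut sets of the revisited
    node; pruning the revisited node here is a deliberate simplification."""
    if node in path:                  # revisited on the current path: a cycle
        return None
    children = [("BASIC", node)]      # basic event for this node's own failure
    for inp in digraph.get(node, []):
        subtree = to_fault_tree(inp, path + (node,))
        if subtree is not None:
            children.append(subtree)
    return ("OR", node, children)

from pprint import pprint
pprint(to_fault_tree("T"))
```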
Assessing Survivability Using Software Fault Injection
2001-04-01
Defense Technical Information Center Compilation Part Notice ADP010875. TITLE: Assessing Survivability Using Software Fault Injection. Jeffrey Voas, Reliable Software Technologies, 21351 Ridgetop Circle, #400, Dulles, VA 20166, jmvoas@rstcorp.com.
Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
Crone, Anthony J.; Machette, Michael N.; Bradley, Lee-Ann; Mahan, Shannon
2006-01-01
In this report, we present detailed maps of the trenches and a compilation of field and laboratory data used to support our interpretation of the history of four (PE1-PE4) prehistoric surface-faulting earthquakes at this site.
Fault diagnosis of power transformer based on fault-tree analysis (FTA)
NASA Astrophysics Data System (ADS)
Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu
2017-05-01
Power transformers are important equipment in power plants and substations, and as the link between transmission and distribution they form an important hub of the power system. Their performance directly affects the reliability and stability of the power system. This paper first classifies power transformer faults into five types, then divides a transformer fault into three stages along the time dimension, and uses routine dissolved-gas analysis (DGA) and infrared diagnostic criteria to establish the running state of the power transformer. Finally, according to the needs of power transformer fault diagnosis, a fault tree for the power transformer is constructed by stepwise refinement from the general to the specific.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection, including loops, between the nodes in the graph. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860), available from COSMIC, are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc.
DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
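The cut set combination step that CUTSETS performs at each gate (a union of the children's cut sets at OR gates, a cross-product at AND gates, followed by reduction to minimal form) can be sketched as follows. The fault tree below is hypothetical, and the code is an illustration of the technique, not the CUTSETS source.

```python
# Hypothetical fault tree: TOP = (E1 AND E2) OR E3.
tree = {
    "TOP": ("OR",  ["G1", "E3"]),
    "G1":  ("AND", ["E1", "E2"]),
    "E1":  ("BASIC", []),
    "E2":  ("BASIC", []),
    "E3":  ("BASIC", []),
}

def minimize(sets):
    """Dedupe, then drop any cut set that strictly contains another."""
    unique = set(sets)
    return [s for s in unique if not any(t < s for t in unique)]

def cut_sets(node):
    """Recursive top-down parse combining child cut sets at each gate."""
    kind, children = tree[node]
    if kind == "BASIC":
        return [frozenset([node])]
    child_sets = [cut_sets(c) for c in children]
    if kind == "OR":      # OR gate: union of the children's cut set lists
        merged = [s for cs in child_sets for s in cs]
    else:                 # AND gate: cross-product, one set from each child
        merged = [frozenset()]
        for cs in child_sets:
            merged = [a | b for a in merged for b in cs]
    return minimize(merged)

print(cut_sets("TOP"))    # [{'E1', 'E2'}, {'E3'}] in some order
```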
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the most optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
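One classical measure behind such sensitivity rankings is Birnbaum importance, the change in top-event probability when a basic event is switched from certain failure to certain success. The sketch below computes it by finite difference for a toy tree; the tree, the probabilities, and the use of absolute (rather than relative) failure data are assumptions for illustration, not details of the IMPORTANCE code.

```python
# Illustrative Birnbaum importance, I_B(i) = P(top | i=1) - P(top | i=0),
# for a hypothetical tree TOP = (E1 AND E2) OR E3 with independent events.

def p_top(p):
    """Top-event probability for TOP = (E1 AND E2) OR E3."""
    p_g1 = p["E1"] * p["E2"]
    return 1.0 - (1.0 - p_g1) * (1.0 - p["E3"])

p = {"E1": 0.01, "E2": 0.05, "E3": 0.001}

for e in p:
    hi = dict(p, **{e: 1.0})   # event certain to occur
    lo = dict(p, **{e: 0.0})   # event certain not to occur
    print(f"Birnbaum importance of {e}: {p_top(hi) - p_top(lo):.4f}")
```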
Fire safety in transit systems fault tree analysis
DOT National Transportation Integrated Search
1981-09-01
Fire safety countermeasures applicable to transit vehicles are identified and evaluated. This document contains fault trees which illustrate the sequences of events which may lead to a transit-fire related casualty. A description of the basis for the...
System Analysis by Mapping a Fault-tree into a Bayesian-network
NASA Astrophysics Data System (ADS)
Sheng, B.; Deng, C.; Wang, Y. H.; Tang, L. H.
2018-05-01
In view of the limitations of fault tree analysis in reliability assessment, the Bayesian Network (BN) has been studied as an alternative technique. After a brief introduction to the method for mapping a Fault Tree (FT) into an equivalent BN, equations used to calculate the structure importance degree, the probability importance degree, and the critical importance degree are presented. Furthermore, the correctness of these equations is proved mathematically. Using an aircraft landing gear's FT as an example, an equivalent BN is developed and analysed. The results show that richer and more accurate information is achieved through the BN method than through the FT, which demonstrates that the BN is a superior technique in both reliability assessment and fault diagnosis.
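The mapping itself is mechanical: basic events become root nodes carrying their prior failure probabilities, and each gate becomes a child node with a deterministic conditional probability table. The sketch below illustrates this with brute-force enumeration on a hypothetical three-event tree, recovering both the top-event probability and a diagnostic posterior of the kind a BN provides and a plain FT does not.

```python
# Minimal sketch of the FT-to-BN idea (hypothetical structure and numbers):
# inference by enumeration over the joint distribution of the basic events.
from itertools import product

priors = {"E1": 0.01, "E2": 0.05, "E3": 0.001}

def top(state):
    """Deterministic gates: TOP = (E1 AND E2) OR E3."""
    return (state["E1"] and state["E2"]) or state["E3"]

p_top = 0.0
p_e1_and_top = 0.0
for values in product([True, False], repeat=3):
    state = dict(zip(priors, values))
    weight = 1.0
    for event, occurred in state.items():
        weight *= priors[event] if occurred else 1.0 - priors[event]
    if top(state):
        p_top += weight
        if state["E1"]:
            p_e1_and_top += weight

print(f"P(top) = {p_top:.6f}")
print(f"P(E1 | top) = {p_e1_and_top / p_top:.4f}")   # diagnostic posterior
```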
A diagnosis system using object-oriented fault tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common Lisp using Flavors.
Reset Tree-Based Optical Fault Detection
Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon
2013-01-01
In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267
Fault tree applications within the safety program of Idaho Nuclear Corporation
NASA Technical Reports Server (NTRS)
Vesely, W. E.
1971-01-01
Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure or accident causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.
Crone, Anthony J.; Wheeler, Russell L.
2000-01-01
The USGS is currently leading an effort to compile published geological information on Quaternary faults, folds, and earthquake-induced liquefaction in order to develop an internally consistent database on the locations, ages, and activity rates of major earthquake-related features throughout the United States. This report is the compilation for such features in the Central and Eastern United States (CEUS), which for the purposes of the compilation is defined as the region extending from the Rocky Mountain Front eastward to the Atlantic seaboard. A key objective of this national compilation is to provide a comprehensive database of Quaternary features that might generate strong ground motion and therefore should be considered in assessing the seismic hazard throughout the country. In addition to printed versions of regional and individual state compilations, the database will be available on the World-Wide Web, where it will be readily accessible to everyone. The primary purpose of these compilations and the derivative database is to provide a comprehensive, uniform source of geological information that can be used to complement the other types of data that are used in seismic-hazard assessments. Within our CEUS study area, which encompasses more than 60 percent of the conterminous U.S., we summarize the geological information on 69 features that are categorized into four classes (Class A, B, C, and D) based on what is known about the feature's Quaternary activity. The CEUS contains only 13 features of tectonic origin for which there is convincing evidence of Quaternary activity (Class A features). Of the remaining 56 features, 11 require further study in order to confidently define their potential as possible sources of earthquake-induced ground motion (Class B), whereas the remaining features either lack convincing geologic evidence of Quaternary tectonic faulting or have been studied carefully enough to determine that they do not pose a significant seismic hazard (Classes C and D). The correlation between historical seismicity and Quaternary faults and liquefaction features in the CEUS is generally poor, which probably reflects the long return times between successive movements on individual structures. Some Quaternary faults and liquefaction features are located in aseismic areas or where historical seismicity is sparse. These relations indicate that the record of historical seismicity does not identify all potential seismic sources in the CEUS. Furthermore, geological studies of some currently aseismic faults have shown that the faults have generated strong earthquakes in the geologically recent past. Thus, the combination of geological information and seismological data can provide better insight into potential earthquake sources and thereby contribute to better, more comprehensive seismic-hazard assessments.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
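For context, the Monte Carlo baseline that the closed-form approximation is compared against can be sketched in a few lines: sample lognormal basic-event probabilities, push each sample through the tree's Boolean expression, and read off percentiles. The tree, medians, and error factors below are hypothetical, and the error factor is taken here as the ratio of the 95th percentile to the median.

```python
# Sketch of Monte Carlo uncertainty propagation through a small fault
# tree with lognormal basic events. All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def lognormal(median, error_factor, size):
    """Lognormal samples with given median; EF = 95th percentile / median."""
    sigma = np.log(error_factor) / 1.645
    return median * np.exp(sigma * rng.standard_normal(size))

e1 = lognormal(1e-3, 3.0, N)
e2 = lognormal(5e-3, 3.0, N)
e3 = lognormal(1e-4, 10.0, N)

# TOP = (E1 AND E2) OR E3, assuming independent basic events.
p_top = 1.0 - (1.0 - e1 * e2) * (1.0 - e3)

for q in (5, 50, 95):
    print(f"{q}th percentile of P(top): {np.percentile(p_top, q):.3e}")
```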
Fault Tree Analysis as a Planning and Management Tool: A Case Study
ERIC Educational Resources Information Center
Witkin, Belle Ruth
1977-01-01
Fault Tree Analysis is an operations research technique used to analyse the most probable modes of failure in a system, in order to redesign or monitor the system more closely and so increase its likelihood of success. (Author)
NASA Astrophysics Data System (ADS)
Rodak, C. M.; McHugh, R.; Wei, X.
2016-12-01
The development and combination of horizontal drilling and hydraulic fracturing have unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating potential risks posed by directional hydraulic fracturing activities. These studies, balanced with the potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantification of the overall risk posed by hydraulic fracturing at the system level is rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is demonstrated with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity- and quality-based impacts and subdivided based on the stage of the hydraulic fracturing process in which the data are relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The structure of the fault tree described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic. This project shows the utility of fault tree analysis for the visual communication of the potential risks of hydraulic fracturing activities on groundwater resources.
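Once the tree is fixed, the "straightforward equation" the abstract refers to is ordinary Boolean probability arithmetic over the basic events. The sketch below evaluates a two-branch toy version, a quality-based spill pathway and a quantity-based over-exploitation event; all stage names and probabilities are invented for illustration.

```python
# Toy risk equation for a hypothetical two-branch fracturing fault tree,
# assuming independent basic events. Numbers are purely illustrative.

def p_or(*ps):
    """P(at least one of several independent events occurs)."""
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

# Quality-based pathway: a surface spill occurs AND reaches the aquifer.
p_spill         = 0.02
p_reach_aquifer = 0.10
p_quality = p_spill * p_reach_aquifer      # AND gate

# Quantity-based impact: excessive water withdrawal (single basic event).
p_quantity = 0.01

p_failure = p_or(p_quality, p_quantity)    # OR over the impact branches
print(f"P(failure) = {p_failure:.4f}")
```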
Fault Tree Analysis: An Emerging Methodology for Instructional Science.
ERIC Educational Resources Information Center
Wood, R. Kent; And Others
1979-01-01
Describes Fault Tree Analysis, a tool for systems analysis which attempts to identify possible modes of failure in systems to increase the probability of success. The article defines the technique and presents the steps of FTA construction, focusing on its application to education. (RAO)
Program listing for fault tree analysis of JPL technical report 32-1542
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
The computer program listing for the MAIN program and those subroutines unique to the fault tree analysis are described. Some subroutines are used for analyzing the reliability block diagram. The program is written in FORTRAN 5 and runs on a UNIVAC 1108.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question. The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values.
None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The RRR for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event. Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The RIR for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
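The RRR and RIR calculations described above reduce to re-evaluating the top-event probability with one basic event pinned to perfect (0) or failed (1) performance. The toy example below follows the abstract's wording and reports them as risk differences; the tree and probabilities are hypothetical, and nothing here is MSET or SAPHIRE output.

```python
# Illustrative RRR/RIR computation for a hypothetical tree
# TOP = (E1 AND E2) OR E3 with independent basic events.

def p_top(p):
    """Top-event (relative) risk for the toy tree."""
    return 1.0 - (1.0 - p["E1"] * p["E2"]) * (1.0 - p["E3"])

base = {"E1": 0.1, "E2": 0.2, "E3": 0.05}
r0 = p_top(base)                               # baseline relative risk

for e in base:
    rrr = r0 - p_top(dict(base, **{e: 0.0}))   # risk decrease if perfect
    rir = p_top(dict(base, **{e: 1.0})) - r0   # risk increase if failed
    print(f"{e}: RRR = {rrr:.4f}, RIR = {rir:.4f}")
```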
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
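As a rough illustration of the transient computation described above, the sketch below solves a tiny three-state Markov model via the matrix exponential; SciPy's expm happens to use a scaled Pade method, the same family of techniques that gives PAWS its name. The states, rates, and mission time are invented, and this is not PAWS or STEM code.

```python
# Illustrative only: a 3-state Markov model (OK -> degraded -> failed)
# solved with a matrix exponential, conceptually similar to (but not the
# same code as) the PAWS/STEM approach. Rates and mission time are invented.
import numpy as np
from scipy.linalg import expm

lam, delta = 1e-4, 1e-3          # first- and second-failure rates (per hour)
# Generator matrix Q: rows sum to zero; state 2 ("failed") is absorbing.
Q = np.array([[-lam,  lam,    0.0],
              [ 0.0, -delta,  delta],
              [ 0.0,  0.0,    0.0]])

p0 = np.array([1.0, 0.0, 0.0])   # start fully operational
T = 10.0                         # mission time in hours
pT = p0 @ expm(Q * T)            # transient solution p(T) = p(0) exp(QT)
print(f"P(system failed by T) = {pT[2]:.3e}")
```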
Probabilistic Seismic Hazard Assessment of the Chiapas State (SE Mexico)
NASA Astrophysics Data System (ADS)
Rodríguez-Lomelí, Anabel Georgina; García-Mayordomo, Julián
2015-04-01
The Chiapas State, in southeastern Mexico, is a very active seismic region due to the interaction of three tectonic plates: North America, Cocos, and Caribbean. We present a probabilistic seismic hazard assessment (PSHA) specifically performed to evaluate seismic hazard in the Chiapas state. The PSHA was based on a composite seismic catalogue homogenized to Mw and used a logic tree procedure for the consideration of different seismogenic source models and ground motion prediction equations (GMPEs). The results were obtained in terms of peak ground acceleration as well as spectral accelerations. The earthquake catalogue was compiled from the International Seismological Center and the Servicio Sismológico Nacional de México sources. Two different seismogenic source zone (SSZ) models were devised based on a revision of the tectonics of the region and the available geomorphological and geological maps. The SSZs were then refined through the analysis of geophysical data, resulting in two main SSZ models. The Gutenberg-Richter parameters for each SSZ were calculated from the declustered and homogenized catalogue, while the maximum expected earthquake was assessed from both the catalogue and geological criteria. Several worldwide and regional GMPEs for subduction and crustal zones were revised. For each SSZ model we considered four possible combinations of GMPEs. Finally, hazard was calculated in terms of PGA and SA for 500-, 1,000-, and 2,500-year return periods for each branch of the logic tree using the CRISIS2007 software. The final hazard maps represent the mean values obtained from the two seismogenic and four attenuation models considered in the logic tree. For the three return periods analyzed, the maps locate the most hazardous areas in the Chiapas Central Pacific Zone, the Pacific Coastal Plain, and in the Motagua and Polochic Fault Zone; intermediate hazard values in the Chiapas Batholith Zone and in the Strike-Slip Faults Province. The hazard decreases towards the northeast across the Reverse Faults Province and up to the Yucatan Platform, where the lowest values are reached. We also produced uniform hazard spectra (UHS) for the three main cities of Chiapas. Tapachula city presents the highest spectral accelerations, while Tuxtla Gutierrez and San Cristobal de las Casas cities show similar values. We conclude that seismic hazard in Chiapas is chiefly controlled by the subduction of the Cocos plate beneath the North American and Caribbean plates, which makes the coastal areas the most hazardous. Additionally, the Motagua and Polochic Fault Zones are also important, increasing the hazard particularly in southeastern Chiapas.
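For readers unfamiliar with the Gutenberg-Richter step mentioned above, the sketch below applies the standard Aki/Utsu maximum-likelihood b-value estimator to a toy magnitude list; the magnitudes, completeness magnitude, and bin width are placeholders, not the Chiapas catalogue.

```python
# Hedged sketch: Aki (1965) maximum-likelihood b-value with the Utsu bin
# correction, applied to placeholder magnitudes (not the Chiapas data).
import math

mags = [4.1, 4.3, 4.0, 5.2, 4.6, 4.8, 4.0, 4.2, 5.7, 4.4]  # Mw >= Mc
Mc, dM = 4.0, 0.1        # completeness magnitude and magnitude bin width

mean_m = sum(mags) / len(mags)
b = math.log10(math.e) / (mean_m - (Mc - dM / 2))   # Aki/Utsu estimator
a = math.log10(len(mags)) + b * Mc                  # a-value, catalogue span
print(f"b = {b:.2f}, a = {a:.2f}")
```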
Direct evaluation of fault trees using object-oriented programming techniques
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1989-01-01
Object-oriented programming techniques are used in an algorithm for the direct evaluation of fault trees. The algorithm combines a simple bottom-up procedure for trees without repeated events with a top-down recursive procedure for trees with repeated events. The object-oriented approach results in a dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. An example is presented in which the results of the algorithm implemented with conventional techniques are compared to those of the object-oriented approach.
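For concreteness, here is a minimal sketch of the simple bottom-up pass; it is valid only for trees without repeated events, where every gate's inputs are independent, which is exactly the case the recursive top-down procedure exists to generalize. The gate structure and probabilities are invented.

```python
# Minimal sketch of the bottom-up pass for a fault tree WITHOUT repeated
# basic events (independence lets each gate be reduced locally). Trees with
# repeated events need the recursive procedure the abstract describes.

def evaluate(node):
    """Return the failure probability of a gate or basic event."""
    if isinstance(node, float):          # basic event: its probability
        return node
    op, children = node
    ps = [evaluate(c) for c in children]
    if op == "AND":                      # all inputs must fail
        prob = 1.0
        for p in ps:
            prob *= p
        return prob
    if op == "OR":                       # at least one input fails
        prob = 1.0
        for p in ps:
            prob *= (1.0 - p)
        return 1.0 - prob
    raise ValueError(f"unknown gate {op}")

# Top event: (A AND B) OR C, with invented basic-event probabilities.
tree = ("OR", [("AND", [1e-3, 2e-3]), 1e-4])
print(f"P(top event) = {evaluate(tree):.3e}")
```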
NASA Astrophysics Data System (ADS)
Guns, K. A.; Bennett, R. A.; Blisniuk, K.
2017-12-01
To better evaluate the distribution and transfer of strain and slip along the Southern San Andreas Fault (SSAF) zone in the northern Coachella valley in southern California, we integrate geological and geodetic observations to test whether strain is being transferred away from the SSAF system towards the Eastern California Shear Zone through microblock rotation of the Eastern Transverse Ranges (ETR). The faults of the ETR consist of five east-west trending left-lateral strike-slip faults that have measured cumulative offsets of up to 20 km and as low as 1 km. Current kinematic and block models offer a variety of slip rate estimates, from as low as zero to as high as 7 mm/yr, suggesting a gap in our understanding of what role these faults play in the larger system. To determine whether present-day block rotation along these faults is contributing to strain transfer in the region, we are applying 10Be surface exposure dating methods to observed offset channel and alluvial fan deposits in order to estimate fault slip rates along two faults in the ETR. We present observations of offset geomorphic landforms using field mapping and LiDAR data at three sites along the Blue Cut Fault and one site along the Smoke Tree Wash Fault in Joshua Tree National Park that indicate recent Quaternary fault activity. Initial results of site mapping and clast count analyses reveal at least three stages of offset, including potential Holocene offsets, for one site along the Blue Cut Fault, while preliminary 10Be geochronology is in progress. These geologic slip rate data, combined with our new geodetic surface velocity field derived from updated campaign-based GPS measurements within Joshua Tree National Park, will allow us to construct a suite of elastic fault block models to elucidate rates of strain transfer away from the SSAF and how that strain transfer may be affecting the length of the interseismic period along the SSAF.
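The slip-rate arithmetic underlying such studies is compact enough to sketch: rate = offset / age, with first-order error propagation for uncorrelated uncertainties. The numbers below are placeholders, not Blue Cut Fault results.

```python
# Hedged example: a fault slip rate from an offset landform and a 10Be
# exposure age, with first-order (uncorrelated) error propagation.
offset, d_offset = 15.0, 2.0      # offset in meters, +/- 1-sigma (invented)
age, d_age = 25_000.0, 3_000.0    # exposure age in years, +/- 1-sigma

rate = offset / age * 1000.0      # convert m/yr to mm/yr
# relative errors of a ratio add in quadrature
rel = ((d_offset / offset) ** 2 + (d_age / age) ** 2) ** 0.5
print(f"slip rate = {rate:.2f} +/- {rate * rel:.2f} mm/yr")
```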
FAULT TREE ANALYSIS FOR EXPOSURE TO REFRIGERANTS USED FOR AUTOMOTIVE AIR CONDITIONING IN THE U.S.
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servic...
A Fault Tree Approach to Analysis of Organizational Communication Systems.
ERIC Educational Resources Information Center
Witkin, Belle Ruth; Stephens, Kent G.
Fault Tree Analysis (FTA) is a method of examining communication in an organization by focusing on: (1) the complex interrelationships in human systems, particularly in communication systems; (2) interactions across subsystems and system boundaries; and (3) the need to select and "prioritize" channels which will eliminate noise in the…
Applying fault tree analysis to the prevention of wrong-site surgery.
Abecassis, Zachary A; McElroy, Lisa M; Patel, Ronak M; Khorzad, Rebeca; Carroll, Charles; Mehrotra, Sanjay
2015-01-01
Wrong-site surgery (WSS) is a rare event that occurs to hundreds of patients each year. Despite national implementation of the Universal Protocol over the past decade, development of effective interventions remains a challenge. We performed a systematic review of the literature reporting root causes of WSS and used the results to perform a fault tree analysis to assess the reliability of the system in preventing WSS and identifying high-priority targets for interventions aimed at reducing WSS. Process components where a single error could result in WSS were labeled with OR gates; process aspects reinforced by verification were labeled with AND gates. The overall redundancy of the system was evaluated based on prevalence of AND gates and OR gates. In total, 37 studies described risk factors for WSS. The fault tree contains 35 faults, most of which fall into five main categories. Despite the Universal Protocol mandating patient verification, surgical site signing, and a brief time-out, a large proportion of the process relies on human transcription and verification. Fault tree analysis provides a standardized perspective of errors or faults within the system of surgical scheduling and site confirmation. It can be adapted by institutions or specialties to lead to more targeted interventions to increase redundancy and reliability within the preoperative process. Copyright © 2015 Elsevier Inc. All rights reserved.
Janecke, S.U.; Blankenau, J.J.; VanDenburg, C.J.; VanGosen, B.S.
2001-01-01
Compilation of a 1:100,000-scale map of normal faults and extensional folds in southwest Montana and adjacent Idaho reveals a complex history of normal faulting that spanned at least the last 50 m.y. and involved six or more generations of normal faults. The map is based on both published and unpublished mapping and shows normal faults and extensional folds between the valley of the Red Rock River of southwest Montana and the Lemhi and Birch Creek valleys of eastern Idaho between latitudes 45°05' N. and 44°15' N. in the Tendoy and Beaverhead Mountains. Some of the unpublished mapping has been compiled in Lonn and others (2000). Many traces of the normal faults parallel the generally northwest to north-northwest structural grain of the preexisting Sevier fold and thrust belt and dip west-southwest, but northeast- and east-striking normal faults are also prominent. Northeast-striking normal faults are subparallel to the traces of southeast-directed thrusts that shortened the foreland during the Laramide orogeny. It is unlikely that the northeast-striking normal faults reactivated fabrics in the underlying Precambrian basement, as has been documented elsewhere in southwestern Montana (Schmidt and others, 1984), because exposures of basement rocks in the map area exhibit north-northwest- to northwest-striking deformational fabrics (Lowell, 1965; M’Gonigle, 1993, 1994; M’Gonigle and Hait, 1997; M’Gonigle and others, 1991). The largest normal faults in the area are southwest-dipping normal faults that locally reactivate thrust faults (fig. 1). Normal faulting began before middle Eocene Challis volcanism and continues today. The extension direction flipped by about 90° four times.
Development of the Global Earthquake Model’s neotectonic fault database
Christophersen, Annemarie; Litchfield, Nicola; Berryman, Kelvin; Thomas, Richard; Basili, Roberto; Wallace, Laura; Ries, William; Hayes, Gavin P.; Haller, Kathleen M.; Yoshioka, Toshikazu; Koehler, Richard D.; Clark, Dan; Wolfson-Schwehr, Monica; Boettcher, Margaret S.; Villamor, Pilar; Horspool, Nick; Ornthammarath, Teraphan; Zuñiga, Ramon; Langridge, Robert M.; Stirling, Mark W.; Goded, Tatiana; Costa, Carlos; Yeats, Robert
2015-01-01
The Global Earthquake Model (GEM) aims to develop uniform, openly available standards, datasets, and tools for worldwide seismic risk assessment through global collaboration, transparent communication and adapting state-of-the-art science. GEM Faulted Earth (GFE) is one of GEM’s global hazard module projects. This paper describes GFE’s development of a modern neotectonic fault database and a unique graphical interface for the compilation of new fault data. A key design principle is that of an electronic field notebook for capturing observations a geologist would make about a fault. The database is designed to accommodate abundant as well as sparse fault observations. It features two layers, one for capturing neotectonic fault and fold observations, and the other to calculate potential earthquake fault sources from the observations. In order to test the flexibility of the database structure and to start a global compilation, five preexisting databases have been uploaded to the first layer and two to the second. In addition, the GFE project has characterised the world’s approximately 55,000 km of subduction interfaces in a globally consistent manner as a basis for generating earthquake event sets for inclusion in earthquake hazard and risk modelling. Following the subduction interface fault schema and including the trace attributes of the GFE database schema, the 2500-km-long frontal thrust fault system of the Himalaya has also been characterised. We propose the database structure to be used widely, so that neotectonic fault data can make a more complete and beneficial contribution to seismic hazard and risk characterisation globally.
Efficient Probabilistic Diagnostics for Electrical Power Systems
NASA Technical Reports Server (NTRS)
Mengshoel, Ole J.; Chavira, Mark; Cascio, Keith; Poll, Scott; Darwiche, Adnan; Uckun, Serdar
2008-01-01
We consider in this work the probabilistic approach to model-based diagnosis when applied to electrical power systems (EPSs). Our probabilistic approach is formally well-founded, as it is based on Bayesian networks and arithmetic circuits. We investigate the diagnostic task known as fault isolation, and pay special attention to meeting two of the main challenges, model development and real-time reasoning, often associated with real-world application of model-based diagnosis technologies. To address the challenge of model development, we develop a systematic approach to representing electrical power systems as Bayesian networks, supported by an easy-to-use specification language. To address the real-time reasoning challenge, we compile Bayesian networks into arithmetic circuits. Arithmetic circuit evaluation supports real-time diagnosis by being predictable and fast. In essence, we introduce a high-level EPS specification language from which Bayesian networks that can diagnose multiple simultaneous failures are auto-generated, and we illustrate the feasibility of using arithmetic circuits, compiled from Bayesian networks, for real-time diagnosis on real-world EPSs of interest to NASA. The experimental system is a real-world EPS, namely the Advanced Diagnostic and Prognostic Testbed (ADAPT) located at the NASA Ames Research Center. In experiments with the ADAPT Bayesian network, which currently contains 503 discrete nodes and 579 edges, we find high diagnostic accuracy in scenarios where one to three faults, both in components and sensors, were inserted. The time taken to compute the most probable explanation using arithmetic circuits has a small mean of 0.2625 milliseconds and standard deviation of 0.2028 milliseconds. In experiments with data from ADAPT we also show that arithmetic circuit evaluation substantially outperforms joint tree propagation and variable elimination, two alternative algorithms for diagnosis using Bayesian network inference.
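As a toy picture of the inference that the compiled arithmetic circuits accelerate, the snippet below computes a fault posterior from a single sensor reading by enumerating a two-node network; every probability is invented and this bears no relation to the ADAPT model's actual structure.

```python
# Toy illustration of the inference that compiled arithmetic circuits speed
# up: the posterior of a component fault given a sensor reading, computed by
# brute-force enumeration over a 2-node network. All probabilities invented.
p_fault = 0.01                               # prior P(fault)
p_low_given = {True: 0.95, False: 0.02}      # P(sensor=low | fault state)

# Bayes rule by enumerating the joint distribution
joint_fault   = p_fault * p_low_given[True]
joint_healthy = (1 - p_fault) * p_low_given[False]
posterior = joint_fault / (joint_fault + joint_healthy)
print(f"P(fault | sensor low) = {posterior:.3f}")
```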
A distributed programming environment for Ada
NASA Technical Reports Server (NTRS)
Brennan, Peter; Mcdonnell, Tom; Mcfarland, Gregory; Timmins, Lawrence J.; Litke, John D.
1986-01-01
Despite considerable commercial exploitation of fault-tolerant systems, significant and difficult research problems remain in such areas as fault detection and correction. A research project is described which constructs a distributed computing test bed for loosely coupled computers. The project is constructing a tool kit to support research into distributed control algorithms, including a distributed Ada compiler, distributed debugger, test harnesses, and environment monitors. The Ada compiler is being written in Ada and will implement distributed computing at the subsystem level. The design goal is to provide a variety of control mechanisms for distributed programming while retaining total transparency at the code level.
Langenheim, Victoria E.; Rymer, Michael J.; Catchings, Rufus D.; Goldman, Mark R.; Watt, Janet T.; Powell, Robert E.; Matti, Jonathan C.
2016-03-02
We describe high-resolution gravity and seismic refraction surveys acquired to determine the thickness of valley-fill deposits and to delineate geologic structures that might influence groundwater flow beneath the Smoke Tree Wash area in Joshua Tree National Park. These surveys identified a sedimentary basin that is fault-controlled. A profile across the Smoke Tree Wash fault zone reveals low gravity values and seismic velocities that coincide with a mapped strand of the Smoke Tree Wash fault. Modeling of the gravity data reveals a basin about 2–2.5 km long and 1 km wide that is roughly centered on this mapped strand, and bounded by inferred faults. According to the gravity model the deepest part of the basin is about 270 m, but this area coincides with low velocities that are not characteristic of typical basement complex rocks. Most likely, the density contrast assumed in the inversion is too high or the uncharacteristically low velocities represent highly fractured or weathered basement rocks, or both. A longer seismic profile extending onto basement outcrops would help differentiate which scenario is more accurate. The seismic velocities also determine the depth to water table along the profile to be about 40–60 m, consistent with water levels measured in water wells near the northern end of the profile.
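The trade-off between assumed density contrast and basin thickness noted above can be sketched with the infinite-slab (Bouguer) approximation h = dg / (2*pi*G*drho); the anomaly and contrast below are invented, chosen only to illustrate the ambiguity the seismic velocities help resolve.

```python
# Back-of-envelope version of the basin-depth trade-off: an infinite-slab
# approximation relating a gravity low to basin thickness. The anomaly and
# density contrast are illustrative, not the surveyed values.
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
dg_mgal = -4.5                # gravity anomaly in mGal (invented)
drho = -400.0                 # density contrast, kg/m^3 (fill vs basement)

dg = dg_mgal * 1e-5           # 1 mGal = 1e-5 m/s^2
h = dg / (2 * math.pi * G * drho)
print(f"slab-equivalent basin thickness ~ {h:.0f} m")
# A smaller assumed |drho| yields a thicker basin: the ambiguity the
# abstract resolves with seismic velocities.
```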
A Fault Tree Approach to Needs Assessment -- An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
A "failsafe" technology is presented based on a new unified theory of needs assessment. Basically the paper discusses fault tree analysis as a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur and then suggesting high priority avoidance strategies for those…
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu
2016-05-01
Recently, China has frequently experienced large-scale, severe and persistent haze pollution due to surging urbanization and industrialization and a rapid growth in the number of motor vehicles and energy consumption. Vehicle emissions due to the consumption of large quantities of fossil fuels are undoubtedly a critical factor in the haze pollution. This work focuses on the causation mechanism of haze pollution related to vehicle emissions for Guangzhou city by employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system of "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are discussed and identified by using this deductive FTA method. The qualitative and quantitative assessments of the fault tree system are carried out based on the structure, probability and critical importance degree analysis of the risk factors. The study may provide a new, simple and effective tool/strategy for the causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.
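One way to make the "critical importance degree" idea concrete is the Birnbaum measure, the sensitivity of the top-event probability to each basic event; the finite-difference sketch below uses an invented toy tree, not the paper's haze fault tree.

```python
# Hedged sketch of a Birnbaum-style importance measure, I_k = dP(top)/dp_k,
# estimated by finite differences on an invented AND-over-OR toy tree.
def top(p):
    exhaust, calm, inversion = p
    # top event: heavy exhaust AND (calm weather OR temperature inversion)
    return exhaust * (1 - (1 - calm) * (1 - inversion))

base = [0.30, 0.20, 0.10]     # invented basic-event probabilities
EPS = 1e-6
for k, name in enumerate(["exhaust", "calm weather", "inversion"]):
    bumped = list(base)
    bumped[k] += EPS
    importance = (top(bumped) - top(base)) / EPS
    print(f"Birnbaum importance of {name}: {importance:.4f}")
```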
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
The systems resilience research community has developed methods to manually insert additional source-program level assertions to trap errors, and also devised tools to conduct fault injection studies for scalar program codes. In this work, we contribute the first vector-oriented LLVM-level fault injector VULFI to help study the effects of faults in vector architectures that are of growing importance, especially for vectorizing loops. Using VULFI, we conduct a resiliency study of nine real-world vector benchmarks using Intel’s AVX and SSE extensions as the target vector instruction sets, and offer the first reported understanding of how faults affect vector instruction sets. We take this work further toward automating the insertion of resilience assertions during compilation. This is based on our observation that during intermediate (e.g., LLVM-level) code generation to handle full and partial vectorization, modern compilers exploit (and explicate in their code-documentation) critical invariants. These invariants are turned into error-checking code. We confirm the efficacy of these automatically inserted low-overhead error detectors for vectorized for-loops.
NASA Astrophysics Data System (ADS)
Sanchez-Vila, X.; de Barros, F.; Bolster, D.; Nowak, W.
2010-12-01
Assessing the potential risk of hydro(geo)logical supply systems to human populations is an interdisciplinary field. It relies on expertise in fields as distant as hydrogeology, medicine, and anthropology, and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties in hydrological, physiological and human behavioral parameters. We propose the use of fault trees to address the task of probabilistic risk analysis (PRA) and to support related management decisions. Fault trees allow decomposing the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural “Divide and Conquer” approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance and stage of analysis. The separation into modules allows for a true inter- and multi-disciplinary approach. This presentation highlights the three novel features of our work: (1) we define failure in terms of risk being above a threshold value, whereas previous studies used auxiliary events such as exceedance of critical concentration levels, (2) we plot an integrated fault tree that handles uncertainty in both hydrological and health components in a unified way, and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
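A minimal sketch of feature (1): basic-event probabilities are themselves treated as uncertain, and the probability that risk exceeds a threshold is estimated by Monte Carlo. The three modules, their lognormal spreads, and the threshold are all invented.

```python
# Sketch of the "stochastic fault tree" idea: module probabilities are
# uncertain, so the top event (risk above a threshold) gets a sampled
# exceedance probability. Distributions and threshold are invented.
import math
import random

def top_event(p_source, p_pathway, p_exposure):
    # series system: failure requires source AND pathway AND exposure
    return p_source * p_pathway * p_exposure

THRESHOLD = 1e-6
N = 100_000
exceed = 0
for _ in range(N):
    p1 = min(1.0, random.lognormvariate(math.log(1e-2), 0.5))
    p2 = min(1.0, random.lognormvariate(math.log(5e-2), 0.7))
    p3 = min(1.0, random.lognormvariate(math.log(1e-3), 0.4))
    if top_event(p1, p2, p3) > THRESHOLD:
        exceed += 1
print(f"P(risk metric > threshold) ~ {exceed / N:.3f}")
```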
A fuzzy decision tree for fault classification.
Zio, Enrico; Baraldi, Piero; Popescu, Irina C
2008-02-01
In plant accident management, the control room operators are required to identify the causes of the accident based on the different patterns of evolution that the monitored process variables develop. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data and then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
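To make the fuzzy if-then idea tangible, here is a minimal sketch with triangular memberships and two hand-written rules; the variables, linguistic terms, and rules are invented, not the clustering-derived rules of the paper.

```python
# Minimal sketch of transparent fuzzy if-then fault classification: two
# monitored variables, triangular memberships, two readable rules. All
# names, terms, and rules are invented, not the reactor case study's.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(flow, temp):
    flow_low  = tri(flow, 0.0, 0.2, 0.5)
    temp_high = tri(temp, 0.5, 0.8, 1.0)
    # Rule 1: IF flow is low AND temp is high THEN valve fault
    valve = min(flow_low, temp_high)
    # Rule 2: IF flow is low AND temp is not high THEN sensor fault
    sensor = min(flow_low, 1.0 - temp_high)
    return max([("valve fault", valve), ("sensor fault", sensor),
                ("normal", 1.0 - max(valve, sensor))], key=lambda r: r[1])

print(classify(flow=0.25, temp=0.85))   # -> ('valve fault', 0.75)
```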
On Fusing Recursive Traversals of K-d Trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
Loop fusion is a key program transformation for data locality optimization that is implemented in production compilers. But optimizing compilers currently cannot exploit fusion opportunities across a set of recursive tree traversal computations with producer-consumer relationships. In this paper, we develop a compile-time approach to dependence characterization and program transformation to enable fusion across recursively specified traversals over k-ary trees. We present the FuseT source-to-source code transformation framework to automatically generate fused composite recursive operators from an input program containing a sequence of primitive recursive operators. We use our framework to implement fused operators for MADNESS, Multiresolution Adaptive Numerical Environment for Scientific Simulation. We show that locality optimization through fusion can offer more than an order of magnitude performance improvement.
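A toy version of the transformation FuseT automates follows: a producer traversal (scaling) and a consumer traversal (summing) over a binary tree, and the fused single pass that replaces them. This illustrates the idea only, not FuseT's dependence analysis.

```python
# Toy illustration of fusing producer-consumer recursive tree traversals:
# pass 1 scales every leaf, pass 2 sums the leaves; the fused version does
# both in a single traversal, touching each node once.
class Node:
    def __init__(self, value=None, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def scale(node, s):                     # producer traversal
    if node.value is not None:
        node.value *= s
    else:
        scale(node.left, s); scale(node.right, s)

def total(node):                        # consumer traversal
    if node.value is not None:
        return node.value
    return total(node.left) + total(node.right)

def scale_and_total(node, s):           # fused: one walk instead of two
    if node.value is not None:
        node.value *= s
        return node.value
    return scale_and_total(node.left, s) + scale_and_total(node.right, s)

t = Node(left=Node(value=1.0),
         right=Node(left=Node(value=2.0), right=Node(value=3.0)))
print(scale_and_total(t, 2.0))          # 12.0
```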
SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis Smith; James Knudsen
As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion system operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation, or in this case the failure, of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude its use on real-world problems.
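For identical, independent components, the N-out-of-M gate mentioned above reduces to a binomial tail sum; the sketch below uses invented counts and a placeholder per-assembly reliability.

```python
# Sketch of an N-out-of-M gate evaluation: probability that at least k of n
# identical, independent thruster assemblies survive a phase. The counts
# and per-assembly reliability are placeholders.
from math import comb

def k_of_n_success(k, n, p):
    """P(at least k of n independent components work), each with prob p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_assembly = 0.95                  # invented per-phase assembly reliability
ok = k_of_n_success(3, 5, p_assembly)
print(f"P(>=3 of 5 work) = {ok:.6f}")
print(f"P(mission phase fails) = {1 - ok:.2e}")
```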
A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…
Medical History: Compiling Your Medical Family Tree
... family medical history, sometimes called a medical family tree, is a record of illnesses and medical conditions ... to consult family documents, such as existing family trees, baby books, old letters, obituaries or records from ...
The engine fuel system fault analysis
NASA Astrophysics Data System (ADS)
Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei
2017-05-01
To improve the reliability of the engine fuel system, the typical fault factors of the engine fuel system were analyzed from the point of view of structure and function. The fault characteristics were obtained by building the fuel system fault tree. By applying the failure mode and effects analysis (FMEA) method, several factors of the key component, the fuel regulator, were obtained, including the fault mode, the fault cause, and the fault influences. All of this lays the foundation for the subsequent development of a fault diagnosis system.
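FMEA results of this kind are commonly summarized with a risk priority number, RPN = severity x occurrence x detection; the sketch below uses invented fuel-regulator fault modes and scores.

```python
# Hedged sketch of FMEA bookkeeping: each fault mode gets severity,
# occurrence, and detection scores (1-10) and a risk priority number
# RPN = S * O * D. Modes and scores are invented.
modes = [
    # (fault mode,          severity, occurrence, detection)
    ("spring fatigue",             7,          4,         5),
    ("diaphragm rupture",          9,          2,         6),
    ("orifice clogging",           6,          5,         3),
]
for name, s, o, d in sorted(modes, key=lambda m: m[1]*m[2]*m[3], reverse=True):
    print(f"{name:20s} RPN = {s*o*d}")
```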
Fault-tolerant, high-level quantum circuits: form, compilation and description
NASA Astrophysics Data System (ADS)
Paler, Alexandru; Polian, Ilia; Nemoto, Kae; Devitt, Simon J.
2017-06-01
Fault-tolerant quantum error correction is a necessity for any quantum architecture destined to tackle interesting, large-scale problems. Its theoretical formalism has been well founded for nearly two decades. However, we still do not have an appropriate compiler to produce a fault-tolerant, error-corrected description from a higher-level quantum circuit for state-of-the-art hardware models. There are many technical hurdles, including dynamic circuit constructions that occur when constructing fault-tolerant circuits with commonly used error correcting codes. We introduce a package that converts high-level quantum circuits consisting of commonly used gates into a form employing all decompositions and ancillary protocols needed for fault-tolerant error correction. We call this form the (I)nitialisation, (C)NOT, (M)easurement form (ICM); it consists of an initialisation layer of qubits into one of four distinct states, a massive, deterministic array of CNOT operations, and a series of time-ordered X- or Z-basis measurements. The form allows a more flexible approach towards circuit optimisation. At the same time, the package outputs a standard circuit or a canonical geometric description which is a necessity for operating current state-of-the-art hardware architectures using topological quantum codes.
Modeling Missing Remeasurement Tree Heights in Forest Inventory Data
Raymond M. Sheffield; Callie J. Schweitzer
2005-01-01
Missing tree heights are often problematic in compiling forest inventory remeasurement data. Heights for cut and mortality trees are usually not available; calculations of removal or mortality volumes must utilize either a modeled height at the time of tree death or the height assigned to the tree at a previous remeasurement. Less often, tree heights are not available...
Modeling missing remeasurement tree heights in forest inventory data
Raymond M. Sheffield; Callie J. Schweitzer
2002-01-01
Missing tree heights are often problematic in compiling forest inventory remeasurement data. Heights for cut and mortality trees are usually not available; calculations of removal or mortality volumes must utilize either a modeled height at the time of tree death or the height assigned to the tree at a previous remeasurement. Less often, tree heights are not...
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
Fault tree analysis: NiH2 aerospace cells for LEO mission
NASA Technical Reports Server (NTRS)
Klein, Glenn C.; Rash, Donald E., Jr.
1992-01-01
The Fault Tree Analysis (FTA) is one of several reliability analyses or assessments applied to battery cells to be utilized in typical Electric Power Subsystems for spacecraft in low Earth orbit missions. FTA is generally the process of reviewing and analytically examining a system or equipment in such a way as to emphasize the lower level fault occurrences which directly or indirectly contribute to the major fault or top level event. This qualitative FTA addresses the potential of occurrence for five specific top level events: hydrogen leakage through either discrete leakage paths or through pressure vessel rupture; and four distinct modes of performance degradation - high charge voltage, suppressed discharge voltage, loss of capacity, and high pressure.
Modular techniques for dynamic fault-tree analysis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Dugan, Joanne B.
1992-01-01
It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
Geologic map of Detrital, Hualapai, and Sacramento Valleys and surrounding areas, northwest Arizona
Beard, L. Sue; Kennedy, Jeffrey; Truini, Margot; Felger, Tracey
2011-01-01
A 1:250,000-scale geologic map and report covering the Detrital, Hualapai, and Sacramento valleys in northwest Arizona is presented for the purpose of improving understanding of the geology and geohydrology of the basins beneath those valleys. The map was compiled from existing geologic mapping, augmented by digital photogeologic reconnaissance mapping. The most recent geologic map for the area, and the only digital one, is the 1:1,000,000-scale Geologic Map of Arizona. The larger scale map presented here includes significantly more detailed geology than the Geologic Map of Arizona in terms of accuracy of geologic unit contacts, number of faults, fault type, fault location, and details of Neogene and Quaternary deposits. Many sources were used to compile the geology; the accompanying geodatabase includes a source field in the polygon feature class that lists source references for polygon features. The citations for the source field are included in the reference section.
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested using an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as failure rate, unavailability and mean time to failure (MTTF). Based on the fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
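For a single block with a constant failure rate, the reliability quantities named above follow directly from the exponential model; the rates below are invented, not the measured Zynq-7010 soft-error rates.

```python
# Sketch of standard reliability quantities for one block with constant
# failure rate lam and repair rate mu. Rates and mission time are invented.
import math

lam = 2e-6      # failures per hour (invented)
mu = 0.5        # repairs per hour (invented)
t = 1000.0      # mission time, hours

mttf = 1 / lam                        # mean time to failure
unavailability = lam / (lam + mu)     # steady-state, repairable block
reliability = math.exp(-lam * t)      # P(no failure by time t)
print(f"MTTF = {mttf:.3g} h, U = {unavailability:.3g}, "
      f"R(t) = {reliability:.6f}")
```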
Learning from examples - Generation and evaluation of decision trees for software resource analysis
NASA Technical Reports Server (NTRS)
Selby, Richard W.; Porter, Adam A.
1988-01-01
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
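At the core of automatic decision-tree generation is the choice of splits by information gain; a self-contained sketch over invented module attributes and "high development effort" labels follows.

```python
# Illustrative core of decision-tree generation: pick the split with the
# highest information gain over labeled modules. Attribute values and
# labels are invented, not the NASA dataset.
import math

def entropy(labels):
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def info_gain(rows, labels, attr_index, threshold):
    left  = [l for r, l in zip(rows, labels) if r[attr_index] <= threshold]
    right = [l for r, l in zip(rows, labels) if r[attr_index] >  threshold]
    if not left or not right:
        return 0.0
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) \
              + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

rows   = [(120, 3), (40, 1), (300, 9), (75, 2)]   # (size, revisions)
labels = [True, False, True, False]               # "high development effort"
print(f"gain for size<=100: {info_gain(rows, labels, 0, 100):.3f}")
```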
Linking Incoming Plate Faulting and Intermediate Depth Seismicity
NASA Astrophysics Data System (ADS)
Kwong, K. B.; van Zelst, I.; Tong, X.; Eimer, M. O.; Naif, S.; Hu, Y.; Zhan, Z.; Boneh, Y.; Schottenfels, E.; Miller, M. S.; Moresi, L. N.; Warren, J. M.; Wiens, D. A.
2017-12-01
Intermediate depth earthquakes, occurring between 70-350 km depth, are often attributed to dehydration reactions within the subducting plate. It is proposed that incoming plate normal faulting associated with plate bending at the trench may control the amount of hydration in the plate by producing large damage zones that create pathways for the infiltration of seawater deep into the subducting mantle. However, a relationship between incoming plate seismicity, faulting, and intermediate depth seismicity has not been established. We compiled a global dataset consisting of incoming plate earthquake moment tensor (CMT) solutions, focal depths, bend fault spacing and offset measurements, along with plate age and convergence rates. In addition, a global intermediate depth seismicity dataset was compiled with parameters such as the maximum seismic moment and seismicity rate, as well as thicknesses of double seismic zones. The maximum fault offset in the bending region has a strong correlation with the intermediate depth seismicity rate, but a more modest correlation with other parameters such as convergence velocity and plate age. We estimated the expected rate of seismic moment release for the incoming plate faults using mapped fault scarps from bathymetry. We compare this with the cumulative moment from normal faulting earthquakes in the incoming plate from the global CMT catalog to determine whether outer rise fault movement has an aseismic component. Preliminary results from Tonga and the Middle America Trench suggest there may be an aseismic component to incoming plate bending faulting. The cumulative seismic moment calculated for the outer rise faults will also be compared to the cumulative moment from intermediate depth earthquakes to assess whether these parameters are related. To support the observational part of this study, we developed a geodynamic numerical modeling study to systematically explore the influence of parameters such as plate age and convergence rate on the offset, depth, and spacing of outer rise faults. We then compare these robust constraints on outer rise faulting to the observed widths of intermediate depth earthquakes globally.
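The moment comparison rests on the standard Hanks-Kanamori conversion from moment magnitude to scalar seismic moment; the sketch below sums a cumulative moment over invented magnitudes.

```python
# Sketch of the moment bookkeeping: convert Mw to scalar seismic moment
# (M0 in N*m, Hanks & Kanamori convention) and sum a cumulative moment.
# The magnitudes are invented, not catalog values.
def moment_from_mw(mw):
    return 10 ** (1.5 * mw + 9.1)       # N*m

outer_rise_events = [5.5, 6.1, 5.8, 6.4]           # invented Mw values
cumulative = sum(moment_from_mw(m) for m in outer_rise_events)
print(f"cumulative seismic moment = {cumulative:.3e} N*m")
```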
Database and Map of Quaternary Faults and Folds in Peru and its Offshore Region
Machare, Jose; Fenton, Clark H.; Machette, Michael N.; Lavenu, Alain; Costa, Carlos; Dart, Richard L.
2003-01-01
This publication consists of a main map of Quaternary faults and folds of Peru, a table of Quaternary fault data, a regional inset map showing relative plate motion, and a second inset map of an enlarged area of interest in southern Peru. These maps and the accompanying data compilation show evidence for activity of Quaternary faults and folds in Peru and its offshore regions of the Pacific Ocean. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. These data are accompanied by text databases that describe these features and document current information on their activity in the Quaternary.
Recurrence Interval and Event Age Data for Type A Faults
Dawson, Timothy E.; Weldon, Ray J.; Biasi, Glenn P.
2008-01-01
This appendix summarizes available recurrence interval, event age, and timing of most recent event data for Type A faults considered in the Earthquake Rate Model 2 (ERM 2) and used in the ERM 2 Appendix C analysis as well as Appendix N (time-dependent probabilities). These data have been compiled into an Excel workbook named Appendix B A-fault event ages_recurrence_V5.0 (herein referred to as the Appendix B workbook). For convenience, the Appendix B workbook is attached to the end of this document as a series of tables. The tables within the Appendix B workbook include site locations, event ages, and recurrence data, and in some cases, the interval of time between earthquakes is also reported. The Appendix B workbook is organized as individual worksheets, with each worksheet named by fault and paleoseismic site. Each worksheet contains the site location in latitude and longitude, as well as information on event ages, and a summary of recurrence data. Because the data has been compiled from different sources with different presentation styles, descriptions of the contents of each worksheet within the Appendix B spreadsheet are summarized.
Minor, S.A.; Vick, G.S.; Carr, M.D.; Wahl, R.R.
1996-01-01
This map database, identified as Faults, lineaments, and earthquake epicenters digital map of the Pahute Mesa 30' X 60' quadrangle, Nevada, has been approved for release and publication by the Director of the USGS. Although this database has been subjected to rigorous review and is substantially complete, the USGS reserves the right to revise the data pursuant to further analysis and review. Furthermore, it is released on condition that neither the USGS nor the United States Government may be held liable for any damages resulting from its authorized or unauthorized use. This digital map compilation incorporates fault, air photo lineament, and earthquake epicenter data from within the Pahute Mesa 30' by 60' quadrangle, southern Nye County, Nevada (fig. 1). The compilation contributes to the U.S. Department of Energy's Yucca Mountain Project, established to determine whether or not the Yucca Mountain site is suitable for the disposal of high-level nuclear waste. Studies of local and regional faulting and earthquake activity, including the features depicted in this compilation, are carried out to help characterize seismic hazards and tectonic processes that may be relevant to the future stability of Yucca Mountain. The Yucca Mountain site is located in the central part of the Beatty 30' by 60' quadrangle approximately 15 km south of the south edge of the Pahute Mesa quadrangle (fig. 1). The U.S. Geological Survey participates in studies of the Yucca Mountain site under Interagency Agreement DE-AI08-78ET44802. The map compilation is only available on line as a digital database in ARC/INFO ASCII (Generate) and export formats. The database can be downloaded via 'anonymous ftp' from a USGS system named greenwood.cr.usgs.gov (136.177.48.5). The files are located in a directory named /pub/open-file-reports/ofr-96-0262. This directory contains a text document named 'README.1ST' that contains database technical and explanatory documentation, including instructions for uncompressing the bundled (tar) file. In displaying the compilation it is important to note that the map data set is considered accurate when depicted at a scale of about 1:100,000; displaying the compilation at scales significantly larger than this may result in distortions and (or) mislocations of the data.
Decision tree and PCA-based fault diagnosis of rotating machinery
NASA Astrophysics Data System (ADS)
Sun, Weixiang; Chen, Jin; Li, Jiaqing
2007-04-01
After analysing the flaws of conventional fault diagnosis methods, data mining technology is introduced to the fault diagnosis field, and a new method based on the C4.5 decision tree and principal component analysis (PCA) is proposed. In this method, PCA is used to reduce features after data collection, preprocessing and feature extraction. Then, C4.5 is trained by using the samples to generate a decision tree model with diagnosis knowledge. At last the tree model is used to make diagnosis analysis. To validate the proposed method, six kinds of running states (normal or without any defect, unbalance, rotor radial rub, oil whirl, shaft crack and a simultaneous state of unbalance and radial rub) are simulated on a Bently Rotor Kit RK4 to test the C4.5 and PCA-based method and a back-propagation neural network (BPNN). The result shows that the C4.5 and PCA-based diagnosis method has higher accuracy and needs less training time than BPNN.
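A rough scikit-learn analogue of the pipeline is sketched below; note that sklearn's DecisionTreeClassifier implements CART rather than C4.5, so this only approximates the paper's method, and the synthetic arrays merely stand in for the rotor-kit features.

```python
# Sketch of a PCA-then-tree pipeline. CART (sklearn) substitutes for C4.5,
# and the data are synthetic stand-ins for the extracted vibration features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))       # 12 extracted features (synthetic)
y = rng.integers(0, 6, size=120)     # 6 machine states (synthetic)

model = make_pipeline(PCA(n_components=4),
                      DecisionTreeClassifier(max_depth=5))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```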
PV System Component Fault and Failure Compilation and Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne
This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)
1995-01-01
A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched VQ. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault-tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.
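The tree-searched speedup comes from descending past one pair of codewords per level, O(log N) rather than O(N) distance computations; the sketch below is a conceptual encoder over an invented two-level codebook, not the patented systolic implementation.

```python
# Conceptual sketch of binary tree-searched VQ: at each level, compare the
# input vector against two node codewords and descend toward the nearer
# one. The tiny codebook tree is invented.
import numpy as np

def btsvq_encode(x, tree):
    """tree: nested dict with 'left'/'right' codewords and child subtrees."""
    path = []
    node = tree
    while node is not None:
        dl = np.linalg.norm(x - node["left"]["codeword"])
        dr = np.linalg.norm(x - node["right"]["codeword"])
        branch = "left" if dl <= dr else "right"
        path.append(0 if branch == "left" else 1)
        node = node[branch]["child"]
    return path                    # bit path = index of the leaf codeword

leaf = lambda cw: {"codeword": np.array(cw), "child": None}
tree = {"left":  {"codeword": np.array([0.0, 0.0]),
                  "child": {"left": leaf([0.0, 1.0]),
                            "right": leaf([1.0, 0.0])}},
        "right": {"codeword": np.array([4.0, 4.0]),
                  "child": {"left": leaf([3.0, 4.0]),
                            "right": leaf([5.0, 4.0])}}}
print(btsvq_encode(np.array([0.9, 0.2]), tree))   # -> [0, 1]
```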
Fault tree analysis for system modeling in case of intentional EMI
NASA Astrophysics Data System (ADS)
Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.
2011-08-01
The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other hand increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically, since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is the fault tree analysis (FTA). The FTA is used to determine the system failure probability and also the main contributors to its failure. In this paper the fault tree analysis is introduced and a possible application of that method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% probability of exceedance in 50 years. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution, described by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of the logic tree, is used in order to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. These logic tree branches, analyzed through the Monte Carlo approach, are maximum magnitude, fault length, fault width, fault dip, and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of the mean hazard results to the choice of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
Shi, Lei; Shuai, Jian; Xu, Kui
2014-08-15
Fire and explosion accidents of steel oil storage tanks (FEASOST) occur occasionally during petroleum and chemical industry production and storage processes and often have a devastating impact on lives, the environment and property. To contribute towards the development of a quantitative approach for assessing the occurrence probability of FEASOST, a fault tree of FEASOST is constructed that identifies various potential causes. Traditional fault tree analysis (FTA) can achieve quantitative evaluation if the failure data of all of the basic events (BEs) are available, which is almost impossible due to the lack of detailed data, as well as other uncertainties. This paper makes an attempt to perform FTA of FEASOST by a hybrid application of an expert-elicitation-based improved analytic hierarchy process (AHP) and fuzzy set theory, and the occurrence possibility of FEASOST is estimated for an oil depot in China. A comparison between statistical data and calculated data using fuzzy fault tree analysis (FFTA) based on traditional and improved AHP is also made. Sensitivity and importance analysis has been performed to identify the most crucial BEs leading to FEASOST, which will provide insights into how managers should focus effective mitigation.
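A minimal sketch of the componentwise trapezoidal arithmetic commonly used in fuzzy fault tree analysis, assuming hypothetical event values (the paper's AHP weighting and expert elicitation are not reproduced):

```python
# Fuzzy FTA sketch: basic-event probabilities as trapezoidal fuzzy
# numbers (a, b, c, d); AND/OR gates operate componentwise.
import numpy as np

def fuzzy_and(*events):
    """AND gate: componentwise product of trapezoidal numbers."""
    out = np.ones(4)
    for e in events:
        out *= np.asarray(e, dtype=float)
    return out

def fuzzy_or(*events):
    """OR gate: 1 - prod(1 - x), componentwise."""
    out = np.ones(4)
    for e in events:
        out *= 1.0 - np.asarray(e, dtype=float)
    return 1.0 - out

def defuzzify(t):
    """Graded-mean point estimate of a trapezoid (one common choice)."""
    a, b, c, d = t
    return (a + 2 * b + 2 * c + d) / 6.0

leak  = (0.02, 0.04, 0.06, 0.08)   # hypothetical basic events
spark = (0.01, 0.02, 0.03, 0.05)
vapor = (0.10, 0.15, 0.20, 0.25)

fire = fuzzy_and(fuzzy_or(leak, vapor), spark)
print("fuzzy P(fire) =", fire, "-> point estimate", round(defuzzify(fire), 5))
```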
Wang, Hetang; Li, Jia; Wang, Deming; Huang, Zonghou
2017-01-01
Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents.
Earthquake Model of the Middle East (EMME) Project: Active Fault Database for the Middle East Region
NASA Astrophysics Data System (ADS)
Gülen, L.; Wp2 Team
2010-12-01
The Earthquake Model of the Middle East (EMME) Project is a regional project under the umbrella of the GEM (Global Earthquake Model) project (http://www.emme-gem.org/). The EMME project region includes Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE projects overlap, and Turkey forms a bridge connecting the two. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years, causing casualties in the millions. The EMME project will use the PSHA approach, and the existing source models will be revised or modified by the incorporation of newly acquired data. More importantly, the most distinguishing aspect of the EMME project from previous ones will be its dynamic character, accomplished by the design of a flexible and scalable database that will permit continuous update, refinement, and analysis. A digital active fault map of the Middle East region is under construction in ArcGIS format. We are developing a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw≥5.5. Similar to the WGCEP-2007 and UCERF-2 projects, the EMME project database includes information on the geometry and rates of movement of faults in a “Fault Section Database”. The “Fault Section” concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far over 3,000 Fault Sections have been defined and parameterized for the Middle East region. A separate “Paleo-Sites Database” includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library that includes the PDF files of the relevant papers and reports is also being prepared. Another task of WP-2 of the EMME project is to prepare a strain and slip-rate map of the Middle East region, largely by compiling already published data. The third task is to calculate b-values and Mmax and to determine activity rates. New data and evidence will be interpreted to revise or modify the existing source models. A logic tree approach will be utilized for areas where there is no consensus, to encompass different interpretations. Finally, seismic source zones in the Middle East region will be delineated using all available data. EMME Project WP2 Team: Levent Gülen, Murat Utkucu, M. Dinçer Köksal, Hilal Domaç, Yigit Ince, Mine Demircioglu, Shota Adamia, Nino Sandradze, Aleksandre Gvencadze, Arkadi Karakhanyan, Mher Avanesyan, Tahir Mammadli, Gurban Yetirmishli, Arif Axundov, Khaled Hessami, M. Asif Khan, M. Sayab.
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
Fault tree analysis for urban flooding.
ten Veldhuis, J A E; Clemens, F H L R; van Gelder, P H A J M
2009-01-01
Traditional methods to evaluate flood risk generally focus on heavy storm events as the principal cause of flooding. Conversely, fault tree analysis is a technique that aims at modelling all potential causes of flooding. It quantifies both the overall flood probability and the relative contributions of individual causes of flooding. This paper presents a fault tree model for urban flooding and an application to the case of Haarlem, a city of 147,000 inhabitants. Data from a complaint register, rainfall gauges and hydrodynamic model calculations are used to quantify the probabilities of basic events in the fault tree. This results in a flood probability of 0.78/week for Haarlem. It is shown that gully pot blockages contribute to 79% of flood incidents, whereas storm events contribute only 5%. This implies that for this case more efficient gully pot cleaning is a more effective strategy to reduce flood probability than enlarging drainage system capacity. Whether this is also the most cost-effective strategy can only be decided after the risk assessment has been complemented with a quantification of the consequences of both types of events. Doing so will be the next step in this study.
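A back-of-the-envelope sketch of how such a fault tree apportions flood probability among causes; the numbers below are illustrative stand-ins, not the Haarlem data:

```python
# OR gate over independent flood causes, plus rough per-cause shares.
causes = {                      # weekly occurrence probabilities (hypothetical)
    "gully pot blockage": 0.62,
    "sewer overloading (storms)": 0.04,
    "other causes": 0.125,
}

p_no_flood = 1.0
for p in causes.values():
    p_no_flood *= 1.0 - p
p_flood = 1.0 - p_no_flood      # overall weekly flood probability

print(f"P(flood per week) = {p_flood:.2f}")
for name, p in causes.items():
    # Crude contribution estimate: each cause's share of total cause rate.
    print(f"  {name}: ~{p / sum(causes.values()):.0%} of incidents")
```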
NASA Astrophysics Data System (ADS)
Koji, Yusuke; Kitamura, Yoshinobu; Kato, Yoshikiyo; Tsutsui, Yoshio; Mizoguchi, Riichiro
In conceptual design, it is important to develop functional structures that reflect the rich experience embodied in knowledge from previous design failures. In particular, if a designer learns of possible abnormal behaviors from a previous design failure, he or she can add a function that prevents such abnormal behaviors and faults. To do this, sharing knowledge about possible faulty phenomena and how to cope with them is a crucial issue. In fact, part of such knowledge is described in FMEA (Failure Mode and Effect Analysis) sheets, in function structure models for systematic design, and in fault trees for FTA (Fault Tree Analysis).
Failure analysis of energy storage spring in automobile composite brake chamber
NASA Astrophysics Data System (ADS)
Luo, Zai; Wei, Qing; Hu, Xiaofeng
2015-02-01
This paper takes the energy-storage spring of the parking brake cavity, a part of the automobile composite brake chamber, as its research object, and constructs the fault tree model of energy-storage-spring-induced parking brake failure based on the fault tree analysis method. Next, the parking brake failure model of the energy storage spring was established by analyzing the working principle of the composite brake chamber. Finally, working-load and push-rod-stroke data measured on a comprehensive valve test bed were used to validate the failure model. The experimental result shows that the failure model can distinguish whether the energy storage spring is faulted.
A fast bottom-up algorithm for computing the cut sets of noncoherent fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corynen, G.C.
1987-11-01
An efficient procedure for finding the cut sets of large fault trees has been developed. Designed to address coherent or noncoherent systems, dependent events, shared or common-cause events, the method - called SHORTCUT - is based on a fast algorithm for transforming a noncoherent tree into a quasi-coherent tree (COHERE), and on a new algorithm for reducing cut sets (SUBSET). To assure sufficient clarity and precision, the procedure is discussed in the language of simple sets, which is also developed in this report. Although the new method has not yet been fully implemented on the computer, we report theoretical worst-case estimates of its computational complexity. 12 refs., 10 figs.
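For readers unfamiliar with cut-set computation, here is a much-simplified bottom-up sketch in the same spirit (coherent trees only; this is not the SHORTCUT/COHERE/SUBSET implementation):

```python
# Bottom-up cut sets: OR gates union their children's cut sets, AND
# gates combine them, and a SUBSET-style reduction drops non-minimal sets.
from itertools import product

def cut_sets(node):
    kind = node[0]
    if kind == "event":
        return [frozenset([node[1]])]
    children = [cut_sets(c) for c in node[2]]
    if kind == "or":
        sets = [s for cs in children for s in cs]
    elif kind == "and":
        sets = [frozenset().union(*combo) for combo in product(*children)]
    return minimize(sets)

def minimize(sets):
    """Keep only minimal cut sets (drop supersets of kept sets)."""
    keep = []
    for s in sorted(set(sets), key=len):
        if not any(k <= s for k in keep):
            keep.append(s)
    return keep

# TOP = (A AND B) OR (A AND C) OR B  -- hypothetical example tree
tree = ("or", "TOP", [
    ("and", "G1", [("event", "A"), ("event", "B")]),
    ("and", "G2", [("event", "A"), ("event", "C")]),
    ("event", "B"),
])
print([sorted(s) for s in cut_sets(tree)])  # [['B'], ['A', 'C']]
```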
Electromagnetic Compatibility (EMC) in Microelectronics.
1983-02-01
Fault Tree Analysis", System Saftey Symposium, June 8-9, 1965, Seattle: The Boeing Company . 12. Fussell, J.B., "Fault Tree Analysis-Concepts and...procedure for assessing EMC in microelectronics and for applying DD, 1473 EOiTO OP I, NOV6 IS OESOL.ETE UNCLASSIFIED SECURITY CLASSIFICATION OF THIS...CRITERIA 2.1 Background 2 2.2 The Probabilistic Nature of EMC 2 2.3 The Probabilistic Approach 5 2.4 The Compatibility Factor 6 3 APPLYING PROBABILISTIC
A graphical language for reliability model generation
NASA Technical Reports Server (NTRS)
Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.
1990-01-01
A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
A Simple Model for Estimating Total and Merchantable Tree Heights
Alan R. Ek; Earl T. Birdsall; Rebecca J. Spears
1984-01-01
A model is described for estimating total and merchantable tree heights for Lake States tree species. It is intended to be used for compiling forest survey data and in conjunction with growth models for developing projections of tree product yield. Model coefficients are given for 25 species along with fit statistics. Supporting data sets are also described.
Fault tolerant, radiation hard, high performance digital signal processor
NASA Technical Reports Server (NTRS)
Holmann, Edgar; Linscott, Ivan R.; Maurer, Michael J.; Tyler, G. L.; Libby, Vibeke
1990-01-01
An architecture has been developed for a high-performance VLSI digital signal processor that is highly reliable, fault-tolerant, and radiation-hard. The signal processor, part of a spacecraft receiver designed to support uplink radio science experiments at the outer planets, organizes the connections between redundant arithmetic resources, register files, and memory through a shuffle exchange communication network. The configuration of the network and the state of the processor resources are all under microprogram control, which both maps the resources according to algorithmic needs and reconfigures the processing should a failure occur. In addition, the microprogram is reloadable through the uplink to accommodate changes in the science objectives throughout the course of the mission. The processor will be implemented with silicon compiler tools, and its design will be verified through silicon compilation simulation at all levels from the resources to full functionality. By blending reconfiguration with redundancy the processor implementation is fault-tolerant and reliable, and possesses the long expected lifetime needed for a spacecraft mission to the outer planets.
Where's the Hayward Fault? A Green Guide to the Fault
Stoffer, Philip W.
2008-01-01
This report describes self-guided field trips to one of North America's most dangerous earthquake faults: the Hayward Fault. Locations were chosen because of their easy access using mass transit and/or their significance relating to the natural and cultural history of the East Bay landscape. This field-trip guidebook was compiled to help commemorate the 140th anniversary of an estimated M 7.0 earthquake that occurred on the Hayward Fault at approximately 7:50 AM, October 21st, 1868. Although many reports and on-line resources have been compiled about the science and engineering associated with earthquakes on the Hayward Fault, this report has been prepared to serve as an outdoor guide to the fault for the interested public and for educators. The first chapter is a general overview of the geologic setting of the fault. This is followed by ten chapters of field trips to selected areas along the fault, or in the vicinity, where landscape, geologic, and man-made features that have relevance to understanding the nature of the fault and its earthquake history can be found. A glossary is provided to define and illustrate scientific terms used throughout this guide. A 'green' theme helps conserve resources and promotes use of public transportation, where possible. Although access to all locations described in this guide is possible by car, alternative suggestions are provided. To help conserve paper, this guidebook is available on-line only; however, select pages or chapters (field trips) within this guide can be printed separately to take along on an excursion. The discussions in this paper highlight transportation alternatives to visit selected field trip locations. In some cases, combinations, such as a ride on BART and a bus, can be used instead of automobile transportation. For other locales, bicycles can be an alternative means of transportation. Transportation descriptions on selected pages are intended to help guide field-trip planners or participants choose trip destinations based on transportation options, interests, or special needs.
Copilot: Monitoring Embedded Systems
NASA Technical Reports Server (NTRS)
Pike, Lee; Wegmann, Nis; Niller, Sebastian; Goodloe, Alwyn
2012-01-01
Runtime verification (RV) is a natural fit for ultra-critical systems, where correctness is imperative. In ultra-critical systems, even if the software is fault-free, because of the inherent unreliability of commodity hardware and the adversity of operational environments, processing units (and their hosted software) are replicated, and fault-tolerant algorithms are used to compare the outputs. We investigate both software monitoring in distributed fault-tolerant systems and the implementation of fault-tolerance mechanisms using RV techniques. We describe the Copilot language and compiler, specifically designed for generating monitors for distributed, hard real-time systems. We also describe two case studies in which we generated Copilot monitors in avionics systems.
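To illustrate the kind of check such monitors perform, here is a generic majority-vote sketch in Python; Copilot itself is a Haskell-embedded stream language, so this is an analogy, not Copilot code:

```python
# Generic replicated-output monitor: vote across redundant units each
# frame and flag any unit that disagrees with the majority.
from collections import Counter

def majority_vote(samples, tolerance=0.0):
    """Return (voted_value, disagreeing_unit_ids) for one frame."""
    voted, _ = Counter(samples).most_common(1)[0]
    bad = [i for i, v in enumerate(samples) if abs(v - voted) > tolerance]
    return voted, bad

# Three replicated units; unit 2 suffers a transient fault this frame.
frame = [42.0, 42.0, 17.5]
value, faulty = majority_vote(frame)
print(f"voted={value}, disagreeing units={faulty}")
```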
NASA Astrophysics Data System (ADS)
Wu, Jianing; Yan, Shaoze; Xie, Liyang
2011-12-01
To address the impact of solar array anomalies, it is important to analyze solar array reliability. This paper establishes fault tree analysis (FTA) and fuzzy reasoning Petri net (FRPN) models of a solar array mechanical system and analyzes their reliability to find the mechanisms of solar array faults. The indices of final truth degree (FTD) and cosine matching function (CMF) are employed to resolve the issue of how to evaluate the importance and influence of different faults. Thus, an improved reliability analysis method is developed by means of the sorting of FTD and CMF. An example is analyzed using the proposed method. The analysis results show that the harsh thermal environment and the impact caused by particles in space are the most vital causes of solar array faults. Furthermore, other fault modes and the corresponding improvement methods are discussed. The results reported in this paper could be useful for spacecraft designers, particularly in redesigning the solar array and scheduling its reliability growth plan.
Seera, Manjeevan; Lim, Chee Peng; Ishak, Dahaman; Singh, Harapajan
2012-01-01
In this paper, a novel approach to detect and classify comprehensive fault conditions of induction motors using a hybrid fuzzy min-max (FMM) neural network and classification and regression tree (CART) is proposed. The hybrid model, known as FMM-CART, exploits the advantages of both FMM and CART for undertaking data classification and rule extraction problems. A series of real experiments is conducted, whereby the motor current signature analysis method is applied to form a database comprising stator current signatures under different motor conditions. The signal harmonics from the power spectral density are extracted as discriminative input features for fault detection and classification with FMM-CART. A comprehensive list of induction motor fault conditions, viz., broken rotor bars, unbalanced voltages, stator winding faults, and eccentricity problems, has been successfully classified using FMM-CART with good accuracy rates. The results are comparable, if not better, than those reported in the literature. Useful explanatory rules in the form of a decision tree are also elicited from FMM-CART to analyze and understand different fault conditions of induction motors.
P.J. Radtke; D.M. Walker; A.R. Weiskittel; J. Frank; J.W. Coulston; J.A. Westfall
2015-01-01
Forest mensurationists in the United States have expended considerable effort over the past century making detailed observations of trees' dimensions. In recent decades efforts have focused increasingly on weights and physical properties. Work is underway to compile original measurements from past volume, taper, and weight or biomass studies for North American tree...
Tectonic and neotectonic framework of the Yucca Mountain Region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schweickert, R.A.
1992-09-30
Highlights of major research accomplishments concerned with the tectonics and neotectonics of the Yucca Mountain Region include: structural studies in the Grapevine Mountains, Bullfrog Hills, and Bare Mountain; recognition of the significance of pre-Middle Miocene normal and strike-slip faulting at Bare Mountain; compilation of a map of Quaternary faulting in the Southern Amargosa Valley; and preliminary paleomagnetic analysis of Paleozoic and Cenozoic units at Bare Mountain.
Comprehensive database of diameter-based biomass regressions for North American tree species
Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey
2004-01-01
A database was compiled consisting of 2,640 equations from the literature for predicting the biomass of trees and tree components from diameter measurements of species found in North America. Bibliographic information, geographic locations, diameter limits, diameter and biomass units, equation forms, statistical errors, and coefficients are provided for each equation,...
Structure, function and value of street trees in California, USA
E. Gregory McPherson; Natalie van Doorn; John de Goede
2016-01-01
This study compiled recent inventory data from 929,823 street trees in 50 cities to determine trends in tree number and density, identify priority investments and create baseline data against which the efficacy of future practices can be evaluated. The number of street trees increased from 5.9 million in 1988 to 9.1 million in 2014, about one for every four residents....
Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan
2014-03-01
Although the reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in fault diagnosis of an induction motor when the appropriate pre-processing is not performed. Therefore, a new boundary-analysis-based feature extraction method in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bars and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset.
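The reconstructed-phase-space step on which the method builds can be sketched with a simple delay embedding (parameters hypothetical; the paper's boundary extraction and fuzzy decision tree are not reproduced):

```python
# Delay embedding of a single motor-current signal into phase space.
import numpy as np

def delay_embed(x, dim=2, tau=10):
    """Return the (len(x) - (dim-1)*tau) x dim delay-embedded matrix."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 1, 2000)
current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(t.size)
points = delay_embed(current, dim=2, tau=10)  # 2-D phase portrait
print(points.shape)  # (1990, 2)
```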
Ada 9X Project Revision Request Report. Supplement 1
1990-01-01
Non-portable use of operating system primitives or of Ada run time system internals. POSSIBLE SOLUTIONS: Mandate that compilers recognize tasks that... complex than a simple operating system file, the compiler vendor must provide routines to manipulate it (create, copy, move, etc.) as a single entity... system, to support fault tolerance, load sharing, change of system operating mode, etc. It is highly desirable that such important software be written in
The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters
NASA Technical Reports Server (NTRS)
Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)
1998-01-01
We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while the price of tree uplinks is expensive.
U.S. Quaternary Fault and Fold Database Released
NASA Astrophysics Data System (ADS)
Haller, Kathleen M.; Machette, Michael N.; Dart, Richard L.; Rhea, B. Susan
2004-06-01
A comprehensive online compilation of Quaternary-age faults and folds throughout the United States was recently released by the U.S. Geological Survey, with cooperation from state geological surveys, academia, and the private sector. The Web site at http://Qfaults.cr.usgs.gov/ contains searchable databases and related geo-spatial data that characterize earthquake-related structures that could be potential seismic sources for large-magnitude (M > 6) earthquakes.
NASA Technical Reports Server (NTRS)
Gryc, G. (Principal Investigator); Lathram, E. H.
1973-01-01
The author has identified the following significant results. Analysis of lineated lakes in the Umiat, Alaska area and comparison with known geology, gravity, and magnetic data in the area suggest that concealed structures exist at depth, possibly at or near basement, which may represent targets for petroleum exploration. Compilation of reconnaissance geologic data on 1:250,000-scale enlargements of ERTS-1 images near Corwin reveals structural and stratigraphic anomalies that suggest the Cretaceous sequence is less thick than supposed and is repeated in a series of plates superimposed by flat thrust faults. The structural style differs from that in coeval strata to the northeast, across the northwest-trending linear zone, noted earlier, separating differing tectonic styles in older strata. The regional extension of a fault known locally in the McCarthy area has been recognized; this fault appears to form the boundary of a significant terrane of mid-Paleozoic metamorphic rocks. ERTS-1 images are being used operationally: at 1:1,000,000 scale in the compilation of regional geologic maps; at 1:250,000 scale in field mapping in the Brooks Range, in the study of faults in seismically active southern Alaska, and in field-checking interpretations previously made from ERTS-1 imagery; and as orthophoto base maps for geologic maps.
The 1992 Landers earthquake sequence; seismological observations
Egill Hauksson,; Jones, Lucile M.; Hutton, Kate; Eberhart-Phillips, Donna
1993-01-01
The (MW6.1, 7.3, 6.2) 1992 Landers earthquakes began on April 23 with the MW6.1 1992 Joshua Tree preshock and form the most substantial earthquake sequence to occur in California in the last 40 years. This sequence ruptured almost 100 km of both surficial and concealed faults and caused aftershocks over an area 100 km wide by 180 km long. The faulting was predominantly strike slip and three main events in the sequence had unilateral rupture to the north away from the San Andreas fault. The MW6.1 Joshua Tree preshock at 33°N58′ and 116°W19′ on 0451 UT April 23 was preceded by a tightly clustered foreshock sequence (M≤4.6) beginning 2 hours before the mainshock and followed by a large aftershock sequence with more than 6000 aftershocks. The aftershocks extended along a northerly trend from about 10 km north of the San Andreas fault, northwest of Indio, to the east-striking Pinto Mountain fault. The Mw7.3 Landers mainshock occurred at 34°N13′ and 116°W26′ at 1158 UT, June 28, 1992, and was preceded for 12 hours by 25 small M≤3 earthquakes at the mainshock epicenter. The distribution of more than 20,000 aftershocks, analyzed in this study, and short-period focal mechanisms illuminate a complex sequence of faulting. The aftershocks extend 60 km to the north of the mainshock epicenter along a system of at least five different surficial faults, and 40 km to the south, crossing the Pinto Mountain fault through the Joshua Tree aftershock zone towards the San Andreas fault near Indio. The rupture initiated in the depth range of 3–6 km, similar to previous M∼5 earthquakes in the region, although the maximum depth of aftershocks is about 15 km. The mainshock focal mechanism showed right-lateral strike-slip faulting with a strike of N10°W on an almost vertical fault. The rupture formed an arclike zone well defined by both surficial faulting and aftershocks, with more westerly faulting to the north. This change in strike is accomplished by jumping across dilational jogs connecting surficial faults with strikes rotated progressively to the west. A 20-km-long linear cluster of aftershocks occurred 10–20 km north of Barstow, or 30–40 km north of the end of the mainshock rupture. The most prominent off-fault aftershock cluster occurred 30 km to the west of the Landers mainshock. The largest aftershock was within this cluster, the Mw6.2 Big Bear aftershock occurring at 34°N10′ and 116°W49′ at 1505 UT June 28. It exhibited left-lateral strike-slip faulting on a northeast striking and steeply dipping plane. The Big Bear aftershocks form a linear trend extending 20 km to the northeast with a scattered distribution to the north. The Landers mainshock occurred near the southernmost extent of the Eastern California Shear Zone, an 80-km-wide, more than 400-km-long zone of deformation. This zone extends into the Death Valley region and accommodates about 10 to 20% of the plate motion between the Pacific and North American plates. The Joshua Tree preshock, its aftershocks, and Landers aftershocks form a previously missing link that connects the Eastern California Shear Zone to the southern San Andreas fault.
Initiating Event Analysis of a Lithium Fluoride Thorium Reactor
NASA Astrophysics Data System (ADS)
Geraci, Nicholas Charles
The primary purpose of this study is to perform an Initiating Event Analysis for a Lithium Fluoride Thorium Reactor (LFTR) as the first step of a Probabilistic Safety Assessment (PSA). The major objective of the research is to compile a list of key initiating events capable of resulting in failure of safety systems and release of radioactive material from the LFTR. Due to the complex interactions between engineering design, component reliability and human reliability, probabilistic safety assessments are most useful when the scope is limited to a single reactor plant. Thus, this thesis will study the LFTR design proposed by Flibe Energy. An October 2015 Electric Power Research Institute report on the Flibe Energy LFTR asked "what-if?" questions of subject matter experts and compiled a list of key hazards with the most significant consequences to the safety or integrity of the LFTR. The potential exists for unforeseen hazards to pose additional risk for the LFTR, but the scope of this thesis is limited to evaluation of those key hazards already identified by Flibe Energy. These key hazards are the starting point for the Initiating Event Analysis performed in this thesis. Engineering evaluation and technical study of the plant using a literature review and comparison to reference technology revealed four hazards with high potential to cause reactor core damage. To determine the initiating events resulting in realization of these four hazards, reference was made to previous PSAs and existing NRC and EPRI initiating event lists. Finally, fault tree and event tree analyses were conducted, completing the logical classification of initiating events. Results are qualitative as opposed to quantitative due to the early stages of system design descriptions and lack of operating experience or data for the LFTR. In summary, this thesis analyzes initiating events using previous research and inductive and deductive reasoning through traditional risk management techniques to arrive at a list of key initiating events that can be used to address vulnerabilities during the design phases of LFTR development.
Tree chemistry database (version 1.0)
Linda H. Pardo; Molly Robin-Abbott; Natasha Duarte; Eric K. Miller
2005-01-01
The Tree Chemistry Database is a relational database of C, N, P, K, Ca, Mg, Mn, and Al concentrations in bole bark, bole wood, branches, twigs, and foliage. Compiled from data in 218 articles and publications, the database contains reported nutrient and biomass values for tree species in the Northeastern United States. Nutrient data can be sorted on parameters such as...
Genetic diversity and conservation of Mexican forest trees
C. Wehenkel; S. Mariscal-Lucero; J.P. Jaramillo-Correa; C.A. López-Sánchez; J.J. Vargas Hernández; C. Sáenz-Romero
2017-01-01
Over the last 200 years, humans have impacted the genetic diversity of forest trees. Because of widespread deforestation and over-exploitation, about 9,000 tree species are listed worldwide as threatened with extinction, including more than half of the ~600 known conifer taxa. A comprehensive review of the floristic-taxonomic literature compiled a list of 4,331...
Map and database of Quaternary faults and folds in Colombia and its offshore regions
Paris, Gabriel; Machette, Michael N.; Dart, Richard L.; Haller, Kathleen M.
2000-01-01
As part of the International Lithosphere Program’s “World Map of Major Active Faults,” the U.S. Geological Survey (USGS) is assisting in the compilation of a series of digital maps of Quaternary faults and folds in Western Hemisphere countries. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. They are accompanied by databases that describe these features and document current information on their activity in the Quaternary. To date, the project has published fault and fold maps for Costa Rica (Montero and others, 1998), Panama (Cowan and others, 1998), Venezuela (Audemard and others, 2000), Bolivia/Chile (Lavenu and others, 2000), and Argentina (Costa and others, 2000). The project is a key part of the Global Seismic Hazards Assessment Program (ILP Project II-0) for the International Decade for Natural Hazard Disaster Reduction.
Reliability analysis of the solar array based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Jianing, Wu; Shaoze, Yan
2011-07-01
The solar array is an important device used in spacecraft, and it influences the quality of in-orbit operation of the spacecraft and even the success of the launch. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system, based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra, and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structure importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is significant to prevent faults. Furthermore, recommendations for improving reliability associated with damage limitation are discussed, which can be used for the redesigning of the solar array and for reliability growth planning.
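One common way to rank basic events once the top-event expression is known is Birnbaum importance, a probabilistic cousin of the structure importance used above; a sketch with a hypothetical tree and rates:

```python
# Birnbaum importance: sensitivity of the top-event probability to each
# basic event, computed as P(top | event certain) - P(top | event impossible).
def p_top(p):
    """Top event: deployment fails if the hinge jams OR both the
    locking spring AND the seal fail (illustrative structure only)."""
    return 1.0 - (1.0 - p["hinge"]) * (1.0 - p["spring"] * p["seal"])

probs = {"hinge": 0.002, "spring": 0.01, "seal": 0.05}  # hypothetical

for name in probs:
    hi = dict(probs, **{name: 1.0})   # component surely failed
    lo = dict(probs, **{name: 0.0})   # component surely working
    print(f"{name}: Birnbaum importance = {p_top(hi) - p_top(lo):.4f}")
```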
Fault tree safety analysis of a large Li/SOCl2 spacecraft battery
NASA Technical Reports Server (NTRS)
Uy, O. Manuel; Maurer, R. H.
1987-01-01
The results of the safety fault tree analysis on the eight-module, 576 F-cell Li/SOCl2 battery, on the spacecraft and in the integration and test environment on the ground prior to launch, are presented. The analysis showed that with the right combination of blocking diodes, electrical fuses, thermal fuses, thermal switches, cell balance, cell vents, and battery module vents, the probability of a single cell or a 72-cell module exploding can be reduced to 0.000001, essentially the probability of explosion for unexplained reasons.
Forest composition of Maine: an analysis using number of trees
Douglas S. Powell
1985-01-01
Number-of-trees data compiled by the USDA Forest Service from three periodic statewide inventories of Maine's forest resources are used to analyze the composition of the state's timberland in terms of species, tree class, and size. Conditions are compared and contrasted for periods from 1959 to 1971 to 1982 across different regions and counties of the state....
Global surface displacement data for assessing variability of displacement at a point on a fault
Hecker, Suzanne; Sickler, Robert; Feigelson, Leah; Abrahamson, Norman; Hassett, Will; Rosa, Carla; Sanquini, Ann
2014-01-01
This report presents a global dataset of site-specific surface-displacement data on faults. We have compiled estimates of successive displacements attributed to individual earthquakes, mainly paleoearthquakes, at sites where two or more events have been documented, as a basis for analyzing inter-event variability in surface displacement on continental faults. An earlier version of this composite dataset was used in a recent study relating the variability of surface displacement at a point to the magnitude-frequency distribution of earthquakes on faults, and to hazard from fault rupture (Hecker and others, 2013). The purpose of this follow-on report is to provide potential data users with an updated comprehensive dataset, largely complete through 2010 for studies in English-language publications, as well as in some unpublished reports and abstract volumes.
NASA Astrophysics Data System (ADS)
LI, Y.; Yang, S. H.
2017-05-01
The Antarctic astronomical telescopes operate year-round at the top of the unattended South Pole, with only one maintenance opportunity each year. Due to the complexity of their optical, mechanical, and electrical systems, the telescopes are hard to maintain and require multi-tasker expedition teams, which means exceptional attention to the reliability of Antarctic telescopes is essential. Based on the fault mechanisms and fault modes of the main-axis control system of the equatorial Antarctic astronomical telescope AST3-3 (Antarctic Schmidt Telescopes 3-3), the method of fault tree analysis is introduced in this article, and we obtain the importance degree of the top event from the importance degrees of the bottom-event structure. From these results, hidden problems and weak links can be effectively found, indicating directions for promoting the stability of the system and optimizing its design.
Silvics of Missouri bottomland tree species
John Kabrick; Daniel Dey
2001-01-01
This issue of Notes For Forest Managers provides a concise summary of important silvical characteristics of Missouri's bottomland trees. It focuses on species adaptations to, or tolerances of, environmental and site conditions. It is a compilation of information from seven different references cited in the text.
Fault tree analysis of most common rolling bearing tribological failures
NASA Astrophysics Data System (ADS)
Vencl, Aleksandar; Gašić, Vlada; Stojanović, Blaža
2017-02-01
Wear as a tribological process has a major influence on the reliability and life of rolling bearings. Field examinations of bearing failures due to wear indicate possible causes and point to the measurements necessary for wear reduction or elimination. Wear itself is a very complex process initiated by the action of different mechanisms, and it can be manifested as different wear types, which are often related. However, the dominant type of wear can be approximately determined. The paper presents a classification of the most common bearing damages according to the dominant wear type, i.e. abrasive wear, adhesive wear, surface fatigue wear, erosive wear, fretting wear, and corrosive wear. The wear types are correlated with the terms used in the ISO 15243 standard. Each wear type is illustrated with an appropriate photograph, and for each wear type an appropriate description of causes and manifestations is presented. Possible causes of rolling bearing failure are used for the fault tree analysis (FTA), which was performed to determine the root causes of bearing failures. The constructed fault tree diagram for rolling bearing failure can be a useful tool for maintenance engineers.
Renjith, V R; Madhu, G; Nayagam, V Lakshmana Gomathi; Bhasi, A B
2010-11-15
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards related to chemical industries. Fault tree analysis (FTA) is an established technique in hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from the storage and filling facility of a chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and a proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
This paper describes the design and implementation of a layered domain-specific compiler to support MADNESS---Multiresolution ADaptive Numerical Environment for Scientific Simulation. MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions, using adaptive and fast harmonic analysis methods with guaranteed precision. MADNESS uses k-d trees to represent spatial functions and implements operators like addition, multiplication, differentiation, and integration on the numerical representation of functions. The MADNESS runtime system provides global namespace support and a task-based execution model including futures. MADNESS is currently deployed on massively parallel supercomputers and has enabled many science advances. Due to the highly irregular and statically unpredictable structure of the k-d trees representing the spatial functions encountered in MADNESS applications, only purely runtime approaches to optimization have previously been implemented in the MADNESS framework. This paper describes a layered domain-specific compiler developed to address some performance bottlenecks in MADNESS. The newly developed static compile-time optimizations, in conjunction with the MADNESS runtime support, enable significant performance improvement for the MADNESS framework.
Development and validation of techniques for improving software dependability
NASA Technical Reports Server (NTRS)
Knight, John C.
1992-01-01
A collection of document abstracts is presented on the topic of improving software dependability through NASA grant NAG-1-1123. Specific topics include: modeling of error detection; software inspection; test cases; Magnetic Stereotaxis System safety specifications and fault trees; and injection of synthetic faults into software.
The Realization of Drilling Fault Diagnosis Based on Hybrid Programming with Matlab and VB
NASA Astrophysics Data System (ADS)
Wang, Jiangping; Hu, Yingcai
This paper presents a method using hybrid programming with Matlab and VB, based on ActiveX, to design a system for drilling accident prediction and diagnosis, so that the powerful calculation and graphical display functions of Matlab and the visual development interface of VB are fully combined. The main interface of the diagnosis system is compiled in VB, and the analysis and fault diagnosis are implemented by the neural network toolboxes in Matlab. The system has a favorable interactive interface, and the fault example validation shows that the diagnosis result is feasible and can meet the demands of drilling accident prediction and diagnosis.
Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie
2006-01-01
A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk-driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exists. The approach is scalable, allowing inclusion of additional information as detailed data becomes available. The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
Survey of critical failure events in on-chip interconnect by fault tree analysis
NASA Astrophysics Data System (ADS)
Yokogawa, Shinji; Kunii, Kyousuke
2018-07-01
In this paper, a framework based on reliability physics is proposed for adopting fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the possibilities of failure on basic events, critical issues of on-chip interconnect reliability will be evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting the on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The moisture impact is evaluated as an external event.
Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-07-12
As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction is generated in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering contents. Therefore, identifying the critical fault's characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal's features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experiment results of the recognition for gear faults show the feasibility and effectiveness of the proposed method, especially in the gear's weak fault features.
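A sketch of the multiscale feature-extraction stage; for brevity it uses PyWavelets' ordinary discrete wavelet transform as a stand-in for the dual-tree complex wavelet transform (DTCWT) named above, and omits the CNN classifier:

```python
# Per-scale energy features from a multiscale wavelet decomposition.
import numpy as np
import pywt

def multiscale_energies(signal, wavelet="db4", levels=5):
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    # One energy value per scale: approximation band + each detail band.
    return np.array([np.sum(c ** 2) for c in coeffs])

t = np.linspace(0, 1, 4096)
healthy = np.sin(2 * np.pi * 30 * t)
faulty = healthy + 0.3 * np.sin(2 * np.pi * 180 * t)  # toy gear-fault component

print("healthy:", multiscale_energies(healthy).round(2))
print("faulty: ", multiscale_energies(faulty).round(2))
```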
Improving Quality of Seal Leak Test Product using Six Sigma
NASA Astrophysics Data System (ADS)
Luthfi Malik, Abdullah; Akbar, Muhammad; Irianto, Dradjad
2016-02-01
The seal leak test part is a polyurethane-material-based product. Based on past data, the defect level of this product was 8%, higher than the target of 5%. A quality improvement effort was made using the six sigma method, which includes the define, measure, analyze, improve, and control phases. In the define phase, a Delphi method was used to identify factors that were critical to quality. In the measure phase, stability and process capability were measured. Fault tree analysis (FTA) and failure mode and effect analysis (FMEA) were used in the next phase to analyze the root causes and to determine the priority issues. The improve phase was done by compiling, selecting, and designing alternative repairs. Some improvement efforts were identified, i.e. (i) making a checklist for maintenance schedules, (ii) making a written reminder form, (iii) modifying the SOP in more detail, and (iv) performing a major service on the vacuum machine. To ensure the continuity of improvement efforts, some control activities were executed, i.e. (i) controlling, monitoring, documenting, and setting targets frequently, (ii) implementing a reward and punishment system, (iii) adding cleaning tools, and (iv) building a six sigma organizational structure.
Minor, Scott A.; Brandt, Theodore R.
2015-01-01
A principal aim of the new mapping and associated fault-kinematic measurements is to document and constrain the nature of transpressional strain transfer between various regional, potentially seismogenic faults. In the accompanying pamphlet, surficial and bedrock map units are described in detail, along with a summary of the structural and fault-kinematic framework of the map area. New biostratigraphic and biochronologic data based on microfossil identifications are presented in expanded unit descriptions of the marine Neogene Monterey and Sisquoc Formations. Site-specific fault-kinematic observations are embedded in the digital map database. This compilation provides a uniform geologic digital geodatabase and map plot files that can be used for visualization, analysis, and interpretation of the area’s geology, geologic hazards, and natural resources.
Common forest trees of Hawaii (native and introduced)
Elbert L. Little; Roger G. Skolmen
1989-01-01
This handbook provides an illustrated reference for identifying the common trees in the forests of Hawaii. Useful information about each species is also compiled, including Hawaiian, English, and scientific names; description; distribution within the islands and beyond; uses of wood and other products; and additional notes. The 152 species described...
Analysis of a hardware and software fault tolerant processor for critical applications
NASA Technical Reports Server (NTRS)
Dugan, Joanne B.
1993-01-01
Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
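The Markov side of such a hierarchical analysis can be sketched with a tiny continuous-time model solved by matrix exponential (rates hypothetical; a real model would include coverage and repair terms):

```python
# Duplex-processor CTMC: states [2 up, 1 up, failed], solved via expm.
import numpy as np
from scipy.linalg import expm

lam = 1e-4   # per-hour failure rate of one processor (assumed)

# Generator matrix Q: row = current state, column = next state.
Q = np.array([
    [-2 * lam, 2 * lam, 0.0],
    [0.0,      -lam,    lam],
    [0.0,       0.0,    0.0],
])

p0 = np.array([1.0, 0.0, 0.0])          # start with both processors up
for t in (10.0, 100.0, 1000.0):          # mission times in hours
    p_t = p0 @ expm(Q * t)
    print(f"t={t:6.0f} h  unreliability = {p_t[2]:.3e}")
```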
Climate: A factor in the origin of the pole blight disease of Pinus monticola Dougl
Charles D. Leaphart; Albert R. Stage
1971-01-01
Measurements of cores or disc samples representing slightly more than 76,000 annual rings from 336 western white pine trees were compiled to obtain a set of deviations from normal growth of healthy trees that would express the response of these trees to variation in the environment during the last 280 years. Their growth was demonstrated to be a function of temperature...
Determining preventability of pediatric readmissions using fault tree analysis.
Jonas, Jennifer A; Devon, Erin Pete; Ronan, Jeanine C; Ng, Sonia C; Owusu-McKenzie, Jacqueline Y; Strausbaugh, Janet T; Fieldston, Evan S; Hart, Jessica K
2016-05-01
Previous studies attempting to distinguish preventable from nonpreventable readmissions reported challenges in completing reviews efficiently and consistently. The objectives were to (1) examine the efficiency and reliability of a Web-based fault tree tool designed to guide physicians through chart reviews to a determination about preventability, and (2) investigate root causes of general pediatrics readmissions and identify the percentage that are preventable. General pediatricians from The Children's Hospital of Philadelphia used a Web-based fault tree tool to classify root causes of all general pediatrics 15-day readmissions in 2014. The tool guided reviewers through a logical progression of questions, which resulted in 1 of 18 root causes of readmission, 8 of which were considered potentially preventable. Twenty percent of cases were cross-checked to measure inter-rater reliability. Of the 7252 discharges, 248 were readmitted, for an all-cause general pediatrics 15-day readmission rate of 3.4%. Of those readmissions, 15 (6.0%) were deemed potentially preventable, corresponding to 0.2% of total discharges. The most common cause of potentially preventable readmissions was premature discharge. For the 50 cross-checked cases, both reviews resulted in the same root cause for 44 (86%) of files (κ = 0.79; 95% confidence interval: 0.60-0.98). Completing 1 review using the tool took approximately 20 minutes. The Web-based fault tree tool helped physicians to identify root causes of hospital readmissions and classify them as either preventable or not preventable in an efficient and consistent way. It also confirmed that only a small percentage of general pediatrics 15-day readmissions are potentially preventable.
Risk Analysis of Return Support Material on Gas Compressor Platform Project
NASA Astrophysics Data System (ADS)
Silvianita; Aulia, B. U.; Khakim, M. L. N.; Rosyid, Daniel M.
2017-07-01
A fixed-platform project is carried out not by a single contractor but by two or more contractors. Cooperation in the construction of fixed platforms often does not go according to plan, for several reasons. Good synergy among the contractors is needed to avoid miscommunication that can cause problems on the project. One example concerns the support material (sea fastening, skid shoe, and shipping support) used when shipping a jacket structure to its operating site, which often is not returned to the contractor. A systematic method is needed to address the support material problem. This paper analyses the causes and consequences of unreturned support material on the Gas Compressor Platform project using Fault Tree Analysis (FTA) and Event Tree Analysis (ETA). From the fault tree analysis, the probability of the top event is 0.7783. From the event tree analysis diagram, the contractors lose from Rp 350.000.000,- to Rp 10.000.000.000,-.
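To make the quoted figures concrete, here is a minimal Python sketch of how a top-event probability and an event-tree expected loss of this kind are computed. The gate structure, basic-event probabilities, and the mitigation branch probability are illustrative assumptions; only the two loss figures echo the abstract.

    def and_gate(*p):  # all inputs must occur (independent events)
        prob = 1.0
        for x in p:
            prob *= x
        return prob

    def or_gate(*p):   # at least one input occurs
        prob = 1.0
        for x in p:
            prob *= 1.0 - x
        return 1.0 - prob

    # Hypothetical basic events behind unreturned support material.
    p_no_return_clause  = 0.6
    p_poor_coordination = 0.4
    p_material_damaged  = 0.1

    p_top = or_gate(and_gate(p_no_return_clause, p_poor_coordination),
                    p_material_damaged)
    print("top event:", p_top)

    # Event-tree branch: does mitigation (e.g., renegotiation) succeed?
    p_mitigation = 0.7                     # assumed branch probability
    loss_small, loss_large = 350e6, 10e9   # rupiah, the abstract's range
    expected_loss = p_top * (p_mitigation * loss_small +
                             (1 - p_mitigation) * loss_large)
    print("expected loss (Rp):", f"{expected_loss:,.0f}")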
Mines Systems Safety Improvement Using an Integrated Event Tree and Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Kumar, Ranjan; Ghosh, Achyuta Krishna
2017-04-01
Mine systems such as the ventilation system, strata support system, and flameproof safety equipment are exposed to dynamic operational conditions such as stress, humidity, dust, and temperature, and safety improvement of such systems is done preferably during the planning and design stage. However, existing safety analysis methods do not handle the accident initiation and progression of mine systems explicitly. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine systems design. The approach couples ET and FT modeling with a redundancy allocation technique. A concept of top hazard probability is introduced for identifying system failure probability, and redundancy is allocated to the system at either the component or system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal the accident scenarios and improve the safety of complex mine systems simultaneously.
NASA Astrophysics Data System (ADS)
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
The fault diagnosis of planetary gearboxes is crucial to reduce maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on an adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into a binary tree support vector machine (BT-SVM) to accomplish fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
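A minimal sketch of the permutation entropy underlying feature extractors such as MHPE; this is the basic Bandt-Pompe definition, not the authors' modified hierarchical variant.

    import math
    from itertools import permutations

    def permutation_entropy(x, order=3, delay=1):
        """Normalised Bandt-Pompe permutation entropy of a 1-D sequence."""
        counts = {p: 0 for p in permutations(range(order))}
        n = len(x) - (order - 1) * delay
        for i in range(n):
            window = [x[i + j * delay] for j in range(order)]
            counts[tuple(sorted(range(order), key=window.__getitem__))] += 1
        probs = [c / n for c in counts.values() if c > 0]
        h = -sum(p * math.log(p) for p in probs)
        return h / math.log(math.factorial(order))  # 0 = regular, 1 = random

    print(permutation_entropy([4, 7, 9, 10, 6, 11, 3]))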
An update of Quaternary faults of central and eastern Oregon
Weldon, Ray J.; Fletcher, D.K.; Weldon, E.M.; Scharer, K.M.; McCrory, P.A.
2002-01-01
This is the online version of a CD-ROM publication. We have updated the eastern portion of our previous active fault map of Oregon (Pezzopane, Nakata, and Weldon, 1992) as a contribution to the larger USGS effort to produce digital maps of active faults in the Pacific Northwest region. The 1992 fault map has seen wide distribution and has been reproduced in essentially all subsequent compilations of active faults of Oregon. The new map provides a substantial update of known active or suspected active faults east of the Cascades. Improvements in the new map include (1) many newly recognized active faults, (2) a linked ArcInfo map and reference database, (3) more precise locations for previously recognized faults on shaded-relief quadrangles generated from USGS 30-m digital elevation models (DEMs), (4) more uniform coverage resulting in more consistent grouping of the ages of active faults, and (5) a new category of 'possibly' active faults that share characteristics with known active faults, but have not been studied adequately to assess their activity. The distribution of active faults has not changed substantially from the original Pezzopane, Nakata, and Weldon map. Most faults occur in the south-central Basin and Range tectonic province that is located in the backarc portion of the Cascadia subduction margin. These faults occur in zones consisting of numerous short faults with similar rates, ages, and styles of movement. Many active faults strongly correlate with the most active volcanic centers of Oregon, including Newberry Craters and Crater Lake.
2013-05-01
specifics of the correlation will be explored, followed by discussion of new paradigms, the ordered event list (OEL) and the decision tree, that result from... (remainder of this entry is table-of-contents residue: a brief overview of the decision tree paradigm, an explanation of the OEL, and a figure depicting a notional fault/activation tree)
A Seismic Source Model for Central Europe and Italy
NASA Astrophysics Data System (ADS)
Nyst, M.; Williams, C.; Onur, T.
2006-12-01
We present a seismic source model for Central Europe (Belgium, Germany, Switzerland, and Austria) and Italy, as part of an overall seismic risk and loss modeling project for this region. A separate presentation at this conference discusses the probabilistic seismic hazard and risk assessment (Williams et al., 2006). Where available we adopt regional consensus models and adjust them to fit our format; otherwise we develop our own model. Our seismic source model covers the whole region under consideration and consists of the following components: 1. A subduction zone environment in Calabria, SE Italy, with interface events between the Eurasian and African plates and intraslab events within the subducting slab. The subduction zone interface is parameterized as a set of dipping area sources that follow the geometry of the surface of the subducting plate, whereas intraslab events are modeled as plane sources at depth; 2. The main normal faults in the upper crust along the Apennines mountain range, in Calabria and Central Italy. Dipping faults and (sub-)vertical faults are parameterized as dipping plane and line sources, respectively; 3. The Upper and Lower Rhine Graben regime that runs from northern Italy into eastern Belgium, parameterized as a combination of dipping plane and line sources; and finally 4. Background seismicity, parameterized as area sources. The fault model is based on slip rates using characteristic recurrence. The modeling of background and subduction zone seismicity is based on a compilation of several national and regional historic seismic catalogs using a Gutenberg-Richter recurrence model. Merging the catalogs involves the deletion of duplicate, spurious, and very old events and the application of a declustering algorithm (Reasenberg, 2000). The resulting catalog contains a little over 6000 events, has an average b-value of 0.9, is complete for moment magnitudes 4.5 and larger, and is used to compute a gridded a-value model (smoothed historical seismicity) for the region. The logic tree weights various completeness intervals and minimum magnitudes. Using a weighted scheme of European and global ground motion models together with a detailed site classification map for Europe based on Eurocode 8, we generate hazard maps for recurrence periods of 200, 475, 1000 and 2500 yrs.
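For readers unfamiliar with the recurrence arithmetic, a small Python sketch of a Gutenberg-Richter rate model; the a-value is an arbitrary illustration, and only the b-value of 0.9 echoes the abstract.

    import math

    a_value, b_value = 4.0, 0.9   # log10 N(>=M) = a - b*M

    def annual_rate(m):
        return 10 ** (a_value - b_value * m)

    for m in (4.5, 5.0, 6.0):
        rate = annual_rate(m)
        p50 = 1 - math.exp(-rate * 50)   # Poisson chance in 50 years
        print(f"M>={m}: {rate:.3f}/yr, P(at least one in 50 yr) = {p50:.3f}")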
Monitoring of Microseismicity with Array Techniques in the Peach Tree Valley Region
NASA Astrophysics Data System (ADS)
Garcia-Reyes, J. L.; Clayton, R. W.
2016-12-01
This study is focused on the analysis of microseismicity along the San Andreas Fault in the Peach Tree Valley region. This zone is part of the transition zone between the locked portion to the south (Parkfield, CA) and the creeping section to the north (Jolivet et al., JGR, 2014). The data for the study come from a 2-week deployment of 116 ZLand nodes in a cross-shaped configuration along (8.2 km) and across (9 km) the fault. We analyze the distribution of microseismicity using a 3D backprojection technique, and we explore the use of Hidden Markov Models to identify different patterns of microseismicity (Hammer et al., GJI, 2013). The goal of the study is to relate the style of seismicity to the mechanical state of the fault. The results show the evolution of seismic activity as well as at least two different patterns of seismic signals.
[Impact of water pollution risk in water transfer project based on fault tree analysis].
Liu, Jian-Chang; Zhang, Wei; Wang, Li-Min; Li, Dai-Qing; Fan, Xiu-Ying; Deng, Hong-Bing
2009-09-15
Methods to assess water pollution risk for medium water transfer projects are gradually being explored. The event-nature-proportion method was developed to evaluate the probability of a single event. Fault tree analysis, built on the single-event calculations, was employed to evaluate the overall water pollution risk to the channel water body. The results indicate that the risk posed to the channel water body by pollutants from towns and villages along the water transfer line is high, with a probability of 0.373, which would increase pollution in the channel water body by 64.53 mg/L COD, 4.57 mg/L NH4+-N, and 0.066 mg/L volatile hydroxybenzene. Measuring fault probability on the basis of the proportion method proved useful in assessing water pollution risk under large uncertainty.
Active faults in Africa: a review
NASA Astrophysics Data System (ADS)
Skobelev, S. F.; Hanon, M.; Klerkx, J.; Govorova, N. N.; Lukina, N. V.; Kazmin, V. G.
2004-03-01
The active fault database and Map of Active Faults in Africa, at a scale of 1:5,000,000, were compiled according to the ILP Project II-2 "World Map of Major Active Faults". The data were collected in the Royal Museum for Central Africa, Tervuren, Belgium, and in the Geological Institute, Moscow, where the final edition was carried out. Active faults of Africa form three groups. The first group is represented by thrusts and reverse faults associated with compressed folds in northwest Africa. They belong to the western part of the Alpine-Central Asian collision belt. The faults disturb only the Earth's crust, and some of them do not penetrate deeper than the sedimentary cover. The second group comprises the faults of the Great African rift system. The faults form the known Western and Eastern branches, which are rifts with anomalous mantle below. The deep-seated mantle "hot" anomaly probably relates to the eastern volcanic branch. In the north, it joins the Aden-Red Sea rift zone. Active faults in Egypt, Libya, and Tunisia may represent a link between the East African rift system and the Pantellerian rift zone in the Mediterranean. The third group includes rare faults in the west of Equatorial Africa. Data were scarce, so most of the faults of this group were identified solely by interpretation of satellite imagery and seismicity. Some longer faults of the group may continue the transverse faults of the Atlantic and thus penetrate into the mantle. This seems evident for the Cameroon fault line.
Revised seismic hazard map for the Kyrgyz Republic
NASA Astrophysics Data System (ADS)
Fleming, Kevin; Ullah, Shahid; Parolai, Stefano; Walker, Richard; Pittore, Massimiliano; Free, Matthew; Fourniadis, Yannis; Villiani, Manuela; Sousa, Luis; Ormukov, Cholponbek; Moldobekov, Bolot; Takeuchi, Ko
2017-04-01
As part of a seismic risk study sponsored by the World Bank, a revised seismic hazard map for the Kyrgyz Republic has been produced using the OpenQuake engine developed by the Global Earthquake Model Foundation (GEM). In this project, an earthquake catalogue spanning the period from 250 BCE to 2014 was compiled and processed through spatial and temporal declustering tools. The territory of the Kyrgyz Republic was divided into 31 area sources defined on the basis of local seismicity, covering the country plus a buffer extending 200 km from the border. The results are presented in terms of Peak Ground Acceleration (PGA). In addition, macroseismic intensity estimates, making use of recent intensity prediction equations, were also provided, given that this measure is still widely used in Central Asia. In order to accommodate the associated epistemic uncertainty, three ground motion prediction equations were used in a logic tree structure. A set of representative earthquake scenarios was further identified based on historical data and the nature of the considered faults. The resulting hazard map, as expected, follows the country's seismicity, with the highest levels of hazard in the northeast, south and southwest of the country, with an elevated part around the centre. When considering PGA, the hazard is slightly greater for major urban centres than in previous works (e.g., Abdrakhmatov et al., 2003), although the macroseismic intensity estimates are less than previous studies, e.g., Ulomov (1999). For the scenario assessments, the examples that most affect the assessed urban centres are the Issyk Ata fault (in particular for Bishkek), the Chilik and Kemin faults (in particular Balykchy and Karakol), the Ferghana Valley fault system (in particular Osh, Jalal-Abad and Uzgen), the Oinik Djar fault (Naryn) and the central and western Talas-Ferghana fault (Talas). Finally, while site effects (in particular, those dependent on the uppermost geological structure) have an obvious effect on the final hazard level, this is still not fully accounted for, even if a nation-wide first-order Vs30 model (i.e., from the USGS) is available. Abdrakhmatov, K., Havenith, H.-B., Delvaux, D., Jongsmans, D. and Trefois, P. (2003) Probabilistic PGA and Arias Intensity maps of Kyrgyzstan (Central Asia), Journal of Seismology, 7, 203-220. Ulomov, V.I., The GSHAP Region 7 working group (1999) Seismic hazard of Northern Eurasia, Annali di Geofisica, 42, 1012-1038.
Earthquakes and faults in the San Francisco Bay area (1970-2003)
Sleeter, Benjamin M.; Calzia, James P.; Walter, Stephen R.; Wong, Florence L.; Saucedo, George J.
2004-01-01
The map depicts both active and inactive faults and earthquakes magnitude 1.5 to 7.0 in the greater San Francisco Bay area. Twenty-two earthquakes magnitude 5.0 and greater are indicated on the map and listed chronologically in an accompanying table. The data are compiled from records from 1970-2003. The bathymetry was generated from a digital version of NOAA maps and hydrogeographic data for San Francisco Bay. Elevation data are from the USGS National Elevation Database. Landsat satellite image is from seven Landsat 7 Enhanced Thematic Mapper Plus scenes. Fault data are reproduced with permission from the California Geological Survey. The earthquake data are from the Northern California Earthquake Catalog.
Viewpoint on ISA TR84.0.02--simplified methods and fault tree analysis.
Summers, A E
2000-01-01
ANSI/ISA-S84.01-1996 and IEC 61508 require the establishment of a safety integrity level for any safety instrumented system or safety related system used to mitigate risk. Each stage of design, operation, maintenance, and testing is judged against this safety integrity level. Quantitative techniques can be used to verify whether the safety integrity level is met. ISA-dTR84.0.02 is a technical report under development by ISA, which discusses how to apply quantitative analysis techniques to safety instrumented systems. This paper discusses two of those techniques: (1) Simplified equations and (2) Fault tree analysis.
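As a flavor of the simplified-equation technique, a minimal sketch of the standard single-channel (1oo1) approximation PFDavg ~ lambda_DU * TI / 2, with illustrative failure data (not numbers taken from the technical report):

    # 1oo1 safety function: average probability of failure on demand.
    lambda_du = 2e-6   # dangerous undetected failures per hour (assumed)
    ti = 8760.0        # proof-test interval in hours (one year)

    pfd_avg = lambda_du * ti / 2
    print(f"PFDavg = {pfd_avg:.2e}")  # 8.76e-03, inside the SIL 2 band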
TH-EF-BRC-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomadsen, B.
2016-06-15
This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis: Process mapping, Failure-Modes and Effects Analysis and fault-tree analysis will be introduced with a 5 minute refresher presentation and each presentation will be followed by a 30 minute small group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
Estimating earthquake-induced failure probability and downtime of critical facilities.
Porter, Keith; Ramer, Kyle
2012-01-01
Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Qian, Yu
2016-02-15
Haze weather has become a serious environmental pollution problem in many Chinese cities. One of the most critical factors in the formation of haze weather is the exhaust from coal combustion, so it is worthwhile to work out the causation mechanism linking urban haze and coal combustion exhaust. Based on these considerations, the fault tree analysis (FTA) approach was employed for the first time to study the causation mechanism of urban haze in Beijing, considering the risk events related to coal combustion exhaust. Using this approach, the fault tree of the urban haze causation system connected with coal combustion exhaust was first established; the risk events were then discussed and identified; the minimal cut sets were determined using Boolean algebra; and finally, the structure, probability, and critical importance degree analyses of the risk events were completed for the qualitative and quantitative assessment. The results showed that FTA is an effective and simple tool for the causation mechanism analysis and risk management of urban haze in China. Copyright © 2015 Elsevier B.V. All rights reserved.
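A minimal Python sketch of the minimal-cut-set step (a MOCUS-style top-down expansion with Boolean absorption); the gate structure and event names are illustrative, not the paper's Beijing haze model.

    # Gates map a name to (type, children); anything else is a basic event.
    gates = {
        "HAZE": ("OR",  ["EMISSIONS", "STAGNANT_AIR"]),
        "EMISSIONS": ("AND", ["COAL_BURNING", "NO_SCRUBBER"]),
    }

    def cut_sets(event):
        if event not in gates:          # basic event
            return [{event}]
        kind, children = gates[event]
        if kind == "OR":                # union of the children's cut sets
            return [cs for c in children for cs in cut_sets(c)]
        combined = [set()]              # AND: cross-product of cut sets
        for c in children:
            combined = [a | b for a in combined for b in cut_sets(c)]
        return combined

    def minimal(sets):                  # drop supersets (Boolean absorption)
        return [s for s in sets if not any(t < s for t in sets)]

    print(minimal(cut_sets("HAZE")))
    # [{'COAL_BURNING', 'NO_SCRUBBER'}, {'STAGNANT_AIR'}]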
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents with impacts on human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the LNG regasification unit. In this case, the failure is caused by a Boiling Liquid Expanding Vapor Explosion (BLEVE) or a jet fire in the LNG storage tank component. The failure probability can be determined using Fault Tree Analysis (FTA). In addition, the impact of the heat radiation generated is calculated. Fault trees for BLEVE and jet fire on the storage tank component were constructed, yielding a failure probability of 5.63 × 10^-19 for BLEVE and 9.57 × 10^-3 for jet fire. The failure probability for jet fire is high enough that it needs to be reduced, which was done by customizing the P&ID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. The failure probability after customization is 4.22 × 10^-6.
Comparative autecological characteristics of northwestern tree species: a literature review.
Don Minore
1979-01-01
This report is a compilation of autecological information previously scattered about in several hundred publications. It includes a comparison of the tolerances, traits, and attributes of native northwestern tree species. The species are ranked with respect to 69 environmental factors, phenotypic characteristics, and physical parameters. These rankings, with the...
Updated generalized biomass equations for North American tree species
David C. Chojnacky; Linda S. Heath; Jennifer C. Jenkins
2014-01-01
Historically, tree biomass at large scales has been estimated by applying dimensional analysis techniques and field measurements such as diameter at breast height (dbh) in allometric regression equations. Equations often have been developed using differing methods and applied only to certain species or isolated areas. We previously had compiled and combined (in meta-...
Morgan, K.S.; Pattyn, G.J.; Morgan, M.L.
2005-01-01
Internet mapping applications for geologic data allow simultaneous data delivery and collection, enabling quick data modification while efficiently supplying the end user with information. Utilizing Web-based technologies, the Colorado Geological Survey's Colorado Late Cenozoic Fault and Fold Database was transformed from a monothematic, nonspatial Microsoft Access database into a complex information set incorporating multiple data sources. The resulting user-friendly format supports easy analysis and browsing. The core of the application is the Microsoft Access database, which contains information compiled from available literature about faults and folds that are known or suspected to have moved during the late Cenozoic. The database contains nonspatial fields such as structure type, age, and rate of movement. Geographic locations of the fault and fold traces were compiled from previous studies at 1:250,000 scale to form a spatial database containing information such as length and strike. Integration of the two databases allowed both spatial and nonspatial information to be presented on the Internet as a single dataset (http://geosurvey.state.co.us/pubs/ceno/). The user-friendly interface enables users to view and query the data in an integrated manner, thus providing multiple ways to locate desired information. Retaining the digital data format also allows continuous data updating and quick delivery of newly acquired information. This dataset is a valuable resource to anyone interested in earthquake hazards and the activity of faults and folds in Colorado. Additional geologic hazard layers and imagery may aid in decision support and hazard evaluation. The up-to-date and customizable maps are invaluable tools for researchers or the public.
NASA Astrophysics Data System (ADS)
Li, Shuanghong; Cao, Hongliang; Yang, Yupu
2018-02-01
Fault diagnosis is a key process for the reliability and safety of solid oxide fuel cell (SOFC) systems. However, it is difficult to rapidly and accurately identify faults in complicated SOFC systems, especially when simultaneous faults appear. In this research, a data-driven Multi-Label (ML) pattern identification approach is proposed to address the simultaneous fault diagnosis of SOFC systems. The framework of the simultaneous-fault diagnosis primarily includes two components: feature extraction and an ML-SVM classifier. The approach can be trained to diagnose simultaneous SOFC faults, such as fuel leakage and air leakage at different positions in the SOFC system, using simple training data sets consisting of only single faults, without requiring simultaneous-fault data. The experimental results show that the proposed framework can diagnose simultaneous SOFC system faults with high accuracy while requiring little training data and low computational burden. In addition, Fault Inference Tree Analysis (FITA) is employed to identify the correlations among possible faults and their corresponding symptoms at the system component level.
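A minimal sketch of the multi-label idea on synthetic data: one binary SVM per fault class is trained only on normal and single-fault examples, and a simultaneous fault then fires several detectors at once. Feature dimensions and fault signatures are invented for illustration.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    normal    = rng.normal(0.0, 0.05, (50, 4))
    fuel_leak = rng.normal(0.0, 0.05, (50, 4)) + [1, 0, 0, 0]
    air_leak  = rng.normal(0.0, 0.05, (50, 4)) + [0, 1, 0, 0]
    X = np.vstack([normal, fuel_leak, air_leak])

    y_fuel = np.r_[np.zeros(50), np.ones(50), np.zeros(50)]
    y_air  = np.r_[np.zeros(100), np.ones(50)]
    detectors = {"fuel_leak": SVC(kernel="linear").fit(X, y_fuel),
                 "air_leak":  SVC(kernel="linear").fit(X, y_air)}

    simultaneous = np.array([[1.0, 1.0, 0.0, 0.0]])  # never seen in training
    print({k: int(m.predict(simultaneous)[0]) for k, m in detectors.items()})
    # expected: {'fuel_leak': 1, 'air_leak': 1}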
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Haeussler, P. J.; Seitz, G. G.; Dawson, T. E.; Stenner, H. D.; Matmon, A.; Crone, A. J.; Personius, S.; Burns, P. B.; Cadena, A.; Thoms, E.
2005-12-01
Developing accurate rupture histories of long, high-slip-rate strike-slip faults is especially challenging where recurrence is relatively short (hundreds of years), adjacent segments may fail within decades of each other, and uncertainties in dating can be as large as, or larger than, the time between events. The Denali Fault system (DFS) is the major active structure of interior Alaska, but received little study after pioneering fault investigations in the early 1970s. Until the summer of 2003 essentially no data existed on the timing or spatial distribution of past ruptures on the DFS. This changed with the occurrence of the M7.9 2002 Denali fault earthquake, which has been a catalyst for present paleoseismic investigations. It provided a well-constrained rupture length and slip distribution. Strike-slip faulting occurred along 290 km of the Denali and Totschunda faults, leaving unruptured ~140 km of the eastern Denali fault, ~180 km of the western Denali fault, and ~70 km of the eastern Totschunda fault. The DFS presents us with a blank canvas on which to fill a chronology of past earthquakes using modern paleoseismic techniques. Aware of correlation issues with potentially closely timed earthquakes, we have (a) investigated 11 paleoseismic sites that allow a variety of dating techniques, (b) measured paleo-offsets, which provide insight into the magnitude and rupture length of past events, at 18 locations, and (c) developed late Pleistocene and Holocene slip rates using exposure-age dating to constrain long-term fault behavior models. We are in the process of: (1) radiocarbon-dating peats involved in faulting and liquefaction, and especially short-lived forest floor vegetation that includes outer rings of trees, spruce needles, and blueberry leaves killed and buried during paleoearthquakes; (2) supporting development of a 700-900 year tree-ring time series for precise dating of trees used in event timing; (3) employing Pb-210 for constraining the youngest ruptures in sag ponds on the eastern and western Denali fault; and (4) using volcanic ashes in trenches for dating and correlation. Initial results are: (1) Large earthquakes occurred along the 2002 rupture section 350-700 yrb02 (2-sigma, calendar-corrected, years before 2002) with offsets about the same as 2002. The Denali penultimate rupture appears younger (350-570 yrb02) than the Totschunda (580-700 yrb02); (2) The western Denali fault is geomorphically fresh, its MRE likely occurred within the past 250 years, the penultimate event occurred 570-680 yrb02, and slip in each event was 4 m; (3) The eastern Denali MRE post-dates peat dated at 550-680 yrb02, is younger than the penultimate Totschunda event, and could be part of the penultimate Denali fault rupture or a separate earthquake; (4) A 120-km section of the Denali fault between the Nenana Glacier and the Delta River may be a zone of overlap for large events and/or capable of producing smaller earthquakes; its western part has fresh scarps with small (1 m) offsets. 2004/2005 field observations show there are longer datable records, with 4-5 events recorded in trenches on the eastern Denali fault and the west end of the 2002 rupture, 2-3 events on the western part of the fault in Denali National Park, and 3-4 events on the Totschunda fault. These and extensive datable material provide the basis to define the paleoseismic history of DFS earthquake ruptures through multiple and complete earthquake cycles.
Support vector machines-based fault diagnosis for turbo-pump rotor
NASA Astrophysics Data System (ADS)
Yuan, Sheng-Fa; Chu, Fu-Lei
2006-05-01
Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and generalise poorly when fault samples are few. The support vector machine (SVM) is a general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is originally designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification algorithm named 'one to others' is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, involves little repeated training, and speeds up both training and recognition. The effectiveness of the method is verified by its application to fault diagnosis for a turbo-pump rotor.
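A minimal sketch of the 'one to others' cascade on synthetic data: classifiers are ordered by fault priority, each separating one class from all remaining ones. Class names, features, and the priority order are illustrative.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    classes = ["unbalance", "misalignment", "rub"]       # priority order
    centers = {"unbalance": [2, 0], "misalignment": [0, 2], "rub": [0, 0]}
    X = np.vstack([rng.normal(centers[c], 0.2, (40, 2)) for c in classes])
    y = np.repeat(classes, 40)

    cascade = []
    Xr, yr = X, y
    for c in classes[:-1]:
        clf = SVC(kernel="linear").fit(Xr, yr == c)      # this class vs others
        cascade.append((c, clf))
        Xr, yr = Xr[yr != c], yr[yr != c]                # next stage: the rest

    def classify(sample):
        for c, clf in cascade:
            if clf.predict([sample])[0]:
                return c
        return classes[-1]                               # last remaining class

    print(classify([1.9, 0.1]))                          # -> 'unbalance'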
EDNA: Expert fault digraph analysis using CLIPS
NASA Technical Reports Server (NTRS)
Dixit, Vishweshwar V.
1990-01-01
Traditionally, fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate the real system dependencies and hence are easy to develop, validate, and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find. Available algorithms tend to be complicated and slow. On the other hand, tree analysis (VGRH, Tayl) is well understood and rooted in a vast research effort and analytical techniques. Tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a (cyclic) digraph into trees (CLP, LP) is a viable approach to blend the advantages of the two representations. Neither digraphs nor trees provide the ability to handle heuristic knowledge. An expert system, to capture the engineering knowledge, is essential. We propose an approach here, namely, expert network analysis. We combine the digraph representation and tree algorithms. The models are augmented by probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge. Mixed analysis, with only some nodes carrying probabilities, is possible. The tool provides a graphics interface for input, query, and update. With the combined approach, it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.
NASA Astrophysics Data System (ADS)
Hu, Bingbing; Li, Bing
2016-02-01
It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method, originally based on the discrete wavelet transform (DWT), has disadvantages such as shift variance and aliasing effects in engineering applications. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis thanks to the merits of DTCWT (near shift invariance and reduced aliasing). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching a single wavelet basis function to the signal being analyzed, which may speed up the signal processing and allows on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer-ring and shaft-coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting fault features than the original DWT-based MSTSR, wavelet transform with post-spectral analysis, and EMD-based spectral analysis methods.
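For intuition, a minimal simulation of the bistable stochastic-resonance mechanism that noise-tuning methods such as MSTSR exploit; this is the textbook system dx/dt = a*x - b*x^3 + drive + noise, not the authors' algorithm, and all parameters are illustrative.

    import numpy as np

    a, b = 1.0, 1.0                       # bistable potential parameters
    dt, n = 1e-3, 200_000
    f0, amp, noise_d = 2.0, 0.3, 0.6      # drive alone cannot cross the barrier

    rng = np.random.default_rng(0)
    t = np.arange(n) * dt
    dw = rng.normal(0.0, np.sqrt(dt), n)  # Brownian increments
    x = np.zeros(n)
    for i in range(1, n):
        drift = a * x[i-1] - b * x[i-1]**3 + amp * np.sin(2*np.pi*f0*t[i-1])
        x[i] = x[i-1] + drift * dt + np.sqrt(2 * noise_d) * dw[i]

    # At a suitable noise level the inter-well hopping locks to the drive,
    # so a spectral line stands out at f0 above the local background.
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    k = int(round(f0 * n * dt))           # FFT bin of the drive frequency
    background = np.median(spec[max(1, k - 20): k + 20])
    print("power ratio at f0:", spec[k] / background)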
Locating hardware faults in a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-04-13
Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
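A minimal sketch of the partitioning idea for a complete binary tree (nodes numbered 1..2^D - 1, children of node i at 2i and 2i+1); the two-tier level size and the depth are illustrative.

    DEPTH = 6   # tiers 0..5, nodes 1..63

    def test_levels(depth, tiers_per_level=2):
        levels = []
        for top in range(0, depth, tiers_per_level):
            cells = []
            for root in range(2 ** top, 2 ** (top + 1)):
                cell = [root]   # subtree root of this test cell
                for t in range(1, min(tiers_per_level, depth - top)):
                    cell.extend(range(root * 2 ** t, (root + 1) * 2 ** t))
                cells.append(cell)
            levels.append(cells)
        return levels

    for i, level in enumerate(test_levels(DEPTH)):
        links = sum(len(cell) - 1 for cell in level)
        print(f"level {i}: {len(level)} cells, {links} links covered")

    # A second set of levels, offset by one tier, covers the links that
    # cross between these levels, so the two sets together test every link.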
NASA Technical Reports Server (NTRS)
Shooman, Martin L.
1991-01-01
Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer aided design programs. In response to this need, NASA has developed a suite of computer aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as: HARP, HARP-PC, Reliability Analysts Workbench (Combination of model solvers SURE, STEM, PAWS, and common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied and how well the user can model systems using this program is investigated. One of the important objectives will be to study how user friendly this program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course no answer can be any more accurate than the fidelity of the model, thus an Appendix is included which discusses modeling accuracy. A broad viewpoint is taken and all problems which occurred in the use of HARP are discussed. Such problems include: computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.
Geologic Map of the Santa Barbara Coastal Plain Area, Santa Barbara County, California
Minor, Scott A.; Kellogg, Karl S.; Stanley, Richard G.; Gurrola, Larry D.; Keller, Edward A.; Brandt, Theodore R.
2009-01-01
This report presents a newly revised and expanded digital geologic map of the Santa Barbara coastal plain area at a compilation scale of 1:24,000 (one inch on the map to 2,000 feet on the ground) and with a horizontal positional accuracy of at least 20 m. The map depicts the distribution of bedrock units and surficial deposits and associated deformation underlying and adjacent to the coastal plain within the contiguous Dos Pueblos Canyon, Goleta, Santa Barbara, and Carpinteria 7.5' quadrangles. The new map supersedes an earlier preliminary geologic map of the central part of the coastal plain (Minor and others, 2002; revised 2006) that provided coastal coverage only within the Goleta and Santa Barbara quadrangles. In addition to new mapping to the west and east, geologic mapping in parts of the central map area has been significantly revised from the preliminary map compilation - especially north of downtown Santa Barbara in the Mission Ridge area - based on new structural interpretations supplemented by new biostratigraphic data. All surficial and bedrock map units, including several new units recognized in the areas of expanded mapping, are described in detail in the accompanying pamphlet. Abundant new biostratigraphic and biochronologic data based on microfossil identifications are presented in expanded unit descriptions of the marine Neogene Monterey and Sisquoc Formations. Site-specific fault kinematic observations embedded in the digital map database are more complete owing to the addition of slip-sense determinations. Finally, the pamphlet accompanying the present report includes an expanded and refined summary of stratigraphic and structural observations and interpretations that are based on the composite geologic data contained in the new map compilation. The Santa Barbara coastal plain is located in the western Transverse Ranges physiographic province along an east-west-trending segment of the southern California coastline about 100 km (62 mi) northwest of Los Angeles. The coastal plain surface includes several mesas and hills that are geomorphic expressions of potentially active folds and partly buried oblique and reverse faults of the Santa Barbara fold and fault belt (SBFFB) that transects the coastal plain. Strong earthquakes have occurred offshore within 10 km of the Santa Barbara coastal plain in 1925 (6.3 magnitude), 1941 (5.5 magnitude), and 1978 (5.1 magnitude). These and numerous smaller seismic events located beneath and offshore of the coastal plain likely occurred on reverse-oblique-slip faults that are similar to, or continuous with, Quaternary reverse faults crossing the coastal plain. Thus, faults of the SBFFB pose a significant earthquake hazard to the approximately 200,000 people living within the major coastal population centers of Santa Barbara, Goleta, and Carpinteria. In addition, numerous Quaternary landslide deposits along the steep southern flank of the Santa Ynez Mountains indicate the potential for continued slope failures and mass movements in developed areas. Folded, faulted, and fractured sedimentary rocks in the subsurface of the coastal plain and adjacent Santa Barbara Channel are sources and form reservoirs for economic deposits of oil and gas, some of which are currently being extracted offshore. Shallow, localized sedimentary aquifers underlying the coastal plain provide limited amounts of water for the urban areas, but the quality of some of this groundwater is compromised by coastal salt-water contamination.
The present map compilation provides a set of uniform geologic digital coverages that can be used for analysis and interpretation of these and other geologic hazards and resources in the coastal plain region.
FTMP (Fault Tolerant Multiprocessor) programmer's manual
NASA Technical Reports Server (NTRS)
Feather, F. E.; Liceaga, C. A.; Padilla, P. A.
1986-01-01
The Fault Tolerant Multiprocessor (FTMP) computer system was constructed using the Rockwell/Collins CAPS-6 processor. It is installed in the Avionics Integration Research Laboratory (AIRLAB) of NASA Langley Research Center. It is hosted by AIRLAB's System 10, a VAX 11/750, for the loading of programs and experimentation. The FTMP support software includes a cross compiler for a high-level language called the Automated Engineering Design (AED) System, an assembler for the CAPS-6 processor assembly language, and a linker. Access to this support software is through an automated remote access facility on the VAX which relieves the user of the burden of learning how to use the IBM 4381. This manual is a compilation of information about the FTMP support environment. It explains the FTMP software and support environment along with many of the finer points of running programs on FTMP. This will be helpful to the researcher trying to run an experiment on FTMP and even to the person probing FTMP with fault injections. Much of the information in this manual can be found in other sources; we are only attempting to bring together the basic points in a single source. If the reader should need points clarified, there is a list of support documentation in the back of this manual.
Snake River Plain Geothermal Play Fairway Analysis - Phase 1 KMZ files
John Shervais
2015-10-10
This dataset contains raw data in KMZ files (Google Earth georeferenced format). These files include volcanic vent locations and ages, the distribution of fine-grained lacustrine sediments (which act as both a seal and an insulating layer for hydrothermal fluids), and post-Miocene faults compiled from the Idaho Geological Survey, the USGS Quaternary fault database, and unpublished mapping. It also contains the Composite Common Risk Segment Map created during Phase 1 studies, as well as a file with locations of selected deep wells used to interrogate the subsurface.
Ex situ gene conservation for conifers in the Pacific Northwest.
Sara R. Lipow; J. Bradley St. Clair; G.R. Johnson
2002-01-01
Recently, a group of public and private organizations responsible for managing much of the timberland in western Oregon and Washington formed the Pacific Northwest forest tree Gene Conservation Group (GCG) to ensure that the evolutionary potential of important regional tree species is maintained. The group is first compiling data to evaluate the genetic resource status...
San Andreas drilling sites selected
NASA Astrophysics Data System (ADS)
Ellsworth, Bill; Zoback, Mark
A new initiative for drilling and coring directly into the San Andreas fault at depths up to 10 km is being proposed by an international team of scientists led by Mark Zoback, Stanford University; Steve Hickman and Bill Ellsworth, U.S. Geological Survey; and Lee Younker, Lawrence Livermore Laboratory. In addition to exhuming samples of fault rock and fluids from seismogenic depths, the hole will be used to make a wide range of geophysical measurements within the fault zone and to monitor the fault zone over time. Four areas along the San Andreas have been selected as candidates for deep drilling: the Mojave segment of the San Andreas between Leona Valley and Big Pine, the Carrizo Plain, the San Francisco Peninsula between Los Altos and Daly City, and the Northern Gabilan Range between the Cienega winery and Melendy Ranch. These sites were chosen from an initial list compiled at the International Fault Zone Drilling Workshop held in Asilomar, Calif., in December 1992 and at meetings held this winter and spring in Menlo Park, Calif.
A new North American fire scar network for reconstructing historical pyrogeography, 1600-1900 AD
Donald A. Falk; Thomas Swetnam; Thomas Kitzberger; Elaine Sutherland; Peter Brown; Erica Bigio; Matthew Hall
2013-01-01
The Fire and Climate Synthesis (FACS) project is a collaboration of about 50 fire ecologists to compile and synthesize fire and climate data for western North America. We have compiled nearly 900 multi-century fire-scar based fire histories from the western United States, Canada, and Mexico. The resulting tree-ring based fire history is the largest and most spatially...
Model authoring system for fail safe analysis
NASA Technical Reports Server (NTRS)
Sikora, Scott E.
1990-01-01
The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, the circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object-oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which are then searched to generate either a graphical fault tree analysis or failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be modeled easily, permit its behavior to be defined quickly, and provide abstraction features to deal with complexity.
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
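A toy illustration of why coverage matters in such models: for a duplex system, the chance that the first fault is not 'covered' (detected, isolated, and recovered from) quickly dominates the exhaustion-of-spares term. The numbers are illustrative, not the F18 FCS data.

    import math

    lam, t = 1e-4, 10.0          # per-channel failure rate and mission time
    p = 1 - math.exp(-lam * t)   # per-channel failure probability

    for c in (1.0, 0.999, 0.99, 0.9):
        # uncovered first fault, or exhaustion of both channels
        p_fail = 2 * p * (1 - p) * (1 - c) + p ** 2
        print(f"coverage={c}: P(system failure) ~ {p_fail:.2e}")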
Chen, Gang; Song, Yongduan; Lewis, Frank L
2016-05-03
This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve the coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating for the actuator bias fault, the partial loss of effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance simultaneously. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of multiple robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot-arms are conducted and the results confirm the benefits and the effectiveness of the proposed distributed fault-tolerant control algorithms.
Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Ren, C. X.; Bolton, D. C.; Marone, C.; Johnson, P. A.
2017-12-01
Fault friction controls nearly all aspects of fault rupture, yet it can only be measured in the laboratory. Here we describe laboratory experiments where acoustic emissions are recorded from the fault. We find that by applying a machine learning approach known as "extreme gradient boosting trees" to the continuous acoustical signal, the fault friction can be directly inferred, showing that instantaneous characteristics of the acoustic signal are a fingerprint of the frictional state. This machine learning-based inference leads to a simple law that links the acoustic signal to the friction state, and holds for every stress cycle the laboratory fault goes through. The approach uses no measured parameter other than instantaneous statistics of the acoustic signal. This finding may be important for inferring frictional characteristics from seismic waves in the Earth, where fault friction cannot be measured.
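A minimal sketch of the inference idea on synthetic data: boosted regression trees map instantaneous signal statistics to a frictional state. scikit-learn's gradient boosting is used here as a stand-in for the XGBoost model named in the abstract, and the 'statistics' are fabricated features that track a synthetic stress cycle.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    friction = np.abs(np.sin(np.linspace(0, 20, n)))    # synthetic stress cycles
    features = np.column_stack([                        # fabricated stand-ins
        friction + rng.normal(0, 0.05, n),              # e.g. signal variance
        friction ** 2 + rng.normal(0, 0.05, n),         # e.g. a higher moment
    ])

    X_tr, X_te, y_tr, y_te = train_test_split(features, friction, random_state=0)
    model = GradientBoostingRegressor(n_estimators=200).fit(X_tr, y_tr)
    print("held-out R^2:", round(model.score(X_te, y_te), 3))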
The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications
NASA Technical Reports Server (NTRS)
Chau, Savio N.; Alkalai, Leon; Tai, Ann T.
2000-01-01
The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.
Statistical mechanics and scaling of fault populations with increasing strain in the Corinth Rift
NASA Astrophysics Data System (ADS)
Michas, Georgios; Vallianatos, Filippos; Sammonds, Peter
2015-12-01
Scaling properties of fracture/fault systems are studied in order to characterize the mechanical properties of rocks and to provide insight into the mechanisms that govern fault growth. A comprehensive image of the fault network in the Corinth Rift, Greece, obtained through numerous field studies and marine geophysical surveys, allows for the first time such a study over the entire area of the Rift. We compile a detailed fault map of the area and analyze the scaling properties of fault trace-lengths by using a statistical mechanics model derived in the framework of generalized statistical mechanics and the associated maximum entropy principle. Within this framework, a range of asymptotic power-law to exponential-like distributions is derived that can well describe the observed scaling patterns of fault trace-lengths in the Rift. Systematic variations, in particular a transition from asymptotic power-law to exponential-like scaling, are observed as a function of increasing strain in distinct strain regimes of the Rift, providing quantitative evidence for such crustal processes in a single tectonic setting. These results indicate the organization of the fault system as a function of brittle strain in the Earth's crust and suggest that there are different mechanisms for fault growth in the distinct parts of the Rift. In addition, other factors such as fault interactions and the thickness of the brittle layer affect how the fault system evolves in time. The results suggest that regional strain, fault interactions and the boundary condition of the brittle layer may control fault growth and the fault network evolution in the Corinth Rift.
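For reference, the q-exponential family typical of such generalized-statistical-mechanics models interpolates between power-law (q > 1) and exponential (q -> 1) decay of the trace-length survival function; the parameter values below are illustrative.

    import math

    def q_exp_ccdf(l, q, lam):
        """P(length > l) for a q-exponential; power-law tail when q > 1."""
        if abs(q - 1.0) < 1e-9:
            return math.exp(-l / lam)
        base = 1.0 - (1.0 - q) * l / lam
        return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

    for q in (1.0, 1.2, 1.5):   # larger q gives a heavier tail
        print(q, [round(q_exp_ccdf(l, q, lam=5.0), 4) for l in (1, 5, 20, 50)])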
Fault-zone waves observed at the southern Joshua Tree earthquake rupture zone
Hough, S.E.; Ben-Zion, Y.; Leary, P.
1994-01-01
Waveform and spectral characteristics of several aftershocks of the M 6.1 22 April 1992 Joshua Tree earthquake recorded at stations just north of the Indio Hills in the Coachella Valley can be interpreted in terms of waves propagating within narrow, low-velocity, high-attenuation, vertical zones. Evidence for our interpretation consists of: (1) emergent P arrivals prior to and opposite in polarity to the impulsive direct phase; these arrivals can be modeled as headwaves indicative of a transfault velocity contrast; (2) spectral peaks in the S wave train that can be interpreted as internally reflected, low-velocity fault-zone wave energy; and (3) spatial selectivity of event-station pairs at which these data are observed, suggesting a long, narrow geologic structure. The observed waveforms are modeled using the analytical solution of Ben-Zion and Aki (1990) for a plane-parallel layered fault-zone structure. Synthetic waveform fits to the observed data indicate the presence of NS-trending vertical fault-zone layers characterized by a thickness of 50 to 100 m, a velocity decrease of 10 to 15% relative to the surrounding rock, and a P-wave quality factor in the range 25 to 50.
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
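A minimal sketch of one standard possibility-to-probability transformation (draw a possibility level uniformly, then draw uniformly within the corresponding alpha-cut), propagated through a two-event OR gate; the triangular possibility bounds are illustrative, and this is not the article's full hybrid framework.

    import numpy as np

    lo, mode, hi = 1e-4, 1e-3, 5e-3   # triangular possibility of p(event)
    rng = np.random.default_rng(0)

    def sample_from_possibility(size):
        alpha = rng.uniform(0, 1, size)
        left = lo + alpha * (mode - lo)     # alpha-cut lower bound
        right = hi - alpha * (hi - mode)    # alpha-cut upper bound
        return rng.uniform(left, right)

    # Propagate the epistemic uncertainty through a two-event OR gate.
    p1 = sample_from_possibility(100_000)
    p2 = sample_from_possibility(100_000)
    p_top = 1 - (1 - p1) * (1 - p2)
    print(np.percentile(p_top, [5, 50, 95]))  # spread of the top-event value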
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost-effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before finding the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system that performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as diagnostic knowledge; it then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
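The ranking idea can be made concrete in a few lines: with a single failure present, investigating suspects in decreasing order of probability divided by isolation cost minimizes the expected time to find it. The suspect list below is hypothetical.

    suspects = [   # (component, P(cause | symptoms), isolation time in minutes)
        ("power_supply",  0.10,  5),
        ("sensor_board",  0.55, 30),
        ("cable_harness", 0.35, 10),
    ]

    def expected_time(order):
        total, elapsed = 0.0, 0.0
        for _, p, t in order:
            elapsed += t
            total += p * elapsed   # time spent if this one is the culprit
        return total

    ranked = sorted(suspects, key=lambda s: s[1] / s[2], reverse=True)
    print([s[0] for s in ranked], expected_time(ranked))   # 29.75 minutes
    print(expected_time(suspects))                         # 35.5 in listed order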
W. John Kress; David L. Erickson; Nathan G. Swenson; Jill Thompson; Maria Uriarte; Jess K. Zimmerman; Jerome Chave
2010-01-01
BACKGROUND: Species number, functional traits, and phylogenetic history all contribute to characterizing the biological diversity in plant communities. The phylogenetic component of diversity has been particularly difficult to quantify in species-rich tropical tree assemblages. The compilation of previously published (and often incomplete) data on evolutionary...
Upstream watershed condition predicts rural children's health across 35 developing countries.
Herrera, Diego; Ellis, Alicia; Fisher, Brendan; Golden, Christopher D; Johnson, Kiersten; Mulligan, Mark; Pfaff, Alexander; Treuer, Timothy; Ricketts, Taylor H
2017-10-09
Diarrheal disease (DD) due to contaminated water is a major cause of child mortality globally. Forests and wetlands can provide ecosystem services that help maintain water quality. To understand the connections between land cover and childhood DD, we compiled a database of 293,362 children in 35 countries with information on health, socioeconomic factors, climate, and watershed condition. Using hierarchical models, here we find that higher upstream tree cover is associated with lower probability of DD downstream. This effect is significant for rural households but not for urban households, suggesting differing dependence on watershed conditions. In rural areas, the effect of a 30% increase in upstream tree cover is similar to the effect of improved sanitation, but smaller than the effect of improved water source, wealth or education. We conclude that maintaining natural capital within watersheds can be an important public health investment, especially for populations with low levels of built capital. Globally, diarrheal disease through contaminated water sources is a major cause of child mortality. Here, the authors compile a database of 293,362 children in 35 countries and find that upstream tree cover is linked to a lower probability of diarrheal disease and that increasing tree cover may lower mortality.
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
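A much-simplified flavor of information-theoretic test selection can be sketched for the single-fault case: pick the test with the largest expected entropy reduction per unit cost. This greedy heuristic is only a toy stand-in for the paper's multiple-fault algorithms; the fault-test matrix, priors, and costs below are invented.

```python
# Sketch: greedy, information-gain-per-unit-cost test selection for the
# simplified SINGLE-fault case (a toy stand-in for the paper's
# multiple-fault algorithms). Fault-test matrix, priors, and costs are
# invented for illustration.
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def best_test(priors, tests, costs):
    h0, best, best_score = entropy(priors), None, -1.0
    for name, outcomes in tests.items():
        gain = h0
        for val in (0, 1):
            group = [p for p, o in zip(priors, outcomes) if o == val]
            w = sum(group)
            if w > 0:
                gain -= w * entropy([p / w for p in group])
        score = gain / costs[name]        # expected bits gained per unit cost
        if score > best_score:
            best, best_score = name, score
    return best

priors = [0.5, 0.3, 0.2]                    # P(fault i), single-fault world
tests = {"t1": [0, 1, 1], "t2": [0, 0, 1]}  # 1 = test fails under fault i
costs = {"t1": 2.0, "t2": 1.0}
print("next test:", best_test(priors, tests, costs))
```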
MacDonald Iii, Angus W; Zick, Jennifer L; Chafee, Matthew V; Netoff, Theoden I
2015-01-01
The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time, the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect of psychiatry's syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between-diagnosis co-morbidity.
Optical fiber-fault surveillance for passive optical networks in S-band operation window
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Chi, Sien
2005-07-01
An S-band (1470 to 1520 nm) fiber laser scheme, which uses multiple fiber Bragg grating (FBG) elements as feedback elements on each passive branch, is proposed and described for in-service fault identification in passive optical networks (PONs). By tuning a wavelength selective filter located within the laser cavity over a gain bandwidth, the fiber-fault of each branch can be monitored without affecting the in-service channels. In our experiment, an S-band four-branch monitoring tree-structured PON system is demonstrated and investigated experimentally.
Sun, Weifang; Yao, Bin; Zeng, Nianyin; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-01-01
As a typical example of large and complex mechanical systems, rotating machinery is prone to diverse mechanical faults, among which gear transmission chains are one of the prominent sources of malfunction. Although fault signatures can be collected via vibration signals, they are always submerged in overwhelming interfering content, so identifying the critical fault characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale features of the signal, and a convolutional neural network (CNN) is utilized to automatically recognize fault features from the multiscale signal features. Experimental results on gear fault recognition show the feasibility and effectiveness of the proposed method, especially for weak gear fault features. PMID:28773148
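The classification stage might look like the minimal PyTorch sketch below, which assumes the DTCWT subband magnitudes have already been arranged as a scales-by-time matrix. The architecture, shapes, and class count are illustrative assumptions, not the authors' actual network.

```python
# Sketch of the classification stage only: a small CNN over precomputed
# multiscale features (e.g., DTCWT subband magnitudes arranged as a
# scales x time matrix). Architecture, shapes, and class count are
# illustrative assumptions, not the paper's network.
import torch
import torch.nn as nn

class GearFaultCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((3, 3)),
        )
        self.classifier = nn.Linear(32 * 3 * 3, n_classes)

    def forward(self, x):            # x: (batch, 1, n_scales, n_samples)
        return self.classifier(self.features(x).flatten(1))

model = GearFaultCNN()
dummy = torch.randn(8, 1, 6, 256)    # 8 signals, 6 wavelet scales, 256 samples
print(model(dummy).shape)            # -> torch.Size([8, 4])
```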
NASA Astrophysics Data System (ADS)
Legg, Mark R.; Kohler, Monica D.; Shintaku, Natsumi; Weeraratne, Dayanthie S.
2015-05-01
New mapping of two active transpressional fault zones in the California Continental Borderland, the Santa Cruz-Catalina Ridge fault and the Ferrelo fault, was carried out to characterize their geometries, using over 4500 line-km of new multibeam bathymetry data collected in 2010 combined with existing data. Faults identified from seafloor morphology were verified in the subsurface using existing seismic reflection data including single-channel and multichannel seismic profiles compiled over the past three decades. The two fault systems are parallel and are capable of large lateral offsets and reverse slip during earthquakes. The geometry of the fault systems shows evidence of multiple segments that could experience throughgoing rupture over distances exceeding 100 km. Published earthquake hypocenters from regional seismicity studies further define the lateral and depth extent of the historic fault ruptures. Historical and recent focal mechanisms obtained from first-motion and moment tensor studies confirm regional strain partitioning dominated by right slip on major throughgoing faults with reverse-oblique mechanisms on adjacent structures. Transpression on west and northwest trending structures persists as far as 270 km south of the Transverse Ranges; extension persists in the southern Borderland. A logjam model describes the tectonic evolution of crustal blocks bounded by strike-slip and reverse faults which are restrained from northwest displacement by the Transverse Ranges and the southern San Andreas fault big bend. Because of their potential for dip-slip rupture, the faults may also be capable of generating local tsunamis that would impact Southern California coastlines, including populated regions in the Channel Islands.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of the fault data. This is especially the case in the low-to-moderate-seismicity regions of Europe, where slow-slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow identifying the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and also provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool is illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that accommodates the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. Each node of the logic tree represents an option that can be considered at a given step of the fault-related seismic hazard computation. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given fault geometry and slip rate, different earthquake rupture scenarios that may occur in the complex network of faults, allowing the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e., minimum distance between faults) and a second that relies on physically based simulations. The following nodes represent, for each rupture scenario, different rupture forecast models (i.e., characteristic or Gutenberg-Richter) and, for a given rupture forecast, two probability models commonly used in seismic hazard assessment: Poissonian or time-dependent. The final node represents an exhaustive set of ground motion prediction equations chosen to be compatible with the region. Finally, the expected probability of exceeding a given ground motion level is computed at each site. Results will be discussed for a few specific localities of the West Corinth Gulf.
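The mechanics of such a logic tree can be sketched simply: branch weights multiply along each path through the tree, and the hazard results are combined with those weights. The branch options and the toy hazard function below are placeholders, not the study's actual computation, which would run a full hazard code per branch.

```python
# Sketch: enumerating a small PSHA logic tree and combining branch weights
# multiplicatively. Branch options and the toy hazard function are
# illustrative only.
from itertools import product

slip_rate   = [(3.0, 0.5), (5.0, 0.5)]            # (mm/yr, weight)
forecast    = [("GR", 0.6), ("characteristic", 0.4)]
probability = [("poisson", 0.5), ("time-dependent", 0.5)]

def toy_hazard(rate, fc, pm):
    # Placeholder for P(exceedance) at one site; invented scaling factors.
    base = 1e-3 * rate
    base *= 1.3 if fc == "characteristic" else 1.0
    base *= 1.5 if pm == "time-dependent" else 1.0
    return base

mean = 0.0
for (r, wr), (f, wf), (p, wp) in product(slip_rate, forecast, probability):
    w = wr * wf * wp                  # path weight = product of branch weights
    mean += w * toy_hazard(r, f, p)
print(f"weighted-mean exceedance probability: {mean:.2e}")
```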
NASA Astrophysics Data System (ADS)
Styron, Richard; Pagani, Marco; Garcia, Julio
2017-04-01
The region encompassing Central America and the Caribbean is tectonically complex, defined by the Caribbean plate's interactions with the North American, South American and Cocos plates. Though active deformation over much of the region has received at least cursory investigation over the past 50 years, the area is chronically understudied and lacks a modern, synoptic characterization. Regardless, the level of risk in the region - as dramatically demonstrated by the 2010 Haiti earthquake - remains high because of high-vulnerability buildings and dense urban areas home to over 100 million people, who are concentrated near plate boundaries and other major structures. As part of a broader program to study seismic hazard worldwide, the Global Earthquake Model Foundation is currently working to quantify seismic hazard in the region. To this end, we are compiling a database of active faults throughout the region that will be integrated into similar models as recently done in South America. Our initial compilation hosts about 180 fault traces in the region. The faults show a wide range of characteristics, reflecting the diverse styles of plate boundary and plate-margin deformation observed. Regional deformation ranges from highly localized faulting along well-defined strike-slip faults to broad zones of distributed normal or thrust faulting, and from readily-observable yet slowly-slipping structures to inferred faults with geodetically-measured slip rates >10 mm/yr but essentially no geomorphic expression. Furthermore, primary structures such as the Motagua-Polochic Fault Zone (the strike-slip plate boundary between the North American and Caribbean plates in Guatemala) display strong along-strike slip rate gradients, and many other structures are undersea for most or all of their length. A thorough assessment of seismic hazard in the region will require the integration of a range of datasets and techniques and a comprehensive characterization of the epistemic uncertainties driving the overall variability of hazard and risk results. For this reason, and in order to leverage the knowledge available in the region, the datasets and the hazard model will be developed in close collaboration with local experts, consistent with GEM's principles of transparency and collaboration. Regarding active faults in the shallow crust, we are currently working on assigning slip rates to structures based on geologic and geodetic strain rates, though this will be challenging in areas of sparse constraints. An additional area of ongoing work is the delineation of 3D seismic sources from disjoint fault traces; we are currently evaluating methods for this. Though work in the region is challenging, we anticipate that our results will not only lead to more robust seismic hazard and risk estimates for the region, but may serve as a template for workflows in other zones with poor or inhomogeneous data.
Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua
2015-01-01
The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too short to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.
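The renewal-model calculation at the heart of such forecasts can be sketched with the Brownian passage time (inverse Gaussian) distribution: the conditional probability of rupture in the next 30 years given the elapsed open interval. The mapping to scipy's parameterization and the fault numbers below are assumptions for illustration, not UCERF3 values.

```python
# Sketch: 30-yr conditional rupture probability from a BPT (inverse-
# Gaussian) renewal model, the kind of elastic-rebound model used in
# such forecasts. Assumed mapping to scipy's parameterization:
# BPT(mean m, aperiodicity a) = invgauss(mu=a**2, scale=m/a**2),
# which reproduces mean m and coefficient of variation a.
from scipy.stats import invgauss

def bpt_conditional(mean_ri, aperiodicity, t_elapsed, dt=30.0):
    dist = invgauss(mu=aperiodicity**2, scale=mean_ri / aperiodicity**2)
    survive = dist.sf(t_elapsed)                      # P(no event yet)
    return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / survive

# e.g., mean recurrence 150 yr, aperiodicity 0.5, 140 yr since last event
print(f"P(rupture in next 30 yr) = {bpt_conditional(150.0, 0.5, 140.0):.3f}")
```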
Application of Landsat imagery to problems of petroleum exploration in Qaidam Basin, China
Bailey, G.B.; Anderson, P.D.
1982-01-01
Tertiary and Quaternary nonmarine, petroleum-bearing sedimentary rocks have been extensively deformed by compressive forces. These forces created many folds which are current targets of Chinese exploration programs. Image-derived interpretations of folds, strike-slip faults, thrust faults, normal or reverse faults, and fractures compared very favorably, in terms of locations and numbers mapped, with Chinese data compiled from years of extensive field mapping. Many potential hydrocarbon trapping structures were precisely located. Orientations of major structural trends defined from Landsat imagery correlate well with those predicted for the area based on global tectonic theory. These correlations suggest that similar orientations exist in the eastern half of the basin where folded rocks are mostly obscured by unconsolidated surface sediments and where limited exploration has occurred.--Modified journal abstract.
SUMC fault tolerant computer system
NASA Technical Reports Server (NTRS)
1980-01-01
The results of the trade studies are presented. These trades cover: establishing the basic configuration, establishing the CPU/memory configuration, establishing an approach to crosstrapping interfaces, defining the requirements of the redundancy management unit (RMU), establishing a spare plane switching strategy for the fault-tolerant memory (FTM), and identifying the most cost effective way of extending the memory addressing capability beyond the 64 K-bytes (K=1024) of SUMC-II B. The results of the design are compiled in the Contract End Item (CEI) Specification for the NASA Standard Spacecraft Computer II (NSSC-II), IBM 7934507, which also covers the implementation of the FTM and the memory address expansion.
Study on the evaluation method for fault displacement based on characterized source model
NASA Astrophysics Data System (ADS)
Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.
2016-12-01
IAEA Specific Safety Guide (SSG) 9 states that probabilistic methods for evaluating fault displacement should be used if the deterministic methodology provides no sufficient basis to conclude that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an annex to SSG-9 on seismic hazard for nuclear facilities that shows the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, important nuclear facilities are required to be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance safety in nuclear facilities. We are studying deterministic and probabilistic methods through tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacement arose. In this study, we introduce the concept of evaluation methods for fault displacement and then show parts of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) that can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.
A GIS-based time-dependent seismic source modeling of Northern Iran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2017-01-01
The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous research and reports were studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. Completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
Width of surface rupture zone for thrust earthquakes: implications for earthquake fault zoning
NASA Astrophysics Data System (ADS)
Boncio, Paolo; Liberi, Francesca; Caldarella, Martina; Nurminen, Fiia-Charlotta
2018-01-01
The criteria for zoning the surface fault rupture hazard (SFRH) along thrust faults are defined by analysing the characteristics of the areas of coseismic surface faulting in thrust earthquakes. Normal and strike-slip faults have been studied in depth by other authors with regard to SFRH, whereas thrust faults have not received comparable attention. Surface faulting data were compiled for 11 well-studied historic thrust earthquakes that occurred globally (5.4 ≤ M ≤ 7.9). Several different types of coseismic fault scarps characterize the analysed earthquakes, depending on the topography, fault geometry and near-surface materials (simple and hanging wall collapse scarps, pressure ridges, fold scarps and thrust or pressure ridges with bending-moment or flexural-slip fault ruptures due to large-scale folding). For all the earthquakes, the distance of distributed ruptures from the principal fault rupture (r) and the width of the rupture zone (WRZ) were compiled directly from the literature or measured systematically in GIS-georeferenced published maps. Overall, surface ruptures can occur up to large distances from the main fault ( ˜ 2150 m on the footwall and ˜ 3100 m on the hanging wall). Most of the ruptures occur on the hanging wall, preferentially in the vicinity of the principal fault trace ( > ˜ 50 % at distances < ˜ 250 m). The widest WRZ are recorded where sympathetic slip (Sy) on distant faults occurs, and/or where bending-moment (B-M) or flexural-slip (F-S) fault ruptures, associated with large-scale folds (hundreds of metres to kilometres in wavelength), are present. A positive relation between the earthquake magnitude and the total WRZ is evident, while a clear correlation between the vertical displacement on the principal fault and the total WRZ is not found. The distribution of surface ruptures is fitted with probability density functions, in order to define a criterion to remove outliers (e.g. 90 % probability of the cumulative distribution function) and define the zone where the likelihood of having surface ruptures is the highest. This might help in sizing the zones of SFRH during seismic microzonation (SM) mapping. In order to shape zones of SFRH, a very detailed earthquake geologic study of the fault is necessary (the highest level of SM, i.e. Level 3 SM according to Italian guidelines). In the absence of such a very detailed study (basic SM, i.e. Level 1 SM of Italian guidelines) a width of ˜ 840 m (90 % probability from "simple thrust" database of distributed ruptures, excluding B-M, F-S and Sy fault ruptures) is suggested to be sufficiently precautionary. For more detailed SM, where the fault is carefully mapped, one must consider that the highest SFRH is concentrated in a narrow zone, ˜ 60 m in width, that should be considered as a fault avoidance zone (more than one-third of the distributed ruptures are expected to occur within this zone). The fault rupture hazard zones should be asymmetric compared to the trace of the principal fault. The average footwall to hanging wall ratio (FW : HW) is close to 1 : 2 in all analysed cases. These criteria are applicable to "simple thrust" faults, without considering possible B-M or F-S fault ruptures due to large-scale folding, and without considering sympathetic slip on distant faults.
Areas potentially susceptible to B-M or F-S fault ruptures should have their own zones of fault rupture hazard that can be defined by detailed knowledge of the structural setting of the area (shape, wavelength, tightness and lithology of the thrust-related large-scale folds) and by geomorphic evidence of past secondary faulting. Distant active faults, potentially susceptible to sympathetic triggering, should be zoned as separate principal faults. The entire database of distributed ruptures (including B-M, F-S and Sy fault ruptures) can be useful in poorly known areas, in order to assess the extent of the area within which potential sources of fault displacement hazard can be present. The results from this study and the database made available in the Supplement can be used for improving the attenuation relationships for distributed faulting, with possible applications in probabilistic studies of fault displacement hazard.
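The quantile criterion used above can be sketched numerically: fit a probability density to the distances of distributed ruptures from the principal trace and read off the 90% quantile as a zone-width criterion. The synthetic distances and the choice of a gamma family below are illustrative assumptions, not the paper's data or fitted form.

```python
# Sketch: fitting a distribution to distances of distributed ruptures from
# the principal fault trace and taking the 90% quantile as a zone-width
# criterion, in the spirit of the paper. Distances (m) are synthetic; the
# gamma family is one plausible choice, not necessarily the authors'.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
distances_m = rng.gamma(shape=0.8, scale=300.0, size=400)  # synthetic data

shape, loc, scale = stats.gamma.fit(distances_m, floc=0.0)
width_90 = stats.gamma.ppf(0.90, shape, loc=loc, scale=scale)
print(f"90% of distributed ruptures within ~{width_90:.0f} m of the trace")
```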
NASA Astrophysics Data System (ADS)
Inoue, N.
2017-12-01
The conditional probability of surface rupture is affected by various factors, such as shallow material properties, earthquake processes, ground motions and so on. Toda (2013) pointed out the difference in conditional probability between strike-slip and reverse faults arising from the fault dip and the width of the seismogenic layer. This study evaluated the conditional probability of surface rupture by the following procedure. Fault geometry was determined from randomly generated magnitudes based on the method of The Headquarters for Earthquake Research Promotion (2017). If the defined fault plane did not saturate the assumed width of the seismogenic layer, the fault plane depth was randomly assigned within the seismogenic layer. A logistic analysis was performed on two data sets: surface displacements calculated by dislocation methods (Wang et al., 2003) from the defined source fault, and the depth of the top of the defined source fault. The conditional probability estimated from surface displacement indicated a higher probability for reverse faults than for strike-slip faults, and this result coincides with previous similar studies (i.e., Kagawa et al., 2004; Kataoka and Kusakabe, 2005). In contrast, the probability estimated from the depth of the source fault indicated a higher probability for thrust faults than for strike-slip and reverse faults, and this trend is similar to the conditional probabilities of PFDHA results (Youngs et al., 2003; Moss and Ross, 2011). The probability of the combined simulated results of thrust and reverse faults is also low. The worldwide compiled reverse-fault data include low-dip-angle earthquakes; in the case of Japanese reverse faults, with fewer low-dip-angle earthquakes, there is a possibility that the conditional probability is lower and similar to that of strike-slip faults (i.e., Takao et al., 2013). In the future, numerical simulations considering the failure condition of the surface above the source fault will be performed in order to examine the amount of displacement and the conditional probability quantitatively.
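A logistic analysis of this kind can be sketched on synthetic data: regress surface-rupture occurrence on the depth of the top of the source fault and read off conditional probabilities. Everything below (the data-generating rule, sample size, and coefficients) is invented for illustration, not taken from the study's simulations.

```python
# Sketch: a logistic analysis of the kind described, regressing surface-
# rupture occurrence on the depth of the top of the source fault.
# The training data are synthetic stand-ins, not the study's simulations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
top_depth_km = rng.uniform(0.0, 15.0, 500).reshape(-1, 1)
# Synthetic rule: shallower fault tops -> higher chance of surface rupture
p_true = 1.0 / (1.0 + np.exp(1.2 * (top_depth_km.ravel() - 4.0)))
ruptured = rng.random(500) < p_true

model = LogisticRegression().fit(top_depth_km, ruptured)
for d in (1.0, 5.0, 10.0):
    p = model.predict_proba([[d]])[0, 1]
    print(f"top depth {d:4.1f} km -> P(surface rupture) ~ {p:.2f}")
```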
Dataset on records of Hericium erinaceus in Slovakia.
Kunca, Vladimír; Čiliak, Marek
2017-06-01
The data presented in this article are related to the research article entitled "Habitat preferences of Hericium erinaceus in Slovakia" (Kunca and Čiliak, 2016) [2]. The dataset includes all available published and unpublished records from Slovakia, excluding repeat records from the same tree or stem. We compiled a database of records of collections by processing data from herbaria, personal records, and communication with mycological activists. Data on altitude, tree species, host tree vital status, host tree position, and intensity of management of forest stands were evaluated in this study. All surveys were based on basidioma occurrence, and some result from targeted searches.
ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (SUN VERSION)
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1994-01-01
ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, is an interface that will enable reliability engineers to accurately design large semi-Markov models. The user describes the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. The abstract language allows efficient description of large, complex systems; a one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. Instead of listing the individual states of the Markov model, reliability engineers can specify the rules governing the behavior of a system, and these are used to automatically generate the model. ASSIST reads an input file describing the failure behavior of a system in an abstract language and generates a Markov model in the format needed for input to SURE, the semi-Markov Unreliability Range Evaluator program, and PAWS/STEM, the Pade Approximation with Scaling program and Scaled Taylor Exponential Matrix. A Markov model consists of a number of system states and transitions between them. Each state in the model represents a possible state of the system in terms of which components have failed, which ones have been removed, etc. Within ASSIST, each state is defined by a state vector, where each element of the vector takes on an integer value within a defined range. An element can represent any meaningful characteristic, such as the number of working components of one type in the system, or the number of faulty components of another type in use. Statements representing transitions between states in the model have three parts: a condition expression, a destination expression, and a rate expression. The first expression is a Boolean expression describing the state space variable values of states for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. 
The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the documentation in PostScript, TeX, and DVI formats are provided on the distribution medium. (The VMS distribution lacks the .DVI format files, however.) ASSIST was developed in 1986 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1994-01-01
ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, is an interface that will enable reliability engineers to accurately design large semi-Markov models. The user describes the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. The abstract language allows efficient description of large, complex systems; a one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. Instead of listing the individual states of the Markov model, reliability engineers can specify the rules governing the behavior of a system, and these are used to automatically generate the model. ASSIST reads an input file describing the failure behavior of a system in an abstract language and generates a Markov model in the format needed for input to SURE, the semi-Markov Unreliability Range Evaluator program, and PAWS/STEM, the Pade Approximation with Scaling program and Scaled Taylor Exponential Matrix. A Markov model consists of a number of system states and transitions between them. Each state in the model represents a possible state of the system in terms of which components have failed, which ones have been removed, etc. Within ASSIST, each state is defined by a state vector, where each element of the vector takes on an integer value within a defined range. An element can represent any meaningful characteristic, such as the number of working components of one type in the system, or the number of faulty components of another type in use. Statements representing transitions between states in the model have three parts: a condition expression, a destination expression, and a rate expression. The first expression is a Boolean expression describing the state space variable values of states for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. 
The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the documentation in PostScript, TeX, and DVI formats are provided on the distribution medium. (The VMS distribution lacks the .DVI format files, however.) ASSIST was developed in 1986 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
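The rule-based model generation that ASSIST performs can be illustrated in miniature: states are integer vectors, transitions are (condition, destination, rate) triples, and the Markov model is produced by exhaustive exploration from a start state. The toy below is plain Python with invented rates; it is not ASSIST's input language.

```python
# Sketch: the ASSIST idea in miniature -- states are integer vectors and
# transitions are (condition, destination, rate) rules; the full Markov
# model is generated by exhaustive exploration. A toy with invented rates,
# not ASSIST's actual syntax or semantics.
from collections import deque

LAMBDA, MU = 1e-4, 1e-2          # failure and repair rates (illustrative)
rules = [
    # (condition on state, destination state, rate expression)
    (lambda s: s[0] > 0, lambda s: (s[0] - 1, s[1] + 1), lambda s: s[0] * LAMBDA),
    (lambda s: s[1] > 0, lambda s: (s[0] + 1, s[1] - 1), lambda s: MU),
]

start = (3, 0)                   # (working units, failed units)
states, transitions, queue = {start}, [], deque([start])
while queue:
    s = queue.popleft()
    for cond, dest, rate in rules:
        if cond(s):
            d = dest(s)
            transitions.append((s, d, rate(s)))
            if d not in states:
                states.add(d)
                queue.append(d)

print(f"{len(states)} states, {len(transitions)} transitions")
for s, d, r in transitions:
    print(f"{s} -> {d} at rate {r:g}")
```

Two one-line rules expand to a complete four-state repairable-system model, which is the labor-saving idea behind specifying rules rather than enumerating states.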
Fast computation of close-coupling exchange integrals using polynomials in a tree representation
NASA Astrophysics Data System (ADS)
Wallerberger, Markus; Igenbergs, Katharina; Schweinzer, Josef; Aumayr, Friedrich
2011-03-01
The semi-classical atomic-orbital close-coupling method is a well-known approach for the calculation of cross sections in ion-atom collisions. It strongly relies on the fast and stable computation of exchange integrals. We present an upgrade to earlier implementations of the Fourier-transform method. For this purpose, we implement an extensive library for symbolic storage of polynomials, relying on sophisticated tree structures to allow fast manipulation and numerically stable evaluation. Using this library, we considerably speed up creation and computation of exchange integrals. This enables us to compute cross sections for more complex collision systems. Program summary: Program title: TXINT. Catalogue identifier: AEHS_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHS_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 12 332. No. of bytes in distributed program, including test data, etc.: 157 086. Distribution format: tar.gz. Programming language: Fortran 95. Computer: All with a Fortran 95 compiler. Operating system: All with a Fortran 95 compiler. RAM: Depends heavily on input, usually less than 100 MiB. Classification: 16.10. Nature of problem: Analytical calculation of one- and two-center exchange matrix elements for the close-coupling method in the impact parameter model. Solution method: Similar to the code of Hansen and Dubois [1], we use the Fourier-transform method suggested by Shakeshaft [2] to compute the integrals. However, we heavily speed up the calculation using a library for symbolic manipulation of polynomials. Restrictions: We restrict ourselves to a defined collision system in the impact parameter model. Unusual features: A library for symbolic manipulation of polynomials, where polynomials are stored in a space-saving left-child right-sibling binary tree. This provides stable numerical evaluation and fast mutation while maintaining full compatibility with the original code. Additional comments: This program makes heavy use of the new features provided by the Fortran 90 standard, most prominently pointers, derived types and allocatable structures, and a small portion of Fortran 95. Only newer compilers support these features. The following compilers support all features needed by the program: GNU Fortran Compiler "gfortran" from version 4.3.0; GNU Fortran 95 Compiler "g95" from version 4.2.0; Intel Fortran Compiler "ifort" from version 11.0.
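The left-child right-sibling storage idea can be illustrated with a toy node type: each node holds a term, a pointer to its first sub-term, and a pointer to the next term at the same level. The sketch below is an invented miniature in Python, not TXINT's Fortran implementation.

```python
# Sketch: a left-child right-sibling (LCRS) binary-tree node of the kind
# used for space-saving polynomial storage. A toy illustration, not
# TXINT code.
class Node:
    def __init__(self, coeff=0.0, power=0):
        self.coeff, self.power = coeff, power
        self.child = None      # left child: first sub-term (unused in this
                               # flat example, used for nested factors)
        self.sibling = None    # right sibling: next term at the same level

def from_terms(terms):
    """Build a flat polynomial sum(c * x**p) as an LCRS sibling chain."""
    head = None
    for coeff, power in reversed(terms):
        node = Node(coeff, power)
        node.sibling = head
        head = node
    return head

def evaluate(node, x):
    total = 0.0
    while node is not None:    # walk the sibling chain iteratively
        total += node.coeff * x ** node.power
        node = node.sibling
    return total

poly = from_terms([(3.0, 2), (-1.0, 1), (4.0, 0)])   # 3x^2 - x + 4
print(evaluate(poly, 2.0))                            # -> 14.0
```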
ROMPS critical design review. Volume 2: Robot module design documentation
NASA Technical Reports Server (NTRS)
Dobbs, M. E.
1992-01-01
The robot module design documentation for the Remote Operated Materials Processing in Space (ROMPS) experiment is compiled. This volume presents the following information: robot module modifications; Easylab commands definitions and flowcharts; Easylab program definitions and flowcharts; robot module fault conditions and structure charts; and C-DOC flow structure and cross references.
Writing filter processes for the SAGA editor, appendix G
NASA Technical Reports Server (NTRS)
Kirslis, Peter A.
1985-01-01
The SAGA editor provides a mechanism by which separate processes can be invoked during an editing session to traverse portions of the parse tree being edited. These processes, termed filter processes, read, analyze, and possibly transform the parse tree, returning the result to the editor. By defining new commands with the editor's user-defined command facility to invoke filter processes, authors of filters can provide complex operations as simple commands. A tree plotter, a pretty printer, and a Pascal tree transformation program have already been written using this facility. This appendix introduces filter processes, describes the parse tree structure, and presents the library interface made available to the programmer. Also discussed is how to compile and run filter processes. Examples are presented to illustrate aspects of each of these areas.
NASA Astrophysics Data System (ADS)
Over, Semir; Akin, Ugur; Sen, Rahime
2018-01-01
The gravity and magnetic maps of the area between Adana-Kahramanmaras-Hatay provinces were produced from a compilation of data gathered during the period between 1973 and 1989. Reduced-to-the-pole (RTP) and pseudo-gravity transformation (PGT) methods were applied to the magnetic data, while derivative ratio (DR) processing was applied to both the gravity and magnetic data. Bouguer, RTP and PGT maps show the image of a buried structure corresponding to ophiolites under undifferentiated Quaternary deposits in the Adana depression and Iskenderun Gulf. DR maps show two important faults that reflect the tectonic framework in the study area: (1) the Karatas-Osmaniye Fault, extending from Osmaniye to Karatas in the south between the Adana and Iskenderun depressions, and (2) the Amanos Fault (southern part of the East Anatolian Fault) in the Hatay region, running southward from Turkoglu to the Amik Basin along the Amanos Mountains and forming the actual plate boundary between the Anatolian block (part of the Eurasian plate) and the Arabian plate.
NASA Astrophysics Data System (ADS)
Abbey, A. L.; Niemi, N. A.
2017-12-01
Low-temperature thermochronometry in the Rio Grande rift (RGR) in CO and NM, USA, allows for quantification of exhumation magnitudes and rates across the rift and reveals insights into rift basin segmentation and symmetry as well as the timing of extensional fault initiation and dominant mechanisms for rift accommodation. We combine new apatite helium (AHe) and zircon helium (ZHe) thermochronologic data with previously published AHe and apatite fission track (AFT) data to compile 17 vertical transects, each consisting of at least four samples, spanning more than 800 km along the RGR axis. Inverse thermal modeling (QTQt; Gallagher, 2012) of these vertical transects and compilation of bimodal rift-related volcanism highlight transfer regions that separate several asymmetric basins with opposing fault dip directions. The Tularosa, Jornada and Albuquerque basins in the southern RGR show extension initiation ca. 15 Ma, with 3-4 km of exhumation accommodated on east-dipping faults. Northward, the Española basin, a transfer zone of several strike-slip, oblique-slip and smaller normal faults, does not record significant exhumation since the early Cenozoic. In the north-central part of the rift, data from the San Luis Basin reveal that 3-5 km of exhumation on west-dipping faults began 20-15 Ma. East-dipping faults in the upper Arkansas and Blue River grabens represent the northern extent of the rift and accommodate 3-5 km of exhumation beginning 15-10 Ma. Initiation of RGR extension and magmatism is commonly cited at 28 Ma (Tweto, 1979); however, our low-temperature thermochronometry modeling indicates that the majority of upper crustal extension initiated roughly synchronously at 15 Ma along the entire length of the rift. Rift-related volcanism also increased significantly in volume at 15 Ma, but the locus of this volcanism is the Jemez lineament rather than the rift axis. As a result, rifting within the RGR appears to be accommodated primarily by extensional faulting, with the exception of the central part of the rift (Española Basin), where the rift intersects the Jemez lineament. Widespread pre-rift thermochronometric ages in the Española Basin suggest that rifting in the central RGR is accommodated by non-tectonic processes, most likely magmatism.
Cole, James C.
1997-01-01
The lateral and vertical distributions of Proterozoic and Paleozoic sedimentary rocks in southern Nevada are the combined products of original stratigraphic relationships and post-depositional faults and folds. This map compilation shows the distribution of these pre-Tertiary rocks in the region including and surrounding the Nevada Test Site. It is based on considerable new evidence from detailed geologic mapping, biostratigraphic control, sedimentological analysis, and a review of regional map relationships. Proterozoic and Paleozoic rocks of the region record paleogeographic transitions between continental shelf depositional environments on the east and deeper-water slope-facies depositional environments on the west. Middle Devonian and Mississippian sequences, in particular, show strong lateral facies variations caused by contemporaneous changes in the western margin of North America during the Antler orogeny. Sections of rock that were originally deposited in widely separated facies localities presently lie in close proximity. These spatial relationships chiefly result from major east- and southeast-directed thrusts that deformed the region in Permian or later time. Somewhat younger contractional structures are identified within two irregular zones that traverse the region. These folds and thrusts typically verge toward the west and northwest and overprint the relatively simple pattern of the older contractional terranes. Local structural complications are significant near these younger structures due to the opposing vergence and due to irregularities in the previously folded and faulted crustal section. Structural and stratigraphic discontinuities are identified on opposing sides of two north-trending fault zones in the central part of the compilation region north of Yucca Flat. The origin and significance of these zones are enigmatic because they are largely covered by Tertiary and younger deposits. These faults most likely record significant lateral offset, probably in the sinistral sense. Low-angle normal faults that are at least older than Oligocene, and may pre-date Late Cretaceous time, are also present in the region. These faults are shown to locally displace blocks of pre-Tertiary rock by several kilometers. However, none of these structures can be traced for significant distances beyond its outcrop extent, and the inference is made that they do not exert regional influence on the distribution of pre-Tertiary rocks. The extensional strain accommodated by these low-angle normal faults appears to be local and highly irregular.
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
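One classic embedding of the kind targeted here is placing a path of tasks onto hypercube processors via the binary reflected Gray code, so consecutive tasks sit on adjacent nodes. The sketch below illustrates that standard construction; it is not the study's specific algorithm.

```python
# Sketch: embedding a path of 2**n tasks into an n-dimensional hypercube
# using the binary reflected Gray code -- consecutive tasks land on
# adjacent processors (labels differ in exactly one bit). A classic
# construction in the spirit of the embeddings studied, not NASA's
# specific algorithm.
def gray(i):
    return i ^ (i >> 1)

n = 3                                  # 3-cube: 8 processors
path = [gray(i) for i in range(2 ** n)]
print("task -> node:", [f"{v:03b}" for v in path])

# Verify adjacency: each consecutive pair differs in exactly one bit.
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(path, path[1:]))
```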
Failure mode effect analysis and fault tree analysis as a combined methodology in risk management
NASA Astrophysics Data System (ADS)
Wessiani, N. A.; Yoshio, F.
2018-04-01
Many studies have reported the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most of these studies choose only one of the two methods for their risk management methodology, whereas combining the two methods reduces the drawbacks of each method when implemented separately. This paper combines the methodologies of FMEA and FTA in assessing risk. A case study in a metal company illustrates how this combined methodology can be implemented; in the case study, the combined methodology assesses the internal risks that occur in the production process. Further, those internal risks should be mitigated based on their level of risk.
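One plausible way to chain the two methods can be sketched as follows: FMEA-style risk priority numbers (RPN = severity x occurrence x detection) screen the failure modes, and the screened modes' probabilities then feed an OR-gate fault tree. The modes, threshold, and numbers below are invented for illustration, not the paper's case-study data.

```python
# Sketch: one way FMEA and FTA chain together. FMEA-style risk priority
# numbers (RPN = severity x occurrence x detection) screen failure modes;
# the screened modes' probabilities then feed an OR-gate fault tree.
# All modes, the RPN threshold, and numbers are invented.
modes = [
    # (failure mode, severity, occurrence, detection, probability)
    ("seal leak",      8, 4, 6, 2e-3),
    ("weld crack",     9, 2, 7, 5e-4),
    ("valve sticking", 5, 3, 3, 1e-3),
]

screened = [(m, s * o * d, p) for m, s, o, d, p in modes if s * o * d >= 100]
print("high-RPN modes:", [(m, rpn) for m, rpn, _ in screened])

# OR gate over the screened basic events (independence assumed)
p_top = 1.0
for _, _, p in screened:
    p_top *= (1.0 - p)
p_top = 1.0 - p_top
print(f"top-event probability: {p_top:.2e}")
```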
Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.
2009-01-01
The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
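An accessible stand-in for C4.5 is an entropy-criterion decision tree in scikit-learn; the sketch below trains one on synthetic sensor snapshots with injected leak signatures. The data-generating rules and shapes are invented, not DRTM output or the study's feature set.

```python
# Sketch: training an entropy-based decision tree (an accessible stand-in
# for C4.5, via scikit-learn) to classify leak location from sensor
# snapshots. The data here are synthetic, not DRTM simulations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 600
X = rng.normal(size=(n, 3))                  # 3 simulated sensor channels
y = np.zeros(n, dtype=int)                   # 0 = nominal operation
y[X[:, 0] > 1.0] = 1                         # "leak at location 1" signature
y[X[:, 1] > 1.2] = 2                         # "leak at location 2" signature
X += rng.normal(scale=0.05, size=X.shape)    # simulated sensor noise

clf = DecisionTreeClassifier(criterion="entropy", max_depth=4).fit(X, y)
print("training accuracy:", clf.score(X, y))
```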
Ion-absorption band analysis for the discrimination of iron-rich zones. [Nevada
NASA Technical Reports Server (NTRS)
Rowan, L. C. (Principal Investigator); Wetlaufer, P. H.
1974-01-01
The author has identified the following significant results. A technique which combines digital computer processing and color composition was devised for detecting hydrothermally altered areas and for discriminating among many rock types in an area in south-central Nevada. Subtle spectral reflectance differences among the rock types are enhanced by ratioing and contrast-stretching MSS radiance values to form ratio images, which subsequently are displayed in color-ratio composites. Landform analysis of Nevada shows that linear features compiled without respect to length result in approximately 25 percent coincidence with mapped faults. About 80 percent of the major lineaments coincide with mapped faults, and substantial extension of locally mapped faults is commonly indicated. Seven major lineament systems appear to be old zones of crustal weakness which have provided preferred conduits for rising magma through periodic reactivation.
Geology of Joshua Tree National Park geodatabase
Powell, Robert E.; Matti, Jonathan C.; Cossette, Pamela M.
2015-09-16
The database in this Open-File Report describes the geology of Joshua Tree National Park and was completed in support of the National Cooperative Geologic Mapping Program of the U.S. Geological Survey (USGS) and in cooperation with the National Park Service (NPS). The geologic observations and interpretations represented in the database are relevant to both the ongoing scientific interests of the USGS in southern California and the management requirements of NPS, specifically of Joshua Tree National Park (JOTR).Joshua Tree National Park is situated within the eastern part of California’s Transverse Ranges province and straddles the transition between the Mojave and Sonoran deserts. The geologically diverse terrain that underlies JOTR reveals a rich and varied geologic evolution, one that spans nearly two billion years of Earth history. The Park’s landscape is the current expression of this evolution, its varied landforms reflecting the differing origins of underlying rock types and their differing responses to subsequent geologic events. Crystalline basement in the Park consists of Proterozoic plutonic and metamorphic rocks intruded by a composite Mesozoic batholith of Triassic through Late Cretaceous plutons arrayed in northwest-trending lithodemic belts. The basement was exhumed during the Cenozoic and underwent differential deep weathering beneath a low-relief erosion surface, with the deepest weathering profiles forming on quartz-rich, biotite-bearing granitoid rocks. Disruption of the basement terrain by faults of the San Andreas system began ca. 20 Ma and the JOTR sinistral domain, preceded by basalt eruptions, began perhaps as early as ca. 7 Ma, but no later than 5 Ma. Uplift of the mountain blocks during this interval led to erosional stripping of the thick zones of weathered quartz-rich granitoid rocks to form etchplains dotted by bouldery tors—the iconic landscape of the Park. The stripped debris filled basins along the fault zones.Mountain ranges and basins in the Park exhibit an east-west physiographic grain controlled by left-lateral fault zones that form a sinistral domain within the broad zone of dextral shear along the transform boundary between the North American and Pacific plates. Geologic and geophysical evidence reveal that movement on the sinistral faults zones has resulted in left steps along the zones, resulting in the development of sub-basins beneath Pinto Basin and Shavers and Chuckwalla Valleys. The sinistral fault zones connect the Mojave Desert dextral faults of the Eastern California Shear Zone to the north and east with the Coachella Valley strands of the southern San Andreas Fault Zone to the west.Quaternary surficial deposits accumulated in alluvial washes and playas and lakes along the valley floors; in alluvial fans, washes, and sheet wash aprons along piedmonts flanking the mountain ranges; and in eolian dunes and sand sheets that span the transition from valley floor to piedmont slope. Sequences of Quaternary pediments are planed into piedmonts flanking valley-floor and upland basins, each pediment in turn overlain by successively younger residual and alluvial surficial deposits.
Improved FTA methodology and application to subsea pipeline reliability design.
Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan
2014-01-01
An innovative logic tree, the Failure Expansion Tree (FET), is proposed in this paper as an improvement on traditional Fault Tree Analysis (FTA). It embodies a different way of thinking about risk factor identification and reliability risk assessment. The methodology is more comprehensive and objective: it significantly reduces the subjectivity inherent in FTA node discovery and greatly simplifies the mathematical calculations required for quantitative analysis. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree that follows the laws of physics and geometry. The resulting improvements are summarized in a comparison table.
Improved FTA Methodology and Application to Subsea Pipeline Reliability Design
Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan
2014-01-01
An innovative logic tree, the Failure Expansion Tree (FET), is proposed in this paper as an improvement on traditional Fault Tree Analysis (FTA). It embodies a different way of thinking about risk factor identification and reliability risk assessment. The methodology is more comprehensive and objective: it significantly reduces the subjectivity inherent in FTA node discovery and greatly simplifies the mathematical calculations required for quantitative analysis. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree that follows the laws of physics and geometry. The resulting improvements are summarized in a comparison table. PMID:24667681
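To make the gate mechanics underlying this kind of quantitative FTA concrete, here is a minimal sketch of top-event probability evaluation through AND/OR gates, assuming independent basic events; the pipeline-flavored event names and probabilities are hypothetical, not taken from the paper.

```python
# Minimal fault-tree evaluator: AND/OR gates over independent basic events.
# Event names and probabilities are hypothetical.

def and_gate(probs):
    """P(all inputs fail), assuming independence."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """P(at least one input fails), assuming independence."""
    survive = 1.0
    for q in probs:
        survive *= (1.0 - q)
    return 1.0 - survive

# Hypothetical tree: pipeline rupture = (corrosion AND coating failure) OR impact.
p_corrosion, p_coating, p_impact = 0.05, 0.10, 0.01

p_top = or_gate([and_gate([p_corrosion, p_coating]), p_impact])
print(f"top-event probability: {p_top:.4f}")   # ~0.015
```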
22 CFR 92.79 - Procuring copies of foreign public documents.
Code of Federal Regulations, 2012 CFR
2012-04-01
... foreign officials copies of birth, death, and marriage certificates, or copies of other public records... assistance. Persons requesting documents for use in the preparation of family trees or in the compilation of...
22 CFR 92.79 - Procuring copies of foreign public documents.
Code of Federal Regulations, 2013 CFR
2013-04-01
... foreign officials copies of birth, death, and marriage certificates, or copies of other public records... assistance. Persons requesting documents for use in the preparation of family trees or in the compilation of...
22 CFR 92.79 - Procuring copies of foreign public documents.
Code of Federal Regulations, 2011 CFR
2011-04-01
... foreign officials copies of birth, death, and marriage certificates, or copies of other public records... assistance. Persons requesting documents for use in the preparation of family trees or in the compilation of...
22 CFR 92.79 - Procuring copies of foreign public documents.
Code of Federal Regulations, 2014 CFR
2014-04-01
... foreign officials copies of birth, death, and marriage certificates, or copies of other public records... assistance. Persons requesting documents for use in the preparation of family trees or in the compilation of...
Geologic Map of the Goleta Quadrangle, Santa Barbara County, California
Minor, Scott A.; Kellogg, Karl S.; Stanley, Richard G.; Brandt, Theodore R.
2007-01-01
This map depicts the distribution of bedrock units and surficial deposits and associated deformation underlying those parts of the Santa Barbara coastal plain and adjacent southern flank of the Santa Ynez Mountains within the Goleta 7.5-minute quadrangle at a compilation scale of 1:24,000 (one inch on the map = 2,000 feet on the ground) and with a horizontal positional accuracy of at least 20 m. The Goleta map overlaps an earlier preliminary geologic map of the central part of the coastal plain (Minor and others, 2002) that provided coverage within the coastal, central parts of the Goleta and contiguous Santa Barbara quadrangles. In addition to new mapping in the northern part of the Goleta quadrangle, geologic mapping in other parts of the map area has been revised from the preliminary map compilation based on new structural interpretations supplemented by new biostratigraphic data. All surficial and bedrock map units are described in detail in the accompanying map pamphlet. Abundant biostratigraphic and biochronologic data based on microfossil identifications are presented in expanded unit descriptions of the marine Neogene Monterey and Sisquoc Formations. Site-specific fault-kinematic observations (including slip-sense determinations) are embedded in the digital map database. The Goleta quadrangle is located in the western Transverse Ranges physiographic province along an east-west-trending segment of the southern California coastline about 100 km (62 mi) northwest of Los Angeles. The Santa Barbara coastal plain surface, which spans the central part of the quadrangle, includes several mesas and hills that are geomorphic expressions of underlying, potentially active folds and partly buried oblique and reverse faults of the Santa Barbara fold and fault belt (SBFFB). Strong earthquakes have occurred offshore within 10 km of the Santa Barbara coastal plain in 1925 (6.3 magnitude), 1941 (5.5 magnitude), and 1978 (5.1 magnitude). These and numerous smaller seismic events, located beneath and offshore of the coastal plain, likely occurred on reverse-oblique-slip faults that are similar to, or continuous with, Quaternary reverse faults crossing the coastal plain. Thus, faults of the SBFFB pose a significant earthquake hazard to the approximately 200,000 people living within the major coastal population centers of Santa Barbara and Goleta. In addition, numerous Quaternary landslide deposits along the steep southern flank of the Santa Ynez Mountains indicate the potential for continued slope failures and mass movements in developed areas. Folded, faulted, and fractured sedimentary rocks in the subsurface of the coastal plain and adjacent Santa Barbara Channel are sources and form reservoirs for economic deposits of oil and gas, some of which are currently being extracted offshore. Shallow, localized sedimentary aquifers underlying the coastal plain provide limited amounts of water for the urban areas, but the quality of some of this groundwater is compromised by coastal salt-water contamination. The present map compilation provides a set of uniform geologic digital coverages that can be used for analysis and interpretation of these and other geologic hazards and resources in the Goleta region.
Sweetkind, Donald S.
2017-09-08
As part of a U.S. Geological Survey study in cooperation with the Bureau of Reclamation, a digital three-dimensional hydrogeologic framework model was constructed for the Rio Grande transboundary region of New Mexico and Texas, USA, and northern Chihuahua, Mexico. This model was constructed to define the aquifer system geometry and subsurface lithologic characteristics and distribution for use in a regional numerical hydrologic model. The model includes five hydrostratigraphic units: river channel alluvium, three informal subdivisions of Santa Fe Group basin fill, and an undivided pre-Santa Fe Group bedrock unit. Model input data were compiled from published cross sections, well data, structure contour maps, selected geophysical data, and contiguous compilations of surficial geology and structural features in the study area. These data were used to construct faulted surfaces that represent the upper and lower subsurface hydrostratigraphic unit boundaries. The digital three-dimensional hydrogeologic framework model is constructed through combining faults, the elevation of the tops of each hydrostratigraphic unit, and boundary lines depicting the subsurface extent of each hydrostratigraphic unit. The framework also compiles a digital representation of the distribution of sedimentary facies within each hydrostratigraphic unit. The digital three-dimensional hydrogeologic model reproduces with reasonable accuracy the previously published subsurface hydrogeologic conceptualization of the aquifer system and represents the large-scale geometry of the subsurface aquifers. The model is at a scale and resolution appropriate for use as the foundation for a numerical hydrologic model of the study area.
Geologic map of the greater Denver area, Front Range urban corridor, Colorado
Trimble, Donald E.; Machette, Michael N.
1979-01-01
This digital map shows the areal extent of surficial deposits and rock stratigraphic units (formations) as compiled by Trimble and Machette from 1973 to 1977 and published in 1979 under the Front Range Urban Corridor Geology Program. Trimble and Machette compiled their geologic map from published geologic maps and unpublished geologic mapping having varied map unit schemes. A convenient feature of the compiled map is its uniform classification of geologic units that mostly matches those of companion maps to the north (USGS I-855-G) and to the south (USGS I-857-F). Published as a color paper map, the Trimble and Machette map was intended for land-use planning in the Front Range Urban Corridor. This map recently (1997-1999) was digitized under the USGS Front Range Infrastructure Resources Project. In general, the mountainous areas in the western part of the map exhibit various igneous and metamorphic bedrock units of Precambrian age, major faults, and fault brecciation zones at the east margin (5-20 km wide) of the Front Range. The eastern and central parts of the map (Colorado Piedmont) depict a mantle of unconsolidated deposits of Quaternary age and interspersed outcroppings of Cretaceous or Tertiary-Cretaceous sedimentary bedrock. The Quaternary mantle comprises eolian deposits (quartz sand and silt), alluvium (gravel, sand, and silt of variable composition), colluvium, and a few landslides. At the mountain front, north-trending, dipping Paleozoic and Mesozoic sandstone, shale, and limestone bedrock formations form hogbacks and intervening valleys.
An Overview of the Production Quality Compiler-Compiler Project
1979-02-01
process. A parse tree is assumed, and there is a set of primitives for extracting information from it and for "walking" it: using its structure to ... not adequate for, and even preclude, techniques that involve multiple phases, or non-trivial auxiliary data structures. In recent years there have ... VALUE field of node 23: would indicate that the type of the value field was integer. As with "union mode" or "variant record" features in many
Wang, Chun-Yong; Mooney, W.D.; Ding, Z.; Yang, J.; Yao, Z.; Lou, H.
2009-01-01
The shallow seismic velocity structure of the Kunlun fault zone (KLFZ) was jointly deduced from seismic refraction profiling and the records of trapped waves that were excited by five explosions. The data were collected after the 2001 Kunlun Ms 8.1 earthquake in the northern Tibetan Plateau. Seismic phases for the in-line record sections (26 records up to a distance of 15 km) along the fault zone were analysed, and 1-D P- and S-wave velocity models of the shallow crust within the fault zone were determined by using the seismic refraction method. Sixteen seismic stations were deployed along the off-line profile perpendicular to the fault zone. Fault-zone trapped waves appear clearly on the record sections, which were simulated with a 3-D finite difference algorithm. Quantitative analysis of the correlation coefficients of the synthetic and observed trapped waveforms indicates that the Kunlun fault-zone width is 300 m, and the S-wave quality factor Q within the fault zone is 15. Significantly, S-wave velocities within the fault zone are reduced by 30-45 per cent from surrounding rocks to a depth of at least 1-2 km, while P-wave velocities are reduced by 7-20 per cent. A fault zone with such low P- and S-wave velocities is an indication of high fluid pressure, because Vs is affected more than Vp. The low-velocity and low-Q zone in the KLFZ model is the effect of multiple ruptures along the fault trace of the 2001 Ms 8.1 Kunlun earthquake. © 2009 The Authors, Journal compilation © 2009 RAS.
Time-dependent seismic hazard analysis for the Greater Tehran and surrounding areas
NASA Astrophysics Data System (ADS)
Jalalalhosseini, Seyed Mostafa; Zafarani, Hamid; Zare, Mehdi
2018-01-01
This study presents a time-dependent approach to seismic hazard in Tehran and surrounding areas. Hazard is evaluated by combining background seismic activity with larger earthquakes that may emanate from fault segments. Using available historical and paleoseismological data or empirical relations, the recurrence time and maximum magnitude of characteristic earthquakes for the major faults have been explored. The Brownian passage time (BPT) distribution has been used to calculate an equivalent fictitious seismicity rate for major faults in the region. To include ground motion uncertainty, a logic tree and five ground motion prediction equations have been selected based on their applicability in the region. Finally, hazard maps are presented.
LIDAR Helps Identify Source of 1872 Earthquake Near Chelan, Washington
NASA Astrophysics Data System (ADS)
Sherrod, B. L.; Blakely, R. J.; Weaver, C. S.
2015-12-01
One of the largest historic earthquakes in the Pacific Northwest occurred on 15 December 1872 (M6.5-7) near the south end of Lake Chelan in north-central Washington State. Lack of recognized surface deformation suggested that the earthquake occurred on a blind, perhaps deep, fault. New LiDAR data show landslides and a ~6 km long, NW-side-up scarp in Spencer Canyon, ~30 km south of Lake Chelan. Two landslides in Spencer Canyon impounded small ponds. An historical account indicated that dead trees were visible in one pond in AD1884. Wood from a snag in the pond yielded a calibrated age of AD1670-1940. Tree ring counts show that the oldest living trees on each landslide are 130 and 128 years old. The larger of the two landslides obliterated the scarp and thus post-dates the last scarp-forming event. Two trenches across the scarp exposed a NW-dipping thrust fault. One trench exposed alluvial fan deposits, Mazama ash, and scarp colluvium cut by a single thrust fault. Three charcoal samples from a colluvium buried during the last fault displacement had calibrated ages between AD1680 and AD1940. The second trench exposed gneiss thrust over colluvium during at least two, and possibly three, fault displacements. The younger of two charcoal samples collected from a colluvium below gneiss had a calibrated age of AD1665-AD1905. For an historical constraint, we assume that the lack of felt reports for large earthquakes in the period between 1872 and today indicates that no large earthquakes capable of rupturing the ground surface occurred in the region after the 1872 earthquake; thus the last displacement on the Spencer Canyon scarp cannot post-date the 1872 earthquake. Modeling of the age data suggests that the last displacement occurred between AD1840 and AD1890. These data, combined with the historical record, indicate that this fault is the source of the 1872 earthquake. Analyses of aeromagnetic data reveal lithologic contacts beneath the scarp that form an ENE-striking, curvilinear zone ~2.5 km wide and ~55 km long. This zone coincides with monoclines mapped in Mesozoic bedrock and Miocene flood basalts. This study resolves the uncertainty regarding the source of the 1872 earthquake and provides important information for seismic hazard analyses of major infrastructure projects in Washington and British Columbia.
Fault detection and fault tolerance in robotics
NASA Technical Reports Server (NTRS)
Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.
1992-01-01
Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
NASA Astrophysics Data System (ADS)
Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang
2017-10-01
Based on long-term vibration monitoring of the No. 2 oil-immersed flat wave reactor in the ±500 kV converter station in East Mongolia, vibration signals were recorded in the normal state and in the core-loosening fault state. Through time-frequency analysis of the signals, the vibration characteristics of the core-loosening fault were obtained, and a fault diagnosis method based on the dual-tree complex wavelet transform (DT-CWT) and support vector machine (SVM) is proposed. The vibration signals were analyzed by DT-CWT, and the energy entropy of the vibration signals was taken as the feature vector; the support vector machine was used to train and test the feature vector, realizing accurate identification of the core-loosening fault of the flat wave reactor. Across many groups of normal and core-loosening fault vibration signals, the diagnostic accuracy reached 97.36%. The effectiveness and accuracy of the method for fault diagnosis of the flat wave reactor core are thus verified.
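As a rough illustration of this feature pipeline (sub-band energy entropy feeding an SVM), the sketch below substitutes an ordinary discrete wavelet transform from PyWavelets for the paper's dual-tree complex wavelet transform, and trains on synthetic signals; everything in it is a stand-in rather than the authors' implementation.

```python
# Wavelet energy-entropy features + SVM classifier (illustrative stand-in:
# plain DWT from PyWavelets instead of the paper's dual-tree complex
# wavelet transform; signals and labels are synthetic).
import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)

def make_signal(faulty):
    """Broadband noise; 'faulty' adds intermittent tonal bursts."""
    x = rng.normal(size=t.size)
    if faulty:
        x += 2.0 * np.sin(2 * np.pi * 120 * t) * (rng.random(t.size) > 0.5)
    return x

def energy_entropy_features(signal, wavelet="db4", level=4):
    """Relative sub-band energies plus their Shannon entropy."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    rel = energies / energies.sum()
    entropy = -np.sum(rel * np.log(rel + 1e-12))
    return np.append(rel, entropy)

X = np.array([energy_entropy_features(make_signal(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```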
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system, and computer-readable medium extend the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, and supports all common PRA analysis functions and cut sets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
Fault diagnosis of helical gearbox using acoustic signal and wavelets
NASA Astrophysics Data System (ADS)
Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.
2017-05-01
Machines require efficient transmission of power, and gears are an appropriate choice. Faults in gears result in loss of energy and money. Monitoring and fault diagnosis are done by analyzing the acoustic and vibration signals that are generally considered unwanted by-products. This study proposes the use of a machine learning algorithm for condition monitoring of a helical gearbox using the sound signals produced by the gearbox. Artificial faults were created and the resulting signals were captured by a microphone. An extensive study of different wavelet transformations for feature extraction from the acoustic signals was carried out, followed by wavelet selection and feature selection using the J48 decision tree; feature classification was performed using the K-star algorithm. A classification accuracy of 100% was obtained in the study.
Five centuries of Southern Moravian drought variations revealed from living and historic tree rings
NASA Astrophysics Data System (ADS)
Büntgen, Ulf; Brázdil, Rudolf; Dobrovolný, Petr; Trnka, Mirek; Kyncl, Tomáš
2011-08-01
Past, present, and projected fluctuations of the hydrological cycle, associated with anthropogenic climate change, pose a pending challenge for natural ecosystems and human civilizations. Here, we compile and analyze long meteorological records from Brno, Czech Republic, and nearby tree-ring measurements of living and historic firs from Southern Moravia. This unique paleoclimatic compilation, together with innovative reconstruction methods and error estimates, allows regional-scale May-June drought variability to be estimated back to AD 1500. The driest and wettest conditions occurred in 1653 and 1713, respectively. The ten wettest decades are evenly distributed throughout time, whereas the driest episodes occurred in the seventeenth century and from the 1840s onward. The discussion emphasizes agreement between the new reconstruction and documentary evidence, and stresses possible sources of reconstruction uncertainty, including station inhomogeneity, limited frequency preservation, reduced climate sensitivity, and large-scale constraints.
Inferring patterns in mitochondrial DNA sequences through hypercube independent spanning trees.
Silva, Eduardo Sant Ana da; Pedrini, Helio
2016-03-01
Given a graph G, a set of spanning trees rooted at a vertex r of G is said to be vertex/edge independent if, for each vertex v of G, v≠r, the paths from r to v in any pair of trees are vertex/edge disjoint. Independent spanning trees (ISTs) provide a number of advantages in data broadcasting due to their fault-tolerant properties. For this reason, some studies have addressed the issue by providing mechanisms for constructing independent spanning trees efficiently. In this work, we investigate how to construct independent spanning trees on hypercubes, generated from spanning binomial trees, and how to use them to predict parts of mitochondrial DNA sequences through paths on the hypercube. The prediction works both for inferring mitochondrial DNA sequences composed of six bases and for flagging anomalies that probably do not belong to the mitochondrial DNA standard. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nouri.Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-01-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant by using the fault tree analysis (FTA) method. The results of the analysis for a 200 h operating interval show that the probabilities of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department are 73, 64, and 95 percent, respectively, and the conveyor belt subsystem was found to be the most failure-prone. Finally, maintenance is proposed as a method to control and prevent failures. PMID:26779433
Towards generating ECSS-compliant fault tree analysis results via ConcertoFLA
NASA Astrophysics Data System (ADS)
Gallina, B.; Haider, Z.; Carlsson, A.
2018-05-01
Attitude Control Systems (ACSs) maintain the orientation of the satellite in three-dimensional space. ACSs need to be engineered in compliance with ECSS standards and need to ensure a certain degree of dependability. Thus, dependability analysis is conducted at various levels and by using ECSS-compliant techniques. Fault Tree Analysis (FTA) is one of these techniques. FTA is being automated within various Model Driven Engineering (MDE)-based methodologies. The tool-supported CHESS-methodology is one of them. This methodology incorporates ConcertoFLA, a dependability analysis technique enabling failure behavior analysis and thus FTA-results generation. ConcertoFLA, however, similarly to other techniques, still belongs to the academic research niche. To promote this technique within the space industry, we apply it on an ACS and discuss about its multi-faceted potentialities in the context of ECSS-compliant engineering.
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have mostly focused on meeting project budget and schedule objectives, the proposed approach addresses the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and to quantify the impact of critical component failures or critical risk events in the implementation process.
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
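For a sense of why acceleration is needed, here is a minimal, purely illustrative comparison of an exact AND-gate evaluation with a naive Monte Carlo estimate of the same top event; the barrier names and probabilities are hypothetical, and the noisiness of the estimate at such low probabilities is exactly the rare-event problem an accelerated hybrid method addresses.

```python
# Exact AND-gate evaluation vs. a naive Monte Carlo estimate of the same
# top event.  Barrier names and probabilities are hypothetical.
import numpy as np

p_transponder, p_visual, p_detect = 1e-3, 5e-2, 1e-2

# Top event: all three safety barriers fail at once (AND gate).
p_exact = p_transponder * p_visual * p_detect   # 5e-7

rng = np.random.default_rng(1)
n = 10_000_000
fails = (
    (rng.random(n) < p_transponder)
    & (rng.random(n) < p_visual)
    & (rng.random(n) < p_detect)
)
p_mc = fails.mean()
print(f"exact {p_exact:.1e}   monte carlo {p_mc:.1e}")
# With only ~5 expected hits in 1e7 trials, the naive estimate is extremely
# noisy -- the rare-event problem that motivates accelerated hybrid methods.
```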
Logic flowgraph methodology - A tool for modeling embedded systems
NASA Technical Reports Server (NTRS)
Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.
1991-01-01
The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of using such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.
Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-04-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant by using the fault tree analysis (FTA) method. The results of the analysis for a 200 h operating interval show that the probabilities of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department are 73, 64, and 95 percent, respectively, and the conveyor belt subsystem was found to be the most failure-prone. Finally, maintenance is proposed as a method to control and prevent failures.
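As a worked-arithmetic sketch of how such interval probabilities combine, the snippet below backs constant failure rates out of the quoted 200-hour probabilities (assuming exponentially distributed lifetimes) and recombines the two named subsystems in series; it does not reproduce the department's 95 percent figure exactly because the full tree contains additional basic events.

```python
# Back out constant failure rates from the quoted 200-hour failure
# probabilities (assuming exponential lifetimes), then recombine them.
# Illustrative only; the paper's full tree has additional basic events.
import math

t = 200.0          # operating hours
p_crushing = 0.73
p_conveyor = 0.64

lam_crushing = -math.log(1 - p_crushing) / t   # ~6.5e-3 per hour
lam_conveyor = -math.log(1 - p_conveyor) / t   # ~5.1e-3 per hour

# Series (OR) combination of just these two subsystems:
p_series = 1 - math.exp(-(lam_crushing + lam_conveyor) * t)
print(f"two-subsystem failure probability over {t:.0f} h: {p_series:.2f}")  # ~0.90
```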
Risk assessment techniques with applicability in marine engineering
NASA Astrophysics Data System (ADS)
Rudenko, E.; Panaitescu, F. V.; Panaitescu, M.
2015-11-01
Nowadays risk management is a carefully planned process. The task of risk management is organically woven into the general problem of increasing the efficiency of a business. A passive attitude to risk and mere awareness of its existence are being replaced by active management techniques. Risk assessment is one of the most important stages of risk management, since managing risk first requires analyzing and evaluating it. There are many definitions of this notion, but in the general case risk assessment refers to the systematic process of identifying the factors and types of risk and their quantitative assessment; that is, risk analysis methodology combines mutually complementary quantitative and qualitative approaches. Purpose of the work: in this paper we consider Fault Tree Analysis (FTA) as the risk assessment technique. The objectives are to understand the purpose of FTA, to understand and apply the rules of Boolean algebra, to analyse a simple system using FTA, and to weigh the advantages and disadvantages of FTA. Research and methodology: the main purpose is to help identify potential causes of system failures before the failures actually occur, and to evaluate the probability of the top event. The steps of the analysis are: examining the system from the top down, using symbols to represent events, using mathematical tools for critical areas, and using fault tree logic diagrams to identify the cause of the top event. Results: the study yields the critical areas, the fault tree logic diagrams, and the probability of the top event. These results can be used in risk assessment analyses.
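The Boolean-algebra step mentioned above usually reduces the tree to minimal cut sets, after which the top-event probability follows from inclusion-exclusion; a small sketch with hypothetical marine-equipment events:

```python
# Top-event probability from minimal cut sets by inclusion-exclusion,
# assuming independent basic events.  Events and numbers are hypothetical.
from itertools import combinations

p = {"pump": 0.02, "valve": 0.05, "sensor": 0.01, "operator": 0.03}
cut_sets = [{"pump", "valve"}, {"sensor"}, {"valve", "operator"}]

def prob_union(cut_sets, p):
    """P(at least one cut set fully fails), by inclusion-exclusion."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            events = set().union(*combo)
            term = 1.0
            for e in events:
                term *= p[e]
            total += (-1) ** (k + 1) * term
    return total

print(f"P(top event) = {prob_union(cut_sets, p):.5f}")   # ~0.01245
```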
NASA Astrophysics Data System (ADS)
Jackson, C. A. L.; Bell, R. E.; Rotevatn, A.; Tvedt, A. B. M.
2015-12-01
Normal faulting accommodates stretching of the Earth's crust and is one of the fundamental controls on landscape evolution and sediment dispersal in rift basins. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, so assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; it may therefore be argued that it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical, given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain fault length and thus tip behaviour through time. We conclude that rapid length establishment prior to displacement accumulation may be more common than is typically assumed, challenging the well-established, widely cited, and perhaps overused isolated fault model.
Using certification trails to achieve software fault tolerance
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, they are accepted as correct; otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output, even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized and illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach is compared to other approaches to fault tolerance. Because of space limitations we have omitted examples of our technique applied to the Huffman tree and convex hull problems. These can be found in the full version of this paper.
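A minimal sketch of the idea on the minimum-spanning-tree problem the authors mention: phase 1 sorts the edges (the expensive step) and records the sorted order as its certification trail; phase 2 validates the trail in linear time and re-runs Kruskal without sorting, so a corrupted trail triggers an error rather than a wrong answer. The graph, weights, and the exact form of the trail are illustrative assumptions, not the paper's construction.

```python
# Certification-trail sketch on minimum spanning trees (illustrative).
from collections import Counter

def kruskal(n, sorted_edges):
    """Kruskal's algorithm over (weight, u, v) edges already sorted by weight."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted_edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

def phase1(n, edges):
    trail = sorted(edges)            # certification trail: the sorted edge list
    return kruskal(n, trail), trail

def phase2(n, edges, trail):
    # Linear-time trail checks: sortedness, and same multiset of edges.
    if any(trail[i][0] > trail[i + 1][0] for i in range(len(trail) - 1)):
        raise RuntimeError("certification trail is not sorted")
    if Counter(trail) != Counter(edges):
        raise RuntimeError("certification trail does not match the input")
    return kruskal(n, trail)         # no O(E log E) sort needed here

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
mst1, trail = phase1(4, edges)
mst2 = phase2(4, edges, trail)
print("results agree:", mst1 == mst2)   # True
```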
Ching, K.-E.; Rau, R.-J.; Zeng, Y.
2007-01-01
A coseismic source model of the 2003 Mw 6.8 Chengkung, Taiwan, earthquake was well determined with 213 GPS stations, providing a unique opportunity to study the characteristics of coseismic displacements of a high-angle buried reverse fault. Horizontal coseismic displacements show fault-normal shortening across the fault trace. Displacements on the hanging wall reveal fault-parallel and fault-normal lengthening. The largest horizontal and vertical GPS displacements reached 153 and 302 mm, respectively, in the middle part of the network. Fault geometry and slip distribution were determined by inverting GPS data using a three-dimensional (3-D) layered-elastic dislocation model. The slip is mainly concentrated within a 44 × 14 km slip patch centered at 15 km depth with peak amplitude of 126.6 cm. Results from 3-D forward-elastic model tests indicate that the dome-shaped folding on the hanging wall is reproduced with fault dips greater than 40°. Compared with the rupture area and average slip from slow slip earthquakes and a compilation of finite source models of 18 earthquakes, the Chengkung earthquake generated a larger rupture area and a lower stress drop, suggesting lower than average friction. Hence the Chengkung earthquake seems to be a transitional example between regular and slow slip earthquakes. The coseismic source model of this event indicates that the Chihshang fault is divided into a creeping segment in the north and a locked segment in the south. An average recurrence interval of 50 years for a magnitude 6.8 earthquake was estimated for the southern fault segment. Copyright 2007 by the American Geophysical Union.
Using Earthquake Analysis to Expand the Oklahoma Fault Database
NASA Astrophysics Data System (ADS)
Chang, J. C.; Evans, S. C.; Walter, J. I.
2017-12-01
The Oklahoma Geological Survey (OGS) is compiling a comprehensive Oklahoma Fault Database (OFD), which includes faults mapped in OGS publications, university thesis maps, and industry-contributed shapefiles. The OFD includes nearly 20,000 fault segments, but the work is far from complete. The OGS plans on incorporating other sources of data into the OFD, such as new faults from earthquake sequence analyses, geologic field mapping, active-source seismic surveys, and potential-fields modeling. A comparison of Oklahoma seismicity and the OFD reveals that earthquakes in the state appear to nucleate on mostly unmapped or unknown faults. Here, we present faults derived from earthquake sequence analyses. From 2015 to the present, there has been a five-fold increase in real-time seismic stations in Oklahoma, which has greatly expanded and densified the state's seismic network. The current seismic network not only improves our threshold for locating weaker earthquakes, but also allows us to better constrain focal plane solutions (FPS) from first-motion analyses. Using nodal planes from the FPS, HypoDD relocation, and historic seismic data, we can elucidate these previously unmapped seismogenic faults. As the OFD is a primary resource for various scientific investigations, the inclusion of seismogenic faults improves derivative studies, particularly with respect to seismic hazards. Our primary focus is on four areas of interest, which have had M5+ earthquakes in recent Oklahoma history: Pawnee (M5.8), Prague (M5.7), Fairview (M5.1), and Cushing (M5.0). Subsequent areas of interest will include seismically active, data-rich areas, such as the central and north-central parts of the state.
Tectonic implications of Mars crustal magnetism
Connerney, J. E. P.; Acuña, M. H.; Ness, N. F.; Kletetschka, G.; Mitchell, D. L.; Lin, R. P.; Reme, H.
2005-01-01
Mars currently has no global magnetic field of internal origin but must have had one in the past, when the crust acquired intense magnetization, presumably by cooling in the presence of an Earth-like magnetic field (thermoremanent magnetization). A new map of the magnetic field of Mars, compiled by using measurements acquired at an ≈400-km mapping altitude by the Mars Global Surveyor spacecraft, is presented here. The increased spatial resolution and sensitivity of this map provide new insight into the origin and evolution of the Mars crust. Variations in the crustal magnetic field appear in association with major faults, some previously identified in imagery and topography (Cerberus Rupes and Valles Marineris). Two parallel great faults are identified in Terra Meridiani by offset magnetic field contours. They appear similar to transform faults that occur in oceanic crust on Earth, and support the notion that the Mars crust formed during an early era of plate tectonics. PMID:16217034
Tectonic implications of Mars crustal magnetism.
Connerney, J E P; Acuña, M H; Ness, N F; Kletetschka, G; Mitchell, D L; Lin, R P; Reme, H
2005-10-18
Mars currently has no global magnetic field of internal origin but must have had one in the past, when the crust acquired intense magnetization, presumably by cooling in the presence of an Earth-like magnetic field (thermoremanent magnetization). A new map of the magnetic field of Mars, compiled by using measurements acquired at an approximately 400-km mapping altitude by the Mars Global Surveyor spacecraft, is presented here. The increased spatial resolution and sensitivity of this map provide new insight into the origin and evolution of the Mars crust. Variations in the crustal magnetic field appear in association with major faults, some previously identified in imagery and topography (Cerberus Rupes and Valles Marineris). Two parallel great faults are identified in Terra Meridiani by offset magnetic field contours. They appear similar to transform faults that occur in oceanic crust on Earth, and support the notion that the Mars crust formed during an early era of plate tectonics.
Bodin, Paul; Bilham, Roger; Behr, Jeff; Gomberg, Joan; Hudnut, Kenneth W.
1994-01-01
Five out of six functioning creepmeters on southern California faults recorded slip triggered at the time of some or all of the three largest events of the 1992 Landers earthquake sequence. Digital creep data indicate that dextral slip was triggered within 1 min of each mainshock and that maximum slip velocities occurred 2 to 3 min later. The duration of triggered slip events ranged from a few hours to several weeks. We note that triggered slip occurs commonly on faults that exhibit fault creep. To account for the observation that slip can be triggered repeatedly on a fault, we propose that the amplitude of triggered slip may be proportional to the depth of slip in the creep event and to the available near-surface tectonic strain that would otherwise eventually be released as fault creep. We advance the notion that seismic surface waves, perhaps amplified by sediments, generate transient local conditions that favor the release of tectonic strain to varying depths. Synthetic strain seismograms are presented that suggest increased pore pressure during periods of fault-normal contraction may be responsible for triggered slip, since maximum dextral shear strain transients correspond to times of maximum fault-normal contraction.
Read buffer optimizations to support compiler-assisted multiple instruction retry
NASA Technical Reports Server (NTRS)
Alewine, N. J.; Fuchs, W. K.; Hwu, W. M.
1993-01-01
Multiple instruction retry is a recovery mechanism for transient processor faults. We previously developed a compiler-assisted approach to multiple instruction retry in which a read buffer of size 2N (where N represents the maximum instruction rollback distance) was used to resolve some data hazards while the compiler resolved the remaining hazards. The compiler-assisted scheme was shown to reduce the performance overhead and/or hardware complexity normally associated with hardware-only retry schemes. This paper examines the size and design of the read buffer. We establish a practical lower bound and average size requirement for the read buffer by modifying the scheme to save only the data required for rollback. The study measures the effect on the performance of a DECstation 3100 running ten application programs using six read buffer configurations with varying read buffer sizes. Two alternative configurations are shown to be the most efficient, differing depending on whether split-cycle saves are assumed. Up to a 55 percent reduction in read buffer size is achievable, with an average reduction of 39 percent, given the most efficient read buffer configuration and a variety of applications.
Machine Learning of Fault Friction
NASA Astrophysics Data System (ADS)
Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.
2017-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprising fault blocks surrounding fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load), and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick-slip and slow-slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys, and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774; Rouet-Leduc, B., et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field; Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
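In the spirit of the decision-tree ensembles described above, here is a toy sketch that regresses a sawtooth "shear stress" on statistical features of windowed signals; the synthetic data merely encode the qualitative observation that AE amplitude statistics track position in the slip cycle, and none of the numbers come from the experiments.

```python
# Decision-tree ensemble regression: predict a sawtooth "shear stress" from
# statistical features of windowed signals.  All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_windows = 2000
phase = (np.arange(n_windows) % 200) / 200.0      # position within a slip cycle
stress = 1.0 - phase                               # sawtooth stick-slip stress

# Synthetic AE windows whose variance grows as failure approaches.
windows = [rng.normal(scale=0.1 + 0.9 * phase[i], size=256)
           for i in range(n_windows)]
features = np.array([[w.var(), np.abs(w).max(), np.percentile(np.abs(w), 90)]
                     for w in windows])

split = int(0.8 * n_windows)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features[:split], stress[:split])
print("held-out R^2:", round(model.score(features[split:], stress[split:]), 3))
```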
d'Alessio, M. A.; Johanson, I.A.; Burgmann, R.; Schmidt, D.A.; Murray, M.H.
2005-01-01
Observations of surface deformation allow us to determine the kinematics of faults in the San Francisco Bay Area. We present the Bay Area velocity unification (BAVU, "bay view"), a compilation of over 200 horizontal surface velocities computed from campaign-style and continuous Global Positioning System (GPS) observations from 1993 to 2003. We interpret this interseismic velocity field using a three-dimensional block model to determine the relative contributions of block motion, elastic strain accumulation, and shallow aseismic creep. The total relative motion between the Pacific plate and the rigid Sierra Nevada/Great Valley (SNGV) microplate is 37.9 ± 0.6 mm yr-1 directed toward N30.4°W ± 0.8° at San Francisco (±2σ). Fault slip rates from our preferred model are typically within the error bounds of geologic estimates but provide a better fit to geodetic data (notable right-lateral slip rates in mm yr-1: San Gregorio fault, 2.4 ± 1.0; West Napa fault, 4.0 ± 3.0; zone of faulting along the eastern margin of the Coast Range, 5.4 ± 1.0; and Mount Diablo thrust, 3.9 ± 1.0 of reverse slip and 4.0 ± 0.2 of right-lateral strike slip). Slip on the northern Calaveras is partitioned between both the West Napa and Concord/Green Valley fault systems. The total convergence across the Bay Area is negligible. Poles of rotation for Bay Area blocks progress systematically from the North America-Pacific to the North America-SNGV poles. The resulting present-day relative motion cannot explain the strike of most Bay Area faults, but fault strike does loosely correlate with inferred plate motions at the time each fault initiated. Copyright 2005 by the American Geophysical Union.
Scaling Relations of Earthquakes on Inland Active Mega-Fault Systems
NASA Astrophysics Data System (ADS)
Murotani, S.; Matsushima, S.; Azuma, T.; Irikura, K.; Kitagawa, S.
2010-12-01
Since 2005, the Headquarters for Earthquake Research Promotion (HERP) has been publishing 'National Seismic Hazard Maps for Japan' to provide useful information for the disaster prevention countermeasures of national and local public agencies, as well as to promote public awareness of earthquake disaster prevention. In the course of making the 2009 version of the map, which commemorates the tenth anniversary of the adoption of the Comprehensive Basic Policy, the methods to evaluate earthquake magnitude, to predict strong ground motion, and to construct underground structure were investigated in the Earthquake Research Committee and its subcommittees. In order to predict the magnitude of earthquakes occurring on mega-fault systems, we examined the scaling relations for mega-fault systems using 11 earthquakes whose source processes were analyzed by waveform inversion and whose surface ruptures were investigated. As a result, we found that the data fall between the scaling relations of seismic moment and rupture area of Somerville et al. (1999) and Irikura and Miyake (2001). We also found that the maximum displacement of surface rupture is two to three times larger than the average slip on the seismic fault, and that the surface fault length is equal to the length of the source fault. Furthermore, compiled source-fault data show that displacement saturates at 10 m when the fault length (L) exceeds 100 km. By assuming the fault width (W) to be 18 km, the average for inland earthquakes in Japan, and the displacement to saturate at 10 m for lengths of more than 100 km, we derived a new scaling relation between source area and seismic moment, S[km^2] = 1.0 × 10^-17 M0 [Nm], for mega-fault systems whose seismic moment (M0) exceeds 1.8 × 10^20 Nm.
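A quick worked example of the new relation under two stated assumptions, the 18 km width quoted above and a conventional crustal rigidity of 3.2 × 10^10 Pa (the rigidity is not given in the abstract):

```python
# Worked example of the proposed scaling relation S[km^2] = 1.0e-17 * M0[N m]
# for mega-fault systems, assuming W = 18 km (the stated Japanese inland
# average) and a conventional rigidity of 3.2e10 Pa (an assumption).
import math

L_km, W_km = 150.0, 18.0
S_km2 = L_km * W_km                    # rupture area, km^2
M0 = S_km2 / 1.0e-17                   # seismic moment from the new relation, N m
mu = 3.2e10                            # rigidity, Pa (assumed)
D_avg = M0 / (mu * S_km2 * 1e6)        # implied average slip, m
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)

print(f"M0 = {M0:.1e} N m, average slip = {D_avg:.1f} m, Mw = {Mw:.1f}")
# -> M0 = 2.7e+20 N m, average slip ~3.1 m, Mw ~7.6; a maximum surface
#    displacement of 2-3x the average slip approaches the 10 m saturation.
```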
ERIC Educational Resources Information Center
Natural History, 1987
1987-01-01
Provides a compilation of recommended field guides which deal with birds, mammals, trees, and wildflowers. Each recommended volume is described, noting its distinguishing features and evaluating its organization, photography, and text. Includes the author, publisher, and suggested retail price. (TW)
The embedded operating system project
NASA Technical Reports Server (NTRS)
Campbell, R. H.
1985-01-01
The design and construction of embedded operating systems for real-time advanced aerospace applications were investigated. The applications require reliable operating system support that must accommodate computer networks. Problems that arise in the construction of such operating systems, reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing are reported. A thesis that provides theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems is included. The following items are addressed: (1) atomic actions and fault-tolerance issues; (2) operating system structure; (3) program development; (4) a reliable compiler for Path Pascal; and (5) mediators, a mechanism for scheduling distributed system processes.
Sweetkind, Donald S.; Bova, Shiera C.; Langenheim, V.E.; Shumaker, Lauren E.; Scheirer, Daniel S.
2013-01-01
Stratigraphic information from 391 oil and gas exploration wells in Cuyama Valley, California, and surrounding areas is herein compiled in digital form from reports that were released originally in paper form. The Cuyama Basin is located within the southeasternmost part of the Coast Ranges and north of the western Transverse Ranges, west of the San Andreas fault. Knowledge of the location and elevation of stratigraphic tops of formations throughout the basin is a first step toward understanding depositional trends and the structural evolution of the basin through time, and helps in understanding the slip history and partitioning of slip on the San Andreas and related faults.
Methods for Probabilistic Fault Diagnosis: An Electrical Power System Case Study
NASA Technical Reports Server (NTRS)
Ricks, Brian W.; Mengshoel, Ole J.
2009-01-01
Health management systems that more accurately and quickly diagnose faults that may occur in different technical systems on-board a vehicle will play a key role in the success of future NASA missions. We discuss in this paper the diagnosis of abrupt continuous (or parametric) faults within the context of probabilistic graphical models, more specifically Bayesian networks that are compiled to arithmetic circuits. This paper extends our previous research, within the same probabilistic setting, on diagnosis of abrupt discrete faults. Our approach and diagnostic algorithm ProDiagnose are domain-independent; however we use an electrical power system testbed called ADAPT as a case study. In one set of ADAPT experiments, performed as part of the 2009 Diagnostic Challenge, our system turned out to have the best performance among all competitors. In a second set of experiments, we show how we have recently further significantly improved the performance of the probabilistic model of ADAPT. While these experiments are obtained for an electrical power system testbed, we believe they can easily be transitioned to real-world systems, thus promising to increase the success of future NASA missions.
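As a minimal illustration of the probabilistic setting (though far simpler than ADAPT's Bayesian network compiled to arithmetic circuits), here is exact diagnosis by Bayes' rule on a two-node fault-to-sensor model; the priors and likelihoods are made up.

```python
# Minimal probabilistic diagnosis: exact inference on a two-node Bayesian
# network (fault -> sensor reading).  All numbers are hypothetical; the real
# ADAPT model is far larger and is compiled to arithmetic circuits for speed.

p_fault = 0.01                      # prior probability the component is faulty
p_abnormal_given_fault = 0.95       # sensor sensitivity
p_abnormal_given_ok = 0.02          # false-alarm rate

# Bayes' rule: P(fault | abnormal reading)
num = p_fault * p_abnormal_given_fault
den = num + (1 - p_fault) * p_abnormal_given_ok
print(f"P(fault | abnormal) = {num / den:.3f}")   # ~0.324
```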
1983-04-01
... tolerances or specifications ... available assets ... diagnostic/fault isolation devices ... readiness float ... operation of cannibalization point ... Why sustain materiel ... with diagnostic software based on a "fault tree" representation of the M65 ThS to bridge the gap in diagnostics capability was demonstrated in 1980 and ... (identification friend or foe), which has much lower reliability than TSQ-73-peculiar hardware. Thus, as in other examples, reported readiness does not reflect
AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment
2014-10-01
... Analysis Generator ... 3.2.3 Mapping to OpenFTA Format File ... 3.2.4 Mapping to Generic XML Format ... 3.2.5 AADL and FTA Mapping Rules ... 3.2.6 Issues ... PSSA), System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs). The
Do evergreen and deciduous trees have different effects on net N mineralization in soil?
Mueller, Kevin E; Hobbie, Sarah E; Oleksyn, Jacek; Reich, Peter B; Eissenstat, David M
2012-06-01
Evergreen and deciduous plants are widely expected to have different impacts on soil nitrogen (N) availability because of differences in leaf litter chemistry and ensuing effects on net N mineralization (N(min)). We evaluated this hypothesis by compiling published data on net N(min) rates beneath co-occurring stands of evergreen and deciduous trees. The compiled data included 35 sets of co-occurring stands in temperate and boreal forests. Evergreen and deciduous stands did not have consistently divergent effects on net N(min) rates; net N(min) beneath deciduous trees was higher when comparing natural stands (19 contrasts), but equivalent to evergreens in plantations (16 contrasts). We also compared net N(min) rates beneath pairs of co-occurring genera. Most pairs of genera did not differ consistently, i.e., tree species from one genus had higher net N(min) at some sites and lower net N(min) at other sites. Moreover, several common deciduous genera (Acer, Betula, Populus) and deciduous Quercus spp. did not typically have higher net N(min) rates than common evergreen genera (Pinus, Picea). There are several reasons why tree effects on net N(min) are poorly predicted by leaf habit and phylogeny. For example, the amount of N mineralized from decomposing leaves might be less than the amount of N mineralized from organic matter pools that are less affected by leaf litter traits, such as dead roots and soil organic matter. Also, effects of plant traits and plant groups on net N(min) probably depend on site-specific factors such as stand age and soil type.
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used. These methods can detect rolling bearing failures effectively, but achieving good detection results often requires a large number of training samples. Accordingly, a novel clustering method is proposed in this paper, one that is able to find the correct number of clusters automatically. The effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types of small samples; the diagnosis results are of relatively high accuracy even for massive samples.
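The abstract does not spell out how the cluster count is chosen, so the sketch below stands in with one common unsupervised recipe, scanning k and keeping the best silhouette score; the blob data are synthetic placeholders for bearing fault features, and this is not the authors' algorithm.

```python
# Stand-in for automatic cluster-count selection: scan k and maximize the
# silhouette score.  Synthetic blobs substitute for bearing fault features.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

best_k, best_score = None, -1.0
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"selected k = {best_k} (silhouette = {best_score:.2f})")
```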
The Quest for the Africa-Eurasia plate boundary West of the Strait of Gibraltar
NASA Astrophysics Data System (ADS)
Zitellini, N.
2009-04-01
A new swath bathymetry compilation of the Gulf of Cadiz area and SW Iberia is presented. The new map is the result of collaborative research performed after the year 2000 by teams from 7 European countries and 14 research institutions. This new dataset allows us, for the first time, to present and discuss the missing link in the plate boundary between Eurasia and Africa in the Central Atlantic. A set of almost linear and subparallel dextral strike-slip faults, the SWIM Faults (SWIM is the acronym of the ESF EuroMargins project "Earthquake and Tsunami hazards of active faults at the South West Iberian Margin: deep structure, high-resolution imaging and paleoseismic signature"), was mapped using the new swath bathymetry compilation available in the area. The SWIM Faults form a narrow band of deformation over a length of 600 km coincident with a small circle centred on the pole of rotation of Africa with respect to Eurasia. This narrow band of deformation connects the Gloria Fault to the Rif-Tell Fault Zone, two segments of the plate boundary between Africa and Eurasia. In addition, the SWIM Faults cut across the Gulf of Cadiz, in the Atlantic Ocean, where the 1755 Great Lisbon earthquake, M~8.5-8.7, and tsunami were generated, providing new insight into its source location. SWIM Team: E. Gràcia (2), L. Matias (3), P. Terrinha (4), M.A. Abreu (5), G. DeAlteriis (6), J.P. Henriet (7), J.J. Dañobeitia (2), D.G. Masson (8), T. Mulder (9), R. Ramella (10), L. Somoza (11) and S. Diez (2). (2) Unitat de Tecnologia Marina (CSIC), Centre Mediterrani d'Investigacions Marines i Ambientals, Barcelona, Spain; (3) Centro Geofísica da Universidade de Lisboa (CGUL, IDL), Lisboa, Portugal; (4) National Institute for Engineering, Technology and Innovation (INETI, LATTEX), Departamento de Geologia Marinha, Amadora, Portugal; (5) Estrutura de Missão para a Extensão da Plataforma Continental, Lisboa, Portugal; (6) Geomare Sud IAMC, CNR, Napoli, Italy; (7) Renard Centre of Marine Geology, Dpt. Geology and Soil Science, Gent University, Gent, Belgium; (8) National Oceanography Centre, European Way, Southampton, United Kingdom; (9) Département de Géologie et Océanographie, Talence Cedex, France; (10) Department for the Development of Marine Technology and Research, Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Sgonico, Italy; (11) Geología Marina, Instituto Geológico y Minero de España, Madrid, Spain
Fault Analysis on Bevel Gear Teeth Surface Damage of Aeroengine
NASA Astrophysics Data System (ADS)
Cheng, Li; Chen, Lishun; Li, Silu; Liang, Tao
2017-12-01
To address the problem of bevel gear tooth surface damage in an aero-engine, a fault tree of the tooth surface damage was drawn up from the logical relations among possible causes, and scanning electron microscopy, energy spectrum analysis, metallographic examination, hardness measurement and other analysis means were used to investigate the spalled gear tooth. The results showed that the material composition, metallographic structure, micro-hardness and carburization depth of the faulty bevel gear all met technical requirements, and that a contact fatigue spall defect caused the tooth surface damage. The main cause was the small magnitude of interference between the accessory gearbox installation hole and the driving bevel gear bearing seat. Improvement measures were proposed and subsequently verified to be effective.
Goal-Function Tree Modeling for Systems Engineering and Fault Management
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Breckenridge, Jonathan T.
2013-01-01
This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides a means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT, and its ability to represent both nominal and off-nominal goals, enables the GFT to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and the definition of system failure scenarios.
Risk management of PPP project in the preparation stage based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Xing, Yuanzhi; Guan, Qiuling
2017-03-01
Risk management of PPP (Public Private Partnership) projects can improve risk control between government departments and private investors, leading to more beneficial decisions, reduced investment losses and mutual benefit. This paper therefore takes the risks of the PPP project preparation stage as its research object, identifying and confirming four types of risk. Fault tree analysis (FTA) is then used to evaluate the risk factors belonging to the different parts and to quantify the degree of risk impact on the basis of the identification. In addition, the importance order of the risk factors is determined by calculating unit structural importance for the PPP project preparation stage. The results show that the accuracy of government decision-making, the rationality of private investors' fund allocation and the instability of market returns are the main factors generating shared risk on such projects.
Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1994-01-01
The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
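As a rough illustration of the contrast drawn above, the sketch below propagates basic-event uncertainty through AND/OR gates twice: once with crisp point probabilities (randomness) and once with interval-valued probabilities standing in for fuzzy numbers (vagueness). All event names and numbers are invented; neither IRRAS nor FUZZYFTA is reproduced here:

```python
def and_gate(ps):                      # independent basic events
    out = 1.0
    for p in ps:
        out *= p
    return out

def or_gate(ps):
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

# Probabilistic treatment: crisp point probabilities (randomness).
p_valve, p_pump = 1e-3, 5e-4           # invented failure probabilities
print("point    OR:", or_gate([p_valve, p_pump]))
print("point   AND:", and_gate([p_valve, p_pump]))

# Fuzzy-style treatment: interval probabilities (vagueness). Because both
# gates are monotone in each input, propagating the endpoints is valid.
def gate_interval(gate, intervals):
    return (gate([lo for lo, _ in intervals]),
            gate([hi for _, hi in intervals]))

intervals = [(5e-4, 2e-3), (1e-4, 1e-3)]
print("interval OR:", gate_interval(or_gate, intervals))
print("interval AND:", gate_interval(and_gate, intervals))
```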
Enterprise architecture availability analysis using fault trees and stakeholder interviews
NASA Astrophysics Data System (ADS)
Närman, Per; Franke, Ulrik; König, Johan; Buschle, Markus; Ekstedt, Mathias
2014-01-01
The availability of enterprise information systems is a key concern for many organisations. This article describes a method for availability analysis based on Fault Tree Analysis and constructs from the ArchiMate enterprise architecture (EA) language. To test the quality of the method, several case studies within the banking and electrical utility industries were performed. Input data were collected through stakeholder interviews, and the results from the case studies were compared with availability log data to determine the accuracy of the method's predictions. In the five cases where accurate log data were available, the yearly downtime estimates were within eight hours of the actual downtimes. The cost of performing the analysis was low; no case study required more than 20 man-hours of work, making the method ideal for practitioners with an interest in obtaining rapid availability estimates of their enterprise information systems.
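A minimal sketch of the kind of downtime arithmetic the method automates, assuming invented component availabilities, a two-server redundant block (AND of failures) and a serial chain (OR of failures); the actual method derives this structure from ArchiMate models:

```python
HOURS_PER_YEAR = 8766

# Invented component availabilities for an illustrative system.
a_network, a_app = 0.999, 0.995
a_server = 0.99                        # two redundant servers: both must fail

u_servers = (1 - a_server) ** 2        # AND gate on the two server failures
# Serial chain: the system is up only if every block is up.
a_system = a_network * a_app * (1 - u_servers)

print(f"yearly downtime: {(1 - a_system) * HOURS_PER_YEAR:.1f} hours")
```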
Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A
2001-10-12
As is conventionally done, strategies for incorporating accident-prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps, risk assessment and hazard reduction (or safety) measures, are not linked interactively in the existing methodologies. This prevents a quantitative assessment of the impacts of safety measures on risk control. We have made an attempt to develop a methodology in which risk assessment steps are interactively linked with the implementation of safety measures. The resultant system tells us the extent of risk reduction achieved by each successive safety measure. It also indicates, based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA), whether a given unit can ever be made 'safe'. The application of the methodology is illustrated with a case study.
Uncertainty analysis in fault tree models with dependent basic events.
Pedroni, Nicola; Zio, Enrico
2013-06-01
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects of objective and epistemic dependences on the TE probability. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present).
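The Fréchet bounds mentioned above can be written out directly for two basic events with marginal probabilities p and q; this sketch only restates those standard bounds, not the DEnv method:

```python
# Frechet bounds: valid for ANY dependence between events A and B.
def frechet_and(p, q):
    return max(0.0, p + q - 1.0), min(p, q)

def frechet_or(p, q):
    return max(p, q), min(1.0, p + q)

p, q = 0.3, 0.4                        # illustrative basic-event probabilities
print("P(A AND B) in", frechet_and(p, q))   # (0.0, 0.3)
print("P(A OR  B) in", frechet_or(p, q))    # (0.4, 0.7)
# The independence value p*q = 0.12 lies inside the AND bounds, as it must.
```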
A fault tree model to assess probability of contaminant discharge from shipwrecks.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I
2014-11-15
Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation.
Qualitative Importance Measures of Systems Components - A New Approach and Its Applications
NASA Astrophysics Data System (ADS)
Chybowski, Leszek; Gawdzińska, Katarzyna; Wiśnicki, Bogusz
2016-12-01
The paper presents an improved methodology for analysing the qualitative importance of components in the functional and reliability structures of a system. We present basic importance measures, i.e. Birnbaum's structural measure, the order of the smallest minimal cut set, the repetition count of the i-th event in the Fault Tree, and the streams measure. A subsystem of circulation pumps and fuel heaters in the main engine fuel supply system of a container vessel illustrates the qualitative importance analysis. We constructed a functional model and a Fault Tree, which we analysed using qualitative measures. Additionally, we compared the calculated measures and introduced corrected measures as a tool for improving the analysis. We proposed scaled measures and a common measure taking into account the location of the component in the reliability and functional structures. Finally, we proposed an area where the measures could be applied.
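As an illustration of Birnbaum's structural measure discussed above, the sketch below computes it by brute force as the fraction of component-state pairs in which a component is critical. The three-component structure (two redundant pumps in series with a heater) is a hypothetical stand-in, not the paper's fuel-system model:

```python
from itertools import product

def system_up(x):
    """Structure function: (pump1 OR pump2) AND heater; x[i]=1 means 'works'."""
    pump1, pump2, heater = x
    return (pump1 or pump2) and heater

def birnbaum_structural(i, n=3):
    critical = 0
    for rest in product((0, 1), repeat=n - 1):
        x_up = list(rest[:i]) + [1] + list(rest[i:])
        x_dn = list(rest[:i]) + [0] + list(rest[i:])
        if system_up(x_up) != system_up(x_dn):
            critical += 1              # component i's state flips the system
    return critical / 2 ** (n - 1)

for i, name in enumerate(["pump1", "pump2", "heater"]):
    print(name, birnbaum_structural(i))
# The series component (heater) scores 0.75; each redundant pump scores 0.25.
```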
Schwartz, D.P.; Pantosti, D.; Okumura, K.; Powers, T.J.; Hamilton, J.C.
1998-01-01
Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that lower slip sections of large events such as 1906 must fill in on a periodic basis with smaller and more frequent earthquakes.
NASA Astrophysics Data System (ADS)
Muluneh, Ameha A.; Kidane, Tesfaye; Corti, Giacomo; Keir, Derek
2018-04-01
We evaluate the frictional strength of seismogenic faults in the Main Ethiopian Rift (MER) by inverting the available, well-constrained earthquake focal mechanisms. The regional stress field, given by −119.6°/77.2°, 6.2°/7.6°, and 97.5°/10.2° for the trend/plunge of σ1, σ2 and σ3, respectively, agrees well with previous fault kinematic and focal mechanism inversions. We determine the coefficient of friction, μ, for 44 seismogenic faults by assuming the pore pressure to be at hydrostatic conditions. Slip on 36 seismogenic faults occurs with μ ≥ 0.4; slip on the remaining eight faults is possible with low μ. In general, the coefficient of friction in the MER is compatible with a value of μ of 0.59 ± 0.16 (2σ standard deviation). The shear stresses range from 16 to 129 MPa, similar to crustal shear stresses observed in extensional tectonic regimes and in global compilations of shear stresses from major fault zones. The maximum shear stress is observed in the ductile crust, below the seismologically determined brittle-ductile transition (BDT) zone. Below the BDT, the crust is assumed to be weak due to thermal modification and/or high pore fluid pressure. Our results indicate linearly increasing μ and shear stress with depth. We argue that in the MER the upper crust is strong and deforms according to the Coulomb frictional-failure criterion.
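A much-simplified two-dimensional illustration of the quantity being estimated (not the study's full stress-inversion procedure): shear and normal tractions on a plane from the Mohr relations, with hydrostatic pore pressure subtracted to form an effective friction coefficient. All numbers are invented:

```python
import math

def fault_friction(sigma1, sigma3, theta_deg, pore_pressure):
    """theta_deg: angle between the fault normal and sigma1 (inputs in MPa)."""
    theta = math.radians(theta_deg)
    mean, dev = (sigma1 + sigma3) / 2, (sigma1 - sigma3) / 2
    sigma_n = mean + dev * math.cos(2 * theta)   # normal traction
    tau = dev * math.sin(2 * theta)              # shear traction
    return tau / (sigma_n - pore_pressure)       # effective friction

# e.g. ~10 km depth: invented principal stresses, hydrostatic pore pressure.
print(f"mu = {fault_friction(265, 150, 60, 98):.2f}")   # ~0.6
```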
Felger, Tracey J.; Beard, Sue
2010-01-01
Regional stratigraphic units and structural features of the Lake Mead region are presented as a 1:250,000 scale map, and as a Geographic Information System database. The map, which was compiled from existing geologic maps of various scales, depicts geologic units, bedding and foliation attitudes, faults and folds. Units and structural features were generalized to highlight the regional stratigraphic and tectonic aspects of the geology of the Lake Mead region. This map was prepared in support of the papers presented in this volume, Special Paper 463, as well as to facilitate future investigations in the region. Stratigraphic units exposed within the area record 1800 million years of geologic history and include Proterozoic crystalline rocks, Paleozoic and Mesozoic sedimentary rocks, Mesozoic plutonic rocks, Cenozoic volcanic and intrusive rocks, sedimentary rocks and surficial deposits. Following passive margin sedimentation in the Paleozoic and Mesozoic, late Mesozoic (Sevier) thrusting and Late Cretaceous and early Tertiary compression produced major folding, reverse faulting, and thrust faulting in the Basin and Range, and resulted in regional uplift and monoclinal folding in the Colorado Plateau. Cenozoic extensional deformation, accompanied by sedimentation and volcanism, resulted in large-magnitude high- and low-angle normal faulting and strike-slip faulting in the Basin and Range; on the Colorado Plateau, extension produced north-trending high-angle normal faults. The latest history includes integration of the Colorado River system, dissection, development of alluvial fans, extensive pediment surfaces, and young faulting.
Trimble, Donald E.; Machette, Michael N.; Brandt, Theodore R.; Moore, David W.; Murray, Kyle E.
2003-01-01
This digital map shows bedding attitude symbols displayed over the geographic extent of surficial deposits and rock stratigraphic units (formations), as compiled by Trimble and Machette during 1973-1977 and published in 1979 (U.S. Geological Survey Map I-856-H) under the Front Range Urban Corridor Geology Program. Trimble and Machette compiled their geologic map from published geologic maps and unpublished geologic mapping having varied map unit schemes. A convenient feature of the compiled map is its uniform classification of geologic units that mostly matches those of companion maps to the north (USGS I-855-G) and to the south (USGS I-857-F). Published as a color paper map, the Trimble and Machette map was intended for land-use planning in the Front Range Urban Corridor. This map was recently (1997-1999) digitized under the USGS Front Range Infrastructure Resources Project (see cross-reference). In general, the mountainous areas in the west part of the map exhibit various igneous and metamorphic bedrock units of Precambrian age, major faults, and fault brecciation zones at the east margin (5-20 km wide) of the Front Range. The eastern and central parts of the map (Colorado Piedmont) depict a mantle of unconsolidated deposits of Quaternary age and interspersed outcroppings of Cretaceous or Tertiary-Cretaceous sedimentary bedrock. The Quaternary mantle comprises eolian deposits (quartz sand and silt), alluvium (gravel, sand, and silt of variable composition), colluvium, and a few landslides. At the mountain front, north-trending, dipping Paleozoic and Mesozoic sandstone, shale, and limestone bedrock formations form hogbacks and intervening valleys.
Slip Rates of Main Active Fault Zones Through Turkey Inferred From GPS Observations
NASA Astrophysics Data System (ADS)
Ozener, H.; Aktug, B.; Dogru, A.; Tasci, L.; Acar, M.; Emre, O.; Yilmaz, O.; Turgut, B.; Halicioglu, K.; Sabuncu, A.; Bal, O.; Eraslan, A.
2015-12-01
The Active Fault Map of Turkey was revised and published by the General Directorate of Mineral Research and Exploration in 2012. This map reveals that there are about 500 faults that can generate earthquakes. In order to understand the earthquake potential of these faults, their slip rates need to be determined. Although many regional and local studies were performed in the past, the slip rates of the active faults in Turkey have not been determined. In this study, block modelling, the most common method for producing slip rates, will be performed. The GPS velocities required for block modelling are being compiled from published studies and from raw data, and the velocity fields are then combined. To form a homogeneous velocity field, different stochastic models will be used and the optimal velocity field will be derived. In the literature, GPS site velocities computed for different purposes are combined globally, and the combined velocity field is used in the analysis of strain accumulation. It is also aimed to develop optimal stochastic models for combining the velocity data. Real-time, survey-mode and published GPS observations are being combined in this study, and we are also performing new GPS observations. Furthermore, micro-blocks and main fault zones from the Active Fault Map of Turkey will be determined, and the homogeneous velocity field will be used to infer slip rates of these active faults. Here, we present the results of the first year of the study. This study is supported by the Scientific and Technological Research Council of Turkey (TUBITAK)-CAYDAG under grant no. 113Y430.
NASA Astrophysics Data System (ADS)
Studnikigizbert, C.; Eich, L.; King, R.; Burchfiel, B. C.; Chen, Z.; Chen, L.
2004-12-01
Seismological (Holt et al. 1996), geodetic (King et al. 1996, Chen et al. 2000) and geological (Wang et al. 1995, Wang and Burchfiel 2002) studies have shown that upper crustal material north and east of the eastern Himalayan syntaxis rotates clockwise about the syntaxis, with the Xianshuihe fault accommodating most of this motion. Within the zone of rotating material, however, deformation is not completely homogeneous, and numerous differentially rotating small crustal fragments are recognised. We combine seismic (CSB and Harvard CMT catalogues), geodetic (CSB and MIT-Chengdu networks), remote sensing, compilation of existing regional maps and our own detailed field mapping to characterise the active tectonics of a clockwise-rotating crustal block between Zhongdian and Dali. The northeastern boundary is well defined by the northwest-striking left-lateral Zhongdian and Daju faults. The eastern boundary, on the other hand, is made up of an 80 km wide zone characterised by north-south trending extensional basins linked by NNE-trending left-lateral faults. Geological mapping suggests that strain is accommodated by three major transtensional fault systems: the Jianchuan-Lijiang, Heqing and Chenghai fault systems. Geodetic data indicate that this zone accommodates 10 +/- 1.4 mm/year of E-W extension, but strain may be (presently) preferentially partitioned along the easternmost (Chenghai) fault. Not all geodetic velocities are consistent with geological observations. In particular, rotation and concomitant transtension are somehow transferred across the Red River-Tongdian faults to the Nan Tinghe fault with no apparent accommodating structures. Rotation and extension are surmised to be related to the northward propagation of the syntaxis.
Moran, Michael J.; Wilson, Jon W.; Beard, L. Sue
2015-11-03
Several major faults, including the Salt Cedar Fault and the Palm Tree Fault, play an important role in the movement of groundwater. Groundwater may move along these faults and discharge where faults intersect volcanic breccias or fractured rock. Vertical movement of groundwater along faults is suggested as a mechanism for the introduction of the heat energy present in groundwater from many of the springs. Groundwater altitudes in the study area indicate a potential for flow from Eldorado Valley to Black Canyon, although current interpretations of the geology of this area do not favor such flow. If groundwater from Eldorado Valley discharges at springs in Black Canyon, then the development of groundwater resources in Eldorado Valley could result in a decrease in discharge from the springs. Geology and structure indicate that it is not likely that groundwater can move between Detrital Valley and Black Canyon. Thus, the development of groundwater resources in Detrital Valley may not result in a decrease in discharge from springs in Black Canyon.
von Huene, Roland E.; Miller, John J.; Dartnell, Peter
2016-01-01
The Semidi segment of the Alaska convergent margin appears capable of generating a giant tsunami like the one produced along the nearby Unimak segment in 1946. Reprocessed legacy seismic reflection data and a compilation of multibeam bathymetric surveys reveal structures that could generate such a tsunami. A 200 km long ridge or escarpment with crests >1 km high is the surface expression of an active out-of-sequence fault zone, recently referred to as a splay fault. Such faults are potentially tsunamigenic. This type of fault zone separates the relatively rigid rock of the margin framework from the anelastic accreted sediment prism. Seafloor relief of the ridge exceeds that of similar-age accretionary prism ridges, indicating preferential slip along the splay fault zone. The greater slip may derive from Quaternary subduction of the Patton Murray hot spot ridge that extends 200 km toward the east across the north Pacific. Estimates of tsunami repeat times from paleotsunami studies indicate that the Semidi segment could be near the end of its current inter-seismic cycle. GPS records from Chirikof Island at the shelf edge indicate 90% locking of plate interface faults. An earthquake in the shallow Semidi subduction zone could generate a tsunami that would inundate the US west coast more severely than the 1946 and 1964 earthquakes did, because the Semidi continental slope azimuth directs a tsunami southeastward.
Early Miocene Tectonic Activity in the western Ross Sea (Antarctica)
NASA Astrophysics Data System (ADS)
Sauli, C.; Sorlien, C. C.; Busetti, M.; Geletti, R.; De Santis, L.
2012-12-01
In the framework of the ROSSMAP Italian PNRA project, whose objective is to compile extended and revised digital maps of the main unconformities in the Ross Sea, Antarctica, much additional seismic reflection data that were not available to the previous ANTOSTRAT compilation were incorporated into a new ROSSMAP interpretation. The correlation across almost all of the Ross Sea, from DSDP Site 270 and Site 272 in the Eastern Basin to the northern Victoria Land Basin, of additional early Miocene and late Oligocene horizons that were not part of ANTOSTRAT allows interpretations to be made of fault activity and glacial erosion or deposition at a finer time resolution. New conclusions include that extensional or transtensional fault activity within the zone between the Victoria Land Basin and the Northern Basin initiated by 23 Ma or earlier and continued after 18 Ma. Steep parallel-striking faults in the southern Victoria Land Basin display both reverse and normal separation of 17.5 Ma (from Cape Roberts Program core 1) and post-16 Ma horizons, suggesting an important strike-slip component. This result may be compared with published papers that proposed post-17 Ma extension in the southern Victoria Land Basin and 16-17 Ma extension in the Adare Trough, north of the Ross Sea continental shelf, but no Miocene extension affecting the Northern Basin (Granot et al., 2010). Thus, our evidence for extension through the early Miocene is significant to post-spreading tectonic models. Reference: Granot R., Cande S. C., Stock J. M., Davey F. J. and Clayton R. W. (2010) Postspreading rifting in the Adare Basin, Antarctica: Regional tectonic consequences. Geochem. Geophys. Geosyst., 8, Q08005, doi:10.1029/2010GC003105.
NASA Astrophysics Data System (ADS)
Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali
2017-07-01
The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis, but fault severity assessment is a challenging task. The Wavelet Transform (WT), as a multivariate analysis tool, is able to trade off between time and frequency information in the signals and also serves as a de-noising method. The CWT scaling function gives different resolutions to discretely sampled signals, with very fine resolution at lower scales but coarser resolution at higher scales; however, its computational cost is high because it must produce the different signal resolutions. The DWT has a lower computational cost, as its dilation function allows the signal to be decomposed through a tree of low- and high-pass filters without further analysing the high-frequency components. In this paper, a method for bearing fault identification is presented that combines the Continuous Wavelet Transform (CWT) and Discrete Wavelet Transform (DWT) with envelope analysis for bearing fault diagnosis. The experimental data were sampled by Case Western Reserve University. The analysis results show that the proposed method is effective in bearing fault detection, identifying the exact fault location and assessing severity, especially for inner-race and outer-race faults.
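A hedged sketch of the DWT-plus-envelope step described above, using PyWavelets and SciPy (our library choices, not necessarily the paper's), with a synthetic impulse train standing in for the Case Western Reserve University data; the defect frequency, resonance and wavelet settings are all illustrative assumptions:

```python
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 12_000                            # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
bpfo = 107.0                           # hypothetical outer-race defect rate (Hz)

# Synthetic faulty-bearing signal: impulses at the defect rate exciting a
# ~2 kHz resonance, plus noise.
impulses = (np.sin(2 * np.pi * bpfo * t) > 0.999).astype(float)
kernel = np.sin(2 * np.pi * 2000 * t[:120]) * np.exp(-800 * t[:120])
signal = np.convolve(impulses, kernel, mode="same")
signal += 0.05 * np.random.default_rng(1).standard_normal(t.size)

# DWT de-noising: keep only the detail level whose band holds the resonance.
# With fs = 12 kHz and level 4, coeffs = [cA4, cD4, cD3, cD2, cD1];
# cD2 (index 3) spans roughly fs/8..fs/4 = 1.5-3 kHz.
coeffs = pywt.wavedec(signal, "db8", level=4)
coeffs = [c if i == 3 else np.zeros_like(c) for i, c in enumerate(coeffs)]
band = pywt.waverec(coeffs, "db8")[: t.size]

# Envelope spectrum: the defect frequency should stand out as a peak.
env = np.abs(hilbert(band))
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
mask = freqs > 10                      # skip the near-DC bins
peak = freqs[mask][spec[mask].argmax()]
print(f"dominant envelope frequency: {peak:.1f} Hz (expect ~{bpfo} Hz)")
```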
Experimental evaluation of the certification-trail method
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. A comprehensive attempt to assess experimentally the performance and overall value of the method is reported. The method is applied to algorithms for the following problems: Huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which an approach using certification trails allows for significantly faster overall program execution time than a basic time-redundancy approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed. In both cases, the algorithm which performs answer-validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis which enable comparison between the certification-trail method and the time-redundancy approach are presented. The analysis reveals some substantial and sometimes surprising advantages for the certification-trail method. Finally, the work our group performed on the design and implementation of fault-injection testbeds for experimental analysis of the certification-trail technique is discussed. This work employs two distinct methodologies: software fault injection (modification of instruction, data, and stack segments of programs on a Sun Sparcstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed at logical zero/one values). Our results indicate the viability of the certification-trail technique. It is also believed that the tools developed provide a solid base for additional exploration.
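For readers unfamiliar with the technique, here is a minimal sketch of a certification trail for sorting: the first execution emits the answer plus a trail (the sorting permutation), and an independent checker validates the pair in linear time instead of re-sorting; a mismatch signals a fault. This is a toy illustration, not the paper's implementation:

```python
def sort_with_trail(a):
    """First execution: compute the answer and a trail (the permutation)."""
    perm = sorted(range(len(a)), key=lambda i: a[i])
    return [a[i] for i in perm], perm

def check_with_trail(a, answer, perm):
    """Second execution: validate answer+trail in O(n), without re-sorting."""
    if len(perm) != len(a):
        return False
    seen = [False] * len(a)
    for i in perm:                      # trail must be a permutation of 0..n-1
        if not (0 <= i < len(a)) or seen[i]:
            return False
        seen[i] = True
    applied = [a[i] for i in perm]      # applying it must give the answer...
    return applied == answer and all(   # ...and the answer must be ordered
        applied[k] <= applied[k + 1] for k in range(len(applied) - 1))

data = [5, 3, 9, 1]
answer, trail = sort_with_trail(data)
assert check_with_trail(data, answer, trail)            # fault-free run passes
assert not check_with_trail(data, [1, 3, 5, 8], trail)  # corrupted answer caught
```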
Investigation of Fuel Oil/Lube Oil Spray Fires On Board Vessels. Volume 3.
1998-11-01
U.S. Coast Guard Research and Development Center 1082 Shennecossett Road, Groton, CT 06340-6096 Report No. CG-D-01-99, III Investigation of Fuel ...refinery). Developed the technical and mathematical specifications for BRAVO™2.0, a state-of-the-art Windows program for performing event tree and fault...tree analyses. Also managed the development of and prepared the technical specifications for QRA ROOTS™, a Windows program for storing, searching K-4
1992-01-01
boost plenum which houses the camshaft. The compressed mixture is metered by a throttle to intake valves of the engine. The engine is constructed from...difficulties associated with a time-tagged fault tree. In particular, recent work indicates that the multi-layer perceptron architecture can give good fdi...Abstract: In the past decade, wastepaper recycling has gained a wider acceptance. Depletion of tree stocks, waste water treatment demands and
Interim reliability evaluation program, Browns Ferry 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1981-01-01
Probabilistic risk analysis techniques, i.e., event tree and fault tree analysis, were utilized to provide a risk assessment of the Browns Ferry Nuclear Plant Unit 1. Browns Ferry 1 is a General Electric boiling water reactor of the BWR 4 product line with a Mark 1 (drywell and torus) containment. Within the guidelines of the IREP Procedure and Schedule Guide, dominant accident sequences that contribute to public health and safety risks were identified and grouped according to release categories.
NASA Astrophysics Data System (ADS)
Lin, S.; Luo, D.; Yanlin, F.; Li, Y.
2016-12-01
Shallow seismic reflection (SSR) is a major geophysical exploration method in urban active-fault exploration owing to its exploration depth range and high resolution. In this paper, we carried out SSR and high-resolution refraction (HRR) tests in the Liangyun Basin to explore a buried fault. We used an NZ distributed 64-channel seismic instrument, 60 Hz high-sensitivity detectors, a Geode multi-channel portable acquisition system and a hammer source. We selected single-side hammer hits with multiple stacking, 48 receiving channels and 12-fold coverage. As some SSR and HRR measuring lines coincide, we chose a multi-chase and encounter observation system. Based on satellite positioning, we arranged 11 survey lines in our study area with a total length of 8132 meters. GEOGIGA seismic reflection data processing software was used to process the SSR data. After repeated tests covering single-shot record compilation, interference-wave suppression, static correction, velocity parameter extraction and dynamic correction, we eventually obtained shallow seismic reflection profile images. Meanwhile, we used refraction and tomographic imaging software from a Canadian technology company, based on nonlinear first-arrival travel-time tomography, to process the HRR seismic data. Combined with drilling geological profiles, we interpreted the 11 measured seismic profiles. The results show 18 clear fault-feature breakpoints, including 4 southwest-dipping normal faults, 7 southwest-dipping reverse faults, one northeast-dipping normal fault and 6 northeast-dipping reverse faults. Breakpoint burial depth is 15-18 meters, and the inferred fault offset is 3-12 meters. Comprehensive analysis shows that the fault is a reverse fault with a northeast-dipping section, with fewer branch normal faults presenting southwest-dipping sections. Given the good correspondence among the seismic interpretation results, drilling data and SEM results on the property, occurrence and broken length of the fault, we consider the Liangyun fault to be an active fault with strong activity during the Neogene Pliocene and early-to-middle Pleistocene. The combined application of SSR and HRR can provide more parameters for interpreting the seismic results and improve the accuracy of the interpretation.
NASA Astrophysics Data System (ADS)
Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette
2016-04-01
Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, thus assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain fault length and thus tip behaviour through time. We conclude that rapid length establishment prior to displacement accumulation may be more common than is typically assumed, thus challenging the well-established, widely cited and perhaps overused isolated fault model.
Cost-effectiveness analysis of risk-reduction measures to reach water safety targets.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof; Pettersson, Thomas J R
2011-01-01
Identifying the most suitable risk-reduction measures in drinking water systems requires a thorough analysis of possible alternatives. In addition to the effects on the risk level, the economic aspects of the risk-reduction alternatives are commonly considered important. Drinking water supplies are complex systems, and to avoid sub-optimisation of risk-reduction measures the entire system from source to tap needs to be considered. There is a lack of methods for quantification of water supply risk reduction in an economic context for entire drinking water systems. The aim of this paper is to present a novel approach for risk assessment in combination with economic analysis to evaluate risk-reduction measures based on a source-to-tap approach. The approach combines a probabilistic and dynamic fault tree method with cost-effectiveness analysis (CEA). The developed approach comprises the following main parts: (1) quantification of the risk reduction of alternatives using a probabilistic fault tree model of the entire system; (2) combination of the modelling results with CEA; and (3) evaluation of the alternatives with respect to the risk reduction, the probability of not reaching water safety targets and the cost-effectiveness. The fault tree method and CEA enable comparison of risk-reduction measures in the same quantitative unit and consider costs and uncertainties. The approach provides a structured and thorough analysis of risk-reduction measures that facilitates transparency and long-term planning of drinking water systems in order to avoid sub-optimisation of the available resources for risk reduction.
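A minimal sketch of how parts (2)-(3) could combine: invented risk-reduction measures are ranked by risk reduction per unit cost, where in the real method the fault tree model would supply the risk-reduction figures:

```python
measures = {
    # name: (estimated risk reduction from the fault tree model, annual cost)
    "UV disinfection":    (0.40, 120.0),   # invented figures, cost in kEUR/yr
    "extra chlorination": (0.15,  20.0),
    "pipe renovation":    (0.25, 200.0),
}

# Cost-effectiveness ratio: risk reduction bought per kEUR spent.
cea = {name: red / cost for name, (red, cost) in measures.items()}
for name, ratio in sorted(cea.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:18s} risk reduction per kEUR: {ratio:.4f}")
```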
Preliminary geologic map of the San Guillermo Mountain Quadrangle, Ventura County, California
Minor, S.A.
1999-01-01
New 1:24,000-scale geologic mapping in the Cuyama 30' x 60' quadrangle, in support of the USGS Southern California Areal Mapping Project (SCAMP), is contributing to a more complete understanding of the stratigraphy, structure, and tectonic evolution of the complex junction area between the NW-striking Coast Ranges and EW-striking western Transverse Ranges. The 1:24,000-scale geologic map of the San Guillermo Mountain quadrangle is one of six contiguous 7 1/2' quadrangle geologic maps in the eastern part of the Cuyama map area being compiled for a more detailed portrayal and reevaluation of geologic structures and rock units shown on previous geologic maps of the area (e.g., Dibblee, 1979). The following observations and interpretations are based on the new San Guillermo Mountain geologic compilation: (1) The new geologic mapping in the northern part of the San Guillermo Mountain quadrangle allows for reinterpretation of fault architecture that bears on potential seismic hazards of the region. Previous mapping had depicted the eastern Big Pine fault (BPF) as a northeast-striking, sinistral strike-slip fault that extends for 30 km northeast of the Cuyama River to its intersection with the San Andreas fault (SAF). In contrast the new mapping indicates that the eastern BPF is a thrust fault that curves from a northeast strike to an east strike, where it is continuous with the San Guillermo thrust fault, and dies out further east about 15 km south of the SAF. This redefined segment of the BPF is a south-dipping, north-directed thrust, with dominantly dip slip components (rakes > 60 deg.), that places Middle Eocene marine rocks (Juncal and Matilija Formations) over Miocene through Pliocene(?) nonmarine rocks (Caliente, Quatal, and Morales Formations). Although a broad northeast-striking fault zone, exhibiting predominantly sinistral components of slip (rakes < 45 deg.), extends to the SAF as previously mapped, the fault zone does not connect to the southwest with the BPF but instead curves into a southwest-directed thrust fault system a short distance north of the BPF. Oligocene to Pliocene(?) nonmarine sedimentary and volcanic rocks of the Plush Ranch, Caliente, and Morales(?) Formations are folded on both sides of this fault zone (informally named the Lockwood Valley fault zone [LVFZ] on the map). South-southeast of the LVFZ overturned folds have southward vergence. Several moderate-displacement (< 50 m), mainly northwest-dipping thrust and reverse faults, exhibiting mostly sinistral-oblique slip, flank and strike parallel to the overturned folds. The fold vergence and thrust direction associated with the LVFZ is opposite to that of the redefined BPF, providing further evidence that the two faults are distinct structures. These revised fault interpretations bring into question earlier estimates of net sinistral strike-slip displacement of as much as 13 km along the originally defined eastern BPF, which assumed structural connection with the LVFZ. Also, despite sparse evidence for repeated Quaternary movement on the LVFZ (e.g., Dibblee, 1982), the potential for a large earthquake involving coseismic slip on both the LVFZ and the central BPF to the southwest may not be as great as once believed. (2) Several generations of Pleistocene and younger dissected alluvial terrace and fan deposits sit at various levels above modern stream channels throughout the quadrangle. These deposits give testimony to the recent uplift and related fault deformation that has occurred in the area. 
(3) A vast terrane of Eocene marine sedimentary rocks (Juncal and Matilija Formations and Cozy Dell Shale) exposed south of the Big Pine fault forms the southern two-thirds of the San Guillermo Mountain quadrangle. Benthic foraminifers collected from various shale intervals within the Juncal Formation indicate a Middle Eocene age (Ulatisian) for the entire formation (K. McDougall, unpub. data, 1998) and deposition at paleodepths as great as 2,000 m (i.e., lowe
CARE3MENU- A CARE III USER FRIENDLY INTERFACE
NASA Technical Reports Server (NTRS)
Pierce, J. L.
1994-01-01
CARE3MENU generates an input file for the CARE III program. CARE III is used for reliability prediction of complex, redundant, fault-tolerant systems including digital computers, aircraft, and nuclear and chemical control systems. The CARE III input file often becomes complicated and is not easily formatted with a text editor. CARE3MENU provides an easy, interactive method of creating an input file by automatically formatting a set of user-supplied inputs for the CARE III system. CARE3MENU provides detailed on-line help for most of its screen formats. The reliability model input process is divided into sections using menu-driven screen displays. Each stage, or set of identical modules comprising the model, must be identified and described in terms of the number of modules, the minimum number of modules for stage operation, and the critical fault threshold. The fault-handling and fault-occurrence models are detailed in several screens by parameters such as transition rates, propagation and detection densities, Weibull or exponential characteristics, and model accuracy. The system fault tree and critical-pairs fault tree screens are used to define the governing logic and to identify modules affected by component failures. Additional CARE3MENU screens prompt the user for output options and run-time control values such as mission time and truncation values. There are fourteen major screens, many with default values and HELP options. The documentation includes: 1) a user's guide with several examples of CARE III models, the dialog required to input them to CARE3MENU, and the output files created; and 2) a maintenance manual for assistance in changing the HELP files and modifying any of the menu formats or contents. CARE3MENU is written in FORTRAN 77 for interactive execution and has been implemented on a DEC VAX series computer operating under VMS. This program was developed in 1985.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario
2015-04-01
The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components, exploiting physical knowledge of the SOFC system as a whole. This phase consists of the Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at the system component level. The main outcome of the FTA is an inferential isolation tool (the Fault Signature Matrix, FSM), which uniquely links the faults to the symptoms detected during system monitoring. In this work the FTA is taken as a starting point for developing an improved FSM. Making use of a model-based investigation, a fault-to-symptom dependency study is performed. To this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated, one in the stack and four occurring at the balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
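A small sketch of how a Fault Signature Matrix supports isolation: each row is a fault, each column a monitored symptom, and a fault is isolable when the observed symptom vector matches exactly one row. Fault and symptom names here are invented, not taken from the paper:

```python
FSM = {
    #                  dT_stack  dP_blower  V_drop  dT_reformer
    "air leak":         (1,        1,         0,      0),
    "blower wear":      (0,        1,         0,      0),
    "stack degrading":  (1,        0,         1,      0),
    "reformer fouling": (1,        0,         0,      1),
}

def isolate(observed):
    """Return the unique fault whose signature matches, if any."""
    matches = [fault for fault, sig in FSM.items() if sig == observed]
    if len(matches) == 1:
        return matches[0]
    return f"ambiguous or unknown ({len(matches)} candidates)"

print(isolate((1, 0, 1, 0)))   # -> stack degrading
print(isolate((0, 0, 0, 0)))   # -> ambiguous or unknown (0 candidates)
```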
TU-AB-BRD-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunscombe, P.
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: Learn how to design a process map for a radiotherapy process; Learn how to perform failure modes and effects analysis for a given process; Learn what fault trees are all about; Learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis and fault tree analysis. Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is based exclusively on active faults and geodynamic finite element input models, with seismic catalogues used only in a posterior comparison. We applied the developed model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and geometric and kinematic parameters, together with estimates of slip rate; by default, all deformation in this model is released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study; in this model, deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and the final expected peak ground accelerations. We investigated both the source model and earthquake model uncertainties by varying the main active-fault and earthquake-rate calculation parameters through corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value at the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which input parameters influence the final hazard results, and to what degree. The comparison shows that the deformation models, with their internal variability, together with the choice of ground motion prediction equations (GMPEs), are the most influential parameters; both have a significant effect on the hazard results. Good knowledge of the existence of active faults and of their geometric and activity characteristics is thus of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
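One plausible sketch of the GEO-model step that turns a fault's slip rate into an activity rate is moment balancing: the moment accumulated per year is divided by the moment of a characteristic earthquake (Hanks-Kanamori scaling). The study's actual rate calculation may differ; parameter values are illustrative:

```python
def annual_event_rate(area_km2, slip_rate_mm_yr, mw, shear_modulus=3e10):
    """Characteristic-earthquake rate from moment balance (illustrative)."""
    moment_rate = shear_modulus * (area_km2 * 1e6) * (slip_rate_mm_yr * 1e-3)
    m0 = 10 ** (1.5 * mw + 9.05)       # Hanks-Kanamori moment of one Mw event
    return moment_rate / m0            # events per year

rate = annual_event_rate(area_km2=300, slip_rate_mm_yr=1.0, mw=6.5)
print(f"~{rate:.4f} events/yr, i.e. recurrence of ~{1 / rate:.0f} yr")
```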
NASA Astrophysics Data System (ADS)
Brandsdottir, B.; Magnusdottir, S.; Karson, J. A.; Detrick, R. S.; Driscoll, N. W.
2015-12-01
The multi-branched plate boundary across Iceland is made up of divergent and oblique rifts and transform zones, characterized by entwined extensional and transform tectonics. The Tjörnes Fracture Zone (TFZ), located on the coast and offshore of Northern Iceland, is a complex transform linking the northern volcanic zone (NVZ) on land with the Kolbeinsey Ridge offshore. Extension across the TFZ is partitioned across three N-S trending rift basins, Eyjafjarðaráll, Skjálfandadjúp (SB) and Öxarfjörður, and three WNW-NW oriented seismic lineaments: the Grímsey Oblique Rift, the Húsavík-Flatey Faults (HFFs) and the Dalvík Lineament. We compile the tectonic framework of the TFZ ridge-transform from aerial photos, satellite images, multibeam bathymetry and high-resolution seismic reflection data (Chirp). The rift basins are made up of normal faults with vertical displacements of up to 50-60 m and post-glacial sediments of variable thickness. The SB comprises N5°W obliquely trending, eastward-dipping normal faults as well as N10°E striking, westward-dipping faults oriented roughly perpendicular to the N104°E spreading direction, indicative of early stages of rifting. Correlation of Chirp reflection data and tephrochronology from a sediment core within the SB reveals major rifting episodes between 10-12.1 kyr BP activating the whole basin, followed by smaller-scale fault movements throughout the Holocene. Onshore faults have the same orientations as those mapped offshore and provide a basis for the interpretation of the kinematics of the faults throughout the region. These include transform-parallel right-lateral strike-slip faults separating domains dominated by spreading-parallel left-lateral bookshelf faults. Shearing is most prominent along the HFFs, a system of right-lateral strike-slip faults with vertical displacement of up to 15 m. Vertical fault movements reflect increased tectonic activity during early postglacial time, coinciding with isostatic rebound enhancing volcanism within Iceland.
Personius, Stephen; Briggs, Richard; Maharrey, J. Zebulon; Angster, Stephen J.; Mahan, Shannon
2017-01-01
We use new and existing data to compile a record of ∼18 latest Quaternary large-magnitude surface-rupturing earthquakes on 7 fault zones in the northwestern Basin and Range Province of northwestern Nevada and northeastern California. The most recent earthquake on all faults postdates the ca. 18–15 ka last glacial highstand of pluvial Lake Lahontan and other pluvial lakes in the region. These lacustrine data provide a window in which we calculate latest Quaternary vertical slip rates and compare them with rates of modern deformation in a global positioning system (GPS) transect spanning the region. Average vertical slip rates on these fault zones range from 0.1 to 0.8 mm/yr and total ∼2 mm/yr across a 265-km-wide transect from near Paradise Valley, Nevada, to the Warner Mountains in California. We converted vertical slip rates to horizontal extension rates using fault dips of 30°–60°, and then compared the extension rates to GPS-derived rates of modern (last 7–9 yr) deformation. Our preferred fault dip values (45°–55°) yield estimated long-term extension rates (1.3–1.9 mm/yr) that underestimate our modern rate (2.4 mm/yr) by ∼21%–46%. The most likely sources of this underestimate are geologically unrecognizable deformation from moderate-sized earthquakes and unaccounted-for coseismic off-fault deformation from large surface-rupturing earthquakes. However, fault dip values of ≤40° yield long-term rates comparable to or greater than modern rates, so an alternative explanation is that fault dips are closer to 40° than our preferred values. We speculate that the large component of right-lateral shear apparent in the GPS signal is partitioned on faults with primary strike-slip displacement, such as the Long Valley fault zone, and as not easily detected oblique slip on favorably oriented normal faults in the region.
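The dip-dependent conversion used above can be checked directly: for a planar normal fault, horizontal extension rate = vertical slip rate / tan(dip). A quick computation with the transect-total vertical rate of ~2 mm/yr reproduces the quoted ranges:

```python
import math

total_vertical_mm_yr = 2.0             # transect-total vertical slip rate
for dip in (30, 40, 45, 55, 60):
    ext = total_vertical_mm_yr / math.tan(math.radians(dip))
    print(f"dip {dip:2d} deg -> extension {ext:.1f} mm/yr")
# Preferred dips of 45-55 deg give ~1.4-2.0 mm/yr, close to the 1.3-1.9 mm/yr
# quoted above and below the modern 2.4 mm/yr; dips of <= 40 deg give rates
# comparable to or greater than the modern rate, matching the alternative
# explanation offered in the abstract.
```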
A fault is born: The Landers-Mojave earthquake line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nur, A.; Ron, H.
1993-04-01
The epicenter and the southern portion of the 1992 Landers earthquake fell on an approximately N-S earthquake line, defined both by epicentral locations and by the rupture directions of four previous M>5 earthquakes in the Mojave: the 1947 Manix, 1975 Galway Lake, 1979 Homestead Valley, and 1992 Joshua Tree events. Another M 5.2 earthquake epicenter in 1965 fell on this line where it intersects the Calico fault. In contrast, the northern part of the Landers rupture followed the NW-SE trending Camp Rock and parallel faults, exhibiting an apparently unusual rupture kink. The block tectonic model (Ron et al., 1984), combining fault kinematics and mechanics, explains both the alignment of the events and their ruptures (Nur et al., 1986, 1989), as well as the Landers kink (Nur et al., 1992). Accordingly, the now NW-oriented faults have rotated into their present direction, away from the direction of maximum shortening, and are close to becoming locked, whereas a new fault set, optimally oriented relative to the direction of shortening, is developing to accommodate current crustal deformation. The Mojave-Landers line may thus be a new fault in formation. During the transition of faulting from the old, well-developed and weak but poorly oriented faults to the strong but favorably oriented new ones, both can slip simultaneously, giving rise to kinks such as at Landers.
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for the fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
An Application of the Geo-Semantic Micro-services in Seamless Data-Model Integration
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Liu, R.; Hu, Y.; Marini, L.; Peckham, S. D.; Hsu, L.
2016-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to the AE data in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprised of fault blocks surrounding fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774; Rouet-LeDuc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field; Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
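A minimal sketch, assuming scikit-learn, of the decision-tree pipeline the abstract describes: statistical features of windowed AE data are regressed against recorded shear stress, trained on early windows and tested on unseen ones. The feature choices, window length, and synthetic stand-in data are illustrative, not the authors' exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ae_features(window):
    """Simple statistics of one AE window (variance, peak, high quantile)."""
    return [window.var(), np.abs(window).max(), np.percentile(np.abs(window), 90)]

rng = np.random.default_rng(0)
ae = rng.normal(size=100_000)      # stand-in for the continuous AE record
stress = rng.normal(size=100)      # stand-in for the recorded shear stress
X = np.array([ae_features(w) for w in np.split(ae, 100)])  # 100 windows

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:70], stress[:70])     # train on early windows...
pred = model.predict(X[70:])       # ...test on windows the model never saw
print("importances:", model.feature_importances_)  # which AE statistics matter
```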
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.
Risk Analysis Methods for Deepwater Port Oil Transfer Systems
DOT National Transportation Integrated Search
1976-06-01
This report deals with the risk analysis methodology for oil spills from the oil transfer systems in deepwater ports. Failure mode and effect analysis in combination with fault tree analysis are identified as the methods best suited for the assessment...
A-Priori Rupture Models for Northern California Type-A Faults
Wills, Chris J.; Weldon, Ray J.; Field, Edward H.
2008-01-01
This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003, referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon, and Concord-Green Valley) be modeled as Type B faults, to be consistent with similarly poorly known faults statewide. Consequently, the modified segmented models discussed here concern only the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive level of effort by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-priori models. We have modified the WGCEP-2002 models where necessary to match data that were not available or not used by that WGCEP, and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This appendix presents the minor changes needed to accommodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters, and earthquake probabilities in the WGCEP-2002 report.
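A minimal sketch of the "closest moment-balanced model" idea described above: find non-negative rupture rates nearest, in the least squares sense, to the a-priori rates while matching section slip rates. The G matrix (average slip of each rupture on each section) and all numbers are illustrative, not WGCEP values; the heavy weight is one simple way to make the slip-rate fit dominate.

```python
import numpy as np
from scipy.optimize import lsq_linear

x_apriori = np.array([0.004, 0.002, 0.001])   # rates of 3 candidate ruptures (/yr)
G = np.array([[1.5, 2.0, 3.0],                # slip per event (m), per section
              [0.0, 2.0, 3.0]])
slip_rate = np.array([0.009, 0.007])          # target section slip rates (m/yr)

w = 100.0  # weight forcing the slip-rate (moment balance) rows to dominate
A = np.vstack([w * G, np.eye(3)])             # stacked: constraints + closeness
b = np.concatenate([w * slip_rate, x_apriori])
res = lsq_linear(A, b, bounds=(0, np.inf))    # rates must be non-negative
print("balanced rupture rates:", res.x)
```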
Seismic Hazard Implication of the Seismotectonics of southern Africa
NASA Astrophysics Data System (ADS)
Midzi, Vunganai; Mulabisana, Thifelimbilu; Manzunzu, Brassnavy
2014-05-01
The work presented in this report / presentation was prepared as part of the requirements for the SIDA/IGCP Project 601 titled "Seismotectonics and Seismic Hazards in Africa" as well as part of the seismic source characterisation of the GEM-Africa Seismic hazard study. An effort was made to compile information necessary to prepare a seismotectonic map of Africa which can then be used in carrying out a seismic hazard assessment of the continent or locations within the continent. Information on major faults, fault plane solutions, geophysical data as well as stress data has so far been collected and included in a database for the southern Africa region. Reports published by several experts contributed much to the collected information. The seismicity data used are part of the earthquake catalogue being prepared for the GEM-Africa project, which includes historical and instrumental records as collected from various sources. An effort has been made to characterise the identified major faults and through further analysis investigate their possible impact on the seismic hazard of southern Africa.
Applicability of ERTS-1 to Montana geology
NASA Technical Reports Server (NTRS)
Weidman, R. M. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Rapid construction of a lineament map for western Montana, drawn as an overlay to a late August band 7 mosaic at a scale of 1:1,000,000 indicates ERTS-1 imagery to be very suitable for quick compilation of topographically expressed lineaments representing scarps and straight canyons. Over 100 such lineaments were detected, ranging in length from 80 down to 5 miles. Most of the major high angle faults of the area are represented, but low angle faults such as the Lewis overthrust are not apparent. Short and medium length lineaments of northeast trend are abundant southeast of a line connecting Missoula and Great Falls. Only about half of the lineaments are shown on the state geologic map, and limited comparisons with more detailed maps suggest that many will merit investigation as possible faults. It is already apparent that ERTS-1 imagery will be useful in construction of a needed tectonic map of Montana.
NASA Astrophysics Data System (ADS)
Tian, Y.; Vermeesch, P.; Carter, A.; Zhang, P.
2017-12-01
Cenozoic deformation of the Tibetan Plateau has been dominated by the north-south collision between the Indian and Eurasian continents since Early Cenozoic time. Numerous lines of evidence suggest that the plateau has expanded outward after the collision, forming a diverging stress regime from the collisional belt to the plateau margins. When and how the expansional strain propagated to the current plateau margins has been hotly debated. This work presents results of an on-going project for understanding the long-term strain history along the Longmen Shan, the eastern margin of the Tibetan Plateau, where deformation is controlled by three parallel NW-dipping faults. From foreland (southeast) to hinterland (northwest), the main faults are the Guanxian-Anxian fault, Yingxiu-Beichuan fault and Wenchuan-Maowen fault. The exhumation pattern constrained by one-dimensional modelling of a compilation of published and unpublished thermochronometry data shows a strong structural control, with the highest amounts of exhumation in the hinterland region, a pattern that is characteristic of out-of-sequence reverse faulting (Tian et al., 2013, Tectonics, doi:10.1002/tect.20043; Tian et al., 2015, Geophys. Res. Lett., doi:10.1002/2014GL062383). Three-dimensional thermo-kinematic modelling of these data suggests that the Longmen Shan faults are listric in geometry, merging into a detachment at a depth of 20-30 km. The models require a marked decrease in slip-rate along the frontal Yingxiu-Beichuan fault in the late Miocene, whereas the slip-rate along the hinterland Wenchuan-Maowen fault has remained relatively constant since early Miocene time. The long-term pattern of strain accommodation revealed by the three-dimensional thermo-kinematic modelling has important implications for distinguishing among geodynamic models proposed to explain the eastward growth of the Tibetan Plateau.
Spectral element modelling of fault-plane reflections arising from fluid pressure distributions
Haney, M.; Snieder, R.; Ampuero, J.-P.; Hofmann, R.
2007-01-01
The presence of fault-plane reflections in seismic images, besides indicating the locations of faults, offers a possible source of information on the properties of these poorly understood zones. To better understand the physical mechanism giving rise to fault-plane reflections in compacting sedimentary basins, we numerically model the full elastic wavefield via the spectral element method (SEM) for several different fault models. Using well log data from the South Eugene Island field, offshore Louisiana, we derive empirical relationships between the elastic parameters (e.g. P-wave velocity and density) and the effective stress along both normal compaction and unloading paths. These empirical relationships guide the numerical modelling and allow the investigation of how differences in fluid pressure modify the elastic wavefield. We choose to simulate the elastic wave equation via SEM since irregular model geometries can be accommodated and slip boundary conditions at an interface, such as a fault or fracture, are implemented naturally. The method we employ for including a slip interface retains the desirable qualities of SEM in that it is explicit in time and, therefore, does not require the inversion of a large matrix. We perform a complete numerical study by forward modelling seismic shot gathers over a faulted earth model using SEM, followed by seismic processing of the simulated data. With this procedure, we construct post-stack time-migrated images of the kind that are routinely interpreted in the seismic exploration industry. We dip filter the seismic images to highlight the fault-plane reflections prior to making amplitude maps along the fault plane. With these amplitude maps, we compare the reflectivity from the different fault models to diagnose which physical mechanism contributes most to observed fault reflectivity. To lend physical meaning to the properties of a locally weak fault zone characterized as a slip interface, we propose an equivalent-layer model under the assumption of weak scattering. This allows us to use the empirical relationships between density, velocity and effective stress from the South Eugene Island field to relate a slip interface to an amount of excess pore pressure in a fault zone. © 2007 The Authors, Journal compilation © 2007 RAS.
FAIL-SAFE: Fault Aware IntelLigent Software for Exascale
2016-06-13
and that these programs can continue to correct solutions. To broaden the impact of this research, we also needed to be able to ameliorate errors... designing an interface between the application and an introspection framework for resilience (IFR) based on the inference engine SHINE; (4) using... the ROSE compiler to translate annotations into reasoning rules for the IFR; and (5) designing a Knowledge/Experience Database, which will store
Slate, Janet L.; Berry, Margaret E.; Rowley, Peter D.; Fridrich, Christopher J.; Morgan, Karen S.; Workman, Jeremiah B.; Young, Owen D.; Dixon, Gary L.; Williams, Van S.; McKee, Edwin H.; Ponce, David A.; Hildenbrand, Thomas G.; Swadley, W.C.; Lundstrom, Scott C.; Ekren, E. Bartlett; Warren, Richard G.; Cole, James C.; Fleck, Robert J.; Lanphere, Marvin A.; Sawyer, David A.; Minor, Scott A.; Grunwald, Daniel J.; Laczniak, Randell J.; Menges, Christopher M.; Yount, James C.; Jayko, Angela S.
1999-01-01
This digital geologic map of the Nevada Test Site (NTS) and vicinity, as well as its accompanying digital geophysical maps, are compiled at 1:100,000 scale. The map compilation presents new polygon (geologic map unit contacts), line (fault, fold axis, metamorphic isograd, dike, and caldera wall) and point (structural attitude) vector data for the NTS and vicinity, Nye, Lincoln, and Clark Counties, Nevada, and Inyo County, California. The map area covers two 30 x 60-minute quadrangles (the Pahute Mesa quadrangle to the north and the Beatty quadrangle to the south) plus a strip of 7.5-minute quadrangles on the east side, 72 quadrangles in all. In addition to the NTS, the map area includes the rest of the southwest Nevada volcanic field, part of the Walker Lane, most of the Amargosa Desert, part of the Funeral and Grapevine Mountains, some of Death Valley, and the northern Spring Mountains. This geologic map improves on previous geologic mapping of the same area (Wahl and others, 1997) by providing new and updated Quaternary and bedrock geology, new geophysical interpretations of faults beneath the basins, and improved GIS coverages. Concurrent publications to this one include a new isostatic gravity map (Ponce and others, 1999) and a new aeromagnetic map (Ponce, 1999).
Multi-interferogram method for measuring interseismic deformation: Denali Fault, Alaska
Biggs, Juliet; Wright, Tim; Lu, Zhong; Parsons, Barry
2007-01-01
Studies of interseismic strain accumulation are crucial to our understanding of continental deformation, the earthquake cycle and seismic hazard. By mapping small amounts of ground deformation over large spatial areas, InSAR has the potential to produce continental-scale maps of strain accumulation on active faults. However, most InSAR studies to date have focused on areas where the coherence is relatively good (e.g. California, Tibet and Turkey) and most analysis techniques (stacking, small baseline subset algorithm, permanent scatterers, etc.) only include information from pixels which are coherent throughout the time-span of the study. In some areas, such as Alaska, where the deformation rate is small and coherence very variable, it is necessary to include information from pixels which are coherent in some but not all interferograms. We use a three-stage iterative algorithm based on distributed scatterer interferometry. We validate our method using synthetic data created using realistic parameters from a test site on the Denali Fault, Alaska, and present a preliminary result of 10.5 ± 5.0 mm/yr for the slip rate on the Denali Fault based on a single track of radar data from ERS1/2. © 2007 The Authors, Journal compilation © 2007 RAS.
Methodology for Designing Fault-Protection Software
NASA Technical Reports Server (NTRS)
Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin
2006-01-01
A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery; and has been successfully implemented in the Deep Impact Spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notion of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion, categorized as no-opinion, acceptable, or unacceptable. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of a Symptom and its mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized step-by-step fashion, relegating more system-level response to later tier(s). Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic, logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a fault-tree analysis "top-down" approach, and a functional fault-mode-effects analysis via a "bottom-up" approach. Via this process, the mitigation and recovery strategies per Fault Containment Region scope (in width versus depth) the FP architecture.
NASA Astrophysics Data System (ADS)
Yoshimi, M.; Matsushima, S.; Ando, R.; Miyake, H.; Imanishi, K.; Hayashida, T.; Takenaka, H.; Suzuki, H.; Matsuyama, H.
2017-12-01
We conducted strong ground motion prediction for the active Beppu-Haneyama Fault zone (BHFZ), Kyushu island, southwestern Japan. Since the BHFZ runs through Oita and Beppu cities, strong ground motion as well as fault displacement may severely affect the cities. We constructed a 3-dimensional velocity structure of a sedimentary basin, the Beppu bay basin, where the fault zone runs through and Oita and Beppu cities are located. The minimum shear wave velocity of the 3-d model is 500 m/s. An additional 1-d structure is modeled for sites with softer sediment: the Holocene plain area. We observed, collected, and compiled data obtained from microtremor surveys, ground motion observations, boreholes, etc., including phase velocities and H/V ratios. A finer structure of the Oita Plain is modeled, as a 250-m-mesh model, with an empirical relation among N-value, lithology, depth and Vs, using borehole data, and then validated with the phase velocity data obtained by the dense microtremor array observation (Yoshimi et al., 2016). Synthetic ground motion has been calculated with a hybrid technique composed of a stochastic Green's function method (for the HF wave), a 3D finite difference method (LF wave) and 1D amplification calculation. Fault geometry has been determined based on reflection surveys and the active fault map. The rake angles are calculated with a dynamic rupture simulation considering three fault segments under a stress field estimated from source mechanisms of earthquakes around the faults (Ando et al., JpGU-AGU2017). Fault parameters such as the average stress drop, the size of asperities, etc. are determined based on an empirical relation proposed by Irikura and Miyake (2001). As a result, strong ground motion exceeding 100 cm/s is predicted on the hanging wall side of the Oita plain. This work is supported by the Comprehensive Research on the Beppu-Haneyama Fault Zone funded by the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.
Map of the Rinconada and Reliz Fault Zones, Salinas River Valley, California
Rosenberg, Lewis I.; Clark, Joseph C.
2009-01-01
The Rinconada Fault and its related faults constitute a major structural element of the Salinas River valley, which is known regionally, and referred to herein, as the 'Salinas Valley'. The Rinconada Fault extends 230 km from King City in the north to the Big Pine Fault in the south. At the south end of the map area near Santa Margarita, the Rinconada Fault separates granitic and metamorphic crystalline rocks of the Salinian Block to the northeast from the subduction-zone assemblage of the Franciscan Complex to the southwest. Northwestward, the Rinconada Fault lies entirely within the Salinian Block and generally divides this region into two physiographically and structurally distinct areas, the Santa Lucia Range to the west and the Salinas Valley to the east. The Reliz Fault, which continues as a right stepover from the Rinconada Fault, trends northwestward along the northeastern base of the Sierra de Salinas of the Santa Lucia Range and beyond for 60 km to the vicinity of Spreckels, where it is largely concealed. Aeromagnetic data suggest that the Reliz Fault continues northwestward another 25 km into Monterey Bay, where it aligns with a high-definition magnetic boundary. Geomorphic evidence of late Quaternary movement along the Rinconada and Reliz Fault Zones has been documented by Tinsley (1975), Dibblee (1976, 1979), Hart (1976, 1985), and Klaus (1999). Although definitive geologic evidence of Holocene surface rupture has not been found on these faults, they were regarded as an earthquake source for the California Geological Survey [formerly, California Division of Mines and Geology]/U.S. Geological Survey (CGS/USGS) Probabilistic Seismic Hazards Assessment because of their postulated slip rate of 1±1 mm/yr and their calculated maximum magnitude of 7.3. Except for published reports by Durham (1965, 1974), Dibblee (1976), and Hart (1976), most information on these faults is unpublished or is contained in theses, field trip guides, and other types of reports. Therefore, the main purpose of this project is to compile and synthesize this body of knowledge into a comprehensive report for the geologic community. This report follows the format of Dibblee (1976) and includes discussions of the sections of the Rinconada Fault and of the Reliz Fault, as well as their Neogene history and key localities. Accompanying this report is a geologic map database of the faults, key localities, and earthquake epicenters, in ESRI shapefile format.
Automated Generation of Fault Management Artifacts from a Simple System Model
NASA Technical Reports Server (NTRS)
Kennedy, Andrew K.; Day, John C.
2013-01-01
Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), by querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured into an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
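A minimal sketch of the traversal idea: walk component/failure-mode relationships in a system representation and emit one FMEA row per failure mode, with downstream effects found by following fault-propagation edges. The dict-based model, component names, and CSV output are hypothetical stand-ins, not the authors' SysML tooling.

```python
import csv

# Hypothetical system model: each component lists its failure modes and
# which components its failure propagates to ("affects" edges).
system = {
    "battery": {"modes": ["cell short"], "affects": ["power_bus"]},
    "power_bus": {"modes": ["undervoltage"], "affects": ["flight_computer"]},
    "flight_computer": {"modes": ["reset loop"], "affects": []},
}

def effects(component, seen=None):
    """Follow 'affects' edges transitively to list downstream components."""
    seen = seen or set()
    for nxt in system[component]["affects"]:
        if nxt not in seen:
            seen.add(nxt)
            effects(nxt, seen)
    return sorted(seen)

with open("fmea.csv", "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["Component", "Failure mode", "Downstream effects"])
    for comp, info in system.items():
        for mode in info["modes"]:
            out.writerow([comp, mode, "; ".join(effects(comp)) or "none"])
```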
Naive Bayes Bearing Fault Diagnosis Based on Enhanced Independence of Data
Zhang, Nannan; Wu, Lifeng; Yang, Jing; Guan, Yong
2018-01-01
The bearing is the key component of rotating machinery, and its performance directly determines the reliability and safety of the system. Data-based bearing fault diagnosis has become a research hotspot. Naive Bayes (NB), which is based on an independence presumption, is widely used in fault diagnosis. However, bearing data are not completely independent, which reduces the performance of NB algorithms. In order to solve this problem, we propose an NB bearing fault diagnosis method based on enhanced independence of the data. The method treats the data vector from two aspects: the attribute features and the sample dimension. After this processing, the limitation that the independence hypothesis places on NB classification is reduced. First, we effectively extract the statistical characteristics of the original bearing signals. Then, the Decision Tree algorithm is used to select the important features of the time domain signal, and low-correlation features are selected. Next, the Selective Support Vector Machine (SSVM) is used to prune the dimension data and remove redundant vectors. Finally, we use NB to diagnose the fault with the low-correlation data. The experimental results show that the independence enhancement of the data is effective for bearing fault diagnosis. PMID:29401730
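A minimal sketch, assuming scikit-learn, of the two-stage idea: rank time-domain features with a decision tree, keep the most important ones, then classify bearing condition with Naive Bayes. The data are synthetic stand-ins, and the correlation filter and SSVM sample-pruning steps are omitted for brevity.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))        # 10 statistical features per sample
y = rng.integers(0, 3, size=300)      # 3 fault classes (stand-in labels)

# Stage 1: decision tree ranks features; keep the 4 most important.
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
top = np.argsort(tree.feature_importances_)[-4:]

# Stage 2: Naive Bayes on the reduced feature set.
nb = GaussianNB()
print("CV accuracy:", cross_val_score(nb, X[:, top], y, cv=5).mean())
```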
NASA Astrophysics Data System (ADS)
Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu
2016-01-01
This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS) and an improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of a time series over a range of scales based on fuzzy entropy. In addition, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically fulfill the fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize different categories and severities of rolling bearing faults.
Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T
2018-03-05
Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
NASA Astrophysics Data System (ADS)
Krechowicz, Maria
2017-10-01
Nowadays, one of the characteristic features of the construction industry is the increased complexity of a growing number of projects. Almost every construction project is unique, with its project-specific purpose, its own structural complexity, owner's expectations, ground conditions unique to a certain location, and its own dynamics. Failure costs and costs resulting from unforeseen problems in complex construction projects are very high. Project complexity drivers pose many vulnerabilities to the successful completion of a number of projects. This paper discusses the process of effective risk management in complex construction projects in which renewable energy sources were used, on the example of the realization phase of the ENERGIS teaching-laboratory building, from the point of view of DORBUD S.A., its general contractor. This paper suggests a new approach to risk management for complex construction projects in which renewable energy sources were applied. The risk management process was divided into six stages: gathering information, identification of the top critical project risks resulting from the project complexity, construction of a fault tree for each top critical risk, logical analysis of the fault tree, quantitative risk assessment applying fuzzy logic, and development of a risk response strategy. A new methodology for the qualitative and quantitative assessment of top critical risks in complex construction projects was developed. Risk assessment was carried out applying Fuzzy Fault Tree analysis on the example of one top critical risk. Application of fuzzy set theory to the proposed model allowed uncertainty to be decreased and eliminated the problem of obtaining crisp values for basic-event probabilities, which is common during expert risk assessment aiming at an exact risk score for the probability of each unwanted event.
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment failures, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed for system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the reliability problem was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been used in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
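A minimal sketch of the two solution techniques named above: top-event probability from minimal cut sets (with the usual independence assumption) versus direct Monte Carlo sampling of the basic events. The event names, probabilities, and cut sets are illustrative, not the Tehran West Town plant values.

```python
import numpy as np

p = {"operator_error": 0.05, "aeration_failure": 0.02, "clarifier_damage": 0.01}
cut_sets = [("operator_error",), ("aeration_failure", "clarifier_damage")]

def p_top_cutsets(p, cut_sets):
    """P(top) = 1 - prod(1 - P(cut set)), treating cut sets as independent."""
    q = 1.0
    for cs in cut_sets:
        q *= 1.0 - np.prod([p[e] for e in cs])
    return 1.0 - q

def p_top_monte_carlo(p, cut_sets, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    draws = {e: rng.random(n) < pe for e, pe in p.items()}  # sample basic events
    top = np.zeros(n, dtype=bool)
    for cs in cut_sets:           # top event fires if any cut set fully occurs
        hit = np.ones(n, dtype=bool)
        for e in cs:
            hit &= draws[e]
        top |= hit
    return top.mean()

print(p_top_cutsets(p, cut_sets), p_top_monte_carlo(p, cut_sets))
```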
Jetter, J J; Forte, R; Rubenstein, R
2001-02-01
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servicing. The number of refrigerant exposures of service technicians was estimated to be 135,000 per year. Exposures of vehicle occupants can occur when refrigerant enters passenger compartments due to sudden leaks in air-conditioning systems, leaks following servicing, or leaks caused by collisions. The total number of exposures of vehicle occupants was estimated to be 3,600 per year. The largest number of exposures of vehicle occupants was estimated for leaks caused by collisions, and the second largest number of exposures was estimated for leaks following servicing. Estimates used in the fault tree analysis were based on a survey of automotive air-conditioning service shops, the best available data from the literature, and the engineering judgement of the authors and expert reviewers from the Society of Automotive Engineers Interior Climate Control Standards Committee. Exposure concentrations and durations were estimated and compared with toxicity data for refrigerants currently used in automotive air conditioners. Uncertainty was high for the estimated numbers of exposures, exposure concentrations, and exposure durations. Uncertainty could be reduced in the future by conducting more extensive surveys, measurements of refrigerant concentrations, and exposure monitoring. Nevertheless, the analysis indicated that the risk of exposure of service technicians and vehicle occupants is significant, and it is recommended that no refrigerant that is substantially more toxic than currently available substitutes be accepted for use in vehicle air-conditioning systems, absent a means of mitigating exposure.
Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof
2009-04-01
Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
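A minimal sketch of Customer Minutes Lost as a risk measure, as used above: each failure scenario contributes probability-weighted downtime scaled by the fraction of customers affected. The scenarios and numbers are illustrative only, not values from the Swedish case study.

```python
scenarios = [
    # (name, annual probability, downtime in minutes, fraction of customers hit)
    ("raw water pollution", 0.10, 2880.0, 1.00),
    ("treatment quantity failure", 0.50, 240.0, 0.30),
    ("distribution pipe burst", 2.00, 120.0, 0.05),
]

# Expected CML per customer and year, and each scenario's contribution.
cml = sum(p * minutes * frac for _, p, minutes, frac in scenarios)
print(f"expected CML: {cml:.0f} minutes per customer and year")
for name, p, minutes, frac in scenarios:
    print(f"  {name}: {p * minutes * frac:.0f}")
```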
Checkpointing Shared Memory Programs at the Application-level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Schulz, M; Szwed, P
2004-09-08
Trends in high-performance computing are making it necessary for long-running applications to tolerate hardware faults. The most commonly used approach is checkpoint and restart (CPR): the state of the computation is saved periodically on disk, and when a failure occurs, the computation is restarted from the last saved state. At present, it is the responsibility of the programmer to instrument applications for CPR. Our group is investigating the use of compiler technology to instrument codes to make them self-checkpointing and self-restarting, thereby providing an automatic solution to the problem of making long-running scientific applications resilient to hardware faults. Our previous work focused on message-passing programs. In this paper, we describe such a system for shared-memory programs running on symmetric multiprocessors. The system has two components: (i) a pre-compiler for source-to-source modification of applications, and (ii) a runtime system that implements a protocol for coordinating CPR among the threads of the parallel application. For the sake of concreteness, we focus on a non-trivial subset of OpenMP that includes barriers and locks. One of the advantages of this approach is that the ability to tolerate faults becomes embedded within the application itself, so applications become self-checkpointing and self-restarting on any platform. We demonstrate this by showing that our transformed benchmarks can checkpoint and restart on three different platforms (Windows/x86, Linux/x86, and Tru64/Alpha). Our experiments show that the overhead introduced by this approach is usually quite small; they also suggest ways in which the current implementation can be tuned to reduce overheads further.
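A minimal sketch of application-level checkpoint/restart for a long-running loop: state is saved periodically, and on restart the run resumes from the last checkpoint. This illustrates only the self-checkpointing idea; the paper's system instruments C/OpenMP codes automatically via a pre-compiler, which is not shown here.

```python
import os
import pickle

CKPT = "state.ckpt"

def load_state():
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)        # resume from the last saved state
    return {"i": 0, "acc": 0.0}          # fresh start

state = load_state()
for i in range(state["i"], 10_000_000):
    state["acc"] += i * 1e-9             # the "long-running" computation
    if i % 1_000_000 == 0:               # periodic checkpoint
        state["i"] = i + 1               # next iteration to run after restart
        with open(CKPT + ".tmp", "wb") as f:
            pickle.dump(state, f)
        os.replace(CKPT + ".tmp", CKPT)  # atomic rename: no torn checkpoints
print(state["acc"])
```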
Naghibi, Seyed Amir; Pourghasemi, Hamid Reza; Dixon, Barnali
2016-01-01
Groundwater is considered one of the most valuable fresh water resources. The main objective of this study was to produce groundwater spring potential maps in the Koohrang Watershed, Chaharmahal-e-Bakhtiari Province, Iran, using three machine learning models: boosted regression tree (BRT), classification and regression tree (CART), and random forest (RF). Thirteen hydrological-geological-physiographical (HGP) factors that influence locations of springs were considered in this research. These factors include slope degree, slope aspect, altitude, topographic wetness index (TWI), slope length (LS), plan curvature, profile curvature, distance to rivers, distance to faults, lithology, land use, drainage density, and fault density. Subsequently, groundwater spring potential was modeled and mapped using CART, RF, and BRT algorithms. The predicted results from the three models were validated using the receiver operating characteristics curve (ROC). From 864 springs identified, 605 (≈70 %) locations were used for the spring potential mapping, while the remaining 259 (≈30 %) springs were used for the model validation. The area under the curve (AUC) for the BRT model was calculated as 0.8103 and for CART and RF the AUC were 0.7870 and 0.7119, respectively. Therefore, it was concluded that the BRT model produced the best prediction results while predicting locations of springs followed by CART and RF models, respectively. Geospatially integrated BRT, CART, and RF methods proved to be useful in generating the spring potential map (SPM) with reasonable accuracy.
Caine, Jonathan S.; Nelson, E.P.; Beach, S.T.; Layer, P.W.
2006-01-01
The Idaho Springs and Central City mining districts form the central portion of a structurally controlled hydrothermal precious- and base-metal vein system in the Front Range of the northeast-trending Colorado Mineral Belt. Three new 40Ar/39Ar plateau ages on hydrothermal sericite indicate the veins formed during the Laramide orogeny between 65.4±1.5 and 61.9±1.3 Ma. We compile structural geologic data from surface geological maps, subsurface mine maps, and theses for analysis using modern graphical methods and integration into models of formation of economic mineral deposits. Structural data sets, produced in the 1950s and 1960s by the U.S. Geological Survey, are compiled for fabric elements, including metamorphic foliations, fold axial trends, major brittle fault zones, quartz and precious- and base-metal veins and fault veins, Tertiary dikes, and joints. These fabric elements are plotted on equal-area projections and analyzed for mean fabric orientations. Strike-slip fault-vein sets are mostly parallel or sub-parallel, and not conjugate as interpreted by previous work; late-stage, normal-slip fault veins possibly show a pattern indicative of triaxial strain. Fault-slip kinematic analysis was used to model the trend of the Laramide maximum horizontal stress axis, or compression direction, and to determine compatibility of opening and shear motions within a single stress field. The combined-model maximum compression direction for all strike-slip fault veins is ~068°, which is consistent with published Laramide compression directions of ~064° (mean of 23 regional models) and ~072° for the Front Range uplift. The orientations of fabric elements were analyzed for mechanical and kinematic compatibility with opening, and thus permeability enhancement, in the modeled regional east-northeast, Laramide compression direction. The fabric orientation analysis and paleostress modeling show that structural permeability during mineralization was enhanced along pre-existing metamorphic foliations and fold axial planes. Large orientation dispersion in most fabric elements likely caused myriad potential pathways for permeability. The dominant orientations of opening and shear mode structures are consistent with a sub-parallel network of structures that formed in the Laramide east-northeast compression direction. The results presented demonstrate the importance of using mechanical and kinematic theory integrated with contemporary ideas of permeability structure to better understand the coupled nature of fluid flow, mineral deposition, stress, and strain. Further, the results demonstrate that there is significant internal strain within this basement-cored uplift that was localized by optimally oriented pre-existing structures in a regional stress field.
PCG: A prototype incremental compilation facility for the SAGA environment, appendix F
NASA Technical Reports Server (NTRS)
Kimball, Joseph John
1985-01-01
A programming environment supports the activity of developing and maintaining software. New environments provide language-oriented tools such as syntax-directed editors, whose usefulness is enhanced because they embody language-specific knowledge. When syntactic and semantic analysis occur early in the cycle of program production, that is, during editing, the use of a standard compiler is inefficient, for it must re-analyze the program before generating code. Likewise, it is inefficient to recompile an entire file, when the editor can determine that only portions of it need updating. The pcg, or Pascal code generation, facility described here generates code directly from the syntax trees produced by the SAGA syntax directed Pascal editor. By preserving the intermediate code used in the previous compilation, it can limit recompilation to the routines actually modified by editing.
Madrigal-González, Jaime; Ruiz-Benito, Paloma; Ratcliffe, Sophia; Calatayud, Joaquín; Kändler, Gerald; Lehtonen, Aleksi; Dahlgren, Jonas; Wirth, Christian; Zavala, Miguel A.
2016-01-01
Neglecting tree size and stand structure dynamics might bias the interpretation of the diversity-productivity relationship in forests. Here we show evidence that complementarity is contingent on tree size across large-scale climatic gradients in Europe. We compiled growth data of the 14 most dominant tree species in 32,628 permanent plots covering boreal, temperate and Mediterranean forest biomes. Niche complementarity is expected to result in significant growth increments of trees surrounded by a larger proportion of functionally dissimilar neighbours. Functional dissimilarity at the tree level was assessed using four functional types: i.e. broad-leaved deciduous, broad-leaved evergreen, needle-leaved deciduous and needle-leaved evergreen. Using Linear Mixed Models we show that complementarity effects depend on tree size along an energy availability gradient across Europe. Specifically: (i) complementarity effects at low and intermediate positions of the gradient (coldest-temperate areas) were stronger for small than for large trees; (ii) in contrast, at the upper end of the gradient (warmer regions), complementarity is more widespread in larger than smaller trees, which in turn showed negative growth responses to increased functional dissimilarity. Our findings suggest that the outcome of species mixing on stand productivity might critically depend on individual size distribution structure along gradients of environmental variation. PMID:27571971
Reconnaissance geologic map of the Kuskokwim Bay region, southwest Alaska
Wilson, Frederic H.; Hults, Chad P.; Mohadjer, Solmaz; Coonrad, Warren L.
2013-01-01
The rocks of the map area range from Proterozoic age metamorphic rocks of the Kanektok metamorphic complex (Kilbuck terrane) to Quaternary age mafic volcanic rocks of Nunivak Island. The map area encompasses much of the type area of the Togiak-Tikchik Complex. The geologic maps used to construct this compilation were, for the most part, reconnaissance studies done in the time period from the 1950s to 1990s. Pioneering work in the map area by J.M. Hoare and W.L. Coonrad forms the basis for much of this map, either directly or as the stepping off point for later studies compiled here. Physiographically, the map area ranges from glaciated mountains, as much as 1,500 m high, in the Ahklun Mountains to the coastal lowlands of northern Bristol Bay and the Kuskokwim River delta. The mountains and the finger lakes (drowned fiords) on the east have been strongly affected by Pleistocene and Holocene glaciation. Within the map area are a number of major faults. The Togiak-Tikchik Fault and its extension to the northeast, the Holitna Fault, are considered extensions of the Denali fault system of central Alaska. Other sub-parallel faults include the Golden Gate, Sawpit, Goodnews, and East Kulukak Faults. Northwest-trending strike-slip faults crosscut and offset northeast-trending fault systems. Rocks of the area are assigned to a number of distinctive lithologic packages. Most distinctive among these packages are the high-grade metamorphic rocks of the Kanektok metamorphic complex or Kilbuck terrane, composed of a high-grade metamorphic orthogneiss core surrounded by greenschist and amphibolite facies schist, gneiss, and rare marble and quartzite. These rocks have yielded radiometric ages strongly suggestive of a 2.05 Ga emplacement age. Poorly known Paleozoic rocks, including Ordovician to Devonian and Permian limestone, are found east of the Kanektok metamorphic complex. A Triassic(?) ophiolite complex is on the southeast side of Kuskokwim Bay; otherwise only minor Triassic rock units are known. The most widespread rocks of the area are Jurassic and Early Cretaceous(?) volcanic and volcaniclastic rocks. The Kuskokwim Group flysch is restricted largely to the northeast part of the map area. It consists primarily of shelf and minor nearshore facies rocks. Primarily exposed in the lowlands west of the Ahklun Mountains, extensive latest Tertiary and Quaternary alkalic basalt flows and lesser pyroclastic rocks form much of the bedrock of the remaining area. On Saint Matthew Island, Cretaceous volcanic and pyroclastic rocks occur that are not found elsewhere within the map area. The Kuskokwim Group and older rocks, including on Saint Matthew Island, but not the Kanektok metamorphic complex, are intruded by widely dispersed Late Cretaceous and (or) Early Tertiary granitic rocks. Much of the lowland area is mantled by unconsolidated deposits that include glacial, alluvial and fluvial, marine, estuarine, and eolian deposits. These formed during several episodes of Quaternary glaciation.
National scale biomass estimators for United States tree species
Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey
2003-01-01
Estimates of national-scale forest carbon (C) stocks and fluxes are typically based on allometric regression equations developed using dimensional analysis techniques. However, the literature is inconsistent and incomplete with respect to large-scale forest C estimation. We compiled all available diameter-based allometric regression equations for estimating total...
Philip J. Radtke; Nathan D. Herring; David L. Loftis; Chad E. Keyser
2012-01-01
Prediction accuracy for projected basal area and trees per acre was assessed for the growth and yield model of the Forest Vegetation Simulator Southern Variant (FVS-Sn). Data for comparison with FVS-Sn predictions were compiled from a collection of n
NASA Astrophysics Data System (ADS)
Armadillo, E.; Ferraccioli, F.; Balbi, P.; Bozzo, E.
2013-12-01
Terrane bounding and intra-terrane faults of the Ross Orogen in East Antarctica are linked to several phases of Cambrian to Ordovician age subduction and accretion along the active paleo-Pacific margin of Gondwana. Here we compile and analyse new enhanced aeromagnetic anomaly images over the Northern Victoria Land (NVL) segment of the Ross Orogen and the eastern margin of the Wilkes Subglacial Basin (WSB) that help constrain the extent and structural architecture of these fault systems and enable us to re-assess their tectonic evolution. Long-wavelength magnetic lows and residual Bouguer gravity highs are modelled as several-km-thick inverted sedimentary basins of early Cambrian(?) age. Tectonic inversion occurred along major thrust faults during the late stages of the Ross Orogen, forming a major high-grade pop-up structure within the central Wilson Terrane, flanked by lower grade rocks. The Prince Albert Fault System can now be recognised as being located to the west of the Exiles Thrust fault system rather than representing its southern continuation. Relatively thin sheets of mylonitic sheared granitoids and possible ultramafic lenses are associated with the late-Ross (ca 480 Ma) Exiles Thrust fault system, while significantly larger and thicker batholiths were emplaced along the Prince Albert Fault System. Recent zircon U-Pb dating over small exposures of gabbro-diorites within the Prince Albert Mountains to the south leads us to propose that this part of the magmatic arc was emplaced during an earlier phase of subduction (~520 Ma or older?), compared to the late-Ross intrusions to the east. Whether the Prince Albert Fault System was indeed a major cryptic suture in early Cambrian times (Ferraccioli et al., 2002, GRL) remains speculative, but possible. Our aeromagnetic interpretation leads us to conclude that these inherited terrane bounding and intra-terrane fault systems of the Ross Orogen exerted a key influence on Cenozoic tectonic blocks and faults of the Transantarctic Mountains, and that the eastern margin of the WSB adjacent to NVL was also strongly controlled by a complex array of major intraplate strike-slip fault systems.
Risk management of key issues of FPSO
NASA Astrophysics Data System (ADS)
Sun, Liping; Sun, Hai
2012-12-01
Risk analysis of key systems has become a growing topic of late because of the development of offshore structures. Equipment failures of the offloading system and fire accidents were analyzed based on the features of floating production, storage and offloading (FPSO) units. Fault tree analysis (FTA) and failure modes and effects analysis (FMEA) methods were examined based on information already researched on modules of Relex Reliability Studio (RRS). Equipment failures were also analyzed qualitatively by establishing a fault tree and Boolean structure function, given the shortage of failure cases and statistical data, and risk control measures were examined. Failure modes of fire accidents were classified according to the different areas of fire occurrence during the FMEA process, using risk priority number (RPN) methods to evaluate their severity rank. The qualitative analysis of FTA gave basic insight into the failure modes of FPSO offloading, and the fire FMEA gave priorities and suggested processes. The research has practical importance for the security analysis problems of FPSOs.
Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand
2018-05-09
This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), the probabilities of all the basic events (BEs) must be available when the FTA is drawn. In such cases, expert judgment can be used as an alternative to failure data when reliable data are unavailable. The fuzzy analytical hierarchy process, as a standard technique, is used to give a specific weight to each expert, and fuzzy set theory is employed for aggregating expert opinions. In this regard, the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost and benefit), the importance measurement technique and modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
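A minimal sketch of the elicitation-to-top-event chain described above: triangular fuzzy numbers from several experts are combined with AHP-style weights, defuzzified to a crisp basic-event probability, and fed through Boolean algebra for the top event. All opinions, weights, and the toy gate structure are illustrative, not the paper's case-study values.

```python
import numpy as np

# Each expert gives (low, mode, high) for one basic event's probability.
opinions = np.array([[0.01, 0.03, 0.06],
                     [0.02, 0.04, 0.08],
                     [0.01, 0.02, 0.05]])
weights = np.array([0.5, 0.3, 0.2])   # expert weights (e.g., from fuzzy AHP)

agg = weights @ opinions              # weighted triangular fuzzy number
p_be = agg.sum() / 3.0                # centroid defuzzification: (l + m + u) / 3

# Toy Boolean structure: TE = BE1 OR (BE2 AND BE3), events independent;
# the other two BE probabilities are fixed for brevity.
p2, p3 = 0.05, 0.10
p_top = 1.0 - (1.0 - p_be) * (1.0 - p2 * p3)
print(f"P(BE)={p_be:.4f}  P(TE)={p_top:.4f}")
```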
NASA Astrophysics Data System (ADS)
Guan, Yifeng; Zhao, Jie; Shi, Tengfei; Zhu, Peipei
2016-09-01
In recent years, China's increased interest in environmental protection has led to a promotion of energy-efficient dual fuel (diesel/natural gas) ships on Chinese inland rivers. Natural gas as a ship fuel poses dangers of fire and explosion if a gas leak occurs. If explosions or fires occur in the engine rooms of a ship, heavy damage and losses will be incurred. In this paper, a fault tree model is presented that considers both fires and explosions in a dual fuel ship; in this model, fire or explosion in the dual fuel engine rooms is the top event. All the basic events along with the minimum cut sets are obtained through the analysis. The primary factors that affect accidents involving fires and explosions are determined by calculating the degree of structural importance of the basic events, as illustrated in the sketch below. According to these results, corresponding measures are proposed to ensure and improve the safety and reliability of Chinese inland dual fuel ships.
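A minimal sketch of structural importance: for each basic event, count the fraction of states of the other events in which flipping that event changes the top event. The three events and the toy gate structure are illustrative, not the paper's full fault tree.

```python
from itertools import product

EVENTS = ["gas_leak", "ignition", "ventilation_failure"]

def top(s):
    """Toy top event: (gas_leak AND ignition) OR
    (gas_leak AND ventilation_failure)."""
    return (s["gas_leak"] and s["ignition"]) or \
           (s["gas_leak"] and s["ventilation_failure"])

def structural_importance(event):
    critical = 0
    others = [e for e in EVENTS if e != event]
    for states in product([False, True], repeat=len(others)):
        s = dict(zip(others, states))
        # The event is critical here if toggling it toggles the top event.
        if top({**s, event: True}) != top({**s, event: False}):
            critical += 1
    return critical / 2 ** len(others)

for e in EVENTS:
    print(e, structural_importance(e))   # gas_leak ranks highest (0.75)
```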
Kingman, D M; Field, W E
2005-11-01
Findings reported by researchers at Illinois State University and Purdue University indicated that since 1980, an average of eight individuals per year have become engulfed and died in farm grain bins in the U.S. and Canada and that all these deaths are significant because they are believed to be preventable. During a recent effort to develop intervention strategies and recommendations for an ASAE farm grain bin safety standard, fault tree analysis (FTA) was utilized to identify contributing factors to engulfments in grain stored in on-farm grain bins. FTA diagrams provided a spatial perspective of the circumstances that occurred prior to engulfment incidents, a perspective never before presented in other hazard analyses. The FTA also demonstrated relationships and interrelationships of the contributing factors. FTA is a useful tool that should be applied more often in agricultural incident investigations to assist in the more complete understanding of the problem studied.
Fault tree analysis for data-loss in long-term monitoring networks.
Dirksen, J; ten Veldhuis, J A E; Schilperoort, R P S
2009-01-01
Prevention of data-loss is an important aspect in the design as well as the operational phase of monitoring networks, since data-loss can seriously limit the intended information yield. In the literature, limited attention has been paid to the origin of unreliable or doubtful data from monitoring networks. A better understanding of the causes of data-loss points toward effective solutions to increase data yield. This paper introduces fault tree analysis (FTA) as a diagnostic tool to systematically deduce the causes of data-loss in long-term monitoring networks in urban drainage systems. To illustrate the effectiveness of FTA, a fault tree is developed for a monitoring network and FTA is applied to analyze the data yield of a UV/VIS submersible spectrophotometer. Although some of the causes of data-loss cannot be traced because the historical database of metadata was updated infrequently, the example shows that FTA is nevertheless a powerful tool to analyze the causes of data-loss and provides useful information on effective data-loss prevention.
Accurate reliability analysis method for quantum-dot cellular automata circuits
NASA Astrophysics Data System (ADS)
Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo
2015-10-01
The probabilistic transfer matrix (PTM) is a widely used model in circuit-reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of the novel field-coupled nanoelectronic device known as quantum-dot cellular automata (QCA). It is therefore difficult to obtain accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices that account for different input signals. A binary decision diagram (BDD) is then used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly, and the crucial components of a circuit can be located precisely from the importance values (IVs) of the components. This method thus contributes to the construction of reliable QCA circuits.
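To make the paper's criticism of PTMs concrete, here is a minimal PTM construction for a 2-input XOR gate; the per-gate error probability and the uniform input distribution are assumptions for illustration.

```python
# Sketch of the probabilistic transfer matrix (PTM) idea: rows index
# input patterns, columns index outputs, entries are output
# probabilities.
import numpy as np

eps = 0.05  # assumed per-gate error probability

# Ideal 2-input XOR truth table -> ideal transfer matrix (ITM).
itm = np.zeros((4, 2))
for i, (a, b) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    itm[i, a ^ b] = 1.0

# PTM: correct output with prob 1-eps, flipped with prob eps, and
# identically so for every input row. That uniformity is precisely why
# a plain PTM cannot express input-dependent failure mechanisms in QCA.
ptm = itm * (1 - eps) + (1 - itm) * eps

# Gate reliability under a given input distribution.
p_in = np.array([0.25, 0.25, 0.25, 0.25])   # uniform inputs (assumption)
reliability = float(p_in @ (ptm * itm).sum(axis=1))
print(f"XOR gate reliability: {reliability:.4f}")   # equals 1 - eps here
```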
Rolex: Resilience-oriented language extensions for extreme-scale systems
Lucas, Robert F.; Hukerikar, Saurabh
2016-05-26
Future exascale high-performance computing (HPC) systems will be constructed from VLSI devices that will be less reliable than those used today, and faults will become the norm, not the exception. This will pose significant problems for system designers and programmers, who for half a century have enjoyed an execution model that assumed correct behavior by the underlying computing system. The mean time to failure (MTTF) of the system scales inversely with the number of components, and therefore faults and resultant system-level failures will increase as systems scale in terms of the number of processor cores and memory modules used. However, not every detected error need cause catastrophic failure. Many HPC applications are inherently fault resilient. Yet it is the application programmers who have this knowledge but lack mechanisms to convey it to the system. In this paper, we present new Resilience Oriented Language Extensions (Rolex) which facilitate the incorporation of fault resilience as an intrinsic property of the application code. We describe the syntax and semantics of the language extensions as well as the implementation of the supporting compiler infrastructure and runtime system. Furthermore, our experiments show that an approach that leverages the programmer's insight to reason about the context and significance of faults to the application outcome significantly improves the probability that an application runs to a successful conclusion.
Dislocation models of interseismic deformation in the western United States
Pollitz, F.F.; McCrory, P.; Svarc, J.; Murray, J.
2008-01-01
The GPS-derived crustal velocity field of the western United States is used to construct dislocation models in a viscoelastic medium of interseismic crustal deformation. The interseismic velocity field is constrained by 1052 GPS velocity vectors spanning the ~2500-km-long plate boundary zone adjacent to the San Andreas fault and Cascadia subduction zone and extending ~1000 km into the plate interior. The GPS data set is compiled from U.S. Geological Survey campaign data, Plate Boundary Observatory data, and the Western U.S. Cordillera velocity field of Bennett et al. (1999). In the context of viscoelastic cycle models of postearthquake deformation, the interseismic velocity field is modeled with a combination of earthquake sources on ~100 known faults plus broadly distributed sources. Models that best explain the observed interseismic velocity field include the contributions of viscoelastic relaxation from faulting near the major plate margins, viscoelastic relaxation from distributed faulting in the plate interior, as well as lateral variations in depth-averaged rigidity in the elastic lithosphere. Resulting rigidity variations are consistent with reduced effective elastic plate thickness in a zone a few tens of kilometers wide surrounding the San Andreas fault (SAF) system. Primary deformation characteristics are captured along the entire SAF system, Eastern California Shear Zone, Walker Lane, the Mendocino triple junction, the Cascadia margin, and the plate interior up to ~1000 km from the major plate boundaries.
NASA Technical Reports Server (NTRS)
Ricks, Brian W.; Mengshoel, Ole J.
2009-01-01
Reliable systems health management is an important research area of NASA. A health management system that can accurately and quickly diagnose faults in various on-board systems of a vehicle will play a key role in the success of current and future NASA missions. We introduce in this paper the ProDiagnose algorithm, a diagnostic algorithm that uses a probabilistic approach, accomplished with Bayesian Network models compiled to Arithmetic Circuits, to diagnose these systems. We describe the ProDiagnose algorithm, how it works, and the probabilistic models involved. We show by experimentation on two Electrical Power Systems based on the ADAPT testbed, used in the Diagnostic Challenge Competition (DX 09), that ProDiagnose can produce results with over 96% accuracy and less than 1 second mean diagnostic time.
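The core of such probabilistic diagnosis is Bayesian updating over a fault model. A toy single-fault, single-sensor sketch follows; ProDiagnose compiles far larger Bayesian networks to arithmetic circuits for speed, and the numbers here are illustrative, not ADAPT testbed data.

```python
# Tiny Bayesian network (fault -> sensor reading) queried by exact
# enumeration, as a stand-in for the compiled-circuit inference used in
# ProDiagnose. All probabilities are invented.

p_fault = 0.01                  # prior probability a relay is stuck
p_lowv_given_fault = 0.95       # P(low-voltage reading | fault)
p_lowv_given_ok    = 0.02       # sensor false-alarm rate

# Posterior P(fault | low voltage observed), by Bayes' rule.
evidence = (p_lowv_given_fault * p_fault
            + p_lowv_given_ok * (1 - p_fault))
posterior = p_lowv_given_fault * p_fault / evidence
print(f"P(fault | low-voltage reading) = {posterior:.3f}")
```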
1982-05-06
Table-of-contents fragment: 6.3.2 Input/output interrupt code register (IOIC); 6.3.2.1 Read input/output interrupt code, level 1 (0A000H); 6.3.2.2 Read input/output interrupt code, level 2 (0A001H); 6.3.3 Console input/output; 6.3.3.1 Clear console (4001H); 6.3.3.2 Console output (4000H); 6.3.3.3 Console input (C000H); 6.3.3.4 Read console status (C0001H); 6.3.4 Memory fault status register (MFSR); 6.3.4.1 Read memory fault register.
Lattice surgery on the Raussendorf lattice
NASA Astrophysics Data System (ADS)
Herr, Daniel; Paler, Alexandru; Devitt, Simon J.; Nori, Franco
2018-07-01
Lattice surgery is a method to perform quantum computation fault-tolerantly by using operations on boundary qubits between different patches of the planar code. This technique allows for universal planar-code computation without eliminating the intrinsic two-dimensional nearest-neighbor properties of the surface code that ease physical hardware implementations. Lattice surgery approaches to algorithmic compilation and optimization have been demonstrated to be more resource efficient for resource-intensive components of a fault-tolerant algorithm, and consequently may be preferable to braid-based logic. Lattice surgery can be extended to the Raussendorf lattice, providing a measurement-based approach to the surface code. In this paper we describe how lattice surgery can be performed on the Raussendorf lattice, giving a viable alternative to computation using braiding in measurement-based implementations of topological codes.
A dataset of forest biomass structure for Eurasia.
Schepaschenko, Dmitry; Shvidenko, Anatoly; Usoltsev, Vladimir; Lakyda, Petro; Luo, Yunjian; Vasylyshyn, Roman; Lakyda, Ivan; Myklush, Yuriy; See, Linda; McCallum, Ian; Fritz, Steffen; Kraxner, Florian; Obersteiner, Michael
2017-05-16
The most comprehensive dataset of in situ destructive sampling measurements of forest biomass in Eurasia has been compiled from a combination of experiments undertaken by the authors and from scientific publications. Biomass is reported as four components: live trees (stem, bark, branches, foliage, roots); understory (above- and below-ground); green forest floor (above- and below-ground); and coarse woody debris (snags, logs, dead branches of living trees, and dead roots). The data consist of 10,351 unique records of sample plots and 9,613 sample trees from ca. 1,200 experiments for the period 1930-2014; there is some overlap between these two datasets. The dataset also contains other forest stand parameters such as tree species composition, average age, tree height, and growing stock volume, when available. Such a dataset can be used for the development of models of biomass structure, biomass extension factors, change detection in biomass structure, investigations into biodiversity and species distribution and the biodiversity-productivity relationship, as well as the assessment of the carbon pool and its dynamics, among many others.
Bonilla, M.G.; Mark, R.K.; Lienkaemper, J.J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new database was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which necessarily make up much of the database. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation with the variance resulting from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Subdivision of the data results in too few data for some fault types and regions, and for these only regressions using all of the data as a group are reported. Estimates of the magnitude, and of the standard deviation of the magnitude, of a prehistoric or future earthquake associated with a fault can be made by correlating M with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
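The regression step described above is ordinary least squares of magnitude on the logarithm of rupture length. A sketch with synthetic data; the fitted coefficients below are not those of this study.

```python
# OLS regression Ms = a + b*log10(L), as justified in the abstract for a
# stochastic (not measurement-error-dominated) relation. Synthetic data.
import numpy as np

length_km = np.array([12, 30, 45, 80, 150, 300])   # hypothetical ruptures
ms        = np.array([6.1, 6.6, 6.9, 7.1, 7.5, 7.9])

b, a = np.polyfit(np.log10(length_km), ms, 1)      # slope b, intercept a
resid = ms - (a + b * np.log10(length_km))
sigma = resid.std(ddof=2)                          # regression std. dev.

print(f"Ms = {a:.2f} + {b:.2f} log10(L_km), sigma = {sigma:.2f}")
# sigma is the quantity used to attach a standard deviation to the
# magnitude estimated for a prehistoric or future earthquake.
```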
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2012 CFR
2012-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2014 CFR
2014-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix D to Part 236 - Independent Review of Verification and Validation
Code of Federal Regulations, 2010 CFR
2010-10-01
... standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure Mode and Effects... for each product vulnerability cited by the reviewer; (4) Identification of any documentation or... not properly followed; (6) Identification of the software verification and validation procedures, as...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
Neotectonic Geomorphology of the Owen Stanley Oblique-slip Fault System, Eastern Papua New Guinea
NASA Astrophysics Data System (ADS)
Watson, L.; Mann, P.; Taylor, F.
2003-12-01
Previous GPS studies have shown that the Australia-Woodlark plate boundary bisects the Papuan Peninsula of Papua New Guinea and that interplate motion along the boundary varies from about 19 mm/yr of orthogonal opening in the area of the western Woodlark spreading center and D'Entrecasteaux Islands, to about 12 mm/yr of highly oblique opening in the central part of the peninsula, to about 10 mm/yr of transpressional motion on the western part of the peninsula. We have compiled a GIS database for the peninsula that includes a digital elevation model, geologic map, LANDSAT and radar imagery, and earthquake focal mechanisms. This combined data set demonstrates the regional importance of the 600-km-long Owen Stanley fault system (OSFS) in accommodating interplate motion and controlling the geomorphology and geologic exposures of the peninsula. The OSFS originated as a NE-dipping, reactivated Oligocene-Early Miocene age ophiolitic suture zone between an Australian continental margin and the Melanesian arc system. Pliocene to recent motion on the plate boundary has reactivated motion on the former NE-dipping thrust fault either as a NE-dipping normal fault in the eastern area or as a more vertical strike-slip fault in the western area. The broadly arcuate shape of the OSFS is probably an inherited feature from the original thrust fault. Faults in the eastern area (east of 148° E) exhibit characteristics expected for normal and oblique-slip faults including: discontinuous fault traces bounding an upthrown highland block and a downthrown coastal plain or submarine block, transfer faults parallel to the opening direction, scarps facing to both the northeast and southwest, and spatial association with recent volcanism. Faults in the western area (west of 148° E) exhibit characteristics expected for left-lateral strike-slip faults including: linear and continuous fault traces commonly confined to deep, intermontane valleys, and sinistral offsets and deflections of rivers and streams by 0.5 to 1.2 km. The northern edge of the OSFS merges with the Ramu-Markham strike-slip fault near Lae. SW tilting of the footwall block (Papuan Peninsula) is responsible for the asymmetrical topographic profile of the peninsula and the drowned topography along the southern coast of the peninsula.
NASA Astrophysics Data System (ADS)
Hiramatsu, Y.; Matsumoto, N.; Sawada, A.
2016-12-01
We analyze gravity anomalies in the focal area of the 2016 Kumamoto earthquake, evaluate the continuity, segmentation and faulting type of the active fault zones, and discuss relationships between those features and the aftershock distribution. We compile the gravity data published by the Gravity Research Group in Southwest Japan (2001), the Geographical Survey Institute (2006), Yamamoto et al. (2011), Honda et al. (2012), and the Geological Survey of Japan, AIST (2013). We apply terrain corrections with a 10 m DEM and a low-pass filter, then remove a linear trend to obtain Bouguer anomalies. We calculate the first horizontal derivative (HD), the first vertical derivative (VD), the normalized total horizontal derivative (TDX) (Cooper and Cowan, 2006), the dimensionality index (Di) (Beiki and Pedersen, 2010), and the dip angle (β) (Beiki, 2013) from the gravity gradient tensor. The HD, VD and TDX show the existence of a continuous fault structure along the Futagawa fault zone, extending from the Uto Peninsula to Beppu Bay except in the Mt. Aso area. Aftershocks are distributed along this structural boundary from the confluence of the Futagawa and Hinagu fault zones to the east end of the Aso volcano. The distribution of the dip angle β along the Futagawa fault zone implies normal faulting, which corresponds to the coseismic faulting estimated geologically and geomorphologically. We observe an S-shaped distribution of the Bouguer anomalies around the southern part of the Hinagu segment, indicating right-lateral faulting. The VD and TDX support the existence of a fault structure along the segment, though less clearly. We can recognize no clear structural boundaries along the Takano-Shirahata segment. TDX implies the existence of a structural boundary with a NW-SE trend around the boundary between the Hinagu and Takano-Shirahata segments. The Di shows that this boundary has a 3D-like structure rather than a 2D-like one, suggesting a discontinuity of the 2D-like fault structure along the fault zone. A geological map indicates that this structural boundary corresponds to a boundary between metamorphic and sedimentary rocks. The active area of the aftershocks does not extend to the south beyond this structural boundary, implying that the spatial extent of the source fault is controlled by this boundary.
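The derivative filters named above are straightforward to compute on a gridded field. A sketch on a synthetic anomaly, with the vertical derivative taken in the wavenumber domain; the grid spacing and anomaly parameters are invented, and TDX follows the Cooper and Cowan (2006) definition arctan(HD/|VD|).

```python
# Edge-detection filters on a synthetic gravity grid: first horizontal
# derivative (HD), vertical derivative (VD, via the wavenumber-domain
# operator |k|), and the normalized derivative TDX = arctan(HD / |VD|).
import numpy as np

n, dx = 128, 500.0                      # grid size, spacing in metres
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
g = 10.0 * np.exp(-((X - 8e3)**2 + Y**2) / (2 * (4e3)**2))  # mGal, synthetic

gy, gx = np.gradient(g, dx)             # horizontal derivatives
hd = np.hypot(gx, gy)                   # first horizontal derivative

kx = np.fft.fftfreq(n, dx) * 2 * np.pi  # angular wavenumbers
KX, KY = np.meshgrid(kx, kx)
vd = np.real(np.fft.ifft2(np.fft.fft2(g) * np.hypot(KX, KY)))  # dg/dz

tdx = np.arctan2(hd, np.abs(vd))        # balanced edge detector, 0..pi/2
print(f"max HD = {hd.max():.2e} mGal/m, max TDX = {tdx.max():.2f} rad")
```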
Constraints on Slow Slip from Landsliding and Faulting
NASA Astrophysics Data System (ADS)
Delbridge, Brent Gregory
The discovery of slow slip has radically changed the way we understand the relative movement of Earth's tectonic plates and the accumulation of stress in fault zones that fail in large earthquakes. Prior to the discovery of slow slip, faults were thought to relieve stress either through continuous aseismic sliding, as is the case for continental creeping faults, or in near-instantaneous failure. Aseismic deformation reflects fault slip that is slow enough that both inertial forces and seismic radiation are negligible. The durations of observed aseismic slip events range from days to years, with displacements of up to tens of centimeters. These events are not unique to a specific depth range and occur on faults in a variety of tectonic settings. This aseismic slip can sometimes also trigger more rapid slip somewhere else on the fault, such as on small embedded asperities; this is thought to be the mechanism generating observed Low Frequency Earthquakes (LFEs) and small repeating earthquakes. I have performed a series of studies, compiled here, to better understand the nature of tectonic faulting. The first is entitled "3D surface deformation derived from airborne interferometric UAVSAR: Application to the Slumgullion Landslide", and was originally published in the Journal of Geophysical Research in 2016. In order to understand how landslides respond to environmental forcing, we quantify how the hydro-mechanical forces controlling the Slumgullion Landslide express themselves kinematically in response to the infiltration of seasonal snowmelt. The well-studied Slumgullion Landslide, which is 3.9 km long and moves persistently at rates up to 2 cm/day, is an ideal natural laboratory due to its large spatial extent and rapid deformation rates. The lateral boundaries of the landslide consist of strike-slip fault features, which over time have built up large flank ridges. The second study compiled here is entitled "Temporal variation of intermediate-depth earthquakes around the time of the M9.0 Tohoku-oki earthquake" and was originally published in Geophysical Research Letters in 2017. The temporal evolution of intermediate-depth seismicity before and after the 2011 M 9.0 Tohoku-oki earthquake reveals interactions between plate-interface slip and deformation in the subducting slab. I investigate seismicity rate changes in the upper and lower planes of the double seismic zone beneath northeast Japan. The average ratio of upper-plane to lower-plane activity and the mean deep aseismic slip rate both increased by a factor of two. An increase of down-dip compression in the slab resulting from coseismic and postseismic deformation enhanced seismicity in the upper plane, which is dominated by events accommodating down-dip shortening from plate unbending. In the third and final study included here, I use geodetic measurements to place a quantitative upper bound on the size of the slow slip accompanying large bursts of quasi-periodic tremors and LFEs on the Parkfield section of the SAF. We use a host of analysis methods to try to isolate the small signal due to the slow slip and characterize noise properties. We find that, in addition to subduction zones, transform faults are also capable of producing ETS. However, given the upper bounds from our analysis, surface geodetic measurement of this slow slip is likely to remain highly challenging.
Toward a Model-Based Approach for Flight System Fault Protection
NASA Technical Reports Server (NTRS)
Day, John; Meakin, Peter; Murray, Alex
2012-01-01
Use SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. Use the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the fault protection (FP) domain; this extends the UML/SysML languages to contain the FP concepts. Use UML/SysML, along with this profile, to capture FP concepts and relationships in the model. Generate typical FP engineering products (the FMECA, Fault Tree, MRD, V&V Matrices).
Zhang, Jie; Shangguan, Tie-Liang; Duan, Yi-Hao; Guo, Wei; Liu, Wei-Hua; Guo, Dong-Gang
2014-11-01
Using plant survivorship theory, the age structure and the relationship between tree height and diameter at breast height (DBH) of a Quercus wutaishanica population on Lingkong Mountain were analyzed, a static life table was compiled, and the survival curve was plotted. The shuttle shape of the age structure of the Q. wutaishanica population suggested its temporal stability. A linear regression significantly fitted the positive correlation between tree height and DBH. The maximal life expectancy was observed among trees beyond the age of highest mortality and coincided with the lowest point of mortality density, suggesting the strong vitality of the seedlings and young trees that survived natural selection and intraspecific competition. The stability of the Q. wutaishanica population was characterized by a Deevey type II survival curve. The dynamic pattern was characterized by recession in the early phase, growth in the intermediate phase, and stability in the late phase.
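A static life table of the kind compiled here reduces to a few columns derived from age-class counts. A sketch with hypothetical counts, not the census data of this study:

```python
# Static life-table computation: from live individuals per age class,
# derive standardized survivorship lx, deaths dx, and mortality rate qx.

counts = [420, 310, 205, 120, 64, 30, 9]     # individuals per age class
l0 = counts[0]
lx = [1000 * c / l0 for c in counts]         # survivors per 1000 born

print(f"{'class':>5} {'lx':>7} {'dx':>7} {'qx':>6}")
for i in range(len(lx) - 1):
    dx = lx[i] - lx[i + 1]                   # deaths in class i
    qx = dx / lx[i]                          # class-specific mortality
    print(f"{i:5d} {lx[i]:7.1f} {dx:7.1f} {qx:6.3f}")
```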
Conversion of Questionnaire Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log-normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic-event risk-of-failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
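A minimal sketch of the adjectival-to-numeric conversion described above; the specific probability assigned to each rating is an illustrative assumption in the spirit of the NUREG/CR-1278 logarithmic-scale practice, not the project's actual calibration.

```python
# Map adjectival performance ratings onto failure probabilities spaced
# on a logarithmic scale, then use them as basic-event inputs to a
# fault tree. All numeric values are assumed for illustration.

RATING_TO_PFAIL = {
    "perfect":           1e-5,   # near-zero risk of failure
    "well":              1e-4,
    "adequate":          1e-3,
    "needs improvement": 1e-2,
    "not performed":     1.0,    # task in a state of failure
}

survey = {                       # hypothetical questionnaire responses
    "two-person rule applied": "well",
    "tamper seals checked":    "adequate",
    "inventory reconciled":    "needs improvement",
}

for task, rating in survey.items():
    p = RATING_TO_PFAIL[rating]
    print(f"{task:28s} {rating:18s} -> P(fail) = {p:.0e}")
```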
Seismic Hazard Analysis for Armenia and its Surrounding Areas
NASA Astrophysics Data System (ADS)
Klein, E.; Shen-Tu, B.; Mahdyiar, M.; Karakhanyan, A.; Pagani, M.; Weatherill, G.; Gee, R. C.
2017-12-01
The Republic of Armenia is located within the central part of a large, 800 km wide, intracontinental collision zone between the Arabian and Eurasian plates. Active deformation occurs along numerous structures in the form of faulting, folding, and volcanism distributed throughout the entire zone from the Bitlis-Zagros suture belt to the Greater Caucasus Mountains, and between the relatively rigid Black Sea and Caspian Sea blocks, without any single structure that can be claimed as predominant. In recent years, significant work has been done on mapping active faults and on compiling and reviewing historic and paleoseismological studies in the region, especially in Armenia; these recent research contributions have greatly improved our understanding of the seismogenic sources and their characteristics. In this study we performed a seismic hazard analysis for Armenia and its surrounding areas using the latest detailed geological and paleoseismological information on active faults, strain rates estimated from kinematic modeling of GPS data, and all available historic earthquake data. The seismic source model uses a combination of characteristic-earthquake and gridded-seismicity models to take advantage of the detailed knowledge of the known faults while acknowledging the distributed deformation and regional tectonic environment of the collision zone. In addition, the fault model considers earthquake ruptures that include single- and multi-segment rupture scenarios, with earthquakes that can rupture any part of a multiple-segment fault zone. The ground motion model uses a set of ground motion prediction equations (GMPEs) selected from a pool of GMPEs based on the assessment of each GMPE against the available strong-motion data in the region. The hazard is computed in GEM's OpenQuake engine. We will present final hazard results and discuss the uncertainties associated with various input data and their impact on the hazard at various locations.
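One building block of such a source model is the truncated Gutenberg-Richter relation that converts a- and b-values into annual earthquake rates per magnitude bin. A sketch with placeholder parameters, not the Armenian model's values:

```python
# Annual rates per magnitude bin from a truncated Gutenberg-Richter
# relation log10 N(>=m) = a - b*m, truncated at Mmax.
import numpy as np

a, b = 4.2, 1.0                     # assumed regional GR parameters
m_min, m_max, dm = 5.0, 7.8, 0.2

edges = np.arange(m_min, m_max + 1e-9, dm)
cum = 10 ** (a - b * edges) - 10 ** (a - b * m_max)  # truncated cumulative
rates = cum[:-1] - cum[1:]                           # events/yr per bin

for m, r in zip(edges[:-1], rates):
    print(f"M {m:.1f}-{m + dm:.1f}: {r:.4f} events/yr")
```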
Preliminary Isostatic Gravity Map of Joshua Tree National Park and Vicinity, Southern California
Langenheim, V.E.; Biehler, Shawn; McPhee, D.K.; McCabe, C.A.; Watt, J.T.; Anderson, M.L.; Chuchel, B.A.; Stoffer, P.
2007-01-01
This isostatic residual gravity map is part of an effort to map the three-dimensional distribution of rocks in Joshua Tree National Park, southern California. This map will serve as a basis for modeling the shape of basins beneath the Park and in adjacent valleys and also for determining the location and geometry of faults within the area. Local spatial variations in the Earth's gravity field, after accounting for variations caused by elevation, terrain, and deep crustal structure, reflect the distribution of densities in the mid- to upper crust. Densities often can be related to rock type, and abrupt spatial changes in density commonly mark lithologic or structural boundaries. High-density basement rocks exposed within the Eastern Transverse Ranges include crystalline rocks that range in age from Proterozoic to Mesozoic and these rocks are generally present in the mountainous areas of the quadrangle. Alluvial sediments, usually located in the valleys, and Tertiary sedimentary rocks are characterized by low densities. However, with increasing depth of burial and age, the densities of these rocks may become indistinguishable from those of basement rocks. Tertiary volcanic rocks are characterized by a wide range of densities, but, on average, are less dense than the pre-Cenozoic basement rocks. Basalt within the Park is as dense as crystalline basement, but is generally thin (less than 100 m thick; e.g., Powell, 2003). Isostatic residual gravity values within the map area range from about 44 mGal over Coachella Valley to about 8 mGal between the Mecca Hills and the Orocopia Mountains. Steep linear gravity gradients are coincident with the traces of several Quaternary strike-slip faults, most notably along the San Andreas Fault bounding the east side of Coachella Valley and east-west-striking, left-lateral faults, such as the Pinto Mountain, Blue Cut, and Chiriaco Faults (Fig. 1). Gravity gradients also define concealed basin-bounding faults, such as those beneath the Chuckwalla Valley (e.g. Rotstein and others, 1976). These gradients result from juxtaposing dense basement rocks against thick Cenozoic sedimentary rocks.
www.fallasdechile.cl, the First Online Repository for Neotectonic Faults in the Chilean Andes
NASA Astrophysics Data System (ADS)
Aron, F.; Salas, V.; Bugueño, C. J.; Hernández, C.; Leiva, L.; Santibanez, I.; Cembrano, J. M.
2016-12-01
We introduce the site www.fallasdechile.cl, created and maintained by undergraduate students and researchers at the Catholic University of Chile. Though the web page seeks to inform and educate the general public about potentially seismogenic faults of the country, layers of increasing content complexity allow students, researchers and educators to consult the site as a scientific tool as well. This is the first comprehensive, open access database on Chilean geologic faults; we envision that it may grow organically with contributions from peer scientists, resembling the SCEC community fault model for southern California. Our website aims at filling a gap between science and society providing users the opportunity to get involved by self-driven learning through interactive education modules. The main page highlights recent developments and open questions in Chilean earthquake science. Front pages show first level information of general concepts in earthquake topics such as tectonic settings, definition of geologic faults, and space-time constraints of faults. Users can navigate interactive modules to explore, with real data, different earthquake scenarios and compute values of seismic moment and magnitude. A second level covers Chilean/Andean faults classified according to their geographic location containing at least one of the following parameters: mapped trace, 3D geometry, sense of slip, recurrence times and date of last event. Fault traces are displayed on an interactive map using a Google Maps API. The material is compiled and curated in an effort to present, up to our knowledge, accurate and up to date information. If interested, the user can navigate to a third layer containing more advanced technical details including primary sources of the data, a brief structural description, published scientific articles, and links to other online content complementing our site. Also, geographically referenced fault traces with attributes (kml, shapefiles) and fault 3D surfaces (contours, tsurf files) will be available to download. Given its potential for becoming a referential database for active faults in Chile, this project evidences that undergrads can go beyond the classroom, be of service to the scientific community, and make contributions with broader impacts.
A general law of fault wear and its implication to gouge zone evolution
NASA Astrophysics Data System (ADS)
Boneh, Yuval; Reches, Ze'ev
2017-04-01
Fault wear and gouge production are universal components of frictional sliding. Wear models commonly consider fault roughness, normal stress and rock strength, but ignore the effects of gouge presence and slip velocity. In contrast, our experimental observations indicate that wear continues while the gouge layer is fully developed, and that wear rates vary by orders of magnitude during slip along experimental faults made of carbonates, sandstones and granites (Boneh et al., 2013, 2014). We derive here a new universal law for fault wear by incorporating the gouge layer and slip velocity. Slip between two rock blocks undergoes a transition from a 'two-body' mode, during which the blocks interact at surface roughness contacts, to a 'three-body' mode, during which a gouge layer separates the two blocks. Our wear model considers 'effective roughness' as the mechanism for failure at resisting, interacting sites that control the global wear. The effective roughness comprises time-dependent, dynamic asperities which differ in population and scale from the original surface asperities. The model assumes that the intensity of this failure is proportional to the mechanical impulse, which is the integrated force over loading time at the interacting sites. We use this concept to calculate the wear rate as a function of the impulse density, which is the ratio [shear stress/slip velocity], during fault slip. The compilation of experimental wear rates over a large range of slip velocities (10 μm/s - 1 m/s) and normal stresses (0.2 - 200 MPa) reveals very good agreement with the model predictions. The model provides the first explanation of why fault slip at seismic velocity, e.g., 1 m/s, generates significantly less wear and gouge than fault slip at creeping velocity. Thus, the model provides a tool for using the gouge thickness of fault zones to estimate paleo-velocity. Boneh, Y., Sagy, A., Reches, Z., 2013. Frictional strength and wear-rate of carbonate faults during high-velocity, steady-state sliding. Earth and Planetary Science Letters 381, 127-137. Boneh, Y., Chang, J.C., Lockner, D.A., Reches, Z., 2014. Evolution of Wear and Friction Along Experimental Faults. Pure and Applied Geophysics, 1-17.
NASA Astrophysics Data System (ADS)
Console, R.; Vannoli, P.; Carluccio, R.
2016-12-01
The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small-magnitude events. Events nucleated in one fault are allowed to expand into neighboring faults, even those belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model upon which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems that are recognized in the central Apennines region, for a total of 24 fault systems. The application of our simulation algorithm provides typical features in the time, space and magnitude behavior of the seismicity which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes. Lastly, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of peak ground acceleration (PGA) in the territory under investigation.
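The coefficient of variation quoted above is simple to reproduce from a catalog of event times on a single fault. A sketch with invented dates:

```python
# Coefficient of variation Cv = std/mean of inter-event times on one
# fault: Cv -> 0 means strictly periodic, Cv = 1 means Poissonian.
import numpy as np

event_times_ky = np.array([3.1, 5.9, 9.2, 12.0, 15.3, 17.9])  # M>=6 events
intervals = np.diff(event_times_ky)
cv = intervals.std(ddof=1) / intervals.mean()
print(f"Mean recurrence: {intervals.mean():.2f} ky, Cv = {cv:.2f}")
```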
Aeromagnetic map compilation: Procedures for merging and an example from Washington
Finn, C.
1999-01-01
Rocks in Antarctica and offshore have widely diverse magnetic properties. Consequently, aeromagnetic data collected there can improve knowledge of the geologic, tectonic and geothermal characteristics of the region. Aeromagnetic data can map concealed structures such as faults, folds and dikes, ascertain basin thickness and locate buried volcanic, as well as some intrusive and metamorphic rocks. Gridded, composite data sets allow a view of continental-scale trends that individual data sets do not provide and link widely-separated areas of outcrop and disparate geologic studies. Individual magnetic surveys must be processed so that they match adjacent surveys prior to merging. A consistent representation of the Earth's magnetic field (International Geomagnetic Reference Field (IGRF)) must be removed from each data set. All data sets need to be analytically continued to the same flight elevation with their datums shifted to match adjacent data. I advocate minimal processing to best represent the individual surveys in the merged compilation. An example of a compilation of aeromagnetic surveys from Washington illustrates the utility of aeromagnetic maps for providing synoptic views of regional tectonic features.
The Optimization of Automatically Generated Compilers.
1987-01-01
than their procedural counterparts, and are also easier to analyze for storage optimizations; (2) AGs can be algorithmically checked to be non-circular... Providing algorithms to move the storage for many attributes from the structure tree into global stacks and variables... Creating AEs which build and... 3.5.2 Partitioning algorithm
Red alder (a nitrogen-fixing tree) and sea salt inputs can strongly influence stream nitrogen concentrations in western Oregon and Washington. We compiled a database of stream nitrogen and landscape characteristics in the Oregon Coast Range. Basal area of alder, expressed as a ...
Host and habitat index for Phytophthora species in Oregon
Everett Hansen; Paul Reeser; Wendy Sutton; Laura Sims
2012-01-01
In this contribution we compile existing records from available sources of reliably identified Phytophthora species from forests and forest trees in Oregon, USA. A searchable version of this information may be found in the Forest Phytophthoras of the World Disease Finder (select USA-Oregon). We have included isolations from soil and streams in...
Hydrostructural maps of the Death Valley regional flow system, Nevada and California
Potter, C.J.; Sweetkind, D.S.; Dickerson, R.P.; Killgore, M.L.
2002-01-01
The locations of principal faults and structural zones that may influence ground-water flow were compiled in support of a three-dimensional ground-water model for the Death Valley regional flow system (DVRFS), which covers 80,000 square km in southwestern Nevada and southeastern California. Faults include Neogene extensional and strike-slip faults and pre-Tertiary thrust faults. Emphasis was given to characteristics of faults and deformed zones that may have a high potential for influencing hydraulic conductivity. These include: (1) faulting that results in the juxtaposition of stratigraphic units with contrasting hydrologic properties, which may cause ground-water discharge and other perturbations in the flow system; (2) special physical characteristics of the fault zones, such as brecciation and fracturing, that may cause specific parts of the zone to act either as conduits or as barriers to fluid flow; (3) the presence of a variety of lithologies whose physical and deformational characteristics may serve to impede or enhance flow in fault zones; (4) orientation of a fault with respect to the present-day stress field, possibly influencing hydraulic conductivity along the fault zone; and (5) faults that have been active in late Pleistocene or Holocene time and areas of contemporary seismicity, which may be associated with enhanced permeabilities. The faults shown on maps A and B are largely from Workman and others (in press), and fit one or more of the following criteria: (1) faults that are more than 10 km in map length; (2) faults with more than 500 m of displacement; and (3) faults in sets that define a significant structural fabric that characterizes a particular domain of the DVRFS. The following fault types are shown: Neogene normal, Neogene strike-slip, Neogene low-angle normal, pre-Tertiary thrust, and structural boundaries of Miocene calderas. We have highlighted faults that have late Pleistocene to Holocene displacement (Piety, 1996). Areas of thick Neogene basin-fill deposits (thicknesses 1-2 km, 2-3 km, and >3 km) are shown on map A, based on gravity anomalies and depth-to-basement modeling by Blakely and others (1999). We have interpreted the positions of faults in the subsurface, generally following the interpretations of Blakely and others (1999). Where geophysical constraints are not present, the faults beneath late Tertiary and Quaternary cover have been extended based on geologic reasoning. Nearly all of these concealed faults are shown with continuous solid lines on maps A and B, in order to provide continuous structures for incorporation into the hydrogeologic framework model (HFM). Map A also shows the potentiometric surface, regional springs (25-35 degrees Celsius, D'Agnese and others, 1997), and cold springs (Turner and others, 1996).
Quality-based Multimodal Classification Using Tree-Structured Sparsity
Bahrampour, Soheil; Ray, Asok; Nasrabadi, Nasser M.
2014-03-08
Assessing Institutional Ineffectiveness: A Strategy for Improvement.
ERIC Educational Resources Information Center
Cameron, Kim S.
1984-01-01
Based on the theory that institutional change and improvement are motivated more by knowledge of problems than by knowledge of successes, a fault tree analysis technique using Boolean logic for assessing institutional ineffectiveness by determining weaknesses in the system is presented. Advantages and disadvantages of focusing on weakness rather…
An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution
NASA Astrophysics Data System (ADS)
Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan
2013-04-01
The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies including smoothed seismicity approaches. Smoothed seismicity thus represents an alternative concept to express the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subductions. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: The first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density). The second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude assuming that (1) the occurrence of past seismicity is a good proxy to forecast occurrence of future seismicity and (2) future large-magnitude events occur more likely in the vicinity of known faults. Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts and provides rates of events in the magnitude range of 5 <= m <= 8.5 for the entire region of interest and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
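The kernel-smoothing core of the model can be sketched in a few lines. A fixed Gaussian bandwidth is used here for brevity, whereas the paper uses variable kernels that adapt to data density; the coordinates and bandwidth are invented.

```python
# Gaussian kernel smoothing of past epicentres (and, analogously, of
# fault moment-rate patches) onto a grid, yielding a location
# probability density for the rate forecast.
import numpy as np

quakes = np.array([[10.2, 45.1], [10.5, 45.3], [12.0, 46.0]])  # lon, lat
lon = np.linspace(9, 13, 81)
lat = np.linspace(44, 47, 61)
LON, LAT = np.meshgrid(lon, lat)
h = 0.3                                   # kernel bandwidth in degrees

density = np.zeros_like(LON)
for qlon, qlat in quakes:
    density += np.exp(-((LON - qlon)**2 + (LAT - qlat)**2) / (2 * h**2))
density /= density.sum()                  # normalize to a probability map
print(f"Peak cell probability: {density.max():.4f}")
```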
Neotectonic Investigation of the southern Rodgers Creek fault, Sonoma County, California
NASA Astrophysics Data System (ADS)
Randolph, C. E.; Caskey, J.
2001-12-01
The 60-km-long Rodgers Creek fault (RCF) between San Pablo Bay and Santa Rosa strikes approximately N35W and is characterized by a late Holocene right-lateral slip rate of 6.4-10.4 mm/yr. Recent field studies along the southern section of the fault have resulted in: 1) new insight concerning the structural relations across the fault and the long-term slip budget on the system of faults that make up the East Bay fault system; 2) a new annotated map documenting details of the tectonic geomorphology of the fault zone; and 3) new paleoseismic data. Structural relations found west of the RCF indicate that previously mapped thrust klippen of the Donnell Ranch Volcanics (DRV) (Ar/Ar 9-10 Ma) were emplaced over the Petaluma Formation (Ar/Ar 8.52 Ma) along east-vergent thrust faults, rather than along west-vergent thrusts that splay from the RCF as previously proposed. This implies that: 1) the allochthonous DRV, which have been correlated to volcanic rocks in the Berkeley Hills (Ar/Ar 9-10 Ma), must have originated from west of the Tolay fault; and 2) much of the 45 km of northward translation of the DRV from the Berkeley Hills was accomplished along the Hayward-Tolay-Petaluma Valley system of faults, and not the RCF. Long-term offset along the RCF can be more reasonably estimated by matching similar-aged Sonoma volcanic rocks (Ar/Ar 3-8 Ma) across the fault, which suggests only about 10-15 km of net right-lateral translation across the fault. This estimate is more consistent with independently derived offsets across the RCF using paleogeographic reconstructions of the Roblar Tuff as well as Pliocene sedimentary units (Sarna-Wojcicki, 1992; McLaughlin, 1996). An annotated strip map compiled from 1:6000-scale aerial photos for the southern 25 km of the fault has yielded unprecedented new details on the surficial and bedrock deposits and the tectonic geomorphology along the fault. The new maps, together with GPR surveys, provided the basis for a site-specific paleoseismic investigation. We recently opened a 50-meter-long exploratory trench located 2 km northwest of Wildcat Mountain, in Sonoma County. The trench exposed two main traces of the fault that bound a 7-meter-wide sag pond. Stratigraphic and structural relations have provided evidence for multiple faulting events, the youngest of which may have ruptured to the ground surface. Information pertaining to the timing and chronology of events recorded in the trench exposure is pending the results of laboratory analysis of radiocarbon samples.
Managing Risk to Ensure a Successful Cassini/Huygens Saturn Orbit Insertion (SOI)
NASA Technical Reports Server (NTRS)
Witkowski, Mona M.; Huh, Shin M.; Burt, John B.; Webster, Julie L.
2004-01-01
I. Design: a) S/C designed to be largely single-fault tolerant; b) Operate in flight-demonstrated envelope, with margin; and c) Strict compliance with requirements & flight rules. II. Test: a) Baseline, fault & stress testing using flight system testbeds (H/W & S/W); b) In-flight checkout & demos to remove first-time events. III. Failure Analysis: a) Critical-event-driven fault tree analysis; b) Risk mitigation & development of contingencies. IV. Residual Risks: a) Accepted pre-launch waivers to Single Point Failures; b) Unavoidable risks (e.g., natural disaster). V. Mission Assurance: a) Strict process for characterization of variances (ISAs, PFRs & Waivers); b) Full-time Mission Assurance Manager reports to Program Manager: 1) Independent assessment of compliance with institutional standards; 2) Oversight & risk assessment of ISAs, PFRs & Waivers, etc.; and 3) Risk Management Process facilitator.
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Mustafa Sacit; none,; Flanagan, George F.
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster-than-real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C++, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
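A minimal, self-contained sketch of the coupling pattern described above, under stated assumptions: the real framework drives Reliability Workbench and Dymola through a dll, whereas here both sides are toy stand-ins so the fan-out of a single fault injection is visible.

```python
# Both model classes below are illustrative stand-ins, not the RWB/Dymola APIs.

class FaultTreeModel:
    """Two-pump system: top event = both pumps failed (AND gate)."""
    def __init__(self):
        self.p_fail = {"pump_a": 1e-3, "pump_b": 1e-3}
    def inject(self, component):
        self.p_fail[component] = 1.0  # component now failed with certainty
    def top_event_probability(self):
        return self.p_fail["pump_a"] * self.p_fail["pump_b"]

class PlantModel:
    """Toy multi-physics stand-in: total flow is the sum of running pumps."""
    def __init__(self):
        self.running = {"pump_a": True, "pump_b": True}
    def inject(self, component):
        self.running[component] = False
    def flow(self):
        return 50.0 * sum(self.running.values())  # kg/s, illustrative

def inject_fault(component, models):
    """One injection call fans out to every coupled model."""
    for m in models:
        m.inject(component)

ft, plant = FaultTreeModel(), PlantModel()
inject_fault("pump_a", [ft, plant])
print(ft.top_event_probability(), plant.flow())  # 1e-3, 50.0
```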
NASA Astrophysics Data System (ADS)
Kremer, Katrina; Reusch, Anna; Wirth, Stefanie B.; Anselmetti, Flavio S.; Girardclos, Stéphanie; Strasser, Michael
2016-04-01
Intraplate settings are characterized by low deformation rates and by recurrence intervals of strong earthquakes that often exceed the time span covered by instrumental records. Switzerland, as an example of such a setting, shows low instrumentally recorded seismicity, in contrast to the strong earthquakes (e.g., the 1356 Basel earthquake, Mw=6.6, and the 1601 Unterwalden earthquake, Mw=5.9) mentioned in the historical archives. Because such long recurrence intervals do not allow instrumental identification of the sources of these strong events, and because intense geomorphologic alteration prevents preservation of the surface expressions of faults, knowledge of active faults is very limited. Lake sediments are sensitive to seismic shaking and thus can be used to extend the regional earthquake catalogue if the sedimentary deposits or deformation structures can be linked to an earthquake. A single lake record allows estimation of the local intensity of shaking, while multiple lake records can furthermore be used to compare the temporal and spatial distribution of earthquakes. In this study, we compile a large dataset of dated sedimentary event deposits recorded in Swiss lakes, available from peer-reviewed publications and unpublished master's theses. We combine these data in order to detect large prehistoric regional earthquake events or periods of intense shaking that might have affected multiple lake settings. In a second step, using empirical seismic attenuation equations, we test whether lake records can be used to reconstruct the magnitudes and epicentres of identified earthquakes.
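The second step lends itself to a short sketch: given shaking intensities estimated from several lake records, grid-search the magnitude and epicentre that best satisfy an intensity prediction equation. The IPE form and coefficients below are generic placeholders, not the calibrated Swiss attenuation relations.

```python
import numpy as np

def predicted_intensity(mag, epi, sites, c=(1.5, 1.2, 3.0, 0.002)):
    """Generic placeholder IPE: I = c0 + c1*M - c2*log10(R) - c3*R."""
    r = np.hypot(sites[:, 0] - epi[0], sites[:, 1] - epi[1]) + 1.0  # km
    return c[0] + c[1] * mag - c[2] * np.log10(r) - c[3] * r

def locate(obs_intensity, sites, mags, grid_xy):
    """Least-squares grid search over magnitude and epicentre."""
    best, best_misfit = None, np.inf
    for m in mags:
        for xy in grid_xy:
            misfit = np.sum((predicted_intensity(m, xy, sites)
                             - obs_intensity) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (m, tuple(xy)), misfit
    return best

# Toy usage: three lakes (km coordinates) with estimated intensities.
sites = np.array([[0.0, 0.0], [40.0, 10.0], [15.0, 50.0]])
obs = np.array([7.0, 5.5, 6.0])
mags = np.arange(5.0, 7.1, 0.1)
grid = [(x, y) for x in range(-20, 41, 5) for y in range(-20, 41, 5)]
print(locate(obs, sites, mags, grid))
```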
The language parallel Pascal and other aspects of the massively parallel processor
NASA Technical Reports Server (NTRS)
Reeves, A. P.; Bruner, J. D.
1982-01-01
A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.
Aspect-Oriented Monitoring of C Programs
NASA Technical Reports Server (NTRS)
Havelund, Klaus; VanWyk, Eric
2008-01-01
The paper presents current work on extending ASPECTC with state machines, resulting in a framework for aspect-oriented monitoring of C programs. Such a framework can be used for testing purposes, or it can be part of a fault-protection strategy. The long-term goal is to explore the synergy between the fields of runtime verification, focused on program monitoring, and aspect-oriented programming, focused on more general program-development issues. The work is inspired by the observation that most work in this direction has been done for JAVA, partly due to the lack of easily accessible extensible compiler frameworks for C. The work is performed using the SILVER extensible attribute-grammar compiler framework, in which C has been defined as a host language. Our work consists of extending C with ASPECTC, and subsequently extending ASPECTC with state machines.
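A minimal sketch of the monitoring idea, independent of the ASPECTC/SILVER machinery: a property expressed as a state machine consumes events emitted at program points (the role the aspects play), flagging a violation by entering an error state. The property and event names are illustrative.

```python
class Monitor:
    """Checks the property: writes may only occur between open and close."""
    def __init__(self):
        self.state = "CLOSED"
    def step(self, event):
        table = {
            ("CLOSED", "open"): "OPEN",
            ("OPEN", "write"): "OPEN",
            ("OPEN", "close"): "CLOSED",
        }
        # Any transition not in the table is a property violation.
        self.state = table.get((self.state, event), "ERROR")
        return self.state

m = Monitor()
for e in ["open", "write", "close", "write"]:  # final write violates property
    print(e, "->", m.step(e))
```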
Spatiotemporal Patterns of Fault Slip Rates Across the Central Sierra Nevada Frontal Fault Zone
NASA Astrophysics Data System (ADS)
Rood, D. H.; Burbank, D.; Finkel, R. C.
2010-12-01
We examine patterns in fault slip rates through time and space across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38-39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and Be-10 surface exposure dating, we define mean fault slip rates, and by utilizing markers of different ages (generally, ~20 ka and ~150 ka), we examine rates through time and interactions among multiple faults over 10-100 ky timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm/yr (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm/yr along the West Fork of the Carson River at Woodfords. Our data permit relatively constant rates over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km between the northern Mono Basin (1.3 +0.6/-0.3 mm/yr at the Lundy Canyon site) and the Bridgeport Basin (0.3 ± 0.1 mm/yr). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin reflects a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection, extension is accommodated within a diffuse zone of normal and oblique faults, with extension rates increasing northward on the Fish Lake Valley fault. Where faults of the Eastern California Shear Zone terminate northward into the Mina Deflection, extension rates increase northward along the Sierra Nevada frontal fault zone to ~0.7 mm/yr in northern Mono Basin. This spatial pattern suggests that extension is transferred from fault systems to the east (e.g., the Fish Lake Valley fault) and localized on the Sierra Nevada frontal fault zone as Eastern California Shear Zone-Walker Lane belt faulting is transferred through the Mina Deflection.
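The underlying arithmetic is simple enough to illustrate: a mean slip rate is a geomorphic offset divided by the exposure age of the displaced marker. A minimal sketch with illustrative values (not the measurements reported above), propagating the uncertainties by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
offset = rng.normal(6.0, 1.0, 100_000)    # fault offset, m (mean, 1-sigma)
age = rng.normal(20_000, 2_000, 100_000)  # Be-10 exposure age, yr
rate = offset / age * 1_000               # slip rate, mm/yr
lo, med, hi = np.percentile(rate, [2.5, 50, 97.5])
print(f"slip rate = {med:.2f} +{hi - med:.2f}/-{med - lo:.2f} mm/yr (95% CI)")
```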
Wood traits related to size and life history of trees in a Panamanian rainforest.
Hietz, Peter; Rosner, Sabine; Hietz-Seifert, Ursula; Wright, S Joseph
2017-01-01
Wood structure differs widely among tree species, and species with faster growth, higher mortality, and larger maximum size have been reported to have fewer but larger vessels and higher hydraulic conductivity (Kh). However, previous studies compiled data from various sources, often failed to control for tree size, and rarely controlled for variation in other traits. We measured wood density, tree size, and vessel traits for 325 species from a wet forest in Panama, and compared wood and leaf traits to demographic traits using species-level data and phylogenetically independent contrasts. Wood traits showed strong phylogenetic signal, whereas pairwise relationships between traits were mostly phylogenetically independent. Trees with larger vessels had a lower fraction of the cross-sectional area occupied by vessel lumina, suggesting that the hydraulic efficiency of large vessels permits trees to dedicate a larger proportion of the wood to functions other than water transport. Vessel traits were more strongly correlated with the size of individual trees than with the maximal size of a species. When individual tree size was included in models, Kh scaled positively with maximal size and was the best predictor of both diameter and biomass growth rates, but was unrelated to mortality. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger, or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices, and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements.
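A minimal sketch of the core computation, not the paper's generation algorithm: once a fault tree has been derived from the network structure, the top-event probability is evaluated bottom-up, assuming independent permanent device faults. The two-route topology below is an illustrative example.

```python
def prob(node, p_fail):
    """Bottom-up evaluation of a fault tree of AND/OR gates and basic events."""
    kind = node[0]
    if kind == "basic":                  # leaf: a network device
        return p_fail[node[1]]
    probs = [prob(child, p_fail) for child in node[1]]
    if kind == "and":                    # all redundant elements must fail
        out = 1.0
        for q in probs:
            out *= q
        return out
    if kind == "or":                     # any series element failing suffices
        out = 1.0
        for q in probs:
            out *= (1.0 - q)
        return 1.0 - out
    raise ValueError(kind)

# Sensor reaches the sink through two redundant routers. Top event
# "sensor unreachable": the sensor fails OR both routers fail.
tree = ("or", [("basic", "sensor"),
               ("and", [("basic", "router1"), ("basic", "router2")])])
p = {"sensor": 1e-4, "router1": 1e-2, "router2": 1e-2}
print(prob(tree, p))  # ~2.0e-4
```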
Scharer, Katherine M.; Salisbury, J. Barrett; Arrowsmith, J. Ramon; Rockwell, Thomas K.
2014-01-01
In southern California, where fast slip rates and sparse vegetation contribute to crisp expression of faults and microtopography, field and high‐resolution topographic data (<1 m/pixel) increasingly are used to investigate the mark left by large earthquakes on the landscape (e.g., Zielke et al., 2010; Zielke et al., 2012; Salisbury, Rockwell, et al., 2012, Madden et al., 2013). These studies measure offset streams or other geomorphic features along a stretch of a fault, analyze the offset values for concentrations or trends along strike, and infer that the common magnitudes reflect successive surface‐rupturing earthquakes along that fault section. Wallace (1968) introduced the use of such offsets, and the challenges in interpreting their “unique complex history” with offsets on the Carrizo section of the San Andreas fault; these were more fully mapped by Sieh (1978) and followed by similar field studies along other faults (e.g., Lindvall et al., 1989; McGill and Sieh, 1991). Results from such compilations spurred the development of classic fault behavior models, notably the characteristic earthquake and slip‐patch models, and thus constitute an important component of the long‐standing contrast between magnitude–frequency models (Schwartz and Coppersmith, 1984; Sieh, 1996; Hecker et al., 2013). The proliferation of offset datasets has led earthquake geologists to examine the methods and approaches for measuring these offsets, uncertainties associated with measurement of such features, and quality ranking schemes (Arrowsmith and Rockwell, 2012; Salisbury, Arrowsmith, et al., 2012; Gold et al., 2013; Madden et al., 2013). In light of this, the Southern San Andreas Fault Evaluation (SoSAFE) project at the Southern California Earthquake Center (SCEC) organized a combined field activity and workshop (the “Fieldshop”) to measure offsets, compare techniques, and explore differences in interpretation. A thorough analysis of the measurements from the field activity will be provided separately; this paper discusses the complications presented by such offset measurements using two channels from the San Andreas fault as illustrative cases. We conclude with best approaches for future data collection efforts based on input from the Fieldshop.
Langenheim, Victoria; Willis, H.; Athens, N.D.; Chuchel, Bruce A.; Roza, J.; Hiscock, H.I.; Hardwick, C.L.; Kraushaar, S.M.; Knepprath, N.E.; Rosario, Jose J.
2013-01-01
A new isostatic residual gravity map of the northwest corner of Utah is based on compilation of preexisting data and new data collected by the Utah and United States Geological Surveys. Pronounced gravity lows occur over Junction, Grouse Creek, and upper Raft River Valleys, indicating significant thickness of low-density Tertiary sedimentary rocks and deposits. Gravity highs coincide with exposures of dense pre-Cenozoic rocks in the Raft River Mountains. Higher values in the eastern part of the map may be produced in part by deeper crustal density variations or crustal thinning. Steep linear gravity gradients coincide with mapped Neogene normal faults near Goose Creek and may define basin-bounding faults concealed beneath Junction and Upper Raft River Valleys.
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
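A minimal sketch of the computation these programs perform, using standard numerical tools rather than the PAWS/STEM code: the death-state probability of a continuous-time Markov model is an entry of p0·exp(QT), and SciPy's expm happens to use the Padé-approximation-with-scaling method that PAWS is named for. The rates below are illustrative.

```python
import numpy as np
from scipy.linalg import expm

LAM = 1e-4     # fault arrival rate (/hr)
DELTA = 3.6e3  # recovery rate (/hr); this rate disparity makes the ODEs stiff
# States: 0 operational, 1 recovering from a fault, 2 failed (death state).
Q = np.array([[-LAM,           LAM,  0.0],
              [DELTA, -(DELTA + LAM), LAM],
              [0.0,            0.0,  0.0]])
p0 = np.array([1.0, 0.0, 0.0])
T = 10.0  # mission time, hours
p_fail = (p0 @ expm(Q * T))[2]  # probability of having entered the death state
print(f"P(system failure by {T} h) = {p_fail:.3e}")
```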
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
Mechanical Effects of Normal Faulting Along the Eastern Escarpment of the Sierra Nevada, California
NASA Astrophysics Data System (ADS)
Martel, S. J.; Logan, J. M.; Stock, G. M.
2013-12-01
Here we test whether the regional near-surface stress field in the Sierra Nevada, California, and the near-surface fracturing that heavily influences the Sierran landscape are a mechanical response to normal faulting along its eastern escarpment. A compilation of existing near-surface stress measurements for the central Sierra Nevada, together with three new measurements, shows the most compressive horizontal stresses are 3-21 MPa, consistent with the widespread distribution of sheeting joints (near-surface fractures subparallel to the ground surface). In contrast, a new stress measurement at Aeolian Buttes in the Mono Basin, east of the range front fault system, reveals a horizontal principal tension of 0.014 MPa, consistent with the abundant vertical joints there. To evaluate mechanical effects of normal faulting, we modeled both normal faults and grabens in three ways: (1) dislocations of specified slip in an elastic half-space, (2) frictionless sliding surfaces in an elastic half-space; and (3) faults in thin elastic beams resting on an inviscid fluid. The different mechanical models predict concave upward flexure and widespread near-surface compressive stresses in the Sierra Nevada that surpass the measurements even for as little as 1 km of normal slip along the eastern escarpment, which exhibits 1-3 km of structural and topographic relief. The models also predict concave downward flexure of the bedrock floors and horizontal near-surface tensile stresses east of the escarpment. The thin-beam models account best for the topographic relief of the eastern escarpment and the measured stresses given current best estimates for the rheology of the Sierran lithosphere. Our findings collectively indicate that the regional near-surface stress field and the widespread near-surface fracturing directly reflect the mechanical response to normal faulting along the eastern escarpment. These results have broad scientific and engineering implications for slope stability, hydrology, and geomorphology in and near fault-bounded mountain ranges in general.
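A minimal sketch of the third model class (a thin elastic beam/plate resting on an inviscid fluid), using the textbook broken-plate flexure profile (e.g., Turcotte & Schubert); the parameter values are illustrative, not those calibrated for the Sierra Nevada.

```python
import numpy as np

E, NU = 7e10, 0.25     # Young's modulus (Pa), Poisson's ratio (assumed)
H = 10e3               # effective elastic thickness (m, illustrative)
DRHO, G = 600.0, 9.81  # density contrast (kg/m^3), gravity (m/s^2)
D = E * H**3 / (12 * (1 - NU**2))     # flexural rigidity
alpha = (4 * D / (DRHO * G)) ** 0.25  # flexural parameter (m)

def deflection(x, w0=1000.0):
    """Broken-plate profile under an end load: w = w0 exp(-x/a) cos(x/a)."""
    return w0 * np.exp(-x / alpha) * np.cos(x / alpha)

x = np.linspace(0, 4 * alpha, 5)
print(f"alpha = {alpha / 1e3:.1f} km", deflection(x))
```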
NASA Astrophysics Data System (ADS)
Custódio, Susana; Lima, Vânia; Vales, Dina; Cesca, Simone; Carrilho, Fernando
2016-04-01
The matching between linear trends of hypocentres and fault planes indicated by focal mechanisms (FMs) is frequently used to infer the location and geometry of active faults. This practice works well in regions of fast lithospheric deformation, where earthquake patterns are clear and major structures accommodate the bulk of deformation, but typically fails in regions of slow and distributed deformation. We present a new joint FM and hypocentre cluster algorithm that is able to detect systematically the consistency between hypocentre lineations and FMs, even in regions of distributed deformation. We apply the method to the Azores-western Mediterranean region, with particular emphasis on western Iberia. The analysis relies on a compilation of hypocentres and FMs taken from regional and global earthquake catalogues, academic theses and technical reports, complemented by new FMs for western Iberia. The joint clustering algorithm images both well-known and new seismo-tectonic features. The Azores triple junction is characterised by FMs with vertical pressure (P) axes, in good agreement with the divergent setting, and the Iberian domain is characterised by NW-SE oriented P axes, indicating a response of the lithosphere to the ongoing oblique convergence between Nubia and Eurasia. Several earthquakes remain unclustered in the western Mediterranean domain, which may indicate a response to local stresses. The major regions of consistent faulting that we identify are the mid-Atlantic ridge, the Terceira rift, the Trans-Alboran shear zone and the north coast of Algeria. In addition, other smaller earthquake clusters present a good match between epicentre lineations and FM fault planes. These clusters may signal single active faults or wide zones of distributed but consistent faulting. Mainland Portugal is dominated by strike-slip earthquakes with fault planes coincident with the predominant NNE-SSW and WNW-ESE oriented earthquake lineations. Clusters offshore SW Iberia are predominantly strike-slip or reverse, confirming previous suggestions of slip partitioning.
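A minimal sketch of the consistency test at the heart of such an algorithm, under stated assumptions (planar coordinates, a fixed tolerance, clustering already done): fit a lineation to a cluster of epicentres and check whether either nodal-plane strike of the associated focal mechanisms matches it.

```python
import numpy as np

def lineation_azimuth(lonlat):
    """Azimuth (deg from north, mod 180) of the principal axis of epicentres."""
    xy = lonlat - lonlat.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(xy.T))
    e, n = evecs[:, -1]  # eigenvector of the largest eigenvalue (east, north)
    return np.degrees(np.arctan2(e, n)) % 180.0

def consistent(azimuth, strikes, tol=20.0):
    """True if any nodal-plane strike matches the lineation within tol deg."""
    d = np.abs((np.asarray(strikes) - azimuth + 90.0) % 180.0 - 90.0)
    return bool(np.any(d <= tol))

# Toy cluster trending ~NNE, with FM nodal-plane strikes of 30 and 120 deg.
epis = np.array([[0.00, 0.00], [0.01, 0.02], [0.02, 0.04], [0.03, 0.06]])
az = lineation_azimuth(epis)  # ~27 deg
print(az, consistent(az, strikes=[30.0, 120.0]))  # True: 30 matches
```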
NASA Astrophysics Data System (ADS)
Essid, El Mabrouk; Kadri, Ali; Inoubli, Mohamed Hedi; Zargouni, Fouad
2016-07-01
Northern Tunisia is occupied by the Tellian domain, which constitutes the eastern end of the Maghrebides, an Alpine fold-thrust belt. The study area includes part of the Tellian domain (Mogodos belt) and its foreland (Bizerte region). Most outcrops in this region consist of Numidian thrust-sheet flysch attributed to the lower Oligocene-Burdigalian. In the study area, the major fault systems are still a subject of discussion. The structure of the Numidian nappe and the distribution of basalt and Triassic outcrops within and at the front of this Tellian domain deserve more explanation. In this work, we intend to update the structural scheme and the tectonic evolution of northern Tunisia, taking into account salt tectonics and magmatism. The updated tectonic evolution will be integrated into the geodynamic framework of the central Mediterranean. For this purpose, we have analyzed morphologic, seismic, and structural data. The compilation of the results has allowed the identification of new regional NE-trending faults dipping towards the NW: the Bled el Aouana-Bizerte, the Sejnane-Ras Enjla, and the Oued el Harka faults. They correspond to the reactivation of deep-seated normal faults splaying on the Triassic evaporites. This fault system constitutes the main component of the northern Tunisia structural scheme and has influenced its tectonic evolution, which is marked by the following main stages. The Tellian thrust sheets were immobilized at the uppermost Langhian. During the major Tortonian NW-trending compressive phase, these faults were reactivated with reverse kinematics and controlled the distribution of the post-nappe Neogene continental deposits. In the early Pleistocene, a compressive NNW-trending event reactivated these faults again with sinistral-reverse movements and deformed the post-nappe Neogene series. From the late Quaternary to the present, the tectonic regime has remained compressive, with a NNW-trending maximum horizontal stress.
Bonilla, Manuel G.; Mark, Robert K.; Lienkaemper, James J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation in which the variance results primarily from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating Ms with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
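A minimal sketch of the regression setting described above: ordinary least squares of Ms on the logarithm of rupture length, with the residual standard deviation capturing the stochastic variance. The data points below are labeled placeholders purely to make the code runnable; the paper's actual dataset and coefficients are not reproduced here.

```python
import numpy as np

L_km = np.array([10.0, 25.0, 60.0, 150.0, 300.0])  # rupture lengths (toy)
Ms = np.array([6.1, 6.6, 7.0, 7.5, 7.9])           # magnitudes (toy)
X = np.column_stack([np.ones_like(L_km), np.log10(L_km)])
coef, res, *_ = np.linalg.lstsq(X, Ms, rcond=None)  # ordinary least squares
a, b = coef
sigma = np.sqrt(res[0] / (len(Ms) - 2))  # residual standard deviation
print(f"Ms = {a:.2f} + {b:.2f} log10(L); sigma = {sigma:.2f}")
```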
Digital release of the Alaska Quaternary fault and fold database
NASA Astrophysics Data System (ADS)
Koehler, R. D.; Farrell, R.; Burns, P.; Combellick, R. A.; Weakland, J. R.
2011-12-01
The Alaska Division of Geological & Geophysical Surveys (DGGS) has designed a Quaternary fault and fold database for Alaska in conformance with standards defined by the U.S. Geological Survey for the National Quaternary fault and fold database. Alaska is the most seismically active region of the United States; however, little information exists on the location, style of deformation, and slip rates of Quaternary faults. Thus, to provide an accurate, user-friendly, reference-based fault inventory to the public, we are producing a digital GIS shapefile of Quaternary fault traces and compiling summary information on each fault. Here, we present relevant information pertaining to the digital GIS shapefile and the online access and availability of the Alaska database. This database will be useful for engineering geologic studies; geologic, geodetic, and seismic research; and policy planning. The data will also contribute to the fault source database being constructed by the Global Earthquake Model (GEM) Faulted Earth project, which is developing tools to better assess earthquake risk. We derived the initial list of Quaternary active structures from the Neotectonic Map of Alaska (Plafker et al., 1994) and supplemented it with more recent data where available. Because of the limited level of knowledge of Quaternary faults in Alaska, pre-Quaternary fault traces from the Plafker map are shown as a layer in our digital database, both so users may view a more accurate distribution of mapped faults and to suggest the possibility that some older traces may be active yet unstudied. The database will be updated as new information is developed. We selected each fault by reviewing the literature and georegistered the faults from 1:250,000-scale paper maps contained in 1970s-vintage and earlier bedrock maps; paper map scales range from 1:20,000 to 1:500,000. Fault parameters in our GIS fault attribute tables include fault name, age, slip rate, slip sense, dip direction, fault line type (i.e., well constrained, moderately constrained, or inferred), and mapped scale. Each fault is assigned a three-integer CODE, based upon age, slip rate, and how well the fault is located. This CODE dictates the line type for the GIS files. To host the database, we are developing an interactive web-map application with ArcGIS for Server and the ArcGIS API for JavaScript from Environmental Systems Research Institute, Inc. (Esri). The web-map application will present the database through a visible scale range, with each fault displayed at the resolution of the original map. Application functionality includes: search by name or location, identification of a fault by manual selection, and choice of base map. Base map options include topographic, satellite imagery, and digital elevation maps available from ArcGIS Online. We anticipate that the database will be publicly accessible from a portal embedded on the DGGS website by the end of 2011.
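A minimal sketch of how such a three-integer CODE might be assigned from attribute values, one digit each for age class, slip-rate class, and location certainty; the class boundaries below are assumptions for illustration, not the database's actual scheme.

```python
def fault_code(age_ka, slip_mm_yr, location):
    """Concatenate three class digits into a single CODE (boundaries assumed)."""
    age_digit = 1 if age_ka <= 15 else 2 if age_ka <= 130 else 3
    rate_digit = 1 if slip_mm_yr >= 5 else 2 if slip_mm_yr >= 1 else 3
    loc_digit = {"well constrained": 1,
                 "moderately constrained": 2,
                 "inferred": 3}[location]
    return age_digit * 100 + rate_digit * 10 + loc_digit

record = {"name": "Denali fault", "age_ka": 12, "slip_mm_yr": 9.5,
          "location": "well constrained"}
record["CODE"] = fault_code(record["age_ka"], record["slip_mm_yr"],
                            record["location"])
print(record["CODE"])  # 111 -> youngest, fastest, best-located line style
```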
Is Downtown Seattle on the Hanging Wall of the Seattle Fault?
NASA Astrophysics Data System (ADS)
Pratt, T. L.
2008-12-01
The Seattle fault is an ~80-km-long thrust or reverse fault that trends east-west beneath the Puget Lowland of western Washington State, and is interpreted to extend beneath the Seattle urban area just south of the downtown area. The fault ruptured about A.D. 930 in a large earthquake that uplifted parts of the Puget Sound shoreline as much as 7 m, caused a tsunami in Puget Sound, and triggered extensive landslides throughout the area. Seismic reflection profiles indicate that the fault has 3 or more fault splays that together form the Seattle fault zone. Models for the Seattle fault zone vary considerably, but most models place the northern edge of the Seattle fault zone south of the downtown area. These interpretations require that the fault zone shifts about 2 km to the south in the Seattle area relative to its location to the east (Bellevue) and west (Bainbridge Island). Potential field anomalies, particularly prominent magnetic highs associated with dipping, shallow conglomerate layers, are not continuous in the downtown Seattle area as observed to the east and west. Compilation and re-interpretation of all the existing seismic profiles in the area indicate that the northern strand of the Seattle fault, specifically a fold associated with the northernmost, blind fault strand, lies beneath the northern part of downtown Seattle, about 1.5 to 2 km farther north than has previously been interpreted. This study focuses on one previously unpublished seismic profile in central Puget Sound that shows a remarkable image of the Seattle fault, with shallow subhorizontal layers disrupted or folded by at least two thrust faults and several shallow backthrusts. These apparently Holocene layers are arched gently upwards, with the peak of the anticline in line with Alki and Restoration Points on the east and west sides of Puget Sound, respectively. The profile shows that the shallow part of the northern fault strand dips to the south at about 35 degrees, consistent with the 35 to 40 degree dip previously interpreted from tomography data. A second fault strand about 2 km south of the northern strand causes gentle folding of the Holocene strata. Two prominent backthrusts occur on the south side of the anticline, with the southern backthrust on strike with a prominent scarp on the eastern shoreline. A large erosional paleochannel beneath west Seattle and the Duwamish waterway extends beneath Elliot Bay and obscures potential field anomalies and seismic reflection evidence for the fault strands. However, hints of fault-related features on the profiles in Elliot Bay, and clear images in Lake Washington, indicate that the fault strands extend beneath the city of Seattle in the downtown area. If indeed the northern strand of the Seattle fault lies beneath the northern part of downtown Seattle, the downtown area may experience ground deformation during a major Seattle fault earthquake, and focusing of energy in the fault zone may occur farther north than previously estimated.
MetaJC++: A flexible and automatic program transformation technique using meta framework
NASA Astrophysics Data System (ADS)
Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.
2014-09-01
A compiler is a tool that translates abstract code containing natural-language terms into machine code. Meta compilers are available that compile more than one language. We have developed a meta framework that intends to combine two dissimilar programming languages, namely C++ and Java, to provide a flexible object-oriented programming platform for the user. Suitable constructs from both languages have been combined, thereby forming a new and stronger Meta-Language. The framework is developed using the compiler-writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser have been developed to accommodate the complete keyword set and syntax of both languages. Two intermediate representations are used between the translation of the source program to machine code. An Abstract Syntax Tree is used as a high-level intermediate representation that preserves the hierarchical properties of the source program. A new machine-independent stack-based byte-code has also been devised to act as a low-level intermediate representation. The byte-code is essentially organised into an output class file that can be used to produce an interpreted output. The results, especially in the sphere of providing C++ concepts in Java, have given an insight into the potential strong features of the resultant meta-language.
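A minimal sketch of the two intermediate representations in miniature: a tiny expression AST compiled into a stack-based byte-code and then interpreted. This illustrates the general pattern only; the Meta-Language's actual AST, byte-code format, and class-file layout are not reproduced.

```python
def compile_expr(node, code):
    """Post-order walk: children first, operator last (stack discipline)."""
    if isinstance(node, (int, float)):
        code.append(("PUSH", node))
    else:
        op, left, right = node
        compile_expr(left, code)
        compile_expr(right, code)
        code.append(("ADD" if op == "+" else "MUL", None))
    return code

def run(code):
    """Interpret the stack-based byte-code."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else a * b)
    return stack.pop()

ast = ("+", 2, ("*", 3, 4))  # 2 + 3 * 4
bytecode = compile_expr(ast, [])
print(bytecode, "->", run(bytecode))  # ... -> 14
```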
Communications and tracking expert systems study
NASA Technical Reports Server (NTRS)
Leibfried, T. F.; Feagin, Terry; Overland, David
1987-01-01
The original objectives of the study consisted of five broad areas of investigation: criteria and issues for explanation of communication and tracking system anomaly detection, isolation, and recovery; data storage simplification issues for fault detection expert systems; data selection procedures for decision tree pruning and optimization to enhance the abstraction of pertinent information for clear explanation; criteria for establishing levels of explanation suited to needs; and analysis of expert system interaction and modularization. Progress was made in all areas, but to a lesser extent in the criteria for establishing levels of explanation suited to needs. Among the types of expert systems studied were those related to anomaly or fault detection, isolation, and recovery.
[Medical Equipment Maintenance Methods].
Liu, Hongbin
2015-09-01
The high technology content and complexity of medical equipment, together with its safety and effectiveness requirements, place high demands on maintenance work. This paper introduces some basic methods of medical instrument maintenance, including fault tree analysis, the node method, and the exclusion method, which are three important methods in medical equipment maintenance; by using these three methods on instruments for which circuit drawings are available, hardware breakdown maintenance can be done easily. The paper also introduces methods for handling some special fault conditions, in order to avoid detours when the same problems are encountered. Learning these methods is very important for staff newly engaged in this area.
Ethnic Studies Activities for Bicentennial Celebration in the Classroom.
ERIC Educational Resources Information Center
Simpkins, Beverly
Compiled by a teacher, this handbook will serve as a guide to elementary teachers who want to provide students with an understanding of their American heritage. Classroom activities for use in a reading or social studies lesson are described. The activities are many and varied and include the following: developing a family tree; making a sandwich…
National Proceedings: Forest and Conservation Nursery Associations-2002
L.E. Riley; R.K. Dumroese; T.D. Landis
2002-01-01
This proceedings is a compilation of 33 papers that were presented at the regional meetings of the forest and conservation nursery associations in the United States in 2002. The joint meeting of the Southern Forest Nursery Association and the Northeastern Forest and Conservation Nursery Association was held at the DoubleTree Hotel and Conference Center in Gainesville,...
National Proceedings: Forest and Conservation Nursery Associations-2008
R. K. Dumroese; L. E. Riley
2009-01-01
These proceedings are a compilation of 27 papers that were presented at the regional meetings of the forest and conservation nursery associations in the United States in 2008. The Western Forest and Conservation Nursery Association meeting was held at the DoubleTree Hotel in Missoula, Montana, on June 23 to 25. The meeting was hosted by the Montana Conservation...
Chapter 8: Nest Success and the Effects of Predation on Marbled Murrelets
S. Kim Nelson; Thomas E. Hamer
1995-01-01
We summarize available information on Marbled Murrelet (Brachyramphus marmoratus) productivity and sources of mortality compiled from known tree nests in North America. We found that 72 percent (23 of 32) of nests were unsuccessful. Known causes of nest failure included predation of eggs and chicks (n = 10), nest abandonment by adults (n = 4), chicks...
NASA Astrophysics Data System (ADS)
Keller, Kathrin M.; Lienert, Sebastian; Bozbiyik, Anil; Stocker, Thomas F.; Churakova (Sidorova), Olga V.; Frank, David C.; Klesse, Stefan; Koven, Charles D.; Leuenberger, Markus; Riley, William J.; Saurer, Matthias; Siegwolf, Rolf; Weigt, Rosemarie B.; Joos, Fortunat
2017-05-01
Measurements of the stable carbon isotope ratio (δ13C) on annual tree rings offer new opportunities to evaluate mechanisms of variations in photosynthesis and stomatal conductance under changing CO2 and climate conditions, especially in conjunction with process-based biogeochemical model simulations. The isotopic discrimination is indicative of the ratio between the CO2 partial pressure in the intercellular cavities and the atmosphere (ci/ca) and of the ratio of assimilation to stomatal conductance, termed intrinsic water-use efficiency (iWUE). We performed isotope-enabled simulations over the industrial period with the land biosphere module (CLM4.5) of the Community Earth System Model and the Land Surface Processes and Exchanges (LPX-Bern) dynamic global vegetation model. Results for C3 tree species show good agreement with a global compilation of δ13C measurements on leaves, though modeled 13C discrimination by C3 trees is smaller in arid regions than measured. A compilation of 76 tree-ring records, mainly from Europe, boreal Asia, and western North America, suggests on average small 20th century changes in isotopic discrimination and in ci/ca and an increase in iWUE of about 27 % since 1900. LPX-Bern results match these century-scale reconstructions, supporting the idea that the physiology of stomata has evolved to optimize trade-offs between carbon gain by assimilation and water loss by transpiration. In contrast, CLM4.5 simulates an increase in discrimination and in turn a change in iWUE that is almost twice as large as that revealed by the tree-ring data. Factorial simulations show that these changes are mainly in response to rising atmospheric CO2. The results suggest that the downregulation of ci/ca and of photosynthesis by nitrogen limitation is possibly too strong in the standard setup of CLM4.5 or that there may be problems associated with the implementation of conductance, assimilation, and related adjustment processes on long-term environmental changes.
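The quantities in this abstract are linked by standard, simple relations for C3 plants (the Farquhar-type model), which a short worked example makes concrete: discrimination Δ = a + (b − a)·ci/ca, and iWUE = A/gs = (ca − ci)/1.6. The fractionation coefficients are textbook values; the δ13C inputs below are illustrative, not the study's data.

```python
A_FRAC = 4.4   # permil, fractionation by diffusion through stomata
B_FRAC = 27.0  # permil, fractionation by carboxylation

def ci_over_ca(delta_plant, delta_atm):
    """Invert Delta = a + (b - a) * ci/ca for ci/ca."""
    disc = (delta_atm - delta_plant) / (1 + delta_plant / 1000.0)
    return (disc - A_FRAC) / (B_FRAC - A_FRAC)

def iwue(delta_plant, delta_atm, ca_ppm):
    """Intrinsic water-use efficiency, A/gs = ca * (1 - ci/ca) / 1.6."""
    return ca_ppm * (1 - ci_over_ca(delta_plant, delta_atm)) / 1.6

# Illustrative tree-ring values for ~1900 vs ~2000 conditions:
print(iwue(-24.0, -6.4, 296.0))  # early 20th century, ~73 umol/mol
print(iwue(-25.5, -8.0, 370.0))  # ~2000 (more negative atmospheric d13C)
```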
Tree cover and species composition effects on academic performance of primary school students.
Sivarajah, Sivajanani; Smith, Sandy M; Thomas, Sean C
2018-01-01
Human exposure to green space and vegetation is widely recognized to result in physical and mental health benefits; however, to date, the specific effects of tree cover, diversity, and species composition on student academic performance have not been investigated. We compiled standardized performance scores in Grades 3 and 6 for the collective student body in 387 schools across the Toronto District School Board (TDSB), and examined variation in relation to tree cover, tree diversity, and tree species composition based on comprehensive inventories of trees on school properties combined with aerial-photo-based assessments of tree cover. Analyses accounted for variation due to socioeconomic factors using the learning opportunity index (LOI), a regional composite index of external challenges to learning that incorporates income and other factors, such as students with English as a second language. As expected, LOI had the greatest influence on student academic performance; however, the proportion of tree cover, as distinct from other types of "green space" such as grass, was found to be a significant positive predictor of student performance, accounting for 13% of the variance explained in a statistical model predicting mean student performance assessments. The effects of tree cover and species composition were most pronounced in schools that showed the highest level of external challenges, suggesting the importance of urban forestry investments in these schools.
Mori, J.
1996-01-01
Details of the M 4.3 foreshock to the Joshua Tree earthquake were studied using P waves recorded on the Southern California Seismic Network and the Anza network. Deconvolution, using an M 2.4 event as an empirical Green's function, corrected for complicated path and site effects in the seismograms and produced simple far-field displacement pulses that were inverted for a slip distribution. Both possible fault planes, north-south and east-west, for the focal mechanism were tested by a least-squares inversion procedure with a range of rupture velocities. The results showed that the foreshock ruptured the north-south plane, similar to the mainshock. The foreshock initiated a few hundred meters south of the mainshock and ruptured to the north, toward the mainshock hypocenter. The mainshock (M 6.1) initiated near the northern edge of the foreshock rupture 2 hr later. The foreshock had a high stress drop (320 to 800 bars) and broke a small portion of the fault adjacent to the mainshock but was not able to immediately initiate the mainshock rupture.
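A minimal sketch of the empirical Green's function idea: dividing the mainshock spectrum by the small event's spectrum removes the shared path and site effects, with a water level guarding against division by near-zero spectral values. The synthetic signals and water-level fraction are illustrative assumptions.

```python
import numpy as np

def egf_deconvolve(mainshock, egf, water=0.01):
    """Water-level spectral division; returns the relative source time function."""
    n = len(mainshock)
    M = np.fft.rfft(mainshock, n)
    G = np.fft.rfft(egf, n)
    power = np.abs(G) ** 2
    floor = water * power.max()  # water level on the denominator
    return np.fft.irfft(M * np.conj(G) / np.maximum(power, floor), n)

# Toy usage: a synthetic mainshock built from a boxcar rupture convolved
# with the small-event record, then recovered by deconvolution.
t = np.arange(512) * 0.01
egf = np.exp(-((t - 1.0) / 0.05) ** 2)             # small-event proxy record
stf_true = np.where((t > 0) & (t < 0.3), 1.0, 0.0)  # 0.3-s boxcar rupture
main = np.convolve(egf, stf_true)[:512]
stf = egf_deconvolve(main, egf)
print(t[np.argmax(stf)], stf.max())  # recovered pulse within the boxcar window
```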
Bedrosian, Paul A.; Burgess, Matthew K.; Nishikawa, Tracy
2013-01-01
Within the south-western Mojave Desert, the Joshua Basin Water District is considering applying imported water into infiltration ponds in the Joshua Tree groundwater sub-basin in an attempt to artificially recharge the underlying aquifer. Scarce subsurface hydrogeological data are available near the proposed recharge site; therefore, time-domain electromagnetic (TDEM) data were collected and analysed to characterize the subsurface. TDEM soundings were acquired to estimate the depth to water on either side of the Pinto Mountain Fault, a major east-west trending strike-slip fault that transects the proposed recharge site. While TDEM is a standard technique for groundwater investigations, special care must be taken when acquiring and interpreting TDEM data in a two-dimensional (2D) faulted environment. A subset of the TDEM data consistent with a layered-earth interpretation was identified through a combination of three-dimensional (3D) forward modelling and diffusion time-distance estimates. Inverse modelling indicates an offset in water table elevation of nearly 40 m across the fault. These findings imply that the fault acts as a low-permeability barrier to groundwater flow in the vicinity of the proposed recharge site. Existing production wells on the south side of the fault, together with a thick unsaturated zone and permeable near-surface deposits, suggest the southern half of the study area is suitable for artificial recharge. These results illustrate the effectiveness of targeted TDEM in support of hydrological studies in a heavily faulted desert environment where data are scarce and the cost of obtaining these data by conventional drilling techniques is prohibitive.
NASA Astrophysics Data System (ADS)
Kelson, K. I.
2004-12-01
Detailed mapping of uplifted marine platforms bordering the Carquinez Strait between Benicia and Pinole, California, provides data on the pattern and rate of late Quaternary deformation across the northern East Bay Hills. Field mapping, interpretation of early 20th-century topographic data, analysis of aerial photography, and compilation of onshore borehole data show the presence of remnants of three platforms, with back-edge elevations of about 4 m, 12 m, and 18 m. Based on U-series dates (Helley et al., 1993) and comparison of platform elevations to published sea-level curves, the 12-m-high and 18-m-high platforms correlate with substage 5e (ca. 120 ka) and stage 9 (ca. 330 ka) sea-level high stands, respectively. West of the Southhampton fault, longitudinal profiles of platform back-edges suggest that the East Bay Hills between Pinole and Vallejo have undergone block uplift at a rate of 0.05 +/- 0.01 m/ka without substantial tilting or warping. With uncertainty of <3 m, the 120 ka and 330 ka platforms are at the same elevations across the NW-striking Franklin fault. This west-vergent reverse fault previously was interpreted to have had late Pleistocene activity and to accommodate crustal shortening in the East Bay Hills. Our data indicate an absence of vertical displacement across the Franklin fault within at least the past 120 ka and perhaps 330 ka. In contrast, the stage 5e and 9 platforms show up-on-the-east vertical displacement and gentle westward tilting across the N-striking Southhampton fault, with a late Pleistocene vertical slip rate of >0.02 m/ka. The northerly strike and prominent geomorphic expression of this potentially active fault differ from those of the Franklin fault. Our mapping of the Southhampton fault suggests that it accommodates dextral shear in the East Bay Hills, and is one of several left-stepping, en echelon N-striking faults (collectively, the "Contra Costa shear zone", CCSZ) in the East Bay Hills. Faults within this zone coincide with geomorphic features suggestive of late Quaternary dextral strike slip and appear to truncate or displace NW-striking reverse faults (e.g., Franklin fault) that do not displace the late Quaternary marine platform sequence. These data support an interpretation that the CCSZ accommodates regional dextral shear, and possibly represents the northern extension of the Calaveras fault. Overall, the marine terraces provide excellent strain gauges from which to evaluate the pattern and rate of late Quaternary deformation throughout the northern East Bay Hills.
Langridge, R.M.; Stenner, Heidi D.; Fumal, T.E.; Christofferson, S.A.; Rockwell, T.K.; Hartleb, R.D.; Bachhuber, J.; Barka, A.A.
2002-01-01
The Mw 7.4 17 August 1999 İzmit earthquake ruptured five major fault segments of the dextral North Anatolian Fault Zone. The 26-km-long, N86°W-trending Sakarya fault segment (SFS) extends from the Sapanca releasing step-over in the west to near the town of Akyazi in the east. The SFS emerges from Lake Sapanca as two distinct fault traces that rejoin to traverse the Adapazari Plain to Akyazi. Offsets were measured across 88 cultural and natural features that cross the fault, such as roads, cornfield rows, rows of trees, walls, rails, field margins, ditches, vehicle ruts, a dike, and ground cracks. The maximum displacement observed for the İzmit earthquake (∼5.1 m) was encountered on this segment. Dextral displacement for the SFS rises from less than 1 m at Lake Sapanca to greater than 5 m near Arifiye, only 3 km away. Average slip decreases uniformly to the east from Arifiye until the fault steps left from Sagir to Kazanci to the N75°W, 6-km-long Akyazi strand, where slip drops to less than 1 m. The Akyazi strand passes eastward into the Akyazi Bend, which consists of a high-angle bend (18°-29°) between the Sakarya and Karadere fault segments, a 6-km gap in surface rupture, and high aftershock energy release. Complex structural geometries exist between the İzmit, Düzce, and 1967 Mudurnu fault segments that have arrested surface ruptures on timescales ranging from 30 sec to 88 days to 32 yr. The largest of these step-overs may have acted as a rupture segmentation boundary in previous earthquake cycles.
Dover, James H.; Tailleur, Irvin L.; Dumoulin, Julie A.
2004-01-01
The map depicts the field distribution and contact relations between stratigraphic units, the tectonic relations between major stratigraphic sequences, and the detailed internal structure of these sequences. The stratigraphic sequences formed in a variety of continental margin depositional environments, and subsequently underwent a complex deformational history of imbricate thrust faulting and folding. A compilation of micro- and macrofossil identifications is included in this data set.
Care 3 model overview and user's guide, first revision
NASA Technical Reports Server (NTRS)
Bavuso, S. J.; Petersen, P. L.
1985-01-01
A manual was written to introduce the CARE III (Computer-Aided Reliability Estimation) capability to reliability and design engineers who are interested in predicting the reliability of highly reliable fault-tolerant systems. It was also structured to serve as a quick-look reference manual for more experienced users. The guide covers CARE III modeling and reliability predictions for execution in the CDC Cyber 170 series computers, DEC VAX-11/700 series computers, and most machines that compile ANSI Standard FORTRAN 77.
Certification trails for data structures
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. The applicability of the certification trail technique is significantly generalized. Previously, certification trails had to be customized to each algorithm application; here, trails appropriate to wide classes of algorithms were developed. These certification trails are based on common data-structure operations such as those carried out using balanced binary trees and heaps. Any algorithm using these sets of operations can therefore employ the certification trail method to achieve software fault tolerance. To exemplify the scope of the generalization of the certification trail technique provided, constructions of trails for abstract data types such as priority queues and union-find structures are given. These trails are applicable to any data-structure implementation of the abstract data type. It is also shown that these ideas lead naturally to monitors for data-structure operations.
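To make the mechanism concrete, the following toy sketch (not the authors' construction) applies a certification trail to a priority-queue-based sort: the first execution emits the answer together with a trail, here the sorting permutation, and an independent checker validates both in linear time. The function names are invented for illustration.

```python
import heapq

def sort_with_trail(data):
    """First execution: sort with a priority queue (binary heap) and emit a
    certification trail: the permutation of input indices that sorts data."""
    heap = [(v, i) for i, v in enumerate(data)]
    heapq.heapify(heap)
    pairs = [heapq.heappop(heap) for _ in range(len(data))]
    answer = [v for v, _ in pairs]
    trail = [i for _, i in pairs]
    return answer, trail

def check_with_trail(data, answer, trail):
    """Independent O(n) checker: a fault in the first execution is caught
    unless both executions fail in a mutually consistent way."""
    seen = [False] * len(data)
    for i in trail:                              # trail must be a permutation
        if not 0 <= i < len(data) or seen[i]:
            return False
        seen[i] = True
    if [data[i] for i in trail] != answer:       # answer must match the trail
        return False
    return all(a <= b for a, b in zip(answer, answer[1:]))  # and be ordered

answer, trail = sort_with_trail([5, 1, 4, 1])
assert check_with_trail([5, 1, 4, 1], answer, trail)
```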
NASA Technical Reports Server (NTRS)
Braden, W. B.
1992-01-01
This talk discusses the importance of providing a process operator with concise information about a process fault including a root cause diagnosis of the problem, a suggested best action for correcting the fault, and prioritization of the problem set. A decision tree approach is used to illustrate one type of approach for determining the root cause of a problem. Fault detection in several different types of scenarios is addressed, including pump malfunctions and pipeline leaks. The talk stresses the need for a good data rectification strategy and good process models along with a method for presenting the findings to the process operator in a focused and understandable way. A real time expert system is discussed as an effective tool to help provide operators with this type of information. The use of expert systems in the analysis of actual versus predicted results from neural networks and other types of process models is discussed.
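The decision-tree style of root-cause analysis mentioned in the talk can be pictured as a short cascade of symptom tests terminating in a diagnosis, a suggested best action, and a priority. The sketch below is purely hypothetical; the symptoms, causes, actions, and priorities are invented and are not from the talk.

```python
# Hypothetical symptoms, diagnoses, actions, and priorities, for illustration.
def diagnose(flow_low, pump_dp_high, inlet_pressure_low):
    """Walk a small decision tree from symptoms to (root cause, best action,
    priority) for presentation to the process operator."""
    if not flow_low:
        return ("no fault detected", "continue monitoring", 3)
    if pump_dp_high:
        if inlet_pressure_low:
            return ("suction-side blockage or upstream pipeline leak",
                    "isolate upstream segment and inspect for leaks", 1)
        return ("pump degradation or impeller wear",
                "switch to standby pump; schedule maintenance", 2)
    return ("downstream restriction or valve fault",
            "check discharge valves and downstream line", 2)

print(diagnose(flow_low=True, pump_dp_high=True, inlet_pressure_low=False))
```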
Modeling Off-Nominal Behavior in SysML
NASA Technical Reports Server (NTRS)
Day, John C.; Donahue, Kenneth; Ingham, Michel; Kadesch, Alex; Kennedy, Andrew K.; Post, Ethan
2012-01-01
Specification and development of fault management functionality in systems is performed in an ad hoc way - more of an art than a science. Improvements to system reliability, availability, safety and resilience will be limited without infusion of additional formality into the practice of fault management. Key to the formalization of fault management is a precise representation of off-nominal behavior. Using the upcoming Soil Moisture Active-Passive (SMAP) mission for source material, we have modeled the off-nominal behavior of the SMAP system during its initial spin-up activity, using the System Modeling Language (SysML). In the course of developing these models, we have developed generic patterns for capturing off-nominal behavior in SysML. We show how these patterns provide useful ways of reasoning about the system (e.g., checking for completeness and effectiveness) and allow the automatic generation of typical artifacts (e.g., success trees and FMECAs) used in system analyses.
Late Quaternary tectonic activity and lake level change in the Rukwa Rift Basin
NASA Astrophysics Data System (ADS)
Delvaux, D.; Kervyn, F.; Vittori, E.; Kajara, R. S. A.; Kilembe, E.
1998-04-01
Interpretation of remotely sensed images and air photographs, compilation of geological and topographical maps, morphostructural and fault kinematic observations and 14C dating reveal that, besides obvious climatic influences, the lake water extent and sedimentation in the closed hydrological system of Lake Rukwa is strongly influenced by tectonic processes. A series of sandy ridges, palaeolacustrine terraces and palaeounderwater delta fans are related to an Early Holocene high lake level and subsequent progressive lowering. The maximum lake level was controlled by the altitude of the watershed between the Rukwa and Tanganyika hydrological systems. Taking as reference the present elevation of the palaeolacustrine terraces around Lake Rukwa, two orders of vertical tectonic movement are evidenced: i) a general uplift centred on the Rungwe Volcanic Province between the Rukwa and Malawi Rift Basins; and ii) a tectonic northeastward tilting of the entire Rukwa Rift Basin, including the depression and rift shoulders. This is supported by the observed hydromorphological evolution. Local uplift is also induced by the development of an active fault zone in the central part of the depression, in a prolongation of the Mbeya Range-Galula Fault system. The Ufipa and Lupa Border Faults, bounding the Rukwa depression on the southwestern and northeastern sides, respectively, exert passive sedimentation control only. They appear inactive or at least less active in the Late Quaternary than during the previous rifting stage. The main Late Quaternary tectonic activity is represented by dextral strike-slip movement along the Mbeya Range-Galula Fault system, in the middle of the Rukwa Rift Basin, and by normal dip-slip movements along the Kanda Fault, in the western rift shoulder.
NASA Astrophysics Data System (ADS)
Gasser, D.; Mancktelow, N. S.
2009-04-01
The Helvetic nappes in the Swiss Alps form a classic fold-and-thrust belt related to overall NNW-directed transport. In western Switzerland, the plunge of nappe fold axes and the regional distribution of units define a broad depression, the Rawil depression, between the culminations of Aiguilles Rouge massif to the SW and Aar massif to the NE. A compilation of data from the literature establishes that, in addition to thrusts related to nappe stacking, the Rawil depression is cross-cut by four sets of brittle faults: (1) SW-NE striking normal faults that strike parallel to the regional fold axis trend, (2) NW-SE striking normal faults and joints that strike perpendicular to the regional fold axis trend, and (3) WNW-ESE striking normal plus dextral oblique-slip faults as well as (4) WSW-ENE striking normal plus dextral oblique-slip faults that both strike oblique to the regional fold axis trend. We studied in detail a beautifully exposed fault from set 3, the Rezli fault zone (RFZ) in the central Wildhorn nappe. The RFZ is a shallow to moderately-dipping (ca. 30-60˚) fault zone with an oblique-slip displacement vector, combining both dextral and normal components. It must have formed in approximately this orientation, because the local orientation of fold axes corresponds to the regional one, as does the generally vertical orientation of extensional joints and veins associated with the regional fault set 2. The fault zone crosscuts four different lithologies: limestone, intercalated marl and limestone, marl and sandstone, and it has a maximum horizontal dextral offset component of ~300 m and a maximum vertical normal offset component of ~200 m. Its internal architecture strongly depends on the lithology in which it developed. In the limestone, it consists of veins, stylolites, cataclasites and cemented gouge, in the intercalated marls and limestones of anastomosing shear zones, brittle fractures, veins and folds, in the marls of anastomosing shear zones, pressure solution seams and veins and in the sandstones of coarse breccia and veins. Later, straight, sharp fault planes cross-cut all these features. In all lithologies, common veins and calcite-cemented fault rocks indicate the strong involvement of fluids during faulting. Today, the southern Rawil depression and the Rhone Valley belong to one of the seismically most active regions in Switzerland. Seismogenic faults interpreted from earthquake focal mechanisms strike ENE-WSW to WNW-ESE, with dominant dextral strike-slip and minor normal components and epicentres at depths of < 15 km. All three Neogene fault sets (2-4) could have been active under the current stress field inferred from the current seismicity. This implies that the same mechanisms that formed these fault zones in the past may still persist at depth. The Rezli fault zone allows the detailed study of a fossil fault zone that can act as a model for processes still occurring at deeper levels in this seismically active region.
Map and Database of Probable and Possible Quaternary Faults in Afghanistan
Ruleman, C.A.; Crone, A.J.; Machette, M.N.; Haller, K.M.; Rukstales, K.S.
2007-01-01
The U.S. Geological Survey (USGS) with support from the U.S. Agency for International Development (USAID) mission in Afghanistan, has prepared a digital map showing the distribution of probable and suspected Quaternary faults in Afghanistan. This map is a key component of a broader effort to assess and map the country's seismic hazards. Our analyses of remote-sensing imagery reveal a complex array of tectonic features that we interpret to be probable and possible active faults within the country and in the surrounding border region. In our compilation, we have mapped previously recognized active faults in greater detail, and have categorized individual features based on their geomorphic expression. We assigned mapped features to eight newly defined domains, each of which contains features that appear to have similar styles of deformation. The styles of deformation associated with each domain provide insight into the kinematics of the modern tectonism, and define a tectonic framework that helps constrain deformational models of the Alpine-Himalayan orogenic belt. The modern fault movements, deformation, and earthquakes in Afghanistan are driven by the collision between the northward-moving Indian subcontinent and Eurasia. The patterns of probable and possible Quaternary faults generally show that much of the modern tectonic activity is related to transfer of plate-boundary deformation across the country. The left-lateral, strike-slip Chaman fault in southeastern Afghanistan probably has the highest slip rate of any fault in the country; to the north, this slip is distributed onto several fault systems. At the southern margin of the Kabul block, the style of faulting changes from mainly strike-slip motion associated with the boundary between the Indian and Eurasian plates, to transpressional and transtensional faulting. North and northeast of the Kabul block, we recognized a complex pattern of potentially active strike-slip, thrust, and normal faults that form a conjugate shear system in a transpressional region of the Trans-Himalayan orogenic belt. The general patterns and orientations of faults and the styles of deformation that we interpret from the imagery are consistent with the styles of faulting determined from focal mechanisms of historical earthquakes. Northwest-trending strike-slip fault zones are cut and displaced by younger, southeast-verging thrust faults; these relations define the interaction between northwest-southeast-oriented contraction and northwest-directed extrusion in the western Himalaya, Pamir, and Hindu Kush regions. Transpression extends into north-central Afghanistan where north-verging contraction along the east-west-trending Alburz-Marmul fault system interacts with northwest-trending strike-slip faults. Pressure ridges related to thrust faulting and extensional basins bounded by normal faults are located at major stepovers in these northwest-trending strike-slip systems. In contrast, young faulting in central and western Afghanistan indicates that the deformation is dominated by extension where strike-slip fault zones transition into regions of normal faults. In addition to these initial observations, our digital map and database provide a foundation that can be expanded, complemented, and modified as future investigations provide more detailed information about the location, characteristics, and history of movement on Quaternary faults in Afghanistan.
Mariano, John; Grauch, V.J.
1988-01-01
Aeromagnetic anomalies are produced by variations in the strength and direction of the magnetic field of rocks that include magnetic minerals, commonly magnetite. Patterns of anomalies on aeromagnetic maps can reveal structures - for example, faults which have juxtaposed magnetic rocks against non-magnetic rocks, or areas of alteration where magnetic minerals have been destroyed by hydrothermal activity. Tectonic features of regional extent may not become apparent until a number of aeromagnetic surveys have been compiled and plotted at the same scale. Commonly the compilation involves piecing together data from surveys that were flown at different times with widely disparate flight specifications and data reduction procedures. The data may be compiled into a composite map, where all the pieces are plotted onto one map without regard to the difference in flight elevation and datum, or they may be compiled into a merged map, where all survey data are analytically reduced to a common flight elevation and datum, and then digitally merged at the survey boundaries. The composite map retains the original resolution of all the survey data, but computer methods to enhance regional features crossing the survey boundaries may not be applied. On the other hand, computer methods can be applied to the merged data, but the accuracy of the data may be slightly diminished.
Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are considered one of the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. In most cases, however, they are monitored without personnel on duty, so it is highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas path of the gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce the information entropy based decision tree algorithm to extract rules from fault samples. The experiments on some real-world data show the effectiveness of the proposed algorithms. PMID:25258726
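Stripped of its kernelized extension, the entropy measure described can be sketched in a few lines: normalize the ring of exhaust-temperature readings into a distribution and compute its Shannon entropy, which is maximal (log n) when the pattern is uniform and drops when a gas-path fault produces a hot or cold section. The thermocouple readings below are hypothetical.

```python
import numpy as np

def exhaust_uniformity_entropy(temps):
    """Shannon entropy of the normalized exhaust-temperature pattern; lower
    entropy means a less uniform pattern, a possible gas-path fault."""
    p = np.asarray(temps, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

healthy = [850.0, 852.0, 849.0, 851.0]     # hypothetical thermocouple ring, K
faulty = [850.0, 852.0, 760.0, 851.0]      # one cold spot
print(exhaust_uniformity_entropy(healthy) - np.log(4))  # ~0 (near uniform)
print(exhaust_uniformity_entropy(faulty) - np.log(4))   # clearly more negative
```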
NASA Astrophysics Data System (ADS)
Gülerce, Zeynep; Buğra Soyman, Kadir; Güner, Barış; Kaymakci, Nuretdin
2017-12-01
This contribution provides an updated planar seismic source characterization (SSC) model to be used in the probabilistic seismic hazard assessment (PSHA) for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ) that are critical for the PSHA of Istanbul: segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, central Marmara, and Ganos/Saros segments. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested based on the observed seismicity that is associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b value, maximum magnitude, slip rate, weights of the rupture scenarios) is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect of earthquakes that are not associated with the defined rupture systems on the hazard, a background zone is introduced and the seismicity rates in the background zone are calculated using smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.
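Magnitude recurrence models of the kind established for each rupture system are commonly doubly truncated Gutenberg-Richter distributions, in which a b-value, a maximum magnitude, and an activity rate fully determine binned annual event rates. The sketch below uses invented parameter values, not those of the Istanbul model.

```python
import numpy as np

def truncated_gr_rates(a_rate, b, m_min, m_max, dm=0.1):
    """Annual event rates in magnitude bins for a doubly truncated
    Gutenberg-Richter model; a_rate is the total rate of M >= m_min."""
    beta = b * np.log(10.0)
    edges = np.arange(m_min, m_max + 1e-9, dm)
    # Truncated exponential CDF for magnitudes on [m_min, m_max].
    cdf = (1 - np.exp(-beta * (edges - m_min))) / (1 - np.exp(-beta * (m_max - m_min)))
    return edges[:-1] + dm / 2, a_rate * np.diff(cdf)

# Assumed values for illustration: b = 1.0, Mmax = 7.5, 0.5 events/yr above M5.
centers, rates = truncated_gr_rates(a_rate=0.5, b=1.0, m_min=5.0, m_max=7.5)
print(rates.sum())   # == 0.5: the binned rates partition the total activity
```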
NASA Astrophysics Data System (ADS)
Dygert, Nick; Liang, Yan
2015-06-01
Mantle peridotites from ophiolites are commonly interpreted as having mid-ocean ridge (MOR) or supra-subduction zone (SSZ) affinity. Recently, an REE-in-two-pyroxene thermometer was developed (Liang et al., 2013) that has higher closure temperatures (designated as T_REE) than major-element-based two-pyroxene thermometers for mafic and ultramafic rocks that experienced cooling. The REE-in-two-pyroxene thermometer has the potential to extract meaningful cooling rates from ophiolitic peridotites and thus shed new light on the thermal history of the different tectonic regimes. We calculated T_REE for available literature data from abyssal peridotites, subcontinental (SC) peridotites, and ophiolites around the world (Alps, Coast Range, Corsica, New Caledonia, Oman, Othris, Puerto Rico, Russia, and Turkey), and augmented the data with new measurements for peridotites from the Trinity and Josephine ophiolites and the Mariana trench. T_REE values are compared to major-element-based thermometers, including the two-pyroxene thermometer of Brey and Köhler (1990) (T_BKN). Samples with SC affinity have T_REE and T_BKN in good agreement. Samples with MOR and SSZ affinity have near-solidus T_REE but T_BKN hundreds of degrees lower. Closure temperatures for REE and Fe-Mg in pyroxenes were calculated to compare cooling rates among abyssal peridotites, MOR ophiolites, and SSZ ophiolites. Abyssal peridotites appear to cool more rapidly than peridotites from most ophiolites. On average, SSZ ophiolites have lower closure temperatures than abyssal peridotites and many ophiolites with MOR affinity. We propose that these lower temperatures can be attributed to the residence time in the cooling oceanic lithosphere prior to obduction. MOR ophiolites define a continuum spanning cooling rates from SSZ ophiolites to abyssal peridotites. Consistently high closure temperatures for abyssal peridotites and the Oman and Corsica ophiolites suggest that hydrothermal circulation and/or rapid cooling events (e.g., normal faulting, unroofing) control the late thermal histories of peridotites from transform faults and slow and fast spreading centers with or without a crustal section.
NASA Technical Reports Server (NTRS)
Bennett, Richard A.; Reilinger, Robert E.; Rodi, William; Li, Yingping; Toksoz, M. Nafi; Hudnut, Ken
1995-01-01
Coseismic surface deformation associated with the M(sub w) 6.1, April 23, 1992, Joshua Tree earthquake is well represented by estimates of geodetic monument displacements at 20 locations independently derived from Global Positioning System and trilateration measurements. The rms signal to noise ratio for these inferred displacements is 1.8 with near-fault displacement estimates exceeding 40 mm. In order to determine the long-wavelength distribution of slip over the plane of rupture, a Tikhonov regularization operator is applied to these estimates which minimizes stress variability subject to purely right-lateral slip and zero surface slip constraints. The resulting slip distribution yields a geodetic moment estimate of 1.7 x 10(exp 18) N m with corresponding maximum slip around 0.8 m and compares well with independent and complementary information including seismic moment and source time function estimates and main shock and aftershock locations. From empirical Green's functions analyses, a rupture duration of 5 s is obtained which implies a rupture radius of 6-8 km. Most of the inferred slip lies to the north of the hypocenter, consistent with northward rupture propagation. Stress drop estimates are in the range of 2-4 MPa. In addition, predicted Coulomb stress increases correlate remarkably well with the distribution of aftershock hypocenters; most of the aftershocks occur in areas for which the mainshock rupture produced stress increases larger than about 0.1 MPa. In contrast, predicted stress changes are near zero at the hypocenter of the M(sub w) 7.3, June 28, 1992, Landers earthquake which nucleated about 20 km beyond the northernmost edge of the Joshua Tree rupture. Based on aftershock migrations and the predicted static stress field, we speculate that redistribution of Joshua Tree-induced stress perturbations played a role in the spatio-temporal development of the earthquake sequence culminating in the Landers event.
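The inversion described is, at its core, Tikhonov-regularized least squares: minimize ||Gm - d||^2 + lam^2 ||Lm||^2, where G maps slip to surface displacement and L encodes the stress-variability penalty. The sketch below solves the unconstrained form by stacking; the matrices are invented toy values, and the paper's purely right-lateral and zero-surface-slip constraints would enter as additional bound and equality constraints.

```python
import numpy as np

def tikhonov_slip(G, d, L, lam):
    """Solve min ||G m - d||^2 + lam^2 ||L m||^2 by stacking the smoothing
    operator L beneath the Green's-function matrix G."""
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# Invented toy problem: 4 displacement observations, 3 slip patches, and a
# first-difference operator penalizing rough (high stress-variability) slip.
G = np.array([[1.0, 0.3, 0.1],
              [0.4, 1.0, 0.4],
              [0.1, 0.3, 1.0],
              [0.2, 0.5, 0.2]])
d = np.array([0.04, 0.03, 0.01, 0.02])     # displacements, metres
L = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
print(tikhonov_slip(G, d, L, lam=0.5))     # smoothed slip on the 3 patches
```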
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Myers, D. M.
1984-01-01
A very prominent magnetic anomaly measured by MAGSAT over the eastern mid-continent of the United States was inferred to have a source region beneath Kentucky and Tennessee. Prominent aeromagnetic and gravity anomalies are also associated with the inferred source region. A crustal model constructed to fit these anomalies interpreted the complex as a large mafic plutonic intrusion of Precambrian age. The complex was named the Kentucky body. It was noticed that the Jessamine Dome, which is a locus of intense faulting and mineralization, occurs near the northern end of the Kentucky body, and that more generally there seemed to be a spatial relationship between mineral occurrence and the body. The relationship between mineral deposits in Kentucky and Tennessee and the Kentucky body was investigated. A compilation of mineral occurrences in the region, classified according to type and age, is presented.
Digital Archives - Thomas M. Bown's Bighorn Basin Maps: The Suite of Forty-Four Office Master Copies
McKinney, Kevin C.
2001-01-01
This CD-ROM is a digitally scanned suite of master 'locality' maps produced by Dr. Thomas M. Bown. The maps are archived in the US Geological Survey Field Records. The maps feature annual compilations of newly established fossil localities (nineteen 7.5-minute maps) from the central basin data collections. This master suite of forty-four maps represents a considerably broader geographic range within the basin. Additionally, three field seasons of data were compiled into the master suite of maps after the final editing of the Professional Paper. These maps are the culmination of Dr. Bown's Bighorn Basin research as a vertebrate paleontologist for the USGS. Data include Yale, Wyoming, Duke, Michigan and USGS localities. Practical topographic features are also indicated, such as jeep-trail access, new reservoirs, rerouted roadbeds, measured sections, fossil reconnaissance evaluations (G=good, NG=no good and H=hideous), faults, paleosol stages, and occasionally 'camp' vernacular for locality names.
Earthquake scaling laws for rupture geometry and slip heterogeneity
NASA Astrophysics Data System (ADS)
Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro
2016-04-01
We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault-length does not saturate with earthquake magnitude, while fault-width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault-length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault-length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, the restricted growth of down-dip fault extent (with upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture area, compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that a truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying Box-Cox transformation to slip distributions (to create quasi-normally distributed data) supports cube-root transformation, which also implies distinctive non-Gaussian slip distributions. To further characterize the spatial correlations of slip heterogeneity, we analyze the power spectral decay of slip applying the 2-D von Karman auto-correlation function (parameterized by the Hurst exponent, H, and correlation lengths along strike and down-dip). The Hurst exponent is scale invariant, H = 0.83 (± 0.12), while the correlation lengths scale with source dimensions (seismic moment), thus implying characteristic physical scales of earthquake ruptures. Our self-consistent scaling relationships allow constraining the generation of slip-heterogeneity scenarios for physics-based ground-motion and tsunami simulations.
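The truncated exponential law reported for slip can be written explicitly: on [0, s_max], f(s) = exp(-s/lam) / (lam * (1 - exp(-s_max/lam))), with the scale lam tied to the average slip and the truncation set by the maximum slip. Below is a sketch with assumed parameter values, including inverse-CDF sampling of the kind used to build stochastic slip scenarios.

```python
import numpy as np

def truncated_exp_pdf(s, lam, s_max):
    """pdf f(s) = exp(-s/lam) / (lam * (1 - exp(-s_max/lam))) on [0, s_max]."""
    s = np.asarray(s, dtype=float)
    pdf = np.exp(-s / lam) / (lam * (1.0 - np.exp(-s_max / lam)))
    return np.where((s >= 0) & (s <= s_max), pdf, 0.0)

def sample_truncated_exp(n, lam, s_max, seed=0):
    """Inverse-CDF sampling of slip values for stochastic rupture scenarios."""
    u = np.random.default_rng(seed).uniform(size=n)
    return -lam * np.log(1.0 - u * (1.0 - np.exp(-s_max / lam)))

slips = sample_truncated_exp(10_000, lam=1.2, s_max=8.0)  # assumed values, metres
print(round(slips.mean(), 2), round(slips.max(), 2))       # mean slip << max slip
```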
Zeng, Yuehua; Shen, Zheng-Kang
2016-01-01
We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19-22 mm/yr along Santa Cruz to the North Coast, 25-28 mm/yr along the central California creeping segment to the Carrizo Plain, 20-22 mm/yr along the Mojave, and 20-24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7-16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7-9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2-6.9 mm/yr across its east-west transects, which is an ~1 mm/yr increase over the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1-5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88×10^19 N·m/yr, with fast straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10^19 N·m/yr, which is a 16% increase compared with the UCERF2 model.
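A fault-based inversion with geologic constraints of the general kind described can be posed as bounded least squares: the GPS velocities are linear in the slip rates through elastic Green's functions, and the geologic slip-rate bounds enter as box constraints. The sketch below uses invented toy values, not UCERF3 geometries or data.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Invented toy system: three GPS velocities (mm/yr) depend linearly on two
# fault-section slip rates through elastic-dislocation Green's functions G.
G = np.array([[0.8, 0.1],
              [0.5, 0.4],
              [0.1, 0.9]])
v_gps = np.array([18.0, 14.0, 6.0])

# Geologic slip-rate bounds (mm/yr) act as box constraints on the inversion.
lower = np.array([15.0, 2.0])
upper = np.array([25.0, 8.0])

result = lsq_linear(G, v_gps, bounds=(lower, upper))
print(result.x)   # slip rates honoring both the geodesy and the geologic bounds
```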
Mechanical deformation model of the western United States instantaneous strain-rate field
Pollitz, F.F.; Vergnolle, M.
2006-01-01
We present a relationship between the long-term fault slip rates and instantaneous velocities as measured by Global Positioning System (GPS) or other geodetic measurements over a short time span. The main elements are the secularly increasing forces imposed by the bounding Pacific and Juan de Fuca (JdF) plates on the North American plate, viscoelastic relaxation following selected large earthquakes occurring on faults that are locked during their respective interseismic periods, and steady slip along creeping portions of faults in the context of a thin-plate system. In detail, the physical model allows separate treatments of faults with known geometry and slip history, faults with incomplete characterization (i.e. fault geometry but not necessarily slip history is available), creeping faults, and dislocation sources distributed between the faults. We model the western United States strain-rate field, derived from 746 GPS velocity vectors, in order to test the importance of the relaxation from historic events and characterize the tectonic forces imposed by the bounding Pacific and JdF plates. Relaxation following major earthquakes (M ≥ 8.0) strongly shapes the present strain-rate field over most of the plate boundary zone. Equally important are lateral shear transmitted across the Pacific-North America plate boundary along ~1000 km of the continental shelf, downdip forces distributed along the Cascadia subduction interface, and distributed slip in the lower lithosphere. Post-earthquake relaxation and tectonic forcing, combined with distributed deep slip, constructively interfere near the western margin of the plate boundary zone, producing locally large strain accumulation along the San Andreas fault (SAF) system. However, they destructively interfere further into the plate interior, resulting in smaller and more variable strain accumulation patterns in the eastern part of the plate boundary zone. Much of the right-lateral strain accumulation along the SAF system is systematically underpredicted by models which account only for relaxation from known large earthquakes. This strongly suggests that in addition to viscoelastic-cycle effects, steady deep slip in the lower lithosphere is needed to explain the observed strain-rate field.
NASA Astrophysics Data System (ADS)
Walton, M. A. L.; Gulick, S. P. S.; Haeussler, P. J.; Rohr, K.; Roland, E. C.; Trehu, A. M.
2014-12-01
The Queen Charlotte Fault (QCF) is an obliquely convergent strike-slip system that accommodates offset between the Pacific and North America plates in southeast Alaska and western Canada. Two recent earthquakes, including a M7.8 thrust event near Haida Gwaii on 28 October 2012, have sparked renewed interest in the margin and led to further study of how convergent stress is accommodated along the fault. Recent studies have looked in detail at offshore structure, concluding that a change in strike of the QCF at ~53.2 degrees north has led to significant differences in stress and the style of strain accommodation along-strike. We provide updated fault mapping and seismic images to supplement and support these results. One of the highest-quality seismic reflection surveys along the Queen Charlotte system to date, EW9412, was shot aboard the R/V Maurice Ewing in 1994. The survey was last processed to post-stack time migration for a 1999 publication. Due to heightened interest in high-quality imaging along the fault, we have completed updated processing of the EW9412 seismic reflection data and provide prestack migrations with water-bottom multiple reduction. Our new imaging better resolves fault and basement surfaces at depth, as well as the highly deformed sediments within the Queen Charlotte Terrace. In addition to re-processing the EW9412 seismic reflection data, we have compiled and re-analyzed a series of publicly available USGS seismic reflection data that obliquely cross the QCF. Using these data, we are able to provide updated maps of the Queen Charlotte fault system, adding considerable detail along the northernmost QCF where it links up with the Chatham Strait and Transition fault systems. Our results support conclusions that the changing geometry of the QCF leads to fundamentally different convergent stress accommodation north and south of ~53.2 degrees; namely, reactivated splay faults to the north vs. thickening of sediments and the upper crust to the south. We also highlight areas where additional data are needed and would be ideal targets for future study.
Spatiotemporal patterns of fault slip rates across the Central Sierra Nevada frontal fault zone
NASA Astrophysics Data System (ADS)
Rood, Dylan H.; Burbank, Douglas W.; Finkel, Robert C.
2011-01-01
Patterns in fault slip rates through time and space are examined across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38 and 39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and 10Be surface exposure dating, mean fault slip rates are defined, and by utilizing markers of different ages (generally, ~20 ka and ~150 ka), rates through time and interactions among multiple faults are examined over 10^4-10^5 year timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm/yr (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm/yr along the West Fork of the Carson River at Woodfords. Data permit rates that are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km from the northern Mono Basin (1.3 +0.6/-0.3 mm/yr at the Lundy Canyon site) to the Bridgeport Basin (0.3 ± 0.1 mm/yr). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin is indicative of a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection, extension is accommodated within a diffuse zone of normal and oblique faults, with extension rates increasing northward on the Fish Lake Valley fault. Where faults of the Eastern California Shear Zone terminate northward into the Mina Deflection, extension rates increase northward along the Sierra Nevada frontal fault zone to ~0.7 mm/yr in northern Mono Basin. This spatial pattern suggests that extension is transferred from more easterly fault systems, e.g., the Fish Lake Valley fault, and localized on the Sierra Nevada frontal fault zone as the Eastern California Shear Zone-Walker Lane belt faulting is transferred through the Mina Deflection.
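Slip rates of the form offset/age, with uncertainties on both the displaced-marker offset and its 10Be exposure age, are conveniently propagated by Monte Carlo sampling, as in the hedged sketch below; the offset and age values are invented, and a median and 95% interval stand in for the mode and CI quoted above.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Invented example: moraine offset and 10Be exposure age with 1-sigma errors.
offset = rng.normal(6.0, 1.0, n)   # metres
age = rng.normal(20.0, 1.5, n)     # ka

rate = offset / age                # m/ka, numerically equal to mm/yr
lo, med, hi = np.percentile(rate, [2.5, 50.0, 97.5])
print(f"slip rate: {med:.2f} +{hi - med:.2f}/-{med - lo:.2f} mm/yr (95% CI)")
```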
Geologic map of the Caetano caldera, Lander and Eureka counties, Nevada
Colgan, Joseph P.; Henry, Christopher D.; John, David A.
2011-01-01
The Eocene (34 Ma) Caetano caldera in north-central Nevada offers an exceptional opportunity to study the physical and petrogenetic evolution of a large (20 km by 10–18 km pre-extensional dimensions) silicic magma chamber, from precursor magmatism to caldera collapse and intrusion of resurgent plutons. Caldera-related rocks shown on this map include two units of crystal-rich intracaldera tuff totaling over 4 km thickness, caldera collapse breccias, tuff dikes that fed the eruption, hydrothermally altered post-eruption rocks, and two generations of resurgent granitic intrusions (John et al., 2008). The map also depicts middle Miocene (about 16–12 Ma) normal faults and synextensional basins that accommodated >100 percent extension and tilted the caldera into a series of ~40° east-dipping blocks, producing exceptional 3-D exposures of the caldera interior (Colgan et al., 2008). This 1:75,000-scale map is a compilation of published maps and extensive new mapping by the authors (fig. 1), and supersedes a preliminary 1:100,000-scale map published by Colgan et al. (2008) and John et al. (2008). New mapping focused on the margins of the Caetano caldera, the distribution and lithology of rocks within the caldera, and on the Miocene normal faults and sedimentary basins that record Neogene extensional faulting. The definition of geologic units and their distribution within the caldera is based entirely on new mapping, except in the northern Toiyabe Range, where mapping by Gilluly and Gates (1965) was modified with new field observations. The distribution of pre-Cenozoic rocks outside the caldera was largely compiled from existing sources with minor modifications, with the exception of the northeastern caldera margin (west of the Cortez Hills Mine), which was remapped in the course of this work and published as a stand-alone 1:6000-scale map (Moore and Henry, 2010).
Applicability of ERTS-1 to Montana geology
NASA Technical Reports Server (NTRS)
Weidman, R. M. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Geologic maps of four test sites were compiled at 1/250,000. Band 7 prints enlarged to 1/500,000 scale are the best for the purpose, and negative prints provide a valuable supplement. More than 100 mapped lineaments represent most of the major faults of the area and a large number of suspected faults, including many of northeast trend. Under ideal conditions dip slopes may be recognized, laccoliths outlined, and axial traces drawn for narrow, plunging folds. Use of ERTS-1 imagery will greatly facilitate construction of a needed tectonic map of Montana. From ERTS-1 imagery alone, it was possible to identify up-turned undivided Paleozoic and Mesozoic strata and to map the boundaries of mountain glaciation, intermontane basins, a volcanic field, and an area of granitic rocks. It was also possible to outline clay pans associated with bentonite. However, widespread recognition of gross rock types will be difficult.
Schmidt, Kevin M.; Ellen, Stephen D.; Peterson, David M.
2014-01-01
To obtain additional measurements of any permanent ground deformation that accompanied this damage, we compiled and conducted post-earthquake surveys along two 5-km lines of horizontal control and a 15-km level line. Measurements of horizontal distortion indicate approximately 0.1 m of shortening in a NE-SW direction across the valley margin, similar to the amount measured in the channel lining. Evaluation of precise leveling by the National Geodetic Survey showed a downwarp, with an amplitude of >0.1 m over a span of >12 km, that resembled regional geodetic models of coseismic deformation. Although the leveling indicates broad, regional warping, abrupt discontinuities characteristic of faulting characterize both the broad-scale distribution of damage and the local deformation of the channel lining. Reverse movement, largely along preexisting faults and probably enhanced significantly by warping and ground shaking, produced the documented coseismic ground deformation.
Geologic map of the Jasper Quadrangle, Newton and Boone counties, Arkansas
Hudson, M.R.; Murray, K.E.; Pezzutti, Deborah
2001-01-01
This digital geologic map compilation presents new polygon (i.e., geologic map unit contacts), line (i.e., fault, fold axis, and structure contour), and point (i.e., structural attitude, contact elevations) vector data for the Jasper 7 1/2' quadrangle in northern Arkansas. The map database, which is at 1:24,000-scale resolution, provides geologic coverage of an area of current hydrogeologic, tectonic, and stratigraphic interest. The Jasper quadrangle is located in northern Newton and southern Boone Counties about 20 km south of the town of Harrison. The map area is underlain by sedimentary rocks of Ordovician, Mississippian, and Pennsylvanian age that were mildly deformed by a series of normal and strike-slip faults and folds. The area is representative of the stratigraphic and structural setting of the southern Ozark Dome. The Jasper quadrangle map provides new geologic information for better understanding groundwater flow paths in and adjacent to the Buffalo River watershed.
Seismotectonic Map of Afghanistan and Adjacent Areas
Wheeler, Russell L.; Rukstales, Kenneth S.
2007-01-01
Introduction This map is part of an assessment of Afghanistan's geology, natural resources, and natural hazards. One of the natural hazards is from earthquake shaking. One of the tools required to address the shaking hazard is a probabilistic seismic-hazard map, which was made separately. The information on this seismotectonic map has been used in the design and computation of the hazard map. A seismotectonic map like this one shows geological, seismological, and other information that previously had been scattered among many sources. The compilation can show spatial relations that might not have been seen by comparing the original sources, and it can suggest hypotheses that might not have occurred to persons who studied those scattered sources. The main map shows faults and earthquakes of Afghanistan. Plate convergence drives the deformations that cause the earthquakes. Accordingly, smaller maps and text explain the modern plate-tectonic setting of Afghanistan and its evolution, and relate both to patterns of faults and earthquakes.
Geologic map of the Hasty Quadrangle, Boone and Newton Counties, Arkansas
Hudson, Mark R.; Murray, Kyle E.
2004-01-01
This digital geologic map compilation presents new polygon (for example, geologic map unit contacts), line (for example, fault, fold axis, and structure contour), and point (for example, structural attitude, contact elevations) vector data for the Hasty 7.5-minute quadrangle in northern Arkansas. The map database, which is at 1:24,000-scale resolution, provides geologic coverage of an area of current hydrogeologic, tectonic, and stratigraphic interest. The Hasty quadrangle is located in northern Newton and southern Boone Counties about 20 km south of the town of Harrison. The map area is underlain by sedimentary rocks of Ordovician, Mississippian, and Pennsylvanian age that were mildly deformed by a series of normal and strike-slip faults and folds. The area is representative of the stratigraphic and structural setting of the southern Ozark Dome. The Hasty quadrangle map provides new geologic information for better understanding groundwater flow paths in and adjacent to the Buffalo River watershed.
Space station needs, attributes and architectural options study. Volume 3: Requirements
NASA Technical Reports Server (NTRS)
1983-01-01
A typical system specification format is presented and requirements are compiled. A Program Specification Tree is shown, depicting a high-inclination space station and a low-inclination space station with their typical element breakdowns; the interfaces with other systems are represented along the top blocks. The specification format is directed at the low-inclination space station.
Langenheim, V.E.; Davidson, J.G.; Anderson, M.L.; Blank, H.R.
1999-01-01
The U.S. Geological Survey (USGS) collected 811 gravity stations on the Lake Mead 30' by 60' quadrangle from October 1997 to September 1999. These data were collected in support of geologic mapping of the Lake Mead quadrangle. In addition to these new data, gravity stations were compiled from a number of sources. These stations were reprocessed according to the reduction method described below and used for the new data. Density and magnetic susceptibility measurements were also performed on more than 250 rock samples. The Lake Mead quadrangle ranges from 36° to 36°30' north latitude and from 114° to 115° west longitude. It spans most of Lake Mead (see index map, below), the largest manmade lake in the United States, and includes most of the Lake Mead National Recreation Area. Its geology is very complex; Mesozoic thrust faults are exposed in the Muddy Mountains, Precambrian crystalline basement rocks are exhumed in tilted fault blocks near Gold Butte, extensive Tertiary volcanism is evident in the Black Mountains, and strike-slip faults of the right-lateral Las Vegas Valley shear zone and the left-lateral Lake Mead fault system meet near the Gale Hills. These gravity data and physical property measurements will aid in the 3-dimensional characterization of structure and stratigraphy in the quadrangle as part of the Las Vegas Urban Corridor mapping project.
Geology of the Prince William Sound and Kenai Peninsula region, Alaska
Wilson, Frederic H.; Hults, Chad P.
2012-01-01
The Prince William Sound and Kenai Peninsula region includes a significant part of one of the world’s largest accretionary complexes and a small part of the classic magmatic arc geology of the Alaska Peninsula. Physiographically, the map area ranges from the high glaciated mountains of the Alaska and Aleutian Ranges and the Chugach Mountains to the coastal lowlands of Cook Inlet and the Copper River delta. Structurally, the map area is cut by a number of major faults and postulated faults, the most important of which are the Border Ranges, Contact, and Bruin Bay Fault systems. The rocks of the map area belong to the Southern Margin composite terrane, a Tertiary and Cretaceous or older subduction-related accretionary complex, and the Alaska Peninsula terrane. Mesozoic rocks between these two terranes have been variously assigned to the Peninsular or the Hidden terranes. The oldest rocks in the map area are blocks of Paleozoic age within the mélange of the McHugh Complex; however, the protolith age of the greenschist and blueschist within the Border Ranges Fault zone is not known. Extensive glacial deposits mantle the Kenai Peninsula and the lowlands on the west side of Cook Inlet and are locally found elsewhere in the map area. This map was compiled from existing mapping, without generalization, and new or revised data was added where available.
Safety Study of TCAS II for Logic Version 6.04
1992-07-01
used in the fault tree of the 1983 study. The values given for Logic and Altimetry effects represent the site averages, and were based upon TCAS RAs always being...comparison with the results of Monte Carlo simulations. Five million iterations were carried out for each of the four cases (eqs. 3, 4, 6, and 7).
Code of Federal Regulations, 2010 CFR
2010-10-01
..., national, or international standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure... cited by the reviewer; (4) Identification of any documentation or information sought by the reviewer...) Identification of the hardware and software verification and validation procedures for the PTC system's safety...
The Two-By-Two Array: An Aid in Conceptualization and Problem Solving
ERIC Educational Resources Information Center
Eberhart, James
2004-01-01
The fields of mathematics, science, and engineering are replete with diagrams of many varieties. They range in nature from the Venn diagrams of symbolic logic to the Periodic Chart of the Elements; and from the fault trees of risk assessment to the flow charts used to describe laboratory procedures, industrial processes, and computer programs. All…
NASA Astrophysics Data System (ADS)
Kaub, C.; Perrot, J.; Le Roy, P., Sr.; Authemayou, C.; Bollinger, L.; Hebert, H.; Geoffroy, L.
2017-12-01
The coastal Vendee (France) is located to the south of the intraplate Armorican area. This region is affected by a system of dominantly NW-SE trending shear zones and faults inherited from a long and poly-phased tectonic history since Variscan times. This area currently presents moderate background seismic activity, but was affected by a significant historical earthquake (magnitude M 6) on 1799 January 25. This event generated particularly strong site effects in a Neogene basin located along a major onshore/offshore discontinuity bounding the basin, the Machecoul fault. The objective of this study is to identify and qualify active faults potentially responsible for such a major seismic event in order to better constrain the seismic hazard of this area. We adopt for this purpose a multidisciplinary approach including an onshore seismological survey, high-resolution low-penetration offshore seismic data (CHIRP echo sounder, Sparker source and single-channel streamer), high-resolution interferometric sonar bathymetry (GeoSwath), compilation of an onshore drilling database (BSS, BRGM), and quantitative geomorphology. In the meantime, the seismicity of the area was characterized by a network of 10 REFTEK stations, deployed since January 2016 around the Bay of Bourgneuf (MACHE network). About 50 local earthquakes, with coda magnitudes ranging from 0.5 to 3.1 and local magnitudes ranging from 0.2 to 2.9, were identified so far. This new database complements a local earthquake catalog acquired since 2011 from previous regional networks. We surveyed the fault segments offshore, in the Bay of Bourgneuf, analyzing 700 km of high-resolution seismic profiles and 40 km² of high-resolution bathymetry acquired during the RETZ1 (2016) and RETZ2 (2017) campaigns, in addition to HR bathymetry along the fault scarp. Those data are interpreted in conjunction with onshore wells to determine if (and since when) the Machecoul fault tectonically controlled the Neogene sedimentation.
Pierce, Herbert A.
2001-01-01
As of 1999, surface water collected and stored in reservoirs is the sole source of municipal water for the city of Williams. During 1996 and 1999, reservoirs reached historically low levels. Understanding the ground-water flow system is critical to managing the ground-water resources in this part of the Coconino Plateau. The nearly 1,000-meter-deep regional aquifer in the Redwall and Muav Limestones, however, makes studying or utilizing the resource difficult. Near-vertical faults and complex geologic structures control the ground-water flow system on the southwest side of the Kaibab Uplift near Williams, Arizona. To address the hydrogeologic complexities in the study area, a suite of techniques, which included aeromagnetic, gravity, square-array resistivity, and audiomagnetotelluric surveys, was applied as part of a regional study near Bill Williams Mountain. Existing well data and interpreted geophysical data were compiled and used to estimate depths to the water table and to prepare a potentiometric map. Geologic characteristics, such as secondary porosity, coefficient of anisotropy, and fracture-strike direction, were calculated at several sites to examine how these characteristics change with depth. The 14-kilometer-wide, seismically active northwestward-trending Cataract Creek and the northeastward-trending Mesa Butte Fault systems intersect near Bill Williams Mountain. Several north-south-trending faults may provide additional block faulting north and west of Bill Williams Mountain. Because of the extensive block faulting and regional folding, the volcanic and sedimentary rocks are tilted toward one or more of these faults. These faults provide near-vertical flow paths to the regional water table. The nearly radial fractures allow water that reaches the regional aquifer to move away from the Bill Williams Mountain area. Depth to the regional aquifer is highly variable and depends on location and local structures. On the basis of interpreted audiomagnetotelluric and square-array resistivity sounding curves and limited well data, depths to water may range from 450 to 1,300 meters.
NASA Astrophysics Data System (ADS)
Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen
2014-05-01
We present applications of a new clustering method for fault-network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large-scale description and gradually increase the complexity in an attempt to explain the small scales, our method works bottom-up: it begins by sampling the small scales and then reduces the complexity. The approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical clustering (AHC) tree based on Ward's minimum-variance linkage. Such a tree starts out with one cluster and progressively branches into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Model (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation-Maximization (EM) algorithm. Kernels that become ill defined (fewer than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging any pair of kernels lowers the likelihood (calculated over the pdf of the kernels) but in turn reduces the model's complexity. The information loss or gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Analogous to an inter-distance matrix, whose element d(i,j) gives the distance between points i and j, we construct an MDL gain/loss matrix whose element m(i,j) gives the information gain or loss resulting from merging kernels i and j. Based on this matrix, mergers yielding an MDL gain are performed in descending order until no gainful merging remains. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system, and we hope to use the acquired insights for earthquake forecasting.
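A minimal sketch of the pipeline described above, assembled from standard SciPy/scikit-learn building blocks rather than the authors' code: Ward-linkage AHC seeds a Gaussian mixture, EM runs to convergence, ill-defined kernels (fewer than 4 points) are discarded, and a BIC comparison stands in for the full MDL merging stage. The synthetic "epicenters" and all parameter names are illustrative.

```python
# Sketch (assumptions noted above): AHC atomization -> GMM/EM -> kernel pruning.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "epicenters": two crossing fault-like lineaments plus scatter.
t = rng.uniform(-1, 1, 200)
pts = np.vstack([
    np.c_[t, 0.2 * t] + rng.normal(0, 0.03, (200, 2)),
    np.c_[t, -t + 0.5] + rng.normal(0, 0.03, (200, 2)),
])

# 1) Agglomerative hierarchical clustering with Ward's minimum-variance linkage.
Z = linkage(pts, method="ward")

def fit_atomized(n_init_clusters):
    """Atomize at one level of the AHC tree, let EM converge, count viable kernels."""
    labels = fcluster(Z, t=n_init_clusters, criterion="maxclust")
    means = np.array([pts[labels == k].mean(axis=0) for k in np.unique(labels)])
    gmm = GaussianMixture(n_components=len(means), means_init=means,
                          covariance_type="full").fit(pts)
    counts = np.bincount(gmm.predict(pts), minlength=len(means))
    return gmm, int((counts >= 4).sum())   # kernels with <4 points are ill defined

# 2) Increment the number of initialization clusters; keep the richest viable model.
best, best_k = None, 0
for n in range(2, 30):
    gmm, k = fit_atomized(n)
    if k > best_k:
        best, best_k = gmm, k

# 3) Stand-in for the MDL merging stage: lower BIC = better complexity trade-off.
print(f"protoclusters retained: {best_k}, BIC: {best.bic(pts):.1f}")
```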
NASA Astrophysics Data System (ADS)
Rust, D.; Korjenkov, A.; Tibaldi, A.; Usmanova, M.
2009-04-01
The Toktogul hydroelectric and irrigation scheme is the largest in central Asia, with a reservoir containing almost 20 km³ of water behind a 230-m-high dam. Annually, the scheme generates 1200 MW of electricity that is distributed over Kyrgyzstan, Uzbekistan, Tajikistan, Kazakhstan and Russia. The scheme is vital for the economic, social and agricultural stability and development of the emerging central Asian republics it serves and, since it is no longer administered centrally as it was in Soviet times, is increasingly the focus of cross-border tensions involving competing needs for irrigation water and power supplies. Our work aims to identify and evaluate potential geo-environmental threats to this region for the benefit of stakeholders, with recommendations for measures to mitigate a range of threat scenarios, presented in a user-friendly GIS format. Most notably, these scenarios involve the potential for very large magnitude earthquakes, with associated widespread slope instability, occurring on the little-known Talas-Fergana fault. This structure, some 700 km long, bisects the Toktogul region within the actively (~20 mm/yr) contracting Tien Shan mountain range and exhibits geological characteristics similar to those of large strike-slip faults such as the San Andreas. Historical records are limited in this inaccessible mountainous region, which until Soviet times was occupied by mainly nomadic peoples, but they do not indicate recent fault rupture. This highlights the role of geological investigations in assembling a record of past catastrophic events to serve as a guide for what may be expected in the future, as well as the inherent difficulties in attempting geological forecasts to a precision that is useful on human timescales. Such forecasts in this region must also account for the presence of some 23 uranium-mining waste dumps within the mountain valleys, a legacy from Soviet times, as well as arsenic-rich waste dumps remaining from an earlier era of gold mining. Many of these toxic dumps are vulnerable to seismically induced landsliding, release of reservoir water and breaching of very large (up to several km³) landslide-dammed lakes within the deep mountain valleys typical of the fault zone. The May 2008 earthquake in neighboring Sichuan, in which some 30 landslide-dammed lakes were created, may be useful in refining hazard scenarios developed from the multi-pronged analysis employed in our study. This analysis involves compiling all relevant existing data, such as seismic archives held in paper format, within the project GIS. Spatial and temporal patterns exhibited by these compiled data, together with focal-mechanism determinations where possible, are combined with data on the distribution and nature of geological units to provide estimates of peak ground acceleration and the likely incidence of seismically triggered slope instability. This compilation also identifies data deficiencies to be targeted using a portable seismometer network, geophysical and geodetic surveys, InSAR and other remote sensing data, all combined with geotechnical and palaeoseismological fieldwork. Initial results from this approach confirm the ground-shaking potential of Talas-Fergana rupture events, suggest a long-term slip rate as high as 15 mm/yr, and indicate that the last ground-rupturing event occurred some 400-500 years BP. The lack of significant activity since that event suggests the Talas-Fergana structure may constitute a seismic gap within the Tien Shan, highlighting the importance of hazard scenarios in proposing mitigation measures against potentially catastrophic threats, such as extensive pollution of irrigated lands in the Fergana Valley downstream from Toktogul, on which some 10 million people depend.
NASA Astrophysics Data System (ADS)
Shi, J. T.; Han, X. T.; Xie, J. F.; Yao, L.; Huang, L. T.; Li, L.
2013-03-01
A Pulsed High Magnetic Field Facility (PHMFF) has been established at the Wuhan National High Magnetic Field Center (WHMFC), and various protection measures are applied in its control system. To improve the reliability and robustness of the control system, a safety analysis of the PHMFF was carried out using the Fault Tree Analysis (FTA) technique. The functions and implementation of five protection systems are described: the sequence experiment operation system, the safety assistant system, the emergency stop system, the fault detection and processing system, and the accident isolation protection system. Tests and operation indicate that these measures improve the safety of the facility and ensure the safety of personnel.
Nelson, Alan R.; Personius, Stephen F.; Sherrod, Brian L.; Buck, Jason; Bradley, Lee-Ann; Henley, Gary; Liberty, Lee M.; Kelsey, Harvey M.; Witter, Robert C.; Koehler, R.D.; Schermer, Elizabeth R.; Nemser, Eliza S.; Cladouhos, Trenton T.
2008-01-01
As part of the effort to assess seismic hazard in the Puget Sound region, we map fault scarps on Airborne Laser Swath Mapping (ALSM, an application of LiDAR) imagery (with 2.5-m elevation contours on 1:4,000-scale maps) and present field and laboratory data from backhoe trenches across the scarps that are being used to develop a latest Pleistocene and Holocene history of large earthquakes on the Tacoma fault. We supplement previous Tacoma fault paleoseismic studies with data from five trenches on the hanging wall of the fault. In a new trench across the Catfish Lake scarp, broad folding of more tightly folded glacial sediment does not predate 4.3 ka, because detrital charcoal of this age was found in stream-channel sand in the trench beneath the crest of the scarp. A post-4.3-ka age for scarp folding is consistent with previously identified uplift across the fault during AD 770-1160. In the trench across the younger of the two Stansberry Lake scarps, six maximum 14C ages on detrital charcoal in pre-faulting B and C soil horizons and three minimum ages on a tree root in post-faulting colluvium limit a single oblique-slip (right-lateral) surface-faulting event to AD 410-990. Stratigraphy and sedimentary structures in the trench across the older scarp at the same site show eroded glacial sediments, probably cut by a meltwater channel, with no evidence of post-glacial deformation. At the northeast end of the Sunset Beach scarps, charcoal ages in two trenches across graben-forming scarps give a close maximum age of 1.3 ka for graben formation. The ages that best limit the time of faulting and folding in each of the trenches are consistent with the time of the large regional earthquake in southern Puget Sound about AD 900-930.
Monotone Boolean approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
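The bounds described above are easy to illustrate by brute force: for any Boolean f, the least monotone increasing upper bound is g(x) = OR of f(y) over all y <= x, and the greatest monotone lower bound is h(x) = AND of f(y) over all y >= x. The following sketch (not Hulme's algorithm, which avoids exhaustive truth tables) checks this on a small noncoherent example.

```python
# Best monotone bounds for an arbitrary Boolean function, by truth-table brute force.
from itertools import product

def leq(y, x):
    """Componentwise y <= x on 0/1 tuples."""
    return all(a <= b for a, b in zip(y, x))

def monotone_bounds(f, n):
    pts = list(product((0, 1), repeat=n))
    # Tightest monotone upper bound: g(x) = OR of f(y) over all y <= x.
    g = {x: max(f(y) for y in pts if leq(y, x)) for x in pts}
    # Tightest monotone lower bound: h(x) = AND of f(y) over all y >= x.
    h = {x: min(f(y) for y in pts if leq(x, y)) for x in pts}
    return h, g

# Noncoherent example: f = x1 XOR x2 (needs complements, so it is not monotone).
f = lambda x: x[0] ^ x[1]
h, g = monotone_bounds(f, 2)
for x in product((0, 1), repeat=2):
    assert h[x] <= f(x) <= g[x]          # h and g bracket f everywhere
print({x: (h[x], f(x), g[x]) for x in sorted(h)})
```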
Rockwell, Thomas K.; Lindvall, Scott; Dawson, Tim; Langridge, Rob; Lettis, William; Klinger, Yann
2002-01-01
Surveys of multiple tree lines within groves of poplar trees, planted in straight lines across the fault prior to the earthquake, show surprisingly large lateral variations in slip. In one grove, slip increases by nearly 1.8 m, or 35% of the maximum measured value, over a lateral distance of nearly 100 m. This and other observations along the 1999 ruptures suggest that the lateral variability of slip observed from displaced geomorphic features in many past earthquakes may represent a combination of (1) actual differences in slip at the surface and (2) the difficulty of recognizing distributed nonbrittle deformation.
The distribution of Modified Mercalli intensity in the 18 April 1906 San Francisco earthquake
Boatwright, J.; Bundock, H.
2008-01-01
We analyze Boatwright and Bundock's (2005) modified Mercalli intensity (MMI) map for the 18 April 1906 San Francisco earthquake, reviewing their interpretation of the MMI scale and testing their correlation of 1906 cemetery damage with MMI intensity. We consider in detail four areas of the intensity map where Boatwright and Bundock (2005) added significantly to the intensity descriptions compiled by Lawson (1908). We show that the distribution of off-fault damage in Sonoma County suggests that the rupture velocity approached the P-wave velocity along Tomales Bay. In contrast, the falloff of intensity with distance from the fault appears approximately constant throughout Mendocino County. The intensity in Humboldt County appears somewhat higher than the intensity in Mendocino County, suggesting that the rupture process at the northern end of the rupture was relatively energetic and that there was directivity consistent with a subsonic rupture velocity on the section of the fault south of Shelter Cove. Finally, we show that the intensity sites added in Santa Cruz County change the intensity distribution so that it decreases gradually along the southeastern section of rupture from Corralitos to San Juan Bautista and implies that the stress release on this section of rupture was relatively low.
Burton, William C.; Armstrong, Thomas R.
2013-01-01
The bedrock geology of the Pinardville quadrangle includes the Massabesic Gneiss Complex, exposed in the core of a regional northeast-trending anticlinorium, and highly deformed metasedimentary rocks of the Rangeley Formation, exposed along the northwest limb of the anticlinorium. Both formations were subjected to high-grade metamorphism and partial melting: the Rangeley during the middle Paleozoic Acadian orogeny, and the Massabesic Gneiss Complex during both the Acadian and the late Paleozoic Alleghanian orogeny. Granitoids produced during these orogenies range in age from Devonian (Spaulding Tonalite) to Permian (granite at Damon Pond), each with associated pegmatite. In the latest Paleozoic the Massabesic Gneiss Complex was uplifted relative to the Rangeley Formation along the ductile Powder Hill fault, which also had a left-lateral component. Uplift continued into the early Mesozoic, producing the 2-kilometer-wide Campbell Hill fault zone, which is marked by northwest-dipping normal faults and dilational map-scale quartz bodies. Rare, undeformed Jurassic diabase dikes cut all older lithologies and structures. A second map is a compilation of joint orientations measured at all outcrops in the quadrangle. There is a great diversity of strike trends, with northeast perhaps the most common.
NASA Astrophysics Data System (ADS)
Guo, Changbao; Zhang, Yongshuang; Montgomery, David R.; Du, Yuben; Zhang, Guangze; Wang, Shifeng
2016-04-01
On the Tibetan Plateau, active tectonic deformation triggers frequent earthquakes, and giant landslides associated with active faults have serious consequences. A study of the characteristics and mechanism of a historical long-runout landslide at Luanshibao (LSB), Tibetan Plateau, China, finds a maximum sliding distance (L) of 3.83 km with an elevation drop (H) of 820 m. The landslide volume (V) was ~0.64-0.94 × 10⁸ m³, and it produced a long runout (H/L = 0.21). Recent surface offset along the sinistral strike-slip Litang-Dewu fault passes through the middle part of the landslide, which initiated on the hanging wall of the fault. Geological mapping, geophysical prospecting, trenching, and 14C dating together indicate that the LSB landslide occurred in jointed granite ca. 1980 ± 30 YBP, probably triggered by a large earthquake. Compilation of volume and runout-distance data for this landslide and other previously published data for volcanic and nonvolcanic long-runout landslides yields a composite runout length-volume relation (L = 12.52V^0.37) that closely predicts the runout of the LSB landslide, although substantial variation around the central tendency is noted.
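As a quick arithmetic check of the composite relation, and assuming V is expressed in km³ (the units are not stated in this excerpt), the LSB volume range brackets a predicted runout of roughly 4.5-5.2 km against the observed 3.83 km, consistent with the noted variation around the central tendency:

```python
# Evaluate L = 12.52 * V**0.37 at the LSB volume bounds (V assumed in km^3).
V_low, V_high = 0.064, 0.094      # 0.64-0.94 x 10^8 m^3 converted to km^3
for V in (V_low, V_high):
    L = 12.52 * V**0.37           # predicted runout, km
    print(f"V = {V:.3f} km^3 -> predicted L = {L:.1f} km (observed: 3.83 km)")
```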
Using minimal spanning trees to compare the reliability of network topologies
NASA Technical Reports Server (NTRS)
Leister, Karen J.; White, Allan L.; Hayhurst, Kelly J.
1990-01-01
Graph-theoretic methods are applied to compute the reliability of several types of networks of moderate size. The methods used are minimal spanning trees for networks with bidirectional links and the related concept of strongly connected directed graphs for networks with unidirectional links. Ring networks and braided networks are compared, covering both the case where only the links fail and the case where both links and nodes fail. Two different failure modes are considered for the links: in one, the link no longer carries messages; in the other, the link delivers incorrect messages. Link redundancy and path redundancy are described and compared as methods of achieving reliability. All computations are carried out by means of a fault tree program.
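A minimal sketch of the link-failure case described above (not the NASA fault-tree program itself): a network is taken to be operational when its surviving bidirectional links still connect all nodes, i.e., when a spanning tree exists, and reliability is computed by exact enumeration over link-failure subsets. The ring and braided topologies and the failure probability are illustrative.

```python
# Exact "all nodes still connected" reliability under independent link failures.
from itertools import combinations

def connected(nodes, links):
    """Check connectivity of the surviving graph by simple flooding."""
    if not nodes:
        return True
    seen, stack = set(), [nodes[0]]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack += [b if a == u else a for a, b in links if u in (a, b)]
    return seen == set(nodes)

def network_reliability(nodes, links, p):
    """Sum P(exactly this set of links fails) over every link-failure subset."""
    total = 0.0
    for k in range(len(links) + 1):
        for failed in combinations(links, k):
            if connected(nodes, [l for l in links if l not in failed]):
                total += p**k * (1 - p) ** (len(links) - k)
    return total

# 4-node ring vs. a "braided" ring with chords (extra path redundancy).
nodes = [0, 1, 2, 3]
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
braid = ring + [(0, 2), (1, 3)]
for name, links in (("ring", ring), ("braided", braid)):
    print(name, round(network_reliability(nodes, links, p=0.05), 6))
```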
NASA Technical Reports Server (NTRS)
Patterson, Jonathan D.; Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) as presented by Dr. Stephen B. Johnson in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper describes the core framework used to implement the GFT-based systems engineering process using the Systems Modeling Language (SysML). The two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described in the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities for modeling functional decompositions in the rigorous manner required by the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals, requirements, and functions, and to their linkage to design. As SysML is a growing standard for systems engineering, it is important to develop methods to implement GFT in it. Proposed method of solution: Many of the central concepts of the SysML language are needed to implement a GFT for large, complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.
NASA Astrophysics Data System (ADS)
Lamarche, Geoffroy; Lebrun, Jean-Frédéric
2000-01-01
South of New Zealand the Pacific-Australia (PAC-AUS) plate boundary runs along the intracontinental Alpine Fault, the Puysegur subduction front and the intraoceanic Puysegur Fault. The Puysegur Fault is located along Puysegur Ridge, which terminates at ca. 47°S against the continental Puysegur Bank in a complex zone of deformation called the Snares Zone. At Puysegur Trench, the Australian Plate subducts beneath Puysegur Bank and the Fiordland Massif. East of Fiordland and Puysegur Bank, the Moonlight Fault System (MFS) represents the Eocene strike-slip plate boundary. Interpretation of seafloor morphology and seismic reflection profiles acquired over Puysegur Bank and the Snares Zone allows study of the transition from intraoceanic strike-slip faulting along the Puysegur Ridge to oblique subduction at the Puysegur Trench, and a better understanding of the genetic link between the Puysegur Fault and the MFS. Seafloor morphology is interpreted from a bathymetric dataset compiled from swath bathymetry data acquired during the 1993 Geodynz survey and single-beam echo soundings acquired by the NZ Royal Navy. The Snares Zone is the key transition zone from strike-slip faulting to subduction. It divides into three sectors, namely the East, NW and SW sectors. A conspicuous 3600-m-deep trough (the Snares Trough) separates the NW and East sectors. The East sector is characterised by the NE termination of Puysegur Ridge into right-stepping en echelon ridges that accommodate a change of strike from the Puysegur Fault to the MFS. Between 48°S and 47°S, in the NW sector and the Snares Trough, a series of transpressional faults splay northwards from the Puysegur Fault. Between 49°50'S and 48°S, thrusts at Puysegur Trench develop progressively into a decollement. North of 48°S the Snares Trough develops between two splays of the Puysegur Fault, indicating superficial extension associated with the subsidence of Puysegur Ridge. Seismic reflection profiles and bathymetric maps show a series of transpressional faults that splay northwards across the Snares Fault and terminate at the top of the Puysegur trench slope. Between ca. 48°S and 46°30'S, the relative plate motion appears to be distributed over the Puysegur subduction zone and the strike-slip faults located on the edge of the upper plate. Conversely, north of ca. 46°S, a lack of active strike-slip faulting along the MFS and across most of Puysegur Bank indicates that the subduction in the northern part of Puysegur Trench accounts for most of the oblique convergence. Hence, active transpression in the Snares fault zone indicates that the relative PAC-AUS plate motion is transferred from strike-slip faulting along the Puysegur Fault to subduction at Puysegur Trench. The progressive transition from thrusts at Puysegur Trench and strike-slip faulting along the Puysegur Fault to oblique subduction at Puysegur Trench suggests that the subduction interface progressively developed from a western shallow splay of the Puysegur Fault, implying that the transfer fault links to the subduction interface at depth. A tectonic sliver is identified between Puysegur Trench and the Puysegur Fault; its northwards motion relative to the Pacific Plate implies that it might collide with Puysegur Bank.
NASA Astrophysics Data System (ADS)
Galvin, J. L.; Deqiang, C.; Abimbola, A.; Shuler, S.; Hayashi, K.; Fox, J.; Craig, M. S.; Strayer, L. M.; Drumm, P.
2015-12-01
We conducted a geophysical study at a site proposed for a new dorm building, prior to trenching planned as part of a separate fault investigation study. The study area was located on the south side of the CSU East Bay campus, roughly 100-300 m SSE of the current dorm complex. In addition to its proximity to the Hayward Fault, several smaller faults have previously been mapped within the proposed location, including the East and West Dibblee Faults. These faults are thought to represent contacts between the Leona Rhyolite and the Knoxville Formation. Data acquisition included seismic, resistivity, and GPS measurements, collected to develop a better understanding of the geological and structural profile of the area, including the locations of lithologic contacts and faults and the thickness of soil and fill. Geophysical lines were located coincident with two planned trenching sites, which were chosen to intersect mapped faults. Survey positions were recorded using differential GPS. Seismic refraction and MASW (multichannel analysis of surface waves) surveys were performed over two of the planned trench sites using a 48-channel seismograph system with 4.5-Hz geophones and a 10-lb sledgehammer. For one line, geophones were spaced every 3 m for a total spread length of 141 m, with a shot spacing of 9 m. For the second line, geophones were spaced every 4 m for a total spread length of 188 m, with shots every 12 m. Resistivity surveys were also performed along one of the line locations using both a capacitively coupled dipole (CCD) system and a 48-electrode system. Geospatial data for the survey area were compiled, including 0.3-m color orthoimagery and vector line files for geologic unit boundaries and presumed fault locations. The products of this study will include the geophysical response of geologic formations, the locations of unit contacts and faults, the thickness of soil and fill, and shear-wave velocity (Vs and Vs30). The results will enable improved seismic hazard assessment of the site and will contribute to a better understanding of the overall geologic profile of the area.
Upper-crustal structure beneath the Strait of Georgia, southwest British Columbia
Dash, R.K.; Spence, G.D.; Riedel, M.; Hyndman, R.D.; Brocher, T.M.
2007-01-01
We present a new three-dimensional (3-D) P-wave velocity model for the upper-crustal structure beneath the Strait of Georgia, southwestern British Columbia, based on non-linear tomographic inversion of wide-angle seismic refraction data. Our study, part of the Georgia Basin Geohazards Initiative (GBGI), is primarily aimed at mapping the depth of the Cenozoic sedimentary basin and delineating the near-surface crustal faults associated with recent seismicity (e.g., M 4.6 in 1997 and M 5.0 in 1975) in the region. Joint inversion of first-arrival traveltimes from the 1998 Seismic Hazards Investigation in Puget Sound (SHIPS) and the 2002 Georgia Basin experiment provides a high-resolution velocity model of the subsurface to a depth of ~7 km. In the south-central Georgia Basin, sedimentary rocks of the Cretaceous Nanaimo Group and early Tertiary rocks have seismic velocities between 3.0 and 5.5 km s-1. The basin thickness increases from north to south, with a maximum thickness of 7 (±1) km (depth to velocities of 5.5 km s-1) at the southeast end of the strait. The underlying basement rocks, probably representing the Wrangellia terrane, have velocities of 5.5-6.5 km s-1 with considerable lateral variation. Our tomographic model reveals that the Strait of Georgia is underlain by a fault-bounded block within the central Georgia Basin. It also shows a correlation between microearthquakes and areas of rapid change in basin thickness. The 1997/1975 earthquakes are located near a northeast-trending hinge line where the thickness of sedimentary rocks increases rapidly to the southeast. Given its association with instrumentally recorded, moderate-sized earthquakes, we infer that the hinge region is cored by an active fault that we informally name the Gabriola Island fault. A northwest-trending, southwest-dipping velocity discontinuity along the eastern side of Vancouver Island correlates spatially with the surface expression of the Outer Island fault. The Outer Island fault as mapped in our seismic tomography model is a thrust fault that projects directly into the Lummi Island fault, suggesting that they are related structures forming a fault system that is continuous for nearly 90 km. Together, these inferred thrust faults may account for at least a portion of the basement uplift at the San Juan Islands.
Krůček, Martin; Vrška, Tomáš; Král, Kamil
2017-01-01
Terrestrial laser scanning is a powerful technology for capturing the three-dimensional structure of forests with a high level of detail and accuracy. Over the last decade, many algorithms have been developed to extract various tree parameters from terrestrial laser scanning data. Here we present 3D Forest, an open-source, platform-independent software application with an easy-to-use graphical user interface that compiles algorithms focused on the forest environment and the extraction of tree parameters. The current version (0.42) extracts important forest-structure parameters from terrestrial laser scanning data, such as stem positions (X, Y, Z), tree heights, and diameters at breast height (DBH), as well as more advanced parameters such as tree planar projections, stem profiles, and detailed crown parameters including convex and concave crown surface and volume. Moreover, 3D Forest provides quantitative measures of between-crown interactions and their actual arrangement in 3D space. 3D Forest also includes an original algorithm for automatic tree and crown segmentation. Comparison with field measurements showed no significant difference in DBH or tree height measured with 3D Forest, although for DBH only the Randomized Hough Transform algorithm proved sufficiently resistant to noise and provided results comparable to traditional field measurements. PMID:28472167
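A toy sketch of the Randomized Hough Transform idea credited above with noise-resistant DBH estimation (this is not 3D Forest's implementation): sample three points from a breast-height slice of the stem cloud, compute the circle through them, and vote in a coarse accumulator, so that noise points never build a consistent peak.

```python
# Randomized Hough Transform for a circle (stem cross-section) in a noisy slice.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic slice: a 0.40 m diameter stem plus 20% scattered noise points.
theta = rng.uniform(0, 2 * np.pi, 400)
stem = np.c_[0.20 * np.cos(theta), 0.20 * np.sin(theta)] + rng.normal(0, 0.004, (400, 2))
noise = rng.uniform(-0.4, 0.4, (100, 2))
pts = np.vstack([stem, noise])

def circle_through(p1, p2, p3):
    """Center and radius of the circle through three points (None if collinear)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, np.hypot(ax - ux, ay - uy)

votes = {}
for _ in range(5000):
    c = circle_through(*pts[rng.choice(len(pts), 3, replace=False)])
    if c is not None and c[2] < 0.5:           # discard absurdly large radii
        key = tuple(np.round(c, 2))            # 1-cm accumulator bins
        votes[key] = votes.get(key, 0) + 1

cx, cy, r = max(votes, key=votes.get)          # accumulator peak
print(f"estimated DBH = {2 * r * 100:.0f} cm (true: 40 cm)")
```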
NASA Astrophysics Data System (ADS)
Veloso, E. E.; Tardani, D.; Aron, F.; Elizalde, J. D.; Sanchez-Alfaro, P.; Godoy, B.
2017-12-01
South of 19°S, geothermal fields and Pliocene-to-Holocene volcanic centers of the Central Andean Volcanic Zone are spatially associated with distinct, large-scale fault systems disrupting the volcanic arc, which control the architecture and dynamics of the fluid reservoirs at shallow crustal levels. Based on an extensive compilation of structural, lithological and isotopic data, together with satellite-imagery band-ratio analyses, we produced detailed maps of 13 areas comprising 19 identified and/or potential geothermal fields, to examine whether particular local-scale tectonic configurations are associated with fluids migrating from different crustal levels. We defined three main tectonic environments according to the specific, kilometer-scale structural arrangement and its spatial relation to the geothermal surface manifestations. T1 is dominated by left-lateral, pure strike-slip motion on a NW-trending duplex-like geometry, with geothermal fields located along the faults, which are in turn distributed into five major subparallel zones cutting across the orogenic belt between ca. 20° and 27°S. T2 is dominated by shortening on a series of N-trending thrust faults and fault-propagation folds, cut and displaced by the above-mentioned NW-trending faults, with geothermal fields hosted at fault intersections and fold hinges. T3 is characterized by transtension accommodated by NW- to WNW-trending left-lateral/normal faults, with hot springs lying along the fault traces. Interestingly, each of the independently defined tectonic environments has a distinctive helium (in fluids) and strontium (in lavas) isotopic signature and estimated geothermal reservoir temperature. T1 shows a large 4He contribution, a low 87Sr/86Sr ratio, and temperatures varying between ca. 220° and 310°C; T3 shows low 4He, a high 87Sr/86Sr ratio, and high temperatures (260°-320°C); T2 isotopic values fall between those of T1 and T3, yet it shows the lowest temperatures (130°-250°C). We suggest that these particular isotopic signatures reflect a strong structural control on hot-reservoir location and meteoric-water content, with T3 allowing deeper hot-fluid provenance and T1 more meteoric influx.
NASA Astrophysics Data System (ADS)
Ando, R.; Aoki, Y.; Uchide, T.; Imanishi, K.; Matsumoto, S.; Nishimura, T.
2016-12-01
Several interesting earthquake rupture phenomena were observed in association with the 2016 Kumamoto, Japan, earthquake sequence. The sequence includes the April 15, 2016, Mw 7.0 mainshock, which was preceded by multiple M6-class foreshocks. The mainshock mainly broke the Futagawa fault segment, striking in the NE-SW direction and extending over 50 km, and it further triggered an M6-class earthquake more than 50 km to the northeast (Uchide et al., 2016, submitted), where an active volcano is situated. Compiling data from seismic analysis and InSAR, we presume this dynamically triggered event occurred on an active fault known as the Yufuin fault (Ando et al., 2016, JPGU general assembly). It is also reported that coseismic slip was significantly large on a shallow portion of the Futagawa fault near Aso volcano. Since the seismogenic zone becomes significantly shallower in these two areas, we presume that a geothermal anomaly plays a role, in addition to the elastodynamic processes associated with the coseismic rupture. In this study, we conducted a set of fully dynamic simulations of the earthquake rupture process, assuming the inferred 3D fault geometry and the regional stress field obtained from stress tensor inversion. The results show that the dynamic rupture process was mainly controlled by the irregularity of the fault geometry subjected to the gently varying regional stress field. The foreshock ruptures were arrested at the junction of the branch faults. We also show that the dynamic triggering of the M6-class earthquake occurred along the Yufuin fault segment (located 50 km to the northeast) because of a strong stress transient, up to a few hundred kPa, due to the rupture directivity effect of the M7 event. It is also shown that geothermal conditions may create susceptibility to dynamic triggering, considering the plastic shear zone on the downdip extension of the Yufuin segment, situated in the vicinity of an active volcano.
Greninger, Mark L.; Klemperer, Simon L.; Nokleberg, Warren J.
1999-01-01
The accompanying directory structure contains a Geographic Information Systems (GIS) compilation of geophysical, geological, and tectonic data for the Circum-North Pacific. This area includes the Russian Far East, Alaska, the Canadian Cordillera, linking continental shelves, and adjacent oceans. The GIS compilation extends from 120°E to 115°W and from 40°N to 80°N. This area encompasses: (1) to the south, the modern Pacific plate boundary of the Japan-Kuril and Aleutian subduction zones, the Queen Charlotte transform fault, and the Cascadia subduction zone; (2) to the north, the continent-ocean transition from the Eurasian and North American continents to the Arctic Ocean; (3) to the west, the diffuse Eurasian-North American plate boundary, including the probable Okhotsk plate; and (4) to the east, the Alaskan-Canadian Cordilleran fold belt. This compilation should be useful for: (1) studying the Mesozoic and Cenozoic collisional and accretionary tectonics that assembled the continental crust of this region; (2) studying the neotectonics of active and passive plate margins in the region; and (3) constructing and interpreting geophysical, geologic, and tectonic models of the region. GIS programs provide powerful tools for managing and analyzing spatial databases. Geological applications include regional tectonics, geophysics, mineral and petroleum exploration, resource management, and land-use planning. This CD-ROM contains thematic layers of spatial datasets for geology, gravity field, magnetic field, oceanic plates, overlap assemblages, seismology (earthquakes), tectonostratigraphic terranes, topography, and volcanoes. The GIS compilation can be viewed, manipulated, and plotted with commercial software (ArcView and ArcInfo) or through a freeware program (ArcExplorer) that can be downloaded from http://www.esri.com for both Unix and Windows computers.
Architecture Analysis with AADL: The Speed Regulation Case-Study
2014-11-01
Julien Delange. Overview: Functional Hazard Analysis (FHA), a failures inventory with descriptions and classifications; Fault-Tree Analysis (FTA), dependencies between...
Journal of Air Transportation, Volume 12, No. 2 (ATRS Special Edition)
NASA Technical Reports Server (NTRS)
Bowen, Brent D. (Editor); Kabashkin, Igor (Editor); Fink, Mary (Editor)
2007-01-01
Topics covered include: Competition and Change in the Long-Haul Markets from Europe; Insights into the Maintenance, Repair, and Overhaul Configurations of European Airlines; Validation of Fault Tree Analysis in Aviation Safety Management; An Investigation into Airline Service Quality Performance between U.S. Legacy Carriers and Their EU Competitors and Partners; and Climate Impact of Aircraft Technology and Design Changes.
Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA
Baixauli-Pérez, Mª Piedad
2017-01-01
The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that an analysis and control of the risks involved is required. This paper presents a methodology for risk analysis in the chemical and allied industries based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, i.e., fault tree analysis (FTA). The FTA results allow preventive and corrective measures to be prioritized so as to minimize the probability of failure. A case study is analyzed: a terminal for unloading chemical and petroleum products and the fuel storage facilities of two companies in the port of Valencia (Spain). The HAZOP analysis shows that the loading and unloading areas are the most sensitive areas of the plant and that the most significant hazard is a fuel spill. The FTA indicates that the most likely event is a fuel spill in the tank-truck loading area. A sensitivity analysis of the FTA results shows the importance of the human factor in all possible accident sequences, so improving the training of plant staff should be mandatory. PMID:28665325
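A minimal illustration of the quantitative FTA step, with hypothetical basic events and probabilities rather than the paper's actual model: independent basic-event probabilities are combined through OR and AND gates up to a top event such as a fuel spill in the truck-loading area.

```python
# Gate algebra for independent basic events in a small fault tree.
def OR(*p):   # at least one input event occurs
    q = 1.0
    for x in p:
        q *= 1.0 - x
    return 1.0 - q

def AND(*p):  # all input events occur
    q = 1.0
    for x in p:
        q *= x
    return q

# Basic events (illustrative probabilities, not from the paper).
p_hose_rupture   = 1e-3
p_sensor_fails   = 5e-3
p_operator_error = 2e-2
p_valve_open     = 1e-2
p_bund_damaged   = 5e-3

p_overfill    = OR(p_sensor_fails, p_operator_error)        # OR gate
p_containment = AND(p_valve_open, p_bund_damaged)           # AND gate
p_top = OR(p_hose_rupture, AND(p_overfill, p_containment))  # top event
print(f"P(top event) = {p_top:.3e}")
```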
TH-EF-BRC-04: Quality Management Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yorke, E.
2016-06-15
This hands-on workshop will focus on providing participants with experience with the principal tools of TG 100, and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis (process mapping, failure modes and effects analysis, and fault tree analysis) will each be introduced with a 5-minute refresher presentation followed by a 30-minute small-group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss their experience and any challenges encountered with each other and with the faculty. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis, and Fault Tree Analysis. To gain familiarity with these three techniques in a small-group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
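For concreteness, a small illustration of the FMEA scoring that TG 100 builds on, with hypothetical failure modes and scores: each mode is rated for occurrence (O), severity (S), and lack of detectability (D) on 1-10 scales and ranked by the risk priority number RPN = O × S × D.

```python
# Rank hypothetical failure modes by risk priority number (RPN = O * S * D).
failure_modes = [
    ("wrong CT dataset imported",        2, 9, 6),
    ("contour drawn on wrong structure", 4, 8, 4),
    ("MU transcription error",           3, 7, 2),
]
for name, O, S, D in sorted(failure_modes, key=lambda m: -(m[1] * m[2] * m[3])):
    print(f"RPN {O * S * D:4d}  {name}")
```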