System Analysis by Mapping a Fault-tree into a Bayesian-network
NASA Astrophysics Data System (ADS)
Sheng, B.; Deng, C.; Wang, Y. H.; Tang, L. H.
2018-05-01
In view of the limitations of fault tree analysis in reliability assessment, the Bayesian Network (BN) has been studied as an alternative technique. After a brief introduction to the method for mapping a Fault Tree (FT) into an equivalent BN, equations used to calculate the structure importance degree, the probability importance degree and the critical importance degree are presented. Furthermore, the correctness of these equations is proved mathematically. Using an aircraft landing gear's FT as an example, an equivalent BN is developed and analysed. The results show that richer and more accurate information has been obtained through the BN method than the FT, which demonstrates that the BN is a superior technique in both reliability assessment and fault diagnosis.
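The FT-to-BN mapping described above can be sketched in a few lines: basic events become root nodes with prior failure probabilities, and each gate becomes a node with a deterministic conditional probability table. The three-event tree below (TOP = A AND (B OR C)) and its priors are illustrative assumptions, not the landing-gear tree from the paper.

```python
from itertools import product

# Hypothetical basic events with prior failure probabilities.
priors = {"A": 0.01, "B": 0.02, "C": 0.05}

def gate_or(*states):   # deterministic CPT of an OR gate
    return int(any(states))

def gate_and(*states):  # deterministic CPT of an AND gate
    return int(all(states))

def top_event_probability(priors):
    """Enumerate all joint basic-event states and sum the probability of
    those in which the gate CPTs make the top event true."""
    names = sorted(priors)
    p_top = 0.0
    for states in product((0, 1), repeat=len(names)):
        s = dict(zip(names, states))
        weight = 1.0
        for n in names:
            weight *= priors[n] if s[n] else (1.0 - priors[n])
        if gate_and(s["A"], gate_or(s["B"], s["C"])):  # TOP = A AND (B OR C)
            p_top += weight
    return p_top

print(round(top_event_probability(priors), 6))  # 0.01 * (1 - 0.98*0.95) = 0.00069
```

Exact BN inference by enumeration like this is exponential in the number of basic events; it serves only to show that the deterministic-CPT encoding reproduces the FT top-event probability.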
DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarities between digraphs and fault trees permit the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimum cut set for a node is a cut set such that, if any failure in the set were removed, the remaining failures would no longer cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failures. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. 
A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal root node. A subtree is created for each of the inputs to the digraph terminal node and the roots of those subtrees are added as children of the top node of the fault tree. Every node in the digraph upstream of the terminal node will be visited and converted. During the conversion process, the algorithm keeps track of the path from the digraph terminal node to the current digraph node. If a node is visited twice, then the program has found a cycle in the digraph. This cycle is broken by finding the minimal cut sets of the twice-visited digraph node and forming those cut sets into subtrees. Another implementation of the algorithm resolves loops by building a subtree based on the digraph minimal cut sets calculation. It does not reduce the subtree to minimal cut set form. This second implementation produces larger fault trees, but runs much faster than the version using minimal cut sets since it does not spend time reducing the subtrees to minimal cut sets. The fault trees produced by DG TO FT will contain OR gates, AND gates, Basic Event nodes, and NOP gates. The results of a translation can be output as a text object description of the fault tree similar to the text digraph input format. The translator can also output a LISP language formatted file and an augmented LISP file which can be used by the FTDS (ARC-13019) diagnosis system, available from COSMIC, which performs diagnostic reasoning using the fault tree as a knowledge base. DG TO FT is written in the C language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. DG TO FT is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. 
It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is provided on the distribution medium. DG TO FT was developed in 1992. Sun, and SunOS are trademarks of Sun Microsystems, Inc. DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc. System 7 is a trademark of Apple Computers Inc. Microsoft Word is a trademark of Microsoft Corporation.
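The cut set and minimal cut set definitions given in this record can be made concrete with a short MOCUS-style expansion over an AND/OR tree. The gate and event names below are illustrative assumptions, not DG TO FT's actual data structures.

```python
def cut_sets(node, tree):
    """Return the cut sets of `node` as a set of frozensets of basic events.
    `tree` maps gate name -> (kind, children); absent nodes are basic events."""
    kind, children = tree.get(node, ("basic", []))
    if kind == "basic":
        return {frozenset([node])}
    child_sets = [cut_sets(c, tree) for c in children]
    if kind == "or":                       # OR: union of the children's cut sets
        return set().union(*child_sets)
    sets = {frozenset()}                   # AND: cross-product combination
    for cs in child_sets:
        sets = {a | b for a in sets for b in cs}
    return sets

def minimize(sets):
    """Drop any cut set that strictly contains another (minimality test)."""
    return {s for s in sets if not any(t < s for t in sets)}

# Hypothetical tree: TOP = (E1 OR E2) AND E3.
tree = {"TOP": ("and", ["G1", "E3"]),
        "G1":  ("or",  ["E1", "E2"])}
print(sorted(sorted(s) for s in minimize(cut_sets("TOP", tree))))
# [['E1', 'E3'], ['E2', 'E3']]
```

The minimality test is exactly the property the record states: removing any event from `{E1, E3}` leaves a set that no longer fails the top node.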
Uncertainty analysis in fault tree models with dependent basic events.
Pedroni, Nicola; Zio, Enrico
2013-06-01
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects on the TE probability of objective and epistemic dependences. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present). © 2012 Society for Risk Analysis.
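The Fréchet bounds mentioned in this abstract are simple to state: when the dependence between two events is unknown, the probability of their conjunction and disjunction can still be bracketed from the marginals alone. A minimal sketch (the 0.3 and 0.4 marginals are arbitrary examples, not values from the six-BE case study):

```python
# Fréchet bounds for two events A, B with known marginals but unknown dependence.
def frechet_and(pa, pb):
    """Bounds on P(A AND B): [max(0, pa+pb-1), min(pa, pb)]."""
    return max(0.0, pa + pb - 1.0), min(pa, pb)

def frechet_or(pa, pb):
    """Bounds on P(A OR B): [max(pa, pb), min(1, pa+pb)]."""
    return max(pa, pb), min(1.0, pa + pb)

print(frechet_and(0.3, 0.4))                              # (0.0, 0.3)
print(tuple(round(x, 3) for x in frechet_or(0.3, 0.4)))   # (0.4, 0.7)
```

Propagating these intervals gate by gate through a fault tree gives dependence-free bounds on the top-event probability, which is the role the bounds play in the analysis above.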
Fault Tree Analysis for an Inspection Robot in a Nuclear Power Plant
NASA Astrophysics Data System (ADS)
Ferguson, Thomas A.; Lu, Lixuan
2017-09-01
The life extension of current nuclear reactors has led to an increasing demand for inspection and maintenance of critical reactor components that are too expensive to replace. To reduce the exposure dosage to workers, robotics has become an attractive alternative as a preventative safety tool in nuclear power plants. It is crucial to understand the reliability of these robots in order to increase the veracity of and confidence in their results. This study applies Fault Tree (FT) analysis to a coolant outlet pipe snake-arm inspection robot in a nuclear power plant. Fault trees were constructed for a qualitative analysis to determine the reliability of the robot. Insight into the applicability of fault tree methods for inspection robotics in the nuclear industry is gained through this investigation.
Evidential Networks for Fault Tree Analysis with Imprecise Knowledge
NASA Astrophysics Data System (ADS)
Yang, Jianping; Huang, Hong-Zhong; Liu, Yu; Li, Yan-Feng
2012-06-01
Fault tree analysis (FTA), as one of the powerful tools in reliability engineering, has been widely used to enhance system quality attributes. In most fault tree analyses, precise values are adopted to represent the probabilities of occurrence of events. Due to the lack of sufficient data, or the imprecision of existing data, at the early stage of product design, it is often difficult to accurately estimate the failure rates of individual events or the probabilities of occurrence of the events. Therefore, such imprecision and uncertainty need to be taken into account in reliability analysis. In this paper, evidential networks (EN) are employed to quantify and propagate the aforementioned uncertainty and imprecision in fault tree analysis. The detailed processes for converting the logic gates of a fault tree (FT) into EN are described. The figures of the logic gates and the converted equivalent EN, together with the associated truth tables and the conditional belief mass tables, are also presented in this work. A new epistemic importance measure is proposed to describe the effect of the degree of ignorance about an event. The fault tree of an aircraft engine damaged by oil filter plugs is presented to demonstrate the proposed method.
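The imprecise-probability propagation the paper performs with conditional belief mass tables can be caricatured with plain interval arithmetic through AND/OR gates. This is a sketch under the assumption of independent inputs, not the paper's full Dempster-Shafer machinery; each input is a (belief, plausibility) interval on the event's occurrence probability.

```python
def and_gate(intervals):
    """Interval for an AND gate: products of lower and upper bounds."""
    bel = pl = 1.0
    for lo, hi in intervals:
        bel *= lo
        pl *= hi
    return bel, pl

def or_gate(intervals):
    """Interval for an OR gate via complements: 1 - prod(1 - p_i)."""
    lo_c = hi_c = 1.0
    for lo, hi in intervals:
        lo_c *= 1.0 - lo
        hi_c *= 1.0 - hi
    return 1.0 - lo_c, 1.0 - hi_c

inputs = [(0.01, 0.02), (0.05, 0.10)]                 # imprecise event probabilities
print(tuple(round(x, 4) for x in and_gate(inputs)))   # (0.0005, 0.002)
print(tuple(round(x, 4) for x in or_gate(inputs)))    # (0.0595, 0.118)
```

The gap between belief and plausibility at the top event is what the paper's epistemic importance measure would attribute back to the ignorance about individual events.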
Mines Systems Safety Improvement Using an Integrated Event Tree and Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Kumar, Ranjan; Ghosh, Achyuta Krishna
2017-04-01
Mine systems such as the ventilation system, strata support system, and flameproof safety equipment are exposed to dynamic operational conditions such as stress, humidity, dust, and temperature, and safety improvement of such systems is best done during the planning and design stage. However, existing safety analysis methods do not handle accident initiation and progression in mine systems explicitly. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine systems design. This approach includes ET and FT modeling coupled with a redundancy allocation technique. In this method, a concept of top hazard probability is introduced for identifying system failure probability, and redundancy is allocated to the system either at the component or system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal the accident scenarios and improve the safety of complex mine systems simultaneously.
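The ET/FT coupling described above can be sketched as follows: the fault trees supply each safety barrier's failure probability, and the event tree enumerates accident sequences from an initiating event. The initiating-event and barrier probabilities below are illustrative assumptions, not the case-study values.

```python
from itertools import product

def sequence_probabilities(p_init, barrier_fail_probs):
    """Enumerate event-tree sequences: each barrier either works or fails,
    with barrier failure probabilities supplied by fault tree analysis."""
    seqs = {}
    for outcome in product((True, False), repeat=len(barrier_fail_probs)):
        p = p_init
        for fails, pf in zip(outcome, barrier_fail_probs):
            p *= pf if fails else (1.0 - pf)
        seqs[outcome] = p            # True in position i = barrier i failed
    return seqs

# Illustrative: methane release (initiating event), then a ventilation
# barrier and an ignition-control barrier; (True, True) = both fail.
seqs = sequence_probabilities(0.1, [0.05, 0.02])
print(round(seqs[(True, True)], 6))  # worst sequence: 0.1 * 0.05 * 0.02 = 0.0001
```

The "top hazard probability" of the paper corresponds to summing the probabilities of the sequences that end in the hazardous outcome; redundancy allocation then lowers the barrier failure probabilities that dominate that sum.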
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien
2017-10-01
Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes this possibility into account by considering a system-level approach rather than an individual-fault-level approach, using geological, seismological and geodetic information to invert the earthquake rates. In many parts of the world, seismological and geodetic information along fault networks is not well constrained. There is therefore a need for a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures, and single faults or FtF ruptures are treated as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed following two constraints: the magnitude frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the slip rate of each segment depending on the possible FtF ruptures. 
The modeled earthquake rates are then compared to the available independent data (geodetic, seismological and paleoseismological data) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advancements have been made in the understanding of the geological slip rates of the complex network of normal faults which are accommodating the ~15 mm/yr north-south extension. Modeling results show that geological, seismological and paleoseismological rates of earthquakes cannot be reconciled with only single-fault-rupture scenarios and require hypothesizing a large spectrum of possible FtF rupture sets. In order to fit the imposed regional Gutenberg-Richter (GR) MFD target, some of the slip along certain faults needs to be accommodated either with interseismic creep or as post-seismic processes. Furthermore, computed individual faults' MFDs differ depending on the position of each fault in the system and the possible FtF ruptures associated with the fault. Finally, a comparison of modeled earthquake rupture rates with those deduced from the regional and local earthquake catalog statistics and local paleoseismological data indicates a better fit with the FtF rupture set constructed with a distance criterion of 5 km rather than 3 km, suggesting a high connectivity of faults in the WCR fault system.
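The Gutenberg-Richter MFD target mentioned above has a standard closed form: the cumulative rate of events at or above magnitude M is N(≥M) = 10^(a − bM). A minimal sketch with illustrative a and b values (not those of the western Corinth rift study):

```python
def gr_cumulative_rate(a, b, m):
    """Cumulative Gutenberg-Richter rate N(>= m) = 10**(a - b*m), in events/yr."""
    return 10.0 ** (a - b * m)

def gr_incremental_rate(a, b, m, dm=0.1):
    """Rate of events with magnitude in the bin [m, m + dm)."""
    return gr_cumulative_rate(a, b, m) - gr_cumulative_rate(a, b, m + dm)

print(gr_cumulative_rate(4.0, 1.0, 5.0))  # 0.1 events/yr at or above M 5
```

In the methodology described, rates of individual ruptures are adjusted so that the sum of their incremental contributions over the whole fault system follows this target shape while honoring each segment's slip rate budget.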
Fault and joint measurements in Austin Chalk, Superconducting Super Collider Site, Texas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nance, H.S.; Laubach, S.E.; Dutton, A.R.
1994-12-31
Structure maps of 9.4 mi of nearly continuous tunnel excavations and more than 10 mi of other exposures and excavations in Austin Chalk at the Superconducting Super Collider (SSC) site in Ellis County, Texas, record normal-fault and joint populations in the subsurface within the northern segment of the Balcones Fault Zone with unmatched resolution for such a long traverse. Small faults (<10 ft throw) occur in clusters or swarms that have as many as 24 faults. Fault swarms are as much as 2,000 ft wide, and spacing between swarms ranges from 800 to 2,000 ft, averaging about 1,000 ft. Predominantly northeast-trending joints are in swarms spaced 500 to more than 21,000 ft apart.
Gravity and magnetic data across the Ghost Dance Fault in WT-2 Wash, Yucca Mountain, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, H.W.; Sikora, R.F.
1994-12-31
Detailed gravity and ground magnetic data were obtained in September 1993 along a 4,650-ft-long profile across the Ghost Dance Fault system in WT-2 Wash. Gravity stations were established every 150 feet along the profile. Total-field magnetic measurements were made initially every 50 ft along the profile, then remade every 20 ft through the fault zone. These new data are part of a geologic and geophysical study of the Ghost Dance Fault (GDF) which includes detailed geologic mapping, seismic reflection, and some drilling including geologic and geophysical logging. The Ghost Dance Fault is the only through-going fault that has been identified within the potential repository for high-level radioactive waste at Yucca Mountain, Nevada. Preliminary gravity results show a distinct decrease of 0.1 to 0.2 mGal over a 600-ft-wide zone to the east of and including the mapped fault. The gravity decrease probably marks a zone of brecciation. Another fault offset located about 2,000 ft to the east of the GDF was detected by seismic reflection data and is also marked by a distinct gravity low. The ground magnetic data show a 200-ft-wide magnetic low of about 400 nT centered about 100 ft east of the Ghost Dance Fault. The magnetic low probably marks a zone of brecciation within the normally polarized Topopah Spring Tuff, the top of which is about 170 ft below the surface, and which is known from drilling to extend to a depth of about 1,700 ft. Three-component magnetometer logging in drill hole WT-2, located about 2,700 ft east of the Ghost Dance Fault, shows that the Topopah Spring Tuff is strongly polarized magnetically in this area, so that fault brecciation of a vertical zone within the Tuff could provide the average negative magnetic contrast of 4 A/m needed to produce the 400 nT low observed at the surface.
Integrated Approach To Design And Analysis Of Systems
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1993-01-01
Object-oriented fault-tree representation unifies evaluation of reliability and diagnosis of faults. Programming/fault tree described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). Augmented fault tree object contains more information than fault tree object used in quantitative analysis of reliability. Additional information needed to diagnose faults in system represented by fault tree.
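The object-oriented fault-tree representation described above can be sketched as node objects that each know how to evaluate their own probability. The class names, gate set, and independence assumption below are illustrative, not the API of ARC-12731.

```python
class BasicEvent:
    """Leaf node: a basic event with a fixed failure probability."""
    def __init__(self, p):
        self.p = p
    def probability(self):
        return self.p

class AndGate:
    """Fails only if all children fail (independent children assumed)."""
    def __init__(self, *children):
        self.children = children
    def probability(self):
        out = 1.0
        for c in self.children:
            out *= c.probability()
        return out

class OrGate:
    """Fails if any child fails (independent children assumed)."""
    def __init__(self, *children):
        self.children = children
    def probability(self):
        out = 1.0
        for c in self.children:
            out *= 1.0 - c.probability()
        return 1.0 - out

top = AndGate(BasicEvent(0.1), OrGate(BasicEvent(0.2), BasicEvent(0.3)))
print(round(top.probability(), 4))  # 0.1 * (1 - 0.8*0.7) = 0.044
```

An "augmented" tree in the sense of the record would attach extra attributes to these same objects, for example diagnostic test information, so one structure serves both reliability evaluation and fault diagnosis.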
[The Application of the Fault Tree Analysis Method in Medical Equipment Maintenance].
Liu, Hongbin
2015-11-01
In this paper, the traditional fault tree analysis method is presented, and detailed instructions are given for its application in medical instrument maintenance. Significant changes are made when the traditional fault tree analysis method is introduced into medical instrument maintenance: the logic symbols, logic analysis and calculations, and the complicated procedures are abandoned, and only the intuitive and practical fault tree diagram is kept. The fault tree diagram itself also differs: the fault tree is no longer a logic tree but a thinking tree for troubleshooting, the definition of the fault tree's nodes is different, and the composition of the fault tree's branches is also different.
NASA Technical Reports Server (NTRS)
Lee, Charles; Alena, Richard L.; Robinson, Peter
2004-01-01
Starting from an ISS fault tree example, we present a method to convert fault trees into decision trees. The method shows that visualizing the root cause of a fault becomes easier and that tree manipulation becomes more programmatic via available decision tree programs. The decision tree visualization offers a straightforward, easily understood format for diagnostics. For ISS real-time fault diagnosis, the status of the systems can be shown by running the signals through the trees and observing where traversal stops. Another advantage of using decision trees is that the trees can learn fault patterns and predict future faults from historical data. The learning can operate not only on static data sets but also online: by accumulating real-time data sets, the decision trees can gain and store fault patterns in the trees and recognize them when they recur.
Structural character of the Ghost Dance fault, Yucca Mountain, Nevada
Spengler, R.W.; Braun, C.A.; Linden, R.M.; Martin, L.G.; Ross-Brown, D. M.; Blackburn, R.L.
1993-01-01
Detailed structural mapping of an area that straddles the southern part of the Ghost Dance Fault has revealed the presence of several additional subparallel to anastomosing faults. These faults, mapped at a scale of 1:240, are: 1) dominantly north trending, 2) present on both the upthrown and downthrown sides of the surface trace of the Ghost Dance fault, 3) near-vertical features that commonly offset strata down to the west by 3 to 6 m (10 to 20 ft), and 4) commonly spaced 15 to 46 m (50 to 150 ft) apart. The zone also exhibits a structural fabric, containing an abundance of northwest-trending fractures. The width of the zone appears to be at least 213 m (700 ft) near the southernmost boundary of the study area but remains unknown near the northern extent of the study area, where the width of the study area is only 183 m (600 ft).
Automatic translation of digraph to fault-tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.
1992-01-01
The author presents a technique for converting digraph models, including those containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and to perform diagnosis with fault-tree models have also been developed. The digraph-to-fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
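The cycle detection at the heart of this translation can be sketched as a depth-first walk that tracks the path from the terminal node and reports any node seen twice along the current path. The digraph below (`node -> list of input nodes`) is a hypothetical example, not a model from the paper.

```python
def find_cycle(digraph, start):
    """digraph maps node -> list of input nodes.
    Returns a cyclic path (first node repeated at the end) or None."""
    path, on_path = [], set()

    def visit(node):
        if node in on_path:                    # node visited twice: cycle found
            return path[path.index(node):] + [node]
        path.append(node)
        on_path.add(node)
        for pred in digraph.get(node, []):
            cyc = visit(pred)
            if cyc:
                return cyc
        path.pop()                             # backtrack: node is off the path
        on_path.discard(node)
        return None

    return visit(start)

g = {"T": ["A"], "A": ["B"], "B": ["A", "C"]}  # A <-> B feedback loop
print(find_cycle(g, "T"))                      # ['A', 'B', 'A']
```

Once such a cycle is found, the translator breaks it by replacing the twice-visited node with subtrees built from its cut sets, as the record describes.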
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
Fault-Tree Compiler (FTC) program is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
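Of the five gate types listed, M OF N is the only non-trivial one: the gate fires when at least m of its n inputs have failed. A brute-force sketch for independent inputs (not FTC's actual solution technique, which the record does not detail):

```python
from itertools import combinations

def m_of_n(m, probs):
    """Probability that at least m of the n independent inputs fail,
    summed over every subset of failed inputs of size m..n."""
    n = len(probs)
    total = 0.0
    for k in range(m, n + 1):
        for idx in combinations(range(n), k):
            p = 1.0
            for i in range(n):
                p *= probs[i] if i in idx else (1.0 - probs[i])
            total += p
    return total

print(round(m_of_n(2, [0.1, 0.1, 0.1]), 6))  # 2-of-3: 3*0.009 + 0.001 = 0.028
```

Note AND and OR are the special cases m = n and m = 1, so one routine covers three of the five gate types.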
Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance
NASA Astrophysics Data System (ADS)
Wang, Jian; Yang, Zhenwei; Kang, Mei
2018-01-01
This paper attempts to apply the fault tree analysis method to corrective maintenance of a grid communication system. Through the establishment of a fault tree model of a typical system, combined with engineering experience, fault tree analysis theory is used to analyze the model, covering structural importance, probability importance, and so on. The results show that fault tree analysis enables rapid fault location and effective repair of the system. Meanwhile, the fault tree analysis method is found to have guiding significance for reliability research and upgrading of the system.
Fault Tree in the Trenches, A Success Story
NASA Technical Reports Server (NTRS)
Long, R. Allen; Goodson, Amanda (Technical Monitor)
2000-01-01
Getting caught up in the explanation of Fault Tree Analysis (FTA) minutiae is easy. In fact, most FTA literature tends to address FTA concepts and methodology. Yet there seem to be few articles addressing actual design changes resulting from the successful application of fault tree analysis. This paper demonstrates how fault tree analysis was used to identify and solve a potentially catastrophic mechanical problem at a rocket motor manufacturer. While developing the fault tree given in this example, the analyst was told by several organizations that the piece of equipment in question had been evaluated by several committees and organizations, and that the analyst was wasting his time. The fault tree/cut set analysis resulted in a joint redesign of the control system by the tool engineering group and the fault tree analyst, as well as bragging rights for the analyst. (That the fault tree found problems where other engineering reviews had failed was not lost on the other engineering groups.) Even more interesting is that this was the analyst's first fault tree, which further demonstrates how effective fault tree analysis can be in guiding (i.e., forcing) the analyst to take a methodical approach in evaluating complex systems.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1992-01-01
FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature, simplifying tree description and reducing execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
Proactive Fault Tolerance for HPC with Xen Virtualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Arun Babu; Mueller, Frank; Engelmann, Christian
2007-01-01
with thousands of processors. At such large counts of compute nodes, faults are becoming commonplace. Current techniques to tolerate faults focus on reactive schemes to recover from faults and generally rely on a checkpoint/restart mechanism. Yet, in today's systems, node failures can often be anticipated by detecting a deteriorating health status. Instead of a reactive scheme for fault tolerance (FT), we are promoting a proactive one where processes automatically migrate from unhealthy nodes to healthy ones. Our approach relies on operating system virtualization techniques exemplified by, but not limited to, Xen. This paper contributes an automatic and transparent mechanism for proactive FT for arbitrary MPI applications. It leverages virtualization techniques combined with health monitoring and load-based migration. We exploit Xen's live migration mechanism for a guest operating system (OS) to migrate an MPI task from a health-deteriorating node to a healthy one without stopping the MPI task during most of the migration. Our proactive FT daemon orchestrates the tasks of health monitoring, load determination and initiation of guest OS migration. Experimental results demonstrate that live migration hides migration costs and limits the overhead to only a few seconds, making it an attractive approach to realize FT in HPC systems. Overall, our enhancements make proactive FT a valuable asset for long-running MPI applications that is complementary to reactive FT using full checkpoint/restart schemes, since checkpoint frequencies can be reduced as fewer unanticipated failures are encountered. In the context of OS virtualization, we believe that this is the first comprehensive study of proactive fault tolerance where live migration is actually triggered by health monitoring.
Tutorial: Advanced fault tree applications using HARP
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.
1993-01-01
Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
Technology transfer by means of fault tree synthesis
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Since Fault Tree Analysis (FTA) attempts to model and analyze the failure processes of engineering systems, it forms a common technique for good industrial practice. By contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a Knowledge Base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the latter stages of an industrial procedure with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage, where the fault has been detected, (ii) the formation of an interface made of input faults that might occur upstream, (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance, and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus proving the functionality of the developed methodology.
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models for the systems either by mathematical formulation or by experimental modeling. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. These models and methods have one thing in common: they presume certain prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data fed back from the system, and decisions are made based on threshold values by using fault trees. Since those decision-making tasks are safety-critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time, capturing the contents of fault trees as the initial state of the trees.
Fault trees and sequence dependencies
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.
1990-01-01
One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
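Why sequence dependencies force a Markov solution can be seen with the simplest such gate, a priority-AND that fires only if A fails before B. The sketch below integrates a small Markov chain for exponential failure rates; the rates and mission time are illustrative, not values from the fault-tolerant parallel processor models.

```python
def pand_probability(lam_a, lam_b, t, steps=100_000):
    """Probability that A fails strictly before B and both have failed by
    time t, via forward Euler on the Markov chain {both up, A down first,
    gate fired}; the B-down-first state is absorbing and not tracked."""
    both_up, a_first, pand = 1.0, 0.0, 0.0
    dt = t / steps
    for _ in range(steps):
        da = both_up * lam_a * dt   # A fails while both were up
        db = both_up * lam_b * dt   # B fails first: gate can never fire
        dp = a_first * lam_b * dt   # B fails after A: gate fires
        both_up -= da + db
        a_first += da - dp
        pand += dp
    return pand

# For lam_a == lam_b == l, symmetry gives the closed form 0.5*(1 - exp(-l*t))**2,
# a useful cross-check for the numerical integration.
print(round(pand_probability(0.1, 0.1, 10.0), 3))
```

An ordinary static AND gate would report P(A fails) * P(B fails) regardless of order, which is exactly the behavior the new gates are introduced to avoid.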
McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L
The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; only 4 identified faults are required by 2 or more programs. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Mustafa Sacit; none,; Flanagan, George F.
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A second program, also written in C#, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, M.D.; Waddell, S.J.; Vick, G.S.
1986-12-31
Yucca Mountain in southern Nye County, Nevada, has been proposed as a potential site for the underground disposal of high-level nuclear waste. An exploratory drill hole designated UE25p No. 1 was drilled 3 km east of the proposed repository site to investigate the geology and hydrology of the rocks that underlie the Tertiary volcanic and sedimentary rock sequence forming Yucca Mountain. Silurian dolomite assigned to the Roberts Mountain and Lone Mountain Formations was intersected below the Tertiary section between a depth of approximately 1244 m (4080 ft) and the bottom of the drill hole at 1807 m (5923 ft). These formations are part of an important regional carbonate aquifer in the deep ground-water system. Tertiary units deeper than 1139 m (3733 ft) in drill hole UE25p No. 1 are stratigraphically older than any units previously penetrated by drill holes at Yucca Mountain. These units are, in ascending order, the tuff of Yucca Flat, an unnamed calcified ash-flow tuff, and a sequence of clastic deposits. The upper part of the Tertiary sequence in drill hole UE25p No. 1 is similar to that found in other drill holes at Yucca Mountain. The Tertiary sequence is in fault contact with the Silurian rocks. This fault between Tertiary and Paleozoic rocks may correlate with the Fran Ridge fault, a steeply westward-dipping fault exposed approximately 0.5 km east of the drill hole. Another fault intersects UE25p No. 1 at 873 m (2863 ft), but its surface trace is concealed beneath the valley west of the Fran Ridge fault. The Paintbrush Canyon fault, the trace of which passes less than 100 m (330 ft) east of the drilling site, intersects drill hole UE25p No. 1 at a depth of approximately 78 m (255 ft). The drill hole apparently intersected the west flank of a structural high of pre-Tertiary rocks, near the eastern edge of the Crater Flat structural depression.
A dynamic fault tree model of a propulsion system
NASA Technical Reports Server (NTRS)
Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila
2006-01-01
We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
Bayesian-network-based safety risk assessment for steel construction projects.
Leu, Sou-Sen; Chang, Ching-Miao
2013-05-01
There are four primary accident types at steel building construction (SC) projects: falls (tumbles), object falls, object collapse, and electrocution. Several systematic safety risk assessment approaches, such as fault tree analysis (FTA) and failure mode and effect criticality analysis (FMECA), have been used to evaluate safety risks at SC projects. However, these traditional methods ineffectively address dependencies among safety factors at various levels and fail to provide early warnings to prevent occupational accidents. To overcome the limitations of traditional approaches, this study develops a safety risk-assessment model for SC projects by establishing Bayesian networks (BN) based on fault tree (FT) transformation. The BN-based safety risk-assessment model was validated against the safety inspection records of six SC building projects and nine projects in which site accidents occurred. The ranks of posterior probabilities from the BN model were highly consistent with the accidents that occurred at each project site. The model supports site safety management by calculating the probabilities of safety risks and further analyzing the causes of accidents based on their relationships in BNs. In practice, based on the analysis of accident risks and significant safety factors, proper preventive safety management strategies can be established to reduce the occurrence of accidents on SC sites. Copyright © 2013 Elsevier Ltd. All rights reserved.
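In the standard FT-to-BN mapping, each basic event becomes a root node with a prior and each gate becomes a node with a deterministic conditional probability table over its inputs. A minimal sketch, assuming a hypothetical two-level tree (TOP = OR(G1, E3), G1 = AND(E1, E2)) with illustrative priors, computes the top-event probability by enumeration and a diagnostic posterior by Bayes' rule — the kind of richer information the BN gives over a plain FT:

```python
from itertools import product

# Hypothetical basic-event priors (root nodes of the BN).
priors = {"E1": 0.1, "E2": 0.2, "E3": 0.05}

def top_state(e1, e2, e3):
    """Deterministic gate logic: TOP = OR(AND(E1, E2), E3)."""
    return (e1 and e2) or e3

def joint(e1, e2, e3):
    """Joint probability of one assignment of the basic events."""
    p = 1.0
    for name, val in (("E1", e1), ("E2", e2), ("E3", e3)):
        p *= priors[name] if val else 1 - priors[name]
    return p

# Forward (reliability) query: P(TOP) by summing over all states.
p_top = sum(joint(*s) for s in product([0, 1], repeat=3) if top_state(*s))

# Diagnostic query by Bayes' rule: P(E3 | TOP occurred).
p_e3_given_top = sum(joint(*s) for s in product([0, 1], repeat=3)
                     if top_state(*s) and s[2]) / p_top
```

The diagnostic posterior is exactly what a fault tree alone cannot produce: conditioning on the observed top event re-ranks the basic events as candidate causes.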
Chen, Yingyi; Zhen, Zhumi; Yu, Huihui; Xu, Jing
2017-01-14
In the Internet of Things (IoT), equipment used for aquaculture is often deployed in outdoor ponds located in remote areas. Faults occur frequently in these harsh environments, and the staff generally lack professional knowledge and pay little attention to these areas. Once faults happen, expert personnel must carry out maintenance outdoors. Therefore, this study presents an intelligent method for fault diagnosis based on fault tree analysis and a fuzzy neural network. In the proposed method, first, the fault tree presents a logic structure of fault symptoms and faults. Second, rules extracted from the fault trees avoid duplication and redundancy. Third, the fuzzy neural network is applied to train the mapping between fault symptoms and faults. In the aquaculture IoT, one fault can cause various fault symptoms, and one symptom can be caused by a variety of faults. Four fault relationships are obtained. Results show that one-symptom-to-one-fault, two-symptoms-to-two-faults, and two-symptoms-to-one-fault relationships can be rapidly diagnosed with high precision, while one-symptom-to-two-faults patterns perform less well but are still worth researching. This model implements diagnosis for most kinds of faults in the aquaculture IoT.
Reliability computation using fault tree analysis
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
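Handling the same basic failure in more than one fault path can be illustrated by conditioning on the shared event (total-probability factoring), in the spirit of the conditional probabilities the abstract mentions; the tree TOP = OR(AND(A, C), AND(B, C)) and all probabilities below are hypothetical, and this is a sketch of the idea rather than Chelson's derivation.

```python
def top_given(c_failed, p_a, p_b):
    """P(TOP) for TOP = OR(AND(A, C), AND(B, C)) with C's state fixed."""
    if not c_failed:
        return 0.0                      # both paths require C
    return p_a + p_b - p_a * p_b        # given C, TOP = A or B

def top_probability(p_a, p_b, p_c):
    """Condition on the repeated event C and apply total probability."""
    return (p_c * top_given(True, p_a, p_b)
            + (1 - p_c) * top_given(False, p_a, p_b))
```

Treating the two occurrences of C as if they were independent events, 1 - (1 - p_a*p_c)(1 - p_b*p_c), would overestimate the top probability; the conditioning step is what keeps the repeated event from being double-counted.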
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Patrick; Fercho, Steven; Perkin, Doug
2015-06-01
The engineering and studies phase of the Glass Buttes project was aimed at reducing risk during the early stages of geothermal project development. The project's inclusion of high resolution geophysical and geochemical surveys allowed Ormat to evaluate the value of these surveys both independently and in combination to quantify the most valuable course of action for exploration in an area where structure, permeability, and temperature are the most pressing questions. The sizes of the thermal anomalies at Glass Buttes are unusually large. Over the course of Phase I, Ormat acquired high resolution LIDAR data to accurately map fault manifestations at the surface and collected detailed gravity and aeromagnetic surveys to map subsurface structural features. In addition, Ormat collected airborne hyperspectral data to assist with mapping the rock petrology and mineral alteration assemblages along Glass Buttes faults and a magnetotelluric (MT) survey to better constrain the structures at depth. Direct and indirect identification of alteration assemblages reveals not only the geochemical character and temperature of the causative hydrothermal fluids but can also constrain areas of upflow along specific fault segments. All five datasets were merged along with subsurface lithologies and temperatures to predict the most likely locations for high permeability and hot fluids. The Glass Buttes temperature anomalies include two areas, totaling 60 km2 (23 mi2), of measured temperature gradients over 165 °C/km (10 °F/100 ft). The Midnight Point temperature anomaly includes the Strat-1 well, with 90 °C (194 °F) at 603 m (1981 ft) and a 164 °C/km (10 °F/100 ft) temperature gradient at bottom hole, and the GB-18 well, with 71 °C (160 °F) at 396 m (1300 ft) and a 182 °C/km (11 °F/100 ft) gradient. The primary area of alteration and elevated temperature occurs near major fault intersections associated with the Brothers Fault Zone and Basin and Range systems.
Evidence for faulting is observed in each data set as follows. Field observations include fault plane orientations, complicated fault intersections, and hydrothermal alteration apparently pre-dating basalt flows. Geophysical anomalies include large, linear gradients in gravity and aeromagnetic data, with magnetic lows possibly associated with alteration. Resistivity low anomalies also appear to have offsets associated with faulting. Hyperspectral and XRF data identified alteration and individual volcanic flow units, respectively. When incorporated into a 3D geologic model, the fault intersections near the highest proven temperature and geophysical anomalies provide the first priority targets at Midnight Point. Ormat geologists selected the Midnight Point 52-33 drilling target based on a combination of pre-existing drilling data, geologic field work, geophysical interpretation, and geochemical analysis. Deep temperatures in well 52-33 were lower than anticipated. Temperature gradients in the well mirrored those found in historical drilling, but they decreased below 1500 ft and were isothermal below 2000 ft.
Object-oriented fault tree evaluation program for quantitative analyses
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1988-01-01
Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
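The object representation described above can be sketched in a modern object-oriented language. The Python classes below are an illustration of the idea, not the Flavors implementation: each node stores its own reliability data and retains its intermediate result during tree reduction, so the tree can be interrogated after evaluation; statistical independence of gate inputs is assumed.

```python
class Event:
    """Basic event: carries its own reliability data, as in the object model."""
    def __init__(self, name, prob):
        self.name, self.prob = name, prob
        self.result = None            # intermediate result retained on the object
    def evaluate(self):
        self.result = self.prob
        return self.result

class Gate:
    """AND/OR gate node; evaluation assumes independent inputs."""
    def __init__(self, name, kind, children):
        self.name, self.kind, self.children = name, kind, children
        self.result = None
    def evaluate(self):
        ps = [c.evaluate() for c in self.children]
        if self.kind == "AND":
            p = 1.0
            for x in ps:
                p *= x
        elif self.kind == "OR":
            q = 1.0
            for x in ps:
                q *= 1 - x
            p = 1 - q
        else:
            raise ValueError(self.kind)
        self.result = p               # stored for later interrogation
        return p
```

After `top.evaluate()`, every gate's `result` field holds the sub-tree probability computed on the way up, mirroring how the Flavors objects keep intermediate reduction results available to the rest of the environment.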
FT overexpression induces precocious flowering and normal reproductive development in Eucalyptus.
Klocko, Amy L; Ma, Cathleen; Robertson, Sarah; Esfandiari, Elahe; Nilsson, Ove; Strauss, Steven H
2016-02-01
Eucalyptus trees are among the most important species for industrial forestry worldwide. However, as with most forest trees, flowering does not begin for one to several years after planting which can limit the rate of conventional and molecular breeding. To speed flowering, we transformed a Eucalyptus grandis × urophylla hybrid (SP7) with a variety of constructs that enable overexpression of FLOWERING LOCUS T (FT). We found that FT expression led to very early flowering, with events showing floral buds within 1-5 months of transplanting to the glasshouse. The most rapid flowering was observed when the cauliflower mosaic virus 35S promoter was used to drive the Arabidopsis thaliana FT gene (AtFT). Early flowering was also observed with AtFT overexpression from a 409S ubiquitin promoter and under heat induction conditions with Populus trichocarpa FT1 (PtFT1) under control of a heat-shock promoter. Early flowering trees grew robustly, but exhibited a highly branched phenotype compared to the strong apical dominance of nonflowering transgenic and control trees. AtFT-induced flowers were morphologically normal and produced viable pollen grains and viable self- and cross-pollinated seeds. Many self-seedlings inherited AtFT and flowered early. FT overexpression-induced flowering in Eucalyptus may be a valuable means for accelerating breeding and genetic studies as the transgene can be easily segregated away in progeny, restoring normal growth and form. © 2015 Society for Experimental Biology, Association of Applied Biologists and John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Martensen, Anna L.; Butler, Ricky W.
1987-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double-precision floating-point arithmetic) to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
The Fault Tree Compiler (FTC): Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1989-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
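For independent inputs, the five FTC gate types reduce to simple formulas, sketched below by direct enumeration. Function and gate names are my own, the EXCLUSIVE OR gate is interpreted here as "exactly one input occurs", and this is an illustration of the gate semantics rather than FTC's FORTRAN solution technique.

```python
from itertools import combinations

def _prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

def gate_probability(kind, probs, m=None):
    """Output probability of one gate over independent input probabilities."""
    n = len(probs)
    if kind == "AND":
        return _prod(probs)
    if kind == "OR":
        return 1 - _prod(1 - p for p in probs)
    if kind == "INVERT":
        (p,) = probs                      # single input
        return 1 - p
    if kind == "XOR":                     # exactly one input event occurs
        return sum(probs[i] * _prod(1 - probs[j] for j in range(n) if j != i)
                   for i in range(n))
    if kind == "M_OF_N":                  # at least m of the n inputs occur
        total = 0.0
        for k in range(m, n + 1):
            for idx in combinations(range(n), k):
                inside = set(idx)
                total += _prod(probs[i] if i in inside else 1 - probs[i]
                               for i in range(n))
        return total
    raise ValueError(kind)
```

The sensitivity analysis the abstract describes amounts to re-evaluating `gate_probability` while sweeping one entry of `probs` over a range and plotting the results.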
A regional 17-18 Ma thermal event in southwestern Arizona
NASA Technical Reports Server (NTRS)
Brooks, W. E.
1985-01-01
A regional thermal event in southwestern Arizona 17 to 18 Ma ago is suggested by discordances between fission track (FT) and K-Ar dates in Tertiary volcanic and sedimentary rocks, by the abundance of primary hydrothermal orthoclase in quenched volcanic rocks, and by the concentration of Mn, Ba, Cu, Ag, and Au deposits near detachment faults. A high conodont alteration index (CAI) of 3 to 7 is found in Paleozoic rocks of southwestern Arizona. The high CAI may have been caused by this mid-Tertiary thermal event. Resetting of temperature-sensitive FT dates to 17 to 18 Ma with respect to K-Ar dates of 24 and 20 Ma has occurred in upper-plate volcanic rocks at the Harcuvar and Picacho Peak detachments. Discordances between FT and K-Ar dates are most pronounced at detachment faults. However, on a regional scale, FT dates from volcanic and sedimentary rocks approach the 17 to 18 Ma event in areas away from known detachment faults. Effects of detachment faulting on the K-Ar system suggest that dates of correlative rocks will be younger as the detachment fault is approached.
Systems Theoretic Process Analysis Applied to an Offshore Supply Vessel Dynamic Positioning System
2016-06-01
additional safety issues that were either not identified or inadequately mitigated through the use of Fault Tree Analysis and Failure Modes and Effects Analysis. [Table-of-contents fragments: Techniques; 1.3.1 Fault Tree Analysis; 3.2 Fault Tree Analysis Comparison]
Effects of various spacings on loblolly pine growth
W.E. Walmer; E.G. Owens; J.R. Jorgensen
1975-01-01
Four spacings of loblolly pine trees (6 by 6 ft, 8 by 8 ft, 10 by 10 ft, 12 by 12 ft) were studied for 15 years at the Calhoun Experimental Forest near Union, South Carolina. At 15 years the two wider spacings produced trees of greater height, larger diameter, and more sawtimber volume, while the two narrower spacings favored basal area growth and total cubic volume...
An overview of the phase-modular fault tree approach to phased mission system analysis
NASA Technical Reports Server (NTRS)
Meshkat, L.; Xing, L.; Donohue, S. K.; Ou, Y.
2003-01-01
In this paper we look at how fault tree analysis (FTA), a primary means of performing reliability analysis of phased mission systems (PMS), can meet this challenge, by presenting an overview of the modular approach to solving fault trees that represent PMS.
Try Fault Tree Analysis, a Step-by-Step Way to Improve Organization Development.
ERIC Educational Resources Information Center
Spitzer, Dean
1980-01-01
Fault Tree Analysis, a systems safety engineering technology used to analyze organizational systems, is described. Explains the use of logic gates to represent the relationship between failure events, qualitative analysis, quantitative analysis, and effective use of Fault Tree Analysis. (CT)
Fault Tree Analysis: A Research Tool for Educational Planning. Technical Report No. 1.
ERIC Educational Resources Information Center
Alameda County School Dept., Hayward, CA. PACE Center.
This ESEA Title III report describes fault tree analysis and assesses its applicability to education. Fault tree analysis is an operations research tool which is designed to increase the probability of success in any system by analyzing the most likely modes of failure that could occur. A graphic portrayal, which has the form of a tree, is…
Geologic and geophysical investigations of Climax Stock intrusive, Nevada
,
1983-01-01
The Climax stock is a composite granitic intrusive of Cretaceous age, composed of quartz monzonite and granodiorite, which intrudes rocks of Paleozoic and Precambrian age. Tertiary volcanic rocks, consisting of ash-flow and ash-fall tuffs, and tuffaceous sedimentary rocks overlie the sedimentary rocks and the stock. Erosion has removed much of the Tertiary volcanic rocks. Hydrothermal alteration of quartz monzonite and granodiorite is found mainly along joints and faults and varies from location to location. The Paleozoic carbonate rocks have been thermally and metasomatically altered to marble and tactite as much as 457 m (1,500 ft) from the contact with the stock, although minor discontinuous metasomatic effects are noted in all rocks out to 914 m (3,000 ft). Three major faults which define the Climax area structurally are the Tippinip, Boundary and Yucca faults. North of the junction of the Boundary and Yucca faults, the faults are collectively referred to as the Butte fault. The dominant joint sets and their average attitudes are N. 32° W., 22° NE.; N. 60° W., vertical; and N. 35° E., vertical. Joints in outcrop are weathered and generally open, but in the subsurface the joints are commonly filled and healed with secondary minerals. The location of the water table and the degree of saturation of the granitic rocks are presently unknown. Measurement from drill holes indicated that depth to perched water levels ranges from 30 to 244 m (100-800 ft). Recent field investigations have shown the contact between the Pogonip marble and the granodiorite is a depositional contact rather than a fault as previously mapped. The thickness of the weathered granodiorite is estimated to be 8 to 46 m (25 to 150 ft).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mace, R.E.; Nance, H.S.; Laubach, S.E.
1995-06-01
Faults and joints are conduits for ground-water flow and targets for horizontal drilling in the petroleum industry. Spacing and size distribution are rarely predicted accurately by current structural models or documented adequately by conventional borehole or outcrop samples. Tunnel excavations present opportunities to measure fracture attributes in continuous subsurface exposures. These fracture measurements can be used to improve structural models, guide interpretation of conventional borehole and outcrop data, and geostatistically quantify spatial and spacing characteristics for comparison to outcrop data or for generating distributions of fractures for numerical flow and transport modeling. Structure maps of over 9 mi of nearly continuous tunnel excavations in the Austin Chalk at the Superconducting Super Collider (SSC) site in Ellis County, Texas, provide a unique database of fault and joint populations for geostatistical analysis. Observationally, small faults (<10 ft of throw) occur in clusters or swarms that have as many as 24 faults; fault swarms are as much as 2,000 ft wide and appear to be on average 1,000 ft apart, and joints are in swarms spaced 500 to more than 21,000 ft apart. Semi-variograms show varying degrees of spatial correlation. These variograms have structured sills that correlate directly to highs and lows in fracture frequency observed in the tunnel. Semi-variograms generated with respect to fracture spacing and number also have structured sills, but tend not to show any near-field correlation. The distribution of fault spacing can be described with a negative exponential, which suggests a random distribution. However, there is clearly some structure and clustering in the spacing data, as shown by running averages and variograms, which implies that a number of different methods should be utilized to characterize fracture spacing.
Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.
Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C
2015-06-01
An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and aiming at more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, that is, ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The present developed fault tree is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers to develop more refined and adequate control activities against FMD outbreaks and to find optimum strategies for rapid control. Further application using the current tree will be one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach of intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component that affects the system reliability. Here, weakest t-norm based intuitionistic fuzzy fault tree analysis is presented to calculate the fault interval of system components by integrating expert knowledge and experience in terms of the possibility of failure of bottom events. It applies fault-tree analysis, the α-cut of intuitionistic fuzzy sets, and Tω (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The result of the proposed method is compared with existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
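A frequently cited property of the weakest t-norm Tω is that arithmetic built on it keeps triangular fuzzy numbers triangular while combining spreads by max rather than by sum, so repeated operations accumulate far less fuzziness than sup-min (extension principle) arithmetic. The sketch below illustrates that contrast for addition in my own notation; it does not reproduce the paper's intuitionistic α-cut machinery.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number: (mode, left spread, right spread)."""
    mode: float
    left: float
    right: float

def tw_add(a: TFN, b: TFN) -> TFN:
    """Addition under the weakest t-norm: spreads combine by max,
    limiting the accumulation of fuzziness across many operations."""
    return TFN(a.mode + b.mode, max(a.left, b.left), max(a.right, b.right))

def min_add(a: TFN, b: TFN) -> TFN:
    """Standard sup-min (extension principle) addition for comparison:
    spreads add, so fuzziness grows with every operation."""
    return TFN(a.mode + b.mode, a.left + b.left, a.right + b.right)
```

Over a long chain of gate evaluations, the Tω result stays as sharp as the fuzziest single operand, which is the motivation the abstract gives for using the weakest t-norm in fault interval calculations.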
Software For Fault-Tree Diagnosis Of A System
NASA Technical Reports Server (NTRS)
Iverson, Dave; Patterson-Hine, Ann; Liao, Jack
1993-01-01
The Fault Tree Diagnosis System (FTDS) computer program is an automated diagnostic program that identifies likely causes of a specified failure on the basis of information represented in system-reliability mathematical models known as fault trees. It is a modified implementation of the failure-cause-identification phase of Narayanan and Viswanadham's methodology for knowledge acquisition and reasoning in analyzing system failures. The knowledge base of if/then rules is replaced with an object-oriented fault-tree representation. This enhancement yields more efficient identification of the causes of failures and enables dynamic updating of the knowledge base. Written in C, C++, and Common LISP.
Fault tree models for fault tolerant hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Tuazon, Jezus O.
1991-01-01
Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
Product Support Manager Guidebook
2011-04-01
package is being developed using supportability analysis concepts such as Failure Mode, Effects, and Criticality Analysis (FMECA), Fault Tree Analysis (FTA), Level of Repair Analysis (LORA), Condition Based Maintenance Plus (CBM+), Maintenance Task Analysis (MTA), and Failure Reporting and Corrective Action System (FRACAS).
Proactive Fault Tolerance Using Preemptive Migration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Vallee, Geoffroy R; Naughton, III, Thomas J
2009-01-01
Proactive fault tolerance (FT) in high-performance computing is a concept that prevents compute node failures from impacting running parallel applications by preemptively migrating application parts away from nodes that are about to fail. This paper provides a foundation for proactive FT by defining its architecture and classifying implementation options. This paper further relates prior work to the presented architecture and classification, and discusses the challenges ahead for needed supporting technologies.
MIRAP, microcomputer reliability analysis program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jehee, J.N.T.
1989-01-01
A program for a microcomputer is outlined that can determine minimal cut sets from a specified fault tree logic. The speed and memory limitations of the microcomputers on which the program is implemented (Atari ST and IBM) are addressed by reducing the fault tree's size and by storing the cut set data on disk. Extensive, well-proven fault tree restructuring techniques, such as the identification of sibling events and of independent gate events, reduce the fault tree's size but do not alter its logic. New methods are used for the Boolean reduction of the fault tree logic. Special criteria for combining events in the 'AND' and 'OR' logic avoid the creation of many subsuming cut sets, which would all cancel out due to existing cut sets. Figures and tables illustrate these methods. 4 refs., 5 tabs.
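The cut-set machinery described in this abstract (and in the DG TO FT tool above) can be sketched in a few lines: expand a small AND/OR tree into cut sets, then apply absorption to drop subsuming (non-minimal) sets. This is a generic MOCUS-style illustration with invented gate and event names, not the MIRAP implementation.

```python
from itertools import product

def cut_sets(node, tree):
    """Return a list of cut sets (frozensets of basic events) for `node`.
    A name absent from `tree` is treated as a basic event."""
    if node not in tree:
        return [frozenset([node])]
    kind, children = tree[node]
    child_sets = [cut_sets(c, tree) for c in children]
    if kind == "OR":                      # OR: union of the children's cut sets
        return [cs for sets in child_sets for cs in sets]
    # AND: every combination of one cut set per child, merged together
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize(sets):
    """Drop any cut set that is a proper superset of another (absorption)."""
    unique = set(sets)
    return sorted((s for s in unique if not any(o < s for o in unique)),
                  key=lambda s: (len(s), sorted(s)))

# Hypothetical tree: TOP fails if C fails, or if A and (B or C) fail together.
tree = {
    "TOP": ("OR", ["G1", "C"]),
    "G1":  ("AND", ["A", "G2"]),
    "G2":  ("OR", ["B", "C"]),
}
mcs = minimize(cut_sets("TOP", tree))
print([sorted(s) for s in mcs])   # [['C'], ['A', 'B']]
```

Note how {A, C} is discarded: it subsumes the existing cut set {C}, exactly the cancellation the abstract's combining criteria are designed to avoid creating in the first place.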
The FTA Method And A Possibility Of Its Application In The Area Of Road Freight Transport
NASA Astrophysics Data System (ADS)
Poliaková, Adela
2015-06-01
The Fault Tree process utilizes logic diagrams to portray and analyse potentially hazardous events. Three basic symbols (logic gates) are adequate for diagramming any fault tree. However, additional recently developed symbols can be used to reduce the time and effort required for analysis. A fault tree is a graphical representation of the relationship between certain specific events and the ultimate undesired event (2). This paper describes the basic method of Fault Tree Analysis and provides a practical view of its possible application to quality improvement in a road freight transport company.
Pajon, Melanie; Febres, Vicente J; Moore, Gloria A
2017-08-30
In citrus, the transition from the juvenile to the mature phase is marked by the capability of a tree to flower and fruit consistently. The long period of juvenility in citrus severely impedes the use of genetics-based strategies to improve fruit quality, disease resistance, and responses to abiotic environmental factors. One of the genes whose expression signals flower development in many plant species is FLOWERING LOCUS T (FT). In this study, expression levels of the flowering genes CiFT1, CiFT2 and CiFT3 were determined using reverse-transcription quantitative real-time PCR in citrus trees over a 1-year period in Florida. Distinct genotypes of citrus trees of different ages were used. In mature trees of pummelo (Citrus grandis Osbeck) and 'Pineapple' sweet orange (Citrus sinensis (L.) Osbeck), the expression of all three CiFT genes was coordinated and significantly higher in April, after flowering was over, regardless of whether the trees were in the greenhouse or in the field. Interestingly, immature 'Pineapple' seedlings showed significantly high levels of CiFT3 expression in April and June, while CiFT1 and CiFT2 were highest in June; hence their induction was not simultaneous, as it was in mature plants. In mature citrus trees the induction of CiFT expression in leaves occurs at the end of spring, after flowering has taken place, suggesting it is not associated with dormancy interruption and further flower bud development but is probably involved in shoot apex differentiation and flower bud determination. CiFTs were also seasonally induced in immature seedlings, indicating that additional factors must be suppressing flowering induction and that their expression has other functions.
Fault Tree Analysis: Its Implications for Use in Education.
ERIC Educational Resources Information Center
Barker, Bruce O.
This study introduces the concept of Fault Tree Analysis as a systems tool and examines the implications of Fault Tree Analysis (FTA) as a technique for isolating failure modes in educational systems. A definition of FTA and discussion of its history, as it relates to education, are provided. The step by step process for implementation and use of…
Preventing medical errors by designing benign failures.
Grout, John R
2003-07-01
One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event: patient scalded while bathing. The second fault tree has a benign event: no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not guarantee that the changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
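The scald-valve example can be reduced to two Boolean expressions, one per fault tree, showing how the forcing function rewires the logic. The event names and the valve behavior below are an invented simplification of the scenario, not the article's actual trees.

```python
# Before the design change, over-temperature water plus a bathing patient
# yields the undesired top event; with a (hypothetical) scald valve that
# shuts off flow on over-temperature, the same causes yield "no water".

def scalded(water_too_hot, patient_bathing, scald_valve_fitted):
    """Undesired top event: both causes present AND no forcing function."""
    return water_too_hot and patient_bathing and not scald_valve_fitted

def no_water(water_too_hot, scald_valve_fitted):
    """Benign top event: the valve converts over-temperature into loss of flow."""
    return water_too_hot and scald_valve_fitted

# Same causal inputs, different outcomes before and after the change:
print(scalded(True, True, scald_valve_fitted=False))  # True: patient scalded
print(scalded(True, True, scald_valve_fitted=True))   # False
print(no_water(True, scald_valve_fitted=True))        # True: benign failure
```

The error cause (water too hot) is not eliminated; it is rerouted so that it now triggers the benign tree instead of the undesirable one, which is exactly the "changing the logic of a process" framing used above.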
Salehifar, Mehdi; Moreno-Equilaz, Manuel
2016-01-01
Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet the high reliability demands of electric vehicle applications. The voltage-source inverter (VSI) supplying the motor is subject to open-circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set-model predictive control (FCS-MPC) is developed to implement the fault-tolerant control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. An FD method based on information already available from the control block is proposed; this method is simple, robust to common transients in the motor, and able to localize multiple open-circuit faults. The proposed FD and FT control algorithms are embedded in a five-phase BLDC motor drive. In order to validate the theory presented, simulation and experimental results are obtained on a five-phase two-level VSI supplying a five-phase BLDC motor.
Fault reactivation and seismicity risk from CO2 sequestration in the Chinshui gas field, NW Taiwan
NASA Astrophysics Data System (ADS)
Sung, Chia-Yu; Hung, Jih-Hao
2015-04-01
The Chinshui gas field, located in the fold-thrust belt of western Taiwan, is a depleted reservoir. Recently, CO2 sequestration has been planned at shallower depths of this structure. CO2 injection into the reservoir will generate high fluid pressure and may trigger slip on reservoir-bounding faults. We present detailed in-situ stresses from deep wells in the Chinshui gas field and evaluate the risk of fault reactivation for underground CO2 injection. The magnitudes of vertical stress (Sv), formation pore pressure (Pf) and minimum horizontal stress (Shmin) were obtained from formation density logs, repeat formation tests, sonic logs, mud weight, and hydraulic fracturing data, including leak-off tests. The magnitude of maximum horizontal stress (SHmax) was constrained by the frictional limit of critically stressed faults. Results show that the vertical stress gradient is about 23.02 MPa/km (1.02 psi/ft), and the minimum horizontal stress gradient is 18.05 MPa/km (0.80 psi/ft). Formation pore pressures are hydrostatic at depths shallower than 2 km, and increase with a gradient of 16.62 MPa/km (0.73 psi/ft). The ratio of fluid pressure to overburden pressure (λp) is 0.65. The upper bound of the maximum horizontal stress, constrained by a strike-slip fault stress regime (SHmax>Sv>Shmin) and a coefficient of friction of μ=0.6, is about 18.55 MPa/km (0.82 psi/ft). The orientation of the maximum horizontal stress was calculated from four-arm caliper logs using the methodology suggested by the World Stress Map (WSM). The mean azimuth of the preferred orientation of borehole breakouts is ~065°; consequently, the maximum horizontal stress axis trends ~155° and is sub-parallel to the far-field plate-convergence direction. Geomechanical analyses of the reactivation of pre-existing faults were performed using 3DStress and TrapTester software.
Under the current in-situ stress, the middle block fault has a higher slip tendency, but it is still less than the frictional coefficient of 0.6, a common threshold value for motion on incohesive faults. The results also indicate that CO2 injection in the Chinshui gas field will not compromise the stability of faults.
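The slip-tendency check behind this conclusion (Ts = τ/σn' compared with μ = 0.6) can be sketched with a 2-D Mohr-circle resolution. The depth, SHmax gradient, pore-pressure gradient, and fault angle below are illustrative assumptions, not the paper's 3DStress/TrapTester analysis; only the Shmin gradient and the μ = 0.6 threshold are taken from the abstract.

```python
from math import sin, cos, radians

def slip_tendency(s1, s3, pf, theta_deg):
    """Effective slip tendency on a plane whose normal makes angle theta
    with the sigma-1 axis (2-D Mohr construction, stresses in MPa)."""
    two_theta = radians(2.0 * theta_deg)
    mean, dev = (s1 + s3) / 2.0, (s1 - s3) / 2.0
    sigma_n_eff = mean + dev * cos(two_theta) - pf  # effective normal stress
    tau = dev * sin(two_theta)                      # shear stress
    return tau / sigma_n_eff

depth_km = 2.0                 # assumed depth of the injection interval
s1 = 25.0 * depth_km           # MPa; assumed SHmax gradient (hypothetical)
s3 = 18.05 * depth_km          # MPa; Shmin gradient from the abstract
pf = 9.8 * depth_km            # MPa; assumed hydrostatic pore pressure
ts = slip_tendency(s1, s3, pf, theta_deg=60.0)
print(f"Ts = {ts:.2f}; exceeds mu = 0.6: {ts > 0.6}")
```

Raising `pf` to simulate injection shrinks the effective normal stress and pushes Ts toward the 0.6 threshold, which is the mechanism of concern for injection-induced reactivation.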
Fault Tree Analysis Application for Safety and Reliability
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in these tools identifies root causes in system components, but when software is identified as a root cause, the tools do not build trees into the software component. No commercial software tools have been built specifically for the development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying causes of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that, applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
Performance of a cut-to-length harvester in a single-tree and group selection cut
Neil K. Huyler; Chris LeDoux
1999-01-01
Presents production and cost data for a mechanized cut-to-length (CTL) harvester used in a single-tree and group-selection cut on the Groton State Forest in central Vermont. For trees whose average volume (size) was 7 to 18 ft3, production ranged from 464 to 734 ft3 per productive machine hour (PMH). The cycle time for processing trees into bunches to forward to a...
Fan, Hailan; McGuire, Mary Anne; Teskey, Robert O
2017-11-01
Carbon dioxide (CO2) released from respiring cells in the stems of trees (RS) can diffuse radially to the atmosphere (EA) or dissolve in xylem sap and move internally in the tree (FT). Previous studies have observed that EA decreases as stem or branch diameter increases, but the cause of this relationship has not been determined, nor has the relationship been confirmed between stem diameter and RS, which includes both EA and FT. In this study, for the first time, the mass balance technique was used to estimate RS in stems of Liriodendron tulipifera L. trees of different diameters, ranging from 16 to 60 cm, growing on the same site. The magnitude of the component fluxes scaled with tree size. Among the five trees, the contribution of EA to RS decreased linearly with increasing stem diameter and sapwood area, while the contribution of FT to RS increased linearly with stem diameter and sapwood area. For the smallest tree EA was 86% of RS, but it was only 46% of RS in the largest tree. As tree size increased, a greater proportion of respired CO2 dissolved in sap and remained within the tree. Due to the increase in FT with tree size, trees of different sizes had the same RS even though they had different EA. This appears to explain why the EA of stems and branches decreases as their size increases.
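The mass-balance partitioning described above reduces to simple arithmetic: total stem respiration RS is the sum of radial efflux EA and internal transport FT (storage change neglected in this sketch). The flux values below are invented to mirror the reported 86% and 46% EA shares; they are not the paper's measurements.

```python
def ea_share(ea, ft):
    """Fraction of stem respiration escaping radially, under the
    simplified mass balance RS = EA + FT (arbitrary flux units)."""
    rs = ea + ft
    return ea / rs

small_tree = ea_share(ea=8.6, ft=1.4)   # EA dominates in the small stem
large_tree = ea_share(ea=4.6, ft=5.4)   # FT dominates in the large stem
print(f"EA share: small {small_tree:.0%}, large {large_tree:.0%}")
# EA share: small 86%, large 46%
```

Both hypothetical trees have the same RS (10 units), illustrating the paper's point that equal respiration can coexist with very different surface efflux.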
Mahdiani, Hamid Reza; Fakhraie, Sied Mehdi; Lucas, Caro
2012-08-01
Reliability should be identified as the most important challenge in future nano-scale very large scale integration (VLSI) implementation technologies for the development of complex integrated systems. Normally, fault tolerance (FT) in a conventional system is achieved by increasing its redundancy, which implies higher implementation costs and lower performance, sometimes making it infeasible. In contrast to such custom approaches, a new class of applications is categorized in this paper which is inherently capable of absorbing some degree of vulnerability and providing FT based on its natural properties. Neural networks are good examples of imprecision-tolerant applications. We have also proposed a new class of FT techniques, called relaxed fault-tolerant (RFT) techniques, which are developed for VLSI implementation of imprecision-tolerant applications. The main advantage of RFT techniques with respect to traditional FT solutions is that they exploit the inherent FT of different applications to reduce their implementation costs while improving their performance. To show the applicability as well as the efficiency of the RFT method, experimental results for the implementation of a computationally intensive face-recognition neural network and its corresponding RFT realization are presented. The results demonstrate the promising higher performance of artificial neural network VLSI solutions for complex applications in faulty nano-scale implementation environments.
Surface geology of the Jeptha Knob cryptoexplosion structure, Shelby County, Kentucky
Cressman, Earle Rupert
1981-01-01
The Jeptha Knob cryptoexplosion structure, described by Bucher in 1925, was remapped in 1973 as part of the U.S. Geological Survey and the Kentucky Geological Survey cooperative mapping program. The knob is in the western part of the Blue Grass region. Hilltops in the rolling farmland adjacent to the knob are underlain by the nearly flat-lying Grant Lake and Callaway Creek Limestones of middle Late Ordovician age, and the valleys are cut in interbedded limestone and shale of the Clays Ferry Formation of late Middle and early Late Ordovician age. Precambrian basement is estimated to be 4,000 ft below the surface. The mapped area is 50 miles west of the crest of the Cincinnati arch; the regional dip is westward at 16 ft per mile. The 38th parallel lineament is 50 miles to the south. The structure, about 14,000 ft in diameter, consists of a central area 6,300 ft in diameter of uplifted Clays Ferry Formation surrounded by a belt of annular faults that are divided into segments by radial faults. The gross structure of the Clays Ferry Formation is that of a broad dome, but some evidence indicates that, in detail, the beds are complexly folded. The limestone of the Clays Ferry is brecciated and infiltrated by limonite. The brecciation is confined to single beds, and there is no mixing of fragments from different beds. A small plug of the Logana Member of the Lexington Limestone (Middle Ordovician) has been upfaulted at least 700 ft and emplaced within the Clays Ferry. The central uplift is separated by high-angle and, in places, reverse faults from the belt of annular faulting. The concentric faults in the zone of annular faults are extensional, and the general aspect is of collapse and inward movement. Lenses of breccia are present along many of the concentric faults, but not along the radial faults. At least some of the breccia was injected from below. The youngest beds involved in the faulting are in the Bardstown Member of the Drakes Formation of late Late Ordovician age.
The faulted and brecciated beds are overlain by nearly horizontal dolomite and shale of Early and Middle Silurian age. The basal 5 ft of the oldest Silurian unit, the Brassfield Formation, contains calcarenite and calcirudite composed, in large part, of locally derived fragments from the Upper Ordovician formations. The Jeptha Knob structure was formed in latest Late Ordovician or earliest Early Silurian time. At the time of formation, the area was either very slightly above or very slightly below sea level; the sediments were already largely indurated. At the onset of Silurian deposition, the area of the central uplift was probably a broad shallow depression not more than about 15 ft deep, possibly surrounded by a rim of Upper Ordovician rocks or rock fragments. The origin of the Jeptha Knob structure cannot be determined from the available data. Shatter cones and coesite, considered by many to be definitive criteria for origin by impact, have not been found. On the other hand, geophysical studies indicate that there is no coincident uplift of the basement, and there is no certain relation of Jeptha Knob to any obvious structural trend.
ERIC Educational Resources Information Center
Barker, Bruce O.; Petersen, Paul D.
This paper explores the fault-tree analysis approach to isolating failure modes within a system. Fault tree analysis investigates potentially undesirable events and then looks for failures in sequence that would lead to their occurring. Relationships among these events are symbolized by AND or OR logic gates, AND used when single events must coexist to…
Object-oriented fault tree models applied to system diagnosis
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.
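A minimal object-oriented fault-tree representation in the spirit of this abstract can be sketched as follows; the class names, gates, and failure events are invented for illustration and are not the NASA implementation.

```python
# Each node carries its own evaluation logic, so updating an event object's
# status updates the diagnosis directly, with no rule-base lookup.
class BasicEvent:
    def __init__(self, name, failed=False):
        self.name, self.failed = name, failed
    def occurs(self):
        return self.failed

class Gate:
    def __init__(self, name, kind, children):
        self.name, self.kind, self.children = name, kind, children
    def occurs(self):
        results = (c.occurs() for c in self.children)
        return all(results) if self.kind == "AND" else any(results)

# Hypothetical system: coolant is lost if power fails, or if the pump
# fails while the backup valve is also stuck (an AND gate).
pump = BasicEvent("pump-failure")
valve = BasicEvent("valve-stuck")
power = BasicEvent("power-loss")
top = Gate("coolant-loss", "OR",
           [power, Gate("flow-blocked", "AND", [pump, valve])])

power.failed = True      # dynamic update: record current system status
print(top.occurs())      # True: power loss alone triggers the top event
```

Because the tree is traversed directly through object references, re-evaluating after a status change is a single method call, which is the efficiency advantage over an indexed rule-base lookup that the abstract describes.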
Probabilistic fault tree analysis of a radiation treatment system.
Ekaette, Edidiong; Lee, Robert C; Cooke, David L; Iftody, Sandra; Craighead, Peter
2007-12-01
Inappropriate administration of radiation for cancer treatment can result in severe consequences such as premature death or appreciably impaired quality of life. There has been little study of vulnerable treatment process components and their contribution to the risk of radiation treatment (RT). In this article, we describe the application of probabilistic fault tree methods to assess the probability of radiation misadministration to patients at a large cancer treatment center. We conducted a systematic analysis of the RT process that identified four process domains: Assessment, Preparation, Treatment, and Follow-up. For the Preparation domain, we analyzed possible incident scenarios via fault trees. For each task, we also identified existing quality control measures. To populate the fault trees we used subjective probabilities from experts and compared results with incident report data. Both the fault tree and the incident report analysis revealed simulation tasks to be most prone to incidents, and the treatment prescription task to be least prone to incidents. The probability of a Preparation domain incident was estimated to be in the range of 0.1-0.7% based on incident reports, which is comparable to the mean value of 0.4% from the fault tree analysis using probabilities from the expert elicitation exercise. In conclusion, an analysis of part of the RT system using a fault tree populated with subjective probabilities from experts was useful in identifying vulnerable components of the system, and provided quantitative data for risk management.
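Quantifying a fault tree from basic-event probabilities, as this study does for the Preparation domain, follows standard gate formulas when events are independent: AND gates multiply probabilities, OR gates combine as 1 − Π(1 − p). The tree shape and probabilities below are invented to land near the reported ~0.4% figure; they are not the paper's elicited values.

```python
from math import prod

def gate_prob(kind, child_probs):
    """Top-event probability for one gate over independent children."""
    if kind == "AND":
        return prod(child_probs)
    return 1.0 - prod(1.0 - p for p in child_probs)   # OR gate

# Hypothetical "Preparation incident" tree: an OR of a simulation-task
# slip and a scenario needing two simultaneous failures (error occurs
# AND the quality-control check misses it).
p_sim_error = 0.003
p_unchecked = gate_prob("AND", [0.02, 0.05])
p_top = gate_prob("OR", [p_sim_error, p_unchecked])
print(f"{p_top:.4f}")   # 0.0040, i.e. ~0.4%
```

The AND branch also shows why quality-control measures enter the tree multiplicatively: halving the check's miss rate halves that branch's contribution to the top event.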
2014-03-01
Trees and woody vines are sampled in large plots with 9 m (30 ft) radii. Saplings, shrubs, and herbs are sampled in nested smaller plots with 2 m (5 ft... woody vines in 9 m (30 ft) radius plots and saplings, shrubs, and herbaceous species in 2 m (5 ft) radius plots. In herbaceous meadows, only the 2 m (5...suggests stratifying vegetation by growth forms of trees, shrubs, herbs, and vines and sampling plant communities by using nested circular plots
Reconfigurable tree architectures using subtree oriented fault tolerance
NASA Technical Reports Server (NTRS)
Lowrie, Matthew B.
1987-01-01
An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures, for both single-chip and multichip tree implementations, while reducing redundancy in terms of both spare processors and links. The VLSI layout is O(n) for binary trees and is directly extensible to N-ary trees and to fault tolerance through performance degradation.
Secure Embedded System Design Methodologies for Military Cryptographic Systems
2016-03-31
Fault-Tree Analysis (FTA); Built-In Self-Test (BIST) Introduction Secure access-control systems restrict operations to authorized users via methods...failures in the individual software/processor elements, the question of exactly how unlikely is difficult to answer. Fault-Tree Analysis (FTA) has a...Collins of Sandia National Laboratories for years of sharing his extensive knowledge of Fail-Safe Design Assurance and Fault-Tree Analysis
Rymer, M.J.
2000-01-01
The Coachella Valley area was strongly shaken by the 1992 Joshua Tree (23 April) and Landers (28 June) earthquakes, and both events caused triggered slip on active faults within the area. Triggered slip associated with the Joshua Tree earthquake was on a newly recognized fault, the East Wide Canyon fault, near the southwestern edge of the Little San Bernardino Mountains. Slip associated with the Landers earthquake formed along the San Andreas fault in the southeastern Coachella Valley. Surface fractures formed along the East Wide Canyon fault in association with the Joshua Tree earthquake. The fractures extended discontinuously over a 1.5-km stretch of the fault, near its southern end. Sense of slip was consistently right-oblique, west side down, similar to the long-term style of faulting. Measured offset values were small, with right-lateral and vertical components of slip ranging from 1 to 6 mm and 1 to 4 mm, respectively. This is the first documented historic slip on the East Wide Canyon fault, which was first mapped only months before the Joshua Tree earthquake. Surface slip associated with the Joshua Tree earthquake most likely developed as triggered slip given its 5 km distance from the Joshua Tree epicenter and aftershocks. As revealed in a trench investigation, slip formed in an area with only a thin (<3 m thick) veneer of alluvium in contrast to earlier documented triggered slip events in this region, all in the deep basins of the Salton Trough. A paleoseismic trench study in an area of 1992 surface slip revealed evidence of two and possibly three surface faulting events on the East Wide Canyon fault during the late Quaternary, probably latest Pleistocene (first event) and mid- to late Holocene (second two events). About two months after the Joshua Tree earthquake, the Landers earthquake then triggered slip on many faults, including the San Andreas fault in the southeastern Coachella Valley. 
Surface fractures associated with this event formed discontinuous breaks over a 54-km-long stretch of the fault, from the Indio Hills southeastward to Durmid Hill. Sense of slip was right-lateral; only locally was there a minor (~1 mm) vertical component of slip. Measured dextral displacement values ranged from 1 to 20 mm, with the largest amounts found in the Mecca Hills where large slip values have been measured following past triggered-slip events.
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Bolster, Diogo; Sanchez-Vila, Xavier; Nowak, Wolfgang
2011-05-01
Assessing health risk in hydrological systems is an interdisciplinary field. It relies on the expertise in the fields of hydrology and public health and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties and variabilities present in hydrological, physiological, and human behavioral parameters. Despite significant theoretical advancements in stochastic hydrology, there is still a dire need to further propagate these concepts to practical problems and to society in general. Following a recent line of work, we use fault trees to address the task of probabilistic risk analysis and to support related decision and management problems. Fault trees allow us to decompose the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural divide and conquer approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance, and stage of analysis. Three differences are highlighted in this paper when compared to previous works: (1) The fault tree proposed here accounts for the uncertainty in both hydrological and health components, (2) system failure within the fault tree is defined in terms of risk being above a threshold value, whereas previous studies that used fault trees used auxiliary events such as exceedance of critical concentration levels, and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
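The paper's third point, defining system failure as risk exceeding a threshold rather than as an auxiliary concentration event, lends itself to a Monte Carlo sketch: sample the uncertain hydrological and health inputs, propagate them to a risk value, and count threshold exceedances. The distributions, the risk formula, and the threshold below are all invented illustrations, not the paper's groundwater setting.

```python
import random

random.seed(42)   # reproducible sampling for this sketch

def sample_risk():
    """One draw of a (hypothetical) drinking-water risk chain."""
    concentration = random.lognormvariate(mu=-3.0, sigma=1.0)  # mg/L, uncertain
    intake = random.uniform(1.0, 3.0)          # L/day, behavioral variability
    slope_factor = 0.05                        # (mg/kg-day)^-1, assumed constant
    body_weight = random.gauss(70.0, 10.0)     # kg, physiological variability
    return concentration * intake * slope_factor / body_weight

threshold = 1e-4                               # assumed acceptable-risk level
n = 100_000
p_fail = sum(sample_risk() > threshold for _ in range(n)) / n
print(f"P(risk > threshold) ~ {p_fail:.3f}")
```

Each factor here plays the role of one fault-tree module: its distribution can be refined independently as data become available, without touching the rest of the chain.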
1980-09-01
Spillway. Type Trapezoidal, broad-crested, concrete weir Width 6 ft at bottom, 18 ft at top Crest elevation 994.0 ft Gates None Upstream Channel None... crested concrete weir Length of weir 18 ft (top), 6 ft (bottom) Crest elevation 994 ft Gates None Upstream channel None Downstream channel Earth...instability of the embankment was observed at the time of our inspection. The slopes and crest of the dam have a thick grass cover with scattered brush and
Grauch, V.J.S.; Drenth, Benjamin J.
2009-01-01
High-resolution aeromagnetic data were acquired over the town of Poncha Springs and areas to the northwest to image faults, especially where they are concealed. Because this area has known hot springs, faults or fault intersections at depth can provide pathways for upward migration of geothermal fluids or concentrate fracturing that enhances permeability. Thus, mapping concealed faults provides a focus for follow-up geothermal studies. Fault interpretation was accomplished by synthesizing interpretative maps derived from several different analytical methods, along with preliminary depth estimates. Faults were interpreted along linear aeromagnetic anomalies and breaks in anomaly patterns. Many linear features correspond to topographic features, such as drainages. A few of these are inferred to be fault-related. The interpreted faults show an overall pattern of criss-crossing fault zones, some of which appear to step over where they cross. Faults mapped by geologists suggest similar crossing patterns in exposed rocks along the mountain front. In low-lying areas, interpreted faults show zones of west-northwest-, north-, and northwest-striking faults that cross ~3 km (~2 mi) west-northwest of the town of Poncha Springs. More easterly striking faults extend east from this juncture. The associated aeromagnetic anomalies are likely caused by magnetic contrasts associated with faulted sediments that are concealed less than 200 m (656 ft) below the valley floor. The faults may involve basement rocks at greater depth as well. A relatively shallow (<300 m or <984 ft), faulted basement block is indicated under basin-fill sediments just north of the hot springs and south of the town of Poncha Springs.
Planning effectiveness may grow on fault trees.
Chow, C W; Haddad, K; Mannino, B
1991-10-01
The first step of a strategic planning process--identifying and analyzing threats and opportunities--requires subjective judgments. By using an analytical tool known as a fault tree, healthcare administrators can reduce the unreliability of subjective decision making by creating a logical structure for problem solving and decision making. A case study of 11 healthcare administrators showed that an analysis technique called prospective hindsight can add to a fault tree's ability to improve a strategic planning process.
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Fault Tree Analysis (FTA) can be used for technology transfer when the relevant problem (called the 'top event' in FTA) is solved in a technology centre and the results are diffused to interested parties (usually Small and Medium Enterprises, SMEs) that lack the proper equipment and the required know-how to solve the problem on their own. Nevertheless, there is a significant drawback in this procedure: the information usually provided by the SMEs to the technology centre, about production conditions and corresponding quality characteristics of the product, and (sometimes) the relevant expertise in the Knowledge Base of this centre, may be inadequate to form a complete fault tree. Since such cases are quite frequent in practice, we have developed a methodology for transforming an incomplete fault tree into an Ishikawa diagram, which is more flexible and less strict in establishing causal chains because it uses a surface phenomenological level with a limited number of fault categories. On the other hand, such an Ishikawa diagram can be extended to simulate a fault tree as relevant knowledge increases. An implementation of this transformation, referring to the anodization of aluminium, is presented.
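The direction of the transformation can be illustrated by collapsing a fault tree's basic events onto the surface-level cause categories of an Ishikawa (fishbone) diagram. The anodization events and the category mapping below are invented examples, not the paper's methodology or data.

```python
# Hypothetical mapping from basic events to classic fishbone branches.
CATEGORIES = {
    "bath temperature drift": "Machine",
    "electrolyte contamination": "Material",
    "wrong current density": "Method",
    "operator skips rinse step": "Man",
    "ambient humidity swing": "Environment",
}

def to_ishikawa(basic_events):
    """Group basic events under fishbone branches; causes without a known
    category go to 'Unclassified' instead of breaking the diagram."""
    diagram = {}
    for event in basic_events:
        branch = CATEGORIES.get(event, "Unclassified")
        diagram.setdefault(branch, []).append(event)
    return diagram

events = ["bath temperature drift", "wrong current density",
          "operator skips rinse step"]
for branch, causes in sorted(to_ishikawa(events).items()):
    print(f"{branch}: {causes}")
```

The 'Unclassified' branch reflects the paper's motivation: events whose causal chain is too incomplete for a strict fault tree can still be parked at the phenomenological level and promoted back into tree structure as knowledge accumulates.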
Block 25 field, Chandeleur Sound, St. Bernard Parish, Louisiana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woltz, D.
1980-01-01
Two pay sands occur on a subtle, east-west elongated rollover structure situated on the downthrown side of a down-to-the-coast growth fault. The 4800 ft sand has a total original hydrocarbon column of ca. 30 ft, and the 5200 ft, or BB, sand has a total original hydrocarbon column of ca. 55 ft and is the primary producer in the field. The down-to-the-coast fault, which trends east-west and lies on the northern side of the field, apparently has contributed to the trapping of hydrocarbons in the structure. The geometry of the BB sand suggests that it is a bar-type deposit. Apparently, hydrocarbons present in the pay sands have not been derived from the sediments directly above or below the reservoirs. The oil accumulated in the sand reservoirs probably migrated into the Block 25 structure from peripheral areas.
Townsend, D.R.; Baldwin, M.J.; Carroll, R.D.; Ellis, W.L.; Magner, J.E.
1982-01-01
The Hybla Gold experiment was conducted in the U12e.20 drifts of the E-tunnel complex beneath the surface of Rainier Mesa at the Nevada Test Site. Though the proximity of the Hybla Gold working point to the chimney of the Dining Car event was important to the experiment, the observable geologic effects from Dining Car on the Hybla Gold site were minor. Overburden above the working point is approximately 385 m (1,263 ft). The pre-Tertiary surface, probably quartzite, lies approximately 254 m (833 ft) below the working point. The drifts are mined in zeolitized ash-fall tuffs of tunnel bed 4, subunits K and J, all of Miocene age. The working point is in subunit 4J. Geologic structure in the region around the working point is not complex. The U12e.20 main drift follows the axis of a shallow depositional syncline. A northeast-dipping fault with displacement of approximately 3 m (10 ft) passes within 15.2 m (50 ft) of the Hybla Gold working point. Three faults of smaller displacement pass within 183-290 m (600-950 ft) of the working point, and are antithetic to the 3-m (10-ft) fault. Three exploratory holes were drilled to investigate the chimney of the nearby Dining Car event. Four horizontal holes were drilled during the construction of the U12e.20 drifts to investigate the geology of the Hybla Gold working point.
A systematic risk management approach employed on the CloudSat project
NASA Technical Reports Server (NTRS)
Basilio, R. R.; Plourde, K. S.; Lam, T.
2000-01-01
The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.
Fault Tree Analysis: A Bibliography
NASA Technical Reports Server (NTRS)
2000-01-01
Fault tree analysis is a top-down approach to the identification of process hazards. It is regarded as one of the best methods for systematically identifying and graphically displaying the many ways something can go wrong. This bibliography references 266 documents in the NASA STI Database that contain the major concepts 'fault tree analysis,' 'risk,' and 'probability theory' in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
Haberman, Amnon; Bakhshian, Ortal; Cerezo-Medina, Sergio; Paltiel, Judith; Adler, Chen; Ben-Ari, Giora; Mercado, Jose Angel; Pliego-Alfaro, Fernando; Lavee, Shimon; Samach, Alon
2017-08-01
Olive (Olea europaea L.) inflorescences, formed in lateral buds, flower in spring. However, there is some debate regarding time of flower induction and inflorescence initiation. Olive juvenility and seasonality of flowering were altered by overexpressing genes encoding flowering locus T (FT). OeFT1 and OeFT2 caused early flowering under short days when expressed in Arabidopsis. Expression of OeFT1/2 in olive leaves and OeFT2 in buds increased in winter, while initiation of inflorescences occurred in late winter. Trees exposed to an artificial warm winter expressed low levels of OeFT1/2 in leaves and did not flower. Olive flower induction thus seems to be mediated by an increase in FT levels in response to cold winters. Olive flowering is dependent on additional internal factors. It was severely reduced in trees that carried a heavy fruit load the previous season (harvested in November) and in trees without fruit to which cold temperatures were artificially applied in summer. Expression analysis suggested that these internal factors work either by reducing the increase in OeFT1/2 expression or through putative flowering repressors such as TFL1. With expected warmer winters, future consumption of olive oil, as part of a healthy Mediterranean diet, should benefit from a better understanding of these factors. © 2017 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarrack, A.G.
The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems, were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support systems failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the ''Facility Commitments'' section. The purpose of the ''Assumptions'' section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).
Graphical fault tree analysis for fatal falls in the construction industry.
Chi, Chia-Fen; Lin, Syuan-Zih; Dewi, Ratna Sari
2014-11-01
The current study applied a fault tree analysis to represent the causal relationships among events and causes that contributed to fatal falls in the construction industry. Four hundred and eleven work-related fatalities in the Taiwanese construction industry were analyzed in terms of age, gender, experience, falling site, falling height, company size, and the causes for each fatality. Given that most fatal accidents involve multiple events, the current study coded up to a maximum of three causes for each fall fatality. After the Boolean algebra and minimal cut set analyses, accident causes associated with each falling site can be presented as a fault tree to provide an overview of the basic causes, which could trigger fall fatalities in the construction industry. Graphical icons were designed for each falling site along with the associated accident causes to illustrate the fault tree in a graphical manner. A graphical fault tree can improve inter-disciplinary discussion of risk management and the communication of accident causation to first line supervisors. Copyright © 2014 Elsevier Ltd. All rights reserved.
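The superset-elimination step behind the Boolean algebra and minimal cut set analysis described above can be sketched as a short routine; the cause names below are hypothetical illustrations, not taken from the study's actual coding scheme.

```python
def minimal_cut_sets(cut_sets):
    """Reduce a list of cut sets to the minimal ones: by the Boolean
    absorption law (A + A·B = A), any cut set that is a proper superset
    of another cut set can be discarded."""
    sets = [frozenset(cs) for cs in cut_sets]
    minimal, seen = [], set()
    for cs in sets:
        if any(other < cs for other in sets):   # a smaller cut set absorbs it
            continue
        if cs not in seen:                      # drop exact duplicates too
            seen.add(cs)
            minimal.append(set(cs))
    return minimal

# Hypothetical coded causes for one falling site:
causes = [
    {"no_guardrail", "no_harness"},
    {"no_guardrail", "no_harness", "poor_lighting"},  # absorbed by the first set
    {"unstable_planks"},
]
result = minimal_cut_sets(causes)  # two minimal cut sets remain
```

Each surviving set is a smallest combination of causes sufficient to trigger the fatality, which is exactly what the graphical fault tree displays per falling site.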
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, S.Y.; Watkins, J.S.
Mapping of Miocene stratigraphy and structure of the Sabine Pass, West Cameron, and East Cameron areas of the western Louisiana outer continental shelf - based on over 1300 mi of seismic data on a 4-mi grid, paleotops from 60 wells, and logs from 35 wells - resulted in time-structure and isochron maps at six intervals from the upper Pliocene to lower Miocene. The most pronounced structural features are the fault systems, which trend east-northeast to east along the Miocene stratigraphic trend. Isolated normal faults with small displacements characterize the inner inner shelf, whereas interconnected faults with greater displacements characterize the outer inner shelf. The inner inner shelf faults exhibit little growth, but expansion across the interconnected outer inner shelf fault ranges up to 1 sec two-way traveltime. The interconnected faults belong to two structurally independent fault families. The innermost shelf faults appear to root in the sediment column. A third set of faults located in the Sabine Pass area trends north-south. This fault set is thought to be related to basement movement and/or basement structure. Very little salt is evident in the area. A single diapir is located in West Cameron Block 110 and vicinity. There is little evidence of deep salt. Overall sediment thickness probably exceeds 20,000 ft, with the middle Miocene accounting for 8000 ft.
Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian
2014-01-01
A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis.
Interim reliability evaluation program, Browns Ferry fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, M.E.
1981-01-01
An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation Program, simplifying the recording and displaying of events yet maintaining the system of identifying faults. The level of investigation is not changed, and the analytical thought process inherent in the conventional method is not compromised. But the abbreviated method takes less time, and the fault modes are much more visible.
Pantea, Michael P.; Cole, James C.
2004-01-01
This report describes a digital, three-dimensional faulted hydrostratigraphic model constructed to represent the geologic framework of the Edwards aquifer system in the area of San Antonio, northern Bexar County, Texas. The model is based on mapped geologic relationships that reflect the complex structures of the Balcones fault zone, detailed lithologic descriptions and interpretations of about 40 principal wells (and qualified data from numerous other wells), and a conceptual model of the gross geometry of the Edwards Group units derived from prior interpretations of depositional environments and paleogeography. The digital model depicts the complicated intersections of numerous major and minor faults in the subsurface, as well as their individual and collective impacts on the continuity of the aquifer-forming units of the Edwards Group and the Georgetown Formation. The model allows for detailed examination of the extent of fault dislocation from place to place, and thus the extent to which the effective cross-sectional area of the aquifer is reduced by faulting. The model also depicts the internal hydrostratigraphic subdivisions of the Edwards aquifer, consisting of three major and eight subsidiary hydrogeologic units. This geologic framework model is useful for visualizing the geologic structures within the Balcones fault zone and the interactions of en-echelon fault strands and flexed connecting fault-relay ramps. The model also aids in visualizing the lateral connections between hydrostratigraphic units of relatively high and low permeability across the fault strands. Introduction The Edwards aquifer is the principal source of water for municipal, agricultural, industrial, and military uses by nearly 1.5 million inhabitants of the greater San Antonio, Texas, region (Hovorka and others, 1996; Sharp and Banner, 1997). 
Discharges from the Edwards aquifer also support local recreation and tourism industries at Barton, Comal, and San Marcos Springs located northeast of San Antonio (Barker and others, 1994), as well as base flow for agricultural applications farther downstream. Average annual discharge from large springs (Comal, San Marcos, Hueco, and others) from the Edwards aquifer was about 365,000 acre-ft from 1934 to 1998, with sizeable fluctuations related to annual variations in rainfall. Withdrawals through pumping have increased steadily from about 250,000 acre-ft during the 1960s to over 400,000 acre-ft in the 1990s in response to population growth, especially in the San Antonio metropolitan area (Slattery and Brown, 1999). Average annual recharge to the system (determined through stream gaging) has also varied considerably with annual rainfall fluctuations, but has been about 635,000 acre-ft over the last several decades.
Object-Oriented Algorithm For Evaluation Of Fault Trees
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1992-01-01
Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
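The call-reduction idea can be sketched as a cache keyed by object identity, so a subtree referenced by several gates is solved only once; the `Node` class and its fields are illustrative assumptions, not the paper's actual object model, and treating gate inputs as independent is a simplification rather than the exact repeated-event handling of the original algorithm.

```python
class Node:
    """Fault-tree node: a basic event with failure probability `prob`,
    or an "AND"/"OR" gate over `children`."""
    def __init__(self, name, kind="basic", prob=0.0, children=()):
        self.name, self.kind, self.prob = name, kind, prob
        self.children = list(children)

def evaluate(node, cache=None):
    """Return the node's failure probability, caching results by object
    identity so each repeated event or shared gate is evaluated once."""
    if cache is None:
        cache = {}
    if id(node) in cache:
        return cache[id(node)]
    if node.kind == "basic":
        p = node.prob
    elif node.kind == "AND":
        p = 1.0
        for child in node.children:
            p *= evaluate(child, cache)
    else:  # "OR": complement of all inputs surviving
        q = 1.0
        for child in node.children:
            q *= 1.0 - evaluate(child, cache)
        p = 1.0 - q
    cache[id(node)] = p
    return p

# Hypothetical tree with a shared (repeated) basic event:
pump = Node("pump", prob=0.05)
train = Node("train", "AND", children=[pump, Node("valve", prob=0.1)])
top = Node("top", "OR", children=[train, pump])
p_top = evaluate(top)  # "pump" is evaluated once; its cached result is reused
```

Note that multiplying the probabilities of the two OR inputs is only an approximation here because they share `pump`; exact treatment of repeated events is precisely what the published algorithm provides and this sketch does not.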
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating time to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
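A semi-Markov time-to-failure simulation of the kind described can be sketched as follows; the state names, transition probabilities, and sojourn distributions are hypothetical and this is not the study's four-element model.

```python
import random

def time_to_failure(transitions, sojourn, start="ok", absorbing="failed", seed=None):
    """Simulate one sample path of a semi-Markov process: hold in each
    state for a time drawn from that state's sojourn distribution, then
    jump according to the embedded transition probabilities, until the
    absorbing (failed) state is reached."""
    rng = random.Random(seed)
    state, t = start, 0.0
    while state != absorbing:
        t += sojourn[state](rng)
        r = rng.random()
        for target, p in transitions[state]:
            r -= p
            if r < 0.0:
                state = target
                break
        else:                       # guard against rounding in the p's
            state = transitions[state][-1][0]
    return t

# Hypothetical three-state model: ok -> degraded -> failed, or direct failure
transitions = {"ok": [("degraded", 0.7), ("failed", 0.3)],
               "degraded": [("failed", 1.0)]}
sojourn = {"ok": lambda rng: rng.expovariate(1.0),
           "degraded": lambda rng: rng.expovariate(2.0)}
sample = time_to_failure(transitions, sojourn, seed=42)
```

Because the sojourn draw can depend on the current state (and, with a richer `sojourn` signature, on history), such a model can express the sojourn-time dependencies the abstract refers to, which a memoryless Markov chain cannot.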
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require intensive computer calculations. A computer program has been developed to implement the PFTA.
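The importance-sampling step can be illustrated with a fixed (non-adaptive) biasing distribution; the two-component system, its probabilities, and the `q_sample` value are assumptions for this sketch, not the adaptive scheme of the PFTA method itself.

```python
import random

def is_failure_prob(p_fail, cut_sets, q_sample=0.5, n=100_000, seed=0):
    """Estimate system failure probability by importance sampling: draw
    each basic event from a biased Bernoulli(q_sample) and reweight each
    sample by its likelihood ratio, so rare joint failures appear often
    enough to be counted."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        state, weight = {}, 1.0
        for event, p in p_fail.items():
            failed = rng.random() < q_sample
            state[event] = failed
            weight *= (p / q_sample) if failed else ((1.0 - p) / (1.0 - q_sample))
        if any(all(state[e] for e in cs) for cs in cut_sets):
            total += weight
    return total / n

# Hypothetical system failing only when both A and B fail (true prob 1e-6):
estimate = is_failure_prob({"A": 1e-3, "B": 1e-3}, [{"A", "B"}])
```

Direct Monte Carlo with the same sample budget would see the joint failure only about once in ten million draws; the biased sampler sees it in a quarter of all draws and corrects with the weight, which is why variance drops so sharply. An adaptive scheme would additionally tune `q_sample` per event during sampling.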
Using Fault Trees to Advance Understanding of Diagnostic Errors.
Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep
2017-11-01
Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
Locating hardware faults in a data communications network of a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-01-12
Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running the same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
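The isolation rule reads directly as a predicate: if the suite fails on the parent test tree but passes on every child test tree, the fault cannot lie deeper and must be on a link owned by the parent. `run_suite` and the node names below are hypothetical stand-ins for the real test-suite runner.

```python
def parent_has_defective_link(run_suite, parent_tree, child_trees):
    """Decision rule sketched from the method above: flag the parent
    compute node if the test suite fails on the parent test tree but
    succeeds on every child test tree."""
    if run_suite(parent_tree):
        return False  # parent tree passes: no fault visible at this level
    return all(run_suite(child) for child in child_trees)

# Hypothetical stub: the suite fails only on the tree rooted at "node7"
suite = lambda tree: tree != "node7"
flagged = parent_has_defective_link(suite, "node7", ["node14", "node15"])  # True
```

Applied while walking the tree node by node, this narrows a network fault to a single parent-to-child link without probing individual wires.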
Khan, F I; Abbasi, S A
2000-07-10
Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has been a powerful technique used traditionally in identifying hazards in nuclear installations and power industries. As the systematic articulation of the fault tree is associated with assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named AS-II), which makes the application of FTA simpler, quicker, and cheaper, thus opening up the possibility of its wider use in risk assessment in chemical process industries. Based on this methodology, we have developed a computer-automated tool. The details are presented in this paper.
Zhang, Huanling; Harry, David E; Ma, Cathleen; Yuceer, Cetin; Hsu, Chuan-Yu; Vikram, Vikas; Shevchenko, Olga; Etherington, Elizabeth; Strauss, Steven H
2010-06-01
Expression of FLOWERING LOCUS T (FT) and its homologues has been shown to accelerate the onset of flowering in a number of plant species, including poplar (Populus spp.). The application of FT should be of particular use in forest trees, as it could greatly accelerate and enable new kinds of breeding and research. Recent evidence showing the extent to which FT is effective in promoting flowering in trees is discussed, and its effectiveness in poplar is reported. Results using one FT gene from Arabidopsis and two from poplar, all driven by a heat-inducible promoter, transformed into two poplar genotypes are also described. Substantial variation in flowering response was observed depending on the FT gene and genetic background. Heat-induced plants shorter than 30 cm failed to flower as well as taller plants. Plants exposed to daily heat treatments lasting 3 weeks tended to produce fewer abnormal flowers than those in heat treatments of shorter durations; increasing the inductive temperature from 37 degrees C to 40 degrees C produced similar benefits. Using optimal induction conditions, approximately 90% of transgenic plants could be induced to flower. When induced FT rootstocks were grafted with scions that lacked FT, flowering was only observed in rootstocks. The results suggest that a considerable amount of species- or genotype-specific adaptation will be required to develop FT into a reliable means for shortening the generation cycle for breeding in poplar.
Ziv, Dafna; Zviran, Tali; Zezak, Oshrat; Samach, Alon; Irihimovitch, Vered
2014-01-01
In many perennials, heavy fruit load on a shoot decreases the ability of the plant to undergo floral induction in the following spring, resulting in a pattern of crop production known as alternate bearing. Here, we studied the effects of fruit load on floral determination in ‘Hass' avocado (Persea americana). De-fruiting experiments initially confirmed the negative effects of fruit load on return to flowering. Next, we isolated a FLOWERING LOCUS T-like gene, PaFT, hypothesized to act as a phloem-mobile florigen signal and examined its expression profile in shoot tissues of on (fully loaded) and off (fruit-lacking) trees. Expression analyses revealed a strong peak in PaFT transcript levels in leaves of off trees from the end of October through November, followed by a return to starting levels. Moreover and concomitant with inflorescence development, only off buds displayed up-regulation of the floral identity transcripts PaAP1 and PaLFY, with significant variation being detected from October and November, respectively. Furthermore, a parallel microscopic study of off apical buds revealed the presence of secondary inflorescence axis structures that only appeared towards the end of November. Finally, ectopic expression of PaFT in Arabidopsis resulted in early flowering transition. Together, our data suggests a link between increased PaFT expression observed during late autumn and avocado flower induction. Furthermore, our results also imply that, as in the case of other crop trees, fruit-load might affect flowering by repressing the expression of PaFT in the leaves. Possible mechanism(s) by which fruit crop might repress PaFT expression, are discussed. PMID:25330324
Fault Model Development for Fault Tolerant VLSI Design
1988-05-01
BRIDGING FAULTS: A bridging fault in a digital circuit connects two or more conducting paths of the circuit.
Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
Fault diagnosis of power transformer based on fault-tree analysis (FTA)
NASA Astrophysics Data System (ADS)
Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu
2017-05-01
Power transformers are important equipment in power plants and substations, serving as a key hub linking power distribution and transmission in a power system. Their performance directly affects the quality, reliability, and stability of the power system. This paper classifies power transformer faults into five types, divides transformer faults into three stages along the time dimension, and uses DGA routine analysis together with infrared diagnostic criteria to determine the transformer's running state. Finally, according to the needs of power transformer fault diagnosis, a power transformer fault tree is constructed by stepwise refinement from the general to the specific.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection between nodes, including loops or cycles. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and to identify potential single-point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node.
The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860) available from COSMIC are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc. DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
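The recursive top-down combination of child cut sets described for the fault tree code can be sketched as follows; this is a simplified illustration, not the CUTSETS implementation, and the maximum-cut-set-size cutoff is omitted.

```python
def cut_sets(node, tree):
    """Recursive top-down solve in the style described above: basic
    events yield singleton cut sets, OR gates union their children's
    cut sets, AND gates take cross-products, and supersets are dropped
    so only minimal cut sets survive."""
    kind, children = tree.get(node, ("basic", []))
    if kind == "basic":
        return [frozenset([node])]
    child_sets = [cut_sets(child, tree) for child in children]
    if kind == "OR":
        combined = [cs for sets in child_sets for cs in sets]
    else:  # "AND"
        combined = [frozenset()]
        for sets in child_sets:
            combined = [a | b for a in combined for b in sets]
    return [cs for cs in set(combined)
            if not any(other < cs for other in combined)]

# Hypothetical model: TOP fails if A and B fail together, or if C fails
tree = {"TOP": ("OR", ["G1", "C"]), "G1": ("AND", ["A", "B"])}
top_sets = cut_sets("TOP", tree)  # minimal cut sets {A, B} and {C}
```

A singleton cut set such as `{C}` is exactly the single-point failure the abstract says these codes help identify.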
Significant role of structural fractures in Ren-Qiu buried-block oil field, eastern China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fei, Q.; Xie-Pei, W.
1983-03-01
Ren-Qiu oil field is in a buried block of Sinian (upper Proterozoic) rocks located in the Ji-zhong depression of the western Bohai Bay basin in eastern China. The main reservoir consists of Sinian dolomite rocks. It is a fault block with a large growth fault on the west side which trends north-northeast with throws of up to 1 km (0.6 mi) or more. The source rocks for the oil are Paleogene age and overlie the Sinian dolomite rocks. The structural fractures are the main factor forming the reservoir of the buried-block oil field. Three structural lines, trending northeast, north-northeast, and northwest, form the regional netted fracture system. The north-northeast growth fault controlled the structural development of the buried block. The block was raised and eroded before the Tertiary sediments were deposited. In the Eocene Epoch, the Ji-zhong depression subsided, but the deposition, faulting, and related uplift of the block happened synchronously as the block was gradually submerged. At the same time, several horizontal and vertical karst zones were formed by the karst water along the netted structural fractures. The Eocene oil source rocks lapped onto the block and so the buried block, with many developed karst fractures, was surrounded by a great thickness of source rocks. As the growth fault developed, the height of the block was increased from 400 m (1300 ft) before the Oligocene to 1300 m (4250 ft) after. As the petroleum was generated, it migrated immediately into the karst fractures of the buried block along the growth fault. The karst-fractured block reservoir has an 800-m (2600-ft) high oil-bearing closure and good connections developed between the karst fractures.
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis, and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the best course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
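The event-ranking idea behind codes such as IMPORTANCE can be sketched with the Birnbaum (probability-importance) measure, which is one standard sensitivity measure: the difference in top-event probability when an event is forced to fail versus forced to succeed. The toy tree, event names, and probabilities below are invented for illustration:

```python
# Birnbaum importance of event i: dP(top)/dp_i = P(top | p_i = 1) - P(top | p_i = 0).
def p_top(p):
    # Toy tree: TOP = pump OR (valve1 AND valve2), independent basic events.
    p_valves = p["valve1"] * p["valve2"]
    return 1.0 - (1.0 - p["pump"]) * (1.0 - p_valves)

p = {"pump": 0.01, "valve1": 0.1, "valve2": 0.1}
ranking = sorted(
    ((name, p_top({**p, name: 1.0}) - p_top({**p, name: 0.0})) for name in p),
    key=lambda kv: kv[1], reverse=True)
for name, importance in ranking:
    print(f"{name}: Birnbaum importance = {importance:.4f}")
```

Here the single-point pump failure dominates the ranking, which is the kind of "weak link" a sensitivity-based ranking is meant to surface.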
Fire safety in transit systems fault tree analysis
DOT National Transportation Integrated Search
1981-09-01
Fire safety countermeasures applicable to transit vehicles are identified and evaluated. This document contains fault trees which illustrate the sequences of events which may lead to a transit-fire related casualty. A description of the basis for the...
Design and Experimental Validation for Direct-Drive Fault-Tolerant Permanent-Magnet Vernier Machines
Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian
2014-01-01
A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines achieve high torque density by introducing flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristics of PMV machines and provides a design method that not only meets the fault-tolerance requirements but also retains high torque density. The operating principle of the proposed machine is analyzed. The design process and optimization are presented in detail, including the combination of slots and poles, the winding distribution, and the dimensions of the PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis. PMID:25045729
A diagnosis system using object-oriented fault tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common Lisp using Flavors.
Reset Tree-Based Optical Fault Detection
Lee, Dong-Geon; Choi, Dooho; Seo, Jungtaek; Kim, Howon
2013-01-01
In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit's reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool. PMID:23698267
Fault tree applications within the safety program of Idaho Nuclear Corporation
NASA Technical Reports Server (NTRS)
Vesely, W. E.
1971-01-01
Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure or accident causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.
Expectable Earthquakes and their ground motions in the Van Norman Reservoirs Area
Wesson, R.L.; Page, R.A.; Boore, D.M.; Yerkes, R.F.
1974-01-01
The upper and lower Van Norman dams, in northwesternmost San Fernando Valley about 20 mi (32 km) northwest of downtown Los Angeles, were severely damaged during the 1971 San Fernando earthquake. An investigation of the geologic-seismologic setting of the Van Norman area indicates that an earthquake of at least M 7.7 may be expected in the Van Norman area. The expectable transitory effects in the Van Norman area of such an earthquake are as follows: peak horizontal acceleration of at least 1.15 g, peak velocity of displacement of 4.43 ft/sec (135 cm/sec), peak displacement of 2.3 ft (70 cm), and duration of shaking at accelerations greater than 0.05 g, 40 sec. A great earthquake (M 8+) on the San Andreas fault, 25 mi distant, also is expectable. Transitory effects in the Van Norman area from such an earthquake are estimated as follows: peak horizontal acceleration of 0.5 g, peak velocity of 1.97 ft/sec (60 cm/sec), displacement of 1.31 ft (40 cm), and duration of shaking at accelerations greater than 0.05 g, 80 sec. The permanent effects of the expectable local earthquake could include simultaneous fault movement at the lower damsite, the upper damsite, and the site proposed for a replacement dam halfway between the upper and lower dams. The maximum differential displacements due to such movements are estimated at 16.4 ft (5 m) at the lower damsite and about 9.6 ft (2.93 m) at the upper and proposed damsites. The 1971 San Fernando earthquake (M 6?) was accompanied by the most intense ground motions ever recorded instrumentally for a natural earthquake. At the lower Van Norman dam, horizontal accelerations exceeded 0.6 g, and shaking greater than 0.25 g lasted for about 13 sec; at Pacoima dam, 6 mi (10 km) northeast of the lower dam, high-frequency peak horizontal accelerations of 1.25 g were recorded in two directions, and shaking greater than 0.25 g lasted for about 7 sec.
Permanent effects of the earthquake include slope failures in the embankments of the upper and lower Van Norman dams, rupturing of the ground surface by faulting along parts of the zone of old faults that extends easterly through the reservoir area and across the northern part of the valley, folding or arching of the ground surface, and differential horizontal displacement of the terrane north and south of the fault zone. Although a zone of old faults extends through the reservoir area, the 1971 surface ruptures apparently did not; however, arching and horizontal displacements caused small relative displacements of the abutment areas of each of the three damsites. The 1971 arching coincided with preexisting topographic highs, and the surface ruptures coincided with eroded fault scarps and a buried ground-water impediment formed by pre-1971 faulting in young valley fill. This coincidence with evidence of past deformation indicates that the 1971 deformations were the result of a continuing geologic process that is expected to produce similar deformations during future events. The 1971 San Fernando earthquake probably was not the largest that has occurred in this area during the last approximately 200 years, as indicated by a buried fault like scarp about 200 years old that is higher than, and aligned with, 1971 fault scarps. In addition, the San Fernando zone of 1971 ruptures is part of a regional tectonic system that includes the San Andreas and associated faults; one of these, the White Wolf fault north of the San Andreas, is symmetrical in structural attitude with the San Fernando zone and ruptured the ground surface during the 1952 Kern County earthquake (M 7.7). Other large earthquakes associated with surface rupturing on faults of this system include the 1857 Fort Tejon earthquake (M 8+) and possibly the 1852 Big Pine earthquake. 
Several other historic earthquakes in this general area are not known to be associated with surface ruptures, but were large enough to cause damage in the northern San Fernando Valley. The Van Norman rese
Geologic environment of the Van Norman Reservoirs area
Yerkes, R.F.; Bonilla, M.G.; Youd, T.L.; Sims, J.D.
1974-01-01
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
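The Monte Carlo and Wilks comparisons described in the record can be sketched on a toy tree. Everything here is illustrative, not the authors' model: the tree (TOP = A OR (B AND C)), the lognormal parameters, and the capping at 1.0 are all invented. The Wilks step uses the standard one-sided result that with 59 samples the sample maximum is a 95%-confidence upper bound on the 95th percentile, because 0.95**59 < 0.05:

```python
import math
import random
import statistics

random.seed(0)

def sample_top(n):
    """Draw n samples of the top-event probability for TOP = A OR (B AND C)."""
    tops = []
    for _ in range(n):
        # Basic-event probabilities with lognormal uncertainty (invented
        # medians and log-sd), capped at 1.0 so they remain probabilities.
        a = min(1.0, random.lognormvariate(math.log(1e-3), 0.5))
        b = min(1.0, random.lognormvariate(math.log(1e-2), 0.5))
        c = min(1.0, random.lognormvariate(math.log(5e-2), 0.5))
        tops.append(a + b * c - a * b * c)  # P(A or BC), independent events
    return tops

mc = sample_top(10_000)
p95_mc = statistics.quantiles(mc, n=20)[-1]   # 95th-percentile estimate

# Wilks 95/95 one-sided bound from only 59 samples.
wilks_bound = max(sample_top(59))
print(f"MC 95th pct ~ {p95_mc:.2e}; Wilks 95/95 bound ~ {wilks_bound:.2e}")
```

The contrast in sample counts (10,000 versus 59) is the computational saving the abstract attributes to the Wilks method.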
Molecular characterization of FLOWERING LOCUS T-like genes of apple (Malus x domestica Borkh.).
Kotoda, Nobuhiro; Hayashi, Hidehiro; Suzuki, Motoko; Igarashi, Megumi; Hatsuyama, Yoshimichi; Kidou, Shin-Ichiro; Igasaki, Tomohiro; Nishiguchi, Mitsuru; Yano, Kanako; Shimizu, Tokurou; Takahashi, Sae; Iwanami, Hiroshi; Moriya, Shigeki; Abe, Kazuyuki
2010-04-01
The two FLOWERING LOCUS T (FT)-like genes of apple (Malus x domestica Borkh.), MdFT1 and MdFT2, have been isolated and characterized. MdFT1 and MdFT2 were mapped, respectively, on distinct linkage groups (LGs) with partial homoeology, LG 12 and LG 4. The expression patterns of MdFT1 and MdFT2 differed in that MdFT1 was expressed mainly in apical buds of fruit-bearing shoots in the adult phase, with little expression in juvenile tissues, whereas MdFT2 was expressed mainly in reproductive organs, including flower buds and young fruit. On the other hand, both genes had the potential to induce early flowering, since transgenic Arabidopsis that ectopically expressed MdFT1 or MdFT2 flowered earlier than wild-type plants. Furthermore, overexpression of MdFT1 conferred precocious flowering in apple, with altered expression of other endogenous genes, such as MdMADS12. These results suggest that MdFT1 could function to promote flowering by altering the expression of those genes and that, at least, other genes may also play an important role in the regulation of flowering in apple. The long juvenile period of fruit trees prevents early cropping and efficient breeding. Our findings provide useful information for unveiling the molecular mechanism of flowering and for developing methods to shorten the juvenile period in various fruit trees, including apple.
Fault Tree Analysis as a Planning and Management Tool: A Case Study
ERIC Educational Resources Information Center
Witkin, Belle Ruth
1977-01-01
Fault Tree Analysis is an operations research technique used to analyse the most probable modes of failure in a system so that the system can be redesigned or monitored more closely to increase its likelihood of success. (Author)
Stratigraphy and Tectonic History of the Tucson Basin, Pima County, Arizona, Based on the Exxon State (32)-1 Well
Houser, Brenda B.; Peters, Lisa; Esser, Richard P.; Gettings, Mark E.
2004-01-01
The Tucson Basin is a relatively large late Cenozoic extensional basin developed in the upper plate of the Catalina detachment fault in the southern Basin and Range Province, southeastern Arizona. In 1972, Exxon Company, U.S.A., drilled an exploration well (Exxon State (32)-1) near the center of the Tucson Basin that penetrated 3,658 m (12,001 ft) of sedimentary and volcanic rocks above granitoid basement. Detailed study of cuttings and geophysical logs of the Exxon State well has led to revision of the previously reported subsurface stratigraphy for the basin and provided new insight into its depositional and tectonic history. There is evidence that detachment faulting and uplift of the adjacent Catalina core complex on the north have affected the subsurface geometry of the basin. The gravity anomaly map of the Tucson Basin indicates that the locations of subbasins along the north-trending axis of the main basin coincide with the intersection of this axis with west-southwest projections of synforms in the adjacent core complex. In other words, the subbasins overlie synforms and the ridges between subbasins overlie antiforms. The Exxon State well was drilled near the center of one of the subbasins. The Exxon well was drilled to a total depth of 3,827 m (12,556 ft) and penetrated the following stratigraphic section: Pleistocene(?) to middle(?) Miocene upper basin-fill sedimentary rocks (0-908 m [0-2,980 ft]); lower basin-fill sedimentary rocks (908-1,880 m [2,980-6,170 ft]); lower Miocene and upper Oligocene Pantano Formation (1,880-2,516 m [6,170-8,256 ft]); upper Oligocene to Paleocene(?) volcanic and sedimentary rocks (2,516-3,056 m [8,256-10,026 ft]); Lower Cretaceous to Upper Jurassic Bisbee Group (3,056-3,658 m [10,026-12,001 ft]); and pre-Late Jurassic granitoid plutonic rock (3,658-3,827 m [12,001-12,556 ft]).
The 1,880 m (6,170 ft) of basin-fill sedimentary rocks consist of alluvial-fan, alluvial-plain, and playa facies. The uppermost unit, a 341-m-thick (1,120-ft) lower Pleistocene and upper Pliocene alluvial-fan deposit (named the Cienega Creek fan in this study), is an important aquifer in the Tucson Basin. The facies change at the base of the alluvial fan may prove to be recognizable in well data throughout much of the basin. The well data show that a sharp boundary at 908 m (2,980 ft) separates relatively unconsolidated and undeformed upper basin fill from denser, significantly faulted lower basin fill, indicating that there were two stages of basin filling in the Tucson Basin, as in other basins of the region. The two stages apparently occurred during times of differing tectonic style in the region. In the Tucson area the Pantano Formation, which contains an andesite flow dated at about 25 Ma, fills a syntectonic basin in the hanging wall of the Catalina detachment fault, reflecting middle Tertiary extension on the fault. The formation in the well is 636 m (2,086 ft) thick and consists of alluvial-fan, playa, and lacustrine sedimentary facies, a lava flow, and rock-avalanche deposits. Analysis of the geophysical logs indicates that a K-Ar date of 23.4 Ma reported previously for the Pantano interval of the well was obtained on selected cuttings collected from a rock-avalanche deposit near the base of the unit and, thus, does not date the Pantano Formation. The middle Tertiary volcanic and sedimentary rocks have an aggregate thickness of 540 m (1,770 ft). We obtained a new 40Ar/39Ar age of 26.91±0.18 Ma on biotite sampled at a depth of 2,584-2,609 m (8,478-8,560 ft) from a 169-m-thick (554-ft) silicic tuff in this interval.
The volcanic rocks probably correlate with other middle Tertiary volcanic rocks of the area, and the sedimentary rocks may correlate with the Cloudburst and Mineta Formations exposed on the flanks of the San Pedro Basin to the northeast. The Bisbee Group in the Exxon well is 602 m (1,975 f
NASA Astrophysics Data System (ADS)
Rodak, C. M.; McHugh, R.; Wei, X.
2016-12-01
The development and combination of horizontal drilling and hydraulic fracturing has unlocked unconventional hydrocarbon reserves around the globe. These advances have triggered a number of concerns regarding aquifer contamination and over-exploitation, leading to scientific studies investigating potential risks posed by directional hydraulic fracturing activities. These studies, balanced with potential economic benefits of energy production, are a crucial source of information for communities considering the development of unconventional reservoirs. However, probabilistic quantification of the overall risk posed by hydraulic fracturing at the system level is rare. Here we present the concept of fault tree analysis to determine the overall probability of groundwater contamination or over-exploitation, broadly referred to as the probability of failure. The potential utility of fault tree analysis for the quantification and communication of risks is approached with a general application. However, the fault tree design is robust and can handle various combinations of region-specific data pertaining to relevant spatial scales, geological conditions, and industry practices where available. All available data are grouped into quantity- and quality-based impacts and subdivided based on the stage of the hydraulic fracturing process in which the data are relevant, as described by the USEPA. Each stage is broken down into the unique basic events required for failure; for example, to quantify the risk of an on-site spill, we must consider the likelihood, magnitude, composition, and subsurface transport of the spill. The structure of the fault tree described above can be used to render a highly complex system of variables into a straightforward equation for risk calculation based on Boolean logic. This project shows the utility of fault tree analysis for the visual communication of the potential risks of hydraulic fracturing activities on groundwater resources.
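The Boolean risk-calculation idea described in the record reduces to propagating basic-event probabilities through AND and OR gates. The sketch below assumes independent basic events; the gate structure, event names, and probabilities are invented for illustration and are not from the study:

```python
# Propagate independent basic-event probabilities through a fault tree.
# ("AND", children) / ("OR", children); leaves are basic-event names.
def top_probability(node, p):
    if isinstance(node, str):
        return p[node]
    op, children = node
    probs = [top_probability(c, p) for c in children]
    if op == "AND":                        # all children must fail
        out = 1.0
        for q in probs:
            out *= q
        return out
    out = 1.0                              # OR: complement of "no child fails"
    for q in probs:
        out *= 1.0 - q
    return 1.0 - out

# Failure = (spill occurs AND spill reaches aquifer) OR well-casing leak.
tree = ("OR", [("AND", ["spill", "transport"]), "casing_leak"])
p = {"spill": 0.05, "transport": 0.2, "casing_leak": 0.01}
print(top_probability(tree, p))            # 1 - (1 - 0.01)*(1 - 0.01) ≈ 0.0199
```

Each OR gate uses the complement product rather than a simple sum, which keeps the result a valid probability even when branch probabilities are large.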
Structural and Lithologic Characterization of the SAFOD Pilot Hole and Phase One Main Hole
NASA Astrophysics Data System (ADS)
Barton, D. C.; Bradbury, K.; Solum, J. G.; Evans, J. P.
2005-12-01
Petrological and microstructural analyses of drill cuttings were conducted for the San Andreas Fault Observatory at Depth (SAFOD) Pilot Hole and Main Hole projects. Grain mounts were produced at ~30 m (100 ft) intervals from drill cuttings collected from the Pilot Hole to a depth of 2164 m (7100 ft) and from Phase 1 of the SAFOD main hole to a depth of 3067 m (10062 ft). Thin-section grain mount analysis included identification of mineral composition, alteration, and deformation within individual grains, measured at 0.5 mm increments on an equally spaced, 300-point grid pattern. Lithologic features in the Quaternary/Tertiary deposits from 30 - 640 m (100-2100 ft) in the Pilot Hole, and 670 - 792 m (2200 - 2600 ft) in the Phase 1 main hole, include fine-grained, thinly bedded sediments with clasts of fine-grained volcanic groundmass. Preliminary grain mount analysis from 1920 - 3067 m (6300 - 10062 ft) in the Phase 1 main hole indicates a sedimentary sequence consisting of fine-grained lithic fragments of very fine-grained shale. Deformation mechanisms observed within the cuttings of granitic rocks from 914 - 1860 m (3000 - 6100 ft) include intracrystalline plasticity and cataclasis. Intracrystalline plastic deformation within quartz and feldspar grains is indicated by undulatory extinction, ribbon grains, chessboard patterns, and deformation twins and lamellae. Cataclastic deformation is characterized by intra- and intergranular microfractures, angular grains, gouge zones, iron-oxide banding, and comminution. Mineral and cataclasite abundances were plotted as a function of weight percent vs. depth. Plots of quartz and feldspar abundances are also correlated with XRD weight percent data from 1160 - 1890 m (3800 - 6200 ft) in the granitic and granodioritic sequences of the Phase 1 main hole. Regions of both drill holes with cataclasite abundances ranging from 20 - 30 wt% are interpreted as shear zones.
Shear zones identified in this study from 1150 - 1420 m (3773 - 4659 ft) in the Pilot Hole occur in the same location as shear zones recognized by Boness and Zoback (2004) using borehole geophysical data. These shear zones may be correlated with shear zones identified in the Phase 1 main hole from 1615 - 2012 m (5300 - 6600 ft). If so, this can be explained by steeply dipping subsidiary fault zones, likely associated with the San Andreas Fault system.
Stand Parameters of a 27-Year-Old Water Oak Plantation on Old Field Loessial Soils
Roger M. Krinard; Robert L. Johnson
1988-01-01
At age 27, water oak (Quercus nigra L.) plantings on Macon Ridge old field loessial soil near Winnsboro, Louisiana, had per-acre stand values as follows: number of trees, 356; average d.b.h., 6.6 inches; basal area, 86 ft2; total volume from the stump to the tip (of trees with d.b.h. 25.0 in), 2,017 ft3...
Fault Tree Analysis: An Emerging Methodology for Instructional Science.
ERIC Educational Resources Information Center
Wood, R. Kent; And Others
1979-01-01
Describes Fault Tree Analysis, a tool for systems analysis which attempts to identify possible modes of failure in systems to increase the probability of success. The article defines the technique and presents the steps of FTA construction, focusing on its application to education. (RAO)
Common Faults and Their Prioritization in Small Commercial Buildings: February 2017 - December 2017
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, Stephen M; Kim, Janghyun; Cai, Jie
To support an ongoing project at NREL titled 'An Open, Cloud-Based Platform for Whole-Building Fault Detection and Diagnostics' (work breakdown structure number 3.2.6.18 funded by the Department of Energy Building Technologies Office), this report documents faults that are commonly found in small commercial buildings (with a floor area of 10,000 ft2 or less) based on a literature review and discussions with building commissioning experts. It also provides a list of prioritized faults based on an estimation of the prevalence, energy impact, and financial impact of each fault.
Program listing for fault tree analysis of JPL technical report 32-1542
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
The computer program listing for the MAIN program and the subroutines unique to the fault tree analysis is described. Some subroutines are used for analyzing the reliability block diagram. The program is written in FORTRAN 5 and runs on a UNIVAC 1108.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question.
The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values. None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The RRR for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event. 
Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The risk increase ratio (RIR) for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prescott, M.P.
1990-09-01
Significant new gas reserves have recently been discovered in the Marginulina texana sands along the Oligocene trend at the Maurice field. Detailed subsurface maps and seismic data are presented to exhibit the extent and nature of this local buried structure and to demonstrate future opportunities along the Oligocene trend. Since discovery in 1988, the MARG. TEX. RC has extended the Maurice field one-half mile south and has encountered over 170 ft of Marginulina texana pay. Estimated reserves are on the order of 160 BCFG with limits of the reservoir still unknown. This reserve addition would increase the estimated ultimate of the Maurice field by over 70% from 220 BCFG to 380 BCFG. Cross sections across the field depict the new reservoir trap as a buried upthrown fault closure with an anticipated gas column of 700 ft. Interpretation of the origin of this local structure is that of a buried rotated fault block on an overall larger depositional structure. Detailed subsurface maps at the Marginulina texana and the overlying Miogypsinoides level are presented. These maps indicate that one common fault block is productive from two different levels. The deeper Marginulina texana sands are trapped on north dip upthrown to a southern boundary fault, Fault B. The overlying Miogypsinoides sands are trapped on south dip downthrown to a northern boundary fault, Fault A. The northern boundary fault, Fault A, was the Marginulina texana expansion fault and rotated that downthrown section to north dip. Because of the difference in dip between the two levels, the apex of the deeper Marginulina texana fault closure is offset by one mile south relative to the overlying Miogypsinoides fault closure. Analysis indicates that important structural growth occurred during Marginulina texana deposition with a local unconformity covering the apex of the upthrown fault closure. State-of-the-art reconnaissance seismic data clearly exhibit this buried rotated fault block.
Direct evaluation of fault trees using object-oriented programming techniques
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1989-01-01
Object-oriented programming techniques are used in an algorithm for the direct evaluation of fault trees. The algorithm combines a simple bottom-up procedure for trees without repeated events with a top-down recursive procedure for trees with repeated events. The object-oriented approach results in a dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. An example is presented in which the results of the algorithm implemented with conventional techniques are compared to those of the object-oriented approach.
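To illustrate why repeated events force a different procedure than a single bottom-up pass, the sketch below evaluates a tree bottom-up when no basic event repeats and otherwise conditions on the repeated event (Shannon decomposition). This is a simplified, hypothetical stand-in for the paper's object-oriented top-down recursion; the tuple encoding and event names are invented for the example.

```python
# A fault tree is encoded as nested tuples:
#   ("BASIC", name) | ("AND", [children]) | ("OR", [children])

def events(tree):
    # collect basic-event names appearing anywhere in the tree
    if tree[0] == "BASIC":
        return [tree[1]]
    names = []
    for child in tree[1]:
        names += events(child)
    return names

def bottom_up(tree, probs):
    # simple product formulas; exact only when no stochastic event repeats
    if tree[0] == "BASIC":
        return probs[tree[1]]
    ps = [bottom_up(c, probs) for c in tree[1]]
    if tree[0] == "AND":
        result = 1.0
        for p in ps:
            result *= p
        return result
    result = 1.0          # OR gate
    for p in ps:
        result *= 1.0 - p
    return 1.0 - result

def solve(tree, probs, pinned=frozenset()):
    names = events(tree)
    repeated = next((n for n in names
                     if names.count(n) > 1 and n not in pinned), None)
    if repeated is None:
        return bottom_up(tree, probs)
    # condition on the repeated event occurring / not occurring;
    # pinned events are deterministic, so bottom-up becomes exact
    p = probs[repeated]
    occur = dict(probs, **{repeated: 1.0})
    absent = dict(probs, **{repeated: 0.0})
    return (p * solve(tree, occur, pinned | {repeated})
            + (1.0 - p) * solve(tree, absent, pinned | {repeated}))

# 'a' repeats: naive bottom-up gives 0.118, but a OR (a AND b) = a,
# so the exact answer is P(a) = 0.1, recovered by conditioning.
tree = ("OR", [("AND", [("BASIC", "a"), ("BASIC", "b")]), ("BASIC", "a")])
print(solve(tree, {"a": 0.1, "b": 0.2}))
```

Caching each conditioned subresult, as the object-oriented design does with its dynamic modularization, is what lets intermediate results be reused when part of the tree is later modified.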
NASA Astrophysics Data System (ADS)
Guns, K. A.; Bennett, R. A.; Blisniuk, K.
2017-12-01
To better evaluate the distribution and transfer of strain and slip along the Southern San Andreas Fault (SSAF) zone in the northern Coachella valley in southern California, we integrate geological and geodetic observations to test whether strain is being transferred away from the SSAF system towards the Eastern California Shear Zone through microblock rotation of the Eastern Transverse Ranges (ETR). The faults of the ETR consist of five east-west trending left lateral strike slip faults that have measured cumulative offsets of up to 20 km and as low as 1 km. Current kinematic and block models offer a variety of slip-rate estimates, from as low as zero to as high as 7 mm/yr, suggesting a gap in our understanding of what role these faults play in the larger system. To determine whether present-day block rotation along these faults is contributing to strain transfer in the region, we are applying 10Be surface exposure dating methods to observed offset channel and alluvial fan deposits in order to estimate fault slip rates along two faults in the ETR. We present observations of offset geomorphic landforms using field mapping and LiDAR data at three sites along the Blue Cut Fault and one site along the Smoke Tree Wash Fault in Joshua Tree National Park which indicate recent Quaternary fault activity. Initial results of site mapping and clast count analyses reveal at least three stages of offset, including potential Holocene offsets, for one site along the Blue Cut Fault, while preliminary 10Be geochronology is in progress. These geologic slip-rate data, combined with our new geodetic surface velocity field derived from updated campaign-based GPS measurements within Joshua Tree National Park, will allow us to construct a suite of elastic fault block models to elucidate rates of strain transfer away from the SSAF and how that strain transfer may be affecting the length of the interseismic period along the SSAF.
FAULT TREE ANALYSIS FOR EXPOSURE TO REFRIGERANTS USED FOR AUTOMOTIVE AIR CONDITIONING IN THE U.S.
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servic...
A Fault Tree Approach to Analysis of Organizational Communication Systems.
ERIC Educational Resources Information Center
Witkin, Belle Ruth; Stephens, Kent G.
Fault Tree Analysis (FTA) is a method of examining communication in an organization by focusing on: (1) the complex interrelationships in human systems, particularly in communication systems; (2) interactions across subsystems and system boundaries; and (3) the need to select and "prioritize" channels which will eliminate noise in the…
Robert E. Keane
2006-01-01
The Tree Data (TD) methods are used to sample individual live and dead trees on a fixed-area plot to estimate tree density, size, and age class distributions before and after fire in order to assess tree survival and mortality rates. This method can also be used to sample individual shrubs if they are over 4.5 ft tall. When trees are larger than the user-specified...
Applying fault tree analysis to the prevention of wrong-site surgery.
Abecassis, Zachary A; McElroy, Lisa M; Patel, Ronak M; Khorzad, Rebeca; Carroll, Charles; Mehrotra, Sanjay
2015-01-01
Wrong-site surgery (WSS) is a rare event that occurs to hundreds of patients each year. Despite national implementation of the Universal Protocol over the past decade, development of effective interventions remains a challenge. We performed a systematic review of the literature reporting root causes of WSS and used the results to perform a fault tree analysis to assess the reliability of the system in preventing WSS and to identify high-priority targets for interventions aimed at reducing WSS. Process components where a single error could result in WSS were labeled with OR gates; process aspects reinforced by verification were labeled with AND gates. The overall redundancy of the system was evaluated based on prevalence of AND gates and OR gates. In total, 37 studies described risk factors for WSS. The fault tree contains 35 faults, most of which fall into five main categories. Despite the Universal Protocol mandating patient verification, surgical site signing, and a brief time-out, a large proportion of the process relies on human transcription and verification. Fault tree analysis provides a standardized perspective of errors or faults within the system of surgical scheduling and site confirmation. It can be adapted by institutions or specialties to lead to more targeted interventions to increase redundancy and reliability within the preoperative process. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Ly, U. L.; Ho, J. K.
1986-01-01
A systematic procedure for the synthesis of control laws tolerant of actuator failures has been presented. Two design methods were used to synthesize fault tolerant controllers: the conventional LQ design method and a direct feedback controller design method, SANDY. The latter method is used primarily to streamline the full-state LQ feedback design into a practical, implementable output feedback controller structure. To achieve robustness to control actuator failure, the redundant surfaces are properly balanced according to their control effectiveness. A simple gain schedule based on the landing gear up/down logic involving only three gains was developed to handle three design flight conditions: Mach .25 and Mach .60 at 5000 ft and Mach .90 at 20,000 ft. The fault tolerant control law developed in this study provides good stability augmentation and performance for the relaxed static stability aircraft. The augmented aircraft responses are found to be invariant to the presence of a failure. Furthermore, single-loop stability margins of +6 dB in gain and +30 deg in phase were achieved along with -40 dB/decade rolloff at high frequency.
NASA Astrophysics Data System (ADS)
Sagar, M. W.; Seward, D.; Norton, K. P.
2016-12-01
The 650 km-long Australian-Pacific plate boundary Alpine Fault is remarkably straight at a regional scale, except for a prominent S-shaped bend in the northern South Island. This is a restraining bend and has been referred to as the `Big Bend' due to similarities with the Transverse Ranges section of the San Andreas Fault. The Alpine Fault is the main source of seismic hazard in the South Island, yet there are no constraints on slip rates at the Big Bend. Furthermore, the timing of Big Bend development is poorly constrained to the Miocene. To address these issues we are using the fission-track (FT) and 40Ar/39Ar thermochronometers, together with basin-averaged cosmogenic nuclide 10Be concentrations to constrain the onset and rate of Neogene-Quaternary exhumation of the Australian and Pacific plates at the Big Bend. Exhumation rates at the Big Bend are expected to be greater than those for adjoining sections of the Alpine Fault due to locally enhanced shortening. Apatite FT ages and modelled thermal histories indicate that exhumation of the Australian Plate had begun by 13 Ma and 3 km of exhumation has occurred since that time, requiring a minimum exhumation rate of 0.2 mm/year. In contrast, on the Pacific Plate, zircon FT cooling ages suggest ≥7 km of exhumation in the past 2-3 Ma, corresponding to a minimum exhumation rate of 2 mm/year. Preliminary assessment of stream channel gradients either side of the Big Bend suggests equilibrium between uplift and erosion. The implication of this is that Quaternary erosion rates estimated from 10Be concentrations will approximate uplift rates. These uplift rates will help to better constrain the dip-slip rate of the Alpine Fault, which will allow the National Seismic Hazard Model to be updated.
In Situ Measurement of Permeability in the Vicinity of Faulted Nonwelded Bishop Tuff, Bishop, CA
NASA Astrophysics Data System (ADS)
Dinwiddie, C. L.; Fedors, R. W.; Ferrill, D. A.; Bradbury, K. K.
2002-12-01
The nonwelded Bishop Tuff includes matrix-supported massive ignimbrites and clast-supported bedded deposits. Fluid flow through such faulted nonwelded tuff is likely to be influenced by a combination of host rock properties and the presence of deformation features, such as open fractures, mineralized fractures, and fault zones that exhibit comminuted fault rock and clays. Lithologic contacts between fine- and coarse-grained sub-units of nonwelded tuff may induce formation of capillary and/or permeability barriers within the unsaturated zone, potentially leading to down-dip lateral diversion of otherwise vertically flowing fluid. However, discontinuities (e.g., fractures and faults) may lead to preferential sub-vertical fast flow paths in the event of episodic infiltration rates, thus disrupting the potential for both (1) large-scale capillary and/or permeability barriers to form and for (2) redirection of water flow over great lateral distances. This study focuses on an innovative technique for measuring changes in matrix permeability near faults in situ--changes that may lead to enhancement of vertical fluid flow and disruption of lateral fluid flow. A small-drillhole minipermeameter probe provides a means to eliminate extraction of fragile nonwelded tuffs as a necessity for permeability measurement. Advantages of this approach include (1) a reduction of weathering-effects on measured permeability, and (2) provision of a superior sealing mechanism around the gas injection zone. In order to evaluate the effect of faults and fault zone deformation on nonwelded tuff matrix permeability, as well as to address the potential for disruption of lithologic barrier-induced lateral diversion of flow, data were collected from two fault systems and from unfaulted host rock. Two hundred and sixty-seven gas-permeability measurements were made at 89 locations; i.e. permeability measurements were made in triplicate at each location with three flow rates. 
Data were collected at the first fault and perpendicularly away from it within the hanging wall to a distance of 6 m [20 ft] along one transect, and perpendicular to the fault from the foot wall to the hanging wall for a distance of 6 m [20 ft] along a second transect. Additionally, eight water-permeameter tests were conducted in order to augment the gas-permeability data. Gas-permeability measurements were collected along two transects at the main fault of the second fault system and perpendicularly away from it within the foot wall to a distance of 10.5 m [34 ft], crossing several secondary faults in the process. Data were also collected within the fault gouge of the main fault, and were found to vary therein by an order of magnitude. This Bishop Tuff study supports the U.S. Nuclear Regulatory Commission (NRC) review of hydrologic property studies at Yucca Mountain, Nevada, which are conducted by the U.S. Department of Energy. This abstract is an independent product of the CNWRA and does not necessarily reflect the views or regulatory position of the NRC.
Langenheim, Victoria E.; Rymer, Michael J.; Catchings, Rufus D.; Goldman, Mark R.; Watt, Janet T.; Powell, Robert E.; Matti, Jonathan C.
2016-03-02
We describe high-resolution gravity and seismic refraction surveys acquired to determine the thickness of valley-fill deposits and to delineate geologic structures that might influence groundwater flow beneath the Smoke Tree Wash area in Joshua Tree National Park. These surveys identified a sedimentary basin that is fault-controlled. A profile across the Smoke Tree Wash fault zone reveals low gravity values and seismic velocities that coincide with a mapped strand of the Smoke Tree Wash fault. Modeling of the gravity data reveals a basin about 2–2.5 km long and 1 km wide that is roughly centered on this mapped strand, and bounded by inferred faults. According to the gravity model the deepest part of the basin is about 270 m, but this area coincides with low velocities that are not characteristic of typical basement complex rocks. Most likely, the density contrast assumed in the inversion is too high or the uncharacteristically low velocities represent highly fractured or weathered basement rocks, or both. A longer seismic profile extending onto basement outcrops would help differentiate which scenario is more accurate. The seismic velocities also determine the depth to water table along the profile to be about 40–60 m, consistent with water levels measured in water wells near the northern end of the profile.
A Fault Tree Approach to Needs Assessment -- An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
A "failsafe" technology is presented based on a new unified theory of needs assessment. Basically the paper discusses fault tree analysis as a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur and then suggesting high priority avoidance strategies for those…
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu
2016-05-01
Recently, China has frequently experienced large-scale, severe and persistent haze pollution due to surging urbanization and industrialization and a rapid growth in the number of motor vehicles and energy consumption. Vehicle emissions due to the consumption of large quantities of fossil fuels are undoubtedly a critical factor in the haze pollution. This work is focused on the causation mechanism of haze pollution related to the vehicle emission for Guangzhou city by employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system of "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are discussed and identified by using this deductive FTA method. The qualitative and quantitative assessments of the fault tree system are carried out based on the structure, probability and critical importance degree analysis of the risk factors. The study may provide a new simple and effective tool/strategy for the causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sanchez-Vila, X.; de Barros, F.; Bolster, D.; Nowak, W.
2010-12-01
Assessing the potential risk of hydro(geo)logical supply systems to human populations is an interdisciplinary field. It relies on the expertise in fields as distant as hydrogeology, medicine, or anthropology, and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties in hydrological, physiological and human behavioral parameters. We propose the use of fault trees to address the task of probabilistic risk analysis (PRA) and to support related management decisions. Fault trees allow decomposing the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural “Divide and Conquer” approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance and stage of analysis. The separation in modules allows for a true inter- and multi-disciplinary approach. This presentation highlights the three novel features of our work: (1) we define failure in terms of risk being above a threshold value, whereas previous studies used auxiliary events such as exceedance of critical concentration levels, (2) we plot an integrated fault tree that handles uncertainty in both hydrological and health components in a unified way, and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
A fuzzy decision tree for fault classification.
Zio, Enrico; Baraldi, Piero; Popescu, Irina C
2008-02-01
In plant accident management, the control room operators are required to identify the causes of the accident, based on the different patterns of evolution that develop in the monitored process variables. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
Geology of the Gladys McCall geopressured-geothermal prospect, Cameron Parish, Louisiana
DOE Office of Scientific and Technical Information (OSTI.GOV)
John, C.J.
The Gladys McCall prospect lies at the western edge of the Rockefeller Wildlife Refuge about 88 km (55 mi) southeast of Lake Charles in Cameron Parish, Louisiana. The test well is 4825 m (15,831 ft) deep and was drilled in 1981 under the U.S. Department of Energy geopressured-geothermal research program. The well was shut in at the end of October 1987 after it had produced over 27 million barrels of brine and 676 MMscf gas, without any significant pressure decline. The stratigraphic section seen in this test well consists of alternating sandstones and shales with about 350 m (1150 ft) of net sand between 4393 m (14,412 ft) and 4974 m (16,320 ft). The producing reservoir is bounded on the north and south by faults. The east-west dimension is poorly defined due to lack of deep well control. Eleven prospective production zones have been identified. The pressure maintenance and the continuous high brine yield from the reservoir may be due to laterally overlapping and connected sandstones, communication between overlying and/or underlying reservoirs, growth faults acting as passageways for brine, shale dewatering, or possible communication of zones behind the casing.
FTC - THE FAULT-TREE COMPILER (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. 
Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
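The five FTC gate types reduce to simple probability formulas when inputs are independent. The sketch below is not FTC's implementation (FTC is written in PASCAL, C, and FORTRAN 77 and derives rigorous error bounds); it is a hedged illustration of the gate semantics, with the M-of-N gate evaluated by exact enumeration.

```python
from itertools import combinations

def gate_and(ps):
    # AND: all inputs fail
    result = 1.0
    for p in ps:
        result *= p
    return result

def gate_or(ps):
    # OR: at least one input fails
    result = 1.0
    for p in ps:
        result *= 1.0 - p
    return 1.0 - result

def gate_xor(p1, p2):
    # EXCLUSIVE OR: exactly one of the two inputs fails
    return p1 * (1.0 - p2) + (1.0 - p1) * p2

def gate_invert(p):
    # INVERT: the input event does not occur
    return 1.0 - p

def gate_m_of_n(m, ps):
    # M OF N: at least m of the n inputs fail, summed over exact subsets
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for failed in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= ps[i] if i in failed else 1.0 - ps[i]
            total += term
    return total

# sanity check: a 1-of-n gate is equivalent to an OR gate (both ≈ 0.28 here)
print(gate_or([0.1, 0.2]), gate_m_of_n(1, [0.1, 0.2]))
```

Enumeration is exponential in n, which is why a production tool like FTC relies on more efficient solution techniques; the formulas above only pin down what each gate means.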
FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. 
Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis Smith; James Knudsen
As part of a series of papers on the topic of advance probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation; or in this case, the failure of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametrically) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation to the fault trees and how they are linked into the event tree.
The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude its use on real-world problems.
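To see the shape of the phased calculation, the sketch below treats each phase's success criterion as a k-out-of-5 requirement on independent, identical thruster assemblies and multiplies the phase reliabilities. This is exactly the kind of static approximation the paper cautions about (a true phased-mission model must track the same assembly's state across phases), and every number here is invented, not taken from the benchmark.

```python
from math import comb

def p_phase_ok(k_required, n, p_fail):
    # probability that at least k of n independent assemblies survive a phase
    p_ok = 1.0 - p_fail
    return sum(comb(n, i) * p_ok**i * p_fail**(n - i)
               for i in range(k_required, n + 1))

# (assemblies required, per-assembly failure probability) for each phase;
# illustrative values only, not the seven-phase benchmark configuration
phases = [(3, 0.01), (4, 0.02), (2, 0.05)]

p_mission_ok = 1.0
for k, q in phases:
    p_mission_ok *= p_phase_ok(k, 5, q)

print("mission failure probability:", 1.0 - p_mission_ok)
```

Because each phase is treated as a fresh Bernoulli trial, this sketch overstates independence between phases; correcting for that carried-over state is what forces the manual cut-set editing the paper describes.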
Worl, Ronald G.; Lewis, Reed S.
2001-01-01
Mineral deposits in the Croesus and Hailey gold belt mineralized areas in Blaine County, south-central Idaho, are preciousand base-metal quartz veins that are part of a family of vein deposits spatially and temporally associated with the Idaho batholith. Historic production from these veins has been mainly gold and silver. Host rocks are older border phase plutons of the Idaho batholith that are characterized by more potassium and less sodium as compared to rocks from the main body of the batholith to the west. Host structures are reverse faults that have moderate to shallow dips to the northeast and high-angle normal faults that also strike northwest. The veins are characterized by several generations of quartz and generally sparse sulfide minerals; gold is associated with late-stage comb quartz. The precious-metal ore bodies are in a series of shoots, each of which is as much as 8 ft in width, 400 ft in breadth, and 1,000 ft in pitch length.
A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…
Poplar FT2 Shortens the Juvenile Phase and Promotes Seasonal Flowering[W]
Hsu, Chuan-Yu; Liu, Yunxia; Luthe, Dawn S.; Yuceer, Cetin
2006-01-01
Many woody perennials, such as poplar (Populus deltoides), are not able to form flower buds during the first several years of their life cycle. They must undergo a transition from the juvenile phase to the reproductive phase to be competent to produce flower buds. After this transition, trees begin to form flower buds in the spring of each growing season. The genetic factors that control flower initiation, ending the juvenile phase, are unknown in poplar. The factors that regulate seasonal flower bud formation are also unknown. Here, we report that poplar FLOWERING LOCUS T2 (FT2), a relative of the Arabidopsis thaliana flowering-time gene FT, controls first-time and seasonal flowering in poplar. The FT2 transcript is rare during the juvenile phase of poplar. When juvenile poplar is transformed with FT2 and transcript levels are increased, flowering is induced within 1 year. During the transition between vegetative and reproductive growth in mature trees, FT2 transcripts are abundant during reproductive growth under long days. Subsequently, floral meristems emerge on flanks of the axillary inflorescence shoots. These findings suggest that FT2 is part of the flower initiation pathway in poplar and plays an additional role in regulating seasonal flower initiation that is integrated with the poplar perennial growth habit. PMID:16844908
Dennis E. Ferguson; John C. Byrne; William R. Wykoff; Brian Kummet; Ted Hensold
2011-01-01
Stands of dense, natural ponderosa pine (Pinus ponderosa var. ponderosa) regeneration were operationally, precommercially thinned at seven sites - four on Nez Perce Tribal lands in northern Idaho and three on Spokane Tribal lands in eastern Washington. Five spacing treatments were studied - control (no thinning), 5x5 ft, 7x7 ft, 10x10 ft, and 14x14 ft. Sample trees...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reid, W.M.
1988-02-01
One of the newest major oil plays in the Gulf Coast basin, the Austin Chalk reportedly produces in three belts: an updip belt, where production is from fractured chalk in structurally high positions along faults above 7000 ft; a shallow downdip belt, where the chalk is uniformly saturated with oil from 7000 to 9000 ft; and a deeper downdip belt saturated with gas and condensate below 9000 ft. The updip fields usually occur on the southeastern, upthrown side of the Luling, Mexia, and Charlotte fault zones. Production is from fractures that connect the relatively sparse matrix pores with more permeable fracture systems. The fractures resulted from regional extensional stress during the opening of the Gulf Coast basin on the divergent margin of the North American plate during the Laramide orogeny. The fractures are more common in the more brittle chalk than in the overlying Navarro and underlying Eagle Ford shales, which are less brittle. The oil in the updip traps in the chalk may have been generated in place downdip, and migrated updip along the extension fractures into the updip traps during or after the Laramide orogeny. A fairway of previously unmapped updip faults and drag folds has been mapped using Thematic Mapper imagery and seismic, structural, and resistivity maps near the Nixon field, Burleson County, Texas. This fairway, prospective for oil from the Austin Chalk, contains wells reported to produce from the Austin Chalk which lie along lineaments and linear features on the Thematic Mapper imagery and faults in the seismic and structure maps.
Potential fire behavior is reduced following forest restoration treatments
Peter Z. Fule; Charles McHugh; Thomas A. Heinlein; W. Wallace Covington
2001-01-01
Potential fire behavior was compared under dry, windy weather conditions in 12 ponderosa pine stands treated with alternative thinning prescriptions in the wildland/urban interface of Flagstaff, Arizona. Prior to thinning, stands averaged 474 trees/acre, 158 ft²/acre basal area, crown bulk density 0.0045 lb/ft³, and crown base height 19.2 ft. Three thinning treatments...
The engine fuel system fault analysis
NASA Astrophysics Data System (ADS)
Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei
2017-05-01
To improve the reliability of the engine fuel system, the typical fault factors of the engine fuel system were analyzed from the point of view of structure and function. The fault characteristics were obtained by building the fuel system fault tree. By applying the failure mode and effects analysis (FMEA) method, several characteristics of the key component, the fuel regulator, were obtained, including its fault modes, fault causes, and fault effects. All of this lays the foundation for the subsequent development of a fault diagnosis system.
Expanding options for reforestation of the Cumberland Plateau
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGee, C.E.
1980-01-01
Stems of d.b.h. 4 inches or greater in a low quality stand in Tennessee dominated by white and scarlet oak (Quercus coccinea) were sheared in September-November 1976, chipped, and removed. Sawtimber quality trees (30) in the 37-acre area were felled separately by conventional methods. Residual trees (2-3 inches d.b.h., ht. over 4.5 ft) in some areas were injected with herbicide. One-acre plots were planted with 1+0 loblolly pine, 2+0 white pine (Pinus strobus), or 1+0 yellow poplar, or left to regenerate naturally. After 2 years, survival of all trees was good (83% or over) and average height of loblolly pine, yellow poplar and desirable natural stems (white, scarlet or black oak, Quercus velutina) was 3.3 ft, significantly different from that of white pine (1.5 ft). It is concluded that poor quality stands can be cheaply improved by this method, although release from competing vegetation may be necessary, especially in the case of white pine.
Fault tree analysis: NiH2 aerospace cells for LEO mission
NASA Technical Reports Server (NTRS)
Klein, Glenn C.; Rash, Donald E., Jr.
1992-01-01
Fault Tree Analysis (FTA) is one of several reliability analyses or assessments applied to battery cells to be utilized in typical Electric Power Subsystems for spacecraft in low Earth orbit missions. FTA is generally the process of reviewing and analytically examining a system or equipment in such a way as to emphasize the lower-level fault occurrences which directly or indirectly contribute to the major fault or top-level event. This qualitative FTA addresses the potential of occurrence for five specific top-level events: hydrogen leakage, through either discrete leakage paths or pressure vessel rupture; and four distinct modes of performance degradation: high charge voltage, suppressed discharge voltage, loss of capacity, and high pressure.
Modular techniques for dynamic fault-tree analysis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Dugan, Joanne B.
1992-01-01
It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rush, Jason; Holubnyak, Yevhen; Watney, Willard
This DOE-funded project evaluates the utility of seismic volumetric curvature (VC) for predicting stratal and structural architecture diagnostic of paleokarst reservoirs. Of special interest are applications geared toward carbon capture, utilization, and storage (CCUS). VC has been championed for identifying faults (offset <¼ λ) that cannot be imaged by conventional 3-D seismic attributes such as coherence. The objective of this research was to evaluate VC-techniques for reducing uncertainties in reservoir compartmentalization studies and seal risk assessments especially for saline aquifers. A 2000-ft horizontal lateral was purposefully drilled across VC-imaged lineaments—interpreted to record a fractured and a fault-bounded doline—to physically confirm their presence. The 15-mi² study area is located in southeastern Bemis-Shutts Field, which is situated along the crest of the Central Kansas Uplift (CKU) in Ellis County, Kansas. The uppermost Arbuckle (200+ ft) has extensive paleokarst including collapsed paleocaverns and dolines related to exceedingly prolonged pre-Simpson (Sauk–Tippecanoe) and/or pre-Pennsylvanian subaerial exposure. A lateral borehole was successfully drilled across the full extent (~1100 ft) of a VC-inferred paleokarst doline. Triple combo (GR-neutron/density-resistivity), full-wave sonic, and borehole micro-imager logs were successfully run to TD on drill-pipe. Results from the formation evaluation reveal breccias (e.g., crackle, mosaic, chaotic), fractures, faults, vugs (1-6"), and unaffected host strata consistent with the pre-spud interpretation. Well-rounded pebbles were also observed on the image log. VC-inferred lineaments coincide with 20–80-ft wide intervals of high GR values (100+ API), matrix-rich breccias, and faults. 
To further demonstrate their utility, VC attributes are integrated into a geocellular modeling workflow: 1) to constrain the structural model; 2) to generate facies probability grids; and 3) to collocate petrophysical models to separate-vug rock fabrics along solution-enlarged fault and fracture systems. Simulation-based studies demonstrate a potential alternative field development model for developing CO2 storage sites that target carbonate reservoirs overprinted by paleokarst. Simulation results for this complex reservoir indicate that individual fault blocks could function as discrete containers for CO2 storage, thereby reducing the risk of plume migration outside the legally defined extent of the permitted storage site. Vertically extensive, anastomosing, solution-enlarged fault/fracture systems, infilled by clay-rich sediments, would operate as non-to-low-permeability vertical "curtains" that restrict CO2 movement beyond the confines of the CO2 storage site. Such a location could be developed in a checkerboard fashion with CO2 injection operations occurring in one block and surveillance operations occurring in the adjacent block. Such naturally partitioned reservoirs may be ideal candidates for reducing risks associated with CO2 plume breakthrough.
Ikegami, Hidetoshi; Nogata, Hitoshi; Inoue, Yoshiaki; Himeno, Shuichi; Yakushiji, Hiroshi; Hirata, Chiharu; Hirashima, Keita; Mori, Masashi; Awamura, Mitsuo; Nakahara, Takao
2013-12-16
Because floral induction occurs in many plants when specific environmental conditions are satisfied, most plants bloom and bear fruit during the same season each year. In fig, by contrast, the time interval during which inflorescence (flower bud, fruit) differentiation occurs corresponds to the shoot elongation period. Fig trees thus differ from many species in their reproductive growth characteristics. To date, however, the molecular mechanisms underlying this unorthodox physiology of floral induction and fruit setting in fig trees have not been elucidated. We isolated a FLOWERING LOCUS T (FT)-like gene from fig and examined its function, characteristics, and expression patterns. The isolated gene, F. carica FT (FcFT1), is a single-copy gene in fig and shows the highest similarity at the amino acid level (93.1%) to apple MdFT2. We sequenced its upstream region (1,644 bp) and identified many light-responsive elements. FcFT1 was mainly expressed in leaves and induced early flowering in transgenic tobacco, suggesting that FcFT1 is a fig FT ortholog. Real-time reverse-transcription PCR analysis revealed that FcFT1 mRNA expression occurred only in leaves at the lower nodes, the early fruit-setting positions. mRNA levels remained constant for approximately 5 months from spring to autumn, corresponding almost exactly to the inflorescence differentiation season. Diurnal variation analysis revealed that FcFT1 mRNA expression increased under relative long-day and short-day conditions, but not under continuous darkness. These results suggest that FcFT1 activation is regulated by light conditions and may contribute to fig's unique fruit-setting characteristics.
Clearcut harvesting costs and production rates for young-growth mixed-conifer stands
William A. Atkinson; Dale O. Hall
1966-01-01
In clearcutting 90-year-old stands at the Challenge Experimental Forest, all merchantable trees greater than 12 inches d.b.h. were removed. Felling costs averaged $3.86 and required 0.55 man-hours per M bd. ft. in cut volumes averaging 19,700 bd. ft. per acre. Yarding, at a rate of 0.54 hours per M bd. ft., cost $4.42.
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability, and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
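The figures of merit named above follow from standard constant-failure-rate formulas. This sketch uses hypothetical rates for illustration only, not measured Zynq-7010 data:

```python
def mttf(failure_rate):
    # Mean time to failure for an exponential (constant-rate) failure model.
    return 1.0 / failure_rate

def steady_state_unavailability(failure_rate, repair_rate):
    # Classic two-state (up/down) Markov result: U = lambda / (lambda + mu).
    return failure_rate / (failure_rate + repair_rate)

# Hypothetical numbers for illustration:
lam = 1e-5   # failures per hour
mu = 0.5     # repairs per hour
print(mttf(lam))                              # ~1e5 hours
print(steady_state_unavailability(lam, mu))   # ~2e-5
```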
Learning from examples - Generation and evaluation of decision trees for software resource analysis
NASA Technical Reports Server (NTRS)
Selby, Richard W.; Porter, Adam A.
1988-01-01
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
Spent Fuel Test-Climax: core logging for site investigation and instrumentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilder, D.G.; Yow, J.L. Jr.; Thorpe, R.K.
1982-05-28
As an integral part of the Spent Fuel Test-Climax, 5150 ft (1570 m) of granite core was obtained. This core was diamond drilled in various sizes, mainly 38-mm and 76-mm diameters. The core was taken with single-tube core barrels and was unoriented. Techniques used to drill and log this core are discussed, as well as techniques to orient the core. Of the 5150 ft (1570 m) of core, more than 3645 ft (1111 m) was retained and logged in some detail. As a result of the core logging, geologic discontinuities were identified, and joint frequency and spacing were characterized. Discontinuities identified included several joint sets, shear zones, and faults. Correlations based on coring alone were generally found to be impossible, even for the more prominent features. The only feature properly correlated from the exploratory drilling was the fault system at the end of the facility, but it was not identified from the exploratory core as a fault. Identification of discontinuities was later helped by underground mapping that identified several different joint sets with different characteristics. It was found that joint frequency varied from 0.3 to 1.1 joints per foot of core for open fractures and from 0.3 to 3.3/ft for closed or healed fractures. Histograms of fracture spacing indicate that there is likely a random distribution of spacing superimposed upon uniformly spaced fractures. It was found that a low-angle joint set had a persistent mean orientation. These joints were healed and had pervasive wall-rock alteration, which made identification of joints in this set possible. The recognition of a joint set with known attitude allowed orientation of much of the core. This orientation technique was found to be effective. 10 references, 25 figures, 4 tables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowe, J.T.; Carrington, D.B.
1990-09-01
The Austin Chalk is buried to a depth of only 2,100-2,500 ft and has retained primary microporosity unlike the typical deep fractured chalk reservoirs. The Van structure is a complexly faulted domal anticline created by salt intrusion and is approximately 2,000 ft higher than surrounding structures in the area. A major northwest-dipping fault acts as the primary trapping mechanism. The field has produced 0.5 billion BO from thick Woodbine sands since its discovery in 1929. Occurrence of oil in the Austin Chalk has been known since the field discovery, but prior completions were low rate oil producers. Recent development of a large fracture stimulation technique has resulted in increased production rates of up to 300 BOPD. The Austin Chalk reservoir limits were determined by isopaching feet of minimum productive resistivity having porosity above a cutoff value. The resistivity/porosity isopach showed a direct correlation between Austin Chalk productivity and the Austin Chalk structure and faulting pattern. Structural evidence along with oil typing indicate that the oil in the Austin Chalk has migrated upward along fault planes and through fault juxtaposition from the Woodbine sands 200 ft below the Austin Chalk. Thin-section and scanning electron microscopy work performed on conventional cores showed that the Van Austin Chalk formation is a very fine grained limestone composed primarily of coccoliths. Various amounts of detrital illite clay are present in the coccolith matrix. All effective porosity is micro-intergranular and ranges from 15 to 35%. Based on the core analyses, the main porosity reducing agent and therefore control on reservoir quality is the amount of detrital clay present filling the micropores. Permeability is very low with values ranging from 0.01 to 1.5 md. There is no evidence of significant natural fractures in the core. 
Artificial fractures are therefore required to create the permeability needed to sustain commercial production rates.
Decision tree and PCA-based fault diagnosis of rotating machinery
NASA Astrophysics Data System (ADS)
Sun, Weixiang; Chen, Jin; Li, Jiaqing
2007-04-01
After analysing the flaws of conventional fault diagnosis methods, data mining technology is introduced to the fault diagnosis field, and a new method based on the C4.5 decision tree and principal component analysis (PCA) is proposed. In this method, PCA is used to reduce the number of features after data collection, preprocessing, and feature extraction. Then, C4.5 is trained on the samples to generate a decision tree model with diagnosis knowledge. Finally, the tree model is used to perform diagnosis. To validate the proposed method, six kinds of running states (normal without any defect, unbalance, rotor radial rub, oil whirl, shaft crack, and a simultaneous state of unbalance and radial rub) are simulated on a Bently Rotor Kit RK4 to compare the C4.5-and-PCA-based method with a back-propagation neural network (BPNN). The results show that the C4.5-and-PCA-based diagnosis method has higher accuracy and needs less training time than the BPNN.
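A rough illustration of the PCA-then-classify pipeline follows, with a one-level decision stump standing in for a full C4.5 tree and synthetic data in place of Bently Rotor Kit measurements; everything here is an assumption for demonstration purposes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for vibration features: two classes ("normal" vs "unbalance"),
# 6 correlated features, separated along one direction in feature space.
n = 100
normal = rng.normal(0.0, 1.0, size=(n, 6))
faulty = rng.normal(0.0, 1.0, size=(n, 6)) + 4.0  # shifted class mean
X = np.vstack([normal, faulty])
y = np.array([0] * n + [1] * n)

# PCA via eigendecomposition of the covariance matrix: keep the first
# principal component, which captures the between-class shift here.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, -1]          # eigenvector of the largest eigenvalue
scores = Xc @ pc1

# One-level "decision tree" (a stump) on the PCA score: threshold at the
# midpoint of the class means. C4.5 would instead grow a full tree using
# information-gain ratios; this stump is only a minimal illustration.
thr = (scores[y == 0].mean() + scores[y == 1].mean()) / 2.0
sign = 1 if scores[y == 1].mean() > thr else -1
pred = (sign * (scores - thr) > 0).astype(int)
accuracy = (pred == y).mean()
print(accuracy)
```

On this well-separated synthetic data the stump classifies essentially perfectly; the point is the pipeline shape (reduce with PCA, then learn a tree on the reduced features), not the classifier strength.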
EFFECTS OF THE 1906 EARTHQUAKE ON THE BALD HILL OUTLET SYSTEM, SAN MATEO COUNTY, CALIFORNIA.
Pampeyan, Earl H.
1986-01-01
Following the earthquake of April 18, 1906, it was discovered that a brick forebay and other parts of the reservoir outlet system were in the slip zone of the San Andreas fault. The original outlet through which water was directed to San Francisco consisted of two tunnels joined at the brick forebay; one tunnel extends 2,820 ft to the east under Bald Hill on Buri Buri Ridge, and the other tunnel intersects the lake bottom about 250 ft west of the forebay. In 1897 a second intake was added to the system, also joining the original forebay. During the present study the accessible parts of this original outlet system were examined with the hope of learning how the system had been affected by fault slip in 1906.
East Cameron Block 270, a Pleistocene field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, D.S.; Sutley, C.E.; Berlitz, R.E.
1974-01-01
Exploration of the Plio-Pleistocene in the Gulf of Mexico since 1970 has discovered significant hydrocarbon reserves. One of the better gas fields to date has been the Block 270 E. Cameron field. Utilization of a coordinated exploitation plan with Schlumberger has allowed Pennzoil, as operator, to develop and put on production the Block 270 field in a minimum time period. Block 270 field is a N.-S. trending faulted nose at 6,000 ft. At G-Sand depth (8,700 ft), the structure has closed, forming an elongated N.-S. structure with dip in all directions from the Block 270 area. Closure is the result of contemporaneous growth on the E. bounding regional fault. Structural and stratigraphic interpretations from dipmeters were used to help determine the most favorable offset locations.
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)
1995-01-01
A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched VQ. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed, with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.
Harayama, Hisanori; Ikeda, Takefumi; Ishida, Atsushi; Yamamoto, Shin-Ichi
2006-08-01
We investigated seasonal patterns of water relations in current-year leaves of three evergreen broad-leaved trees (Ilex pedunculosa Miq., Ligustrum japonicum Thunb., and Eurya japonica Thunb.) with delayed greening in a warm-temperate forest in Japan. We used the pressure-volume method to: (1) assess the extent to which seasonal variation in leaf water relations is attributable to leaf development processes in delayed greening leaves versus seasonal variation in environmental variables; and (2) investigate variation in leaf water relations during the transition from the sapling to the adult tree stage. Leaf mass per unit leaf area was generally lowest just after completion of leaf expansion in May (late spring), and increased gradually throughout the year. Osmotic potential at full turgor (Psi(o) (ft)) and leaf water potential at the turgor loss point (Psi(w) (tlp)) were highest in May, and lowest in midwinter in all species. In response to decreasing air temperature, Psi(o) (ft) dropped at the rate of 0.037 MPa degrees C(-1). Dry-mass-based water content of leaves and the symplastic water fraction of total leaf water content gradually decreased throughout the year in all species. These results indicate that reductions in the symplastic water fraction during leaf development contributed to the passive concentration of solutes in cells and the resulting drop in winter Psi(o) (ft). The ratio of solutes to water volume increased in winter in current-year leaves of L. japonicum and E. japonica, indicating that osmotic adjustment (active accumulation of solutes) also contributed to the drop in winter in Psi(o) (ft). Bulk modulus of elasticity in cell walls fluctuated seasonally, but no general trend was found across species. Over the growing season, Psi(o) (ft) and Psi(w) (tlp) were lower in adult trees than in saplings especially in the case of I. pedunculosa, suggesting that adult-tree leaves are more drought and cold tolerant than sapling leaves. 
The ontogenetic increase in the stress resistance of I. pedunculosa may be related to its characteristic life form because I. pedunculosa grows taller than the other species studied.
Fault tree analysis for system modeling in case of intentional EMI
NASA Astrophysics Data System (ADS)
Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.
2011-08-01
The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically, since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is fault tree analysis (FTA). FTA is used to determine the system failure probability and also the main contributors to the failure. In this paper, fault tree analysis is introduced and a possible application of the method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
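A minimal sketch of how an FTA top-event probability is computed from AND/OR gates over independent basic events; the tree structure and the numbers below are invented for illustration and are not the paper's computer-network example.

```python
# Minimal fault-tree evaluator: gates are nested tuples, leaves are
# basic-event probabilities (assumed mutually independent).
def evaluate(node):
    if isinstance(node, (int, float)):
        return float(node)
    gate, children = node[0], node[1:]
    probs = [evaluate(c) for c in children]
    if gate == "AND":          # all inputs must occur
        p = 1.0
        for q in probs:
            p *= q
        return p
    if gate == "OR":           # at least one input occurs
        p = 1.0
        for q in probs:
            p *= 1.0 - q
        return 1.0 - p
    raise ValueError(f"unknown gate: {gate}")

# Hypothetical top event: the system fails if the power supply fails
# OR both redundant network links fail.
tree = ("OR", 0.01, ("AND", 0.1, 0.1))
print(evaluate(tree))  # 1 - (1 - 0.01) * (1 - 0.1 * 0.1) = 0.0199
```

This bottom-up evaluation is exact only when basic events are independent and no event feeds more than one gate; repeated events require minimal-cut-set methods instead.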
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for the 10% exceedance in 50 years hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is given by a truncated normal distribution, specified by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of the logic tree, is used to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in the different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of each logic tree branch. The logic tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip, and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others.
However, in this study we do not investigate the sensitivity of the mean hazard results to the choice of GMPE. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
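The Monte Carlo treatment of fault-parameter uncertainty can be sketched as follows. The fault dimensions, bounds, and sample count are hypothetical, and the Wells and Coppersmith (1994) all-slip-type area-magnitude regression is used only as an illustrative scaling law, not as the study's actual relation.

```python
import numpy as np

rng = np.random.default_rng(42)

def truncated_normal(mean, std, low, high, size, rng):
    # Simple rejection sampling of a truncated normal distribution.
    out = np.empty(0)
    while out.size < size:
        draw = rng.normal(mean, std, size=size)
        out = np.concatenate([out, draw[(draw >= low) & (draw <= high)]])
    return out[:size]

# Hypothetical mapped fault: length and width sampled about mean values,
# truncated to geologically plausible bounds (200 simulations, as in the
# study's per-parameter runs).
length_km = truncated_normal(30.0, 3.0, 20.0, 40.0, 200, rng)
width_km = truncated_normal(12.0, 1.5, 8.0, 16.0, 200, rng)

# Characteristic magnitude from rupture area via Wells & Coppersmith
# (1994), all slip types: Mw = 4.07 + 0.98 * log10(A [km^2]).
mw = 4.07 + 0.98 * np.log10(length_km * width_km)
print(mw.mean(), mw.std())
```

Repeating this for each branch (magnitude, length, width, dip, slip rate) while fixing the others gives the per-parameter sensitivity; varying all simultaneously gives the overall variability.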
Geology of an Ordovician stratiform base-metal deposit in the Long Canyon Area, Blaine County, Idaho
Otto, B.R.; Zieg, G.A.
2003-01-01
In the Long Canyon area, Blaine County, Idaho, a stratiform base-metal-bearing gossan is exposed within a complexly folded and faulted sequence of Ordovician strata. The gossan horizon in graptolitic mudrock suggests preservation of bedded sulfides that were deposited by an Ordovician subaqueous hydrothermal system. Abrupt thickness changes and geochemical zoning in the metal-bearing strata suggest that the gossan is near the source of the hydrothermal system. Ordovician sedimentary rocks at Long Canyon represent a coarsening-upward section that was deposited below wave base in a submarine depositional environment. The lowest exposed rocks represent deposition in a starved, euxinic basin and overlying strata represent a prograding clastic wedge of terrigenous and calcareous detritus. The metalliferous strata are between these two types of strata. Strata at Long Canyon have been deformed by two periods of thrust faulting, at least three periods of normal faulting, and two periods of folding. Tertiary extensional faulting formed five subhorizontal structural plates. These low-angle fault-bounded plates truncate Sevier-age and possibly Antler-age thrust faults. The presence of gossan-bearing strata in the four upper plates suggests that there was only minor, although locally complex, stratigraphic displacement and rotation. The lack of correlative strata in the lowest plate suggests the displacement was greater than 2000 ft. The metalliferous strata were exposed to surface weathering, oxidation, and erosion prior to and during deposition of the Eocene Challis Volcanic Group. The orientations of erosional canyons formed during this early period of exposure were related to the orientations of Sevier-age thrust faults, and stream-channel gravel was deposited in the canyons. During this and subsequent intervals of exposure, sulfidic strata were oxidized to a minimum depth of 700 ft.
Li, Jia; Wang, Deming; Huang, Zonghou
2017-01-01
Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents. PMID:28793348
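The fuzzy-probability treatment this abstract describes can be illustrated with a minimal sketch. This is not the paper's method: it uses ordinary (not intuitionistic) trapezoidal fuzzy numbers, an interval-arithmetic product for fuzzy AND, and a simple corner average for defuzzification, all of which are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TrapFuzzy:
    """Ordinary trapezoidal fuzzy number with corners a <= b <= c <= d."""
    a: float
    b: float
    c: float
    d: float

    def mul(self, other: "TrapFuzzy") -> "TrapFuzzy":
        # Interval-arithmetic approximation of fuzzy multiplication
        # (valid for non-negative corners, as with probabilities).
        return TrapFuzzy(self.a * other.a, self.b * other.b,
                         self.c * other.c, self.d * other.d)

    def complement(self) -> "TrapFuzzy":
        # Fuzzy counterpart of 1 - p, used when combining OR gates.
        return TrapFuzzy(1 - self.d, 1 - self.c, 1 - self.b, 1 - self.a)

    def defuzzify(self) -> float:
        # Crude defuzzification: average of the four corners.
        return (self.a + self.b + self.c + self.d) / 4.0

# Hypothetical expert estimate for one basic event's probability:
p_ignition = TrapFuzzy(0.02, 0.04, 0.06, 0.08)
```

An AND gate over two such basic events would multiply their fuzzy probabilities (`p1.mul(p2)`); an OR gate would complement, multiply, and complement again, mirroring the crisp-probability formulas.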
Shi, Lei; Shuai, Jian; Xu, Kui
2014-08-15
Fire and explosion accidents of steel oil storage tanks (FEASOST) occur occasionally during production and storage processes in the petroleum and chemical industries and often have a devastating impact on lives, the environment and property. To contribute towards the development of a quantitative approach for assessing the occurrence probability of FEASOST, a fault tree of FEASOST is constructed that identifies various potential causes. Traditional fault tree analysis (FTA) can achieve quantitative evaluation if the failure data of all of the basic events (BEs) are available, which is almost impossible due to the lack of detailed data, as well as other uncertainties. This paper makes an attempt to perform FTA of FEASOST through a hybrid application of an expert-elicitation-based improved analytic hierarchy process (AHP) and fuzzy set theory, and the occurrence possibility of FEASOST is estimated for an oil depot in China. A comparison between statistical data and data calculated using fuzzy fault tree analysis (FFTA) based on traditional and improved AHP is also made. Sensitivity and importance analyses have been performed to identify the most crucial BEs leading to FEASOST, which will provide insights into how managers should focus mitigation efforts.
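The AHP step mentioned above, deriving weights from a pairwise comparison matrix, can be sketched as follows. The row-geometric-mean approximation and the example matrix are illustrative assumptions, not the paper's improved AHP.

```python
import math

def ahp_weights(matrix):
    """Approximate the AHP priority vector of a pairwise comparison
    matrix using row geometric means, normalized to sum to 1."""
    n = len(matrix)
    gms = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical 1-9 scale judgment: expert A is 3x as credible as expert B.
weights = ahp_weights([[1.0, 3.0],
                       [1.0 / 3.0, 1.0]])  # approximately [0.75, 0.25]
```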
Estimating Natural Recharge in a Desert Environment Facing Increasing Ground-Water Demands
NASA Astrophysics Data System (ADS)
Nishikawa, T.; Izbicki, J. A.; Hevesi, J. A.; Martin, P.
2004-12-01
Ground water historically has been the sole source of water supply for the community of Joshua Tree in the Joshua Tree ground-water subbasin of the Morongo ground-water basin in the southern Mojave Desert. Joshua Basin Water District (JBWD) supplies water to the community from the underlying Joshua Tree ground-water subbasin, and ground-water withdrawals averaging about 960 acre-ft/yr have resulted in as much as 35 ft of drawdown. As growth continues in the desert, ground-water resources may need to be supplemented using imported water. To help meet future demands, JBWD plans to construct production wells in the adjacent Copper Mountain ground-water subbasin. To manage the ground-water resources and to identify future mitigating measures, a thorough understanding of the ground-water system is needed. To this end, field and numerical techniques were applied to determine the distribution and quantity of natural recharge. Field techniques included the installation of instrumented boreholes in selected washes and at a nearby control site. Numerical techniques included the use of a distributed-parameter watershed model and a ground-water flow model. The results from the field techniques indicated that as much as 70 acre-ft/yr of water infiltrated downward through the two principal washes during the study period (2001-3). The results from the watershed model indicated that the average annual recharge in the ground-water subbasins is about 160 acre-ft/yr. The results from the calibrated ground-water flow model indicated that the average annual recharge for the same area is about 125 acre-ft/yr. Although the field and numerical techniques were applied to different scales (local vs. large), all indicate that natural recharge in the Joshua Tree area is very limited; therefore, careful management of the limited ground-water resources is needed. 
Moreover, the calibrated model can now be used to estimate the effects of different water-management strategies on the ground-water subbasins.
Space Radar Image of Karakax Valley, China 3-D
1999-04-15
This three-dimensional perspective of the remote Karakax Valley in the northern Tibetan Plateau of western China was created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are helpful to scientists because they reveal where the slopes of the valley are cut by erosion, as well as the accumulations of gravel deposits at the base of the mountains. These gravel deposits, called alluvial fans, are a common landform in desert regions that scientists are mapping in order to learn more about Earth's past climate changes. Higher up the valley side is a clear break in the slope, running straight, just below the ridge line. This is the trace of the Altyn Tagh fault, which is much longer than California's San Andreas fault. Geophysicists are studying this fault for clues to the behavior of large faults. Elevations range from 4000 m (13,100 ft) in the valley to over 6000 m (19,700 ft) at the peaks of the glaciated Kun Lun mountains running from the front right towards the back. Scale varies in this perspective view, but the area is about 20 km (12 miles) wide in the middle of the image, and there is no vertical exaggeration. http://photojournal.jpl.nasa.gov/catalog/PIA01800
Growth and yield for a 7-year-old yellow-poplar plantation in northern West Virginia
John R. Brooks
2013-01-01
Results for several major stand level variables from a 7-year-old yellow-poplar (Liriodendron tulipifera L.) plantation established in a converted pasture in northern West Virginia were summarized based on initial planting densities of 1,517 trees/ac, 972 trees/ac, 765 trees/ac, and 602 trees/ac. Stand basal area/acre at age 7 was greatest (54.7 ft...
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
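HARP's fault-tree-to-Markov-chain conversion is specific to the tool, but the general idea, solving a small continuous-time Markov chain for system unreliability rather than multiplying static gate probabilities, can be sketched under simplifying assumptions: two identical redundant components behind an AND gate, a constant failure rate, and forward-Euler integration.

```python
def parallel_pair_unreliability(lam: float, t: float, steps: int = 20000) -> float:
    """Unreliability at time t of two identical redundant components
    (an AND gate in the fault tree), via the 3-state Markov chain
    2-up -> 1-up -> failed with transition rates 2*lam and lam."""
    p2, p1, p0 = 1.0, 0.0, 0.0   # probabilities of 2-up, 1-up, failed
    dt = t / steps
    for _ in range(steps):
        flow_a = 2.0 * lam * p2 * dt   # one of the two components fails
        flow_b = lam * p1 * dt         # the remaining component fails
        p2 -= flow_a
        p1 += flow_a - flow_b
        p0 += flow_b
    return p0  # approximates the closed form (1 - exp(-lam*t))**2
```

Sequence-dependency gates are what make the Markov route necessary in general: state-based models can express "B only matters after A has failed", which a static gate product cannot.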
Fault tree analysis for urban flooding.
ten Veldhuis, J A E; Clemens, F H L R; van Gelder, P H A J M
2009-01-01
Traditional methods to evaluate flood risk generally focus on heavy storm events as the principal cause of flooding. Conversely, fault tree analysis is a technique that aims at modelling all potential causes of flooding. It quantifies both the overall flood probability and the relative contributions of individual causes of flooding. This paper presents a fault tree model for urban flooding and an application to the case of Haarlem, a city of 147,000 inhabitants. Data from a complaint register, rainfall gauges and hydrodynamic model calculations are used to quantify the probabilities of basic events in the fault tree. This results in a flood probability of 0.78/week for Haarlem. It is shown that gully pot blockages contribute to 79% of flood incidents, whereas storm events contribute only 5%. This implies that for this case more efficient gully pot cleaning is a more effective strategy to reduce flood probability than enlarging drainage system capacity. Whether this is also the most cost-effective strategy can only be decided after the risk assessment has been complemented with a quantification of the consequences of both types of events; this will be the next step in this study.
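The quantification step, combining basic-event probabilities through the tree's gates, uses standard formulas for independent events. The sketch below uses hypothetical numbers, not the Haarlem data.

```python
def or_gate(probs):
    """P(top) for an OR gate over independent basic events."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def and_gate(probs):
    """P(top) for an AND gate over independent basic events."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Hypothetical weekly probabilities for three flooding causes:
causes = {"gully_pot_blockage": 0.5,
          "storm_exceeds_capacity": 0.02,
          "sewer_pipe_blockage": 0.05}
p_flood = or_gate(causes.values())  # overall weekly flood probability
```

A cause's relative contribution can then be read off by comparing its probability (or the probability of the cut sets containing it) to the top-event probability.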
NASA Astrophysics Data System (ADS)
Koji, Yusuke; Kitamura, Yoshinobu; Kato, Yoshikiyo; Tsutsui, Yoshio; Mizoguchi, Riichiro
In conceptual design, it is important to develop functional structures that reflect knowledge gained from previous design failures. In particular, if a designer learns the possible abnormal behaviors from a previous design failure, he or she can add a function that prevents such abnormal behaviors and faults. Sharing knowledge about possible faulty phenomena and how to cope with them is therefore a crucial issue. In fact, part of such knowledge is described in FMEA (Failure Mode and Effect Analysis) sheets, in function structure models for systematic design, and in fault trees for FTA (Fault Tree Analysis).
Failure analysis of energy storage spring in automobile composite brake chamber
NASA Astrophysics Data System (ADS)
Luo, Zai; Wei, Qing; Hu, Xiaofeng
2015-02-01
This paper takes the energy storage spring of the parking brake cavity, a part of the automobile composite brake chamber, as its research object, and constructs a fault tree model of parking brake failure caused by the energy storage spring, based on the fault tree analysis method. Next, a parking brake failure model of the energy storage spring was established by analyzing the working principle of the composite brake chamber. Finally, working-load and push-rod-stroke data measured on a comprehensive valve test bed were used to validate the failure model. The experimental results show that the failure model can distinguish whether the energy storage spring has failed.
A fast bottom-up algorithm for computing the cut sets of noncoherent fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corynen, G.C.
1987-11-01
An efficient procedure for finding the cut sets of large fault trees has been developed. Designed to address coherent or noncoherent systems, dependent events, and shared or common-cause events, the method - called SHORTCUT - is based on a fast algorithm for transforming a noncoherent tree into a quasi-coherent tree (COHERE), and on a new algorithm for reducing cut sets (SUBSET). To assure sufficient clarity and precision, the procedure is discussed in the language of simple sets, which is also developed in this report. Although the new method has not yet been fully implemented on the computer, we report theoretical worst-case estimates of its computational complexity. 12 refs., 10 figs.
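The COHERE and SUBSET algorithms are specific to this report, but the core reduction step they address, discarding any cut set that contains another (the absorption law), can be sketched generically:

```python
def minimal_cut_sets(cut_sets):
    """Keep only minimal cut sets: drop any set that is a superset
    of another (absorption: A + A*B = A)."""
    candidates = sorted((frozenset(c) for c in cut_sets), key=len)
    minimal = []
    for s in candidates:
        if not any(m <= s for m in minimal):  # m <= s : m is a subset of s
            minimal.append(s)
    return minimal

# {pump} absorbs {pump, valve}; {valve, sensor} is kept.
mcs = minimal_cut_sets([{"pump"}, {"pump", "valve"}, {"valve", "sensor"}])
```

This sketch assumes a coherent tree; for noncoherent trees, as the report discusses, complemented events must first be handled (e.g., by a quasi-coherent transformation) before this kind of reduction applies.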
Electromagnetic Compatibility (EMC) in Microelectronics.
1983-02-01
[Garbled scan of the report front matter; recoverable fragments include citations to "Fault Tree Analysis", System Safety Symposium, June 8-9, 1965, Seattle: The Boeing Company, and Fussell, J.B., "Fault Tree Analysis-Concepts and...", along with contents entries on the probabilistic nature of EMC, the probabilistic approach, and the compatibility factor.]
A graphical language for reliability model generation
NASA Technical Reports Server (NTRS)
Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.
1990-01-01
A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
Noncharacteristic Slip on the Northern San Andreas Fault at the Vedanta Marsh, Marin County, CA
NASA Astrophysics Data System (ADS)
Zhang, H.; Niemi, T. M.; Allison, A.; Fumal, T. E.
2004-12-01
Three-dimensional excavations along the 1906 trace of the northern San Andreas fault at the Vedanta marsh paleoseismic site near Olema, CA have yielded new data on the timing and amount of slip during the penultimate earthquake on this fault section. The excavations exposed a 3-m-wide paleochannel that has been offset right-laterally 7.8-8.3 m by coseismic slip during the past two large earthquakes: 1906 and the penultimate earthquake. The paleochannel was eroded into a silty clay marsh deposit and was filled after AD 1400. Both the silty clay layer and the paleochannel deposit are directly overlain by an in situ burn/peat sequence. The penultimate earthquake occurred while the peat was at the ground surface whereas faulting from the 1906 earthquake terminates within an overlying gravel/fill sequence. Preliminary OxCal analyses of radiocarbon dates indicate that the penultimate earthquake occurred in the late 17th to early 18th century. In plan view, two main fault traces were mapped in the excavation. The northwestern portion of the paleochannel is offset across a single fault trace. Just southeast of this portion of the channel the fault splits into two traces. We believe that one of these traces likely slipped only during 1906 and the other trace slipped only during the penultimate earthquake. Unfortunately, the overlying stratigraphic section that could resolve the exact reconstruction of movement on these faults is missing due to the excavation of an artificial drainage ditch at this location in the 1940s. Matching the north margin of the paleochannel to the first exposure of gravel in the zone between the two fault traces gives an offset of 5 m.
Historic records show that the 1906 coseismic slip near the study site was about 5 m: field notes of David Starr Jordan (Stanford University Archives) describe two 16-ft (5 m) offsets, one of a tree located about 150 m SE of the offset channel and the other of a path to the Shafter barn located about 300 m NW. Because these two historical records are so close to the study site, it is reasonable to assume that our excavation site experienced the same amount of coseismic slip in 1906. Our data indicate that the paleochannel was offset about 2.8 to 3.3 m during the penultimate earthquake, which occurred in the late 17th to early 18th century, and that this section of the San Andreas fault is capable of slip in earthquakes smaller than the 1906 event.
NASA Astrophysics Data System (ADS)
Wu, Jianing; Yan, Shaoze; Xie, Liyang
2011-12-01
To address the impact of solar array anomalies, it is important to analyze solar array reliability. This paper establishes fault tree analysis (FTA) and fuzzy reasoning Petri net (FRPN) models of a solar array mechanical system and analyzes reliability to find the mechanisms of solar array faults. The indices final truth degree (FTD) and cosine matching function (CMF) are employed to evaluate the importance and influence of different faults. An improved reliability analysis method is thus developed based on the ranking of FTD and CMF. An example is analyzed using the proposed method. The analysis results show that the harsh thermal environment and impacts caused by particles in space are the most vital causes of solar array faults. Furthermore, other fault modes and the corresponding improvement methods are discussed. The results reported in this paper could be useful for spacecraft designers, particularly in the process of redesigning the solar array and scheduling its reliability growth plan.
Seera, Manjeevan; Lim, Chee Peng; Ishak, Dahaman; Singh, Harapajan
2012-01-01
In this paper, a novel approach to detect and classify comprehensive fault conditions of induction motors using a hybrid fuzzy min-max (FMM) neural network and classification and regression tree (CART) is proposed. The hybrid model, known as FMM-CART, exploits the advantages of both FMM and CART for undertaking data classification and rule extraction problems. A series of real experiments is conducted, whereby the motor current signature analysis method is applied to form a database comprising stator current signatures under different motor conditions. The signal harmonics from the power spectral density are extracted as discriminative input features for fault detection and classification with FMM-CART. A comprehensive list of induction motor fault conditions, viz., broken rotor bars, unbalanced voltages, stator winding faults, and eccentricity problems, has been successfully classified using FMM-CART with good accuracy rates. The results are comparable, if not better, than those reported in the literature. Useful explanatory rules in the form of a decision tree are also elicited from FMM-CART to analyze and understand different fault conditions of induction motors.
Lessel, Uta; Wellenzohn, Bernd; Fischer, J Robert; Rarey, Matthias
2012-02-27
A case study is presented illustrating the design of a focused CDK2 library. The scaffold of the library was detected by a feature trees search in a fragment space based on reactions from combinatorial chemistry. For the design the software LoFT (Library optimizer using Feature Trees) was used. The special feature called FTMatch was applied to restrict the parts of the queries where the reagents are permitted to match. This way a 3D scoring function could be simulated. Results were compared with alternative designs by GOLD docking and ROCS 3D alignments.
NASA Astrophysics Data System (ADS)
Oesterle, J.; Seward, D.; Little, T.; Stockli, D. F.; Mizera, M.
2016-12-01
Low-temperature thermochronology is a powerful tool for revealing the thermal and kinematic evolution of metamorphic core complexes (MCCs). Most globally studied MCCs are ancient, partially eroded, and have been modified by deformation events that postdate their origin. The Mai'iu Fault is a rapidly slipping active low-angle normal fault (LANF) in the Woodlark Rift in Papua New Guinea that has exhumed a >25 km-wide (in the slip direction), and over 3 km-high domal fault surface in its footwall called the Suckling-Dayman massif. Some knowledge of the present-day thermal structure in the adjacent Woodlark Rift, and the pristine nature of this active MCC make it an ideal candidate for thermochronological study of a high finite-slip LANF. To constrain the thermal and kinematic evolution of this MCC we apply the U/Pb, fission-track (FT) and (U-Th)/He methods. Zircon U/Pb analyses from the syn-extensional Suckling Granite that intrudes the footwall of the MCC yield an intrusion age of 3.3 Ma. Preliminary zircon FT ages from the same body indicate cooling below 300 °C at 2.7 Ma. Ages decrease to 2.0 Ma with increasing proximity to the Mai'iu Fault and imply cooling controlled by tectonic exhumation. Almost coincident zircon U/Pb and FT ages from the nearby syn-extensional Mai'iu Monzonite, on the other hand, record extremely rapid cooling from magmatic temperatures to 300 °C at 2 Ma. As apparent from the preliminary He extraction stage, these syn-extensional plutons have young zircon and apatite (U-Th)/He ages. These initial results suggest that the Mai'iu Fault was initiated as an extensional structure by 3.3 Ma. We infer that it reactivated an older ophiolitic suture that had emplaced the Papuan Ultramafic body in the Paleogene. Rapid cooling of the Mai'iu Monzonite indicates that it was intruded into a part of the MCC's footwall that was already shallow in the crust by 2 Ma. 
This inference is further supported by the mineral andalusite occurring in the contact aureole of the monzonite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reid, W.M.
1988-01-01
One of the newest major oil plays in the Gulf Coast basin, the Austin Chalk reportedly produces in three belts: an updip belt, where production is from fractured chalk in structurally high positions along faults above 7,000 ft; a shallow downdip belt, where the chalk is uniformly saturated with oil from 7,000 to 9,000 ft; and a deeper downdip belt saturated with gas and condensate below 9,000 ft. The updip fields usually occur on the southeastern, upthrown side of the Luling, Mexia, and Charlotte fault zones. Production is from fractures that connect the relatively sparse matrix pores with more permeable fracture systems. The fractures resulted from regional extensional stress during the opening of the Gulf Coast basin on the divergent margin of the North American plate during the Laramide orogeny. The fractures are more common in the more brittle chalk than in the overlying Navarro and underlying Eagle Ford shales, which are less brittle. The oil in the updip traps in the chalk may have been generated in place downdip, and migrated updip along the extension fractures into the updip traps during or after the Laramide orogeny.
2016-08-16
[Garbled scan of the report front matter for AFRL-RV-PS-TR-2016-0112, "Specification, Synthesis, and Verification of Software-Based Control Protocols for Fault-Tolerant...", Air Force Research Laboratory Space Vehicles Directorate (AFRL/RVSV), Kirtland AFB, NM 87117-5776.]
C. Villari; R.A. Sniezko; L.E. Rodriguez-Saona; P. Bonello
2017-01-01
A strong focus on tree germplasm that can resist threats such as non-native insects and pathogens, or a changing climate, is fundamental for successful genetic conservation efforts. However, the unavailability of tools for rapid screening of tree germplasm for resistance to critical pathogens and insect pests is becoming an increasingly serious bottleneck. Here we...
Impact of product mix and markets on the economic feasibility of hardwood thinning
John E. Baumgras; Chris B. LeDoux
1989-01-01
Results demonstrate how the economic feasibility of commercial hardwood thinning is impacted by tree diameter, product mix, and primary product markets. These results indicate that multiproduct harvesting can increase revenues by $0.01/ft³ to $0.32/ft³; and that small shifts in price levels or haul distance can postpone commercial thinning...
Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan
2014-03-01
Although reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in the fault diagnosis of an induction motor when the appropriate pre-processing is not performed. Therefore, a new boundary-analysis-based feature extraction method in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bar and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset.
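The phase space construction underlying this approach is a standard time-delay embedding of the single measured current signal. A minimal sketch follows; the embedding dimension and delay are chosen arbitrarily here, not taken from the paper.

```python
def delay_embed(signal, dim=2, tau=1):
    """Time-delay embedding: map a 1-D signal x[t] to phase-space points
    (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(signal) - (dim - 1) * tau
    return [tuple(signal[i + j * tau] for j in range(dim)) for i in range(n)]

# A 2-D embedding of a short current-sample sequence:
trajectory = delay_embed([0.0, 0.7, 1.0, 0.7, 0.0, -0.7], dim=2, tau=1)
```

Each 2-D point could then be rasterized into an image whose boundary is traced, as the abstract describes.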
Enomoto, Catherine B.; Olea, Ricardo A.; Coleman, James L.
2014-01-01
The Middle Devonian Marcellus Shale in the Appalachian basin extends from central Ohio on the west to eastern New York on the east, and from north-central New York on the north to northern Tennessee on the south. Its thickness ranges from 0 feet (ft) where it pinches out to the west to as much as 700 ft in its eastern extent. Within the Broadtop synclinorium, the thickness of the Marcellus Shale ranges from 250 to 565 ft. Although stratigraphic complexities have been documented, a significant range in thickness most likely is because of tectonic thickening from folds and thrust faults. Outcrop studies in the Valley and Ridge and Appalachian Plateaus provinces illustrate the challenges of interpreting the relation of third-order faults, folds, and “disturbed” zones to the regional tectonic framework. Recent field work within the Valley and Ridge province determined that significant faulting and intraformational deformation are present within the Marcellus Shale at the outcrop scale. In an attempt to determine if this scale of deformation is detectable with conventional wireline logs, petrophysical properties (primarily mineralogy and porosity) were measured by interpretation of gamma-ray and bulk-density logs. The results of performing a statistical correlation of wireline logs from nine wells indicated that there are discontinuities within the Millboro Shale (undifferentiated Marcellus Shale and Mahantango Formation) where there are significant thickness differences between wells. Also, some intervals likely contain mineralogy that makes these zones more prone to layer-shortening cleavage duplexes. The Correlator program proved to be a useful tool in a region of contractional deformation.
The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters
NASA Technical Reports Server (NTRS)
Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)
1998-01-01
We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while tree uplinks remain expensive.
1994-04-05
[Fragmented text from a scanned report. Recoverable fragments: mast tops camouflaged with tree branches to hide their position from artillerymen in Fort Jackson (source: Print Room, New York...); proposed bankline preparation, including tree removal and grading, at a point 52 m (170 ft) riverward from the river centerline; and a note that another species outgrows the black willow and becomes the dominant tree, except where frequent and extended flooding during the growing season covers the trees and limits growth.]
Mean wind speed below building height in residential neighborhoods with different tree densities
G.M. Heisler
1990-01-01
There is little available knowledge of the absolute or relative effects of trees and buildings on wind at or below building height in residential neighborhoods. In this study, mean wind speed was measured at a height of 6.6 ft (2 m) in neighborhoods of single-family houses. Building densities ranged between 6% and 12% of the land area, and tree-cover densities were...
The 1992 Landers earthquake sequence; seismological observations
Egill Hauksson,; Jones, Lucile M.; Hutton, Kate; Eberhart-Phillips, Donna
1993-01-01
The (MW6.1, 7.3, 6.2) 1992 Landers earthquakes began on April 23 with the MW6.1 1992 Joshua Tree preshock and form the most substantial earthquake sequence to occur in California in the last 40 years. This sequence ruptured almost 100 km of both surficial and concealed faults and caused aftershocks over an area 100 km wide by 180 km long. The faulting was predominantly strike slip and three main events in the sequence had unilateral rupture to the north away from the San Andreas fault. The MW6.1 Joshua Tree preshock at 33°N58′ and 116°W19′ on 0451 UT April 23 was preceded by a tightly clustered foreshock sequence (M≤4.6) beginning 2 hours before the mainshock and followed by a large aftershock sequence with more than 6000 aftershocks. The aftershocks extended along a northerly trend from about 10 km north of the San Andreas fault, northwest of Indio, to the east-striking Pinto Mountain fault. The Mw7.3 Landers mainshock occurred at 34°N13′ and 116°W26′ at 1158 UT, June 28, 1992, and was preceded for 12 hours by 25 small M≤3 earthquakes at the mainshock epicenter. The distribution of more than 20,000 aftershocks, analyzed in this study, and short-period focal mechanisms illuminate a complex sequence of faulting. The aftershocks extend 60 km to the north of the mainshock epicenter along a system of at least five different surficial faults, and 40 km to the south, crossing the Pinto Mountain fault through the Joshua Tree aftershock zone towards the San Andreas fault near Indio. The rupture initiated in the depth range of 3–6 km, similar to previous M∼5 earthquakes in the region, although the maximum depth of aftershocks is about 15 km. The mainshock focal mechanism showed right-lateral strike-slip faulting with a strike of N10°W on an almost vertical fault. The rupture formed an arclike zone well defined by both surficial faulting and aftershocks, with more westerly faulting to the north. 
This change in strike is accomplished by jumping across dilational jogs connecting surficial faults with strikes rotated progressively to the west. A 20-km-long linear cluster of aftershocks occurred 10–20 km north of Barstow, or 30–40 km north of the end of the mainshock rupture. The most prominent off-fault aftershock cluster occurred 30 km to the west of the Landers mainshock. The largest aftershock was within this cluster, the Mw6.2 Big Bear aftershock occurring at 34°N10′ and 116°W49′ at 1505 UT June 28. It exhibited left-lateral strike-slip faulting on a northeast striking and steeply dipping plane. The Big Bear aftershocks form a linear trend extending 20 km to the northeast with a scattered distribution to the north. The Landers mainshock occurred near the southernmost extent of the Eastern California Shear Zone, an 80-km-wide, more than 400-km-long zone of deformation. This zone extends into the Death Valley region and accommodates about 10 to 20% of the plate motion between the Pacific and North American plates. The Joshua Tree preshock, its aftershocks, and Landers aftershocks form a previously missing link that connects the Eastern California Shear Zone to the southern San Andreas fault.
Mt. St. Helens, Mt. Adams, and Mt. Rainier, WA, USA
NASA Technical Reports Server (NTRS)
1992-01-01
This view of Mt. St. Helens (46.5N, 122.0W), taken 12 years after the volcanic eruption of 18 May 1980, in which the top 1,300 ft. of the 9,677-ft. mountain was blown away, shows the rapid vegetation recovery within the blast area. Many fir trees have grown to heights of 20 ft. within the 150-square-mile devastated area. Mt. Adams, an extinct volcano, is just to the west and Mt. Rainier is to the north. Checkerboard logging can be seen throughout.
Structural evolution of Grand Lake field, Cameron Parish, Louisiana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johanson, D.B.
Detailed analysis of sedimentary thicknesses at Grand Lake field has revealed that hydrocarbon accumulation was controlled by faulting that was related to diapiric uplift of shale. Grand Lake field is located in the northeastern corner of Cameron Parish, Louisiana. This area contains about 12,000 ft of Miocene and younger fluviodeltaic sediments. Structurally, the field is a northwest-trending anticline. Diapiric shale in the western part of the field may be salt related although, to date, no salt has been penetrated. A major down-to-the-south regional growth fault crosses the top of the structure, striking roughly northwest. Several down-to-the-north faults are antithetic to this master fault. Second- and third-generation antithetic faults also are present in the field. Diapiric uplift in Grand Lake field was initiated in the early Miocene by an influx of relatively heavy deltaic sands onto undercompacted shales. The master fault in the field formed almost immediately after the onset of uplift, and movement was essentially uninterrupted until the Pliocene-Pleistocene.
Early Tertiary exhumation of the flank of a forearc basin, southwest Talkeetna Mountains, Alaska
Bleick, Heather A.; Till, Alison B.; Bradley, Dwight C.; O’Sullivan, Paul; Wooden, Joe L.; Bradley, Dan B.; Taylor, Theresa A.; Friedman, Sam B.; Hults, Chad P.
2012-01-01
New geochronologic and thermochronologic data from rocks near Hatcher Pass, southwest Talkeetna Mountains, Alaska, record earliest Paleocene erosional and structural exhumation on the flank of the active Cook Inlet forearc basin. Cretaceous plutons shed sediments to the south, forming the Paleocene Arkose Ridge Formation. A Paleocene(?)-Eocene detachment fault juxtaposed ~60 Ma metamorphic rocks with the base of the Arkose Ridge Formation. U-Pb (analyzed by Sensitive High Resolution Ion Micro Probe Reverse Geometry (SHRIMP-RG)) zircon ages of the Cretaceous plutons, more diverse than previously documented, are 90.3±0.3 (previously considered a Jurassic unit), 79.1±1.0, 76.1±0.9, 75.8±0.7, 72.5±0.4, 71.9±0.3, 70.5±0.2, and 67.3±0.2 Ma. The cooling of these plutons occurred between 72 and 66 Ma (zircon fission track (FT) closure ~225°C). 40Ar/39Ar analyses of hornblende, white mica, and biotite fall into this range (Harlan and others, 2003). New apatite FT data collected on a west-to-east transect reveal sequential exhumation of fault blocks at 62.8±2.9, 54±2.5, 52.6±2.8, and 44.4±2.2 Ma. Plutonic clasts accumulated in the Paleocene Arkose Ridge Formation to the south. Detrital zircon (DZ) ages from the formation reflect this provenance: a new sample yielded one grain at 61 Ma, a dominant peak at 76 Ma, and minor peaks at 70, 80, 88, and 92 Ma. The oldest zircon is 181 Ma. Our apatite FT ages range from 35.1 to 50.9 Ma. Greenschist facies rocks now sit structurally between the plutonic rocks and the Arkose Ridge Formation. They are separated from plutonic rocks by the vertical Hatcher Pass fault and from the sedimentary rocks by a detachment fault. Ar cooling ages (Harlan and others, 2003) and new zircon FT ages for these rocks are concordant at 61-57 Ma, synchronous with deposition of the Arkose Ridge Formation. A cooling age of ~46 Ma came from one apatite FT sample. 
The metamorphic protolith (previously considered Jurassic) was deposited at or after 75 Ma based on new DZ data. The probability curve has a major peak from 76 to 102 Ma, minor peaks at 186, 197, 213, 303, 346, and 1,828, and two discordant grains at ~2,700 Ma. This is similar to DZ populations in the Valdez Group. The short period of time between deposition, metamorphism, and exhumation are consistent with metamorphism in a subduction-zone setting. Ductile and brittle structures in the metamorphic rocks are consistent with exhumation in a transtensional setting.
Reliability analysis of the solar array based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Jianing, Wu; Shaoze, Yan
2011-07-01
The solar array is an important device used in spacecraft, influencing the quality of in-orbit operation and even the success of launches. This paper analyzes the reliability of the mechanical system and identifies the most critical subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structure importance (SI) of the hinge's FTA model, several fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is significant for preventing faults. Furthermore, recommendations for improving reliability through damage limitation are discussed, which can be used for the redesign of the solar array and reliability growth planning.
Fault tree safety analysis of a large Li/SOCl(sub)2 spacecraft battery
NASA Technical Reports Server (NTRS)
Uy, O. Manuel; Maurer, R. H.
1987-01-01
The results of the safety fault tree analysis of the eight-module, 576 F-cell Li/SOCl2 battery on the spacecraft, and in the integration and test environment prior to launch on the ground, are presented. The analysis showed that with the right combination of blocking diodes, electrical fuses, thermal fuses, thermal switches, cell balance, cell vents, and battery module vents, the probability of a single cell or a 72-cell module exploding can be reduced to 0.000001, essentially the residual probability of explosion for unexplained reasons.
Basidiomycetes Associated with Decay of Living Oak Trees
Frederick H. Berry; Frances F. Lombard
1978-01-01
Thirty-one identified species of wood-rotting hymenomycetes were associated with decay and cull in upland oak stands in Illinois, Indiana, Kentucky, Missouri, and Ohio. Seven of these species produced brown rots that accounted for a volume loss of approximately 381 ft³ in the trees sampled. The remaining species produced white rots that were...
Analysis of yellow "fat" deposits on Inuit boots.
Edwards, Howell G M; Stern, Ben; Burgio, Lucia; Kite, Marion
2009-08-01
Irregular residues of a yellow deposit that was assumed to be seal fat used for waterproofing were observed in the creases of the outer surface of a pair of Inuit boots from Arctic Canada. A sample of this deposit detached from one of these areas on these boots was examined initially by FT-Raman microscopy, from which interesting and rather surprising results demanded further analysis using FT-IR and GC-MS. The non-destructive Raman spectroscopic analysis yielded spectra which indicated the presence of a tree resin from the Pinaceae sp. The Raman spectra were also characteristic of a well-preserved keratotic protein and indicative of adherent skin. Subsequent FT-IR spectroscopic analysis supported the attribution of a Pinaceae resin to the yellow deposit. GC-MS analysis of the same deposits identified the presence of pimaric, sandaracopimaric, dehydroabietic and abietic acids, all indicative of an aged Pinaceae resin. These results confirmed that the Inuit people had access to tree resins which they probably used as a waterproofing agent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
C.J.Lewis; A.Lavine; S.L.Reneau
2002-12-01
We present data that elucidate the stratigraphy, geomorphology, and structure in the western part of Los Alamos National Laboratory between Technical Areas 3 and 16 (TA-3 and TA-16). Data include those gathered by geologic mapping of surficial, post-Bandelier Tuff strata, conventional and high-precision geologic mapping and geochemical analysis of cooling units within the Bandelier Tuff, logging of boreholes and a gas pipeline trench, and structural analysis using profiles, cross sections, structure contour maps, and stereographic projections. This work contributes to an improved understanding of the paleoseismic and geomorphic history of the area, which will aid in future seismic hazard evaluations and other investigations. The study area lies at the base of the main, 120-m (400-ft) high escarpment formed by the Pajarito fault, an active fault of the Rio Grande rift that bounds Los Alamos National Laboratory on the west. Subsidiary fracturing, faulting, and folding associated with the Pajarito fault zone extends at least 1,500 m (5,000 ft) to the east of the main Pajarito fault escarpment. Stratigraphic units in the study area include upper units of the Tshirege Member of the early Pleistocene Bandelier Tuff, early Pleistocene alluvial fan deposits that predate incision of canyons on this part of the Pajarito Plateau, and younger Pleistocene and Holocene alluvium and colluvium that postdate drainage incision. 
We discriminate four sets of structures in the area between TA-3 and TA-16: (a) north-striking faults and folds that mark the main zone of deformation, including a graben in the central part of the study area; (b) north-northwest-striking fractures and rare faults that bound the eastern side of the principal zone of deformation and may be the surface expression of deep-seated faulting; (c) rare northeast-striking structures near the northern limit of the area associated with the southern end of the Rendija Canyon fault; and (d) several small east-west-striking faults. We consider all structures to be Quaternary in that they postdate the Tshirege Member (1.22 million years old) of the Bandelier Tuff. Older mesa-top alluvial deposits (Qoal), which may have a large age range but are probably in part about 1.13 million years old, are clearly faulted or deformed by many structures. At two localities, younger alluvial units (Qfo and Qfi) appear to be truncated by faults, but field relations are obscure, and we cannot confirm the presence of fault contacts. The youngest known faulting in the study area occurred in Holocene time on a down-to-the-west fault, recently trenched at the site of a new LANL Emergency Operations Center (Reneau et al. 2002).
James B. Baker; Michael G. Shelton
1998-01-01
Development of 86 intermediate and suppressed loblolly pine (Pinus taeda L.) trees, that had been recently released from overtopping pines and hardwoods, was monitored over a 15 year period. The trees were growing in natural stands on good sites (site index = 90 ft at 50 years) that had been recently cut to stocking levels ranging from 10 to 50 percent. At time of...
Influence of managed pine stands and mixed pine/hardwood stands on well being of deer
Lowell K. Halls; Charles E. Boyd
1982-01-01
A 172-acre enclosure where all the hardwood trees were removed or deadened and a 167-acre enclosure where hardwoods comprised 25 percent of the tree basal area were each stocked in 1965 with 3 white-tailed deer (1 buck and 2 does). In 1963, before any timber cutting practices were imposed, tree basal area averaged 111 sq. ft. per acre and forage yields averaged 260...
NASA Astrophysics Data System (ADS)
Liang, B.; Iwnicki, S. D.; Zhao, Y.
2013-08-01
The power spectrum is defined as the square of the magnitude of the Fourier transform (FT) of a signal. The advantage of FT analysis is that it allows the decomposition of a signal into individual periodic frequency components and establishes the relative intensity of each component. It is the most commonly used signal processing technique today. If the same principle is applied to detect periodicity components within a Fourier spectrum itself, the process is called cepstrum analysis. Cepstrum analysis is a very useful tool for detecting families of harmonics with uniform spacing, or the families of sidebands commonly found in gearbox, bearing, and engine vibration fault spectra. Higher-order spectra (HOS), also known as polyspectra, consist of higher-order moments of spectra and are able to detect non-linear interactions between frequency components. The most commonly used HOS is the bispectrum, a third-order frequency-domain measure that contains information which standard power spectral analysis techniques cannot provide. It is well known that neural networks can represent complex non-linear relationships, and they are therefore extremely useful for fault identification and classification. This paper presents an application of the power spectrum, cepstrum, bispectrum, and neural networks to fault pattern extraction for induction motors. The potential for using these techniques to differentiate between healthy and faulty induction motor operation is examined. A series of experiments was carried out, and the relative advantages and disadvantages of the methods are discussed. It was found that a combination of power spectrum, cepstrum, and bispectrum analyses plus a neural network could be a very useful tool for condition monitoring and fault diagnosis of induction motors.
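The power spectrum and cepstrum steps described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the sampling rate and harmonic "gear" signal below are invented for the example:

```python
import numpy as np

def power_spectrum(x, fs):
    """One-sided power spectrum |FT(x)|^2 / n of a real signal."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

def real_cepstrum(x):
    """Real cepstrum: inverse FT of the log magnitude spectrum.
    Peaks at a given quefrency reveal uniformly spaced harmonics
    or sidebands in the spectrum itself."""
    spectrum = np.abs(np.fft.fft(x))
    return np.fft.ifft(np.log(spectrum + 1e-12)).real  # small offset avoids log(0)

# Synthetic signal: a 50 Hz fundamental with two harmonics,
# sampled at 1 kHz for 1 s (values chosen purely for illustration)
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
x = sum(np.sin(2 * np.pi * 50 * k * t) / k for k in (1, 2, 3))

freqs, spec = power_spectrum(x, fs)
peak_freq = freqs[np.argmax(spec)]   # dominant component, here the fundamental
ceps = real_cepstrum(x)
```

The cepstrum of such a signal concentrates the uniformly spaced harmonic family near a single quefrency, which is why it suits gearbox sideband detection.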
Tractor-logging costs and production in old-growth redwood forests
Kenneth N. Boe
1963-01-01
A cost accounting analysis of full-scale logging operations in old-growth redwood during 2 years revealed that it cost $12.24 per M bd. ft. (gross Scribner log scale) to get logs on trucks. Road development costs averaged another $5.19 per M bd. ft. Felling-bucking production was calculated by average tree d.b.h. Both skidding and loading outputs per hour were...
Survival and growth of planted northern red oak in northern West Virginia
Charles A. McNeel; David M. Hix; Edwin C. Townsend
1993-01-01
The survival and growth of northern red oak (Quercus rubra L.) seedlings planted beneath a shelterwood in northern West Virginia were evaluated one year after planting. The use of 1.5 m (5 ft) tall TUBEX tree shelters on planted seedlings was also examined. The study was conducted on both excellent and good sites (site indices of 27 m (89 ft) and 22...
Paul A. Murphy; Michael G. Shelton
1994-01-01
The effects of three levels of residual basal area (40, 60, and 80 ft²/ac), maximum dbh (12, 16, and 20 in.), and site index (90 ft) on the growth of loblolly pine (Pinus taeda L.) stands after 5 yr of uneven-aged silviculture were determined from plots located in the south Arkansas and north...
NASA Astrophysics Data System (ADS)
LI, Y.; Yang, S. H.
2017-05-01
The Antarctic astronomical telescopes operate year-round at the unattended South Pole and have only one maintenance opportunity each year. Due to the complexity of their optical, mechanical, and electrical systems, the telescopes are hard to maintain and require multi-skilled expedition teams, which means particular attention to the reliability of the Antarctic telescopes is essential. Based on the fault mechanism and fault modes of the main-axis control system of the equatorial Antarctic astronomical telescope AST3-3 (Antarctic Schmidt Telescopes 3-3), the method of fault tree analysis is introduced in this article, and we obtain the importance degree of the top event from the structure importance degrees of the bottom events. From these results, hidden problems and weak links can be effectively found, indicating directions for improving the stability of the system and optimizing its design.
Fault tree analysis of most common rolling bearing tribological failures
NASA Astrophysics Data System (ADS)
Vencl, Aleksandar; Gašić, Vlada; Stojanović, Blaža
2017-02-01
Wear as a tribological process has a major influence on the reliability and life of rolling bearings. Field examinations of bearing failures due to wear indicate possible causes and point to the measures necessary for wear reduction or elimination. Wear itself is a very complex process initiated by the action of different mechanisms, and it can be manifested by different wear types which are often related. However, the dominant type of wear can usually be determined approximately. The paper presents a classification of the most common bearing damages according to the dominant wear type, i.e. abrasive wear, adhesive wear, surface fatigue wear, erosive wear, fretting wear and corrosive wear. The wear types are correlated with the terms used in the ISO 15243 standard. Each wear type is illustrated with an appropriate photograph, and for each wear type an appropriate description of causes and manifestations is presented. Possible causes of rolling bearing failure are used for the fault tree analysis (FTA), which was performed to determine the root causes of bearing failures. The constructed fault tree diagram for rolling bearing failure can be a useful tool for maintenance engineers.
Renjith, V R; Madhu, G; Nayagam, V Lakshmana Gomathi; Bhasi, A B
2010-11-15
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of these hazards related to chemical industries. Fault tree analysis (FTA) is an established technique in hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from the storage and filling facility of a chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and a proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
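When basic-event probabilities are known and the events are assumed independent, quantitative FTA of the kind described above reduces to combining probabilities through AND and OR gates. A minimal sketch with hypothetical basic-event probabilities (illustrative only, not taken from the paper):

```python
from math import prod

def and_gate(ps):
    """P(all basic events occur), assuming independent events."""
    return prod(ps)

def or_gate(ps):
    """P(at least one event occurs), assuming independent events."""
    return 1.0 - prod(1.0 - p for p in ps)

# Hypothetical basic-event probabilities (invented for illustration)
p_valve_fail   = 0.01
p_gasket_leak  = 0.02
p_operator_err = 0.05
p_alarm_fail   = 0.10

# Toy top event: a release occurs if any leak path opens (OR gate)
# AND the mitigation/alarm fails (AND gate)
p_leak = or_gate([p_valve_fail, p_gasket_leak, p_operator_err])
p_release = and_gate([p_leak, p_alarm_fail])
```

Sensitivity analysis of the kind the paper mentions can then be done by perturbing each basic-event probability in turn and observing the change in the top-event probability.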
NASA Technical Reports Server (NTRS)
Graham, Gary Thomas
2014-01-01
Tree fruit, although desirable from a crew nutrition and menu diversity perspective, have long been dismissed as candidate crops based on their long juvenile phase, large architecture, low short-term harvest index, and dormancy requirements. Recent developments in Rapid Cycle Crop Breeding (RCCB) have overcome these historical limitations, opening the door to a new era in candidate crop research. Researchers at the United States Department of Agriculture (USDA) have developed FT-construct (Flowering Locus T) dwarf plum lines that have a very short juvenile phase, vine-like architecture, and no obligate dormancy period. In a collaborative research effort, NASA and the USDA are evaluating the performance of these FT-lines under controlled environment conditions relevant to spaceflight.
Fault Detection/Isolation Verification
1982-08-01
Development and validation of techniques for improving software dependability
NASA Technical Reports Server (NTRS)
Knight, John C.
1992-01-01
A collection of document abstracts is presented on the topic of improving software dependability through NASA grant NAG-1-1123. Specific topics include: modeling of error detection; software inspection; test cases; Magnetic Stereotaxis System safety specifications and fault trees; and injection of synthetic faults into software.
Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie
2006-01-01
A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk-driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exist. The approach is scalable, allowing inclusion of additional information as detailed data becomes available. 
The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
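The Monte Carlo approach mentioned above can be illustrated with a toy series system: sample each subsystem's failure independently per trial and count surviving missions. The subsystem names and failure probabilities below are invented stand-ins, not SAFE data:

```python
import random

random.seed(42)  # reproducible trials

# Hypothetical per-mission failure probabilities for three subsystems
subsystems = {"engine": 0.02, "avionics": 0.01, "separation": 0.005}

def mission_succeeds():
    """One Monte Carlo trial: the mission fails if any subsystem fails."""
    return all(random.random() > p for p in subsystems.values())

trials = 100_000
successes = sum(mission_succeeds() for _ in range(trials))
reliability = successes / trials

# Analytic series-system reliability for comparison
analytic = (1 - 0.02) * (1 - 0.01) * (1 - 0.005)
```

The value of the sampling formulation over the closed form is that time-dependent or correlated failure behavior can be added per trial without changing the estimator.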
Survey of critical failure events in on-chip interconnect by fault tree analysis
NASA Astrophysics Data System (ADS)
Yokogawa, Shinji; Kunii, Kyousuke
2018-07-01
In this paper, a framework based on reliability physics is proposed for applying fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the failure possibilities of basic events, critical issues of on-chip interconnect reliability are evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The moisture impact is evaluated as an external event.
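Minimal cut set identification, the FTA step named above, can be sketched as a bottom-up expansion of the gate structure followed by a minimization pass. The tiny tree and interconnect-flavored event names below are hypothetical, not the paper's model:

```python
from itertools import product

# A small hypothetical fault tree: gates map to (type, children);
# anything not in the dict is a basic event.
tree = {
    "TOP": ("OR",  ["G1", "moisture_ingress"]),
    "G1":  ("AND", ["via_void", "G2"]),
    "G2":  ("OR",  ["em_stress", "moisture_ingress"]),
}

def cut_sets(node):
    """Return the cut sets of `node` as a set of frozensets of basic events."""
    if node not in tree:                      # basic event
        return {frozenset([node])}
    kind, children = tree[node]
    child_sets = [cut_sets(c) for c in children]
    if kind == "OR":                          # union of children's cut sets
        return set().union(*child_sets)
    # AND gate: merge one cut set from each child (cross-product)
    return {frozenset().union(*combo) for combo in product(*child_sets)}

def minimal(sets):
    """Drop any cut set that strictly contains another (minimization)."""
    return {s for s in sets if not any(t < s for t in sets)}

mcs = minimal(cut_sets("TOP"))
# Here {moisture_ingress} alone is a minimal cut set, so the larger set
# {via_void, moisture_ingress} produced by G1 is discarded as non-minimal.
```

Risk prioritization then amounts to ranking the surviving cut sets, for example by the product of their basic-event probabilities.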
Designated fiber stress for wood poles
Ronald W. Wolfe; Robert O. Kluge
2005-01-01
Wood poles have been used to support utility distribution lines for well over 100 years. Over that time, specifications for a “wood utility pole” have evolved from the closest available tree stem more than 15 ft in length to straight, durable timbers of lengths ranging up to 125 ft and base diameters of as much as 27 in. The continued success of wood poles in this...
Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-07-12
As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction is generated in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering contents. Therefore, identifying the critical fault's characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experimental results for gear fault recognition show the feasibility and effectiveness of the proposed method, especially for the gear's weak fault features.
Foundation Report on Stonewall Jackson Dam, West Fork River Basin, Weston, West Virginia. Volume 1.
1987-12-21
Distribution and extent of tree mortality in North Central hardwood forests
J. Michael Vasievich; Sharon L. Hobrla; Mark H. Hansen
1997-01-01
Forest inventory data shows that biophysical agents and human causes account for annual losses of more than a half-billion ft³ of timber in North Central hardwood forests. This paper reports on an analysis of forest inventory data to determine the extent and distribution of tree mortality in four forest types - Aspen-Birch, Elm-Ash-Cottonwood, Maple-Beech-...
Logging costs and cutting methods in young-growth ponderosa pine in California
Philip M. McDonald; William A. Atkinson; Dale O. Hall
1969-01-01
Mixed-conifer stands at the Challenge Experimental Forest, Calif., were cut to four specifications: seed-tree, group selection, single tree selection, and clearcut. Logging costs and production rates were compared and evaluated. Cutting method had little effect on felling or skidding production; felling ranged from 1,802 to 2,019 bd ft per hour, and skidding from 3,138...
Fifty-year development of Douglas-fir stands planted at various spacings.
Donald L. Reukema
1979-01-01
A 51-yr record of observations in stands planted at six spacings, ranging from 4 to 12 ft, illustrates clearly the beneficial effects of wide initial spacing and the detrimental effects of carrying too many trees relative to the size to which they will be grown. Not only are trees larger, but yields per acre are greater at wide spacings.
Natural decomposition of hornbeam wood decayed by the white rot fungus Trametes versicolor.
Karim, Maryam; Daryaei, Mehrdad Ghodskhah; Torkaman, Javad; Oladi, Reza; Ghanbary, Mohammad Ali Tajick; Bari, Ehsan; Yilgor, Nural
2017-01-01
The impacts of white-rot fungi on altering wood chemistry have been studied mostly in vitro. However, in vivo approaches may enable better assessment of the nature of interactions between saprotrophic fungi and host trees in nature. Hence, decayed and sound wood samples were collected from a naturally infected tree (Carpinus betulus L.). Fruiting bodies of the white rot fungus Trametes versicolor grown on the same tree were identified using rDNA ITS sequencing. Chemical compositions (cellulose and lignin) of both sound and infected wood were studied. FT-IR spectroscopy was used to collect spectra of decayed and un-decayed wood samples. The results of the chemical analyses indicated that T. versicolor reduced cellulose and lignin in similar quantities. Fungal activity in the decayed wood caused a serious decline in pH. The amount of alcohol-benzene soluble extractives was severely decreased, while remarkable increases were found in the 1% sodium hydroxide soluble and hot water extractive contents of the decayed wood samples. FT-IR analyses demonstrated that T. versicolor causes a simultaneous white rot in the hornbeam tree in vivo, which is in line with in vitro experiments.
Analysis of a hardware and software fault tolerant processor for critical applications
NASA Technical Reports Server (NTRS)
Dugan, Joanne B.
1993-01-01
Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
Determining preventability of pediatric readmissions using fault tree analysis.
Jonas, Jennifer A; Devon, Erin Pete; Ronan, Jeanine C; Ng, Sonia C; Owusu-McKenzie, Jacqueline Y; Strausbaugh, Janet T; Fieldston, Evan S; Hart, Jessica K
2016-05-01
Previous studies attempting to distinguish preventable from nonpreventable readmissions reported challenges in completing reviews efficiently and consistently. (1) Examine the efficiency and reliability of a Web-based fault tree tool designed to guide physicians through chart reviews to a determination about preventability. (2) Investigate root causes of general pediatrics readmissions and identify the percent that are preventable. General pediatricians from The Children's Hospital of Philadelphia used a Web-based fault tree tool to classify root causes of all general pediatrics 15-day readmissions in 2014. The tool guided reviewers through a logical progression of questions, which resulted in 1 of 18 root causes of readmission, 8 of which were considered potentially preventable. Twenty percent of cases were cross-checked to measure inter-rater reliability. Of the 7252 discharges, 248 were readmitted, for an all-cause general pediatrics 15-day readmission rate of 3.4%. Of those readmissions, 15 (6.0%) were deemed potentially preventable, corresponding to 0.2% of total discharges. The most common cause of potentially preventable readmissions was premature discharge. For the 50 cross-checked cases, both reviews resulted in the same root cause for 44 (86%) of files (κ = 0.79; 95% confidence interval: 0.60-0.98). Completing 1 review using the tool took approximately 20 minutes. The Web-based fault tree tool helped physicians to identify root causes of hospital readmissions and classify them as either preventable or not preventable in an efficient and consistent way. It also confirmed that only a small percentage of general pediatrics 15-day readmissions are potentially preventable. Journal of Hospital Medicine 2016;11:329-335. © 2016 Society of Hospital Medicine.
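The reported inter-rater agreement (κ = 0.79) is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal sketch on synthetic reviewer labels (invented for illustration, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    # Chance agreement from each rater's marginal label frequencies
    expected = sum(freq_a[lbl] * freq_b[lbl] for lbl in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Synthetic root-cause labels for 10 cross-checked charts
a = ["premature", "premature", "med_error", "new_dx", "new_dx",
     "premature", "med_error", "new_dx", "premature", "med_error"]
b = ["premature", "premature", "med_error", "new_dx", "med_error",
     "premature", "med_error", "new_dx", "premature", "new_dx"]
kappa = cohens_kappa(a, b)  # raw agreement 0.80, chance-corrected below it
```

Kappa is always below raw agreement when chance agreement is nonzero, which is why studies like this one report it alongside the percent of matching reviews.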
Risk Analysis of Return Support Material on Gas Compressor Platform Project
NASA Astrophysics Data System (ADS)
Silvianita; Aulia, B. U.; Khakim, M. L. N.; Rosyid, Daniel M.
2017-07-01
Fixed platform projects are carried out not by a single contractor but by two or more contractors. Cooperation in the construction of fixed platforms often does not go according to plan, for several reasons, so good synergy between the contractors is needed to avoid miscommunication that can cause problems on the project. One example concerns the support material (sea fastening, skid shoes and shipping supports) used when sending a jacket structure to its operating location, which often is not returned to the contractor. A systematic method is needed to overcome this support material problem. This paper analyses the causes and effects of support material not being returned on the Gas Compressor Platform project, using Fault Tree Analysis (FTA) and Event Tree Analysis (ETA). From the fault tree analysis, the probability of the top event is 0.7783. From the event tree analysis diagram, the contractors lose between Rp 350,000,000 and Rp 10,000,000,000.
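The top-event probability reported above is the output of standard fault tree gate arithmetic: assuming independent basic events, an AND gate multiplies probabilities, while an OR gate takes 1 minus the product of the complements. A minimal sketch with hypothetical event probabilities (the paper's actual tree and numbers are not reproduced here):

```python
def and_gate(probs):
    """All inputs must fail: multiply probabilities (independence assumed)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """Any input failing suffices: 1 - product of survival probabilities."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical tree: TOP = OR(miscommunication, AND(no_contract_clause, no_tracking))
p_top = or_gate([0.3, and_gate([0.5, 0.4])])
print(round(p_top, 3))
```

Nested calls evaluate an arbitrary AND/OR tree bottom-up, which is how a top-event probability such as 0.7783 would be assembled from basic-event data.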
Use of ground-based radiometers for L-Band Freeze/Thaw retrieval in a boreal forest site
NASA Astrophysics Data System (ADS)
Roy, A.; Sonnentag, O.; Derksen, C.; Toose, P.; Pappas, C.; Mavrovic, A.; El Amine, M.; Royer, A.; Berg, A. A.; Rowlandson, T. L.; Barr, A.; Black, T. A.
2017-12-01
The boreal forest is the second largest land biome in the world and thus plays a major role in the global and regional climate systems. The extent, timing and duration of the seasonal freeze/thaw (F/T) state influences vegetation developmental stages (phenology) and, consequently, constitutes an important control on how boreal forest ecosystems exchange carbon, water and energy with the atmosphere. Recently, new L-Band satellite-derived F/T information has become available. However, disentangling the seasonally differing contributions from forest overstory and understory vegetation, and the ground surface to the satellite signal remains challenging. Here we present results from an ongoing campaign with two L-Band surface-based radiometers (SBR) installed on a micrometeorological tower at the Southern Old Black Spruce site (53.99°N / 105.12°W) in central Saskatchewan. One radiometer unit is installed on top of the tower viewing the multi-layer vegetation canopy from above. A second radiometer unit is installed within the multi-layer canopy, viewing the understory and the ground surface only. The objectives of our study are to (i) disentangle the L-Band F/T signal contribution of boreal forest overstory from the combined understory and ground surface contribution, and (ii) link the L-Band F/T signal to related boreal forest structural and functional characteristics. Analysis of these radiometer measurements made from September to November 2016 shows that when the ground surface is thawed, the main contributor to both radiometer signals is soil moisture. The Pearson correlation coefficient between brightness temperature (TB) at vertical polarization (V-pol) and soil permittivity is 0.79 for the radiometer above the canopy and 0.74 for the radiometer below the canopy. Under cold conditions when the soil was thawed (snow insulation) and the trees were frozen (below 0°C), TB at V-pol is negatively correlated with tree permittivity. 
The freezing-tree contribution to the L-Band signal is, however, confirmed by L-Band coaxial probe measurements, which show significant changes in tree L-Band permittivity when the tree temperature falls below 0 °C. This study will help develop freeze/thaw products and improve understanding of ecosystem processes in the boreal forest from satellite-based remote sensing.
Vegetation dielectric characterization using an open-ended coaxial probe
NASA Astrophysics Data System (ADS)
Mavrovic, A.; Roy, A.; Royer, A.; Boone, F.; Pappas, C.; Filali, B.
2017-12-01
The detection of the freeze/thaw (F/T) physical state of soil is one of the main objectives of the SMAP mission as well as one of the secondary objectives of the SMOS mission. Annual F/T cycles have substantial impacts on surface energy budgets, permafrost conditions, and forest water and carbon dynamics. It has been shown that spaceborne L-band passive radiometry is a promising tool to monitor F/T because of the substantial differences between the permittivity of water and ice at these frequencies. However, decoupling the signal between soil and vegetation components remains challenging for all microwave remote sensing applications at various spatial scales. Radiative transfer models in the microwave domain are generally poorly parameterized to account for the non-negligible contribution of vegetation. The main objective of this research is to assess the skill of a recently developed Open-Ended Coaxial Probe (OECP) in measuring the complex microwave permittivity of vegetation and soils, and to derive a relation between the impact of vegetation on the microwave signal and the vegetation permittivity that could serve as a validation tool for soil models, especially in the frozen state. Results show that the OECP is a suitable tool to infer the radial profile of the complex L-band permittivity of trees. A clear distinction can be made between the sapwood, where the permittivity is high because of the high permittivity of water but decreases with depth, and the heartwood, where the permittivity is low and relatively constant. The seasonal cycle of the F/T state of the vegetation can also be observed, since it is strongly correlated with the permittivity of the wood. The permittivity of a tree over the winter season is very low and homogeneous, since the permittivity of ice is significantly lower than that of water and the sap flow is negligible.
The variation of the frozen and thawed permittivity among different tree species was evaluated, focusing on four widespread boreal tree species. Future work will focus on observing the effect of tree permittivity on vegetation emission and brightness temperature (Tb), and on upscaling that information for satellite-borne passive microwave observations and global monitoring of freeze/thaw state and soil moisture.
NASA Astrophysics Data System (ADS)
Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang
2018-05-01
The fault diagnosis of planetary gearboxes is crucial to reduce maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on an adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into a binary tree support vector machine (BT-SVM) to accomplish fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
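Modified hierarchical permutation entropy is the authors' refinement; its building block, ordinary permutation entropy, measures signal complexity from the distribution of ordinal patterns in sliding windows. A minimal sketch of the plain (non-hierarchical, unmodified) variant:

```python
import math
from collections import Counter

def permutation_entropy(signal, order=3, normalize=True):
    """Shannon entropy of ordinal patterns of length `order`."""
    patterns = Counter()
    for i in range(len(signal) - order + 1):
        window = signal[i:i + order]
        # Ordinal pattern: the argsort of the samples within the window.
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        patterns[pattern] += 1
    total = sum(patterns.values())
    # Entropy computed as sum of p * log2(1/p), in bits.
    h = sum((c / total) * math.log2(total / c) for c in patterns.values())
    return h / math.log2(math.factorial(order)) if normalize else h

# A monotonic signal has a single ordinal pattern, hence zero entropy;
# an irregular signal spreads mass over many patterns, raising the entropy.
print(permutation_entropy(list(range(100))))
```

Regular (healthy) vibration tends toward low permutation entropy, while impacts from gear faults change the ordinal-pattern distribution, which is why entropy-based features separate health conditions.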
Cost comparisons for three harvesting systems operating in northern hardwood stands
Chris B. LeDoux; Neil K. Huyler
2000-01-01
Production rates, break-even piece (tree) sizes/costs (BEP), and operating costs were compared for a Koller K-300 cable yarder, a cut-to-length (CTL) harvester, and an A60F Holder tractor operating at three machine utilization rates (MUR) in northern hardwood stands. At an average product price of $0.40/ft3, the BEP size at an MUR of 90 was 7.64 ft3 for the Koller...
2013-05-01
specifics of the correlation will be explored, followed by discussion of new paradigms, the ordered event list (OEL) and the decision tree, that result from... [Fragmentary table-of-contents entries from the record: 4.2.1 Brief Overview of the Decision Tree Paradigm; 4.2.2 OEL Explained; Figure 3. A depiction of a notional fault/activation tree.]
Stratigraphy and structure of the western Kentucky fluorspar district
Trace, R.D.; Amos, D.H.
1984-01-01
The western Kentucky fluorspar district is part of the larger Illinois-Kentucky fluorspar district, the largest producer of fluorspar in the United States. This report is based largely on data gathered from 1960 to 1974 during the U.S. Geological Survey-Kentucky Geological Survey cooperative geologic mapping program of Kentucky. It deals chiefly with the stratigraphy and structure of the district and, to a lesser extent, with the fluorspar-zinc-lead-barite deposits. Sedimentary rocks exposed in the district range in age from Early Mississippian (Osagean) to Quaternary. Most rocks exposed at the surface are Mississippian in age; two-thirds are marine fossiliferous limestones, and the remainder are shales, siltstones, and sandstones. Osagean deep-water marine silty limestone and chert are present at the surface in the southwestern corner of the district. Meramecian marine limestone is exposed at the surface in about half the area. Chesterian marine and fluvial to fluviodeltaic clastic sedimentary rocks and marine limestone underlie about one-third of the area. The total sequence of Mississippian rocks is about 3,000 ft thick. Pennsylvanian rocks are dominantly fluvial clastic sedimentary rocks that change upward into younger fluviodeltaic strata. Pennsylvanian strata of Morrowan and Atokan age are locally thicker than 600 ft along the eastern and southeastern margin and in the major grabens of the district where they have been preserved from erosion. Cretaceous and Tertiary sediments of the Mississippi embayment truncate Paleozoic formations in and near the southwestern corner of the district and are preserved mostly as erosional outliers. The deposits are Gulfian nonmarine gravels, sands, and clays as much as 170 ft thick and upper Pliocene fluvial continental deposits as thick as 45 ft. 
Pleistocene loess deposits mantle the upland surface of the district, and Quaternary fluvial and fluviolacustrine deposits are common and widespread along the Ohio and Cumberland Rivers and their major tributaries. Many mafic dikes and a few mafic sills are present. The mafic rocks are mostly altered mica peridotites or lamprophyres that are composed of carbonate minerals, serpentine, chlorite, and biotite and contain some hornblende, pyroxene, and olivine. Most of the dikes are in a north-northwest-trending belt 6 to 8 mi wide and strike N. 20°-30° W. The dikes dip from 80° to 90° and are commonly 5 to 10 ft wide. Radioisotopic study indicates that the dikes are Early Permian in age. The district is just southeast of the intersection of the east-trending Rough Creek-Shawneetown and northeast-trending New Madrid fault systems. The district's principal structural features are a northwest-trending domal anticline, the Tolu Arch, and a series of steeply dipping to nearly vertical normal faults and fault zones that trend dominantly northeastward and divide the area into elongated northeast-trending grabens and horsts. Formation of these grabens and horsts was one of the major tectonic events in the district. Vertical displacement may be as much as 3,000 ft but commonly ranges from a few feet to a few hundred feet; no substantial horizontal movement is believed to have taken place. Many cross faults having only a few feet of displacement trend northwestward and are occupied at places by mafic dikes. Faulting was mostly post-Early Permian to pre-middle Cretaceous in age. Many theories have been advanced to explain the structural history of the district. A generally acceptable overall hypothesis that would account for all the structural complexities, however, is still lacking. Useful structural data, such as the structural differences between the grabens and the horsts, have been obtained, however, from the recently completed geologic mapping.
Mapping also has more clearly shown the alinement of the Tolu Arch, the belt of dikes, and an unusually deep graben (the Griffith Bluff graben); this alinement suggests that possibl
Monitoring of Microseismicity with Array Techniques in the Peach Tree Valley Region
NASA Astrophysics Data System (ADS)
Garcia-Reyes, J. L.; Clayton, R. W.
2016-12-01
This study is focused on the analysis of microseismicity along the San Andreas Fault in the Peach Tree Valley region. This zone is part of the transition zone between the locked portion to the south (Parkfield, CA) and the creeping section to the north (Jovilet, et al., JGR, 2014). The data for the study comes from a 2-week deployment of 116 Zland nodes in a cross-shaped configuration along (8.2 km) and across (9 km) the Fault. We analyze the distribution of microseismicity using a 3D backprojection technique, and we explore the use of Hidden Markov Models to identify different patterns of microseismicity (Hammer et al., GJI, 2013). The goal of the study is to relate the style of seismicity to the mechanical state of the Fault. The results show the evolution of seismic activity as well as at least two different patterns of seismic signals.
[Impact of water pollution risk in water transfer project based on fault tree analysis].
Liu, Jian-Chang; Zhang, Wei; Wang, Li-Min; Li, Dai-Qing; Fan, Xiu-Ying; Deng, Hong-Bing
2009-09-15
The methods to assess water pollution risk for medium water transfer are gradually being explored. The event-nature-proportion method was developed to evaluate the probability of a single event. Fault tree analysis, built on the calculation for single events, was employed to evaluate the extent of the overall water pollution risk for the channel water body. The results indicate that the risk of pollutants from towns and villages along the line of the water transfer project reaching the channel water body is at a high level, with a probability of 0.373, which will increase pollution of the channel water body at rates of 64.53 mg/L COD, 4.57 mg/L NH4(+)-N and 0.066 mg/L volatile hydroxybenzene, respectively. The measurement of fault probability on the basis of the proportion method proves useful in assessing water pollution risk under much uncertainty.
Viewpoint on ISA TR84.0.02--simplified methods and fault tree analysis.
Summers, A E
2000-01-01
ANSI/ISA-S84.01-1996 and IEC 61508 require the establishment of a safety integrity level for any safety instrumented system or safety related system used to mitigate risk. Each stage of design, operation, maintenance, and testing is judged against this safety integrity level. Quantitative techniques can be used to verify whether the safety integrity level is met. ISA-dTR84.0.02 is a technical report under development by ISA, which discusses how to apply quantitative analysis techniques to safety instrumented systems. This paper discusses two of those techniques: (1) Simplified equations and (2) Fault tree analysis.
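The "simplified equations" referenced above typically approximate the average probability of failure on demand (PFDavg), which is then compared against the target safety integrity level. For a single-channel (1oo1) system, the standard approximation is PFDavg ≈ λ_DU · TI / 2, where λ_DU is the dangerous undetected failure rate and TI the proof-test interval. A sketch with illustrative numbers (not drawn from the technical report):

```python
def pfd_avg_1oo1(lambda_du, test_interval_hours):
    """Simplified 1oo1 average probability of failure on demand:
    PFDavg ~= lambda_DU * TI / 2 (constant failure rate assumed)."""
    return lambda_du * test_interval_hours / 2.0

# Illustrative: lambda_DU = 1e-6 failures/hour, annual proof test (8760 h).
pfd = pfd_avg_1oo1(1e-6, 8760)
print(f"{pfd:.2e}")  # 4.38e-03
```

A PFDavg of 4.38e-03 falls in the 1e-3 to 1e-2 band, i.e. SIL 2 under IEC 61508; fault tree analysis handles architectures too complex for such closed-form approximations.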
TH-EF-BRC-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomadsen, B.
2016-06-15
This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100’s risk analysis: Process mapping, Failure-Modes and Effects Analysis and fault-tree analysis will be introduced with a 5 minute refresher presentation and each presentation will be followed by a 30 minute small group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
Estimating earthquake-induced failure probability and downtime of critical facilities.
Porter, Keith; Ramer, Kyle
2012-01-01
Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
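The primary/backup calculation described above can be sketched in miniature: if the fault tree gives each facility a failure probability conditioned on the scenario shaking, and the facilities fail independently given that shaking, the joint probability is a simple product; a Poisson occurrence model then converts it into the chance of at least one joint failure during a planning period. All numbers below are hypothetical, not the utility's data:

```python
import math

def p_joint_failure(p_primary, p_backup):
    """Both facilities inoperative in one event (independence given shaking)."""
    return p_primary * p_backup

def p_in_period(event_rate_per_year, p_conditional, years):
    """Poisson chance of >= 1 joint-failure event during the planning period."""
    return 1.0 - math.exp(-event_rate_per_year * p_conditional * years)

# Hypothetical: scenario earthquake rate 1/200 per year;
# conditional failure probabilities 0.10 (primary) and 0.05 (backup).
p_joint = p_joint_failure(0.10, 0.05)
print(round(p_in_period(1 / 200, p_joint, 50), 5))
```

In practice the facilities share the same ground motion and often common components, so the independence assumption here understates the risk; the paper's component-level fault trees are what capture those dependencies.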
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Qian, Yu
2016-02-15
Haze weather has become a serious environmental pollution problem in many Chinese cities. One of the most critical factors in the formation of haze weather is the exhaust from coal combustion, so it is meaningful to figure out the causation mechanism between urban haze and coal combustion exhausts. Based on these considerations, the fault tree analysis (FTA) approach was employed, for the first time, to analyse the causation mechanism of urban haze in Beijing by considering the risk events related to coal combustion exhausts. Using this approach, the fault tree of the urban haze causation system connected with coal combustion exhausts was first established; the risk events were then discussed and identified; next, the minimal cut sets were determined using Boolean algebra; finally, the structure, probability and critical importance degree analyses of the risk events were completed for qualitative and quantitative assessment. The results showed that FTA is an effective and simple tool for causation mechanism analysis and risk management of urban haze in China. Copyright © 2015 Elsevier B.V. All rights reserved.
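The minimal-cut-set step can be illustrated with a small top-down (MOCUS-style) expansion: an OR gate contributes alternative cut sets, an AND gate enlarges every cut set with all of its inputs, and non-minimal sets are discarded at the end. The tree below is a hypothetical stand-in, not the paper's haze fault tree:

```python
def cut_sets(node, tree):
    """Expand a gate into its list of cut sets (frozensets of basic events)."""
    if node not in tree:                     # leaf: a basic event
        return [frozenset([node])]
    gate, children = tree[node]
    child_sets = [cut_sets(c, tree) for c in children]
    if gate == "OR":                         # union of the children's cut sets
        return [s for sets in child_sets for s in sets]
    result = [frozenset()]                   # AND: cross-product of cut sets
    for sets in child_sets:
        result = [r | s for r in result for s in sets]
    return result

def minimal(sets):
    """Drop any cut set that strictly contains another (Boolean absorption)."""
    return {s for s in sets if not any(t < s for t in sets)}

# Hypothetical tree: TOP = OR(coal_boiler, AND(vehicle, stagnant_air))
tree = {"TOP": ("OR", ["coal_boiler", "G1"]),
        "G1": ("AND", ["vehicle", "stagnant_air"])}
for cs in sorted(minimal(cut_sets("TOP", tree)), key=sorted):
    print(sorted(cs))
```

Each minimal cut set is a smallest combination of risk events sufficient to trigger the top event, which is exactly what the Boolean-algebra step in the abstract produces.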
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents with impacts on human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the LNG regasification unit. In this case, the failure is caused by Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank component. The failure probability can be determined by using Fault Tree Analysis (FTA). In addition, the impact of the heat radiation generated is calculated. The fault trees for BLEVE and jet fire on the storage tank component have been determined, giving a failure probability of 5.63 × 10⁻¹⁹ for BLEVE and 9.57 × 10⁻³ for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. The failure probability after customization is 4.22 × 10⁻⁶.
USDA-ARS?s Scientific Manuscript database
Tree fruits (e.g., apples, plums, cherries) are appealing constituents of a crew menu for long-duration exploration missions (i.e., Mars), both in terms of their nutritive and menu diversity contributions. Although appealing, tree fruit species have long been precluded as candidate crops for use in...
Tree growth and soil relations at the 1925 Wind River spacing test in coast Douglas-fir.
Richard E. Miller; Donald L. Reukema; Harry W. Anderson
2004-01-01
The 1925 Wind River spacing test is the earliest field trial seeking to determine the most appropriate spacing for planting Douglas-fir. Spacing treatments were not replicated, although individual spacings were subsampled by two to four tree-measurement plots. Previously, greater growth occurred at the wider spacings (10 and 12 ft) than at the closer spacings (4, 5, 6...
Five Years' Growth of Pruned and Unpruned Cottonwood Planted at 40- by 40-Foot Spacing
Roger M. Krinard
1979-01-01
Four pruning treatments have been applied for 5 years on cottonwood (Populus deltoides Bartr.) select clone Stoneville 66, planted at 40- by 40-ft spacing. As pruning severity increased, average diameter and maximum crown width decreased. Diameters ranged from 9.2 inches for trees pruned half of height yearly to 11.4 inches for unpruned trees; crown widths ranged from...
National Wild Turkey Federation Programs
Rob Keck
2005-01-01
I recently read an article about several women who were preparing to sit 80 ft (25 m) above a forest floor in tree-sitting nets to protest a logging operation in Jefferson National Forest (Appalachia). Tree hugging is nothing new in this country. But did environmental activists know we have more forests now than we did in the 1920s? In 1920, we only had 735 million ac...
NASA Astrophysics Data System (ADS)
Li, Shuanghong; Cao, Hongliang; Yang, Yupu
2018-02-01
Fault diagnosis is a key process for the reliability and safety of solid oxide fuel cell (SOFC) systems. However, it is difficult to rapidly and accurately identify faults in complicated SOFC systems, especially when simultaneous faults appear. In this research, a data-driven Multi-Label (ML) pattern identification approach is proposed to address the simultaneous fault diagnosis of SOFC systems. The framework of the simultaneous-fault diagnosis primarily includes two components: feature extraction and an ML-SVM classifier. The approach can be trained to diagnose simultaneous SOFC faults, such as fuel leakage and air leakage at different positions in the SOFC system, using training data sets consisting only of single faults, without demanding simultaneous-fault data. The experimental results show that the proposed framework can diagnose simultaneous SOFC system faults with high accuracy while requiring only a small amount of training data and a low computational burden. In addition, Fault Inference Tree Analysis (FITA) is employed to identify the correlations among possible faults and their corresponding symptoms at the system component level.
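The multi-label strategy described, training only on single-fault data yet flagging simultaneous faults, is commonly realized by fitting one binary detector per fault and activating every label whose detector fires. The sketch below substitutes a trivial nearest-centroid rule for the paper's SVM classifier; all feature vectors and fault names are made up:

```python
def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train_binary(pos, neg):
    """One detector per fault: fire when a sample is nearer the fault centroid."""
    cp, cn = centroid(pos), centroid(neg)
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return lambda x: dist2(x, cp) < dist2(x, cn)

# Made-up 2-D features; each single-fault class shifts one coordinate.
normal    = [[0.0, 0.0], [0.1, -0.1]]
fuel_leak = [[1.0, 0.0], [1.1, 0.1]]     # raises feature 0
air_leak  = [[0.0, 1.0], [-0.1, 1.1]]    # raises feature 1

detectors = {
    "fuel_leak": train_binary(fuel_leak, normal + air_leak),
    "air_leak":  train_binary(air_leak, normal + fuel_leak),
}

def diagnose(x):
    """Multi-label output: every fault whose detector fires."""
    return sorted(f for f, d in detectors.items() if d(x))

# A simultaneous fault (both features raised) was never seen in training,
# yet both single-fault detectors fire on it.
print(diagnose([1.0, 1.0]))
```

The key property mirrored here is that the simultaneous case is covered by composition of single-fault detectors, so no simultaneous-fault training data are needed.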
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Haeussler, P. J.; Seitz, G. G.; Dawson, T. E.; Stenner, H. D.; Matmon, A.; Crone, A. J.; Personius, S.; Burns, P. B.; Cadena, A.; Thoms, E.
2005-12-01
Developing accurate rupture histories of long, high-slip-rate strike-slip faults is especially challenging where recurrence is relatively short (hundreds of years), adjacent segments may fail within decades of each other, and uncertainties in dating can be as large as, or larger than, the time between events. The Denali Fault system (DFS) is the major active structure of interior Alaska, but had received little study since pioneering fault investigations in the early 1970s. Until the summer of 2003 essentially no data existed on the timing or spatial distribution of past ruptures on the DFS. This changed with the occurrence of the M7.9 2002 Denali fault earthquake, which has been a catalyst for present paleoseismic investigations. It provided a well-constrained rupture length and slip distribution. Strike-slip faulting occurred along 290 km of the Denali and Totschunda faults, leaving unruptured ~140 km of the eastern Denali fault, ~180 km of the western Denali fault, and ~70 km of the eastern Totschunda fault. The DFS presents us with a blank canvas on which to fill in a chronology of past earthquakes using modern paleoseismic techniques. Aware of correlation issues with potentially closely timed earthquakes, we have a) investigated 11 paleoseismic sites that allow a variety of dating techniques, b) measured paleo offsets, which provide insight into the magnitude and rupture length of past events, at 18 locations, and c) developed late Pleistocene and Holocene slip rates using exposure age dating to constrain long-term fault behavior models.
We are in the process of: 1) radiocarbon-dating peats involved in faulting and liquefaction, and especially short-lived forest floor vegetation that includes outer rings of trees, spruce needles, and blueberry leaves killed and buried during paleoearthquakes; 2) supporting development of a 700-900 year tree-ring time-series for precise dating of trees used in event timing; 3) employing Pb-210 to constrain the youngest ruptures in sag ponds on the eastern and western Denali fault; and 4) using volcanic ashes in trenches for dating and correlation. Initial results are: 1) Large earthquakes occurred along the 2002 rupture section 350-700 yrb02 (2-sigma, calendar-corrected, years before 2002) with offsets about the same as 2002. The Denali penultimate rupture appears younger (350-570 yrb02) than the Totschunda (580-700 yrb02); 2) The western Denali fault is geomorphically fresh, its MRE likely occurred within the past 250 years, the penultimate event occurred 570-680 yrb02, and slip in each event was 4 m; 3) The eastern Denali MRE post-dates peat dated at 550-680 yrb02, is younger than the penultimate Totschunda event, and could be part of the penultimate Denali fault rupture or a separate earthquake; 4) A 120-km section of the Denali fault between the Nenana glacier and the Delta River may be a zone of overlap for large events and/or capable of producing smaller earthquakes; its western part has fresh scarps with small (1 m) offsets. 2004/2005 field observations show there are longer datable records, with 4-5 events recorded in trenches on the eastern Denali fault and the west end of the 2002 rupture, 2-3 events on the western part of the fault in Denali National Park, and 3-4 events on the Totschunda fault. These, along with extensive datable material, provide the basis to define the paleoseismic history of DFS earthquake ruptures through multiple and complete earthquake cycles.
Support vector machines-based fault diagnosis for turbo-pump rotor
NASA Astrophysics Data System (ADS)
Yuan, Sheng-Fa; Chu, Fu-Lei
2006-05-01
Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and generalise poorly when fault samples are few. Support vector machines (SVM) are a general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is originally designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification scheme named 'one to others' is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, requires little repeated training, and speeds up both training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
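The 'one to others' tree described above can be sketched as a priority-ordered cascade of two-class decisions: the first classifier separates the top-priority fault from all others, the next separates the second fault from the remainder, and so on down the tree. A nearest-centroid rule stands in for the SVMs here, and the classes and data are invented:

```python
def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_cascade(classes, priority):
    """'One to others': each node splits the top-priority class from the rest."""
    cascade = []
    remaining = list(priority)
    while len(remaining) > 1:
        label, rest = remaining[0], remaining[1:]
        c_pos = centroid(classes[label])
        c_neg = centroid([v for r in rest for v in classes[r]])
        cascade.append((label, c_pos, c_neg))
        remaining = rest
    cascade.append((remaining[0], None, None))   # leaf: last class by default
    return cascade

def classify(cascade, x):
    for label, c_pos, c_neg in cascade:
        if c_pos is None or dist2(x, c_pos) < dist2(x, c_neg):
            return label

# Made-up 1-D feature (e.g. a vibration statistic) for three rotor conditions.
classes = {"unbalance": [[5.0], [5.2]], "rub": [[2.0], [2.2]], "normal": [[0.0], [0.2]]}
cascade = build_cascade(classes, priority=["unbalance", "rub", "normal"])
print(classify(cascade, [5.1]), classify(cascade, [2.1]), classify(cascade, [0.1]))
```

Each binary node is trained once against the shrinking "others" pool, which is why the scheme avoids the repeated retraining of one-vs-one multi-class constructions.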
History of displacement along Ste. Genevieve Fault Zone, Southwestern Illinois
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwalb, H.R.
1983-09-01
The Ste. Genevieve fault zone extends eastward from Missouri across the Mississippi River into Jackson County, Illinois, about 75 mi (120 km) southeast of St. Louis. Outcrop studies have dated movement along portions of the zone as pre-Middle Devonian, post-Mississippian, and post-Pennsylvanian. Present displacement is down to the north and east with throw ranging up to 3,000 ft (915 m). However, pre-Middle Devonian movement was down to the south and west. The present upthrown block shows no evidence of vertical movement during the Cambrian and Ordovician. Nor is there any indication that the fault zone was part of the northern border of the Reelfoot basin, where earliest Paleozoic sediments infilled an aulacogen at the northern end of the Mississippi embayment.
A Passive Microwave L-Band Boreal Forest Freeze/Thaw and Vegetation Phenology Study
NASA Astrophysics Data System (ADS)
Roy, A.; Sonnentag, O.; Pappas, C.; Mavrovic, A.; Royer, A.; Berg, A. A.; Rowlandson, T. L.; Lemay, J.; Helgason, W.; Barr, A.; Black, T. A.; Derksen, C.; Toose, P.
2016-12-01
The boreal forest is the second largest land biome in the world and thus plays a major role in the global and regional climate systems. The extent, timing and duration of the seasonal freeze/thaw (F/T) state influence vegetation developmental stages (phenology) and, consequently, constitute an important control on how boreal forest ecosystems exchange carbon, water and energy with the atmosphere. The effective retrieval of the seasonal F/T state from L-Band radiometry has been demonstrated using satellite missions. However, disentangling the seasonally differing contributions from forest overstory and understory vegetation, and the soil surface, to the satellite signal remains challenging. Here we present initial results from a radiometer field campaign to improve our understanding of the L-Band derived boreal forest F/T signal and vegetation phenology. Two L-Band surface-based radiometers (SBR) are installed on a micrometeorological tower at the Southern Old Black Spruce site in central Saskatchewan over the 2016-2017 F/T season. One radiometer unit is installed on the flux tower so that it views the forest, including all overstory and understory vegetation and the moss-covered ground surface. A second radiometer unit is installed within the boreal forest overstory, viewing the understory and the ground surface. The objectives of our study are (i) to disentangle the L-Band F/T signal contribution of the boreal forest overstory from the understory and ground surface, (ii) to link the L-Band F/T signal to related boreal forest structural and functional characteristics, and (iii) to investigate the use of the L-Band signal to characterize boreal forest carbon, water and energy fluxes. The SBR observations above and within the forest canopy are used to retrieve the transmissivity (γ) and the scattering albedo (ω), two parameters that describe the emission of the forest canopy through the F/T season.
These two forest parameters are compared with boreal forest structural and functional characteristics including eddy-covariance measurements of carbon dioxide, water and energy exchanges, sap flux density measurements of tree-level water dynamics, L-Band tree permittivity and temperature. The study will lead to improved monitoring of soil F/T and vegetation phenology at the boreal forest-scale from satellite L-Band observations.
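The γ/ω retrieval described above is grounded in the standard zeroth-order tau-omega radiative transfer model, in which brightness temperature is the sum of soil emission attenuated by the canopy, direct canopy emission, and canopy emission reflected off the soil and re-attenuated. A minimal sketch with illustrative parameter values (not campaign retrievals):

```python
def tau_omega_tb(t_soil, t_canopy, soil_emissivity, gamma, omega):
    """Zeroth-order tau-omega brightness temperature for one polarization.

    gamma: canopy transmissivity; omega: single-scattering albedo.
    """
    r_soil = 1.0 - soil_emissivity          # soil reflectivity
    # Soil emission attenuated once by the canopy.
    soil_term = t_soil * soil_emissivity * gamma
    # Canopy emission, plus its reflection off the soil back through the canopy.
    canopy_term = t_canopy * (1.0 - omega) * (1.0 - gamma) * (1.0 + r_soil * gamma)
    return soil_term + canopy_term

# Illustrative: thawed soil beneath a black spruce canopy (temperatures in K).
tb = tau_omega_tb(t_soil=275.0, t_canopy=272.0, soil_emissivity=0.85,
                  gamma=0.6, omega=0.07)
print(round(tb, 1))
```

With radiometers above and below the canopy, the below-canopy unit observes essentially the soil term alone, which is what allows γ and ω to be solved for from the paired measurements.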
Early wide spacing in red alder (Alnus rubra Bong.): effects on stem form and stem growth.
Bernard T. Bormann
1985-01-01
A thinning trial was established in 1962 in a 7-year-old red alder stand in northwestern Washington. Spacings were 8 x 8 ft (dense), 12 x 12 ft (intermediate), and 16 x 16 ft (open). The effect of early thinning on growth and stem form was measured in 1982, 20 years after spacing treatment. There was negligible tree lean and sweep in open and intermediate stands except...
EDNA: Expert fault digraph analysis using CLIPS
NASA Technical Reports Server (NTRS)
Dixit, Vishweshwar V.
1990-01-01
Traditionally, fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate the real system dependencies and hence are easy to develop, validate and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find. Available algorithms tend to be complicated and slow. On the other hand, tree analysis (VGRH, Tayl) is well understood and rooted in a vast research effort and analytical techniques. The tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a (cyclic) digraph into trees (CLP, LP) is a viable approach to blend the advantages of the two representations. Neither digraphs nor trees provide the ability to handle heuristic knowledge. An expert system, to capture the engineering knowledge, is essential. We propose an approach here, namely, expert network analysis, which combines the digraph representation with tree algorithms. The models are augmented by probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge. Mixed analysis, with only some nodes carrying probabilities, is possible. The tool provides a graphics interface for input, query, and update. With the combined approach, it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.
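As a concrete illustration of the tree-analysis side referred to above, a minimal cut set computation for an acyclic AND/OR fault tree can be sketched as follows. The component names are hypothetical, and note that a real digraph-to-tree tool must first break directed cycles, which this sketch does not handle:

```python
from itertools import product

def cut_sets(node, tree):
    """Return the minimal cut sets of `node` as a set of frozensets."""
    if node not in tree:                      # basic event (leaf)
        return {frozenset([node])}
    gate, children = tree[node]
    child_sets = [cut_sets(c, tree) for c in children]
    if gate == "OR":                          # union of the children's cut sets
        sets = set().union(*child_sets)
    else:                                     # AND: union over the cross-product
        sets = {frozenset().union(*combo) for combo in product(*child_sets)}
    # minimize: drop any cut set that strictly contains another
    return {s for s in sets if not any(t < s for t in sets)}

# Hypothetical fault tree: TOP fails if power fails OR both pumps fail.
tree = {
    "TOP":   ("OR",  ["POWER", "PUMPS"]),
    "PUMPS": ("AND", ["pump_a", "pump_b"]),
    "POWER": ("OR",  ["grid", "breaker"]),
}
print(sorted(sorted(s) for s in cut_sets("TOP", tree)))
```

Each resulting set is a smallest group of basic-event failures sufficient to fail the top event, the quantity the fast tree codes are built around.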
NASA Astrophysics Data System (ADS)
Hu, Bingbing; Li, Bing
2016-02-01
It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method originally based on the discrete wavelet transform (DWT) has disadvantages such as shift variance and aliasing effects in engineering applications. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis by the merits of DTCWT (near shift invariance and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching a single wavelet basis function to the signal being analyzed, which may speed up the signal processing and allow the method to be employed in on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer ring and shaft coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post spectral analysis, and EMD-based spectral analysis methods.
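The stochastic resonance mechanism at the core of MSTSR can be illustrated with the classic overdamped bistable system; this is a toy sketch, not the paper's DTCWT-based pipeline, and all numbers are illustrative. A periodic component too weak to cross the potential barrier on its own is helped across by noise of a suitable intensity, so the well-to-well switching of the output carries the weak signal:

```python
import numpy as np

# Overdamped bistable system: dx/dt = a*x - b*x**3 + A*sin(2*pi*f0*t) + noise.
# With a = b = 1 the wells sit at x = +/-1 and the barrier height is 0.25;
# the drive amplitude A = 0.2 is sub-threshold (static threshold ~ 0.38).
rng = np.random.default_rng(42)
a, b = 1.0, 1.0
A, f0 = 0.2, 0.05
dt, n = 0.05, 20000
D = 0.3                               # noise intensity: the "tuning" knob

t = np.arange(n) * dt
x = np.zeros(n)
for i in range(1, n):                 # Euler-Maruyama integration
    drift = a * x[i-1] - b * x[i-1]**3 + A * np.sin(2 * np.pi * f0 * t[i-1])
    x[i] = x[i-1] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

print(float(x.min()), float(x.max()))  # output visits both wells
```

With D set much smaller the trajectory stays trapped in one well; multiscale noise tuning amounts to adjusting the effective noise level per wavelet scale so that this resonance condition is met.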
Locating hardware faults in a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-04-13
Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
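The partition described above, into two sets of non-overlapping two-tier test levels that together cover every link, can be sketched on a complete binary tree with heap-style node ids (an illustrative assumption, not the patent's exact network):

```python
# Tiers are depths; a "link" joins a node at tier d to its parent at tier d-1.
# Set A groups tiers (0,1) and (2,3); set B groups tiers (1,2) and (3,4).
# Each set's levels are non-overlapping, and the two sets jointly cover
# every data communications link of the depth-4 tree.
depth = 4

def links_in_level(top_tier, bottom_tier):
    """All parent->child links whose endpoints both lie inside the level."""
    out = set()
    for d in range(top_tier + 1, bottom_tier + 1):
        for node in range(2**d, 2**(d + 1)):   # heap ids: tier d holds 2^d..2^(d+1)-1
            out.add((node // 2, node))          # (parent, child)
    return out

set_a = links_in_level(0, 1) | links_in_level(2, 3)
set_b = links_in_level(1, 2) | links_in_level(3, 4)
all_links = links_in_level(0, depth)
print(len(set_a), len(set_b), len(all_links))
```

Within each level, every node at the level's top tier roots one test cell (the subtree of its in-level descendants); the uplink and downlink tests then run cell by cell.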
Sbaa basin: A new oil-producing region in Algeria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baghdadli, S.M.
1988-08-01
Discovery of a paraffinic oil in 1980 in the Adrar area, the west part of the Algerian Sahara within the Sbaa half-graben depression, opens a new oil- and gas-bearing region in Algeria. The oil and gas fields are located on highly faulted structures generated by differential movements of basement blocks. Oil deposits are connected with tidal sandy sediments of Strunian and Tournaisian age and occur at depths of 500 to 1,000 m (1,640 to 3,280 ft). Gas and wet gas deposits are related to sandstone reservoirs of Cambrian-Ordovician age at depths of 1,500 to 2,000 m (4,920 to 6,562 ft).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schermer, E.R.
1993-04-01
New structural and stratigraphic data from the NE Mojave Block (NEMB) establish the timing and style of Cenozoic deformation south of the Garlock fault and west of the Avawatz Mts. Unlike adjacent areas, most of the NEMB did not undergo early-mid Miocene extension. Major fault zones strike EW; offset markers and small-scale shear criteria indicate left-lateral strike slip with a small reverse component. Lateral offsets average ca. 1--6 km and vertical offset is locally >200 m. Pre-Tertiary markers indicate minimum cumulative sinistral shear of ca. 15 km in the area between the Garlock and Coyote Lake faults. Tertiary strata are deformed together with the older rocks. Along the Ft. Irwin fault, alluvial fan deposits interpreted to be <11 Ma appear to be displaced as much as the Mesozoic igneous rocks. EW sinistral faults S. of the Garlock fault cut unconsolidated Quaternary deposits; geomorphologic features and trench exposures along segments of the McLean Lake fault and the Tiefort Mt. fault suggest Late Quaternary activity. The EW faults do not cut modern drainages and are not seismically active. NW-striking faults are largely absent within the NEMB; the largest faults bound the domain of EW-striking faults. Offset of Cretaceous and Miocene rocks suggests the W boundary (Goldstone Lake fault) has <2 km right separation. Along the E boundary (Soda-Avawatz fault zone), the presence of distinctive clasts in mid-late Miocene conglomerates west of the Avawatz Mts. supports the suggestion of Brady (1984) of ca. 20 km dextral displacement. Other NW-striking faults are cut by EW faults, have unknown or minor dextral displacement (Desert King Spring fault, Garlic Spring fault), or are low- to moderate-angle left-oblique thrust faults (Red Pass Lake fault zone).
Amorphization of quartz by friction: Implication to silica-gel lubrication of fault surfaces
NASA Astrophysics Data System (ADS)
Nakamura, Yu; Muto, Jun; Nagahama, Hiroyuki; Shimizu, Ichiko; Miura, Takashi; Arakawa, Ichiro
2012-11-01
To understand physico-chemical processes at real contacts (asperities) on fault surfaces, we conducted pin-on-disk friction experiments at room temperature, using single crystalline quartz disks and quartz pins. Velocity weakening from friction coefficient μ ≈ 0.6 to 0.4 was observed under apparent normal stresses of 8-19, when the slip rate was increased from 0.003 to 2.6 m/s. Frictional surfaces revealed ductile deformation of wear materials. The Raman spectra of frictional tracks showed blue shifts and broadening of quartz main bands, and the appearance of new peaks at 490-520 and 610 cm-1. All these features are indicative of pressure- and strain-induced amorphization of quartz. The mapping analyses of Fourier transform infrared (FT-IR) spectroscopy under room-dry conditions suggest selective hydration of wear materials. It is possible that the strained Si-O-Si bridges in amorphous silica preferentially react with water to form silica gel. In natural fault systems, amorphous materials would be produced at real fault contacts and accumulate over the fault surfaces with displacements. Subsequent hydration would lead to significant reduction of fault strength during slip.
Soil spot herbicides for single-stem hardwood control
James H. Miller
1988-01-01
Soil spot treatments of undiluted Velpar® L and a concentrated mixture of Spike® 80W were applied around test trees of five hardwood species. The test rates were 2, 4, and 6 ml of herbicide/in. of dbh applied to the soil within 3 ft of each tree. Hardwood topkill was assessed after two growing seasons. The 4-ml rate of Velpar L was required to achieve...
Effects of crown release on growth and quality of even-aged red maple stands
Terry F. Strong; Audra E. Hubbell; Adam H. Weise; Gayne G. Erdmann
2006-01-01
The effects of six crown-release treatments on growth and bole quality of 54 dominant, codominant, and intermediate red maples (Acer rubrum L.) were examined in an even-aged stand in upper Michigan. Treatments included an unreleased control, a single-tree and a two-tree crown release, and a full crown-to-crown release of 5, 10, and 15 ft. Twenty-two...
KaDonna Randolph
2010-01-01
The use of the geometric and arithmetic means for estimating tree crown diameter and crown cross-sectional area was examined for trees with crown width measurements taken at the widest point of the crown and perpendicular to the widest point of the crown. The average difference between the geometric and arithmetic mean crown diameters was less than 0.2 ft in absolute...
Low-Temperature Thermochronology for Unraveling Thermal Processes and Dating of Fault Zones
NASA Astrophysics Data System (ADS)
Tagami, T.
2016-12-01
Thermal signatures as well as the timing of fault motions can be constrained by thermochronological analyses of fault-zone rocks (e.g., Tagami, 2012). Fault-zone materials suitable for such analyses are produced by tectonic and geochemical processes, such as (1) mechanical fragmentation of host rocks, grain-size reduction of fragments and recrystallization of grains to form mica and clay minerals, (2) secondary heating/melting of host rocks by frictional fault motions, and (3) mineral vein formation as a consequence of fluid advection associated with fault motions. The geothermal structure of fault zones is primarily controlled by the following three factors: (a) the regional geothermal structure around the fault zone, which reflects the background thermo-tectonic history of the studied province, (b) frictional heating of wall rocks by fault motions and resultant heat transfer into surrounding rocks, and (c) thermal influences of hot fluid advection in and around the fault zone. Thermochronological methods widely applied in fault zones are the K-Ar (40Ar/39Ar), fission-track (FT), and U-Th methods. In addition, OSL, TL, ESR and (U-Th)/He methods are applied in some fault zones in order to extract temporal information related to low-temperature and/or very recent fault activities. Here I briefly review the thermal sensitivity of individual thermochronological systems, which basically controls the response of each method to faulting processes. Then the thermal sensitivity of FTs is highlighted, with a particular focus on thermal processes characteristic of fault zones, i.e., flash and hydrothermal heating. On this basis, representative examples as well as key issues, including sampling strategy, are presented for the thermochronologic analysis of fault-zone materials, such as fault gouges, pseudotachylytes and mylonites, along with geological, geomorphological and seismological implications. 
Finally, the thermochronologic analyses of the Nojima fault are overviewed, as an example of multidisciplinary investigations of an active seismogenic fault system. References: T. Tagami, 2012. Thermochronological investigation of fault zones. Tectonophys., 538-540, 67-85, doi:10.1016/j.tecto.2012.01.032.
Model authoring system for fail safe analysis
NASA Technical Reports Server (NTRS)
Sikora, Scott E.
1990-01-01
The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, the circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which then are searched to generate either a graphical fault tree analysis or failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.
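The frame-plus-rules idea can be sketched minimally as below. The component names are hypothetical, and the actual Model Authoring System runs inside an expert system shell with an inference engine rather than plain Python; the point is only that IF-THEN rules propagating failure effects through component frames trace out the branches a fault tree or FMEA report is built from:

```python
# Components as frames (dicts); one hypothetical rule: IF a component's
# power supplier has failed THEN the component fails too.
frames = {
    "battery":   {"state": "failed"},
    "regulator": {"state": "ok", "powered_by": "battery"},
    "lamp":      {"state": "ok", "powered_by": "regulator"},
}

def supplier_rule(frame):
    supplier = frames.get(frame.get("powered_by", ""), {})
    return "failed" if supplier.get("state") == "failed" else frame["state"]

changed = True
while changed:                       # forward-chain to a fixed point
    changed = False
    for frame in frames.values():
        new_state = supplier_rule(frame)
        if new_state != frame["state"]:
            frame["state"] = new_state
            changed = True

print({name: f["state"] for name, f in frames.items()})
```

Reading the propagation chain backwards (lamp failed because regulator failed because battery failed) yields one branch of the corresponding fault tree.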
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
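A minimal example of why coverage matters quantitatively: assume a duplex system of two identical active units with constant failure rate, where a unit fault is successfully handled with probability c (the coverage) and otherwise brings the system down. The closed form below follows from the three-state Markov model; the failure rate and mission time are hypothetical, not the F18 FCS values:

```python
import math

def duplex_reliability(lam, t, c):
    """R(t) for two active units, per-unit rate `lam`, coverage `c`:
    R(t) = e^(-2*lam*t) + 2*c*(e^(-lam*t) - e^(-2*lam*t)).
    c = 1 recovers the perfect parallel pair; c = 0 means any first
    failure is fatal."""
    return math.exp(-2 * lam * t) + 2 * c * (math.exp(-lam * t) - math.exp(-2 * lam * t))

lam, t = 1e-4, 1000.0        # hypothetical failure rate [1/h], mission time [h]
q_perfect = 1 - duplex_reliability(lam, t, 1.0)
q_partial = 1 - duplex_reliability(lam, t, 0.95)
print(q_perfect, q_partial)
```

Dropping coverage from 1.0 to 0.95 roughly doubles the unreliability here, which is the kind of effect the paper shows must not be omitted from the analysis.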
Chen, Gang; Song, Yongduan; Lewis, Frank L
2016-05-03
This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple robot-arm cooperative control system was developed for real-time verification. Experiments on the networked robot arms were conducted and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.
Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Ren, C. X.; Bolton, D. C.; Marone, C.; Johnson, P. A.
2017-12-01
Fault friction controls nearly all aspects of fault rupture, yet it is only possible to measure in the laboratory. Here we describe laboratory experiments where acoustic emissions are recorded from the fault. We find that by applying a machine learning approach known as "extreme gradient boosting trees" to the continuous acoustical signal, the fault friction can be directly inferred, showing that instantaneous characteristics of the acoustic signal are a fingerprint of the frictional state. This machine learning-based inference leads to a simple law that links the acoustic signal to the friction state, and holds for every stress cycle the laboratory fault goes through. The approach does not use any other measured parameter than instantaneous statistics of the acoustic signal. This finding may have importance for inferring frictional characteristics from seismic waves in Earth where fault friction cannot be measured.
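The statistical-feature-to-friction mapping can be sketched on synthetic data. As a stand-in for the paper's gradient boosted trees, this toy uses ordinary least squares on a single per-window statistic (variance); the "acoustic" signal and its link to friction are entirely fabricated for illustration:

```python
import numpy as np

# Synthetic world: a slowly varying "friction" state modulates the
# amplitude of an otherwise random acoustic signal, window by window.
rng = np.random.default_rng(1)
n_win, win = 300, 200
friction = 0.4 + 0.2 * np.sin(np.linspace(0, 4 * np.pi, n_win))
sig = rng.standard_normal((n_win, win)) * friction[:, None]

# Instantaneous statistic per window -> linear fit to the friction state.
std = np.sqrt(sig.var(axis=1))
X = np.column_stack([std, np.ones(n_win)])
coef, *_ = np.linalg.lstsq(X, friction, rcond=None)
pred = X @ coef
r2 = 1 - ((friction - pred)**2).sum() / ((friction - friction.mean())**2).sum()
print(round(float(r2), 3))
```

Even this crude feature recovers the friction history well in the toy setting; the paper's result is that boosted trees on richer instantaneous statistics do the same for a real laboratory fault, across entire stress cycles.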
The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications
NASA Technical Reports Server (NTRS)
Chau, Savio N.; Alkalai, Leon; Tai, Ann T.
2000-01-01
The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.
NASA Technical Reports Server (NTRS)
Srivas, Mandayam; Bickford, Mark
1991-01-01
The design and formal verification of a hardware system for a task that is an important component of a fault-tolerant computer architecture for flight control systems is presented. The hardware system implements an algorithm for obtaining interactive consistency (Byzantine agreement) among four microprocessors as a special instruction on the processors. The property verified ensures that an execution of the special instruction by the processors correctly accomplishes interactive consistency, provided certain preconditions hold. An assumption is made that the processors execute synchronously. For verification, the authors used a computer-aided hardware design verification tool, Spectool, and the theorem prover, Clio. A major contribution of the work is the demonstration of a significant fault-tolerant hardware design that is mechanically verified by a theorem prover.
NASA Technical Reports Server (NTRS)
Bickford, Mark; Srivas, Mandayam
1991-01-01
Presented here is a formal specification and verification of a property of a quadruplicately redundant fault-tolerant microprocessor system design. A complete listing of the formal specification of the system and the correctness theorems that are proved are given. The system performs the task of obtaining interactive consistency among the processors using a special instruction on the processors. The design is based on an algorithm proposed by Pease, Shostak, and Lamport. The property verified ensures that an execution of the special instruction by the processors correctly accomplishes interactive consistency, provided certain preconditions hold. The verification was carried out using a computer-aided design verification tool, Spectool, and the theorem prover, Clio. A major contribution of the work is the demonstration of a significant fault-tolerant hardware design that is mechanically verified by a theorem prover.
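The algorithm of Pease, Shostak, and Lamport referenced above is the oral-messages protocol; for four processors tolerating one fault it is OM(1). A compact sketch for one commander and three lieutenants follows. The faulty node here simply flips values, one of many possible Byzantine behaviors, so this illustrates rather than proves the agreement property:

```python
from collections import Counter

def majority(values):
    """Majority vote; with three votes over two values there is no tie."""
    top = Counter(values).most_common()
    return top[0][0] if len(top) == 1 or top[0][1] > top[1][1] else "RETREAT"

def om1(commander_value, faulty=None):
    """OM(1): commander is node 0, lieutenants are 1..3; at most one node
    is faulty. Returns each loyal lieutenant's decision."""
    flip = lambda v: "RETREAT" if v == "ATTACK" else "ATTACK"
    # Round 1: commander sends its value to every lieutenant (a faulty
    # commander lies to lieutenant 2 here, as one arbitrary behavior).
    recv = {l: flip(commander_value) if faulty == 0 and l == 2 else commander_value
            for l in (1, 2, 3)}
    # Round 2: each lieutenant relays what it received to the other two.
    relayed = {l: {} for l in (1, 2, 3)}
    for sender in (1, 2, 3):
        for dest in (1, 2, 3):
            if dest != sender:
                v = recv[sender]
                relayed[dest][sender] = flip(v) if sender == faulty else v
    # Decision: majority over own received value and the two relays.
    return {l: majority([recv[l]] + list(relayed[l].values()))
            for l in (1, 2, 3) if l != faulty}

print(om1("ATTACK", faulty=3))   # loyal lieutenants still agree on ATTACK
```

With a faulty lieutenant the loyal lieutenants agree on the commander's value; with a faulty commander they still agree with each other, which is exactly the interactive consistency property the hardware instruction implements.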
Fault-zone waves observed at the southern Joshua Tree earthquake rupture zone
Hough, S.E.; Ben-Zion, Y.; Leary, P.
1994-01-01
Waveform and spectral characteristics of several aftershocks of the M 6.1 22 April 1992 Joshua Tree earthquake recorded at stations just north of the Indio Hills in the Coachella Valley can be interpreted in terms of waves propagating within narrow, low-velocity, high-attenuation, vertical zones. Evidence for our interpretation consists of: (1) emergent P arrivals prior to and opposite in polarity to the impulsive direct phase; these arrivals can be modeled as headwaves indicative of a transfault velocity contrast; (2) spectral peaks in the S wave train that can be interpreted as internally reflected, low-velocity fault-zone wave energy; and (3) spatial selectivity of event-station pairs at which these data are observed, suggesting a long, narrow geologic structure. The observed waveforms are modeled using the analytical solution of Ben-Zion and Aki (1990) for a plane-parallel layered fault-zone structure. Synthetic waveform fits to the observed data indicate the presence of NS-trending vertical fault-zone layers characterized by a thickness of 50 to 100 m, a velocity decrease of 10 to 15% relative to the surrounding rock, and a P-wave quality factor in the range 25 to 50.
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
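One standard bridge between the two settings is the probability-to-possibility transformation of Dubois and Prade, in which the most probable outcome receives possibility 1 and each outcome's possibility is the total probability of outcomes no more probable than it. A minimal sketch (ties between equal probabilities are broken arbitrarily here, whereas the strict definition assigns them equal possibility):

```python
def prob_to_poss(p):
    """Dubois-Prade transformation: sort outcomes by decreasing
    probability p(1) >= ... >= p(n); then pi(i) = sum_{j >= i} p(j)."""
    order = sorted(range(len(p)), key=lambda i: -p[i])
    pi = [0.0] * len(p)
    tail = 1.0
    for i in order:
        pi[i] = tail
        tail -= p[i]
    return pi

print(prob_to_poss([0.5, 0.3, 0.2]))
```

The resulting possibility distribution is the least specific one consistent with the probability ordering, which is what makes it a conservative encoding of imprecise expert probabilities for basic events.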
Geology of Raymond Canyon, Sublette Range, western Wyoming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoemaker, W.A.
1984-07-01
Raymond Canyon is located on the west side of the Sublette Range, Lincoln County, Wyoming. The study area is just east of the Idaho border and 10 mi (16 km) southeast of Geneva, Idaho. Formations exposed range in age from Late Pennsylvanian to Tertiary (Pliocene) and include: the lower part of the Wells Formation (Pennsylvanian, total thickness 720 ft or 219 m); the upper part of the Wells Formation and the Phosphoria Formation (both Permian, 153-210 ft or 47-64 m); the Dinwoody Formation (185 ft or 56 m); Woodside Shale (540 ft or 165 m); Thaynes Limestone (2345 ft or 715 m); and Ankareh Formation (930 ft or 283 m), all of Triassic age; the Nugget Sandstone (1610 ft or 491 m), Twin Creek Limestone, Preuss Sandstone, and Stump Formation, all of Jurassic age; and the Salt Lake Formation and the Sublette conglomerate, both Pliocene postorogenic continental deposits. Generally these formations are thinner than in nearby areas to the west and northwest. Raymond Canyon lies on the upper plate of the Tunp thrust and the lower plate of the Crawford thrust of the Idaho-Wyoming thrust belt. Thus, it lies near the middle of the imbricate stack of shallowly dipping thrust faults that formed in the late Mesozoic. Study of the stratigraphy, structure, petrography, and inferred depositional environments exposed in Raymond Canyon may be helpful to those engaged in energy development in the Idaho-Wyoming thrust belt.
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before finding the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system, which performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base, and then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices. 
If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
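The ranking idea can be sketched with the classic result that, for a single fault and independent checks, inspecting hypotheses in decreasing probability-to-cost order minimizes the expected cost of reaching the true failure. The failure names, probabilities, and costs below are hypothetical, not FTDOTS data:

```python
def expected_cost(order, p, c):
    """Expected cumulative inspection cost until the fault is found,
    assuming exactly one of the hypotheses is the true failure."""
    total, spent = 0.0, 0.0
    for name in order:
        spent += c[name]
        total += p[name] * spent
    return total

p = {"relay": 0.1, "sensor": 0.6, "cable": 0.3}    # failure probabilities
c = {"relay": 5.0, "sensor": 20.0, "cable": 2.0}   # isolation cost (minutes)

best = sorted(p, key=lambda k: -p[k] / c[k])       # decreasing p/c ratio
print(best, expected_cost(best, p, c))
```

Here the cheap, fairly likely cable check comes first even though the sensor is the single most likely failure; ordering by probability alone would cost more on average.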
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
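The information-theoretic ingredient of such test sequencing can be sketched as a greedy choice of the test with the largest expected entropy reduction over the current fault hypotheses. The faults, tests, and outcomes below are hypothetical, and a real multiple-fault strategy layers Lagrangian relaxation and digraph construction on top of this single step:

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_gain(test, hypotheses):
    """Expected entropy reduction of a test whose outcome is determined
    by the true hypothesis (`test` maps hypothesis -> outcome)."""
    total = sum(hypotheses.values())
    gain = entropy([p / total for p in hypotheses.values()])
    for outcome in set(test.values()):
        sub = {h: p for h, p in hypotheses.items() if test[h] == outcome}
        weight = sum(sub.values()) / total
        gain -= weight * entropy([p / sum(sub.values()) for p in sub.values()])
    return gain

faults = {"f1": 0.4, "f2": 0.4, "f3": 0.2}          # prior over fault hypotheses
tests = {
    "t_voltage": {"f1": "fail", "f2": "pass", "f3": "pass"},
    "t_noise":   {"f1": "fail", "f2": "fail", "f3": "fail"},  # uninformative
}
best = max(tests, key=lambda name: info_gain(tests[name], faults))
print(best)
```

Applying this choice recursively to each outcome's surviving hypothesis set grows the diagnostic tree (or, with node sharing, the diagnostic digraph the paper advocates).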
MacDonald III, Angus W; Zick, Jennifer L; Chafee, Matthew V; Netoff, Theoden I
2015-01-01
The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry's standard neo-Kraepelinian (DSM) perspective. At the same time, the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect of psychiatry's syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between-diagnosis comorbidity.
Optical fiber-fault surveillance for passive optical networks in S-band operation window
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Chi, Sien
2005-07-01
An S-band (1470 to 1520 nm) fiber laser scheme, which uses multiple fiber Bragg grating (FBG) elements as feedback elements on each passive branch, is proposed and described for in-service fault identification in passive optical networks (PONs). By tuning a wavelength selective filter located within the laser cavity over a gain bandwidth, the fiber-fault of each branch can be monitored without affecting the in-service channels. In our experiment, an S-band four-branch monitoring tree-structured PON system is demonstrated and investigated experimentally.
Sun, Weifang; Yao, Bin; Zeng, Nianyin; He, Yuchao; Cao, Xincheng; He, Wangpeng
2017-01-01
As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction is generated in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering content. Therefore, identifying the critical fault characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognize fault features from the multiscale signal features. The experimental results of the recognition of gear faults show the feasibility and effectiveness of the proposed method, especially for weak gear fault features. PMID:28773148
Ohlin, Henry N.; McLaughlin, Robert J.; Moring, Barry C.; Sawyer, Thomas L.
2010-01-01
The Lake Pillsbury area lies in the eastern part of the northern California Coast Ranges, along the east side of the transform boundary between the Pacific and North American plates (fig. 1). The Bartlett Springs Fault Zone is a northwest-trending zone of faulting associated with this eastern part of the transform boundary. It is presently active, based on surface creep (Svarc and others, 2008), geomorphic expression, offset of Holocene units (Lienkaemper and Brown, 2009), and microseismicity (Bolt and Oakeshott, 1982; Dehlinger and Bolt, 1984; DePolo and Ohlin, 1984). Faults associated with the Bartlett Springs Fault Zone at Lake Pillsbury are steeply dipping and offset older low to steeply dipping faults separating folded and imbricated Mesozoic terranes of the Franciscan Complex and interleaved rocks of the Coast Range Ophiolite and Great Valley Sequence. Parts of this area were mapped in the late 1970s and 1980s by several investigators who were focused on structural relations in the Franciscan Complex (Lehman, 1978; Jordan, 1975; Layman, 1977; Etter, 1979). In the 1980s the U.S. Geological Survey (USGS) mapped a large part of the area as part of a mineral resource appraisal of two U.S. Forest Service Roadless areas. For evaluating mineral resource potential, the USGS mapping was published at a scale of 1:62,500 as a generalized geologic summary map without a topographic base (Ohlin and others, 1983; Ohlin and Spear, 1984). The previously unpublished mapping with topographic base is presented here at a scale of 1:30,000, compiled with other mapping in the vicinity of Lake Pillsbury. The mapping provides a geologic framework for ongoing investigations to evaluate potential earthquake hazards and structure of the Bartlett Springs Fault Zone. This geologic map includes part of Mendocino National Forest (the Elk Creek Roadless Area) in Mendocino, Glenn, and Lake Counties and is traversed by several U.S. Forest Service Routes, including M1 and M6 (fig. 2). 
The study area is characterized by northwest-trending ridges separated by steep-sided valleys. Elevations in this part of the Coast Ranges vary from 1,500 ft (457 m) to 6,600 ft (2,012 m), commonly with gradients of 1,000 ft per mile (90 m per km). The steep slopes are covered by brush, grass, oak, and conifer forests. Access to most of the area is by county roads and Forest Service Route M6 from Potter Valley to Lake Pillsbury and by county road and Forest Service Route M6 and M1 from Upper Lake and State Highway 20. From the north, State Highway 261 provides access from Covelo. Forest Service Route M1 trends roughly north from its intersection with Route M6 south of Hull Mountain and through the Elk Creek and Black Butte Roadless areas to State Highway 261. Side roads used for logging and jeep trails provide additional access in parts of the area.
East Cameron Block 270, offshore Louisiana: a Pleistocene field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, D.S.; Sutley, C.E.; Berlitz, R.E.
1976-01-01
Exploration of the Plio-Pleistocene in the Gulf of Mexico since 1970 has led to the discovery of significant hydrocarbon reserves. One of the better gas fields found to date has been the East Cameron Block 270 field, offshore Louisiana. Utilization of a coordinated exploitation plan with Schlumberger Offshore Services has allowed Pennzoil Co., as operator, to develop and put the Block 270 field on production in minimum time. The structure at Block 270 field is a north-south-trending, faulted nose at 6,000 ft (1,825 m). At the depth of the "G" sandstone (8,700 ft or 2,650 m), the structure is closed; it is elongated north-south and dips in all directions from the Block 270 area. Closure is the result of contemporaneous growth of the east-bounding regional fault. Structural and stratigraphic interpretations from dipmeters were used to determine the most favorable offset locations. The producing zones consist of various combinations of bar-like, channel-like, and distributary-front sandstones. The sediment source for most of the producing zones was southwest of the area, except for two zones which derived their sediments from the north through a system of channels paralleling the east-bounding fault. Computed logs were used to convert conventional logging measurements into a more readily usable form for evaluation. The computed results were used for reserve calculations, reservoir-quality determinations, and confirmation of depositional environments as determined from other sources.
Hoenicka, Hans; Lehnhardt, Denise; Nilsson, Ove; Hanelt, Dieter; Fladung, Matthias
2014-10-01
In forest tree species, the reproductive phase is reached only after many years or even decades of juvenile growth. Different early flowering systems based on the genetic transfer of heat-shock promoter driven flowering-time genes have been proposed for poplar; however, no fertile flowers were reported until now. Here, we studied flower and pollen development in both HSP::AtFT and wild-type male poplar in detail and developed an optimized heat treatment protocol to obtain fertile HSP::AtFT flowers. Anthers from HSP::AtFT poplar flowers containing fertile pollen grains showed arrested development in stage 12 instead of reaching phase 13 as do wild-type flowers. Pollen grains could be isolated under the binocular microscope and were used for intra- and interspecific crossings with wild-type poplar. F1-seedlings segregating the HSP::AtFT gene construct according to Mendelian laws were obtained. A comparison between intra- and interspecific crossings revealed that genetic transformation had no detrimental effects on F1-seedlings. However, interspecific crossings, a broadly accepted breeding method, produced 47% seedlings with an aberrant phenotype. The early flowering system presented in this study opens new possibilities for accelerating breeding of poplar and other forest tree species. Fast breeding and the selection of transgene-free plants, once the breeding process is concluded, can represent an attractive alternative even under very restrictive regulations. © 2014 Society for Experimental Biology, Association of Applied Biologists and John Wiley & Sons Ltd.
McBride, J.H.; Nelson, W.J.
2001-01-01
High-resolution seismic reflection surveys document tectonic faults that displace Pleistocene and older strata just beyond the northeast termination of the New Madrid seismic zone, at the northernmost extent of the Mississippi embayment. These faults, which are part of the Fluorspar Area fault complex in southeastern Illinois, are directly in line with the northeast-trending seismic zone. The reflection data were acquired using an elastic weight-drop source recorded to 500 msec by a 48-geophone array (24-fold) with a 10-ft (??3.0m) station interval. Recognizable reflections were recorded to about 200 msec (100-150 m). The effects of multiple reflections, numerous diffractions, low apparent velocity (i.e., steeply dipping) noise, and the relatively low-frequency content of the recorded signal provided challenges for data processing and interpreting subtle fault offsets. Data processing steps that were critical to the detection of faults included residual statics, post-stack migration, deconvolution, and noise-reduction filtering. Seismic migration was crucial for detecting and mitigating complex fault-related diffraction patterns, which produced an apparent 'folding' of reflectors on unmigrated sections. Detected individual offsets of shallow reflectors range from 5 to 10 m for the top of Paleozoic bedrock and younger strata. The migrated sections generally indicate vertical to steeply dipping normal and reverse faults, which in places outline small horsts and/or grabens. Tilting or folding of stratal reflectors associated with faulting is also locally observed. At one site, the observed faulting is superimposed over a prominent antiformal structure, which may itself be a product of the Quaternary deformation that produced the steep normal and reverse faults. Our results suggest that faulting of the Paleozoic bedrock and younger sediments of the northern Mississippi embayment is more pervasive and less localized than previously thought.
Muñoz-Fambuena, Natalia; Mesejo, Carlos; González-Mas, M. Carmen; Primo-Millo, Eduardo; Agustí, Manuel; Iglesias, Domingo J.
2012-01-01
Background and Aims Gene determination of flowering is the result of complex interactions involving both promoters and inhibitors. In this study, the expression of flowering-related genes at the meristem level in alternate-bearing citrus trees is analysed, together with the interplay between buds and leaves in the determination of flowering. Methods First, defruiting experiments were performed to manipulate blossoming intensity in ‘Moncada’ mandarin, Citrus clementina. Defoliation was then performed to elucidate the role leaves play in the flowering process. In both cases, the activity of flowering-related genes was investigated at the flower induction (November) and differentiation (February) stages. Key Results Study of the expression pattern of flowering genes in buds from on (fully loaded) and off (without fruit) trees revealed that homologues of FLOWERING LOCUS T (CiFT), TWIN SISTER OF FT (TSF), APETALA1 (CsAP1) and LEAFY (CsLFY) were negatively affected by fruit load. CiFT and TSF activities showed a marked increase in buds from off trees through the study period (ten-fold in November). By contrast, expression of the homologues of the flowering inhibitors TERMINAL FLOWER 1 (CsTFL), TERMINAL FLOWER 2 (TFL2) and FLOWERING LOCUS C (FLC) was generally lower in off trees. Regarding floral identity genes, the increase in CsAP1 expression in off trees was much greater in buds than in leaves, and significant variations in CsLFY expression (approx. 20 %) were found only in February. Defoliation experiments further revealed that the absence of leaves completely abolished blossoming and severely affected the expression of most of the flowering-related genes, particularly decreasing the activity of floral promoters and of CsAP1 at the induction stage. Conclusions These results suggest that the presence of fruit affects flowering by greatly altering gene expression not only at the leaf but also at the meristem level.
Leaves are required for flowering to occur, and their absence strongly reduces the activity of floral promoters and identity genes. PMID:22915579
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of the fault data. This is especially the case in the low to moderate seismicity regions of Europe, where slow-slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow us to identify the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and also to provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool will be illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. At each node of the logic tree, the different options that could be considered at that step of the fault-related seismic hazard computation are represented. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry/slip rate of faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e., minimum distance between faults) and a second that relies on physically-based simulations. The following nodes represent, for each rupture scenario, different rupture forecast models (i.e., characteristic or Gutenberg-Richter) and, for a given rupture forecast, two probability models commonly used in seismic hazard assessment: Poissonian or time-dependent. The final node represents an exhaustive set of ground motion prediction equations chosen to be compatible with the region. Finally, the expected probability of exceeding a given ground motion level is computed at each site. Results will be discussed for a few specific localities of the West Corinth Gulf.
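The weighted combination over logic-tree branches that the abstract describes can be sketched as a sum over branch tips. All branch names, weights, and annual exceedance rates below are invented placeholders (not values from the Corinth study); a Poisson model converts an annual rate into a probability of exceedance over a time window:

```python
import math
from itertools import product

# Hypothetical branch options and weights for two logic-tree nodes
# (geometry/slip-rate and rupture-forecast model).
geometries = [("segmented", 0.6), ("linked", 0.4)]
forecasts = [("characteristic", 0.5), ("gutenberg_richter", 0.5)]

# Toy annual rate of exceeding a target ground-motion level for each
# combination of branch choices (placeholder numbers).
annual_rate = {
    ("segmented", "characteristic"): 0.0020,
    ("segmented", "gutenberg_richter"): 0.0035,
    ("linked", "characteristic"): 0.0015,
    ("linked", "gutenberg_richter"): 0.0030,
}

def poisson_prob(rate, years):
    """Probability of at least one exceedance in `years`, Poisson model."""
    return 1.0 - math.exp(-rate * years)

# Weighted mean 50-year exceedance probability over all branch tips.
total = 0.0
for (g, wg), (f, wf) in product(geometries, forecasts):
    total += wg * wf * poisson_prob(annual_rate[(g, f)], 50.0)

print(round(total, 4))  # ≈ 0.119 for these placeholder numbers
```

In a full implementation each tip would carry a hazard curve rather than a single rate, but the weighted-mean structure over the tree is the same.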
Fruit regulates seasonal expression of flowering genes in alternate-bearing ‘Moncada’ mandarin
Muñoz-Fambuena, Natalia; Mesejo, Carlos; Carmen González-Mas, M.; Primo-Millo, Eduardo; Agustí, Manuel; Iglesias, Domingo J.
2011-01-01
Background and Aims The presence of fruit has been widely reported to act as an inhibitor of flowering in fruit trees. This study is an investigation into the effect of fruit load on flowering of ‘Moncada’ mandarin and on the expression of putative orthologues of genes involved in flowering pathways, to provide insight into the molecular mechanisms underlying alternate bearing in citrus. Methods The relationship between fruit load and flowering intensity was examined first. Defruiting experiments were further conducted to demonstrate the causal effect of fruit removal upon flowering. Finally, the activity of flowering-related genes was investigated to determine the extent to which their seasonal expression is affected by fruit yield. Key Results First observations and defruiting experiments indicated a significant inverse relationship between preceding fruit load and flowering intensity. Moreover, data indicated that when fruit remained on the tree from November onwards, a dramatic inhibition of flowering occurred the following spring. The study of the expression pattern of flowering genes in on (fully loaded) and off (without fruit) trees revealed that homologues of FLOWERING LOCUS T (FT), SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1), APETALA1 (AP1) and LEAFY (LFY) were negatively affected by fruit load. Thus, CiFT expression showed a progressive increase in leaves from off trees through the study period, with the highest differences found from December onwards (10-fold). Whereas differences in the relative expression of SOC1 only reached significance from September to mid-December, CsAP1 expression was consistently higher in those trees through the whole study period. Significant variations in CsLFY expression were found only in late February (close to 20 %). On the other hand, the expression of the homologues of TERMINAL FLOWER 1 (TFL1) and FLOWERING LOCUS C (FLC) did not appear to be related to fruit load.
Conclusions These results suggest for the first time that fruit inhibits flowering by repressing CiFT and SOC1 expression in leaves of alternate-bearing citrus. Fruit also reduces CsAP1 expression in leaves, and the significant increase in leaf CsLFY expression from off trees in late February was associated with the onset of floral differentiation. PMID:21856639
Yamagishi, Norioko; Li, Chunjiang; Yoshikawa, Nobuyuki
2016-01-01
Plant viral vectors are superior tools for genetic manipulation, allowing rapid induction or suppression of expression of a target gene in plants. This is a particularly effective technology for use in breeding fruit trees, which are difficult to manipulate using recombinant DNA technologies. We reported previously that if apple seed embryos (cotyledons) are infected with an Apple latent spherical virus (ALSV) vector (ALSV-AtFT/MdTFL1) concurrently expressing the Arabidopsis thaliana florigen (AtFT) gene and suppressing the expression of the apple MdTFL1-1 gene, the period prior to initial flowering (which generally lasts 5-12 years) is reduced to about 2 months. In this study, we examined whether ALSV vector technology can be used to promote flowering in pear, which undergoes a very long juvenile period (germination to flowering) similar to that of apple. The MdTFL1 sequence in ALSV-AtFT/MdTFL1 was replaced with a portion of the pear PcTFL1-1 gene. The resulting virus (ALSV-AtFT/PcTFL1) and ALSV-AtFT/MdTFL1 were used individually to inoculate pear cotyledons immediately after germination, in two inoculation groups. Plants inoculated with ALSV-AtFT/MdTFL1 and ALSV-AtFT/PcTFL1 initiated flower bud formation one to three months after inoculation and subsequently exhibited continuous flowering and fruit production following pollination. Conversely, Japanese pear exhibited extremely low systemic infection rates when inoculated with ALSV-AtFT/MdTFL1 and failed to exhibit any induction of flowering. We also developed a simple method for eliminating ALSV vectors from infected plants. An evaluation of this method on infected apple and pear seedlings revealed that a 4-week high-temperature (37°C) incubation of ALSV-infected apples and pears disabled the movement of ALSV into newly growing tissues. This demonstrates that high-temperature treatment alone can easily eliminate ALSV from infected fruit trees.
A method combining the promotion of flowering in apple and pear by ALSV vector with an ALSV elimination technique is expected to see future application as a new plant breeding technique that can significantly shorten the breeding periods of apple and pear.
Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua
2015-01-01
The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. 
This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.
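The elastic-rebound (renewal) probabilities the abstract contrasts with a Poisson model can be sketched with a lognormal renewal distribution parameterised by a mean recurrence time and an aperiodicity (coefficient of variation). This is our own illustrative formulation, not the UCERF3 implementation, and all numbers are placeholders:

```python
import math

def lognorm_cdf(t, mean, aperiodicity):
    """CDF of a lognormal renewal distribution, parameterised (loosely)
    by mean recurrence time and aperiodicity (coefficient of variation)."""
    sigma2 = math.log(1.0 + aperiodicity ** 2)
    mu = math.log(mean) - 0.5 * sigma2
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / math.sqrt(2.0 * sigma2)))

def conditional_prob(elapsed, window, mean, aperiodicity):
    """P(event in the next `window` years | `elapsed` years since last event)."""
    f0 = lognorm_cdf(elapsed, mean, aperiodicity)
    f1 = lognorm_cdf(elapsed + window, mean, aperiodicity)
    return (f1 - f0) / (1.0 - f0)

# Placeholder fault: 150-year mean recurrence, aperiodicity 0.5.
# The 30-year conditional probability grows as the open interval lengthens,
# which is the elastic-rebound "gain" relative to a Poisson model.
early = conditional_prob(50.0, 30.0, 150.0, 0.5)
late = conditional_prob(200.0, 30.0, 150.0, 0.5)
poisson = 1.0 - math.exp(-30.0 / 150.0)
print(round(early, 3), round(late, 3), round(poisson, 3))
```

Handling a fault with no date-of-last-event constraint (the "historic open interval" in the abstract) amounts to conditioning on `elapsed` exceeding the historically observed quiet period rather than on a known value.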
Folding and unfolding phylogenetic trees and networks.
Huber, Katharina T; Moulton, Vincent; Steel, Mike; Wu, Taoyang
2016-12-01
Phylogenetic networks are rooted, labelled directed acyclic graphs which are commonly used to represent reticulate evolution. There is a close relationship between phylogenetic networks and multi-labelled trees (MUL-trees). Indeed, any phylogenetic network N can be "unfolded" to obtain a MUL-tree U(N) and, conversely, a MUL-tree T can in certain circumstances be "folded" to obtain a phylogenetic network F(T) that exhibits T. In this paper, we study properties of the operations U and F in more detail. In particular, we introduce the class of stable networks, phylogenetic networks N for which F(U(N)) is isomorphic to N, characterise such networks, and show that they are related to the well-known class of tree-sibling networks. We also explore how the concept of displaying a tree in a network N can be related to displaying the tree in the MUL-tree U(N). To do this, we develop a phylogenetic analogue of graph fibrations. This allows us to view U(N) as the analogue of the universal cover of a digraph, and to establish a close connection between displaying trees in U(N) and reconciling phylogenetic trees with networks.
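The unfolding operation U can be illustrated with a toy example: below each reticulation node (a node with more than one parent), the subtree is duplicated once per parent, so the leaves of U(N) form a multiset. The tiny network and node names below are our own illustration, not an example from the paper:

```python
# A rooted network as {node: [children]}; leaves have no entry.
# Node "h" is a reticulation: it is a child of both "a" and "b".
network = {
    "root": ["a", "b"],
    "a": ["x", "h"],
    "b": ["h", "y"],
    "h": ["z"],
}

def unfold_leaves(node):
    """Return the multiset (sorted list) of leaf labels of U(N) below node."""
    children = network.get(node, [])
    if not children:
        return [node]
    leaves = []
    for child in children:
        # Each parent of a reticulation gets its own copy of the subtree,
        # so shared descendants are counted once per path from the root.
        leaves.extend(unfold_leaves(child))
    return sorted(leaves)

print(unfold_leaves("root"))  # leaf "z" appears twice in the MUL-tree
```

Folding F goes the other way, merging isomorphic duplicated subtrees back into a single reticulate subgraph; the paper's "stable" networks are exactly those recovered unchanged by F(U(N)).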
Fault Tolerant Frequent Pattern Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan
FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and MPI advanced features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, and incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well-designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
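The core idea of recovering from a failure by restoring an in-memory checkpoint rather than re-reading from disk can be shown with a deliberately tiny single-process sketch of FP-Growth's first pass (item counting). This is our own toy, not the paper's MPI implementation; the failure-injection mechanism and all names are invented:

```python
import copy

# Toy transaction database for the item-counting pass.
transactions = [["a", "b"], ["a", "c"], ["a", "b", "c"], ["b"]]

def count_with_recovery(fail_at=None):
    """Count item frequencies, checkpointing state in memory every two
    transactions; optionally simulate one failure before transaction
    `fail_at` and recover by restoring the last checkpoint."""
    counts = {}
    checkpoint = ({}, 0)   # (saved counts, next transaction index)
    failed = False
    i = 0
    while i < len(transactions):
        if i == fail_at and not failed:
            failed = True  # fail at most once
            # Recovery: discard live state, restore the in-memory checkpoint.
            counts, i = copy.deepcopy(checkpoint[0]), checkpoint[1]
            continue
        for item in transactions[i]:
            counts[item] = counts.get(item, 0) + 1
        i += 1
        if i % 2 == 0:  # periodic in-memory checkpoint
            checkpoint = (copy.deepcopy(counts), i)
    return counts

# A failure before transaction 3 is recovered transparently.
assert count_with_recovery(fail_at=3) == count_with_recovery(fail_at=None)
```

The paper's O(1)-space claim comes from reusing the dataset's own memory for the checkpoint instead of a separate copy; the toy above uses a separate copy purely for clarity.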
Depositional environment of downdip Yegua (Eocene) sandstones, Jackson County, Texas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitten, C.J.; Berg, R.R.
Yegua sandstones at a depth of 8300-8580 ft (2530-2615 m) were partly cored in the Arco Jansky 1 dry hole. Total thickness of the sandstone section is approximately 240 ft (73 m). The sandstones are enclosed in thick marine shales and are about 20 mi (32 km) downdip from thicker and more abundant sandstones in the Yegua Formation. The section is similar to reservoirs recently discovered in the area at the Toro Grande (1984), Lost Bridge (1984), and El Torito (1985) fields. The sandstones are fine to very fine grained and occur in thin beds that are 0.5-9 ft (0.15-2.7 m) thick. Sedimentary structures within the beds range from a lower massive division to a laminated or rippled upper division. Grain size within beds fines upward from 0.18 mm at the base to 0.05 mm at the top. The sandstones are interpreted to be turbidites of the AB type that were deposited within channels. The sandstones contain an average of 50% quartz and are classified as volcanic-arenites to feldspathic litharenites. Carbonate cement ranges from 0 to 27%. Average porosity is 29% and permeabilities are in the range of 60-1600 md in the clean sandstones. Much of the porosity is secondary and is the result of the dissolution of cements, volcanic rock fragments, and feldspar grains. Yegua sandstones produce gas and condensate at nearby Toro Grande field on a gentle, faulted anticline. The local trend of reservoir sandstones may be controlled in part by faulting that was contemporaneous with deposition.
Qin, Cheng; Chen, Weiwei; Shen, Jiajia; Cheng, Linming; Akande, Femi; Zhang, Ke; Yuan, Chen; Li, Chunyang; Zhang, Pengcheng; Shi, Nongnong; Cheng, Qi; Liu, Yule; Jackson, Stephen; Hong, Yiguo
2017-06-01
Virus-induced flowering (VIF) uses virus vectors to express Flowering Locus T (FT) to induce flowering in plants. This approach has recently attracted wide interest for its practical applications in accelerating breeding in crops and woody fruit trees. However, the insight into VIF and its potential as a powerful tool for dissecting florigenic proteins remained to be elucidated. Here, we describe the mechanism and further applications of Potato virus X (PVX)-based VIF in the short-day Nicotiana tabacum cultivar Maryland Mammoth. Ectopic delivery of Arabidopsis (Arabidopsis thaliana) AtFT by PVX/AtFT did not induce the expression of the endogenous FT ortholog NtFT4; however, it was sufficient to trigger flowering in Maryland Mammoth plants grown under noninductive long-day conditions. Infected tobacco plants developed no systemic symptoms, and the PVX-based VIF did not cause transgenerational flowering. We showed that the PVX-based VIF is a much more rapid method to examine the impacts of single amino acid mutations on AtFT for floral induction than making individual transgenic Arabidopsis lines for each mutation. We also used the PVX-based VIF to demonstrate that adding a His- or FLAG-tag to the N or C terminus of AtFT could affect its florigenic activity and that this system can be applied to assay the function of FT genes from heterologous species, including tomato (Solanum lycopersicum) SFT and rice (Oryza sativa) Hd3a. Thus, the PVX-based VIF represents a simple and efficient system to identify individual amino acids that are essential for FT-mediated floral induction and to test the ability of mono- and dicotyledonous FT genes and FT fusion proteins to induce flowering. © 2017 American Society of Plant Biologists. All Rights Reserved.
Yarur, Antonia; Soto, Esteban; León, Gabriel; Almeida, Andrea Miyasaka
2016-12-01
The FT gene is expressed in leaves and buds and is involved in floral meristem determination and bud development in sweet cherry. In woody perennial fruit trees, floral determination, dormancy, and bloom depend on the perception of different environmental and endogenous cues, which converge on a systemic signaling gene known as FLOWERING LOCUS T (FT). In long-day flowering plants, FT is expressed in the leaves on long days. The protein travels through the phloem to the shoot apical meristem, where it induces flower determination. In perennial plants, meristem determination and flowering are separated by a dormancy period. Meristem determination takes place in summer, but flowering occurs only after a dormancy period and cold accumulation during winter. The roles of FT in meristem determination, dormancy release, and flowering in perennial plants are not completely clear. We cloned FT from sweet cherry (Prunus avium) and analyzed its expression pattern in leaves and floral buds during spring and summer. Phylogenetic analysis shows high identity of the cloned FT sequence with orthologous genes from other Rosaceae species. Our results show that FT is expressed in both leaves and floral buds and that its expression increases when day length reaches 12 h. The peak in FT expression coincided with the expression of floral meristem identity genes and with morphological changes typical of floral meristem determination. The Edi-0 Arabidopsis ecotype, which requires vernalization to flower, was transformed with a construct for overexpression of PavFT. These transgenic plants showed an early-flowering phenotype without cold treatment. Our results suggest that FT is involved in floral meristem determination and bud development in sweet cherry. Moreover, we show that FT is expressed in both leaves and floral buds in this species, in contrast to annual plants.
NASA Astrophysics Data System (ADS)
Thomson, S. N.; Lefebvre, C.; Umhoefer, P. J.; Darin, M. H.; Whitney, D.; Teyssier, C. P.
2016-12-01
The central part of the Anatolian microplate in Turkey forms a complex tectonic zone situated between ongoing convergence of the Arabian and Eurasian plates to the east, and lateral escape of the Anatolian microplate as a rigid block to the west facilitated by two major strike-slip faults (the North and East Anatolian fault zones) that transitions westward into an extensional tectonic regime in western Turkey and the Aegean Sea related to subduction retreat. However, the geodynamic processes behind the transition from collision to escape, and the timing and nature of this transition, are complex and remain poorly understood. To gain a better understanding of the timing and nature of this transition, including the debated timing of ca. 35-20 Ma onset of collision between Arabia and Eurasia, we have undertaken a comprehensive low-temperature thermochronologic study in central Turkey to provide a record of exhumation patterns. We have collected over 150 samples, focused on the Central Anatolian Crystalline Complex (CACC), the Central Anatolian fault zone (CAFZ - proposed as a major lithosphere-scale structure that may also be related to onset of tectonic escape), and Eocene to Neogene sedimentary basins. Results include 113 apatite fission track (FT) ages (62 bedrock ages and 51 detrital ages), 26 detrital zircon FT ages, 218 apatite (U-Th)/He (He) ages from 84 mostly bedrock samples, and 15 zircon He ages from 6 bedrock samples. Our most significant new finding is identification of an early Miocene (ca. 22-15 Ma) phase of rapid cooling seen in the CACC. These cooling ages are localized in the footwalls of several large high-angle NW-SE trending normal faults, and imply significant footwall uplift and exhumation at this time. This early Miocene exhumation is restricted to entirely west of the CAFZ, and supports this fault marking a major tectonic transition active at this time. 
East of the CAFZ, apatite FT ages in sedimentary rocks show Eocene and older detrital ages despite much higher elevations (up to 3,000 m), suggesting that uplift of the fault block east of the CAFZ has occurred since the late Miocene. An earlier Eocene (40-35 Ma) phase of cooling and exhumation is identified in deformed Paleocene-Eocene sedimentary rocks on either side of the CAFZ, likely related to a regional episode of shortening during final closure of the inner Tauride suture.
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
Failure mode effect analysis and fault tree analysis as a combined methodology in risk management
NASA Astrophysics Data System (ADS)
Wessiani, N. A.; Yoshio, F.
2018-04-01
Many studies have reported the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most of these studies choose only one of the two methods for their risk management methodology. Combining the two methods, on the other hand, reduces the drawbacks each has when implemented separately. This paper combines FMEA and FTA into a single risk assessment methodology. A case study in a metal company illustrates how the methodology can be implemented: the combined approach assesses the internal risks that occur in the production process, and those risks are then mitigated according to their risk levels.
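A minimal sketch of how the two methods can be chained: FMEA ranks failure modes by Risk Priority Number (RPN = severity × occurrence × detection), and the highest-ranked modes become basic events of a fault tree whose OR/AND gates yield a top-event probability. All mode names and numbers below are invented for illustration, not taken from the paper's case study:

```python
# (name, severity, occurrence, detection, event probability) — toy values.
failure_modes = [
    ("motor overheats",   8, 4, 3, 0.02),
    ("sensor drift",      5, 6, 2, 0.05),
    ("operator mislabel", 3, 2, 7, 0.01),
]

def rpn(mode):
    """Risk Priority Number: severity x occurrence x detection."""
    _, severity, occurrence, detection, _ = mode
    return severity * occurrence * detection

# FMEA step: rank failure modes by RPN.
ranked = sorted(failure_modes, key=rpn, reverse=True)

# FTA step: combine independent basic events with logic gates.
def gate_or(probs):   # P(any occurs) = 1 - prod(1 - p)
    out = 1.0
    for p in probs:
        out *= 1.0 - p
    return 1.0 - out

def gate_and(probs):  # P(all occur) = prod(p)
    out = 1.0
    for p in probs:
        out *= p
    return out

# Top event: either of the two highest-RPN modes occurs (OR gate).
top_two = [m[4] for m in ranked[:2]]
print([m[0] for m in ranked])
print(round(gate_or(top_two), 4))  # ≈ 0.069 for these placeholder numbers
```

In practice the fault tree would be built from the process structure rather than a flat top-two cut, but the hand-off (FMEA output feeding FTA basic events) is the point of the combined methodology.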
Chemically inducing lightwood formation in southern pines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, D.R.; Peters, W.J.
1977-06-01
Chemical induction of lightwood formation promises to be a new method of naval stores production. A broad range of paraquat concentrations and many methods of application induced lightwood formation. Loblolly, slash, and longleaf pines were found to produce increased amounts of turpentine and tall oil in response to paraquat treatment. In one experiment, loblolly pines treated with 8 percent paraquat on a single bark streak yielded, 9 months after treatment, an average of 10 pounds more extractives per tree than did untreated trees. Most of the yield increase was in the lower portion of the tree near the wound, but some increase was noted for heights as great as 27 ft. In another experiment, 8 to 10 inch DBH loblolly, slash, and longleaf pine trees were treated with 0.5, 1.0, and 2.5 percent paraquat applied in ax chops spanning one-third the circumference of the trees. All treated trees yielded more resin acids than did control trees.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, P.T.
Devillier field is an overpressured gas reservoir producing from upper Vicksburg (lower Oligocene) Loxostoma B Delicata-age sands which pinch-out near the crest of an anticline located on the downthrown side of the Vicksburg flexure. The field is located 50 miles (80 km) east of Houston in NE Chambers County, Texas. The first year's production per well has averaged 1.0 billion cu ft of gas and 13,000 bbl condensate, for the 7 wells completed since the field discovery in 1975. Calculated open flows have ranged as high as 600,000 MCFD of gas from an average net-sand interval of 25 ft (7.6 m) at depths between 10,550 and 10,750 ft (3216 and 3277 m). The upper Vicksburg sediments are considered to have been deposited in a shallow-marine environment. The field pay, the Loxostoma sand, is interpreted to have been deposited in a delta distributary mouth bar.
GR712RC- Dual-Core Processor- Product Status
NASA Astrophysics Data System (ADS)
Sturesson, Fredrik; Habinc, Sandi; Gaisler, Jiri
2012-08-01
The GR712RC System-on-Chip (SoC) is a dual-core LEON3FT system suitable for advanced high-reliability space avionics. Fault tolerance features from Aeroflex Gaisler’s GRLIB IP library and an implementation using the Ramon Chips RadSafe cell library enable superior radiation hardness. The GR712RC device has been designed to provide high processing power by including two LEON3FT 32-bit SPARC V8 processors, each with its own high-performance IEEE 754 compliant floating-point unit and SPARC reference memory management unit. This high processing power is combined with a large number of serial interfaces, ranging from high-speed links for data transfers to low-speed control buses for commanding and status acquisition.
Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.
2009-01-01
The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
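The train/classify workflow described above can be sketched with scikit-learn; note that its DecisionTreeClassifier implements CART rather than C4.5, and the synthetic two-sensor "leak" data below merely stand in for DRTM simulation output. Class labels, sensor names, and leak magnitudes are all invented.

```python
# Sketch of decision-tree leak classification on invented sensor data
# (pressure, flow) for nominal operation and two leak severities.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def simulate(pressure_drop, n=50):
    """Fake two-sensor readings: a leak lowers pressure and raises flow."""
    pressure = 100.0 - pressure_drop + rng.normal(0, 0.5, n)
    flow = 10.0 + 0.3 * pressure_drop + rng.normal(0, 0.2, n)
    return np.column_stack([pressure, flow])

X = np.vstack([simulate(0.0), simulate(5.0), simulate(12.0)])
y = np.array([0] * 50 + [1] * 50 + [2] * 50)   # 0=nominal, 1=leak A, 2=leak B

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data
```

A learned tree like this is easy to inspect (thresholds on named sensors), which is the interpretability advantage the abstract cites for choosing decision trees.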
MacDonald III, Angus W.; Zick, Jennifer L.; Chafee, Matthew V.; Netoff, Theoden I.
2016-01-01
The grand challenges of schizophrenia research are linking the causes of the disorder to its symptoms and finding ways to overcome those symptoms. We argue that the field will be unable to address these challenges within psychiatry’s standard neo-Kraepelinian (DSM) perspective. At the same time the current corrective, based in molecular genetics and cognitive neuroscience, is also likely to flounder due to its neglect of psychiatry’s syndromal structure. We suggest adopting a new approach long used in reliability engineering, which also serves as a synthesis of these approaches. This approach, known as fault tree analysis, can be combined with extant neuroscientific data collection and computational modeling efforts to uncover the causal structures underlying the cognitive and affective failures in people with schizophrenia as well as other complex psychiatric phenomena. By making explicit how causes combine from basic faults to downstream failures, this approach makes affordances for: (1) causes that are neither necessary nor sufficient in and of themselves; (2) within-diagnosis heterogeneity; and (3) between-diagnosis comorbidity. PMID:26779007
Heywood, Charles E.; Griffith, Jason M.
2013-01-01
Groundwater withdrawals have caused saltwater to encroach into freshwater-bearing aquifers beneath Baton Rouge, Louisiana. Groundwater investigations in the 1960s identified a freshwater-saltwater interface located at the Baton Rouge Fault, across which abrupt changes in water levels occur. Aquifers south of the fault generally contain saltwater, and aquifers north of the fault contain freshwater, though limited saltwater encroachment has been detected within 7 of the 10 aquifers north of the fault. The 10 aquifers beneath the Baton Rouge area, which includes East and West Baton Rouge Parishes, Pointe Coupee Parish, and East and West Feliciana Parishes, provided about 167 million gallons per day (Mgal/d) for public supply and industrial use in 2010. Groundwater withdrawals from an aquifer that is 2,000 feet (ft) deep in East Baton Rouge Parish (the “2,000-foot” sand of the Baton Rouge area) have caused water-level drawdown up to 356 ft and induced saltwater movement northward across the fault. Groundwater withdrawals from the “2,000-foot” sand averaged 23.9 Mgal/d during 2010. Saltwater encroachment threatens wells that are located about 3 miles north of the fault, where industrial withdrawals account for about 66 percent of the water withdrawn from the “2,000-foot” sand in East Baton Rouge Parish. Constant- and variable-density groundwater models were developed with the MODFLOW and SEAWAT groundwater modeling codes to evaluate strategies to control saltwater migration, including changes in the distribution of groundwater withdrawals and installation of “scavenger” wells to intercept saltwater before it reaches existing production wells. Five hypothetical scenarios simulated the effects of different groundwater withdrawal options on groundwater levels within the “1,500-foot” sand and the “2,000-foot” sand and the transport of saltwater within the “2,000-foot” sand.
Scenario 1 is considered a base case for comparison to the other four scenarios and simulates continuation of 2007 reported groundwater withdrawals. Scenario 2 simulates discontinuation of withdrawals from seven selected industrial wells located in the northwest corner of East Baton Rouge Parish, and water levels within the “1,500-foot” sand were predicted to be about 15 to 20 ft higher under this withdrawal scenario than under scenario 1. Scenario 3 simulates the effects of a scavenger well, which withdraws water from the base of the “2,000-foot” sand at a rate of 2 Mgal/d, at two possible locations on water levels and concentrations within the “2,000-foot” sand. In comparison to the concentrations simulated in scenario 1, operation of the scavenger well in the locations specified in scenario 3 reduces the chloride concentrations at all existing chloride-observation well locations. Scenario 4 simulates a 3.6 Mgal/d reduction in total groundwater withdrawals from selected wells screened in the “2,000-foot” sand that are located in the Baton Rouge industrial district. For scenario 4, the median and mean plume concentrations are slightly lower than scenario 1. Scenario 5 simulates the effect of total cessation of groundwater withdrawals from the “2,000-foot” sand in the industrial district. The simulated chloride-concentration distribution in scenario 5 reflects the change in groundwater flow direction. Although some saltwater would continue to cross the Baton Rouge Fault and encroach toward municipal supply wells, further encroachment toward the industrial district would be abated.
Geology of Joshua Tree National Park geodatabase
Powell, Robert E.; Matti, Jonathan C.; Cossette, Pamela M.
2015-09-16
The database in this Open-File Report describes the geology of Joshua Tree National Park and was completed in support of the National Cooperative Geologic Mapping Program of the U.S. Geological Survey (USGS) and in cooperation with the National Park Service (NPS). The geologic observations and interpretations represented in the database are relevant to both the ongoing scientific interests of the USGS in southern California and the management requirements of NPS, specifically of Joshua Tree National Park (JOTR). Joshua Tree National Park is situated within the eastern part of California’s Transverse Ranges province and straddles the transition between the Mojave and Sonoran deserts. The geologically diverse terrain that underlies JOTR reveals a rich and varied geologic evolution, one that spans nearly two billion years of Earth history. The Park’s landscape is the current expression of this evolution, its varied landforms reflecting the differing origins of underlying rock types and their differing responses to subsequent geologic events. Crystalline basement in the Park consists of Proterozoic plutonic and metamorphic rocks intruded by a composite Mesozoic batholith of Triassic through Late Cretaceous plutons arrayed in northwest-trending lithodemic belts. The basement was exhumed during the Cenozoic and underwent differential deep weathering beneath a low-relief erosion surface, with the deepest weathering profiles forming on quartz-rich, biotite-bearing granitoid rocks. Disruption of the basement terrain by faults of the San Andreas system began ca. 20 Ma and the JOTR sinistral domain, preceded by basalt eruptions, began perhaps as early as ca. 7 Ma, but no later than 5 Ma. Uplift of the mountain blocks during this interval led to erosional stripping of the thick zones of weathered quartz-rich granitoid rocks to form etchplains dotted by bouldery tors—the iconic landscape of the Park.
The stripped debris filled the basins along the fault zones. Mountain ranges and basins in the Park exhibit an east-west physiographic grain controlled by left-lateral fault zones that form a sinistral domain within the broad zone of dextral shear along the transform boundary between the North American and Pacific plates. Geologic and geophysical evidence reveals that movement on the sinistral fault zones has produced left steps along the zones, resulting in the development of sub-basins beneath Pinto Basin and Shavers and Chuckwalla Valleys. The sinistral fault zones connect the Mojave Desert dextral faults of the Eastern California Shear Zone to the north and east with the Coachella Valley strands of the southern San Andreas Fault Zone to the west. Quaternary surficial deposits accumulated in alluvial washes and playas and lakes along the valley floors; in alluvial fans, washes, and sheet wash aprons along piedmonts flanking the mountain ranges; and in eolian dunes and sand sheets that span the transition from valley floor to piedmont slope. Sequences of Quaternary pediments are planed into piedmonts flanking valley-floor and upland basins, each pediment in turn overlain by successively younger residual and alluvial surficial deposits.
Improved FTA methodology and application to subsea pipeline reliability design.
Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan
2014-01-01
An innovative logic tree, the Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different thinking approach for risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. Resulting improvements are summarized in a comparison table. PMID:24667681
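To make the FTA terms used above concrete, here is a minimal cut-set computation for a toy fault tree (the top event occurs if basic event C occurs, or if A and B both occur); the tree itself is invented and is not the pipeline example.

```python
# Minimal cut sets of a small fault tree. Gates map a name to
# ("AND" | "OR", children); anything not in `tree` is a basic event.
from itertools import product

tree = {
    "TOP": ("OR",  ["G1", "C"]),
    "G1":  ("AND", ["A", "B"]),
}

def cut_sets(node):
    """Expand a node into its cut sets (each a frozenset of basic events)."""
    if node not in tree:                       # basic event
        return [frozenset([node])]
    gate, children = tree[node]
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                           # OR: any child's cut set suffices
        return [s for sets in child_sets for s in sets]
    # AND: union one cut set from each child
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize(sets):
    """Keep only minimal cut sets (drop proper supersets)."""
    return sorted((s for s in sets if not any(t < s for t in sets)), key=sorted)

mcs = minimize(cut_sets("TOP"))
print(mcs)   # two minimal cut sets: {A, B} and {C}
```

This brute-force expansion is exponential in general; production FTA codes use BDDs or MOCUS-style algorithms, but the semantics are the same.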
Callahan, Ann; Scorza, Ralph
2012-01-01
The Flowering Locus T1 (FT1) gene from Populus trichocarpa under the control of the 35S promoter was transformed into European plum (Prunus domestica L). Transgenic plants expressing higher levels of FT flowered and produced fruits in the greenhouse within 1 to 10 months. FT plums did not enter dormancy after cold or short day treatments yet field planted FT plums remained winter hardy down to at least −10°C. The plants also displayed pleiotropic phenotypes atypical for plum including shrub-type growth habit and panicle flower architecture. The flowering and fruiting phenotype was found to be continuous in the greenhouse but limited to spring and fall in the field. The pattern of flowering in the field correlated with lower daily temperatures. This apparent temperature effect was subsequently confirmed in growth chamber studies. The pleitropic phenotypes associated with FT1 expression in plum suggests a fundamental role of this gene in plant growth and development. This study demonstrates the potential for a single transgene event to markedly affect the vegetative and reproductive growth and development of an economically important temperate woody perennial crop. We suggest that FT1 may be a useful tool to modify temperate plants to changing climates and/or to adapt these crops to new growing areas. PMID:22859952
Ryder, Robert T.; Harris, Anita G.; Repetski, John E.; revised and digitized by Crangle, Robert D.
2003-01-01
A 275-mi-long restored stratigraphic cross section from Medina County, Ohio, through southwestern and south-central Pennsylvania to Hampshire County, W. Va., provides new details on Cambrian and Ordovician stratigraphy in the central Appalachian basin and the structure of underlying Precambrian basement rocks. From west to east, the major structural elements of the block-faulted basement in this section are (1) the relatively stable, slightly extended craton, which includes the Wooster arch, (2) the fault-controlled Ohio-West Virginia hinge zone, which separates the craton from the adjoining Rome trough, (3) the Rome trough, which consists of an east-facing asymmetric graben and an overlying sag basin, and (4) a positive fault block, named here the South-central Pennsylvania arch, which borders the eastern margin of the graben part of the Rome trough. Pre-Middle Ordovician structural relief on Precambrian basement rocks across the down-to-the-west normal fault that separates the Rome trough and the adjoining South-central Pennsylvania arch amounted to between 6,000 and 7,000 ft. The restored cross section shows eastward thickening of the Cambrian and Ordovician sequence from about 3,000 ft near the crest of the Wooster arch at the western end of the section to about 5,150 ft at the Ohio-West Virginia hinge zone adjoining the western margin of the Rome trough to about 19,800 ft near the depositional axis of the Rome trough. East of the Rome trough, at the adjoining western edge of the South-central Pennsylvania arch, the Cambrian and Ordovician sequence thins abruptly to about 13,500 ft and then thins gradually eastward across the arch to about 12,700 ft near the Allegheny structural front and to about 10,150 ft at the eastern end of the restored section. In general, the Cambrian and Ordovician sequence along this section consists of four major lithofacies that are predominantly shallow marine to peritidal in origin. 
In ascending stratigraphic order, the lithofacies are identified by the following descriptive names: (1) sandstone, shale, limestone, and dolomite unit, (2) dolomite and sandstone unit, (3) limestone and black shale unit, and (4) shale and sandstone unit. Each of these units and their associated subunits thicken from west to east across the restored section to a maximum near the depositional axis of the Rome trough and then thin eastward to the end of the section. The sandstone, shale, limestone, and dolomite unit is largely confined to the asymmetric graben that marks the initial phase of the Rome trough. This unit is Early and Middle Cambrian in age and consists, in ascending order, of a basal sandstone unit (undrilled but probably present), the Tomstown Dolomite (undrilled but probably present), the Waynesboro Formation, and the Pleasant Hill Limestone and its equivalent lower one-third of the Elbrook Formation at the eastern end of the section. The dolomite and sandstone unit forms the core of the Cambrian and Ordovician sequence. In the Rome trough and on the adjoining South-central Pennsylvania arch, this unit consists, in ascending order, of the Middle and Upper Cambrian Warrior Formation and the equivalent upper two-thirds of the Elbrook Formation at the eastern end of the section, the Upper Cambrian Gatesburg Formation, and the Lower Ordovician and Middle Ordovician (Whiterockian and Chazyan) Beekmantown Group. West of the Ohio-West Virginia hinge zone, the dolomite and sandstone unit consists, in ascending order, of the Conasauga Formation of Janssens (1973), the Krysik sandstone of driller's usage, the B zone of Calvert (1964), the Knox Dolomite and the associated Rose Run Sandstone Member, and the Wells Creek Formation. 
The widespread Knox unconformity is located at the base of the Wells Creek Formation and at or near the top of the adjoining Beekmantown Group, except near the depositional axis of the Rome trough, where the unconformity seems to be absent. The limestone and black shale unit i
Petroleum geology and resources of northeastern Mexico
Peterson, James A.
1985-01-01
Petroleum deposits (primarily gas) in northeastern Mexico occur in two main basins, the Tertiary Burgos basin and the Mesozoic Sabinas basin. About 90 gas fields are present in the Burgos basin, which has undergone active exploration for the past 30-40 years. Production in this basin is from Oligocene and Eocene nearshore marine and deltaic sandstone reservoirs. Most of the fields are small to medium in size on faulted anticlinal or domal structures, some of which may be related to deep-seated salt intrusion. Cumulative production from these fields is about 4 trillion cubic feet gas and 100 million barrels condensate and oil. Since 1975, about 10 gas fields, some with large production rates, have been discovered in Cretaceous carbonate and Jurassic sandstone reservoirs in the Sabinas basin and adjacent Burro-Picachos platform areas. The Sabinas basin, which is in the early stages of exploration and development, may have potential for very large gas reserves. The Sabinas basin is oriented northwesterly with a large number of elongate northwest- or west-trending asymmetric and overturned Laramide anticlines, most of which are faulted. Some of the structures may be related to movement of Jurassic salt or gypsum. Lower Cretaceous and in some cases Jurassic rocks are exposed in the centers of the larger anticlines, and Upper Cretaceous rocks are exposed in much of the remainder of the basin. A thick section of Upper Cretaceous clastic rocks is partly exposed in tightly folded and thrust-faulted structures of the west-east oriented, deeply subsided Parras basin, which lies south of the Sabinas basin and north of the Sierra Madre Oriental fold and thrust belt south and west of Monterrey. The sedimentary cover of Cretaceous and Jurassic rocks in the Sabinas and Parras basins ranges from about 1,550 m (5,000 ft) to 9,000 m (30,000 ft) in thickness.
Upper Jurassic rocks are composed of carbonate and dark organic shaly or sandy beds underlain by an unknown thickness of Late Jurassic and older redbed clastics and evaporites, including halite. Lower Cretaceous rocks are mainly platform carbonate and fine clastic beds with some evaporites (gypsum or anhydrite) deposited in two main rudist reef-bearing carbonate cycles. Upper Cretaceous rocks are mainly continental and marine clastic beds related to early development of the Laramide orogeny. This Upper Cretaceous sequence contains a marine shale and deltaic clastic complex as much as 6,000 m (20,000 ft) or more thick in the Parras basin, which grades northward and eastward to open marine, fine clastic beds. The Burgos basin, which is an extension of the Rio Grande embayment of the western Gulf of Mexico basin province, contains an eastward-thickening wedge of Tertiary continental and marine clastics. These beds are about 1,550 to 3,000 m (5,000-10,000 ft) thick in the outcrop belt on the west side of the basin and thicken to more than 16,000 m (50,000 ft) near the Gulf Coast.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sartor, C.L.; Howard, S.R.
1984-09-01
The discovery in 1980 of gas production in the Smackover C sand in the East Dykesville field of Claiborne and Webster Parishes, Louisiana, extended the productive limits of this reservoir 6 mi (10 km) south of the production in the Haynesville field. The development of East Dykesville field has revealed three productive fault blocks within an area 6 mi (10 km) by 3 mi (5 km). The Smackover C and B sands of East Dykesville are present 700 ft (213 m) above the Louann Salt as a portion of a more or less continuous sand body covering an area 9 mi (15 km) from east to west. This sand body extends southward from the Arkansas-Louisiana state line for more than 10 mi (16 km), and also produces at the Haynesville field. Production has been encountered in the C sand at East Dykesville from 10,912 ft (3326 m) subsea down to 11,605 ft (3536 m) subsea, an interval of 693 ft (211 m). The source of the sediments which constitute the Smackover C sand appears to be north of the sand body, as it thickens to more than 100 ft (31 m) in the Red Rock-Haynesville area and thins southward. The sand also thins both to the east toward Haynesville and to the west toward Shongaloo. The C sand is 60 ft (18 m) thick in the north portion of East Dykesville field and thins to 20 ft (6 m) in the most southern wells. Isopach studies suggest a submarine-fan depositional environment on a stable shelf.
Time-dependent seismic hazard analysis for the Greater Tehran and surrounding areas
NASA Astrophysics Data System (ADS)
Jalalalhosseini, Seyed Mostafa; Zafarani, Hamid; Zare, Mehdi
2018-01-01
This study presents a time-dependent approach to seismic hazard in Tehran and surrounding areas. Hazard is evaluated by combining background seismic activity with larger earthquakes that may emanate from fault segments. Using available historical and paleoseismological data or empirical relations, the recurrence time and maximum magnitude of characteristic earthquakes for the major faults have been explored. The Brownian passage time (BPT) distribution has been used to calculate an equivalent fictitious seismicity rate for major faults in the region. To include ground motion uncertainty, a logic tree and five ground motion prediction equations have been selected based on their applicability in the region. Finally, hazard maps are presented.
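The BPT renewal calculation behind such time-dependent models can be sketched as follows: the conditional probability of a characteristic earthquake in the next dt years, given te years elapsed since the last one. The mean recurrence, aperiodicity, and elapsed time below are placeholder values, not figures from the study.

```python
# Brownian passage time (inverse Gaussian) renewal sketch with invented
# parameters: mean recurrence mu (yr) and aperiodicity alpha.
import math

def bpt_pdf(t, mu, alpha):
    """BPT probability density at time t since the last event."""
    return (math.sqrt(mu / (2 * math.pi * alpha**2 * t**3))
            * math.exp(-(t - mu)**2 / (2 * mu * alpha**2 * t)))

def integrate(f, a, b, n=10000):
    """Simple trapezoidal rule."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def conditional_prob(te, dt, mu, alpha):
    """P(event in (te, te+dt] | no event by te)."""
    num = integrate(lambda t: bpt_pdf(t, mu, alpha), te, te + dt)
    den = 1.0 - integrate(lambda t: bpt_pdf(t, mu, alpha), 1e-6, te)
    return num / den

# e.g. mean recurrence 300 yr, aperiodicity 0.5, 250 yr elapsed, next 50 yr
p = conditional_prob(250.0, 50.0, 300.0, 0.5)
print(round(p, 3))
```

Unlike a Poisson model, this probability depends on the elapsed time te, which is what makes the hazard "time-dependent".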
LIDAR Helps Identify Source of 1872 Earthquake Near Chelan, Washington
NASA Astrophysics Data System (ADS)
Sherrod, B. L.; Blakely, R. J.; Weaver, C. S.
2015-12-01
One of the largest historic earthquakes in the Pacific Northwest occurred on 15 December 1872 (M6.5-7) near the south end of Lake Chelan in north-central Washington State. Lack of recognized surface deformation suggested that the earthquake occurred on a blind, perhaps deep, fault. New LiDAR data show landslides and a ~6 km long, NW-side-up scarp in Spencer Canyon, ~30 km south of Lake Chelan. Two landslides in Spencer Canyon impounded small ponds. An historical account indicated that dead trees were visible in one pond in AD1884. Wood from a snag in the pond yielded a calibrated age of AD1670-1940. Tree ring counts show that the oldest living trees on each landslide are 130 and 128 years old. The larger of the two landslides obliterated the scarp and thus post-dates the last scarp-forming event. Two trenches across the scarp exposed a NW-dipping thrust fault. One trench exposed alluvial fan deposits, Mazama ash, and scarp colluvium cut by a single thrust fault. Three charcoal samples from a colluvium buried during the last fault displacement had calibrated ages between AD1680 and AD1940. The second trench exposed gneiss thrust over colluvium during at least two, and possibly three, fault displacements. The younger of two charcoal samples collected from a colluvium below gneiss had a calibrated age of AD1665-AD1905. For an historical constraint, we assume that the lack of felt reports for large earthquakes in the period between 1872 and today indicates that no large earthquakes capable of rupturing the ground surface occurred in the region after the 1872 earthquake; thus the last displacement on the Spencer Canyon scarp cannot post-date the 1872 earthquake. Modeling of the age data suggests that the last displacement occurred between AD1840 and AD1890. These data, combined with the historical record, indicate that this fault is the source of the 1872 earthquake.
Analyses of aeromagnetic data reveal lithologic contacts beneath the scarp that form an ENE-striking, curvilinear zone ~2.5 km wide and ~55 km long. This zone coincides with monoclines mapped in Mesozoic bedrock and Miocene flood basalts. This study resolves the uncertainty regarding the source of the 1872 earthquake and provides important information for seismic hazard analyses of major infrastructure projects in Washington and British Columbia.
Fault detection and fault tolerance in robotics
NASA Technical Reports Server (NTRS)
Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.
1992-01-01
Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
NASA Astrophysics Data System (ADS)
Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang
2017-10-01
Based on long-term vibration monitoring of the No.2 oil-immersed flat wave reactor in the ±500kV converter station in East Mongolia, the vibration signals in the normal state and in the core loose fault state were saved. Through time-frequency analysis of the signals, the vibration characteristics of the core loose fault were obtained, and a fault diagnosis method based on the dual tree complex wavelet (DT-CWT) and support vector machine (SVM) was proposed. The vibration signals were analyzed by DT-CWT, and the energy entropy of the vibration signals was taken as the feature vector; the support vector machine was used to train and test the feature vector, and accurate identification of the core loose fault of the flat wave reactor was realized. Through the identification of many groups of normal and core loose fault state vibration signals, the diagnostic accuracy reached 97.36%. This verifies the effectiveness and accuracy of the method for fault diagnosis of the flat wave reactor core.
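The energy-entropy-plus-SVM pipeline can be sketched as below. A hand-rolled Haar wavelet decomposition stands in for the dual-tree complex wavelet transform (which needs a dedicated package such as `dtcwt`), and the "vibration" signals are synthetic stand-ins, not the converter-station recordings.

```python
# Sketch: per-band wavelet energy entropy as a feature, classified with an SVM.
# Haar DWT substitutes for DT-CWT; signals and fault signature are invented.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def haar_band_energies(x, levels=4):
    """Energy in each Haar detail band, plus the final approximation band."""
    energies = []
    for _ in range(levels):
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        x = (x[0::2] + x[1::2]) / np.sqrt(2)
        energies.append(np.sum(detail**2))
    energies.append(np.sum(x**2))
    return np.array(energies)

def energy_entropy(x):
    p = haar_band_energies(x)
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def make_signal(fault, n=1024):
    t = np.arange(n) / n
    base = np.sin(2 * np.pi * 100 * t)
    if fault:   # pretend a loose core adds higher-harmonic content
        base = base + 0.8 * np.sin(2 * np.pi * 300 * t)
    return base + rng.normal(0, 0.1, n)

X = np.array([[energy_entropy(make_signal(f))] for f in [0] * 20 + [1] * 20])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```

The intuition matches the abstract: a fault spreads vibration energy across more frequency bands, which raises the energy entropy and makes the classes separable.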
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system and computer readable medium extends the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems, by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
Fault diagnosis of helical gearbox using acoustic signal and wavelets
NASA Astrophysics Data System (ADS)
Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.
2017-05-01
Machines require the efficient transmission of power, and gears are an appropriate choice. Faults in gears result in loss of energy and money. Monitoring and fault diagnosis are done by analysing the acoustic and vibration signals, which are generally considered unwanted by-products. This study proposes the use of a machine learning algorithm for condition monitoring of a helical gearbox using the sound signals produced by the gearbox. Artificial faults were created, and signals were subsequently captured by a microphone. An extensive study using different wavelet transformations for feature extraction from the acoustic signals was carried out, followed by wavelet selection and feature selection using the J48 decision tree; feature classification was performed using the K-star algorithm. A classification accuracy of 100% was obtained in the study.
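A loose Python analogue of this Weka workflow is sketched below, with scikit-learn's CART decision tree standing in for J48 (C4.5) for feature selection and a k-nearest-neighbour classifier standing in for K-star. The data are synthetic: one informative feature among noise, not gearbox acoustics.

```python
# Tree-based feature selection followed by an instance-based classifier,
# on invented data where only feature 0 carries the label.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n = 200
informative = rng.normal(0, 1, n)
X = np.column_stack([informative, rng.normal(0, 1, (n, 4))])  # 1 useful + 4 noise
y = (informative > 0).astype(int)          # label depends only on feature 0

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
best = np.argsort(tree.feature_importances_)[::-1][:1]  # top feature by importance
clf = KNeighborsClassifier(n_neighbors=5).fit(X[:, best], y)
print(int(best[0]), clf.score(X[:, best], y))
```

The tree's impurity-based importances pick out the informative feature, and the lazy (instance-based) classifier then works in the reduced feature space, mirroring the J48-then-K-star split of roles.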
Engineering Design Handbook. Military Vehicle Power Plant Cooling
1975-06-01
[Figure residue: engine lubrication and cooling schematic showing oil cooler, oil pump, bypass valve, cooling nozzles, and engine main, connecting rod, and camshaft bearings.] The handbook addresses cooling systems for vehicles performing their intended missions and tasks under specified conditions, including tree stumps, dust, and mud; most vehicles also have requirements for towing trailers.
Inferring patterns in mitochondrial DNA sequences through hypercube independent spanning trees.
Silva, Eduardo Sant Ana da; Pedrini, Helio
2016-03-01
Given a graph G, a set of spanning trees rooted at a vertex r of G is said to be vertex/edge independent if, for each vertex v of G, v≠r, the paths from r to v in any pair of trees are vertex/edge disjoint. Independent spanning trees (ISTs) provide a number of advantages in data broadcasting due to their fault-tolerance properties. For this reason, some studies have addressed the issue by providing mechanisms for constructing independent spanning trees efficiently. In this work, we investigate how to construct independent spanning trees on hypercubes, which are generated based upon spanning binomial trees, and how to use them to predict mitochondrial DNA sequence parts through paths on the hypercube. The prediction works both for inferring mitochondrial DNA sequences comprised of six bases and for inferring anomalies that probably do not belong to the mitochondrial DNA standard.
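The building block mentioned above (ISTs "generated based upon spanning binomial trees") can be sketched for the hypercube Q_n rooted at vertex 0: the parent of a vertex v is v with its lowest set bit cleared, so every tree edge flips exactly one bit and is therefore a hypercube edge. The further step that turns copies of this tree into independent spanning trees is beyond this sketch.

```python
def binomial_tree_parents(n):
    """Parent map of the spanning binomial tree of Q_n rooted at vertex 0.
    v & (v - 1) clears the lowest set bit of v, giving its parent."""
    return {v: v & (v - 1) for v in range(1, 2 ** n)}

def depth(parents, v):
    """Edges from v up to the root 0 (equals the popcount of v)."""
    d = 0
    while v != 0:
        v = parents[v]
        d += 1
    return d

parents = binomial_tree_parents(3)
print(max(depth(parents, v) for v in parents))   # deepest vertex is 0b111
```

A root-to-v path in this tree visits labels that set v's bits from highest to lowest, which is what makes paths in suitably rotated copies of the tree candidates for disjointness.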
Nouri.Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-01-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probability of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department is 73, 64, and 95 percent, respectively, and the conveyor belt subsystem was found to be the most failure-prone. Finally, maintenance is proposed as a method to control and prevent the occurrence of failures. PMID:26779433
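The percentages above follow the standard reliability relation between a constant failure rate and a mission time. The sketch below is illustrative only: the failure rates are hypothetical values chosen to roughly reproduce the quoted 73% and 64% figures, and the department-level result would in practice combine further subsystems.

```python
import math

def failure_prob(rate_per_hour, hours):
    """P(failure within t) = 1 - exp(-lambda * t) for a constant failure rate."""
    return 1.0 - math.exp(-rate_per_hour * hours)

def series_failure_prob(probs):
    """A series system fails if any subsystem fails (an OR gate): 1 - prod(1 - p_i)."""
    survive = 1.0
    for p in probs:
        survive *= 1.0 - p
    return 1.0 - survive

# Hypothetical failure rates (per hour), not taken from the study.
lam_crusher, lam_conveyor = 0.00655, 0.00511
p_crusher = failure_prob(lam_crusher, 200)    # ~0.73
p_conveyor = failure_prob(lam_conveyor, 200)  # ~0.64
p_both = series_failure_prob([p_crusher, p_conveyor])
print(round(p_crusher, 2), round(p_conveyor, 2), round(p_both, 2))
```

Combining just these two subsystems in series already gives roughly 90%; additional subsystems in the department would push the figure higher.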
Towards generating ECSS-compliant fault tree analysis results via ConcertoFLA
NASA Astrophysics Data System (ADS)
Gallina, B.; Haider, Z.; Carlsson, A.
2018-05-01
Attitude Control Systems (ACSs) maintain the orientation of a satellite in three-dimensional space. ACSs need to be engineered in compliance with ECSS standards and must ensure a certain degree of dependability. Thus, dependability analysis is conducted at various levels using ECSS-compliant techniques. Fault Tree Analysis (FTA) is one of these techniques. FTA is being automated within various Model Driven Engineering (MDE)-based methodologies. The tool-supported CHESS methodology is one of them. This methodology incorporates ConcertoFLA, a dependability analysis technique enabling failure behavior analysis and thus the generation of FTA results. ConcertoFLA, however, like other such techniques, still belongs to the academic research niche. To promote the technique within the space industry, we apply it to an ACS and discuss its multi-faceted potential in the context of ECSS-compliant engineering.
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised by large capital outlays, long implementation durations, and a high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have mostly focused on meeting project budget and schedule objectives, the proposed approach addresses the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system usage failure and to quantify the impact of critical component failures or critical risk events in the implementation process.
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
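The hybrid idea of checking a fault-tree estimate against brute-force simulation can be illustrated on a toy system. This is a minimal sketch, not the Advanced Airspace Concept model: the gate structure and failure probabilities are invented for illustration.

```python
import random

def top_event(a, b, c):
    # Top event occurs if A fails, or if both B and C fail (OR over an AND gate).
    return a or (b and c)

def analytic(pa, pb, pc):
    # Inclusion-exclusion for independent basic events: P(A or (B and C)).
    return pa + pb * pc - pa * pb * pc

def monte_carlo(pa, pb, pc, trials, seed=1):
    """Estimate the top-event probability by sampling basic-event states."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = rng.random() < pa
        b = rng.random() < pb
        c = rng.random() < pc
        hits += top_event(a, b, c)
    return hits / trials

pa, pb, pc = 0.01, 0.05, 0.04
print(analytic(pa, pb, pc))            # ≈ 0.01198
print(monte_carlo(pa, pb, pc, 200_000))  # close to the analytic value
```

With 200,000 trials the sampling error is a few parts in ten thousand, so the two estimates agree to well within the roughly 10% discrepancy quoted in the abstract.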
Logic flowgraph methodology - A tool for modeling embedded systems
NASA Technical Reports Server (NTRS)
Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.
1991-01-01
The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of using such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.
Risk assessment techniques with applicability in marine engineering
NASA Astrophysics Data System (ADS)
Rudenko, E.; Panaitescu, F. V.; Panaitescu, M.
2015-11-01
Nowadays risk management is a carefully planned process. The task of risk management is organically woven into the general problem of increasing the efficiency of a business. A passive attitude to risk and mere awareness of its existence are being replaced by active management techniques. Risk assessment is one of the most important stages of risk management, since to manage risk it is first necessary to analyze and evaluate it. There are many definitions of this notion, but in the general case risk assessment refers to the systematic process of identifying the factors and types of risk and assessing them quantitatively; that is, risk analysis methodology combines mutually complementary quantitative and qualitative approaches. Purpose of the work: In this paper we consider Fault Tree Analysis (FTA) as a risk assessment technique. The objectives are: to understand the purpose of FTA, to understand and apply the rules of Boolean algebra, to analyse a simple system using FTA, and to review the advantages and disadvantages of FTA. Research and methodology: The main purpose is to help identify potential causes of system failures before the failures actually occur, and to evaluate the probability of the top event. The steps of this analysis are: examination of the system from the top down, the use of symbols to represent events, the use of mathematical tools for critical areas, and the use of fault tree logic diagrams to identify the causes of the top event. Results: The study yields the critical areas, the fault tree logic diagrams, and the probability of the top event. These results can be used for risk assessment analyses.
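The top-down gate evaluation described in the abstract can be sketched for independent basic events, where Boolean algebra reduces an AND gate to a product of probabilities and an OR gate to the complement of a product of complements. The tree below is a hypothetical marine example, not one from the paper.

```python
def gate_prob(node):
    """Evaluate the top-event probability of a fault tree with independent
    basic events. A leaf is a float probability; a gate is ("AND"|"OR", children).
    AND gate: product of child probabilities.
    OR gate: 1 - product of (1 - child probability)."""
    if isinstance(node, float):
        return node
    op, children = node
    probs = [gate_prob(c) for c in children]
    out = 1.0
    if op == "AND":
        for p in probs:
            out *= p
        return out
    if op == "OR":
        for p in probs:
            out *= 1.0 - p
        return 1.0 - out
    raise ValueError("unknown gate: " + op)

# Hypothetical top event: loss of propulsion = fuel starvation (0.02)
# OR both redundant fuel pumps failing (0.1 each).
tree = ("OR", [0.02, ("AND", [0.1, 0.1])])
print(gate_prob(tree))  # ≈ 1 - 0.98 * 0.99 = 0.0298
```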
Using certification trails to achieve software fault tolerance
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, they are accepted as correct, otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output, even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized and illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach is compared to other approaches to fault tolerance. Because of space limitations we have omitted examples of our technique applied to the Huffman tree and convex hull problems. These can be found in the full version of this paper.
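The paper's worked example is the minimum spanning tree problem; as a simpler stand-in, the sketch below applies the same two-phase scheme to sorting, where the certification trail is the sorting permutation. Note that the second phase must reject any corrupted trail rather than trust it.

```python
def phase_one(data):
    """Solve the problem (here: sorting) and leave a certification trail:
    the permutation of indices that sorts the input."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    result = [data[i] for i in trail]
    return result, trail

def phase_two(data, trail):
    """Re-derive the answer in linear time using the trail.
    Returns None (an error indication) for any invalid trail."""
    if sorted(trail) != list(range(len(data))):  # trail must be a permutation
        return None
    result = [data[i] for i in trail]
    for x, y in zip(result, result[1:]):         # output must be ordered
        if x > y:
            return None
    return result

data = [5, 1, 4, 2]
r1, trail = phase_one(data)
r2 = phase_two(data, trail)
print(r1 == r2 == [1, 2, 4, 5])           # the two phases agree -> accept
print(phase_two(data, [0, 1, 2, 3]))      # corrupted trail -> None (error indicated)
```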
Bodin, Paul; Bilham, Roger; Behr, Jeff; Gomberg, Joan; Hudnut, Kenneth W.
1994-01-01
Five out of six functioning creepmeters on southern California faults recorded slip triggered at the time of some or all of the three largest events of the 1992 Landers earthquake sequence. Digital creep data indicate that dextral slip was triggered within 1 min of each mainshock and that maximum slip velocities occurred 2 to 3 min later. The duration of triggered slip events ranged from a few hours to several weeks. We note that triggered slip occurs commonly on faults that exhibit fault creep. To account for the observation that slip can be triggered repeatedly on a fault, we propose that the amplitude of triggered slip may be proportional to the depth of slip in the creep event and to the available near-surface tectonic strain that would otherwise eventually be released as fault creep. We advance the notion that seismic surface waves, perhaps amplified by sediments, generate transient local conditions that favor the release of tectonic strain to varying depths. Synthetic strain seismograms are presented that suggest increased pore pressure during periods of fault-normal contraction may be responsible for triggered slip, since maximum dextral shear strain transients correspond to times of maximum fault-normal contraction.
NASA Astrophysics Data System (ADS)
Pham, Binh Thai; Tien Bui, Dieu; Pourghasemi, Hamid Reza; Indra, Prakash; Dholakia, M. B.
2017-04-01
The objective of this study is to compare the prediction performance of three techniques, Functional Trees (FT), Multilayer Perceptron Neural Networks (MLP Neural Nets), and Naïve Bayes (NB), for landslide susceptibility assessment in the Uttarakhand area (India). Firstly, a landslide inventory map with 430 landslide locations in the study area was constructed from various sources. Landslide locations were then randomly split into two parts: (i) 70% of the locations used for training the models and (ii) 30% employed for the validation process. Secondly, a total of eleven landslide conditioning factors, including slope angle, slope aspect, elevation, curvature, lithology, soil, land cover, distance to roads, distance to lineaments, distance to rivers, and rainfall, were used in the analysis to elucidate the spatial relationship between these factors and landslide occurrences. The feature selection of the Linear Support Vector Machine (LSVM) algorithm was employed to assess the prediction capability of these conditioning factors in the landslide models. Subsequently, the NB, MLP Neural Nets, and FT models were constructed using the training dataset. Finally, success-rate and predictive-rate curves were employed to validate and compare the predictive capability of the three models. Overall, all three models performed very well for landslide susceptibility assessment. The MLP Neural Nets and FT models had almost the same predictive capability, with the MLP Neural Nets (AUC = 0.850) slightly better than the FT model (AUC = 0.849). The NB model (AUC = 0.838) had the lowest predictive capability. Landslide susceptibility maps were finally developed using the three models. These maps would be helpful to planners and engineers for development activities and land-use planning.
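AUC figures of the kind quoted above can be computed without any library via the rank interpretation of the ROC curve: the probability that a randomly chosen positive (landslide) location receives a higher susceptibility score than a randomly chosen negative one. The scores and labels below are invented illustration data, not the study's validation set.

```python
def auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    positive example (label 1) scores higher than a randomly chosen
    negative example (label 0); ties count one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical validation scores for 7 locations (1 = landslide observed).
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0,   0]
print(auc(scores, labels))  # 11/12 ≈ 0.917
```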
NASA Astrophysics Data System (ADS)
Carpenter, B. M.; Marone, C.; Saffer, D. M.
2010-12-01
The debate concerning the apparent low strength of tectonic faults, including the San Andreas Fault (SAF), continues to focus on: 1) low intrinsic friction resulting from mineralogy and/or fabric, and 2) decreased effective normal stress due to elevated pore pressure. Here we inform this debate with laboratory measurements of the frictional behavior and permeability of cuttings and core returned from the SAF at a vertical depth of 2.7 km. We conducted experiments on cuttings and core recovered during SAFOD Phase III drilling. All samples in this study are adjacent to and within the active fault zone penetrated at 10814.5 ft (3296 m) measured depth in the SAFOD borehole. We sheared gouge samples composed of drilling cuttings in a double-direct shear configuration subject to true-triaxial loading under constant effective normal stress, confining pressure, and pore pressure. Intact wafers of material were sheared in a single-direct shear configuration under similar conditions of effective stress, confining pressure, and pore pressure. We also report on permeability measurements made on intact wafers of wall rock and fault gouge prior to shearing. Initial results from experiments on cuttings show: 1) a weak fault (µ = ~0.21) compared to the surrounding wall rock (µ = ~0.35), 2) velocity-strengthening behavior (a-b > 0), consistent with aseismic slip, and 3) near-zero healing rates in material from the active fault. XRD analysis of cuttings indicates that the main mineralogical difference between fault rock and wall rock is the presence of significant amounts of smectite in the fault rock. Taken together, the measured frictional behavior and clay mineral content suggest that the clay composition exerts a basic control on fault behavior. Our results document the first direct evidence of weak material from an active fault at seismogenic depths. In addition, our results could explain why the SAF in central California fails aseismically and hosts only small earthquakes.
Machine Learning of Fault Friction
NASA Astrophysics Data System (ADS)
Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.
2017-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprising fault blocks surrounding fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load), and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick-slip and slow-slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774; Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field; Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
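A gradient-boosted-trees regressor of the kind named above can be sketched from scratch with depth-1 regression trees (stumps) fitted to residuals. The data are a synthetic stand-in for an AE feature versus shear stress; nothing here reproduces the cited experiments.

```python
def fit_stump(x, residual):
    """Find the threshold on x minimizing squared error of a two-leaf constant fit."""
    best = None
    order = sorted(range(len(x)), key=lambda i: x[i])
    for k in range(1, len(x)):
        left = [residual[order[i]] for i in range(k)]
        right = [residual[order[i]] for i in range(k, len(x))]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((v - lm) ** 2 for v in left) + sum((v - rm) ** 2 for v in right)
        if best is None or err < best[0]:
            best = (err, x[order[k - 1]], lm, rm)
    _, thr, lm, rm = best
    return lambda v, thr=thr, lm=lm, rm=rm: lm if v <= thr else rm

def boost(x, y, rounds=200, lr=0.3):
    """Gradient boosting for squared loss: repeatedly fit stumps to residuals,
    shrinking each stump's contribution by the learning rate."""
    stumps, pred = [], [0.0] * len(x)
    for _ in range(rounds):
        residual = [yi - pi for yi, pi in zip(y, pred)]
        s = fit_stump(x, residual)
        stumps.append(s)
        pred = [pi + lr * s(xi) for pi, xi in zip(pred, x)]
    return lambda v: sum(lr * s(v) for s in stumps)

# Synthetic stand-in: a single AE-derived feature mapped to "shear stress".
x = [i / 20 for i in range(21)]
y = [xi * xi for xi in x]
model = boost(x, y)
mse = sum((model(xi) - yi) ** 2 for xi, yi in zip(x, y)) / len(x)
print(round(mse, 6))  # training error driven close to zero
```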
Database for the geologic map of the Bend 30- x 60-minute quadrangle, central Oregon
Koch, Richard D.; Ramsey, David W.; Sherrod, David R.; Taylor, Edward M.; Ferns, Mark L.; Scott, William E.; Conrey, Richard M.; Smith, Gary A.
2010-01-01
The Bend 30- x 60-minute quadrangle has been the locus of volcanism, faulting, and sedimentation for the past 35 million years. It encompasses parts of the Cascade Range and Blue Mountain geomorphic provinces, stretching from snowclad Quaternary stratovolcanoes on the west to bare rocky hills and sparsely forested juniper plains on the east. The Deschutes River and its large tributaries, the Metolius and Crooked Rivers, drain the area. Topographic relief ranges from 3,157 m (10,358 ft) at the top of South Sister to 590 m (1,940 ft) at the floor of the Deschutes and Crooked Rivers where they exit the area at the north-central edge of the map area. The map encompasses a part of rapidly growing Deschutes County. The city of Bend, which has over 70,000 people living in its urban growth boundary, lies at the south-central edge of the map. Redmond, Sisters, and a few smaller villages lie scattered along the major transportation routes of U.S. Highways 97 and 20. This geologic map depicts the geologic setting as a basis for structural and stratigraphic analysis of the Deschutes basin, a major hydrologic discharge area on the east flank of the Cascade Range. The map also provides a framework for studying potentially active faults of the Sisters fault zone, which trends northwest across the map area from Bend to beyond Sisters. This digital release contains all of the information used to produce the geologic map published as U.S. Geological Survey Geologic Investigations Series I-2683 (Sherrod and others, 2004). The main component of this digital release is a geologic map database prepared using ArcInfo GIS. This release also contains files to view or print the geologic map and accompanying descriptive pamphlet from I-2683.
Industry shows faith in deep Anadarko
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wroblewski, E.F.
1973-10-08
The shallow shelf of the Anadarko Basin furnished much gas from the Pennsylvanian and Mississippian reservoirs during the 1950s and 1960s. The search for gas reserves on the shelf will continue for many years because of the relatively low drilling cost, even though the reserves per well on the shelf tend to be limited to about 1 to 3 billion cu ft/well. The much greater reserves of up to 50 billion cu ft/well found in the deeper part of the basin have made the deep Anadarko Basin an enticing area to look for major gas reserves. A regional Hunton map of the deep Anadarko Basin is presented showing fields that are producing from the Hunton and Simpson at depths of more than 15,000 ft. The fields shown on this map represent about 5 trillion cu ft of gas reserves. A generalized section showing only the major features and gross stratigraphic intervals also is presented. Two seismic interpretations of the N. Carter structure, on which the Lone Star 1 Baden is drilled, are shown: one of the seismic Springer structure and the other of the seismic Hunton structure. The latter shows the faulting that exists below the Springer level.
1983-04-01
…diagnostic/fault isolation devices … operation of cannibalization point … with diagnostic software based on a "fault tree" representation of the M65 ThS to bridge the gap in diagnostics capability was demonstrated in 1980 and … IFF (identification friend or foe), which has much lower reliability than TSQ-73-peculiar hardware. Thus, as in other examples, reported readiness does not reflect
AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment
2014-10-01
[Table-of-contents residue: …Analysis Generator, 3.2.3 Mapping to OpenFTA Format File, 3.2.4 Mapping to Generic XML Format, 3.2.5 AADL and FTA Mapping Rules, 3.2.6 Issues…] …Preliminary System Safety Assessment (PSSA), System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs).
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on the intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used. These methods can detect rolling bearing failures effectively, but to achieve better detection results they often require a large number of training samples. On this basis, a novel clustering method is proposed in this paper. The method is able to find the correct number of clusters automatically, and its effectiveness is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types of small samples, while remaining relatively accurate even for massive samples.
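One simple way to find the number of clusters automatically, offered here only as an illustrative sketch (the abstract does not specify the paper's actual algorithm), is a farthest-first traversal that keeps adding cluster centres until every point lies within a chosen radius of some centre.

```python
def farthest_first_clusters(data, radius):
    """Pick cluster centres by farthest-first traversal over 1-D data,
    stopping once every point lies within `radius` of a centre.
    The number of centres found is the estimated number of clusters."""
    centres = [data[0]]
    while True:
        # distance from each point to its nearest centre so far
        d = [min(abs(x - c) for c in centres) for x in data]
        far = max(range(len(data)), key=lambda i: d[i])
        if d[far] <= radius:
            break
        centres.append(data[far])
    labels = [min(range(len(centres)), key=lambda j: abs(x - centres[j]))
              for x in data]
    return centres, labels

# Synthetic 1-D fault features: three tight groups, as from three fault types.
data = [0.1, 0.3, 0.2, 10.1, 9.8, 10.0, 20.2, 19.9, 20.1]
centres, labels = farthest_first_clusters(data, radius=2.0)
print(len(centres))  # 3
```

The radius plays the role of a tuning parameter; in practice it would be set from the expected within-class scatter of the bearing features.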
1981-03-01
…vegetated with mixed lowland trees, shrubs, and grasses. At its deepest point Lake Port was 17 ft deep; the upper one-third consisted of a shallow mud flat… Arkansas, and Washington County, Mississippi. Lake Lee was about 6 miles long, shallow at both ends, and up to 40 ft deep in the center two-thirds
Fault Analysis on Bevel Gear Teeth Surface Damage of Aeroengine
NASA Astrophysics Data System (ADS)
Cheng, Li; Chen, Lishun; Li, Silu; Liang, Tao
2017-12-01
To address the tooth surface damage observed on an aero-engine bevel gear, a fault tree of the damage was drawn up from the logical relations involved and the possible causes of the fault were analyzed. Scanning electron microscopy, energy spectrum analysis, metallographic examination, hardness measurement, and other analysis means were adopted to investigate the spalled gear tooth. The results showed that the material composition, metallographic structure, micro-hardness, and carburization depth of the faulty bevel gear met the technical requirements. A contact fatigue spall defect caused the tooth surface damage, the main cause being the small magnitude of interference between the accessory gearbox mounting hole and the driving bevel gear bearing seat. Improvement measures were proposed and subsequently proven effective.
Goal-Function Tree Modeling for Systems Engineering and Fault Management
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Breckenridge, Jonathan T.
2013-01-01
This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT, and its ability to represent both nominal and off-nominal goals, enable it to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and the definition of system failure scenarios.
Risk management of PPP project in the preparation stage based on Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Xing, Yuanzhi; Guan, Qiuling
2017-03-01
The risk management of PPP (Public Private Partnership) projects can improve the level of risk control between government departments and private investors, so as to support more beneficial decisions, reduce investment losses, and achieve mutual benefit. This paper therefore takes the risks of the PPP project preparation stage as its research object, identifying and confirming four types of risk. Fault tree analysis (FTA) is used to evaluate the risk factors belonging to different parts and to quantify the degree of risk impact on the basis of the risk identification. In addition, the importance order of the risk factors in the PPP project preparation stage is determined by calculating unit structure importance. The results show that accuracy of government decision-making, rationality of private investors' fund allocation, and instability of market returns are the main factors generating shared risk in the project.
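Before structure-importance measures can be computed, the fault tree's minimal cut sets must be known. A MOCUS-style top-down expansion is sketched below on a hypothetical preparation-stage tree; the risk names are invented placeholders, not the paper's factors.

```python
def minimal_cut_sets(tree):
    """MOCUS-style expansion: an OR gate contributes alternative rows, an AND
    gate joins events into a row; superset rows are then removed so only the
    minimal cut sets remain. Basic events are strings; gates are tuples."""
    def expand(node):
        if isinstance(node, str):            # basic event
            return [{node}]
        op, children = node
        if op == "OR":
            return [cs for c in children for cs in expand(c)]
        if op == "AND":
            rows = [set()]
            for c in children:
                rows = [r | cs for r in rows for cs in expand(c)]
            return rows
        raise ValueError("unknown gate: " + op)
    rows = expand(tree)
    minimal = [r for r in rows if not any(other < r for other in rows)]
    out = []
    for r in minimal:                        # deduplicate
        if r not in out:
            out.append(r)
    return out

# Hypothetical tree: preparation-stage failure occurs if the decision risk
# occurs, or if the funding risk combines with the market or decision risk.
tree = ("OR", ["decision", ("AND", ["funding", ("OR", ["market", "decision"])])])
print(minimal_cut_sets(tree))  # [{'decision'}, {'funding', 'market'}]
```

The non-minimal set {funding, decision} is discarded because it contains the single-event cut set {decision}.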
Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1994-01-01
The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
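The contrast between the two treatments of uncertainty can be sketched with triangular fuzzy probabilities (low, mode, high): the modal values reproduce the point-probability calculation, while the endpoints carry the vagueness through the gates. This endpoint-wise arithmetic is a common triangular approximation, not the FUZZYFTA implementation, and the component numbers are hypothetical.

```python
def and_gate(fuzzy_probs):
    """AND gate on triangular fuzzy probabilities (low, mode, high):
    multiply endpoint-wise (a standard triangular approximation)."""
    out = (1.0, 1.0, 1.0)
    for a, m, b in fuzzy_probs:
        out = (out[0] * a, out[1] * m, out[2] * b)
    return out

def or_gate(fuzzy_probs):
    """OR gate: 1 - prod(1 - p), endpoint-wise; the gate function is monotone
    in each input, so interval endpoints map directly to output endpoints."""
    out = (1.0, 1.0, 1.0)
    for a, m, b in fuzzy_probs:
        out = (out[0] * (1 - a), out[1] * (1 - m), out[2] * (1 - b))
    return (1 - out[0], 1 - out[1], 1 - out[2])

# Hypothetical engine-component failure estimates with vagueness bands.
valve = (0.01, 0.03, 0.05)
igniter = (0.005, 0.01, 0.02)
top = or_gate([valve, igniter])
print(top)  # modal value 1 - 0.97 * 0.99 = 0.0397, with a band around it
```

A purely probabilistic analysis would keep only the modal values; the fuzzy endpoints make the vagueness in the inputs visible in the top-event result.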
Enterprise architecture availability analysis using fault trees and stakeholder interviews
NASA Astrophysics Data System (ADS)
Närman, Per; Franke, Ulrik; König, Johan; Buschle, Markus; Ekstedt, Mathias
2014-01-01
The availability of enterprise information systems is a key concern for many organisations. This article describes a method for availability analysis based on Fault Tree Analysis and constructs from the ArchiMate enterprise architecture (EA) language. To test the quality of the method, several case studies within the banking and electrical utility industries were performed. Input data were collected through stakeholder interviews. The results from the case studies were compared with availability log data to determine the accuracy of the method's predictions. In the five cases where accurate log data were available, the yearly downtime estimates were within eight hours of the actual downtimes. The cost of performing the analysis was low; no case study required more than 20 man-hours of work, making the method ideal for practitioners with an interest in obtaining rapid availability estimates of their enterprise information systems.
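Turning component availabilities into a yearly downtime estimate of the kind reported above takes only series/parallel composition. The architecture and availability figures below are hypothetical, not drawn from the case studies.

```python
def series_availability(avails):
    """All components are needed (series): availabilities multiply."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def parallel_availability(avails):
    """Redundant components (parallel): unavailabilities multiply."""
    u = 1.0
    for x in avails:
        u *= 1.0 - x
    return 1.0 - u

HOURS_PER_YEAR = 8766  # average year, including leap days

# Hypothetical system: a redundant application-server pair (0.99 each)
# in series with a single database (0.999).
app_pair = parallel_availability([0.99, 0.99])     # 0.9999
system = series_availability([app_pair, 0.999])    # ~0.9989
downtime_hours = (1.0 - system) * HOURS_PER_YEAR
print(round(downtime_hours, 1))  # roughly ten hours per year
```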
Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A
2001-10-12
As it is conventionally done, strategies for incorporating accident-prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps, risk assessment and hazard reduction (or safety) measures, are not linked interactively in the existing methodologies. This prevents a quantitative assessment of the impacts of safety measures on risk control. We have made an attempt to develop a methodology in which the risk assessment steps are interactively linked with the implementation of safety measures. The resultant system tells us the extent to which risk is reduced by each successive safety measure. It also tells, based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA), whether a given unit can ever be made 'safe'. The application of the methodology is illustrated with a case study.
A fault tree model to assess probability of contaminant discharge from shipwrecks.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I
2014-11-15
Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on the prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation, and identification of parameters to be investigated further in order to obtain a more reliable risk calculation.
Qualitative Importance Measures of Systems Components - A New Approach and Its Applications
NASA Astrophysics Data System (ADS)
Chybowski, Leszek; Gawdzińska, Katarzyna; Wiśnicki, Bogusz
2016-12-01
The paper presents an improved methodology for analysing the qualitative importance of components in the functional and reliability structures of a system. We present basic importance measures: Birnbaum's structural measure, the order of the smallest minimal cut set, the repetition count of the i-th event in the fault tree, and the streams measure. A subsystem of circulation pumps and fuel heaters in the main-engine fuel supply system of a container vessel illustrates the qualitative importance analysis. We constructed a functional model and a fault tree, which we analysed using qualitative measures. Additionally, we compared the calculated measures and introduced corrected measures as a tool for improving the analysis. We proposed scaled measures and a common measure taking into account the location of a component in the reliability and functional structures. Finally, we proposed areas where the measures could be applied.
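Birnbaum's structural measure can be computed by brute force for a small system: for each component, count the fraction of states of the remaining components in which that component is critical (the system works with it up and fails with it down). The pump-and-heaters structure below is a simplified stand-in for the paper's circulation-pump/fuel-heater subsystem, not its actual model.

```python
from itertools import product

def structural_importance(phi, n):
    """Birnbaum's structural measure for each of n components: the fraction
    of states of the other components in which the component is critical,
    i.e. phi(1_i, x) = 1 but phi(0_i, x) = 0 (phi assumed monotone)."""
    importance = []
    for i in range(n):
        critical, total = 0, 0
        for rest in product([0, 1], repeat=n - 1):
            x_up = list(rest[:i]) + [1] + list(rest[i:])
            x_dn = list(rest[:i]) + [0] + list(rest[i:])
            critical += phi(x_up) - phi(x_dn)
            total += 1
        importance.append(critical / total)
    return importance

# Hypothetical structure: pump P in series with two redundant heaters H1, H2;
# the system works iff the pump works and at least one heater works.
phi = lambda x: int(x[0] and (x[1] or x[2]))
print(structural_importance(phi, 3))  # [0.75, 0.25, 0.25]
```

The series pump is critical in three of the four heater states, while each redundant heater matters only when the pump is up and the other heater is down, reproducing the intuition that series components dominate structural importance.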
Schwartz, D.P.; Pantosti, D.; Okumura, K.; Powers, T.J.; Hamilton, J.C.
1998-01-01
Trenching, microgeomorphic mapping, and tree ring analysis provide information on timing of paleoearthquakes and behavior of the San Andreas fault in the Santa Cruz mountains. At the Grizzly Flat site alluvial units dated at 1640-1659 A.D., 1679-1894 A.D., 1668-1893 A.D., and the present ground surface are displaced by a single event. This was the 1906 surface rupture. Combined trench dates and tree ring analysis suggest that the penultimate event occurred in the mid-1600s, possibly in an interval as narrow as 1632-1659 A.D. There is no direct evidence in the trenches for the 1838 or 1865 earthquakes, which have been proposed as occurring on this part of the fault zone. In a minimum time of about 340 years only one large surface faulting event (1906) occurred at Grizzly Flat, in contrast to previous recurrence estimates of 95-110 years for the Santa Cruz mountains segment. Comparison with dates of the penultimate San Andreas earthquake at sites north of San Francisco suggests that the San Andreas fault between Point Arena and the Santa Cruz mountains may have failed either as a sequence of closely timed earthquakes on adjacent segments or as a single long rupture similar in length to the 1906 rupture around the mid-1600s. The 1906 coseismic geodetic slip and the late Holocene geologic slip rate on the San Francisco peninsula and southward are about 50-70% and 70% of their values north of San Francisco, respectively. The slip gradient along the 1906 rupture section of the San Andreas reflects partitioning of plate boundary slip onto the San Gregorio, Sargent, and other faults south of the Golden Gate. If a mid-1600s event ruptured the same section of the fault that failed in 1906, it supports the concept that long strike-slip faults can contain master rupture segments that repeat in both length and slip distribution. 
Recognition of a persistent slip rate gradient along the northern San Andreas fault and the concept of a master segment remove the requirement that lower slip sections of large events such as 1906 must fill in on a periodic basis with smaller and more frequent earthquakes.
NASA Astrophysics Data System (ADS)
Schwab, D.; Bidgoli, T.; Taylor, M. H.
2015-12-01
South-central Kansas has experienced an unprecedented increase in seismic activity since 2013. The spatial and temporal relationship of the seismicity with brine disposal operations has renewed interest in the role of fluids in fault reactivation. This study focuses on determining the suitability of CO2 injection into a Cambro-Ordovician reservoir for long-term storage and a Mississippian reservoir for enhanced oil recovery in Wellington Field, Sumner County, Kansas. Our approach for determining the potential for induced seismicity has been to (1) map subsurface faults and estimate in-situ stresses, (2) perform slip and dilation tendency analysis to identify optimally-oriented faults relative to the estimated stress field, and (3) monitor surface deformation through cGPS data and InSAR imaging. Through the use of 3D seismic reflection data, 60 near vertical, NNE-striking faults have been identified. The faults range in length from 140-410 m and have vertical separations of 3-32 m. A number of faults appear to be restricted to shallow intervals, while others clearly cut the top basement reflector. Drilling-induced tensile fractures (N=78) identified from image logs and inversion of earthquake focal mechanism solutions (N=54) are consistent with the maximum horizontal stress (SHmax) oriented ~E-W. Both strike-slip and normal-slip fault plane solutions for earthquakes near the study area suggest that SHmax and Sv may be similar in magnitude. Estimates of stress magnitudes using step rate tests (Shmin = 2666 psi), density logs (Sv = 5308 psi), and calculations from wells with drilling-induced tensile fractures (SHmax = 4547-6655 psi) are determined at the gauge depth of 4,869 ft. Preliminary slip and dilation tendency analysis indicates that faults striking 0°-20° are stable, whereas faults striking 26°-44° may have a moderate risk for reactivation with increasing pore-fluid pressure.
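The slip-tendency calculation underlying step (2) can be sketched for vertical faults by resolving only the horizontal stresses; the SHmax magnitude used below is one hypothetical value inside the reported 4547-6655 psi range, and pore pressure and Sv are ignored, so this is a simplified illustration rather than the study's full analysis:

```python
import math

def slip_tendency(strike_deg, shmax_psi, shmin_psi, shmax_azimuth_deg=90.0):
    """Slip tendency Ts = tau / sigma_n on a vertical fault plane,
    resolving only the horizontal stresses (Sv and pore pressure ignored)."""
    # Angle between the fault normal and the SHmax direction.
    alpha = math.radians((strike_deg + 90.0) - shmax_azimuth_deg)
    sigma_n = shmax_psi * math.cos(alpha) ** 2 + shmin_psi * math.sin(alpha) ** 2
    tau = abs((shmax_psi - shmin_psi) * math.sin(alpha) * math.cos(alpha))
    return tau / sigma_n

# Shmin from the abstract; SHmax is a hypothetical pick within 4547-6655 psi,
# with SHmax oriented ~E-W (azimuth 90 degrees).
shmax, shmin = 5600.0, 2666.0

print(round(slip_tendency(10.0, shmax, shmin), 3))  # near SHmax-parallel: low
print(round(slip_tendency(35.0, shmax, shmin), 3))  # higher slip tendency
```

Consistent with the abstract, a fault striking 10° (near the SHmax direction) shows a much lower slip tendency than one striking 35°.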
Moran, Michael J.; Wilson, Jon W.; Beard, L. Sue
2015-11-03
Several major faults, including the Salt Cedar Fault and the Palm Tree Fault, play an important role in the movement of groundwater. Groundwater may move along these faults and discharge where faults intersect volcanic breccias or fractured rock. Vertical movement of groundwater along faults is suggested as a mechanism for the introduction of heat energy present in groundwater from many of the springs. Groundwater altitudes in the study area indicate a potential for flow from Eldorado Valley to Black Canyon although current interpretations of the geology of this area do not favor such flow. If groundwater from Eldorado Valley discharges at springs in Black Canyon then the development of groundwater resources in Eldorado Valley could result in a decrease in discharge from the springs. Geology and structure indicate that it is not likely that groundwater can move between Detrital Valley and Black Canyon. Thus, the development of groundwater resources in Detrital Valley may not result in a decrease in discharge from springs in Black Canyon.
NASA Astrophysics Data System (ADS)
Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali
2017-07-01
The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis, but assessing fault severity remains a challenging task. The Wavelet Transform (WT) is a multiresolution analysis tool that trades off between the time and frequency information in the signals and also serves as a de-noising method. The Continuous Wavelet Transform (CWT) scaling function gives different resolutions to the discretised signal: very fine resolution at lower scales but coarser resolution at higher scales. However, its computational cost increases because every signal resolution must be produced. The Discrete Wavelet Transform (DWT) has a lower computational cost because its dilation function decomposes the signal through a tree of low- and high-pass filters without further analysing the high-frequency components. In this paper, a method for bearing fault identification is presented that combines the CWT and the DWT with envelope analysis. The experimental data were sampled by Case Western Reserve University. The results showed that the proposed method is effective in detecting bearing faults, identifying the exact fault location and assessing severity, especially for inner-race and outer-race faults.
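The tree of low- and high-pass filters that the DWT applies can be sketched with the simplest (Haar) filter pair; a real diagnosis pipeline would use a longer wavelet (e.g. Daubechies) and spectral analysis of the envelope, so this is only a minimal illustration of the decomposition and envelope steps:

```python
# One level of a Haar DWT (low/high-pass filter pair with downsampling),
# followed by a crude envelope via rectification and moving average.

def haar_dwt(signal):
    """Split a signal into approximation (low-pass) and detail (high-pass),
    each half the input length."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        approx.append((signal[i] + signal[i + 1]) / 2 ** 0.5)
        detail.append((signal[i] - signal[i + 1]) / 2 ** 0.5)
    return approx, detail

def envelope(signal, window=4):
    """Rectify and smooth: a cheap stand-in for a Hilbert envelope."""
    rect = [abs(x) for x in signal]
    return [sum(rect[max(0, i - window + 1):i + 1]) /
            (i - max(0, i - window + 1) + 1) for i in range(len(rect))]

x = [0.0, 1.0, 0.0, -1.0] * 8          # toy vibration signal, 32 samples
approx, detail = haar_dwt(x)
env = envelope(detail)
print(len(approx), len(detail))         # each half the input length
```

Repeating `haar_dwt` on the approximation coefficients builds the filter tree the abstract describes; bearing defect frequencies would then be read from the envelope spectrum of the relevant detail band.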
Experimental evaluation of the certification-trail method
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. A comprehensive attempt to assess experimentally the performance and overall value of the method is reported. The method is applied to algorithms for the following problems: Huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which an approach using certification trails allows for significantly faster overall program execution time than a basic time-redundancy approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed. In both cases, the algorithm which performs answer-validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis which enable comparison between the certification-trail method and the time-redundancy approach were presented. The analysis reveals some substantial and sometimes surprising advantages for the certification-trail method. Finally, the work our group performed on the design and implementation of fault-injection testbeds for experimental analysis of the certification-trail technique is discussed. This work employs two distinct methodologies: software fault injection (modification of instruction, data, and stack segments of programs on a Sun Sparcstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed at logical zero/one values). Our results indicate the viability of the certification-trail technique. It is also believed that the tools developed provide a solid base for additional exploration.
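The certification-trail idea can be illustrated with the sorting problem mentioned above: the primary run emits a trail (here, the sort permutation) that lets a second pass certify the answer far more cheaply than recomputing it. The trail encoding is an assumption for illustration, not the paper's exact scheme:

```python
# Certification-trail sketch for sorting: the first run outputs, as its
# "trail", the permutation mapping input positions to output positions.
# A second, cheaper pass certifies the answer instead of re-sorting.

def sort_with_trail(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])  # primary computation
    return [xs[i] for i in order], order                 # answer + trail

def certify(xs, answer, trail):
    """Check that the trail is a permutation, that it maps xs onto the
    claimed answer, and that the answer is non-decreasing."""
    if sorted(trail) != list(range(len(xs))):   # permutation check; a bitmap
        return False                            # would make this pass O(n)
    if [xs[i] for i in trail] != answer:
        return False
    return all(a <= b for a, b in zip(answer, answer[1:]))

data = [5, 3, 8, 1]
answer, trail = sort_with_trail(data)
print(certify(data, answer, trail))          # a fault-free run certifies
print(certify(data, [1, 3, 5, 9], trail))    # a corrupted answer is caught
```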
Investigation of Fuel Oil/Lube Oil Spray Fires On Board Vessels. Volume 3.
1998-11-01
U.S. Coast Guard Research and Development Center, 1082 Shennecossett Road, Groton, CT 06340-6096. Report No. CG-D-01-99, III, Investigation of Fuel ...refinery). Developed the technical and mathematical specifications for BRAVO™ 2.0, a state-of-the-art Windows program for performing event tree and fault tree analyses. Also managed the development of and prepared the technical specifications for QRA ROOTS™, a Windows program for storing, searching...
1992-01-01
...boost plenum which houses the camshaft. The compressed mixture is metered by a throttle to intake valves of the engine. The engine is constructed from... difficulties associated with a time-tagged fault tree. In particular, recent work indicates that the multi-layer perceptron architecture can give good FDI... Abstract: In the past decade, wastepaper recycling has gained wider acceptance. Depletion of tree stocks, waste water treatment demands and...
Interim reliability evaluation program, Browns Ferry 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1981-01-01
Probabilistic risk analysis techniques, i.e., event tree and fault tree analysis, were utilized to provide a risk assessment of the Browns Ferry Nuclear Plant Unit 1. Browns Ferry 1 is a General Electric boiling water reactor of the BWR 4 product line with a Mark 1 (drywell and torus) containment. Within the guidelines of the IREP Procedure and Schedule Guide, dominant accident sequences that contribute to public health and safety risks were identified and grouped according to release categories.
Cost-effectiveness analysis of risk-reduction measures to reach water safety targets.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof; Pettersson, Thomas J R
2011-01-01
Identifying the most suitable risk-reduction measures in drinking water systems requires a thorough analysis of possible alternatives. In addition to the effects on the risk level, also the economic aspects of the risk-reduction alternatives are commonly considered important. Drinking water supplies are complex systems and to avoid sub-optimisation of risk-reduction measures, the entire system from source to tap needs to be considered. There is a lack of methods for quantification of water supply risk reduction in an economic context for entire drinking water systems. The aim of this paper is to present a novel approach for risk assessment in combination with economic analysis to evaluate risk-reduction measures based on a source-to-tap approach. The approach combines a probabilistic and dynamic fault tree method with cost-effectiveness analysis (CEA). The developed approach comprises the following main parts: (1) quantification of risk reduction of alternatives using a probabilistic fault tree model of the entire system; (2) combination of the modelling results with CEA; and (3) evaluation of the alternatives with respect to the risk reduction, the probability of not reaching water safety targets and the cost-effectiveness. The fault tree method and CEA enable comparison of risk-reduction measures in the same quantitative unit and consider costs and uncertainties. The approach provides a structured and thorough analysis of risk-reduction measures that facilitates transparency and long-term planning of drinking water systems in order to avoid sub-optimisation of available resources for risk reduction. Copyright © 2010 Elsevier Ltd. All rights reserved.
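Part (2), combining the modelled risk reduction with CEA, reduces to ranking alternatives by cost per unit of risk reduction; the alternatives, costs and risk-reduction values below are hypothetical:

```python
# Ranking risk-reduction alternatives by cost-effectiveness ratio (CER):
# annual cost per unit of risk reduction. All numbers are invented.

alternatives = {
    # name: (annual cost, risk reduction from the fault tree model)
    "new UV barrier":      (120_000, 1.50),
    "reservoir fencing":   (15_000, 0.10),
    "backup chlorination": (40_000, 0.80),
}

def cost_effectiveness(cost, risk_reduction):
    """Lower CER = more risk reduction per unit of money spent."""
    return cost / risk_reduction

ranked = sorted(alternatives.items(),
                key=lambda kv: cost_effectiveness(*kv[1]))
for name, (cost, dr) in ranked:
    print(f"{name}: {cost_effectiveness(cost, dr):,.0f} per unit risk reduced")
```

In the full approach, the risk-reduction term comes from re-running the probabilistic fault tree model of the source-to-tap system with the measure in place, so uncertainty in the model propagates into the CER.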
1935-01-02
Navy Santa Cruz 'Airport'. Airport looking west. Municipal Rating: Location: 6 miles east of city. Altitude: 75 ft. Layout: L-shaped, hard smooth dirt. Drainage: natural. East-west 2,500' x 300'; North-South 2,000' x 400'. To S.E.: trees; to N.E.: hangar and aviation fuel. Tower: 50' height. Day Service Only.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, J.F.
A description is given of Acacia auriculiformis, together with a warning against its use for ornamental landscaping in Florida (a hurricane area). The tree grows very fast, reaching 30-55 ft in 8 years, lacks wind resistance, produces much persistent litter, seeds itself freely and is now a common weed species in Florida. The wood is of value for handicrafts. 3 references.
Paul F. Doruska; David W. Patterson; Matthew B. Hurd; Jonathan I. Hartley
2013-01-01
Equations were developed to estimate outside-bark, merchantable stem green weight (lb) and inside-bark merchantable stem volume (ft3) for sawtimber-sized Nuttall oak (Quercus texana Buckley), overcup oak (Quercus lyrata Walt.), water oak (Quercus nigra L.), and willow oak (Quercus...
James H. Miller
1999-01-01
Plant: Twining and trailing semi-woody vine, 10-30 m (35-100 ft) long, with rope-like vines covering mature trees and forming dense patches, having 3-leaflet leaves on hairy petioles and stems, deciduous and dying with first frost, yielding fragrant red-purple flowers in mid-summer, and hairy flat capsules with few seeds in fall. Large, semi-...
7 CFR 301.50-10 - Treatments and management method.
Code of Federal Regulations, 2010 CFR
2010-01-01
... be treated with methyl bromide at normal atmospheric pressure with 48 g/m3 (3 lb/1000 ft3) for 16... garlands. Cut pine Christmas trees and raw pine materials for pine wreaths and garlands may be treated with methyl bromide at normal atmospheric pressure as follows: Temperature Dosage: pounds per 1000 feet 3...
Pinus ponderosa: geographic races and subspecies based on morphological variation
Robert Z. Callaham
2013-01-01
Morphological variation of ponderosa pine (Pinus ponderosa Dougl. ex Laws.), growing north of Mexico, is described. A map shows distributions of five putative races that are analyzed and discussed. Characteristics of branches, shoots, and needles were measured for 10 or fewer trees growing on 147 plots located at 1,500-ft elevational intervals...
CARE3MENU- A CARE III USER FRIENDLY INTERFACE
NASA Technical Reports Server (NTRS)
Pierce, J. L.
1994-01-01
CARE3MENU generates an input file for the CARE III program. CARE III is used for reliability prediction of complex, redundant, fault-tolerant systems including digital computers, aircraft, nuclear and chemical control systems. The CARE III input file often becomes complicated and is not easily formatted with a text editor. CARE3MENU provides an easy, interactive method of creating an input file by automatically formatting a set of user-supplied inputs for the CARE III system. CARE3MENU provides detailed on-line help for most of its screen formats. The reliability model input process is divided into sections using menu-driven screen displays. Each stage, or set of identical modules comprising the model, must be identified and described in terms of number of modules, minimum number of modules for stage operation, and critical fault threshold. The fault handling and fault occurrence models are detailed in several screens by parameters such as transition rates, propagation and detection densities, Weibull or exponential characteristics, and model accuracy. The system fault tree and critical pairs fault tree screens are used to define the governing logic and to identify modules affected by component failures. Additional CARE3MENU screens prompt the user for output options and run time control values such as mission time and truncation values. There are fourteen major screens, many with default values and HELP options. The documentation includes: 1) a users guide with several examples of CARE III models, the dialog required to input them to CARE3MENU, and the output files created; and 2) a maintenance manual for assistance in changing the HELP files and modifying any of the menu formats or contents. CARE3MENU is written in FORTRAN 77 for interactive execution and has been implemented on a DEC VAX series computer operating under VMS. This program was developed in 1985.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario
2015-04-01
The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components, exploiting the physical knowledge of the SOFC system as a whole. This phase consists of the Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at the system component level. The main outcome of the FTA is an inferential isolation tool (the Fault Signature Matrix, FSM), which uniquely links the faults to the symptoms detected during system monitoring. In this work the FTA is considered as a starting point to develop an improved FSM. Making use of a model-based investigation, a fault-to-symptoms dependency study is performed. To this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated, one for the stack and four occurring at the balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
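The FSM lookup described can be sketched as a boolean signature match; the faults, symptoms and signatures below are invented for illustration, not the paper's actual matrix:

```python
# Fault Signature Matrix (FSM) isolation: each fault maps to a boolean
# symptom pattern; an observed pattern is matched against the rows.
# Faults and symptoms are hypothetical examples for an SOFC system.

fsm = {
    # fault: (high stack T, low voltage, high blower speed, low fuel flow)
    "stack degradation":      (1, 1, 0, 0),
    "blower wear":            (0, 1, 1, 0),
    "fuel leak":              (1, 0, 0, 1),
    "heat exchanger fouling": (1, 0, 1, 0),
}

def isolate(observed):
    """Return faults whose signature matches the observed symptoms exactly.
    Unique isolation requires every row of the FSM to be distinct."""
    return [fault for fault, sig in fsm.items() if sig == observed]

print(isolate((0, 1, 1, 0)))   # matches one fault
print(isolate((1, 1, 1, 1)))   # pattern not in the FSM: no isolation
```

The symptom thresholds discussed in the abstract determine when each measured variable flips its bit from 0 to 1, which is what makes the matching robust to noise.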
TU-AB-BRD-03: Fault Tree Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunscombe, P.
2015-06-15
Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program.
Learning Objectives: Learn how to design a process map for a radiotherapy process. Learn how to perform failure modes and effects analysis for a given process. Learn what fault trees are all about. Learn how to design a quality management program based upon the information obtained from process mapping, failure modes and effects analysis, and fault tree analysis. Dunscombe: Director, TreatSafely, LLC and Center for the Assessment of Radiological Sciences; Consultant to IAEA and Varian. Thomadsen: President, Center for the Assessment of Radiological Sciences. Palta: Vice President of the Center for the Assessment of Radiological Sciences.
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is based exclusively on active faults and geodynamic finite element input models; seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters, together with estimates of its slip rate. By default, all deformation in this model is released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding fault activity rates, earthquake rates and expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters, constructing corresponding branches of the seismic hazard logic tree. Hazard maps and uniform hazard spectrum (UHS) curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results, and to what extent.
The results of this comparison show that the deformation model, with its internal variability, and the choice of the ground motion prediction equations (GMPEs) are the most influential parameters; both significantly affect the hazard results. Good knowledge of the existence of active faults and of their geometric and activity characteristics is therefore of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
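The logic-tree aggregation described (a weighted mean hazard plus 5th and 95th percentiles over branches) can be sketched as follows; branch weights and PGA values here are hypothetical, and a real model would carry hundreds of branches:

```python
# Combining logic-tree branches into a mean hazard value and percentiles.
# Branch weights and PGA values (g) are invented for illustration.

branches = [  # (branch weight, PGA at 10% in 50 yr)
    (0.3, 0.18), (0.3, 0.22), (0.2, 0.25), (0.2, 0.30),
]

mean_pga = sum(w * v for w, v in branches)

def weighted_percentile(branches, q):
    """Smallest PGA whose cumulative branch weight reaches q."""
    acc = 0.0
    for w, v in sorted(branches, key=lambda b: b[1]):
        acc += w
        if acc >= q:
            return v
    return max(v for _, v in branches)

print(round(mean_pga, 3))                 # weighted mean hazard
print(weighted_percentile(branches, 0.05),
      weighted_percentile(branches, 0.95))  # spread = model limits
```

Repeating this at every grid node, for every spectral period, yields the hazard maps and percentile bounds the abstract describes.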
1991-11-01
Just above Cornay's Bridge they sunk the steamer Flycatcher and a schooner loaded with bricks, plus live oak trees were cut down and thrown into the... [flattened table fragment: single detection targets such as an engine camshaft and a cast-iron soil pipe (10 ft long, 100 lbs), with search-area dimensions and depths in feet] ...hitting any of the numerous fallen trees, snags, submerged logs, shallow sand bars, etc., which occur along much of the
A fault is born: The Landers-Mojave earthquake line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nur, A.; Ron, H.
1993-04-01
The epicenter and the southern portion of the 1992 Landers earthquake fell on an approximately N-S earthquake line, defined both by epicentral locations and by the rupture directions of four previous M>5 earthquakes in the Mojave: the 1947 Manix, 1975 Galway Lake, 1979 Homestead Valley, and 1992 Joshua Tree events. Another M 5.2 earthquake epicenter in 1965 fell on this line where it intersects the Calico fault. In contrast, the northern part of the Landers rupture followed the NW-SE trending Camp Rock and parallel faults, exhibiting an apparently unusual rupture kink. The block tectonic model (Ron et al., 1984), combining fault kinematics and mechanics, explains both the alignment of the events and their ruptures (Nur et al., 1986, 1989), as well as the Landers kink (Nur et al., 1992). Accordingly, the now NW-oriented faults have rotated into their present direction away from the direction of maximum shortening, close to becoming locked, whereas a new fault set, optimally oriented relative to the direction of shortening, is developing to accommodate current crustal deformation. The Mojave-Landers line may thus be a new fault in formation. During the transition of faulting from the old, well developed and weak but poorly oriented faults to the strong but favorably oriented new ones, both can slip simultaneously, giving rise to kinks such as Landers.
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. Authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
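The observation that frequent step failures rarely produce overall system failure can be made concrete by modelling each step's workaround explicitly; the steps, failure probabilities and workaround success rates below are hypothetical, not the study's measured estimates:

```python
# Effective failure probability of a medication-processing step when a
# documented workaround exists: the step truly fails only if the primary
# path fails AND the workaround also fails. All rates are invented.

steps = [
    # (primary failure prob per use, workaround success prob)
    (0.10, 0.99),   # order-entry screen freezes; re-login usually works
    (0.05, 0.95),   # barcode scan fails; manual entry
    (0.02, 0.90),   # label printer jam; reprint from queue
]

def effective_step_failure(p_fail, p_workaround):
    return p_fail * (1.0 - p_workaround)

# Series system (fault-tree OR over step failures), assuming independence.
p_any_failure = 1.0
for p_fail, p_wa in steps:
    p_any_failure *= 1.0 - effective_step_failure(p_fail, p_wa)
p_any_failure = 1.0 - p_any_failure

print(round(p_any_failure, 5))   # rare overall failure despite frequent faults
```

Even though the primary paths fail 2-10% of the time, the combined effective failure probability stays well below 1%, mirroring the paper's finding; the time cost of invoking workarounds is what degrades efficiency.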
An Application of the Geo-Semantic Micro-services in Seamless Data-Model Integration
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Liu, R.; Hu, Y.; Marini, L.; Peckham, S. D.; Hsu, L.
2016-12-01
We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to the AE data in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprising fault blocks surrounding fault gouge of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017). https://arxiv.org/abs/1702.05774 Rouet-Leduc, B.
et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field. Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
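A typical first step in such a pipeline, turning the continuous AE signal into windowed statistical features for tree-based regressors, can be sketched as follows; the feature choice (variance, kurtosis) is an assumption based on common practice, not the papers' exact feature list:

```python
# Rolling statistical features from a continuous AE signal, of the kind
# fed to Random Forest / Gradient Boosted Tree regressors to predict
# friction. Feature set and toy signal are illustrative assumptions.

def rolling_features(signal, window):
    feats = []
    for start in range(0, len(signal) - window + 1, window):
        w = signal[start:start + window]
        n = len(w)
        mean = sum(w) / n
        var = sum((x - mean) ** 2 for x in w) / n
        kurt = (sum((x - mean) ** 4 for x in w) / n / var ** 2) if var else 0.0
        feats.append((var, kurt))
    return feats

# Toy AE trace: quiet background with one impulsive precursor burst.
ae = [0.1, -0.1] * 20 + [2.0, -1.8, 1.5, -1.2] + [0.1, -0.1] * 18
features = rolling_features(ae, window=8)
print(max(f[0] for f in features) > 0.5)   # burst window has high variance
```

Each feature row would be paired with the shear stress measured over the same window to form the supervised training set; at test time only the AE-derived features are used, as the abstract describes.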
Ground-water resources of southern Tangipahoa Parish and adjacent areas, Louisiana
Rapp, T.R.
1994-01-01
Groundwater resources in southern Tangipahoa Parish and adjacent areas were studied to determine their potential for development as an alternative to the Mississippi River as a water-supply source for Jefferson Parish. Eight major aquifers consisting of thick sand units that underlie the study area are, in descending order: (1) shallow, (2) upper Ponchatoula, (3) lower Ponchatoula, (4) Abita, (5) Covington, (6) Tchefuncta, (7) Hammond, and (8) Amite. A fault zone, referred to as the Baton Rouge fault, crosses southern Tangipahoa Parish. Analyses of geophysical logs indicated that the deep aquifers south of the fault zone had been displaced from 350 to 400 feet, and that the deeper aquifers were not in hydraulic connection with the flow system north of the fault. The groundwater resources of southeastern Louisiana are immense and the quality of groundwater in Tangipahoa Parish is suitable for most uses. The quality of water in these aquifers generally meets the U.S. Environmental Protection Agency's standards for public supply. The hydrologic system underlying Tangipahoa Parish and adjacent areas in 1990 supplied about 19 Mgal/d of water that was suitable for public supply. However, substantial increases in pumping from the aquifer system would result in renewed water-level declines throughout the hydrologic system until a new equilibrium is established. A test well in southern Tangipahoa Parish penetrated all eight aquifers. Total thickness of freshwater sand beds penetrated by the 3,003-ft test hole was more than 1,900 ft. Resistivity values from an electric log of the test typically averaged 200 ohm-meters, which indicates that the water has low dissolved-solids and chloride concentrations. An analysis of the Abita aquifer at Ruddock in St.
John the Baptist Parish, for two of three hypothetical well fields, indicated that for a hypothetical well field with a pumping rate of 112 Mgal/d, the freshwater/saltwater interface could arrive at the outer perimeter well in 10 to 14 years. The 1990 location of the interface in the Abita aquifer is 1.9 mi from the southernmost part of the potential location of the 112 Mgal/d well field.
Perched Ground Water in Zeolitized-Bedded Tuff, Rainier Mesa and Vicinity, Nevada Test Site, Nevada
Thordarson, William
1965-01-01
Rainier Mesa--site of the first series of underground nuclear detonations--is the highest of a group of ridges and mesas within the Nevada Test Site. The mesa is about 9.5 square miles in area and reaches a maximum altitude of 7,679 feet. The mesa is underlain by welded tuff, friable-bedded tuff, and zeolitized-bedded tuff of the Piapi Canyon Group and the Indian Trail Formation of Tertiary age. The tuff--2,000 to 9,000 feet thick--rests unconformably upon thrust-faulted miogeosynclinal rocks of Paleozoic age. Zeolitic-bedded tuff at the base of the tuff sequence controls the recharge rate of ground water to the underlying and more permeable Paleozoic aquifers. The zeolitic tuff--600 to 800 feet thick--is a fractured aquitard with high interstitial porosity, but with very low interstitial permeability and fracture transmissibility. The interstitial porosity ranges from 29 to 38 percent, the interstitial permeability is generally less than 0.009 gpd/ft3, and the fracture transmissibility ranges from 10 to 100 gpd/ft for 900 feet of saturated rock. The tuff is generally fully saturated interstitially hundreds of feet above the regional water table, yet no appreciable volume of water moves through the interstices because of the very low permeability. The only freely moving water observed in miles of underground workings occurred in fractures, usually fault zones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.
Risk Analysis Methods for Deepwater Port Oil Transfer Systems
DOT National Transportation Integrated Search
1976-06-01
This report deals with the risk analysis methodology for oil spills from the oil transfer systems in deepwater ports. Failure mode and effect analysis in combination with fault tree analysis are identified as the methods best suited for the assessment...
A-Priori Rupture Models for Northern California Type-A Faults
Wills, Chris J.; Weldon, Ray J.; Field, Edward H.
2008-01-01
This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003, referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data--the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon and Concord-Green Valley--be modeled as Type B faults to be consistent with similarly poorly-known faults statewide. 
As a result, the modified segmented models discussed here only concern the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive level of effort by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-priori models. We have modified the WGCEP-2002 models where necessary to match data that were not available or not used by that WGCEP, and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This Appendix presents the minor changes needed to accommodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters and earthquake probabilities in the WG-2002 report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haywood, J.D.
1993-09-01
Areas in a 4-year-old loblolly pine (Pinus taeda L.) plantation were treated with aerially applied Roundup (glyphosate), Pronone 10G (hexazinone), and Velpar L (hexazinone) plus Lo Drift (a spray additive). All herbicides were applied with appropriate helicopter-mounted equipment. The proportion of free-to-grow pine trees increased over a 2-year period in both the treated and untreated areas, but the increase was slightly greater in the treated areas. Final loblolly pine height, d.b.h., and volume per tree did not differ significantly among the four treatments. About 1,200 hardwood trees and 4,700 shrubs over 3 ft tall per acre were present at the beginning of the study.
Opon gas renews interest in the hydrocarbon prolific middle Magdalena basin, Colombia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stone, D.M.; Elliott, R.; Latimer, G.
1996-08-01
A total of 45 fields have been discovered in the Middle Magdalena basin of Colombia since 1918. In August 1994, the Amoco-operated Opon-3 well tested significant hydrocarbons in the basin. The well flowed 45 MMCFGPD and 2000 BCPD from 1118 ft of perforations between 10,018 and 12,348 feet. A second well, Opon-4, tested 58 MMCFGPD from the same interval. Opon now appears to be a significant gas condensate field. A full assessment of commercial potential requires 3D seismic data, as well as further development drilling. Operational challenges include: drilling, coring, logging, cementing and testing in a world-record-setting 23 ppg hematite-weighted mud environment involving simultaneous perforation over a 2330 ft gross pay interval and management of high production rates at >8000 psi FWHP. The Opon structure is a surface anticline on the western edge of the Eastern Cordillera fold and thrust belt. Seismic definition of the trap, however, is complicated by multiple faults, steep dips, rugged topography and variable surface velocities. The main gas reservoirs are fluvial sandstones of the Eocene La Paz Formation sealed by the overlying upper Eocene Las Esmeraldas Formation shales. Individual producing sandstones range from 40 to 200 ft thick in a minimum gross gas column of 4200 ft. Average porosity is 6%. Tectonically induced fractures probably enhance reservoir performance.
Thermal history of a metamorphic core complex
NASA Astrophysics Data System (ADS)
Dokka, R. K.; Mahaffie, M. J.; Snoke, A. W.
Fission track (FT) thermochronology studies of lower plate rocks of the Ruby Mountains-East Humboldt Range metamorphic core complex provide important constraints on the timing and nature of major middle Tertiary extension of northeast Nevada. Rocks analyzed include several varieties of mylonitic orthogneiss as well as amphibolitic orthogneisses from the non-mylonitic infrastructural core. Oligocene-age porphyritic biotite granodiorite of the Harrison Pass pluton was also studied. The minerals dated include apatite, zircon, and sphene and were obtained from the same rocks that have been previously studied. FT ages are concordant and range from 26.4 Ma to 23.8 Ma, with all showing overlap at 1 sigma between 25.4 and 23.4 Ma. Concordancy of all FT ages from all structural levels indicates that the lower plate cooled rapidly from temperatures above approx. 285 C (assumed sphene closure temperature) to below approx. 150 C (assumed apatite closure temperature) near the beginning of the Miocene. This suggests that the lower plate cooled at a rate of at least approx. 36 deg C/Ma during this event. Rapid cooling of the region is considered to reflect large-scale tectonic denudation (intracrustal thinning), the vertical complement to intense crustal extension. FT data firmly establish the upper limit on the timing of mylonitization during detachment faulting and also coincide with the age of extensive landscape disruption.
Thermal history of a metamorphic core complex
NASA Technical Reports Server (NTRS)
Dokka, R. K.; Mahaffie, M. J.; Snoke, A. W.
1985-01-01
Fission track (FT) thermochronology studies of lower plate rocks of the Ruby Mountains-East Humboldt Range metamorphic core complex provide important constraints on the timing and nature of major middle Tertiary extension of northeast Nevada. Rocks analyzed include several varieties of mylonitic orthogneiss as well as amphibolitic orthogneisses from the non-mylonitic infrastructural core. Oligocene-age porphyritic biotite granodiorite of the Harrison Pass pluton was also studied. The minerals dated include apatite, zircon, and sphene and were obtained from the same rocks that have been previously studied. FT ages are concordant and range from 26.4 Ma to 23.8 Ma, with all showing overlap at 1 sigma between 25.4 and 23.4 Ma. Concordancy of all FT ages from all structural levels indicates that the lower plate cooled rapidly from temperatures above approx. 285 C (assumed sphene closure temperature) to below approx. 150 C (assumed apatite closure temperature) near the beginning of the Miocene. This suggests that the lower plate cooled at a rate of at least approx. 36 deg C/Ma during this event. Rapid cooling of the region is considered to reflect large-scale tectonic denudation (intracrustal thinning), the vertical complement to intense crustal extension. FT data firmly establish the upper limit on the timing of mylonitization during detachment faulting and also coincide with the age of extensive landscape disruption.
Illinois' forest resources, 2005
Susan J. Crocker; Gary J. Brand; Dick C. Little
2007-01-01
Results of the completed 2005 Illinois annual inventory show an estimated 4.5 million acres of forest land that supports 7.6 billion cubic feet (ft3) of total net live-tree volume. Since 1948, timberland area has steadily increased and now represents 96 percent of total forest land. Growing-stock volume on timberland has risen to an estimated 6.8...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-27
...) in height, and 20 m (66 ft) or more given good environmental conditions (Wiggins and Porter 1971, p... bark, lichens, feathers, and other materials, with a small, round side entrance (Jackson 1985, p. 191... adapt to varying conditions extremely well and therefore they thrive at all elevations in the Galapagos...
R. Justin DeRose; John D. Shaw; Giorgio Vacchiano; James N. Long
2008-01-01
The Southern Variant of the Forest Vegetation Simulator (FVS-SN) is made up of individual submodels that predict tree growth, recruitment and mortality. Forest managers on Ft. Bragg, North Carolina, discovered biologically unrealistic longleaf pine (Pinus palustris) size-density predictions at large diameters when using FVS-SN to project red-cockaded...
Methodology for Designing Fault-Protection Software
NASA Technical Reports Server (NTRS)
Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin
2006-01-01
A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery; and has been successfully implemented in the Deep Impact spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notion of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, a Monitor generates a RawOpinion, which graduates into an Opinion, categorized as no-opinion, acceptable, or unacceptable. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment of a Symptom and its mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized step-by-step fashion, relegating more system-level responses to later tiers. Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, a MaxRetry gate, hardware availability, hazardous versus ordinary fault status, and many other priority gates. This methodology is systematic and logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a "top-down" fault-tree analysis and a "bottom-up" functional failure-modes-and-effects analysis. Via this process, the mitigation and recovery strategies per Fault Containment Region scope (in width versus depth) the FP architecture.
Automated Generation of Fault Management Artifacts from a Simple System Model
NASA Technical Reports Server (NTRS)
Kennedy, Andrew K.; Day, John C.
2013-01-01
Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and restructured it in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to portray system behavior efficiently and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
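The traversal idea above, walking a system model's elements and relationships to emit FMEA rows, can be sketched with a toy component graph standing in for the SysML model. The component names and the "feeds" relationship are illustrative assumptions, not the paper's actual schema:

```python
# Toy stand-in for a SysML system model: each component lists its
# failure modes and the components it feeds (illustrative names only).
system = {
    "battery":   {"modes": ["cell short"], "feeds": ["power_bus"]},
    "power_bus": {"modes": ["undervoltage"], "feeds": ["radio", "computer"]},
    "radio":     {"modes": ["tx failure"], "feeds": []},
    "computer":  {"modes": ["reset loop"], "feeds": []},
}

def downstream(comp, model, seen=None):
    """Collect every component reachable along 'feeds' edges."""
    seen = set() if seen is None else seen
    for nxt in model[comp]["feeds"]:
        if nxt not in seen:
            seen.add(nxt)
            downstream(nxt, model, seen)
    return seen

def fmea_rows(model):
    """One FMEA row per (component, failure mode): downstream effects."""
    rows = []
    for comp, info in model.items():
        for mode in info["modes"]:
            effects = sorted(downstream(comp, model)) or ["local only"]
            rows.append((comp, mode, ", ".join(effects)))
    return rows

for row in fmea_rows(system):
    print(row)
```

Each row approximates one spreadsheet line of the generated FMEA; a real implementation would also query control-layer and design-authority relationships from the model.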
Naive Bayes Bearing Fault Diagnosis Based on Enhanced Independence of Data
Zhang, Nannan; Wu, Lifeng; Yang, Jing; Guan, Yong
2018-01-01
The bearing is the key component of rotating machinery, and its performance directly determines the reliability and safety of the system. Data-based bearing fault diagnosis has become a research hotspot. Naive Bayes (NB), which is based on an independence presumption, is widely used in fault diagnosis. However, the bearing data are not completely independent, which reduces the performance of NB algorithms. In order to solve this problem, we propose an NB bearing fault diagnosis method based on enhanced independence of data. The method deals with the data vector from two aspects: the attribute feature and the sample dimension. After this processing, the limitation that the independence hypothesis places on NB classification is reduced. First, we extract the statistical characteristics of the original bearing signal effectively. Then, the Decision Tree algorithm is used to select the important features of the time domain signal, and features with low correlation are selected. Next, the Selective Support Vector Machine (SSVM) is used to prune the dimension data and remove redundant vectors. Finally, we use NB to diagnose the fault with the low correlation data. The experimental results show that the independence enhancement of data is effective for bearing fault diagnosis. PMID:29401730
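The final NB classification stage can be sketched with a minimal Gaussian Naive Bayes, assuming two extracted statistical features per sample; the toy data, class names, and feature choices are illustrative, and the Decision Tree and SSVM preprocessing steps described above are omitted:

```python
import math
from collections import defaultdict
from statistics import mean, pstdev

def fit(X, y):
    """Estimate per-class priors and per-feature (mean, std)."""
    params = {}
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    for c, rows in by_class.items():
        cols = list(zip(*rows))
        params[c] = (len(rows) / len(X),
                     [(mean(col), pstdev(col) or 1e-9) for col in cols])
    return params

def predict(params, x):
    """Pick the class with the highest Gaussian log-posterior."""
    def log_post(c):
        prior, stats = params[c]
        lp = math.log(prior)
        for v, (mu, sd) in zip(x, stats):
            lp += -math.log(sd * math.sqrt(2 * math.pi)) - (v - mu) ** 2 / (2 * sd ** 2)
        return lp
    return max(params, key=log_post)

# Toy feature vectors (e.g. RMS, kurtosis) -- illustrative only.
X = [[0.1, 1.0], [0.2, 1.1], [2.0, 5.0], [2.1, 5.2]]
y = ["normal", "normal", "outer_race_fault", "outer_race_fault"]
model = fit(X, y)
print(predict(model, [2.05, 5.1]))  # outer_race_fault
```

The independence assumption appears in the sum over per-feature log-likelihoods; the paper's preprocessing aims precisely at making that sum a better approximation.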
NASA Astrophysics Data System (ADS)
Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu
2016-01-01
This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS) and improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of a time series over a range of scales based on fuzzy entropy. Besides, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically fulfill the fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize the different categories and severities of rolling bearing faults.
Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T
2018-03-05
Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
NASA Astrophysics Data System (ADS)
Krechowicz, Maria
2017-10-01
Nowadays, one of the characteristic features of the construction industry is the increased complexity of a growing number of projects. Almost every construction project is unique, with its project-specific purpose, its own structural complexity, owner's expectations, ground conditions unique to a certain location, and its own dynamics. Failure costs and costs resulting from unforeseen problems in complex construction projects are very high. Project complexity drivers pose many vulnerabilities to the successful completion of a number of projects. This paper discusses the process of effective risk management in complex construction projects in which renewable energy sources were used, on the example of the realization phase of the ENERGIS teaching-laboratory building, from the point of view of DORBUD S.A., its general contractor. This paper suggests a new approach to risk management for complex construction projects in which renewable energy sources were applied. The risk management process was divided into six stages: gathering information, identification of the top critical project risks resulting from the project complexity, construction of a fault tree for each top critical risk, logical analysis of the fault tree, quantitative risk assessment applying fuzzy logic, and development of a risk response strategy. A new methodology for the qualitative and quantitative assessment of top critical risks in complex construction projects was developed. Risk assessment was carried out applying Fuzzy Fault Tree analysis on the example of one top critical risk. Application of fuzzy set theory to the proposed model made it possible to decrease uncertainty and to eliminate the problems, common in expert risk assessment, of obtaining crisp values for the probability of each basic event in order to give an exact risk score for each unwanted event.
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied on the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator's mistakes, physical damage, and design problems. The analytical methods used were minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been used in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
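The two numerical approaches named above, minimal cut sets and Monte Carlo simulation, can be sketched together: sample basic-event failures and check whether any minimal cut set has fully occurred. The event names and probabilities below are illustrative, not taken from the Tehran West Town model:

```python
import random

def simulate_top_event(cut_sets, probs, trials=100_000, seed=42):
    """Monte Carlo estimate of P(top event) from minimal cut sets."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        failed = {e for e, p in probs.items() if rng.random() < p}
        # The top event occurs if every event in some minimal cut set failed.
        if any(cs <= failed for cs in cut_sets):
            hits += 1
    return hits / trials

# Illustrative basic events and minimal cut sets (not the study's model).
probs = {"operator_error": 0.02, "aeration_fail": 0.01, "design_flaw": 0.005}
cut_sets = [{"operator_error"}, {"aeration_fail", "design_flaw"}]
print(simulate_top_event(cut_sets, probs))  # close to 0.02, dominated by operator error
```

The single-event cut set dominates here, mirroring the study's finding that human error drives the top event; the estimate converges to the exact value 1 - (1 - 0.02)(1 - 0.01 x 0.005) as trials grow.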
Jetter, J J; Forte, R; Rubenstein, R
2001-02-01
A fault tree analysis was used to estimate the number of refrigerant exposures of automotive service technicians and vehicle occupants in the United States. Exposures of service technicians can occur when service equipment or automotive air-conditioning systems leak during servicing. The number of refrigerant exposures of service technicians was estimated to be 135,000 per year. Exposures of vehicle occupants can occur when refrigerant enters passenger compartments due to sudden leaks in air-conditioning systems, leaks following servicing, or leaks caused by collisions. The total number of exposures of vehicle occupants was estimated to be 3,600 per year. The largest number of exposures of vehicle occupants was estimated for leaks caused by collisions, and the second largest number of exposures was estimated for leaks following servicing. Estimates used in the fault tree analysis were based on a survey of automotive air-conditioning service shops, the best available data from the literature, and the engineering judgement of the authors and expert reviewers from the Society of Automotive Engineers Interior Climate Control Standards Committee. Exposure concentrations and durations were estimated and compared with toxicity data for refrigerants currently used in automotive air conditioners. Uncertainty was high for the estimated numbers of exposures, exposure concentrations, and exposure durations. Uncertainty could be reduced in the future by conducting more extensive surveys, measurements of refrigerant concentrations, and exposure monitoring. Nevertheless, the analysis indicated that the risk of exposure of service technicians and vehicle occupants is significant, and it is recommended that no refrigerant that is substantially more toxic than currently available substitutes be accepted for use in vehicle air-conditioning systems, absent a means of mitigating exposure.
Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof
2009-04-01
Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
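For independent events, the event-interaction modelling described above reduces to standard AND/OR gate algebra on the fault tree. A minimal sketch, with event names and probabilities that are illustrative rather than taken from the Swedish case study:

```python
def or_gate(probs):
    """P(at least one input event occurs) = 1 - prod(1 - p_i)."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(probs):
    """P(all input events occur) = prod(p_i)."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Illustrative quantity-failure subtree: source outage OR
# (pump A AND pump B both down).
p_source = 0.01
p_pump_a = 0.05
p_pump_b = 0.05
p_quantity = or_gate([p_source, and_gate([p_pump_a, p_pump_b])])
print(round(p_quantity, 6))  # 0.012475
```

The study layers Monte Carlo sampling over such gates so that uncertain basic-event probabilities (hard data plus expert judgement) propagate to distributions of failure rate, downtime, and CML rather than point values.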
Naghibi, Seyed Amir; Pourghasemi, Hamid Reza; Dixon, Barnali
2016-01-01
Groundwater is considered one of the most valuable fresh water resources. The main objective of this study was to produce groundwater spring potential maps in the Koohrang Watershed, Chaharmahal-e-Bakhtiari Province, Iran, using three machine learning models: boosted regression tree (BRT), classification and regression tree (CART), and random forest (RF). Thirteen hydrological-geological-physiographical (HGP) factors that influence locations of springs were considered in this research. These factors include slope degree, slope aspect, altitude, topographic wetness index (TWI), slope length (LS), plan curvature, profile curvature, distance to rivers, distance to faults, lithology, land use, drainage density, and fault density. Subsequently, groundwater spring potential was modeled and mapped using CART, RF, and BRT algorithms. The predicted results from the three models were validated using the receiver operating characteristics curve (ROC). From 864 springs identified, 605 (≈70 %) locations were used for the spring potential mapping, while the remaining 259 (≈30 %) springs were used for the model validation. The area under the curve (AUC) for the BRT model was calculated as 0.8103 and for CART and RF the AUC were 0.7870 and 0.7119, respectively. Therefore, it was concluded that the BRT model produced the best prediction results while predicting locations of springs followed by CART and RF models, respectively. Geospatially integrated BRT, CART, and RF methods proved to be useful in generating the spring potential map (SPM) with reasonable accuracy.
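The AUC validation used above can be computed directly as the probability that a randomly chosen spring location receives a higher predicted potential than a randomly chosen non-spring location. The scores below are illustrative, not the study's model outputs:

```python
def auc(pos_scores, neg_scores):
    """Pairwise (Mann-Whitney) AUC; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

springs     = [0.91, 0.84, 0.77, 0.65, 0.58]   # model scores at spring cells
non_springs = [0.70, 0.55, 0.42, 0.30, 0.22]   # scores at non-spring cells
print(auc(springs, non_springs))  # 0.92
```

On this toy data the model separates the classes well but not perfectly, analogous to the BRT model's reported AUC of 0.8103 on the 259 validation springs.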
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ege, J R; Danilchik, W; Feazel, C T
1980-05-01
Mining of the Ul2n.02 drift for the Midi Mist event started on December 31, 1965, in Rainier Mesa, Nevada Test Site, and was completed on December 30, 1966. The drift was mined along a bearing of S. 65° W. at an altitude of 1,850.2 m (6,070.2 ft) to a length of 643 m (2,109 ft). The drift lies in tunnel bed 4 and penetrates stratigraphically up the section through sub-units 4AB, 4CD, 4E, 4F, 4G, 4H, and 4J, all of Tertiary age. Two faults mapped at the surface of the mesa were identified as having cut the complex at drift level. No engineering construction or support problems greater than minor rock slabbing, ravelly ground, or water inflow along fractures were encountered. Visual inspection showed that shot-induced effects in the rock medium at drift level extended for 237.7 m (780 ft) from the working point in the form of fractures and small shear displacements along bedding planes.
Risk management of key issues of FPSO
NASA Astrophysics Data System (ADS)
Sun, Liping; Sun, Hai
2012-12-01
Risk analysis of key systems has become a growing topic of late because of the development of offshore structures. Equipment failures of the offloading system and fire accidents were analyzed based on the features of floating production, storage and offloading (FPSO) units. Fault tree analysis (FTA) and failure modes and effects analysis (FMEA) methods were examined based on information already researched in modules of Relex Reliability Studio (RRS). Given the shortage of failure cases and statistical data, equipment failures were also analyzed qualitatively by establishing a fault tree and a Boolean structure function, and risk control measures were examined. Failure modes of fire accidents were classified according to the different areas of fire occurrence during the FMEA process, using risk priority number (RPN) methods to evaluate their severity rank. The qualitative analysis of the FTA gave basic insight into the failure modes of FPSO offloading, and the fire FMEA gave priorities and suggested processes. The research has practical importance for the safety analysis of FPSOs.
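The RPN step of the FMEA described above multiplies severity, occurrence, and detection scores (each conventionally 1-10) and ranks failure modes by the product. A sketch with illustrative failure modes, not those of the actual FPSO study:

```python
# (name, severity, occurrence, detection) -- illustrative scores only.
failure_modes = [
    ("hose rupture during offloading",  9, 3, 4),
    ("fire in pump room",              10, 2, 3),
    ("valve seal leak",                 5, 6, 5),
]

# RPN = severity x occurrence x detection; rank descending.
ranked = sorted(
    ((name, s * o * d) for name, s, o, d in failure_modes),
    key=lambda x: x[1],
    reverse=True,
)
for name, rpn in ranked:
    print(f"{rpn:4d}  {name}")
```

Note how a moderate-severity but frequent, poorly detected mode (the valve leak, RPN 150) outranks the most severe mode (the fire, RPN 60), which is exactly why RPN ranking is used alongside, not instead of, severity judgement.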
Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand
2018-05-09
This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), all probabilities of the basic events (BEs) should be available when the FTA is drawn. Where failure data are lacking, expert judgment can be employed as an alternative. The fuzzy analytical hierarchy process, as a standard technique, is used to give a specific weight to each expert, and fuzzy set theory is engaged for aggregating expert opinion. In this regard, the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost and benefit), the importance measurement technique and a modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
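The aggregation step described above can be sketched with triangular fuzzy numbers: weight each expert's (low, mode, high) estimate of a basic-event probability, combine, and defuzzify. The expert weights and estimates below are illustrative, not the case study's:

```python
def aggregate(opinions, weights):
    """Weighted average of triangular fuzzy numbers, component-wise."""
    total = sum(weights)
    return tuple(
        sum(w * o[k] for o, w in zip(opinions, weights)) / total
        for k in range(3)
    )

def defuzzify(tri):
    """Centroid of a triangular fuzzy number (low, mode, high)."""
    low, mode, high = tri
    return (low + mode + high) / 3.0

# Three experts' triangular estimates for one basic event (illustrative).
experts = [(0.01, 0.02, 0.04), (0.02, 0.03, 0.05), (0.01, 0.03, 0.06)]
weights = [0.5, 0.3, 0.2]   # e.g. weights from a fuzzy AHP ranking of experts
agg = aggregate(experts, weights)
print(round(defuzzify(agg), 4))
```

The defuzzified value would then enter the Boolean-algebra computation of the TE probability; the paper's actual aggregation and defuzzification operators may differ from this centroid sketch.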
NASA Astrophysics Data System (ADS)
Guan, Yifeng; Zhao, Jie; Shi, Tengfei; Zhu, Peipei
2016-09-01
In recent years, China's increased interest in environmental protection has led to a promotion of energy-efficient dual fuel (diesel/natural gas) ships on Chinese inland rivers. Natural gas as a ship fuel may pose dangers of fire and explosion if a gas leak occurs, and if explosions or fires occur in the engine rooms of a ship, heavy damage and losses will be incurred. In this paper, a fault tree model is presented that considers both fires and explosions in a dual fuel ship, with accidents in the dual fuel engine rooms as the top events. All the basic events along with the minimum cut sets are obtained through the analysis. The primary factors that affect accidents involving fires and explosions are determined by calculating the degree of structure importance of the basic events. According to these results, corresponding measures are proposed to ensure and improve the safety and reliability of Chinese inland dual fuel ships.
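The degree of structure importance used above can be illustrated by enumerating the component state vectors of a toy fault tree; the events and gate logic below are invented for illustration and do not reproduce the paper's engine-room tree.

```python
# Sketch: structure importance = fraction of states of the *other*
# components in which flipping component i changes the top event.
# Toy tree (assumed): TE = gas leak AND (ignition OR ventilation failure)
from itertools import product

EVENTS = ["gas leak", "ignition source", "ventilation failure"]

def top_event(state):
    leak, ignition, vent_fail = state
    return leak and (ignition or vent_fail)

def structure_importance(i):
    critical = 0
    for rest in product([0, 1], repeat=len(EVENTS) - 1):
        hi = rest[:i] + (1,) + rest[i:]
        lo = rest[:i] + (0,) + rest[i:]
        critical += int(top_event(hi) != top_event(lo))
    return critical / 2 ** (len(EVENTS) - 1)

for i, name in enumerate(EVENTS):
    print(name, structure_importance(i))
```

In this toy tree the leak event is critical in 3 of 4 states of the other components, so it dominates the ranking, which is the kind of result used to prioritize safety measures.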
Kingman, D M; Field, W E
2005-11-01
Findings reported by researchers at Illinois State University and Purdue University indicate that since 1980 an average of eight individuals per year have become engulfed and died in farm grain bins in the U.S. and Canada; these deaths are significant because they are believed to be preventable. During a recent effort to develop intervention strategies and recommendations for an ASAE farm grain bin safety standard, fault tree analysis (FTA) was used to identify factors contributing to engulfments in grain stored in on-farm grain bins. The FTA diagrams provided a spatial perspective on the circumstances that occurred prior to engulfment incidents, a perspective never before presented in other hazard analyses, and also demonstrated the relationships and interrelationships among the contributing factors. FTA is a useful tool that should be applied more often in agricultural incident investigations to support a more complete understanding of the problem studied.
Fault tree analysis for data-loss in long-term monitoring networks.
Dirksen, J; ten Veldhuis, J A E; Schilperoort, R P S
2009-01-01
Prevention of data-loss is an important aspect of both the design and the operational phase of monitoring networks, since data-loss can seriously limit the intended information yield. The literature has paid limited attention to the origins of unreliable or doubtful data from monitoring networks, yet a better understanding of the causes of data-loss points to effective solutions for increasing data yield. This paper introduces FTA as a diagnostic tool to systematically deduce the causes of data-loss in long-term monitoring networks in urban drainage systems. To illustrate the effectiveness of FTA, a fault tree is developed for a monitoring network and FTA is applied to analyze the data yield of a UV/VIS submersible spectrophotometer. Although some causes of data-loss cannot be recovered because the historical database of metadata was updated infrequently, the example shows that FTA is a powerful tool for analyzing the causes of data-loss and provides useful information for effective data-loss prevention.
Accurate reliability analysis method for quantum-dot cellular automata circuits
NASA Astrophysics Data System (ADS)
Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo
2015-10-01
The probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of quantum-dot cellular automata (QCA), a novel field-coupled nanoelectronic device, and it is difficult to obtain accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices for different input signals. Binary decision diagrams (BDDs) are then used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly, and the crucial components of a circuit can be located precisely from the importance values (IVs) of its components. This method thus contributes to the construction of reliable QCA circuits.
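For context, the PTM model that the abstract contrasts with fault trees can be sketched in a few lines: each gate is a stochastic matrix mapping an input distribution to an output distribution, and serial composition is matrix multiplication. The per-gate error probability below is an assumed value, and the two-inverter chain is a toy circuit, not one of the paper's QCA gates.

```python
# Sketch of the probabilistic transfer matrix (PTM) idea.
# eps is an assumed per-gate error probability (illustrative).
eps = 0.05

# PTM of an inverter: rows = input bit (0, 1), cols = output bit (0, 1).
# A correct inverter flips the bit; with probability eps it fails to.
inv = [[eps, 1 - eps],
       [1 - eps, eps]]

def matmul(a, b):
    """2x2 matrix product: PTM of two gates in series."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

chain = matmul(inv, inv)  # two inverters in series: ideal output = input

# Probability the chain reproduces the ideal (identity) behavior,
# averaged over a uniformly random input bit.
reliability = 0.5 * (chain[0][0] + chain[1][1])
print(round(reliability, 4))
```

Note that this composed matrix is the same regardless of the input distribution's effect on gate error, which is exactly the limitation the fault-tree models in the abstract are meant to address.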
Physical and chemical characterizations of biochars derived from different agricultural residues
NASA Astrophysics Data System (ADS)
Jindo, K.; Mizumoto, H.; Sawada, Y.; Sanchez-Monedero, M. A.; Sonoki, T.
2014-08-01
Biochar has received considerable attention as a strategy to reduce carbon emissions. Beyond carbon fixation, its unique physical and chemical characteristics offer other merits for agricultural application, such as adsorption of contaminant compounds in soil, trapping of ammonia and methane emissions from compost, and enhancement of fertilizer quality. In our study, different local waste feedstocks in Aomori, Japan (rice husk, rice straw, and wood chips of apple tree (Malus pumila) and oak tree (Quercus serrata)) were used to produce biochar at different temperatures (400-800 °C). Pyrolysis at lower temperatures gave a higher biochar yield than pyrolysis at higher temperatures; by contrast, surface area and adsorption capacity increased with increasing temperature, as did the proportion of carbon in the biochars. Fourier-transform infrared spectroscopy (FT-IR) and 13C-NMR were used to characterize the carbon chemical composition of the biochars; in both techniques, the shoulders representing aromatic groups, considered a stable carbon structure, appeared as the temperature approached 600 °C. In the rice-derived materials, the peak assigned to SiO2 was observed by FT-IR in all biochars (400-800 °C). We suppose that pyrolysis at 600 °C creates the most recalcitrant character for carbon sequestration, whereas pyrolysis at 400 °C produces superior properties as a fertilizer by retaining volatile and easily labile compounds that promote soil microbial activity.
A Controllable Earthquake Rupture Experiment on the Homestake Fault
NASA Astrophysics Data System (ADS)
Germanovich, L. N.; Murdoch, L. C.; Garagash, D.; Reches, Z.; Martel, S. J.; Gwaba, D.; Elsworth, D.; Lowell, R. P.; Onstott, T. C.
2010-12-01
Fault slip is typically simulated in the laboratory at the cm-to-dm scale, and laboratory results are then up-scaled by orders of magnitude to understand faulting and earthquake processes. We suggest an experimental approach to reactivate faults in situ at scales of ~10-100 m, using thermal techniques and fluid injection to modify the in-situ stresses and the fault strength to the point where the rock fails. Mines where the modified in-situ stresses are sufficient to drive faulting present an opportunity to conduct such experiments. During our recent field work in the former Homestake gold mine in the northern Black Hills, South Dakota, we found a large fault present on multiple mine levels. The fault is subparallel to the local foliation in the Poorman Formation, a Proterozoic metamorphic rock deformed into regional-scale folds with axes plunging ~40° to the SSE. The fault extends at least 1.5 km along strike and dip, with a center ~1.5 km deep. It strikes ~320-340° N, dips ~45-70° NE, and is recognized by a distinct, ~0.3-0.5 m thick gouge that contains crushed host rock and black material that appears to be graphite. Although we could not find clear evidence for fault displacement, secondary features suggest that it is a normal fault. The size and distinct structure of this fault make it a promising target for in-situ experimentation on fault strength, hydrological properties, and slip nucleation processes. Most earthquakes are thought to be the result of unstable slip on existing faults. Activation of the Homestake fault in response to controlled fluid injection and thermally changing background stresses is likely to be localized on a crack-like patch. Slow patch propagation, moderated by the injection rate and the rate of change of the background stresses, may become unstable, leading to the nucleation of a small earthquake (dynamic) rupture.
This controlled instability is intimately related to the dependence of the fault strength on the slip process and has been analyzed for the Homestake fault conditions. Scale analyses indicate that this transition occurs for a nucleation patch size of ~1 m. This represents a fundamental limitation for laboratory experiments, where the induced dynamic patch could be tractable, and necessitates larger scale field tests at ~10-100 m. The ongoing dewatering is expected to affect displacements in the fault vicinity; this poroelastic effect can be used to better characterize the fault. Nucleation, propagation, and arrest of dynamic fault slip are governed by the fluid overpressure source, diffusion, and the magnitude of the background loading in relation to the peak and residual strength in the fault zone at the ambient pore pressure level. More information on in-situ stresses than is currently available is required to evaluate the fault state. Yet initial modeling suggests that a suitable place for such an experiment is where the Homestake fault intersects the 4850-ft mine level, or at greater depths.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryngelson, R.H.
Describes installation of 3 satellite subsea trees in 500 ft of water from a semisubmersible drilling rig. These wet, diver-assist trees are part of Phillips Petroleum's early development and production program (EDPP) for the Espoir field offshore Ivory Coast, with plans calling for 5 satellite wells with downhole completion equipment and subsea production trees. Diagram shows how a converted jackup, Dan Duke, supports equipment to handle production from subsea wells. Table gives time breakdown of subsea tree installation. Before mobilizing the subsea trees, control system, and tubulars to the rig, a study of deck layout, payloads, and traffic patterns was performed. Concludes that, based on experience in this project and the cost differences between purchase and installation costs, final success is 90% dependent on informed and trained field personnel after engineering, design, and manufacturing; attention to installation procedures and training of field and operational personnel are as critical or more critical than design changes to equipment; and selection of a supplier for high technology equipment, based on a low bid alone, may not translate into lower installation costs.
Acoustic analysis of warp potential of green ponderosa pine lumber
Xiping Wang; William T. Simpson
2005-01-01
This study evaluated the potential of acoustic analysis as presorting criteria to identify warp-prone boards before kiln drying. Dimension lumber, 38 by 89 mm (nominal 2 by 4 in.) and 2.44 m (8 ft) long, sawn from open-grown small-diameter ponderosa pine trees, was acoustically tested lengthwise at green condition. Three acoustic properties (acoustic speed, rate of...
Catalytic conversion of wood syngas to synthetic aviation turbine fuels over a multifunctional catalyst
Qiangu Yan; Fei Yu; Jian Liu; Jason Street; Jinsen Gao; Zhiyong Cai; Jilei Zhang
2013-01-01
A continuous process involving gasification, syngas cleaning, and Fischer-Tropsch (FT) synthesis was developed to efficiently produce synthetic aviation turbine fuels (SATFs). Oak-tree wood chips were first gasified to syngas over a commercial pilot plant downdraft gasifier. The raw wood syngas contains about 47% N2, 21% CO, 18% H2...
Jefferey C. Goelz
2001-01-01
Water oak (Quercus nigra L. [Fagaceae]), Nuttall oak (Q. nuttallii Palmer), and green ash (Fraxinus pennsylvanica Marsh. [Oleaceae]) were planted in mixtures at 2 spacings, 1.8 and 2.7 m (6 and 9 ft) triangular spacing, on 2 contrasting soil types: Sharkey and Dundee. Survival was high for green ash and...
Spacing and shrub competition influence 20-year development of planted ponderosa pine
William W. Oliver
1990-01-01
Growth and stand development of ponderosa pine (Pinus ponderosa) were monitored for 20 years after planting at five different square spacings (6, 9, 12, 15, and 18 ft) in the presence or absence of competing shrubs on the westside Sierra Nevada. Mean tree size was positively correlated and stand values negatively correlated with spacing in the...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2012 CFR
2012-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2014 CFR
2014-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix D to Part 236 - Independent Review of Verification and Validation
Code of Federal Regulations, 2010 CFR
2010-10-01
... standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure Mode and Effects... for each product vulnerability cited by the reviewer; (4) Identification of any documentation or... not properly followed; (6) Identification of the software verification and validation procedures, as...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
Toward a Model-Based Approach for Flight System Fault Protection
NASA Technical Reports Server (NTRS)
Day, John; Meakin, Peter; Murray, Alex
2012-01-01
Use SysML/UML to describe the physical structure of the system; this part of the model would be shared with other teams (FS Systems Engineering, Planning & Execution, V&V, Operations, etc.) in an integrated model-based engineering environment. Use the UML Profile mechanism, defining Stereotypes to precisely express the concepts of the FP domain; this extends the UML/SysML languages to contain our FP concepts. Use UML/SysML, along with our profile, to capture FP concepts and relationships in the model. Generate typical FP engineering products (the FMECA, Fault Tree, MRD, V&V Matrices).
NASA Astrophysics Data System (ADS)
Akintunde, Olusoga M.; Knapp, Camelia C.; Knapp, James H.
2014-09-01
A simple, new porosity/permeability-depth profile was developed from available laboratory measurements on Triassic sedimentary red beds (sandstone) from parts of the South Georgia Rift (SGR) basin in order to investigate the feasibility of long-term CO2 storage. The study locations were the Sumter, Berkeley, Dunbarton, Clubhouse Crossroad-3 (CC-3) and Norris Lightsey wells. As expected, both porosity and permeability show changes with depth that are much greater at the regional scale than at the local scale. The significant changes in porosity and permeability with depth suggest a highly compacted, deformed basin and, potentially, a history of uplift and erosion. Permeability is generally low both at shallow depths (less than 1826 ft/556.56 m) and at greater depths. Both porosity and permeability follow the normal trend, decreasing linearly with depth for most of the study locations, with the exception of the Norris Lightsey well. A petrophysical study of a suite of well logs penetrating the Norris Lightsey red beds at depths sampled by the core-derived laboratory measurements shows an abnormal shift (by 50%) in the acoustic travel time and/or in the sonic-derived P-wave velocity that indicates possible faulting or fracturing at depth. The departure of the Norris Lightsey porosities and permeabilities from the normal compaction trend may be a consequence of a fault/fracture-controlled abnormal pressure condition at depth. The linear and non-linear behaviors of the porosity/permeability distribution throughout the basin imply that the composition of the SGR red beds, and by extension of analogous Triassic-Jurassic formations within the Eastern North American Margin, has been altered by the compaction, uplift, erosion and possible faulting that have shaped the evolution of these Triassic formations following the major phase of rifting.
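The "normal trend" of porosity decreasing linearly with depth can be recovered with an ordinary least-squares fit; the depth/porosity pairs below are invented for illustration, and a well that departs from the trend (as described for the Norris Lightsey well) would show large residuals against such a fit.

```python
# Sketch: least-squares fit of a linear porosity-depth trend.
# Data points are illustrative, not the SGR core measurements.
depths = [500.0, 1000.0, 1500.0, 2000.0, 2500.0]   # ft
porosity = [20.0, 17.5, 15.0, 12.5, 10.0]          # percent

n = len(depths)
mean_d = sum(depths) / n
mean_p = sum(porosity) / n

# slope = covariance(depth, porosity) / variance(depth)
slope = sum((d - mean_d) * (p - mean_p) for d, p in zip(depths, porosity)) \
        / sum((d - mean_d) ** 2 for d in depths)
intercept = mean_p - slope * mean_d
print(slope, intercept)
```

Residuals of each well's measurements against such a regional trend line are one simple way to flag the kind of departure from normal compaction that the abstract attributes to faulting or abnormal pressure.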
Conversion of Questionnaire Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness, so the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E.
Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic-event risk-of-failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data are absent.
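A minimal sketch of the adjectival-to-numeric conversion described above, assuming logarithmic spacing in the spirit of the Swain and Guttmann scale; the specific mapping values are illustrative assumptions, not those used by the questionnaire tool.

```python
# Sketch: converting adjectival MC&A performance ratings to basic-event
# failure probabilities on a logarithmically spaced scale.
# The probability values below are assumed for illustration only.
RATING_TO_FAILURE_PROB = {
    "perfect":           1e-4,   # near zero risk of failure
    "well":              1e-3,
    "adequate":          1e-2,
    "needs improvement": 1e-1,
    "not performed":     1.0,    # task in a state of failure
}

def basic_event_probability(rating):
    """Map a questionnaire rating to a fault-tree basic-event probability."""
    return RATING_TO_FAILURE_PROB[rating.strip().lower()]

print(basic_event_probability("adequate"))
```

The resulting probabilities would feed directly into the PRA fault tree as basic-event values, with each step of the scale representing roughly an order of magnitude more risk.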
Preliminary Isostatic Gravity Map of Joshua Tree National Park and Vicinity, Southern California
Langenheim, V.E.; Biehler, Shawn; McPhee, D.K.; McCabe, C.A.; Watt, J.T.; Anderson, M.L.; Chuchel, B.A.; Stoffer, P.
2007-01-01
This isostatic residual gravity map is part of an effort to map the three-dimensional distribution of rocks in Joshua Tree National Park, southern California. This map will serve as a basis for modeling the shape of basins beneath the Park and in adjacent valleys and also for determining the location and geometry of faults within the area. Local spatial variations in the Earth's gravity field, after accounting for variations caused by elevation, terrain, and deep crustal structure, reflect the distribution of densities in the mid- to upper crust. Densities often can be related to rock type, and abrupt spatial changes in density commonly mark lithologic or structural boundaries. High-density basement rocks exposed within the Eastern Transverse Ranges include crystalline rocks that range in age from Proterozoic to Mesozoic and these rocks are generally present in the mountainous areas of the quadrangle. Alluvial sediments, usually located in the valleys, and Tertiary sedimentary rocks are characterized by low densities. However, with increasing depth of burial and age, the densities of these rocks may become indistinguishable from those of basement rocks. Tertiary volcanic rocks are characterized by a wide range of densities, but, on average, are less dense than the pre-Cenozoic basement rocks. Basalt within the Park is as dense as crystalline basement, but is generally thin (less than 100 m thick; e.g., Powell, 2003). Isostatic residual gravity values within the map area range from about 44 mGal over Coachella Valley to about 8 mGal between the Mecca Hills and the Orocopia Mountains. Steep linear gravity gradients are coincident with the traces of several Quaternary strike-slip faults, most notably along the San Andreas Fault bounding the east side of Coachella Valley and east-west-striking, left-lateral faults, such as the Pinto Mountain, Blue Cut, and Chiriaco Faults (Fig. 1). 
Gravity gradients also define concealed basin-bounding faults, such as those beneath the Chuckwalla Valley (e.g. Rotstein and others, 1976). These gradients result from juxtaposing dense basement rocks against thick Cenozoic sedimentary rocks.
Leonard, B.F.; Erdman, James A.
1983-01-01
New exploration targets for gold, molybdenum, tungsten, and tin are indicated by the systematic distribution of metals in soil and plants of the Red Mountain stockwork and its environs. The stockwork is built of countless quartz veins and veinlets, extensively argillized and containing sparsely disseminated gold, pyrite, arsenopyrite, pyrrhotite, fluorite, and other minerals. The stockwork developed along the outer ring-fracture zone of the Eocene Quartz Creek cauldron where subsidence strain, unrelieved by radial faulting, produced internal deformation and intense small-scale fracturing in rocks of the Idaho batholith. The stockwork, cropping out as a fault-bounded polygon 2,700 ft long and 2,000 ft wide, contains (underlies?) a large, virtually barren quartz body 1,350 ft long and 350 ft wide. Deformed, shattered granite flanks and presumably underlies the stockwork, which may plunge southward beneath inclusion-bearing granodiorite and alaskite of the Idaho batholith suite. Narrow dikes of rhyolite and latite radiate from two centers within the quartz body. Many dikes and small bosses of rhyolite and latite intrude stockwork and granite. The radial habit of some dikes and the high frequency of small intrusives concentrated within the sericite and kaolinite alteration zones suggest that a Tertiary porphyry is concealed beneath the mountain. Clay-mineral alteration zones mapped in residual soil extend far outward from the stockwork. Valleys filled with Quaternary deposits bound the stockwork-granite complex on the east, north, and west, effectively concealing elements of a crudely elliptical substructure that differs from the fault-dissected, mappable part of the complex. Metal anomalies in soil of the 3,600-ft x 8,400-ft gridded area are mostly weak and small. In contrast, metal anomalies in plants are strong, large, and consistent with the inference of a concealed elliptical substructure that may be hoodlike and may contain more than one mineralized zone.
Gold in ashed sapwood of Douglas-fir (Pseudotsuga menziesii) indicates a major gold anomaly, peak value 14.2 ppm Au, on inclusion-bearing granodiorite in an unprospected area south of the exposed stockwork. Locally, the gold anomaly is accompanied by a sizable tin anomaly, peak value 100 ppm Sn, in Douglas-fir. Molybdenum in ashed leaves of beargrass (Xerophyllum tenax), peak value >500 ppm Mo, indicates extensive anomalies within a 2-mile-long semi-elliptical belt of Mo values exceeding the 20 ppm median. Most of the belt is on valley fill, but the southern segment of it, near the major gold anomaly, is mainly on inclusion-bearing granodiorite. Here, where beargrass is sparse, the peak value of Mo in beargrass drops to 70 ppm. Where beargrass is absent, ashed leaves of sedge (Carex geyeri) have anomalous values of 100 ppm Mo. An additional target for molybdenum is indicated by molybdenite from the inaccessible part of the main adit. The molybdenite, seen only in dump specimens, is in weakly silicified granite that may underlie part of the stockwork. The source body of the molybdenite is blind, and no clue to it is given by samples of soil or plants. Just north of the major gold anomaly, 2 ppm values of W in beargrass coincide approximately with a tungsten anomaly in soil (peak value 22 ppm W) in an area of float containing disseminated scheelite in weakly silicified inclusion-bearing granodiorite. Other areas of 2 ppm W values in beargrass are nearly coextensive with the Mo-in-beargrass anomalies on Quaternary deposits of Quartz Creek valley. Some of the target areas for gold, molybdenum, tungsten, and tin have not been circumscribed by our sampling.
Yehle, Lynn A.
1977-01-01
A program to study the engineering geology of most larger Alaska coastal communities and to evaluate their earthquake and other geologic hazards was started following the 1964 Alaska earthquake; this report about the Metlakatla area, Annette Island, is a product of that program. Field-study methods were of a reconnaissance nature, and thus the interpretations in the report are tentative. The landscape of the Metlakatla Peninsula, on which the city of Metlakatla is located, is characterized by a muskeg-covered terrane of very low relief. In contrast, most of the rest of Annette Island is composed of mountainous terrane with steep valleys and numerous lakes. During the Pleistocene Epoch the Metlakatla area was presumably covered by ice several times; glaciers smoothed the present Metlakatla Peninsula and deeply eroded valleys on the rest of Annette Island. The last major deglaciation was completed probably before 10,000 years ago. Rebound of the earth's crust, believed to be related to glacial melting, has caused land emergence at Metlakatla of at least 50 ft (15 m) and probably more than 200 ft (61 m) relative to present sea level. Bedrock in the Metlakatla area is composed chiefly of hard metamorphic rocks: greenschist and greenstone with minor hornfels and schist. Strike and dip of beds are generally variable and minor offsets are common. Bedrock is of late Paleozoic to early Mesozoic age. Six types of surficial geologic materials of Quaternary age were recognized: firm diamicton, emerged shore deposits, modern shore and delta deposits, alluvial deposits, very soft muskeg and other organic deposits, and firm to soft artificial fill. A combination map unit is composed of bedrock or diamicton. Geologic structure in southeastern Alaska is complex because, since at least early Paleozoic time, there have been several cycles of tectonic deformation that affected different parts of the region.
Southeastern Alaska is transected by numerous faults and possible faults that attest to major movements of the earth's crust. The latest of the major tectonic events in the Metlakatla region occurred in middle Tertiary time; some minor fault activity probably continues today at depth. Along the outer coast of southeastern Alaska and British Columbia, major faulting activity occurs in the form of active, strike-slip movement along the Queen Charlotte fault about 100 mi (160 km) west-southwest of Metlakatla. Some branching subsidiary faults also may be active, at least one of which may be the Sandspit fault. Many major and smaller earthquakes occur along the outer coast. These shocks are related to movements along the Queen Charlotte fault. A few small earthquakes occur in the region between the outer coast and the Coast Mountains, which includes Metlakatla. Only a few earthquakes have been reported as felt at Metlakatla; these shocks and others felt in the region are tabulated. Historically, the closest major earthquake was the magnitude 8.1 Queen Charlotte Islands earthquake of August 22, 1949, which occurred along the Queen Charlotte fault 125 mi (200 km) southwest of Metlakatla. No damage was reported at Metlakatla. The probability of destructive earthquakes affecting Metlakatla is unknown. A consideration of the tectonics and earthquake history of the region, however, suggests that sometime in the future an earthquake with a magnitude of about 8 will occur along that segment of the Queen Charlotte fault nearest to Metlakatla. Smaller earthquakes with magnitudes of 6 or more might occur elsewhere in the Metlakatla region or south-southeastward near Dixon Entrance or Hecate Strait. Several geologic effects that have characterized large earthquakes elsewhere may be expected to accompany some of the possible major earthquakes that might affect the Metlakatla area in the future.
Evaluation of effects indicates that fault displacement and tectonic uplift or subsidence are probably unlikely, and ground shaking in general probably would be strongest
Wehmeyer, Loren L.; Winters, Karl E.; Ockerman, Darwin J.
2013-01-01
During the August 19–25, 2011, base-flow period, three reaches had gains greater than the uncertainty in the computed streamflow, including reach 3 on the Comal River (168 ft3/s gain), which was one of the reaches where gains in streamflow also were measured in March 2010 and April 2011. Streamflow gains in August 2011 were primarily from (1) inflows from Comal Springs, (2) inflows from the Yegua Jackson aquifer, and (3) groundwater inflows from the Gulf Coast aquifer, which are enhanced by seepage losses from Coleto Creek Reservoir. During this base-flow period, five reaches had losses greater in magnitude than the uncertainty in the computed streamflow. The reach including the confluence of the Guadalupe and Comal Rivers lost 82.8 ft3/s. Much of that loss likely seeped into the local groundwater system. The reach of the Guadalupe River south of New Braunfels, Tex., to Seguin, Tex., lost 53.5 ft3/s. Part of that loss may have been from seepage through streambed alluvium. Reaches 9 and 10 of the Blanco River near Kyle lost 2.20 and 6.60 ft3/s, respectively, likely as infiltration through numerous faults intersecting the stream channel northwest of Kyle. Plum Creek between Lockhart, Tex., and Luling, Tex., lost 2.11 ft3/s, likely as recharge to the Carrizo-Wilcox aquifer. A base-flow period during September 22–28, 2012, was studied for the reach of the Guadalupe River between Seguin and Gonzalez, including flows from San Marcos River and Plum Creek. During this period, for the Guadalupe River reach between Seguin and Oak Forest, no computed gains or losses were greater in magnitude than the uncertainty in the computed streamflow.
“Can LUSI be stopped? - A case study and lessons learned from the relief wells”
NASA Astrophysics Data System (ADS)
Sutrisna, E.
2009-12-01
Since May 2006, in East Java, Indonesia, the LUSI mud volcano has been erupting huge volumes of a mixture of predominantly mud and water, with little sign of slowing down. It has disrupted social and economic life in this highly populated region. Most geologists believe LUSI is a naturally occurring mud volcano (MV), like other MVs on the island of Java. Of particular interest are the MVs along the Watukosek fault, such as the Kalang Anyar, Pulungan, Gunung Anyar, and Socah MVs. All of these MVs lie in the vicinity of the SSW/NNE-trending Watukosek fault that passes through LUSI. The Porong collapse structure, an ancient MV approximately 7 km away, is the closest to LUSI; seismic sections demonstrate its complex, multi-branching plumbing system. Assuming that the mudflow passed through the wellbore due to an underground blowout, relief wells (RW) were planned to kill the mudflow and were carried out in three stages: 1. Re-entering the original Banjarpanji-1 (BJP-1) well to obtain accurate survey data so the relief wells could be steered in to intersect the original well. 2. Drilling a monitoring well (M-1) to ascertain whether the soil had sufficient strength to support relief wells. 3. Drilling RW-1 and RW-2. Both RW-1 and RW-2 suffered surface and subsurface problems, never achieved their objectives, and had to be aborted. A number of valuable lessons were learned from the relief well initiative: 1. No gas or liquid flowed from the wellhead area when it was excavated one month after the eruption started. The wellhead remained intact and totally dead, suggesting that the mud flowed to the surface through a fault zone or a fracture network instead of up the wellbore. 2. The ‘fish’ in the BJP-1 wellbore was found at its original location, not eroded away, which also suggests that the mud flow did not pass through the wellbore. 3. The temperature log showed lower temperatures than the surface mud temperature, and the sonic log response was quiet; these results suggest that there was no near-casing mudflow. 4. Subsurface conditions in the area are dynamic, with shear movement at depths of 1,100 to 1,500 ft. 5. RW-1 experienced alternating losses and kicks at a depth of around 3,200 ft as it entered the unstable fault zone and fracture network that likely served as the mudflow conduit; drilling in the zone of instability around the conduit cannot be avoided and is full of hazards. 6. The area is geologically dynamic: the subsidence rate at the rig site exceeded 100 cm in a month, and the subsidence also had a lateral component. 7. LUSI has multiple mudflow conduits, as reflected in the more than 100 gas bubbles currently occurring within a radius of 1.5 km. Although the relief wells did not achieve their intended purpose of stopping the mudflow, they allowed the collection of valuable data, all of which suggests that the mudflow did not originate from the BJP-1 wellbore as originally assumed. Using relief wells to kill the mudflow is futile in such a complex plumbing system: new conduits, or the two dormant mudflow centers along the fault line that appeared at the beginning of LUSI, may reactivate if the currently active conduit is blocked. In conclusion, LUSI appears to be another naturally occurring MV that is impossible to kill using relief wells.
Quality-based Multimodal Classification Using Tree-Structured Sparsity
2014-03-08
Bahrampour, Soheil; Ray, Asok; Nasrabadi, Nasser M.
Assessing Institutional Ineffectiveness: A Strategy for Improvement.
ERIC Educational Resources Information Center
Cameron, Kim S.
1984-01-01
Based on the theory that institutional change and improvement are motivated more by knowledge of problems than by knowledge of successes, a fault tree analysis technique using Boolean logic for assessing institutional ineffectiveness by determining weaknesses in the system is presented. Advantages and disadvantages of focusing on weakness rather…
An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution
NASA Astrophysics Data System (ADS)
Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan
2013-04-01
The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies, including smoothed seismicity approaches. Smoothed seismicity represents an alternative concept for expressing the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of generally being based on earthquake catalogs alone, i.e., the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subduction zones. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: the first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density); the second is obtained by smoothing fault moment-rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-values of a truncated Gutenberg-Richter magnitude distribution using a maximum-likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude, assuming that (1) the occurrence of past seismicity is a good proxy for the occurrence of future seismicity and (2) future large-magnitude events are more likely to occur in the vicinity of known faults. 
Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts and provides rates of events in the magnitude range of 5 <= m <= 8.5 for the entire region of interest and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
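The core of the method described above, kernel-smoothed location densities blended as a function of magnitude, with the Gutenberg-Richter b-value fit by maximum likelihood, can be sketched in a few lines. This is a minimal illustration, not the SHARE implementation: the fixed Gaussian bandwidth, the simple Aki/Utsu b-value estimator, and all function and variable names are assumptions standing in for the paper's variable-kernel and completeness-aware machinery.

```python
import numpy as np

def smoothed_density(points, grid, bandwidth):
    """Gaussian kernel density of event locations evaluated on a 2-D grid
    (fixed bandwidth for brevity; the paper uses variable kernels)."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    dens = np.exp(-0.5 * d2 / bandwidth ** 2).sum(axis=1)
    return dens / dens.sum()  # normalize to a discrete probability density

def gr_b_value(mags, m_min, dm=0.1):
    """Maximum-likelihood b-value (Aki/Utsu estimator with bin correction)."""
    return np.log10(np.e) / (np.mean(mags) - (m_min - dm / 2.0))

def blended_density(dens_seis, dens_fault, magnitude, m_lo=5.0, m_hi=8.5):
    """Linear weighting of the two densities: the fault-based density gains
    weight as the forecast magnitude grows."""
    w = np.clip((magnitude - m_lo) / (m_hi - m_lo), 0.0, 1.0)
    return (1.0 - w) * dens_seis + w * dens_fault
```

At the lowest forecast magnitude the blended density reduces to the catalog-based density, and at the highest it reduces to the fault-based density, matching the two assumptions stated in the abstract.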
Managing Risk to Ensure a Successful Cassini/Huygens Saturn Orbit Insertion (SOI)
NASA Technical Reports Server (NTRS)
Witkowski, Mona M.; Huh, Shin M.; Burt, John B.; Webster, Julie L.
2004-01-01
I. Design: a) S/C designed to be largely single-fault tolerant; b) Operate in flight-demonstrated envelope, with margin; and c) Strict compliance with requirements & flight rules. II. Test: a) Baseline, fault & stress testing using flight system testbeds (H/W & S/W); b) In-flight checkout & demos to remove first-time events. III. Failure Analysis: a) Critical-event-driven fault tree analysis; b) Risk mitigation & development of contingencies. IV. Residual Risks: a) Accepted pre-launch waivers to Single Point Failures; b) Unavoidable risks (e.g. natural disaster). V. Mission Assurance: a) Strict process for characterization of variances (ISAs, PFRs & Waivers); b) Full-time Mission Assurance Manager reports to Program Manager: 1) Independent assessment of compliance with institutional standards; 2) Oversight & risk assessment of ISAs, PFRs & Waivers etc.; and 3) Risk Management Process facilitator.
Hydrogeologic Framework of the Yakima River Basin Aquifer System, Washington
Vaccaro, J.J.; Jones, M.A.; Ely, D.M.; Keys, M.E.; Olsen, T.D.; Welch, W.B.; Cox, S.E.
2009-01-01
The Yakima River basin aquifer system underlies about 6,200 square miles in south-central Washington. The aquifer system consists of basin-fill deposits occurring in six structural-sedimentary basins, the Columbia River Basalt Group (CRBG), and generally older bedrock. The basin-fill deposits were divided into 19 hydrogeologic units, the CRBG was divided into three units separated by two interbed units, and the bedrock was divided into four units (the Paleozoic, the Mesozoic, the Tertiary, and the Quaternary bedrock units). The thickness of the basin-fill units and the depth to the top of each unit and interbed of the CRBG were mapped. Only the surficial extent of the bedrock units was mapped due to insufficient data. Average mapped thickness of the different units ranged from 10 to 600 feet. Lateral hydraulic conductivity (Kh) of the units varies widely, indicating the heterogeneity of the aquifer system. Average or effective Kh values of the water-producing zones of the basin-fill units are on the order of 1 to 800 ft/d and are about 1 to 10 ft/d for the CRBG units as a whole. Effective or average Kh values for the different rock types of the Paleozoic, Mesozoic, and Tertiary units appear to be about 0.0001 to 3 ft/d. The more permeable Quaternary bedrock unit may have Kh values that range from 1 to 7,000 ft/d. Vertical hydraulic conductivity (Kv) of the units is largely unknown. Kv values have been estimated to range from about 0.009 to 2 ft/d for the basin-fill units, and Kv values for the clay-to-shale parts of the units may be as small as 10^-10 to 10^-7 ft/d. Reported Kv values for the CRBG units ranged from 4x10^-7 to 4 ft/d. Variations in the concentrations of geochemical solutes and the concentrations and ratios of the isotopes of hydrogen, oxygen, and carbon in groundwater provided information on the hydrogeologic framework and groundwater movement. 
Stable isotope ratios of water (deuterium and oxygen-18) indicated dispersed sources of groundwater recharge to the CRBG and basin-fill units and that surface water and groundwater are derived from atmospheric precipitation. The concentrations of dissolved methane were larger than could be attributed to atmospheric sources in more than 80 percent of wells with measured methane concentrations. The concentrations of the stable isotope carbon-13 of methane were indicative of a thermogenic source of methane. Most of the occurrences of methane were at locations several miles distant from mapped structural fault features, suggesting that the upward vertical movement of thermogenic methane from the underlying bedrock may be more widespread than previously assumed, or that there may be a more general occurrence of unmapped (buried) fault structures. Carbon and tritium isotope data and the concentrations of dissolved constituents indicate a complex groundwater flow system with multiple contributing zones to groundwater wells and relative groundwater residence times on the order of a few tens to many thousands of years. Potential mean annual recharge for water years 1950-2003 was estimated to be about 15.6 in. or 7,149 ft3/s (5.2 million acre-ft) and includes the effects of human activities such as irrigation of croplands. If there had been no human activities (predevelopment conditions) during that time period, estimated recharge would have been about 11.9 in. or 5,450 ft3/s (3.9 million acre-ft). Estimated mean annual recharge ranges from virtually zero in the dry parts of the lower basin to more than 100 in. in the humid uplands, where annual precipitation is more than 120 in. Groundwater in the different hydrogeologic units occurs under perched, unconfined, semiconfined, and confined conditions. Groundwater moves from topographic highs in the uplands to topographic low areas along the streams. 
The flow system in the basin-fill units is compartmentalized due to topography and geologic structure. The flow system also is compartmentalized for the CRBG units but not to as large
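The recharge figures quoted above can be cross-checked with a simple unit conversion. This sketch assumes only the rounded basin area (6,200 mi²) and recharge depth (15.6 in/yr) given in the abstract; it reproduces roughly 7,100 ft³/s and 5.2 million acre-ft, consistent with the reported 7,149 ft³/s given the rounding of the basin area.

```python
# Cross-check: 15.6 in/yr of recharge over a 6,200 mi^2 basin,
# expressed as a mean flow rate (ft^3/s) and an annual volume (acre-ft).
AREA_MI2 = 6200
IN_PER_YR = 15.6

FT2_PER_MI2 = 5280 ** 2           # square feet per square mile
SEC_PER_YR = 365.25 * 86400       # seconds per (Julian) year
FT2_PER_ACRE = 43560              # square feet per acre

area_ft2 = AREA_MI2 * FT2_PER_MI2
vol_ft3_per_yr = area_ft2 * IN_PER_YR / 12.0   # inches -> feet of water

flow_cfs = vol_ft3_per_yr / SEC_PER_YR         # ~7.1e3 ft^3/s
vol_acre_ft = vol_ft3_per_yr / FT2_PER_ACRE    # ~5.2 million acre-ft
```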
Pinus lambertiana Dougl. Sugar Pine; Pinaceae Pine family
Bohun B. Kinloch Jr.; William H. Scheuner
1990-01-01
Called "the most princely of the genus" by its discoverer, David Douglas, sugar pine (Pinus lambertiana) is the tallest and largest of all pines, commonly reaching heights of 53 to 61 m (175 to 200 ft) and d.b.h. of 91 to 152 cm (36 to 60 in). Old trees occasionally exceed 500 years and, among associated species, are second only to giant...
Brush reduces growth of thinned ponderosa pine in northern California
William W. Oliver
1984-01-01
The effects of tree spacing and brush competition were evaluated on a ponderosa pine (Pinus ponderosa Dougl. ex Laws. var. ponderosa) site of low productivity in California's North Coast Range. Eleven-year-old saplings were thinned to square spacings of 2.1, 2.4, 3.0, and 4.3 m (7, 8, 10, and 14 ft), and all, half, and none of...
Yaoxiang Li; Chris B. LeDoux; Jingxin Wang
2006-01-01
The effects of variable width of streamside management zones (25, 50, 75, and 100 ft) (SMZs) and removal level of trees (10%, 30%, and 50% of basal area) on production and cost of implementing SMZs in central Appalachian hardwood forests were simulated by using a computer model. Harvesting operations were performed on an 80-year-old generated natural hardwood stand...
Wood density-moisture profiles in old-growth Douglas-fir and western hemlock.
W.Y. Pong; Dale R. Waddell; Lambert Michael B.
1986-01-01
Accurate estimation of the weight of each load of logs is necessary for safe and efficient aerial logging operations. The prediction of green density (lb/ft3) as a function of height is a critical element in the accurate estimation of tree bole and log weights. Two sampling methods, disk and increment core (Bergstrom xylodensimeter), were used to measure the density-...
Plantation Spacing Affects Early Growth of Planted Virginia Pine
T.E. Russell
1979-01-01
Spacings ranging from 4 x 4 to 8 x 8 ft did not affect 15 year height growth of Virginia pines planted on a cutover Cumberland Plateau site. Wider spacings produced trees of larger diameters than did closer spacings; closer spacings had more basal area and volume. Although height to the base of the live crown increased as spacing narrowed, self-pruning was poor at all...
Space Radar Image of Karakax Valley, China 3-D
NASA Technical Reports Server (NTRS)
1994-01-01
This three-dimensional perspective of the remote Karakax Valley in the northern Tibetan Plateau of western China was created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are helpful to scientists because they reveal where the slopes of the valley are cut by erosion, as well as the accumulations of gravel deposits at the base of the mountains. These gravel deposits, called alluvial fans, are a common landform in desert regions that scientists are mapping in order to learn more about Earth's past climate changes. Higher up the valley side is a clear break in the slope, running straight, just below the ridge line. This is the trace of the Altyn Tagh fault, which is much longer than California's San Andreas fault. Geophysicists are studying this fault for the clues it may provide about the behavior of large strike-slip faults. Elevations range from 4000 m (13,100 ft) in the valley to over 6000 m (19,700 ft) at the peaks of the glaciated Kun Lun mountains running from the front right towards the back. Scale varies in this perspective view, but the area is about 20 km (12 miles) wide in the middle of the image, and there is no vertical exaggeration. The two radar images were acquired on separate days during the second flight of the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour in October 1994. The interferometry technique provides elevation measurements of all points in the scene. The resulting digital topographic map was used to create this view, looking northwest from high over the valley. Variations in the colors can be related to gravel, sand and rock outcrops. This image is centered at 36.1 degrees north latitude, 79.2 degrees east longitude. 
Radar image data are draped over the topography to provide the color with the following assignments: Red is L-band vertically transmitted, vertically received; green is the average of L-band vertically transmitted, vertically received and C-band vertically transmitted, vertically received; and blue is C-band vertically transmitted, vertically received. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA's Mission to Planet Earth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Shazly, S.; Garossino, G.A.
1991-03-01
Ramadan oil field, located in the central Gulf of Suez, is one of the giant oil fields of Egypt. It is named for the Islamic month of Ramadan during which it was discovered through the drilling of the well GS303-1 in September of 1974. Production comes primarily from within the massive sandstone reservoir of the Nubia 'C' Formation which covers some 2,850 ac with an oil column in excess of 1,000 ft. Secondary production occurs within both the Nubia 'A' Formation and Nezzazat Group. The northeasterly dipping Nubia 'C' reservoir is bounded to the west by two northwest-southeasterly oriented fault systems which are of similar throw but have significantly different ages. Northern Ramadan field is bounded by a late Miocene age fault, which transfers 75% of its throw through a series of southwest-northeasterly trending faults to a preexisting Oligo-Miocene age fault to the south. The latter bounds the southern half of the field and is typified by a heavily eroded scarp present at the pre-Miocene unconformity. The recent discovery of this scarp has allowed the westward repositioning of the updip limit of the Nubia 'C' reservoir resulting in a significant increase in recoverable reserves.
Fault-tolerant conversion between adjacent Reed-Muller quantum codes based on gauge fixing
NASA Astrophysics Data System (ADS)
Quan, Dong-Xiao; Zhu, Li-Li; Pei, Chang-Xing; Sanders, Barry C.
2018-03-01
We design forward and backward fault-tolerant conversion circuits, which convert between the Steane code and the 15-qubit Reed-Muller quantum code so as to provide a universal transversal gate set. In our method, only seven out of a total of 14 code stabilizers need to be measured, and we further enhance the circuit by simplifying some stabilizers; thus, we need only measure eight weight-4 stabilizers for one round of forward conversion and seven weight-4 stabilizers for one round of backward conversion. For conversion, we treat random single-qubit errors and their influence on syndromes of gauge operators, and our novel single-step process enables more efficient fault-tolerant conversion between these two codes. We make our method quite general by showing how to convert between any two adjacent Reed-Muller quantum codes \overline{\textsf{RM}}(1,m) and \overline{\textsf{RM}}(1,m+1), for which we need only measure stabilizers whose number scales linearly with m, rather than exponentially with m as in previous work. We provide the explicit mathematical expression for the necessary stabilizers and the concomitant resources required.
Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger, or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices, and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements.
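The fault-tree evaluation underlying such a methodology can be illustrated with a toy example. The four-node topology, the failure probabilities, and the gate layout below are hypothetical (the paper generates the tree automatically from the actual network); the sketch only shows how AND/OR gates combine independent permanent-fault probabilities of devices into a top-event (system failure) probability.

```python
from functools import reduce

def p_or(probs):
    """OR gate: the gate fails if any input fails -> 1 - prod(1 - p_i),
    assuming independent basic events."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

def p_and(probs):
    """AND gate: the gate fails only if all inputs fail -> prod(p_i)."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

# Hypothetical topology: sensor S reaches the gateway G through router R1
# or R2 (a redundant pair); S and G are single points of failure.
p_sensor, p_r1, p_r2, p_gateway = 0.01, 0.05, 0.05, 0.02

routing_fails = p_and([p_r1, p_r2])                    # both routers down
top_event = p_or([p_sensor, routing_fails, p_gateway]) # system failure
reliability = 1.0 - top_event
```

The redundant router pair enters the top event as an AND gate, so its contribution (0.05 × 0.05 = 0.0025) is far smaller than either single point of failure, which is exactly the kind of redundancy trade-off the methodology is meant to expose.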
Expert System for Test Program Set Fault Candidate Selection
1989-09-01
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. 
The programs are: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
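The kind of computation STEM performs, a scaled-and-squared Taylor-series matrix exponential applied to a stiff Markov fault/recovery model, can be sketched briefly. The 3-state model, its rates, and the truncation choices below are illustrative assumptions, not PAWS/STEM code; the stiffness comes from the large disparity between the fault-arrival and recovery rates, exactly as the abstract describes.

```python
import numpy as np

# Illustrative 3-state Markov model: state 0 = healthy, state 1 = fault
# detected / reconfiguring, state 2 = system failure (absorbing death state).
LAM = 1e-4     # fault arrival rate (per hour) -- rare
DELTA = 3.6e3  # recovery rate (per hour) -- orders of magnitude faster

Q = np.array([
    [-LAM,            LAM,  0.0],
    [DELTA, -(DELTA + LAM), LAM],   # a second fault during recovery is fatal
    [0.0,             0.0,  0.0],   # death state is absorbing
])

def expm_scaled_taylor(A, terms=20):
    """Scaled-and-squared Taylor series for exp(A) -- the idea behind STEM:
    scale A down so the series converges quickly, then square the result."""
    s = max(0, int(np.ceil(np.log2(max(1.0, np.abs(A).sum(axis=1).max())))))
    B = A / 2 ** s
    E, T = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        T = T @ B / k          # next Taylor term B^k / k!
        E = E + T
    for _ in range(s):
        E = E @ E              # undo the scaling by repeated squaring
    return E

t = 10.0                       # mission time in hours
P = expm_scaled_taylor(Q * t)  # transition probabilities over [0, t]
p_failure = P[0, 2]            # probability of reaching the death state
```

Because the death-state probability here is tiny (a fault must arrive and then a second fault must strike during the brief recovery window), a naive unscaled series on the stiff generator would lose it to round-off; scaling first is what keeps the computation stable.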
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Neotectonic Studies of the Lake Ohrid Basin (FYROM/Albania)
NASA Astrophysics Data System (ADS)
Nadine, H.; Liermann, A.; Glasmacher, U. A.; Reicherter, K. R.
2010-12-01
The Lake Ohrid Basin, located at 693 m a.s.l. on the south-western border of Macedonia (FYROM) with Albania, is a suitable location for neotectonic studies. The lake is set in an extensional basin-and-range-like situation, which is influenced by the roll-back and detachment of the subducted slab of the Northern Hellenic Trench. The seismicity record of the area lists frequent shallow earthquakes with magnitudes of up to 6.6, which classifies the region as one of the highest-risk areas of Macedonia and Albania. A multidisciplinary approach was chosen to reveal the stress history of the region. Tectonic morphology, paleostress analysis, remote sensing and geophysical investigations have been carried out to trace the landscape evolution. Furthermore, apatite fission-track (A-FT) analysis and t-T-path modelling were performed to constrain the thermal history and the exhumation rates. The deformation history of the basin can be divided into three major phases, an interpretation also supported by paleostress data collected around the lake: 1. NW-SE shortening from Late Cretaceous to Miocene with compression, thrusting and uplift; 2. Uplift and diminishing compression in the Late Miocene causing strike-slip and normal faulting; 3. Vertical uplift and E-W extension from the Pliocene to the present associated with local subsidence and (half-) graben formation. The initiation of the Ohrid Basin can be dated to the Late Miocene to Pliocene. The morphology of the basin itself shows features that characterize the area as an active seismogenic landscape. The elongated N-S-trending basin is limited by the steep flanks of the Galicica and Mokra Mountains to the E and W, which are tectonically controlled by normal faulting. This is expressed in linear, step-like fault scarps on land with heights between 2 and 35 m. The faults have lengths between 10 and 20 km and consist of several segments. 
Post-glacial bedrock fault scarps at Lake Ohrid are long-lived expressions of repeated surface faulting in tectonically active regions where erosion cannot outpace the fault slip, and they generally become younger towards the center of the basin. Other characteristics are well-preserved wineglass-shaped valleys and triangular facets. In contrast, the plains that stretch along the shore north and south of the lake are dominated by clastic input related to climate variations and uplift/erosion. Apatite fission-track analysis shows a range of apparent ages from 56.5±3.1 to 10.5±0.9 Ma, with a spatial distribution that gives evidence for the activation of separate blocks with differing exhumation and rock uplift histories. Fission-track ages from molasse and flysch sediments of the basin fill are distinctly younger than those from basement units. Generally, the Prespa Basin, located east of the Ohrid Basin, reveals A-FT ages around 10 Ma close to normal faults, whereas modelling results for the Ohrid Basin suggest a rapid uplift initiated around 1.4 Ma, associated with uplift rates on the order of 1 mm/a. Therefore, we assume a westward migration of extensional basin formation, as the initiation of the Prespa Basin can be placed well before the formation of the Ohrid Basin.
Uhrich, Mark A.
2010-01-01
A debris flow and sediment torrent occurred on the flanks of Mt Jefferson in Oregon on November 6, 2006, inundating 150 acres of forest. The massive debris flow was triggered by a rock and snow avalanche from the Milk Creek glaciers and snowfields during the early onset of an intense storm originating near the Hawaiian Islands. The debris flow consisted of a heavy conglomerate of large boulders, cobbles, and coarse-grained sediment that was deposited at depths of up to 15 ft and within 3 mi of the glaciers, and a viscous slurry that deposited finer-grained sediments at depths of 0.5 to 3 ft. The muddy slurry coated standing trees within the lower reaches of Milk Creek as it moved downslope.
Chen, Wei; Li, Hui; Hou, Enke; Wang, Shengquan; Wang, Guirong; Panahi, Mahdi; Li, Tao; Peng, Tao; Guo, Chen; Niu, Chao; Xiao, Lele; Wang, Jiale; Xie, Xiaoshen; Ahmad, Baharin Bin
2018-09-01
The aim of the current study was to produce groundwater spring potential maps using novel ensembles of weights-of-evidence (WoE) with logistic regression (LR) and functional tree (FT) models. First, a total of 66 springs were identified by field surveys; 70% of the spring locations were used for training the models and 30% were reserved for the validation process. Second, a total of 14 affecting factors, including aspect, altitude, slope, plan curvature, profile curvature, stream power index (SPI), topographic wetness index (TWI), sediment transport index (STI), lithology, normalized difference vegetation index (NDVI), land use, soil, distance to roads, and distance to streams, were used to analyze the spatial relationship between these factors and spring occurrences. Multicollinearity analysis and feature selection with the correlation attribute evaluation (CAE) method were employed to optimize the affecting factors. Subsequently, the novel ensembles of the WoE, LR, and FT models were constructed using the training dataset. Finally, receiver operating characteristic (ROC) curves, standard errors, 95% confidence intervals (CIs), and significance levels (P values) were employed to validate and compare the performance of the three models. Overall, all three models performed well for groundwater spring potential evaluation. The prediction capability of the FT model, with the highest AUC values, the smallest standard errors, the narrowest CIs, and the smallest P values for the training and validation datasets, is better than that of the other models. The groundwater spring potential maps can be adopted for the management of water resources and land use by planners and engineers.
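The ROC/AUC comparison described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the labels and model scores are invented, and AUC is computed via the rank (Mann-Whitney) identity rather than any specific software package.

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive (spring) site outscores a randomly chosen negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical spring-potential scores from two models (1 = spring present).
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores_ft = [0.9, 0.8, 0.7, 0.3, 0.2, 0.4, 0.85, 0.1]   # e.g., functional tree
scores_lr = [0.8, 0.6, 0.35, 0.4, 0.3, 0.45, 0.7, 0.2]  # e.g., WoE-LR ensemble

print(auc(labels, scores_ft))
print(auc(labels, scores_lr))
```

A higher AUC means the model ranks spring sites above non-spring sites more consistently, which is the sense in which the FT model outperforms the others in the study.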
Communications and tracking expert systems study
NASA Technical Reports Server (NTRS)
Leibfried, T. F.; Feagin, Terry; Overland, David
1987-01-01
The original objectives of the study consisted of five broad areas of investigation: criteria and issues for explanation of communication and tracking system anomaly detection, isolation, and recovery; data storage simplification issues for fault detection expert systems; data selection procedures for decision tree pruning and optimization to enhance the abstraction of pertinent information for clear explanation; criteria for establishing levels of explanation suited to needs; and analysis of expert system interaction and modularization. Progress was made in all areas, but to a lesser extent in the criteria for establishing levels of explanation suited to needs. Among the types of expert systems studied were those related to anomaly or fault detection, isolation, and recovery.
[Medical Equipment Maintenance Methods].
Liu, Hongbin
2015-09-01
The high technology content and complexity of medical equipment, together with its safety and effectiveness requirements, place high demands on maintenance work. This paper introduces some basic methods of medical instrument maintenance, including fault tree analysis, the node method, and the exclusion method, three important methods in medical equipment maintenance; using them, hardware breakdown maintenance can be carried out easily for instruments that have circuit drawings. The paper also introduces processing methods for some special fault conditions, in order to avoid detours when the same problems are encountered again. Continued learning is very important for staff newly engaged in this area.
Exploration in Ordovician of central Michigan Basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, J.H.; Barratt, M.W.
1985-12-01
Deep wells in the central Michigan basin have provided sufficient data to define two new mappable formations - the Foster Formation and the Bruggers Formation. Recent conodont studies have corrected the age assignments of the strata containing these formations. Previously, the lower section (Foster) was classified as mostly Cambrian, and the upper unit (Bruggers) was identified as Early Ordovician. Conodont identifications indicate an Early and Middle Ordovician age for the Foster Formation and a Middle Ordovician age for the Bruggers Formation. The Michigan basin existed in embryonic form in the Late Cambrian, but the full outline of the present-day basin did not develop until Early Ordovician. Gas and condensate are produced from the Bruggers Formation as deep as 11,252 ft (3429 m). Geothermal investigations suggest that gas production is possible to the base of the Paleozoic section in the central basin (17,000 ft or 5181 m). Paleotemperatures were higher during the Paleozoic owing to 3000-4000 ft (914-1219 m) of additional sedimentary cover. Five wells are producing from the Bruggers Formation. All are deeper tests in anticlines producing from Devonian reservoirs discovered earlier. The structures are the result of vertical movements of basement fault blocks activated by regional stresses. 12 figures, 2 tables.
NASA Astrophysics Data System (ADS)
Estuar, Maria Regina Justina; Victorino, John Noel; Coronel, Andrei; Co, Jerelyn; Tiausas, Francis; Señires, Chiara Veronica
2017-09-01
Use of wireless sensor networks and smartphone integration to monitor environmental parameters surrounding plantations is made possible by readily available and affordable sensors. Providing low-cost monitoring devices would be beneficial, especially to small farm owners, in a developing country like the Philippines, where agriculture covers a significant share of the labor market. This study discusses the integration of wireless soil sensor devices and smartphones to create an application that uses multidimensional analysis to detect the presence or absence of plant disease. Specifically, soil sensors collect soil quality parameters in a sink node, from which the smartphone retrieves data via Bluetooth. Given these, there is a need to develop a classification model on the mobile phone that reports the infection status of a soil. Though tree classification is the most appropriate approach for continuous parameter-based datasets, there is a need to determine whether tree models will yield coherent results. Soil sensor data residing on the phone are modeled using several variations of decision tree, namely: decision tree (DT), best-fit (BF) decision tree, functional tree (FT), Naive Bayes (NB) decision tree, J48, J48graft and LAD tree, where each approach considers all sensor nodes as one. Results show that there are significant differences among soil sensor parameters, indicating variances in scores between the infected and uninfected sites. Furthermore, analysis of variance in accuracy, recall, precision and F1 measure scores shows homogeneity among the NBTree, J48graft and J48 tree classification models.
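As a minimal sketch of the tree-classification idea (not the Weka J48/FT implementations the study evaluated), a one-level decision tree, or stump, can be trained on a single soil parameter; the moisture readings and infection labels below are invented for illustration.

```python
def train_stump(values, labels):
    """Pick the threshold on one sensor parameter that minimizes
    misclassification of infected (1) vs. uninfected (0) sites."""
    best = None
    for t in sorted(set(values)):
        # Predict "infected" when the reading falls below the threshold.
        errs = sum((v < t) != bool(y) for v, y in zip(values, labels))
        errs = min(errs, len(values) - errs)  # allow the inverse rule too
        if best is None or errs < best[1]:
            best = (t, errs)
    return best  # (threshold, training errors)

# Hypothetical soil-moisture readings and infection status per site.
moisture = [18.2, 22.5, 35.1, 40.3, 17.9, 38.8, 21.0, 41.2]
infected = [1,    1,    0,    0,    1,    0,    1,    0]

threshold, errors = train_stump(moisture, infected)
print(threshold, errors)
```

A full decision tree applies this split search recursively over all parameters; the stump shows the core operation each tree variant in the study performs at every node.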
Mori, J.
1996-01-01
Details of the M 4.3 foreshock to the Joshua Tree earthquake were studied using P waves recorded on the Southern California Seismic Network and the Anza network. Deconvolution, using an M 2.4 event as an empirical Green's function, corrected for complicated path and site effects in the seismograms and produced simple far-field displacement pulses that were inverted for a slip distribution. Both possible fault planes, north-south and east-west, for the focal mechanism were tested by a least-squares inversion procedure with a range of rupture velocities. The results showed that the foreshock ruptured the north-south plane, similar to the mainshock. The foreshock initiated a few hundred meters south of the mainshock and ruptured to the north, toward the mainshock hypocenter. The mainshock (M 6.1) initiated near the northern edge of the foreshock rupture 2 hr later. The foreshock had a high stress drop (320 to 800 bars) and broke a small portion of the fault adjacent to the mainshock but was not able to immediately initiate the mainshock rupture.
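The empirical Green's function step can be illustrated with water-level-stabilized spectral division, a common scheme for this kind of deconvolution (the paper does not state which stabilization it used); the signals below are synthetic.

```python
import numpy as np

def egf_deconvolve(record, green, water_level=1e-3):
    """Spectral division R/G stabilized with a water level: the
    denominator |G|^2 is floored at water_level * max(|G|^2)."""
    n = len(record)
    R = np.fft.rfft(record, n)
    G = np.fft.rfft(green, n)
    power = np.abs(G) ** 2
    floor = water_level * power.max()
    return np.fft.irfft(R * np.conj(G) / np.maximum(power, floor), n)

# Synthetic example: a known two-sample source pulse circularly convolved
# with a random "path" wavelet standing in for the small-event recording.
rng = np.random.default_rng(0)
green = rng.standard_normal(64)
source = np.zeros(64)
source[3], source[4] = 1.0, 0.5
record = np.real(np.fft.ifft(np.fft.fft(green) * np.fft.fft(source)))
recovered = egf_deconvolve(record, green, water_level=1e-8)
```

Dividing out the small event's spectrum removes the shared path and site response, leaving the simple far-field displacement pulse that can then be inverted for slip.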
Bedrosian, Paul A.; Burgess, Matthew K.; Nishikawa, Tracy
2013-01-01
Within the south-western Mojave Desert, the Joshua Basin Water District is considering applying imported water into infiltration ponds in the Joshua Tree groundwater sub-basin in an attempt to artificially recharge the underlying aquifer. Scarce subsurface hydrogeological data are available near the proposed recharge site; therefore, time-domain electromagnetic (TDEM) data were collected and analysed to characterize the subsurface. TDEM soundings were acquired to estimate the depth to water on either side of the Pinto Mountain Fault, a major east-west trending strike-slip fault that transects the proposed recharge site. While TDEM is a standard technique for groundwater investigations, special care must be taken when acquiring and interpreting TDEM data in a two-dimensional (2D) faulted environment. A subset of the TDEM data consistent with a layered-earth interpretation was identified through a combination of three-dimensional (3D) forward modelling and diffusion time-distance estimates. Inverse modelling indicates an offset in water table elevation of nearly 40 m across the fault. These findings imply that the fault acts as a low-permeability barrier to groundwater flow in the vicinity of the proposed recharge site. Existing production wells on the south side of the fault, together with a thick unsaturated zone and permeable near-surface deposits, suggest the southern half of the study area is suitable for artificial recharge. These results illustrate the effectiveness of targeted TDEM in support of hydrological studies in a heavily faulted desert environment where data are scarce and the cost of obtaining these data by conventional drilling techniques is prohibitive.
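The diffusion time-distance estimates mentioned above can be sketched with a standard rule of thumb for the depth reached by the TDEM "smoke ring" in a uniform half-space; the resistivity and time values below are placeholders, not values from the study.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum magnetic permeability, H/m

def tdem_diffusion_depth(t, resistivity):
    """Approximate depth (m) reached by the transient field after time t (s)
    in a half-space of the given resistivity (ohm-m): d = sqrt(2*t*rho/mu0)."""
    return math.sqrt(2.0 * t * resistivity / MU0)

# E.g., in 100 ohm-m ground the field has diffused to roughly 400 m by 1 ms,
# so the recording window controls how deep a sounding can sense.
print(tdem_diffusion_depth(1e-3, 100.0))
```

Estimates like this help decide which soundings are far enough from the fault, relative to their sensing depth, to be safely interpreted with a layered-earth model.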
Langridge, R.M.; Stenner, Heidi D.; Fumal, T.E.; Christofferson, S.A.; Rockwell, T.K.; Hartleb, R.D.; Bachhuber, J.; Barka, A.A.
2002-01-01
The Mw 7.4 17 August 1999 İzmit earthquake ruptured five major fault segments of the dextral North Anatolian Fault Zone. The 26-km-long, N86°W-trending Sakarya fault segment (SFS) extends from the Sapanca releasing step-over in the west to near the town of Akyazi in the east. The SFS emerges from Lake Sapanca as two distinct fault traces that rejoin to traverse the Adapazari Plain to Akyazi. Offsets were measured across 88 cultural and natural features that cross the fault, such as roads, cornfield rows, rows of trees, walls, rails, field margins, ditches, vehicle ruts, a dike, and ground cracks. The maximum displacement observed for the İzmit earthquake (∼5.1 m) was encountered on this segment. Dextral displacement for the SFS rises from less than 1 m at Lake Sapanca to greater than 5 m near Arifiye, only 3 km away. Average slip decreases uniformly to the east from Arifiye until the fault steps left from Sagir to Kazanci to the N75°W, 6-km-long Akyazi strand, where slip drops to less than 1 m. The Akyazi strand passes eastward into the Akyazi Bend, which consists of a high-angle bend (18°-29°) between the Sakarya and Karadere fault segments, a 6-km gap in surface rupture, and high aftershock energy release. Complex structural geometries exist between the İzmit, Düzce, and 1967 Mudurnu fault segments that have arrested surface ruptures on timescales ranging from 30 sec to 88 days to 32 yr. The largest of these step-overs may have acted as a rupture segmentation boundary in previous earthquake cycles.
1986-07-01
diameter. The machine can also chop shrub thickets of Gambel's oak (Quercus gambelii) and chokecherry (Prunus virginiana) into 4- to 6-in. pieces and...bush hog is the side-mounted hog that can be hydraulically lifted up to 15 ft for pruning tree limbs and shrubs. This implement is used primarily on
Production and cost analysis of a feller-buncher in central Appalachian hardwood forest
Charlie Long; Jingxin Wang; Joe McNeel; John Baumgras; John Baumgras
2002-01-01
A time study was conducted to evaluate the productivity and cost of a feller-buncher operating in a Central Appalachian hardwood forest. The sites harvested during observation consisted of primarily red maple and black cherry. Trees felled in the study had an average diameter at breast height (DBH) of 16.1 in. and a total merchantable height of 16 ft. A Timbco 445C...
High-Resolution Fault Zone Monitoring and Imaging Using Long Borehole Arrays
NASA Astrophysics Data System (ADS)
Paulsson, B. N.; Karrenbach, M.; Goertz, A. V.; Milligan, P.
2004-12-01
Long borehole seismic receiver arrays are increasingly used in the petroleum industry as a tool for high-resolution seismic reservoir characterization. Placing receivers in a borehole avoids the distortion of reflected seismic waves by the near-surface weathering layer, which leads to greatly improved vector fidelity and a much higher frequency content of 3-component recordings. In addition, a borehole offers a favorable geometry to image near-vertically dipping or overturned structures such as salt flanks or faults. When used for passive seismic monitoring, long borehole receiver arrays help reduce depth uncertainties of event locations. We investigate the use of long borehole seismic arrays for high-resolution fault zone characterization in the vicinity of the San Andreas Fault Observatory at Depth (SAFOD). We present modeling scenarios to show how an image of the vertically dipping fault zone down to the penetration point of the SAFOD well can be obtained by recording surface sources in a long array within the deviated main hole. We assess the ability to invert fault zone reflections for rock physical parameters by means of amplitude versus offset or angle (AVO/AVA) analyses. The quality of AVO/AVA studies depends on the ability to illuminate the fault zone over a wide range of incidence angles. We show how the length of the receiver array and the receiver spacing within the borehole influence the size of the volume over which reliable AVO/AVA information could be obtained. By means of AVO/AVA studies one can deduce hydraulic properties of the fault zone such as the type of fluids that might be present, the porosity, and the fluid saturation. Images of the fault zone obtained from a favorable geometry with sufficient illumination will enable us to map fault zone properties in the surroundings of the main hole penetration point. One of the targets of SAFOD is to drill into an active rupture patch of an earthquake cluster.
The question of whether this goal has indeed been achieved at the time the fault zone is penetrated can only be answered if the rock properties found at the penetration point can be compared to the surrounding volume. This task will require mapping of rock properties inverted from AVO/AVA analyses of fault zone reflections. We will also show real data examples from a test deployment of a 4000 ft, 80-level clamped 3-component receiver array in the SAFOD main hole in 2004.
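The AVO/AVA behavior discussed above can be illustrated with the two-term Shuey approximation of P-wave reflectivity versus incidence angle; this is a generic sketch (the abstract does not say which approximation the authors used), and the intercept and gradient values are invented.

```python
import math

def shuey_two_term(r0, gradient, theta_deg):
    """Two-term Shuey approximation of P-wave reflectivity:
    R(theta) ~ R0 + G * sin^2(theta), for small-to-moderate angles."""
    s = math.sin(math.radians(theta_deg))
    return r0 + gradient * s * s

# Hypothetical fault-zone reflection: weak normal-incidence reflectivity
# (R0) with a negative gradient (amplitude dims with angle).
for theta in (0, 10, 20, 30):
    print(theta, shuey_two_term(0.05, -0.15, theta))
```

Fitting intercept and gradient over a wide angle range is what allows fluid and porosity properties of the fault zone to be inferred, which is why array length and illumination aperture matter.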
Certification trails for data structures
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. Here, the applicability of the certification trail technique is significantly generalized. Previously, certification trails had to be customized to each algorithm; in this work, trails appropriate to wide classes of algorithms were developed. These certification trails are based on common data-structure operations such as those carried out using balanced binary trees and heaps. Any algorithm using these sets of operations can therefore employ the certification trail method to achieve software fault tolerance. To exemplify the scope of this generalization, constructions of trails for abstract data types such as priority queues and union-find structures are given. These trails are applicable to any data-structure implementation of the abstract data type. It is also shown that these ideas lead naturally to monitors for data-structure operations.
NASA Technical Reports Server (NTRS)
Braden, W. B.
1992-01-01
This talk discusses the importance of providing a process operator with concise information about a process fault including a root cause diagnosis of the problem, a suggested best action for correcting the fault, and prioritization of the problem set. A decision tree approach is used to illustrate one type of approach for determining the root cause of a problem. Fault detection in several different types of scenarios is addressed, including pump malfunctions and pipeline leaks. The talk stresses the need for a good data rectification strategy and good process models along with a method for presenting the findings to the process operator in a focused and understandable way. A real time expert system is discussed as an effective tool to help provide operators with this type of information. The use of expert systems in the analysis of actual versus predicted results from neural networks and other types of process models is discussed.
Modeling Off-Nominal Behavior in SysML
NASA Technical Reports Server (NTRS)
Day, John C.; Donahue, Kenneth; Ingham, Michel; Kadesch, Alex; Kennedy, Andrew K.; Post, Ethan
2012-01-01
Specification and development of fault management functionality in systems is performed in an ad hoc way - more of an art than a science. Improvements to system reliability, availability, safety and resilience will be limited without infusion of additional formality into the practice of fault management. Key to the formalization of fault management is a precise representation of off-nominal behavior. Using the upcoming Soil Moisture Active-Passive (SMAP) mission for source material, we have modeled the off-nominal behavior of the SMAP system during its initial spin-up activity, using the System Modeling Language (SysML). In the course of developing these models, we have developed generic patterns for capturing off-nominal behavior in SysML. We show how these patterns provide useful ways of reasoning about the system (e.g., checking for completeness and effectiveness) and allow the automatic generation of typical artifacts (e.g., success trees and FMECAs) used in system analyses.
Performance improvements of an F-15 airplane with an integrated engine-flight control system
NASA Technical Reports Server (NTRS)
Myers, Lawrence P.; Walsh, Kevin R.
1988-01-01
An integrated flight and propulsion control system has been developed and flight demonstrated on the NASA Ames-Dryden F-15 research aircraft. The highly integrated digital control (HIDEC) system provides additional engine thrust by increasing engine pressure ratio (EPR) at intermediate and afterburning power. The amount of EPR uptrim is modulated based on airplane maneuver requirements, flight conditions, and engine information. Engine thrust was increased as much as 10.5 percent at subsonic flight conditions by uptrimming EPR. The additional thrust significantly improved aircraft performance. Rate of climb was increased 14 percent at 40,000 ft and the time to climb from 10,000 to 40,000 ft was reduced 13 percent. A 14 and 24 percent increase in acceleration was obtained at intermediate and maximum power, respectively. The HIDEC logic performed fault free. No engine anomalies were encountered for EPR increases up to 12 percent and for angles of attack and sideslip of 32 and 11 deg, respectively.
Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without an operator on duty. It is highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas path of a gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.
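The entropy-as-uniformity idea can be sketched directly: treat the ring of exhaust thermocouple readings as a distribution and compute its normalized Shannon entropy. The temperatures below are invented, and this plain (non-kernelized) version is only the starting point the paper generalizes.

```python
import math

def temperature_uniformity(temps):
    """Normalized Shannon entropy of exhaust temperatures: 1.0 for a
    perfectly uniform thermocouple ring, lower when one burner path
    runs hot or cold relative to the others."""
    total = sum(temps)
    probs = [t / total for t in temps]
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(temps))

healthy = [620.0] * 8              # all thermocouples agree (deg C)
faulty = [620.0] * 7 + [470.0]     # one cold spot in the exhaust ring
print(temperature_uniformity(healthy))
print(temperature_uniformity(faulty))
```

A drop in this uniformity score flags a gas-path anomaly without needing a model of any individual sensor, which is what makes it suitable for unattended remote monitoring.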
Evolution of salt and hydrocarbon migration: Sweet Lake area, Cameron Parish, Louisiana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, J.A.; Sharpe, C.L.
The interpretation of seismic, gravity, and well data in northern Cameron Parish, Louisiana suggests that lateral salt flow has influenced the area's structural evolution, depositional patterns, and hydrocarbon migration. Sweet Lake Field has produced over 46 MMBO and 15 BCFG from Middle Miocene deltaic sands. The structural closure is a downthrown anticline on a fault controlled by the underlying salt feature. Sweet Lake Field overlies an allochthonous salt mass that was probably once part of an ancestral salt ridge extending from Hackberry to Big Lake fields. Nine wells encountering the top of salt and several seismic lines define a detached salt feature underlying over twenty square miles at depths from 8500-18,000 ft. Salt withdrawal in the East Hackberry-Big Lake area influenced the depositional patterns of the Oligocene lower Hackberry channel systems. Progradation of thick Middle Oligocene Camerina (A) and Miogypsinoides sands into the area caused salt thinning and withdrawal, resulting in the development and orientation of the large Marginulina-Miogypsinoides growth fault northwest of Sweet Lake. Additional evidence for the southeast trend of the salt is a well approximately two miles southeast of Sweet Lake which encountered salt at approximately 19,800 ft. High quality 2-D and 3-D seismic data will continue to enhance the regional understanding of the evolving salt structures in the onshore Gulf Coast and the local understanding of hydrocarbon migration. Additional examples of lateral salt flow will be recognized and some may prove to have subsalt hydrocarbon potential.
NASA Astrophysics Data System (ADS)
Gülerce, Zeynep; Buğra Soyman, Kadir; Güner, Barış; Kaymakci, Nuretdin
2017-12-01
This contribution provides an updated planar seismic source characterization (SSC) model to be used in the probabilistic seismic hazard assessment (PSHA) for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ) that are critical for the PSHA of Istanbul: segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, central Marmara, and Ganos/Saros segments. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested based on the observed seismicity that is associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b value, maximum magnitude, slip rate, weights of the rupture scenarios) is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect of earthquakes that are not associated with the defined rupture systems on the hazard, a background zone is introduced and the seismicity rates in the background zone are calculated using smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.
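The activity-rate bookkeeping for a rupture system can be sketched with a truncated Gutenberg-Richter recurrence model. The functional form below is a common choice, not necessarily the one adopted in the study, and the a, b, and maximum-magnitude values are placeholders.

```python
def gr_rate_above(m, a, b, m_max):
    """Annual rate of earthquakes with magnitude >= m under a
    Gutenberg-Richter law 10**(a - b*m), truncated at m_max."""
    if m >= m_max:
        return 0.0
    return 10 ** (a - b * m) - 10 ** (a - b * m_max)

# Placeholder parameters for a single hypothetical rupture system.
a, b, m_max = 3.0, 1.0, 7.5
for m in (5.0, 6.0, 7.0):
    rate = gr_rate_above(m, a, b, m_max)
    print(m, rate, 1.0 / rate)  # annual rate and mean recurrence interval (yr)
```

In a fault-based SSC model, rates like these are derived from slip-rate and geodetic constraints per rupture system and then tested against the observed seismicity catalog, as described above.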
DOE Office of Scientific and Technical Information (OSTI.GOV)
Çetinkol, Özgül Persil; Smith-Moritz, Andreia M.; Cheng, Gang
2012-12-28
Eucalypt species are a group of flowering trees widely used in pulp production for paper manufacture. For several decades, the wood pulp industry has focused research and development efforts on improving yields, growth rates and pulp quality through breeding and the genetic improvement of key tree species. Recently, this focus has shifted from the production of high quality pulps to the investigation of the use of eucalypts as feedstocks for biofuel production. Here the structure and chemical composition of the heartwood and sapwood of Eucalyptus dunnii, E. globulus, E. pillularis, E. urophylla, an E. urophylla-E. grandis cross, Corymbia citriodora ssp. variegata, and Acacia mangium were compared using nuclear magnetic resonance spectroscopy (NMR), X-ray diffraction (XRD) and biochemical composition analysis. Some trends relating to these compositions were also identified by Fourier transform near infrared (FT-NIR) spectroscopy. These results will serve as a foundation for a more comprehensive database of wood properties that will help develop criteria for the selection of tree species for use as biorefinery feedstocks.
NASA Astrophysics Data System (ADS)
Dygert, Nick; Liang, Yan
2015-06-01
Mantle peridotites from ophiolites are commonly interpreted as having mid-ocean ridge (MOR) or supra-subduction zone (SSZ) affinity. Recently, an REE-in-two-pyroxene thermometer was developed (Liang et al., 2013) that has higher closure temperatures (designated as T(REE)) than major element based two-pyroxene thermometers for mafic and ultramafic rocks that experienced cooling. The REE-in-two-pyroxene thermometer has the potential to extract meaningful cooling rates from ophiolitic peridotites and thus shed new light on the thermal history of the different tectonic regimes. We calculated T(REE) for available literature data from abyssal peridotites, subcontinental (SC) peridotites, and ophiolites around the world (Alps, Coast Range, Corsica, New Caledonia, Oman, Othris, Puerto Rico, Russia, and Turkey), and augmented the data with new measurements for peridotites from the Trinity and Josephine ophiolites and the Mariana trench. T(REE) values are compared to major element based thermometers, including the two-pyroxene thermometer of Brey and Köhler (1990) (T(BKN)). Samples with SC affinity have T(REE) and T(BKN) in good agreement. Samples with MOR and SSZ affinity have near-solidus T(REE) but T(BKN) hundreds of degrees lower. Closure temperatures for REE and Fe-Mg in pyroxenes were calculated to compare cooling rates among abyssal peridotites, MOR ophiolites, and SSZ ophiolites. Abyssal peridotites appear to cool more rapidly than peridotites from most ophiolites. On average, SSZ ophiolites have lower closure temperatures than abyssal peridotites and many ophiolites with MOR affinity. We propose that these lower temperatures can be attributed to the residence time in the cooling oceanic lithosphere prior to obduction. MOR ophiolites define a continuum spanning cooling rates from SSZ ophiolites to abyssal peridotites.
Consistent high closure temperatures for abyssal peridotites and the Oman and Corsica ophiolites suggest that hydrothermal circulation and/or rapid cooling events (e.g., normal faulting, unroofing) control the late thermal histories of peridotites from transform faults and from slow and fast spreading centers, with or without a crustal section.
NASA Technical Reports Server (NTRS)
Bennett, Richard A.; Reilinger, Robert E.; Rodi, William; Li, Yingping; Toksoz, M. Nafi; Hudnut, Ken
1995-01-01
Coseismic surface deformation associated with the M(sub w) 6.1, April 23, 1992, Joshua Tree earthquake is well represented by estimates of geodetic monument displacements at 20 locations independently derived from Global Positioning System and trilateration measurements. The rms signal to noise ratio for these inferred displacements is 1.8, with near-fault displacement estimates exceeding 40 mm. In order to determine the long-wavelength distribution of slip over the plane of rupture, a Tikhonov regularization operator is applied to these estimates which minimizes stress variability subject to purely right-lateral slip and zero surface slip constraints. The resulting slip distribution yields a geodetic moment estimate of 1.7 x 10(exp 18) N m with corresponding maximum slip around 0.8 m and compares well with independent and complementary information, including seismic moment and source time function estimates and main shock and aftershock locations. From empirical Green's function analyses, a rupture duration of 5 s is obtained, which implies a rupture radius of 6-8 km. Most of the inferred slip lies to the north of the hypocenter, consistent with northward rupture propagation. Stress drop estimates are in the range of 2-4 MPa. In addition, predicted Coulomb stress increases correlate remarkably well with the distribution of aftershock hypocenters; most of the aftershocks occur in areas for which the mainshock rupture produced stress increases larger than about 0.1 MPa. In contrast, predicted stress changes are near zero at the hypocenter of the M(sub w) 7.3, June 28, 1992, Landers earthquake, which nucleated about 20 km beyond the northernmost edge of the Joshua Tree rupture. Based on aftershock migrations and the predicted static stress field, we speculate that redistribution of Joshua Tree-induced stress perturbations played a role in the spatio-temporal development of the earthquake sequence culminating in the Landers event.
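The regularized slip inversion described above can be sketched as a damped least-squares problem. Everything below is a synthetic stand-in, not the study's geodetic data or fault geometry: the Green's function matrix, observations, and first-difference smoothing operator are invented purely to illustrate the Tikhonov machinery.

```python
import numpy as np

# Synthetic stand-ins (hypothetical sizes): 40 displacement components,
# 20 fault patches. G plays the role of elastic Green's functions.
rng = np.random.default_rng(0)
n_obs, n_patches = 40, 20
G = rng.normal(size=(n_obs, n_patches))
true_slip = np.clip(rng.normal(0.4, 0.2, n_patches), 0, None)
d = G @ true_slip + rng.normal(0, 0.01, n_obs)   # "observed" displacements

# First-difference operator penalizes rough slip (a crude proxy for
# minimizing stress variability along the rupture).
L = np.diff(np.eye(n_patches), axis=0)
lam = 1.0                                        # regularization weight

# Solve  min ||G m - d||^2 + lam^2 ||L m||^2  via the augmented system.
A = np.vstack([G, lam * L])
b = np.concatenate([d, np.zeros(L.shape[0])])
slip, *_ = np.linalg.lstsq(A, b, rcond=None)

# Crude moment proxy (unit patch areas and rigidity assumed).
moment = slip.sum()
print(slip.shape)
```

With real data, G would come from an elastic dislocation model and lam would be chosen by a trade-off (L-curve) analysis rather than fixed.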
Geology and habitat of oil in Ras Budran field, Gulf of Suez, Egypt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhary, L.R.; Taha, S.
1987-05-01
Deminex discovered the Ras Budran oil field in 1978. Discovery well EE 85-1 was drilled in about 140 ft of water, 4 km off the Sinai coast of the Gulf of Suez. Appraisal drilling (EE 85-2, 3, and 4 wells) confirmed the presence of a major field with an estimated 700 million bbl oil in place. The field, developed from three wellhead platforms, went on production in April 1983. To date, 20 development wells have been drilled. The Ras Budran structure at the deepest mappable seismic reflector, top Kareem (middle Miocene), is a broad northeast-southwest-trending anticlinal feature striking nearly at right angles to the main Gulf of Suez trend. At pre-Miocene producing horizons, the structure is complex and consists of a northeast-dipping flank (14-15 degrees) broken into several blocks by faults and limited to the south and west by major bounding faults. Oil is produced from three units of Nubian sandstone at a depth of 11,000 to 12,000 ft. The lower unit, of Paleozoic age, averages 10% porosity and up to 200 md in-situ permeability; the wells completed in this unit produce up to 2000 BOPD. In contrast, the sands of the upper two units, of Lower Cretaceous age, have 15-20% porosity and up to 700 md permeability; the wells completed in these units produce 6000-8000 BOPD. The Ras Budran structure was primarily formed during the intra-Rudeis tectonic phase (lower Miocene). Migration of oil for accumulation in Ras Budran started late in the upper Miocene or Pliocene, when the Santonian Brown Limestone and the Eocene Thebes Formation, the main source beds in the Gulf, reached the threshold of oil generation at a burial depth of about 10,000 ft (3000 m). At these depths, the organic matter in the source beds has a transformation ratio of 0.10 to 0.15, increased yields of C15+ soluble organic matter and C15+ saturated hydrocarbons, a vitrinite reflectance of 0.62%, and a TTI value of 15.
Hydrogeology and structure of the Bluewater Springs area south central Montana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padilla, C.E.; Osborne, T.J.
1996-06-01
The Bluewater springs area in south central Montana was the site of oil and gas exploration in the first half of this century. Though no significant oil was found, artesian water wells produced over 3,000 gallons per minute. Artesian springs in the area produce tufa deposits over faulted, northwest-dipping Mesozoic and upper Paleozoic sediments. Two new faults were mapped in the vicinity; one of these, a high-angle vertical fault, dissects an anticline. The Tensleep and Madison aquifers (700-1,000 feet deep) leak water to the surface through faults and fractures, producing variable water quality depending on the minerals dissolved from overlying rock formations. Evaluation of limited aquifer data reveals the following: (1) hydraulic conductivity of 40,000 to 300,000 gpd/ft; (2) hydrostatic head greater than 400 feet above land surface; and (3) total dissolved solids concentrations of 2,370 ppm in Big Bluewater Springs, but only 1,200 ppm in a Tensleep well in the vicinity. Flowing wells 45 to 70 years old have failed, leading to major yield reductions and cessation of flow. Potentially corrosive groundwater, coupled with excessive flow velocities and inadequate well construction, has likely led to well failures. In response, major uncontrolled alterations of groundwater flow systems have occurred, with outbreaks of new springs and sinkholes near failed wells. New wells must be carefully planned, constructed, and tested to avoid excessive interference, depressurization, and failure.
Tiedeman, C.R.; Kernodle, J.M.; McAda, D.P.
1998-01-01
This report documents the application of nonlinear-regression methods to a numerical model of ground-water flow in the Albuquerque Basin, New Mexico. In the Albuquerque Basin, ground water is the primary source for most water uses. Ground-water withdrawal has steadily increased since the 1940s, resulting in large declines in water levels in the Albuquerque area. A ground-water flow model was developed in 1994 and revised and updated in 1995 for the purpose of managing basin ground-water resources. In the work presented here, nonlinear-regression methods were applied to a modified version of the previous flow model. Goals of this work were to use regression methods to calibrate the model with each of six different configurations of the basin subsurface and to assess and compare optimal parameter estimates, model fit, and model error among the resulting calibrations. The Albuquerque Basin is one in a series of north-trending structural basins within the Rio Grande Rift, a region of Cenozoic crustal extension. Mountains, uplifts, and fault zones bound the basin, and rock units within the basin include pre-Santa Fe Group deposits, Tertiary Santa Fe Group basin fill, and post-Santa Fe Group volcanics and sediments. The Santa Fe Group is greater than 14,000 feet (ft) thick in the central part of the basin. During deposition of the Santa Fe Group, crustal extension resulted in development of north-trending normal faults with vertical displacements of as much as 30,000 ft. Ground-water flow in the Albuquerque Basin occurs primarily in the Santa Fe Group and post-Santa Fe Group deposits. Water flows between the ground-water system and surface-water bodies in the inner valley of the basin, where the Rio Grande, a network of interconnected canals and drains, and Cochiti Reservoir are located. 
Recharge to the ground-water flow system occurs as infiltration of precipitation along mountain fronts and infiltration of stream water along tributaries to the Rio Grande; subsurface flow from adjacent regions; irrigation and septic field seepage; and leakage through the Rio Grande, canal, and Cochiti Reservoir beds. Ground water is discharged from the basin by withdrawal; evapotranspiration; subsurface flow; and flow to the Rio Grande, canals, and drains. The transient, three-dimensional numerical model of ground-water flow to which nonlinear-regression methods were applied simulates flow in the Albuquerque Basin from 1900 to March 1995. Six different basin subsurface configurations are considered in the model. These configurations are designed to test the effects of (1) varying the simulated basin thickness, (2) including a hypothesized hydrogeologic unit with large hydraulic conductivity in the western part of the basin (the west basin high-K zone), and (3) substantially lowering the simulated hydraulic conductivity of a fault in the western part of the basin (the low-K fault zone). The model with each of the subsurface configurations was calibrated using a nonlinear least-squares regression technique. The calibration data set includes 802 hydraulic-head measurements that provide broad spatial and temporal coverage of basin conditions, and one measurement of net flow from the Rio Grande and drains to the ground-water system in the Albuquerque area. Data are weighted on the basis of estimates of the standard deviations of measurement errors. The 10 to 12 parameters to which the calibration data as a whole are generally most sensitive were estimated by nonlinear regression, whereas the remaining model parameter values were specified. 
Results of model calibration indicate that the optimal parameter estimates as a whole are most reasonable in calibrations of the model with configurations 3 (which contains 1,600-ft-thick basin deposits and the west basin high-K zone), 4 (which contains 5,000-ft-thick basin de
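The weighted nonlinear least-squares calibration described above (weights from the reciprocals of measurement-error standard deviations) can be sketched with a toy forward model. The two-parameter exponential "model" below is a hypothetical stand-in for the ground-water flow model, and the observations are synthetic; only the weighting-and-regression pattern is the point.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

def model(params, x):
    # Toy stand-in for the flow model: two parameters (think hydraulic
    # conductivity and a decay/storage term), purely illustrative.
    k, s = params
    return k * np.exp(-s * x)

x = np.linspace(0, 10, 30)
sigma = np.full_like(x, 0.5)              # estimated measurement-error std devs
obs = model([8.0, 0.3], x) + rng.normal(0, sigma)

def weighted_residuals(params):
    # Each residual is weighted by 1/sigma, as in the report's scheme.
    return (model(params, x) - obs) / sigma

fit = least_squares(weighted_residuals, x0=[1.0, 1.0])
print(fit.success, np.round(fit.x, 2))
```

In the actual study the "model" is a transient 3-D flow simulation and the 10-12 most sensitive parameters are estimated while the rest are fixed; the regression mechanics are the same.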
NASA Astrophysics Data System (ADS)
Matsuda, T.; Omura, K.; Ikeda, R.
2003-12-01
The National Research Institute for Earth Science and Disaster Prevention (NIED) has been conducting fault zone drilling, which is especially important for understanding the structure, composition, and physical properties of an active fault. In the Chubu district of central Japan there are large active faults such as the Atotsugawa fault (associated with the 1858 Hietsu earthquake) and the Atera fault (associated with the 1586 Tensho earthquake). Since the 1995 Kobe earthquake, the importance of direct measurements in fault zones by drilling has been widely recognized. Here we describe the Atera fault and the Nojima fault; because these two faults are similar in geological setting (mostly composed of granitic rocks), a comparative study of their drilling investigations is straightforward. The Atera fault, which was dislocated by the 1586 Tensho earthquake, has a total length of about 70 km, a general trend of N45°W with left-lateral strike slip, and an estimated slip rate of 3-5 m per 1000 years. Seismicity is very low at present, and the lithologies around the fault are basically granitic rocks and rhyolite. Six boreholes have been drilled to depths of 400 m to 630 m. Four of these boreholes (Hatajiri, Fukuoka, Ueno and Kawaue) are located on a line crossing perpendicular to the Atera fault. In the Kawaue well, mostly fractured and alternating granitic rock continued from the surface to the bottom at 630 m. X-ray fluorescence (XRF) analysis of core samples, using the glass bead method, was conducted to estimate the major element chemistry. The H2O+ contents are about 0.5 to 2.5 weight percent. This fractured zone is also characterized by logging data showing low resistivity, low P-wave velocity, low density and high neutron porosity. 
The 1995 Kobe (Hyogo-ken Nanbu) earthquake occurred along the NE-SW-trending Rokko-Awaji fault system, and the Nojima fault appeared at the surface on Awaji Island when this rupture occurred; it is more than 10 km long with 1-2 m of offset. About one year after the earthquake, NIED drilled a borehole (the Hirabayashi NIED borehole) that penetrated the Nojima fault. The Hirabayashi NIED borehole was drilled to a depth of 1838 m, with drill core recovered. The main rock types intersected by the borehole are granodiorite and cataclastic fault rocks. Three fracture zones were recognized in cores at approximate depths of 1140 m, 1300 m and 1800 m, with a remarkable foliated blue-gray gouge at 1140 m. We investigated the chemical composition of the fracture zone by XRF analysis; the H2O+ contents are about 1.0 to 15.0 weight percent. We also investigated the mineral assemblages of both drill cores by X-ray powder diffraction analysis. From these results, we cannot recognize a significant difference between the two faults, but the amount of H2O+ is very different: in the Hirabayashi NIED core at a depth of 1140 m, it is about ten times the average of the Kawaue core, probably due to the greater degree of wall-rock fracturing in the fracture zone. We suggest that this characteristic is associated with the fault activity at the time of the 1995 Kobe earthquake and the nature of fluid-rock interactions in the fracture zone.
Safety Study of TCAS II for Logic Version 6.04
1992-07-01
used in the fault tree of the 198... study. The figures given for Logic and Altimetry effects represent the site averages, and were based upon TCAS RAs always being...comparison with the results of Monte Carlo simulations. Five million iterations were carried out for each of the four cases (eqs. 3, 4, 6 and 7
Code of Federal Regulations, 2010 CFR
2010-10-01
..., national, or international standards. (f) The reviewer shall analyze all Fault Tree Analyses (FTA), Failure... cited by the reviewer; (4) Identification of any documentation or information sought by the reviewer...) Identification of the hardware and software verification and validation procedures for the PTC system's safety...
The Two-By-Two Array: An Aid in Conceptualization and Problem Solving
ERIC Educational Resources Information Center
Eberhart, James
2004-01-01
The fields of mathematics, science, and engineering are replete with diagrams of many varieties. They range in nature from the Venn diagrams of symbolic logic to the Periodic Chart of the Elements; and from the fault trees of risk assessment to the flow charts used to describe laboratory procedures, industrial processes, and computer programs. All…
Velázquez, Karelia; Agüero, Jesús; Vives, María C; Aleza, Pablo; Pina, José A; Moreno, Pedro; Navarro, Luis; Guerri, José
2016-10-01
The long juvenile period of citrus trees (often more than 6 years) has hindered genetic improvement by traditional breeding methods and genetic studies. In this work, we have developed a biotechnology tool to promote transition from the vegetative to the reproductive phase in juvenile citrus plants by expression of the Arabidopsis thaliana or citrus FLOWERING LOCUS T (FT) genes using a Citrus leaf blotch virus-based vector (clbvINpr-AtFT and clbvINpr-CiFT, respectively). Citrus plants of different genotypes graft inoculated with either of these vectors started flowering within 4-6 months, with no alteration of the plant architecture, leaf, flower or fruit morphology in comparison with noninoculated adult plants. The vector did not integrate in or recombine with the plant genome nor was it pollen or vector transmissible, albeit seed transmission at low rate was detected. The clbvINpr-AtFT is very stable, and flowering was observed over a period of at least 5 years. Precocious flowering of juvenile citrus plants after vector infection provides a helpful and safe tool to dramatically speed up genetic studies and breeding programmes. © 2016 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.
Log bioassay of residual effectiveness of insecticides against bark beetles
Richard H. Smith
1982-01-01
Residual effectiveness of nine insecticides applied to bark was tested against western, mountain, and Jeffrey pine beetles. Ponderosa and Jeffrey pine trees were treated and logs cut from them 2 to 13 months later, and bioassayed with the three beetles. The insecticides were sprayed at the rate of 1 gal (3.8 l) per 40- or 80-ft² (3.6 or 7.2 m²) bark surface at varying...
Matching skidder size to wood harvested to increase hardwood fiber availability: a case study
Chris B. LeDoux
2000-01-01
Integrating what we know about growing trees with what we know about harvesting them can increase the economic availability of wood fiber and add value to future crops. Results for the oak/hickory forest type in West Virginia show that up to 1,736.61 ft³/ac. of wood fiber can be harvested 10 years sooner than usual by simply matching the size of the machine to...
James B. Baker; Michael G. Shelton
1998-01-01
Plots in an unmanaged loblolly-shortleaf pine (Pinus taeda L.-P. echinata Mill.) stand that had been cutover 15 yr previously were established to represent five stocking levels: 10, 20, 30, 40, and 50%. The stand was on a good site (site index for loblolly pine = 90 ft at 50 yr) and had uneven-aged character. Two competition control treatments (none and individual tree release using...
James B. Baker; Michael G. Shelton
1998-01-01
A 3- to 6-yr-old naturally regenerated, even-aged loblolly pine (Pinus taeda L.) stand and a 5-yr-old loblolly pine plantation on good sites (SIbb = 85 to 90 ft at 50 yr) were cut to density levels of 50, 90, 180, 270, and 360 seedlings and/or saplings/ac. Two pine release treatments (none and individual tree release with a herbicide) were applied to the natural stand...
NASA Astrophysics Data System (ADS)
Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen
2014-05-01
We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity trying to explain the small scales, our method uses a bottom-up approach, by an initial sampling of the small scales and then reducing the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical cluster (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Modeling (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. The kernels that become ill defined (less than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging of any pair will lessen the likelihood (calculated over the pdf of the kernels) but in turn will reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, where the matrix element di,j gives the distance between points i and j, we can construct a MDL gain/loss matrix where mi,j gives the information gain/loss resulting from the merging of kernels i and j. 
Based on this matrix, merging events resulting in MDL gain are performed in descending order until no gainful merging is possible anymore. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system and hopefully use the acquired insights for earthquake forecasting.
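The bottom-up pipeline described in this abstract (Ward-linkage AHC tree, atomization into Gaussian kernels, EM refinement, complexity comparison) can be sketched as follows. The synthetic "epicenters", the cut level of 8 kernels, and the use of BIC as an MDL-like criterion are illustrative assumptions, not the authors' exact procedure (which also propagates per-event location uncertainty).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.mixture import GaussianMixture

# Synthetic "seismicity": two elongated clouds standing in for faults.
rng = np.random.default_rng(1)
events = np.vstack([
    rng.normal([0, 0], 0.3, (100, 2)),
    rng.normal([3, 1], 0.3, (100, 2)),
])

# Agglomerative hierarchical clustering tree with Ward's minimum
# variance linkage, then atomize at a chosen level of the tree.
Z = linkage(events, method="ward")
init_labels = fcluster(Z, t=8, criterion="maxclust")

# Initialize one Gaussian kernel per AHC protocluster and let EM converge.
means = np.array([events[init_labels == k].mean(axis=0)
                  for k in range(1, init_labels.max() + 1)])
gmm = GaussianMixture(n_components=len(means), means_init=means,
                      random_state=0).fit(events)

# Kernels with too few assigned events (<4 in the abstract) would be
# discarded; candidate mergings could then be scored by an MDL-like
# information criterion such as BIC.
labels = gmm.predict(events)
counts = np.bincount(labels, minlength=len(means))
print(gmm.converged_, counts.sum())
```

A full implementation would repeat this over increasingly populated cut levels and greedily apply mergings that reduce description length, as the abstract describes.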
NASA Astrophysics Data System (ADS)
Shi, J. T.; Han, X. T.; Xie, J. F.; Yao, L.; Huang, L. T.; Li, L.
2013-03-01
A Pulsed High Magnetic Field Facility (PHMFF) has been established at the Wuhan National High Magnetic Field Center (WHMFC), and various protection measures are applied in its control system. To improve the reliability and robustness of the control system, a safety analysis of the PHMFF was carried out based on the Fault Tree Analysis (FTA) technique. The functions and realization of five protection systems are given: a sequence experiment operation system, a safety assistant system, an emergency stop system, a fault detecting and processing system, and an accident isolating protection system. Tests and operation indicate that these measures improve the safety of the facility and ensure the safety of personnel.
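As a generic illustration of the FTA technique mentioned above (not the PHMFF's actual fault tree), a top-event probability can be propagated through AND/OR gates of independent basic events. The gate structure and probabilities below are invented for illustration.

```python
def p_and(*ps):
    # AND gate: the output fails only if every input fails.
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    # OR gate: the output fails if any input fails (independence assumed).
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical basic-event probabilities: sensor fault, relay fault,
# operator error (illustrative values, not facility data).
p_sensor, p_relay, p_operator = 1e-3, 5e-4, 1e-2

# Hypothetical top event: protection fails if both the sensor AND the
# backup relay fail, OR the operator errs.
p_top = p_or(p_and(p_sensor, p_relay), p_operator)
print(p_top)
```

Real fault trees are evaluated via minimal cut sets so that shared (non-independent) basic events are handled correctly; the gate arithmetic above is the base case.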
Geology and development of Attaka oil field, Indonesia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, C.M.; Samsoe, B.S.; Laughbaum, G.H. Jr.
1973-04-01
Of particular significance in the Indonesia oil search was the discovery of the Attaka oil field, offshore E. Kalimantan (Borneo), in late 1970. After confirmation drilling from Sept. 1970 to Feb. 1971, field development and facility installation began in early 1971. First production was achieved in Nov. 1972. This major field is in the Tertiary Balikpapan Basin. Production occurs mainly from sublittoral to deltaic sands of the Attaka Series, of Pliocene and Miocene ages. The age of the deepest bed penetrated in the field area is middle Miocene. Well logs and paleontologic data indicate a predominantly regressive sequence of deposition. Highly permeable pay sands, 34 in number, occur at intervals from 2,000 to 7,800 ft measured depth. The structure is a faulted anticline. Faults and stratigraphic variations in part control accumulation and affect fluid properties. The oil is very low in sulfur content and has a gravity range of 35° to 52° API. Both saturated and undersaturated reservoirs are present.
Nelson, Alan R.; Personius, Stephen F.; Sherrod, Brian L.; Buck, Jason; Bradley, Lee-Ann; Henley, Gary; Liberty, Lee M.; Kelsey, Harvey M.; Witter, Robert C.; Koehler, R.D.; Schermer, Elizabeth R.; Nemser, Eliza S.; Cladouhos, Trenton T.
2008-01-01
As part of the effort to assess seismic hazard in the Puget Sound region, we map fault scarps on Airborne Laser Swath Mapping (ALSM, an application of LiDAR) imagery (with 2.5-m elevation contours on 1:4,000-scale maps) and show field and laboratory data from backhoe trenches across the scarps that are being used to develop a latest Pleistocene and Holocene history of large earthquakes on the Tacoma fault. We supplement previous Tacoma fault paleoseismic studies with data from five trenches on the hanging wall of the fault. In a new trench across the Catfish Lake scarp, broad folding of more tightly folded glacial sediment does not predate 4.3 ka because detrital charcoal of this age was found in stream-channel sand in the trench beneath the crest of the scarp. A post-4.3-ka age for scarp folding is consistent with previously identified uplift across the fault during AD 770-1160. In the trench across the younger of the two Stansberry Lake scarps, six maximum 14C ages on detrital charcoal in pre-faulting B and C soil horizons and three minimum ages on a tree root in post-faulting colluvium, limit a single oblique-slip (right-lateral) surface faulting event to AD 410-990. Stratigraphy and sedimentary structures in the trench across the older scarp at the same site show eroded glacial sediments, probably cut by a meltwater channel, with no evidence of post-glacial deformation. At the northeast end of the Sunset Beach scarps, charcoal ages in two trenches across graben-forming scarps give a close maximum age of 1.3 ka for graben formation. The ages that best limit the time of faulting and folding in each of the trenches are consistent with the time of the large regional earthquake in southern Puget Sound about AD 900-930.
Monotone Boolean approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
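The best possible monotone bounds mentioned in the abstract can be constructed directly from a truth table. The report's algorithms target large expressions; this brute-force sketch just illustrates the definitions: the smallest monotone function above f takes the OR of f over all points below x, and the largest monotone function below f takes the AND over all points above x.

```python
from itertools import product

def leq(x, y):
    # Componentwise order on Boolean vectors: x <= y.
    return all(a <= b for a, b in zip(x, y))

def monotone_bounds(f, n):
    """Return (lower, upper): truth tables of the largest/smallest
    monotone increasing functions bounding f from below/above."""
    pts = list(product((0, 1), repeat=n))
    upper = {x: max(f(y) for y in pts if leq(y, x)) for x in pts}
    lower = {x: min(f(y) for y in pts if leq(x, y)) for x in pts}
    return lower, upper

# Noncoherent example: f = a XOR b is not monotone.
f = lambda x: x[0] ^ x[1]
lo, up = monotone_bounds(f, 2)
print([up[p] for p in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # -> [0, 1, 1, 1]
print([lo[p] for p in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # -> [0, 0, 0, 0]
```

Here XOR is sandwiched between the constant 0 (its best monotone lower bound) and OR (its best monotone upper bound), which is exactly the kind of bracketing useful for noncoherent fault trees.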
Rockwell, Thomas K.; Lindvall, Scott; Dawson, Tim; Langridge, Rob; Lettis, William; Klinger, Yann
2002-01-01
Surveys of multiple tree lines within groves of poplar trees, planted in straight lines across the fault prior to the earthquake, show surprisingly large lateral variations. In one grove, slip increases by nearly 1.8 m, or 35% of the maximum measured value, over a lateral distance of nearly 100 m. This and other observations along the 1999 ruptures suggest that the lateral variability of slip observed from displaced geomorphic features in many earthquakes of the past may represent a combination of (1) actual differences in slip at the surface and (2) the difficulty in recognizing distributed nonbrittle deformation.
1989-08-25
P-34692 Range: 500 km (300 miles). Smallest Resolvable Feature: 900 m (2,700 ft). Part of Triton's complex geological history can be seen in this image, shot by Voyager 2. Part of a sequence, this photograph encompasses two depressions, possibly old impact basins, that have been extensively modified by flooding, melting, faulting, and collapse. Several episodes of filling and partial removal of material appear to have occurred. The rough area in the middle of the bottom depression probably marks the most recent eruption of material. Only a few impact craters dot the area, which shows the dominance of internally driven geologic processes on Triton.
Using minimal spanning trees to compare the reliability of network topologies
NASA Technical Reports Server (NTRS)
Leister, Karen J.; White, Allan L.; Hayhurst, Kelly J.
1990-01-01
Graph theoretic methods are applied to compute the reliability for several types of networks of moderate size. The graph theory methods used are minimal spanning trees for networks with bi-directional links and the related concept of strongly connected directed graphs for networks with uni-directional links. A comparison is conducted of ring networks and braided networks. The case is covered where just the links fail and the case where both links and nodes fail. Two different failure modes for the links are considered. For one failure mode, the link no longer carries messages. For the other failure mode, the link delivers incorrect messages. There is a description and comparison of link-redundancy versus path-redundancy as methods to achieve reliability. All the computations are carried out by means of a fault tree program.
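The link-failure case described above can be sketched with a Monte Carlo comparison: a network with bi-directional links "survives" a trial if its surviving links still span all nodes (i.e., a spanning tree exists). The ring and braided topologies, failure probability, and trial count below are illustrative stand-ins, not the paper's models, and the abstract's actual computations use a fault tree program rather than simulation.

```python
import random

def connected(n, edges):
    # Union-find connectivity check: True if edges span all n nodes.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n)}) == 1

def reliability(n, edges, p_fail, trials=20000, seed=0):
    # Fraction of trials in which the surviving links still connect
    # every node (independent link failures assumed).
    rng = random.Random(seed)
    ok = sum(connected(n, [e for e in edges if rng.random() > p_fail])
             for _ in range(trials))
    return ok / trials

n = 6
ring = [(i, (i + 1) % n) for i in range(n)]
braided = ring + [(i, (i + 2) % n) for i in range(n)]  # extra chord links

r_ring = reliability(n, ring, p_fail=0.05)
r_braid = reliability(n, braided, p_fail=0.05)
print(r_braid > r_ring)  # link redundancy raises survival probability
```

For uni-directional links, the survival criterion would change to strong connectivity of the surviving directed graph, as the abstract notes.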
Water resources in basin-fill deposits in the Tularosa Basin, New Mexico
Orr, B.R.; Myers, R.G.
1986-01-01
The Tularosa Basin, a faulted intermontane depression in south-central New Mexico, contains a thick sequence of alluvial and lacustrine deposits of Tertiary and Quaternary age. Most of these sediments are saturated with very saline water. Freshwater supplies (dissolved solids concentration < 1000 mg/L) principally are found in alluvial fans located around the basin margin. On the eastern side of the Tularosa Basin, fresh groundwater supplies are limited to alluvial fan deposits from Grapevine Canyon to about 3 mi south of Alamogordo. Data from surface geophysical surveys indicate that about 1.4 to 2.1 million acre-ft of freshwater may be in storage in this area, not all of which is recoverable. An additional 3.6 to 5.4 million acre-ft of slightly saline water (dissolved solids concentration 1000 to 3000 mg/L) may be in storage in the same area, again not all of which is recoverable. On the western side of the Tularosa Basin, alluvial fans in the vicinity of Rhodes Canyon may contain freshwater. Geophysical data indicate the freshwater zone may be as thick as 1500 ft in places; however, the limited number of wells in this area precludes a precise definition of the volume of freshwater in storage. To the south, freshwater is present in alluvial fans associated with the Ash Canyon drainage system. Geophysical data indicate that perhaps as much as 450,000 acre-ft of freshwater, not all recoverable, may be in storage in this area. Fan deposits between Ash Canyon and Rhodes Canyon may contain additional freshwater supplies. Possibly 10.7 million acre-ft of freshwater, not all of which is recoverable, may be in storage on the western side of the Tularosa Basin. Possibly 180 million acre-ft of brine (concentrations of dissolved solids exceeding 35,000 mg/L), not all of which is recoverable, may be in storage in the Tularosa Basin. Information is sparse concerning the capability of saline aquifers in the Tularosa Basin to store and transmit fluid. (Author's abstract)
NASA Technical Reports Server (NTRS)
Patterson, Jonathan D.; Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) being presented by Dr. Stephen B. Johnson, described in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper will describe the core framework used to implement the GFT-based systems engineering process using the Systems Modeling Language (SysML). These two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described within the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required in the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals, requirements, and functions, and their linkage to design. As SysML is a growing standard for systems engineering, it is important to develop methods to implement GFT in it. Proposed method of solution: Many of the central concepts of the SysML language are needed to implement a GFT for large complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.
Nolan, B.T.; Campbell, D.L.; Senterfit, R.M.
1998-01-01
A geophysical survey was conducted to determine the depth of the base of the water-table aquifer in the southern part of Jackson Hole, Wyoming, USA. Audio-magnetotellurics (AMT) measurements at 77 sites in the study area yielded electrical-resistivity logs of the subsurface, and these were used to infer lithologic changes with depth. A 100-600 ohm-m geoelectric layer, designated the Jackson aquifer, was used to represent surficial saturated, unconsolidated deposits of Quaternary age. The median depth of the base of the Jackson aquifer is estimated to be 200 ft (61 m), based on 62 sites that had sufficient resistivity data. AMT-measured values were kriged to predict the depth to the base of the aquifer throughout the southern part of Jackson Hole. Contour maps of the kriging predictions indicate that the depth of the base of the Jackson aquifer is shallow in the central part of the study area near the East and West Gros Ventre Buttes, deeper in the west near the Teton fault system, and shallow at the southern edge of Jackson Hole. Predicted, contoured depths range from 100 ft (30 m) in the south, near the confluences of Spring Creek and Flat Creek with the Snake River, to 700 ft (210 m) in the west, near the town of Wilson, Wyoming.
Ground-water resources in Mendocino County, California
Farrar, C.D.
1986-01-01
Mendocino County includes about 3,500 sq mi of coastal northern California. Groundwater is the main source for municipal and individual domestic water systems and contributes significantly to irrigation. Consolidated rocks of the Franciscan Complex are exposed over most of the county. The consolidated rocks are commonly dry and generally supply < 5 gal/min of water to wells. Unconsolidated fill in the inland valleys consists of gravel, sand, silt, and clay. Low permeability in the fill caused by fine grain size and poor sorting limits well yields to less than 50 gal/min in most areas; where the fill is better sorted, yields of 1,000 gal/min can be obtained. Storage capacity estimates for the three largest basins are Ukiah Valley, 90,000 acre-ft; Little Lake Valley, 35,000 acre-ft; and Laytonville Valley, 14,000 acre-ft. Abundant rainfall (35 to 56 in/yr) generally recharges these basins to capacity. Seasonal water level fluctuations since the 1950s have been nearly constant, except during the 1976-77 drought. Chemical quality of water in basement rocks and valley fill is generally acceptable for most uses. Some areas along fault zones yield water with high boron concentrations (<2 mg/L). Sodium chloride water with dissolved solids concentrations exceeding 1,000 mg/L is found in deeper parts of Little Lake Valley. (Author's abstract)
Architecture Analysis with AADL: The Speed Regulation Case-Study
2014-11-01
Overview. Functional Hazard Analysis (FHA): an inventory of failures with description, classification, etc. Fault-Tree Analysis (FTA): dependencies between... Julien Delange, Carnegie Mellon University, Pittsburgh, PA 15213.
Journal of Air Transportation, Volume 12, No. 2 (ATRS Special Edition)
NASA Technical Reports Server (NTRS)
Bowen, Brent D. (Editor); Kabashkin, Igor (Editor); Fink, Mary (Editor)
2007-01-01
Topics covered include: Competition and Change in the Long-Haul Markets from Europe; Insights into the Maintenance, Repair, and Overhaul Configurations of European Airlines; Validation of Fault Tree Analysis in Aviation Safety Management; An Investigation into Airline Service Quality Performance between U.S. Legacy Carriers and Their EU Competitors and Partners; and Climate Impact of Aircraft Technology and Design Changes.
Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA
Fuentes-Bargues, José Luis; González-Cruz, Mª Carmen; González-Gaya, Cristina; Baixauli-Pérez, Mª Piedad
2017-01-01
The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that analysis and control of the risks involved are required. This paper presents a methodology for risk analysis in the chemical and allied industries that combines HAZard and OPerability analysis (HAZOP) with a quantitative analysis of the most relevant risks through the development of fault trees, i.e., fault tree analysis (FTA). Results from FTA allow prioritizing the preventive and corrective measures to minimize the probability of failure. A case study is analyzed, consisting of the terminal for unloading chemical and petroleum products and the fuel storage facilities of two companies in the port of Valencia (Spain). The HAZOP analysis shows that the loading and unloading areas are the most sensitive areas of the plant, where the most significant danger is a fuel spill. The FTA indicates that the most likely event is a fuel spill in the tank truck loading area. A sensitivity analysis of the FTA results shows the importance of the human factor in all sequences of the possible accidents, so improving the training of plant staff should be mandatory. PMID:28665325
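The quantitative step of an FTA propagates basic-event probabilities through AND/OR gates to the top event. A minimal sketch, assuming independent basic events (the standard FTA assumption) and an invented fuel-spill tree that is not the Valencia terminal model:

```python
# Propagate independent basic-event probabilities through a fault tree.
# A node is either a float (basic-event probability) or a ("AND"/"OR", children) pair.

def prob(node):
    if isinstance(node, float):
        return node                       # basic event
    kind, children = node
    if kind == "AND":                     # all children must fail
        p = 1.0
        for c in children:
            p *= prob(c)
        return p
    # OR gate: 1 minus the product of the survival probabilities
    q = 1.0
    for c in children:
        q *= 1.0 - prob(c)
    return 1.0 - q

# Invented example: spill occurs if (hose rupture) OR (overfill AND alarm failure).
spill = ("OR", [0.01, ("AND", [0.05, 0.1])])
print(prob(spill))  # 1 - (1 - 0.01) * (1 - 0.005) = 0.01495
```

Ranking the gate contributions of such a tree is what lets FTA prioritize preventive and corrective measures.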
TH-EF-BRC-04: Quality Management Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yorke, E.
2016-06-15
This Hands-on Workshop will be focused on providing participants with experience with the principal tools of TG 100 and hence start to build both competence and confidence in the use of risk-based quality management techniques. The three principal tools forming the basis of TG 100's risk analysis (process mapping, Failure Modes and Effects Analysis, and fault-tree analysis) will each be introduced with a 5-minute refresher presentation, and each presentation will be followed by a 30-minute small-group exercise. An exercise on developing QM from the risk analysis follows. During the exercise periods, participants will apply the principles in 2 different clinical scenarios. At the conclusion of each exercise there will be ample time for participants to discuss with each other and the faculty their experience and any challenges encountered. Learning Objectives: To review the principles of Process Mapping, Failure Modes and Effects Analysis and Fault Tree Analysis. To gain familiarity with these three techniques in a small group setting. To share and discuss experiences with the three techniques with faculty and participants. Director, TreatSafely, LLC. Director, Center for the Assessment of Radiological Sciences. Occasional Consultant to the IAEA and Varian.
Rath, Frank
2008-01-01
This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve throughput, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition, the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.
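The core arithmetic of an FMEA is the risk priority number, conventionally RPN = severity x occurrence x detectability, each scored on a 1-10 scale; failure modes are then ranked by RPN. A short sketch with invented radiotherapy failure modes and scores (not from this article):

```python
# (name, severity, occurrence, detectability), each scored 1-10. Illustrative only.
failure_modes = [
    ("Wrong patient setup",             9, 3, 4),
    ("Dose calc transcription error",   8, 2, 2),
    ("Beam-limiting device fault",     10, 1, 6),
]

def rpn(mode):
    """Risk priority number: severity x occurrence x detectability."""
    _name, sev, occ, det = mode
    return sev * occ * det

# Rank failure modes from highest to lowest risk priority.
ranked = sorted(failure_modes, key=rpn, reverse=True)
for name, sev, occ, det in ranked:
    print(f"{name}: RPN = {sev * occ * det}")
```

A common pitfall the article's caution about misuse covers: two very different risk profiles can yield the same RPN, so the ranking should guide, not replace, judgment.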
Liu, Xiao Yu; Xue, Kang Ning; Rong, Rong; Zhao, Chi Hong
2016-01-01
Epidemic hemorrhagic fever has been an ongoing threat to laboratory personnel involved in animal care and use. Laboratory transmissions and severe infections have occurred over the past twenty years, even though standards and regulations for laboratory biosafety have been issued, upgraded, and implemented in China. There is therefore an urgent need to identify risk factors and to seek effective preventive measures that can curb the incidence of epidemic hemorrhagic fever among laboratory personnel. In the present study, we reviewed literature relevant to animal-laboratory-acquired hemorrhagic fever infections reported from 1995 to 2015, and analyzed these incidents using fault tree analysis (FTA). The analysis showed that purchasing qualified animals and guarding against wild rats, which together ensure that laboratory animals are free of hantaviruses, are the basic measures for preventing infection. In daily management, awareness and practice of personal protection need to be further improved. Vaccination is undoubtedly the most direct and effective method, but it only plays its role after infection, so avoiding infection cannot rely entirely on vaccination. Copyright © 2016 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
Fault tree analysis of the causes of waterborne outbreaks.
Risebro, Helen L; Doria, Miguel F; Andersson, Yvonne; Medema, Gertjan; Osborn, Keith; Schlosser, Olivier; Hunter, Paul R
2007-01-01
Prevention and containment of outbreaks requires examination of the contribution and interrelation of outbreak causative events. An outbreak fault tree was developed and applied to 61 enteric outbreaks related to public drinking water supplies in the EU. A mean of 3.25 causative events per outbreak were identified; each event was assigned a score based on percentage contribution per outbreak. Source and treatment system causative events often occurred concurrently (in 34 outbreaks). Distribution system causative events occurred less frequently (19 outbreaks) but were often solitary events contributing heavily towards the outbreak (a mean % score of 87.42). Livestock and rainfall in the catchment with no/inadequate filtration of water sources contributed concurrently to 11 of 31 Cryptosporidium outbreaks. Of the 23 protozoan outbreaks experiencing at least one treatment causative event, 90% of these events were filtration deficiencies; by contrast, for bacterial, viral, gastroenteritis and mixed pathogen outbreaks, 75% of treatment events were disinfection deficiencies. Roughly equal numbers of groundwater and surface water outbreaks experienced at least one treatment causative event (18 and 17 outbreaks, respectively). Retrospective analysis of multiple outbreaks of enteric disease can be used to inform outbreak investigations, facilitate corrective measures, and further develop multi-barrier approaches.
Fault and event tree analyses for process systems risk analysis: uncertainty handling formulations.
Ferdous, Refaul; Khan, Faisal; Sadiq, Rehan; Amyotte, Paul; Veitch, Brian
2011-01-01
Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event (ETA) and fault tree analyses (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed. © 2010 Society for Risk Analysis.
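One way to relax the crisp-probability assumption discussed above is to carry interval bounds on each event likelihood through the gates, a special case of the alpha-cut arithmetic used with fuzzy numbers. The sketch below still assumes independent events (precisely the second assumption the article also relaxes via a dependency coefficient, which is omitted here); the bounds are invented:

```python
# Propagate interval probabilities [lo, hi] through FTA gates, assuming independence.

def and_gate(intervals):
    lo = hi = 1.0
    for a, b in intervals:
        lo *= a
        hi *= b
    return (lo, hi)

def or_gate(intervals):
    surv_hi = surv_lo = 1.0
    for a, b in intervals:
        surv_hi *= 1.0 - a   # survival from the lower bounds -> OR lower bound
        surv_lo *= 1.0 - b   # survival from the upper bounds -> OR upper bound
    return (1.0 - surv_hi, 1.0 - surv_lo)

# Invented tree: top event is an OR of a basic event and an AND of two basic events.
sub = and_gate([(0.1, 0.2), (0.05, 0.1)])   # -> (0.005, 0.02)
top = or_gate([(0.01, 0.02), sub])
print(top)  # lower = 1 - 0.99*0.995, upper = 1 - 0.98*0.98
```

The width of the resulting interval makes the imprecision of the inputs visible in the top-event estimate, instead of hiding it inside an assumed distribution.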
Martin A. Spetich; David L. Graney; Paul A. Murphy
1999-01-01
A test of group-selection and single-tree selection was installed in 80-year-old even-aged oak-hickory stands in the Boston Mountains of northern Arkansas. Twenty-four 11-acre plots were installed in well stocked stands representing north or east and south or west aspects. Stands between group openings were cut to residual basal areas of 65 and 85 ft2...
NASA Astrophysics Data System (ADS)
Aprilia, Ayu Rizky; Santoso, Imam; Ekasari, Dhita Murita
2017-05-01
Yogurt is a milk-based product with beneficial effects on health. The yogurt production process is highly susceptible to failure because it involves bacteria and fermentation, and for an industry such failures can cause harm and have a negative impact. For a product to be successful and profitable, the risks that may occur during the production process must be analyzed. Risk analysis identifies risks in detail and determines how to prevent and handle them, so that the risks can be minimized. This study therefore analyzes the risks of the production process through a case study at CV.XYZ. The methods used in this research are Fuzzy Failure Mode and Effect Analysis (fuzzy FMEA) and Fault Tree Analysis (FTA). The results showed six risks arising from equipment, raw material, and process variables. The critical risk is a lack of an aseptic process, specifically damage to the yogurt starter caused by contamination with fungi or other bacteria, and a lack of equipment sanitation. The quantitative FTA showed that the highest probability, at 3.902%, is that of a lack of an aseptic process. Recommendations for improvement include establishing SOPs (Standard Operating Procedures) covering the process, workers, and environment, controlling the yogurt starter, and improving production planning and equipment sanitation using hot-water immersion.
October 1, 1989 tornado at the Savannah River Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, M.J.; Kurzeja, R.J.
1990-01-01
A tornado with wind speeds in the 113 to 157 mph range struck the southern portion of the Savannah River Site near Aiken, SC at around 7:30 pm on October 1, 1989. The tornado was spawned from a severe thunderstorm with a height of 57,000 ft in a warm and humid air mass. Two million dollars in timber damage occurred over 2,500 acres along a ten-mile swath, but no onsite structural damage or personal injury occurred. Tree-fall patterns indicated that some of this damage was the result of thunderstorm downbursts which accompanied the tornado. Ground-based and aerial photography showed both snapped and mowed-over trees, indicating that the tornado was elevated at times. 4 refs., 25 figs., 2 tabs.
NASA Astrophysics Data System (ADS)
Goudarzi, H.; Dousti, M. J.; Shafaei, A.; Pedram, M.
2014-05-01
This paper presents a physical mapping tool for quantum circuits, which generates the optimal universal logic block (ULB) that can, on average, perform any logical fault-tolerant (FT) quantum operation with minimum latency. The operation scheduling, placement, and qubit routing problems tackled by the quantum physical mapper are highly dependent on one another. More precisely, the scheduling solution affects the quality of the achievable placement solution because of resource pressures that operation scheduling may create, whereas the operation placement and qubit routing solutions influence the scheduling solution through the resulting distances between predecessor and current operations, which in turn determine routing latencies. The proposed flow for the quantum physical mapper captures these dependencies by applying (1) a loose scheduling step, which transforms an initial quantum data flow graph into one that explicitly captures the no-cloning theorem of quantum computing and then performs instruction scheduling based on a modified force-directed scheduling approach to minimize resource contention and quantum circuit latency; (2) a placement step, which uses timing-driven instruction placement to minimize the approximate routing latencies while making iterative calls to the aforesaid force-directed scheduler to correct the scheduling levels of quantum operations as needed; and (3) a routing step that finds dynamic values of routing latencies for the qubits. In addition to the quantum physical mapper, an approach is presented to determine the single best ULB size for a target quantum circuit by examining the latency of different FT quantum operations mapped onto different ULB sizes and using information about the occurrence frequency of operations on critical paths of the target quantum algorithm to weigh these latencies. Experimental results show an average latency reduction of about 40% compared to previous work.
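Force-directed scheduling, the basis of step (1), balances operations within their slack between as-soon-as-possible (ASAP) and as-late-as-possible (ALAP) levels. A hedged sketch of just that ingredient, computing ASAP/ALAP levels and the resulting mobility on an invented dependency DAG (not the paper's quantum data flow graphs, and without the force computation or resource model):

```python
# op -> list of predecessor ops; a small invented dependency DAG.
deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"], "e": ["b"]}

def asap(op):
    """Earliest level: one past the latest predecessor."""
    return 0 if not deps[op] else 1 + max(asap(p) for p in deps[op])

horizon = max(asap(op) for op in deps)   # schedule length from the ASAP pass

def alap(op):
    """Latest level that still lets all successors meet the horizon."""
    succs = [s for s, ps in deps.items() if op in ps]
    return horizon if not succs else min(alap(s) for s in succs) - 1

# Mobility (slack) is what a force-directed scheduler spends to even out
# resource pressure across levels: ops with zero mobility are fixed.
mobility = {op: alap(op) - asap(op) for op in deps}
print(mobility)  # -> {'a': 0, 'b': 0, 'c': 0, 'd': 0, 'e': 1}
```

Only "e" has slack here, so only "e" could be moved to relieve contention at its ASAP level.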
High-power converters for space applications
NASA Technical Reports Server (NTRS)
Park, J. N.; Cooper, Randy
1991-01-01
Phase 1 was a concept definition effort to extend space-type dc/dc converter technology to the megawatt level with a weight of less than 0.1 kg/kW (220 lb./MW). Two system designs were evaluated in Phase 1. Each design operates from a 5 kV stacked fuel cell source and provides a voltage step-up to 100 kV at 10 A for charging capacitors (100 pps at a duty cycle of 17 min on, 17 min off). Both designs use an MCT-based, full-bridge inverter, gaseous hydrogen cooling, and crowbar fault protection. The GE-CRD system uses an advanced high-voltage transformer/rectifier/filter in series with a resonant tank circuit, driven by an inverter operating at 20 to 50 kHz. Output voltage is controlled through frequency and phase-shift control. Fast transient response and stability are ensured via optimal control. Super-resonant operation employing MCTs provides the advantages of lossless snubbing, no turn-on switching loss, use of medium-speed diodes, and intrinsic current limiting under load-fault conditions. Estimated weight of the GE-CRD system is 88 kg (1.5 cu ft). Efficiency is 94.4 percent and total system loss is 55.711 kW when operating at 1 MW load power. The Maxwell system is based on a resonance transformer approach using a cascade of five LC resonant sections at 100 kHz. The 5 kV bus is converted to a square wave, stepped up to a 100 kV sine wave by the LC sections, rectified, and filtered. Output voltage is controlled with a special series regulator circuit. Estimated weight of the Maxwell system is 83.8 kg (4.0 cu ft). Efficiency is 87.2 percent and total system loss is 146.411 kW when operating at 1 MW load power.
Geothermal and heavy-oil resources in Texas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seni, S.J.; Walter, T.G.
1994-01-01
In a five-county area of South Texas, geopressured-geothermal reservoirs in the Paleocene-Eocene Wilcox Group lie below medium- to heavy-oil reservoirs in the Eocene Jackson Group. This fortuitous association suggests the use of geothermal fluids for thermally enhanced oil recovery (TEOR). Geothermal fairways are formed where thick deltaic sandstones are compartmentalized by growth faults. Wilcox geothermal reservoirs in South Texas are present at depths of 11,000 to 15,000 ft (3,350 to 4,570 m) in laterally continuous sandstones 100 to 200 ft (30 to 60 m) thick. Permeability is generally low (typically 1 md), porosity ranges from 12 to 24 percent, and temperature exceeds 250°F (121°C). Reservoirs containing medium (20° to 25° API gravity) to heavy (10° to 20° API gravity) oil are concentrated along the Texas Coastal Plain in the Jackson-Yegua Barrier/Strandplain (Mirando Trend), Cap Rock, and Piercement Salt Dome plays and in the East Texas Basin in the Woodbine Fluvial/Deltaic Strandplain and Paluxy Fault Line plays. Injection of hot, moderately fresh to saline brines will improve oil recovery by lowering viscosity and decreasing residual oil saturation. Smectite clay matrix could swell and clog pore throats if injected waters have low salinity. The high temperature of injected fluids will collapse some of the interlayer clays, thus increasing porosity and permeability. Reservoir heterogeneity resulting from facies variation and diagenesis must be considered when siting production and injection wells within the heavy-oil reservoir. The ability of abandoned gas wells to produce sufficient volumes of hot water over the long term will also affect the economics of TEOR.
Timing of late Holocene surface rupture of the Wairau Fault, Marlborough, New Zealand
Zachariasen, J.; Berryman, K.; Langridge, Rob; Prentice, C.; Rymer, M.; Stirling, M.; Villamor, P.
2006-01-01
Three trenches excavated across the central portion of the right-lateral strike-slip Wairau Fault in South Island, New Zealand, exposed a complex set of fault strands that have displaced a sequence of late Holocene alluvial and colluvial deposits. Abundant charcoal fragments provide age control for various stratigraphic horizons dating back to c. 5610 yr ago. Faulting relations from the Wadsworth trench show that the most recent surface rupture event occurred at least 1290 yr and at most 2740 yr ago. Drowned trees in landslide-dammed Lake Chalice, in combination with charcoal from the base of an unfaulted colluvial wedge at Wadsworth trench, suggest a narrower time bracket for this event of 1811-2301 cal. yr BP. The penultimate faulting event occurred between c. 2370 and 3380 yr, and possibly near 2680 ± 60 cal. yr BP, when data from both the Wadsworth and Dillon trenches are combined. Two older events have been recognised from Dillon trench but remain poorly dated. A probable elapsed time of at least 1811 yr since the last surface rupture, and an average slip rate estimate for the Wairau Fault of 3-5 mm/yr, suggests that at least 5.4 m and up to 11.5 m of elastic shear strain has accumulated since the last rupture. This is near to or greater than the single-event displacement estimates of 5-7 m. The average recurrence interval for surface rupture of the fault determined from the trench data is 1150-1400 yr. Although the uncertainties in the timing of faulting events and variability in inter-event times remain high, the time elapsed since the last event is in the order of 1-2 times the average recurrence interval, implying that the Wairau Fault is near the end of its interseismic period. © The Royal Society of New Zealand 2006.
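The accumulated-slip bounds quoted above follow directly from elapsed time times slip rate; pairing the 1811-2301 yr elapsed-time bracket with the 3-5 mm/yr slip-rate range reproduces the 5.4-11.5 m figures:

```python
def accumulated_slip_m(elapsed_yr, rate_mm_per_yr):
    """Accumulated elastic shear strain (metres) = elapsed time x slip rate."""
    return elapsed_yr * rate_mm_per_yr / 1000.0

low = accumulated_slip_m(1811, 3)    # minimum elapsed time x minimum rate
high = accumulated_slip_m(2301, 5)   # maximum elapsed time x maximum rate
print(round(low, 1), round(high, 1))  # -> 5.4 11.5
```

Comparing these bounds with the 5-7 m single-event displacement estimate is what supports the inference that the fault is late in its interseismic period.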
NASA Astrophysics Data System (ADS)
Dura-Gomez, I.; Addison, A.; Knapp, C. C.; Talwani, P.; Chapman, A.
2005-12-01
During the 1886 Charleston earthquake, two parallel tabby walls of Fort Dorchester broke left-laterally, and a strike of ~N25°W was inferred for the causative Sawmill Branch fault. To better define this fault, which does not have any surface expression, we planned to cut trenches across it. However, as Fort Dorchester is a protected archeological site, we were required to locate the fault accurately away from the fort before permission could be obtained to cut short trenches. The present GPR investigations were planned as a preliminary step to determine locations for trenching. A pulseEKKO 100 GPR was used to collect data along eight profiles (varying in length from 10 m to 30 m) that were run across the projected strike of the fault, and one 50 m long profile that was run parallel to it. The locations of the profiles were obtained using a total station. To capture the signature of the fault, sixteen common-offset (COS) lines were acquired by using different antennas (50, 100 and 200 MHz) and stacking 64 times to increase the signal-to-noise ratio. The locations of trees and stumps were recorded. In addition, two common-midpoint (CMP) tests were carried out and gave an average velocity of about 0.097 m/ns. Processing included the subtraction of the low-frequency "wow" on the trace (dewow), automatic gain control (AGC) and the application of bandpass filters. The signals using the 50 MHz, 100 MHz and 200 MHz antennas were found to penetrate up to about 30 meters, 20 meters and 12 meters, respectively. Vertically offset reflectors and disruptions of the electrical signal were used to infer the location of the fault(s). Comparisons of the locations of these disruptions on various lines were used to infer the presence of a N30°W fault zone. We plan to confirm these locations by cutting shallow trenches.
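The depth estimates behind such profiles come from a simple time-to-depth conversion: with the CMP-derived average velocity of ~0.097 m/ns, a reflector's depth is velocity times two-way travel time divided by 2. A short sketch using that velocity (the travel times themselves are illustrative, not from the survey):

```python
V = 0.097  # m/ns, average velocity from the two CMP tests

def reflector_depth_m(twt_ns):
    """Depth of a reflector from its two-way travel time in nanoseconds."""
    return V * twt_ns / 2.0

# The ~12 m penetration of the 200 MHz antenna corresponds to roughly this
# two-way travel time in the recorded section:
twt_12m = 2 * 12.0 / V
print(round(twt_12m))  # ns of two-way time for a 12 m deep reflector
```

The same conversion explains why the lower-frequency antennas, with their longer usable records, image correspondingly deeper (to ~20 m and ~30 m).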